AI-powered decisions are not final in a socialist society

The immense potential of AI and big data must be harnessed through democratic regulation and ethical oversight that address their inherent risks and biases, so that the technology serves the betterment of society rather than surveillance and profit-driven discrimination. By Yong-Hui Hong.


The use of Artificial Intelligence (AI) has become essential for companies seeking to grow and maintain their competitive advantage. Thanks to constant and rapid technological advances, AI is also becoming an integral part of our daily lives, and the technology is beginning to affect many sectors of society.

On the other hand, a number of problems have come to light: racial discrimination by the crime-prediction app “Crime Nabi”, and gender discrimination by AI recruiting tools and AI-based credit scoring. Other AI-based applications discriminate too, sometimes on the basis of gender, sometimes on the basis of race, religion, wealth, or health status. How should we deal with AI in a reality where AI chatbots routinely spew racist rhetoric online?1

Needless to say, a techno-optimism that overestimates the benefits of AI does great harm to our lives. Moreover, AI remains unpredictable: the conclusions it draws can hardly be verified externally, and humans cannot understand why those conclusions were reached.

If an AI makes a bad decision, no one can explain why, which makes it difficult to hold anyone accountable. AI also carries inherent dangers such as unpredictability, a lack of transparency and accountability, and bias arising from skewed data, and many problems can be expected from its military applications. Yet the military use of AI is already well advanced, making it extremely difficult to stop.

AI and big data

AI is generally understood as a technology or system that allows machines to take over human intellectual abilities. These include, for example, the perceptual ability to distinguish objects, linguistic ability, and the ability to reason and make decisions. AI acquires and improves these abilities through a technique called machine learning, which relies on large amounts of data.

With the recent development of a machine learning technique called deep learning, AI has already become a familiar part of daily life. Voice assistants such as Apple’s Siri and Amazon’s Alexa, for example, are enabled by AI technology. Big Data plays a role complementary to AI.

Big Data refers to vast and diverse collections of data that continue to grow in real time. The decision-making processes of AI require such large amounts of data. A portion of the Big Data is divided into subsets and labeled by humans as training data according to the desired output. While this combination of AI and Big Data brings many benefits, many problems have also been pointed out.
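
To make that workflow concrete, here is a minimal, hypothetical sketch, assuming the scikit-learn library; the features, labels, and the scoring scenario are invented for illustration and are not drawn from any of the cases discussed in this article.

    # A toy dataset standing in for "Big Data": applicant records with two
    # invented features, years_experience and test_score.
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression

    records = [[1, 55], [3, 70], [5, 80], [7, 85], [2, 60], [8, 90], [4, 65], [6, 75]]
    # Human-assigned labels for the subset chosen as training data (1 = accept, 0 = reject)
    labels = [0, 0, 1, 1, 0, 1, 0, 1]

    # Split the labeled subset into training and evaluation portions
    X_train, X_test, y_train, y_test = train_test_split(
        records, labels, test_size=0.25, random_state=0)

    model = LogisticRegression().fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))
    # The trained model can now score new, unlabeled records; it will faithfully
    # reproduce whatever patterns (or biases) the human labels contained.

The point of the sketch is simply that the model’s judgements are entirely derivative of the data and labels humans supply to it.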

Problems with AI

Cases in which the careless operation of AI has led to violations of human rights and invasions of privacy have often been highlighted. The effective use of AI and big data requires people who can handle data appropriately, yet there is a global shortage of IT professionals with data science and related skills.

In this context, there have already been many AI-related incidents. In 2014, Amazon began developing an AI recruiting system whose algorithms were later found to contain several biases, resulting in gender discrimination against job seekers.

In 2016, Microsoft’s AI chatbot Tay was released to the public, but shortly after its release Tay went on a racist tirade and the service was shut down, as noted above. In 2019, Goldman Sachs received significant public criticism when it was discovered that its credit-scoring algorithm unfairly gave women lower scores, producing a gender gap in the services the company offered its users.

In 2020, a predictive grading system used by the UK’s exams regulator, the Office of Qualifications and Examinations Regulation (Ofqual), was found to disadvantage working-class and minority students, leading to protests against the agency. In profit-oriented capitalist societies, no drastic measures were taken to address these problems, and they continued to recur.

AI has also expanded its influence as an authoritarian technology, promoting an alternative model of governance. China, for example, has used the power of machine learning to intensify surveillance of its population, cracking down on and controlling the Uyghurs and other ethnic minorities. The use of AI-based devices risks creating situations in which people are differentiated and categorized.

AI to increase and amplify hate

The many problems caused by AI mentioned in the previous section are just the tip of the iceberg. The more AI permeates society, the greater its negative impact. In recent years, several countries have accordingly stepped up their legal and regulatory responses to AI.

In April 2021, the European Commission presented a comprehensive regulatory proposal for AI. In North America, the Federal Trade Commission (FTC) is also increasingly regulating AI. The results of UNESCO’s 2020 survey clearly show the impact of discriminatory ideology on AI.2 Gender bias, especially against women and LGBTQI people, is transmitted to AI software through Big Data, leading to discrimination; when a predictive algorithm is trained on a biased dataset, existing biases are reproduced.

Another problem is that AI reinforces and amplifies biases, and it can do so in ways that humans do not anticipate. AI systems and algorithms are programmed by humans and are, almost by necessity, trained primarily on data from the past, and the past has been a deeply racist place. AI performance depends on the quality and quantity of its training data: if there is not enough high-quality, reliable training data, or if the data is biased, the AI’s decisions will be biased too.
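
The mechanism can be illustrated with another minimal, hypothetical sketch, again assuming scikit-learn and entirely invented data: a model trained on historical hiring decisions that penalized one group reproduces that penalty for new applicants whose qualifications are identical.

    # Hypothetical illustration: past hiring decisions penalized group B,
    # so a model trained on them learns the same penalty. All data is invented.
    from sklearn.tree import DecisionTreeClassifier

    # Features: [qualification_score, group] where group 0 = A, group 1 = B
    history = [
        [80, 0], [85, 0], [70, 0], [90, 0],  # group A applicants
        [80, 1], [85, 1], [70, 1], [90, 1],  # group B applicants, same scores
    ]
    # Historical human decisions: group A mostly hired, group B mostly rejected
    past_decisions = [1, 1, 1, 1, 0, 0, 0, 1]

    model = DecisionTreeClassifier(random_state=0).fit(history, past_decisions)

    # Two new applicants with identical qualifications, differing only by group
    print(model.predict([[85, 0], [85, 1]]))  # the group B applicant is likely rejected

No malicious intent is needed anywhere in this pipeline; the discrimination is simply inherited from the historical record.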

The process by which AI technologies reach biased, discriminatory, or simply inaccurate conclusions compounds and amplifies other issues, such as the lack of transparency and accountability, threats to the presumption of innocence, and unresolved technical challenges.

Mass-produced fake news

ChatGPT is one of the most popular AI systems today. If you can use ChatGPT, ask it about politics or other topics you know well: in many cases the answers differ every time, with a great deal of inaccurate or false information mixed in with the correct information. The recent emergence of large language models such as ChatGPT has made it possible to create large volumes of disinformation and fake news more convincingly and at lower cost. For this reason, AI can be used offensively in operations that manipulate human cognition through false information and other means.

With the development of information and communication technology in recent wars, fears of further military uses of AI have grown. Today, AI is used not only to operate unmanned weapons but also to process information essential to military operations and to support human decision-making based on that information. AI suffers from unpredictability, a lack of transparency and accountability, and the risk of bias arising from distorted data; in the absence of solutions to these problems, there is concern that its use in military decision-making will only increase.

The technological basis of the socialist society

Recent history has been a period of continuous and astonishing technological progress. Nuclear, digital, and other technologies are fundamentally changing our lives, our societies, and our environment, and the accelerating convergence of technology with everyday life has brought us into the age of AI and the Internet of Things (IoT). Today, not a day goes by without our encountering a semi-automated or automated system. Yet AI, Big Data, and other digital technologies complement experts; they do not replace them. Lenin, seeking to preserve the Soviet regime he had created and to raise Russia’s productive capacity to the level of the West, advanced the following thesis:

“Communism is Soviet power plus the electrification of the whole country.”3

In Lenin’s time, electrification was the innovative technology of the day, and after the October Revolution it was declared that socialism would be realized only when all industries had been rebuilt on the basis of electrification. Lenin thus treated electrification, a technology at the level of energy, as the technological basis of socialist society. History has shown, however, that electrification alone cannot make the economy of a socialist society succeed. So what is the technological basis of a modern socialist society?

Lenin’s thesis called for technologies at the level of energy, including communication technologies. Today, by contrast, the economy of a socialist society requires advanced technologies at the level of information processing, and AI is one of them. At present, however, AI cannot serve as the technological basis of socialist society. Technology must at the very least be controlled socially, not by capital, and the current controls on AI are inadequate in this respect. AI is also unpredictable, lacks transparency and accountability, and carries many risks of bias due to distorted data. Such an unpredictable technology must not be used where decisions are made to take human lives.

We are facing a society in which machines linked to AI take human lives automatically. At the same time, the digital colonialism and surveillance capitalism built on AI have reached a level that threatens human dignity. The technology of surveillance power that produces such class discrimination requires strong regulation. The technological basis of a socialist society must be under the democratic control and supervision of an empowered working class, with the dignity of life as its first consideration, alongside gender equality, sexual freedom, and a sound relationship with nature.

Source >> International Viewpoint

Footnotes

  1. The shutdown of Microsoft’s chatbot in 2016, shortly after its launch, is a prime example. Tay, an acronym for “Thinking About You”, was Microsoft’s artificial intelligence chatbot, designed to learn from and interact with people. It was released in 2016, but within 16 hours it was spewing hate speech on Twitter against feminists and Jews, and the service was shut down.
    See also https://dailywireless.org/internet/what-happened-to-microsoft-tay-ai-chatbot/ ↩︎
  2. UNESCO, 2020, “Artificial intelligence and gender equality: key findings of UNESCO’s global dialogue” ↩︎
  3. When Lenin wrote “The State and Revolution”, he studied the “Critique of the Gotha Programme” and discussed the social theory of the future. In it, referring to Marx’s first phase, or first stage, of communism, he wrote: “What is usually called socialism was termed by Marx the ‘first’, or lower, phase of communist society.” After the October Revolution, however, there are few examples of Lenin himself using the terms socialism and communism to denote different phases. ↩︎



Yong-Hui Hong is a member of the Japanese section.
