AI: Ambiguous Intentions

Had I asked you about Artificial Intelligence two years ago, you might have thought of robots sent from the future to kill us all, or human batteries plugged into a virtual reality simulation. But as of November 30, 2022, with the release of ChatGPT, AI has taken on a whole new meaning in the public eye.

In 2015, OpenAI, an AI research organization founded by incredibly wealthy tech entrepreneurs, was created to develop “safe and beneficial” Artificial General Intelligence (AGI). Although there are many definitions of AGI, a generally accepted one is an AI system that can perform any given task as well as the average human; an Artificial Super Intelligence, by contrast, can outperform a human expert in any field. OpenAI was originally founded as a non-profit but transitioned into a ‘capped-profit’ company in 2019. Critics of this move claim that it directly contradicts the company’s stated goal of democratizing AI, but it has undoubtedly increased its ability to grow and become more economically viable. Since then, the company has partnered with Microsoft and focused on developing new deep-learning models. Its most well-known products are DALL-E, an AI image generator, and ChatGPT, the text-generation tool that most people are familiar with.

But what is ChatGPT? A GPT, or Generative Pre-trained Transformer, is a Large Language Model (LLM) trained on a massive body of text to generate unique, human-like content. Fundamentally, a GPT attempts to generate a reasonable continuation of the text so far, based on the context, what you have asked it, and its training. When choosing which word, or rather, token (a short group of characters), comes next, it doesn’t simply pick the one most likely to follow; if it did, it would fall into loops of repetitive, nearly identical text. Instead, it occasionally picks a less probable token at random, and the likelihood of choosing a lower-ranked token is controlled by a “temperature” setting. In training, a GPT processes a huge volume of unlabeled text, meaning text that has not been annotated with grammar or any other context. From this text, the GPT learns the probability of any word appearing after a given sequence of words; the internal weights that encode these probabilities are known as parameters. GPT-3, the model family behind ChatGPT’s free version, has 175 billion parameters. GPT-4, meanwhile, only available to paid users, reportedly has around one and a half trillion.
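To make that sampling step concrete, here is a minimal Python sketch of temperature-based token selection. It is not OpenAI’s actual code: the prompt, candidate tokens, and probabilities are invented for illustration, and a real model scores tens of thousands of possible tokens at every step.

```python
import math
import random

def sample_next_token(token_probs, temperature=0.8):
    """token_probs: hypothetical model probabilities for each candidate token."""
    tokens = list(token_probs)
    # Rescale the log-probabilities by the temperature, then renormalize.
    # A temperature near zero almost always picks the top-ranked token;
    # a higher temperature flattens the distribution, so lower-ranked
    # tokens get chosen more often.
    logits = [math.log(token_probs[t]) / temperature for t in tokens]
    highest = max(logits)
    weights = [math.exp(l - highest) for l in logits]
    return random.choices(tokens, weights=weights, k=1)[0]

# Invented continuation probabilities for the prompt "The cat sat on the"
candidates = {" mat": 0.55, " floor": 0.25, " roof": 0.15, " moon": 0.05}
print([sample_next_token(candidates, temperature=0.8) for _ in range(5)])
```

Run it a few times and “ mat” dominates but doesn’t always win; lower the temperature and the output becomes predictable, raise it and the rarer continuations show up more often.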

Almost all of the data ChatGPT uses to generate text has been harvested from the internet in a manner that is ethically questionable, and quite possibly illegal. This reliance on large-scale web scraping may even contribute to the eventual failure of ChatGPT and its competitors: as more and more AI-generated content floods the internet, ‘model collapse’ is becoming a serious concern. When AI models are trained on AI-generated content, they amplify the average of the data, drowning out anything more unique or uncommon. As a consequence, data scraped from the internet before the AI boom is far more valuable than the data available now. Not only will this dilution of information make training new GPTs much more difficult, it will also make finding authentic information on the internet much harder.
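The mechanism is easier to see with a toy example. The sketch below is my own illustration, not a real training pipeline: the “model” is just a normal distribution, and each generation is refit only on the previous generation’s most typical outputs, standing in for the way generative models over-produce common data and under-produce rare data.

```python
import random
import statistics

mean, stdev = 0.0, 1.0  # generation 0: fit to "real", human-made data
for generation in range(1, 8):
    samples = [random.gauss(mean, stdev) for _ in range(10_000)]
    # The model mostly reproduces its common outputs; rare "tail" data is lost.
    typical = [x for x in samples if abs(x - mean) <= stdev]
    mean = statistics.fmean(typical)   # the next generation trains on this
    stdev = statistics.stdev(typical)
    print(f"generation {generation}: spread of learned model = {stdev:.3f}")
```

Each pass loses the tails, so the spread roughly halves every generation; within a handful of generations the simulated model can only produce nearly identical, average output, and the variety that made the original data useful is gone.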

At face value, ChatGPT seems incredibly powerful: it can generate comprehensible, essay-length texts that appear to be of human origin even under scrutiny. Given ever larger datasets and ever more parameters, it seems it should eventually be able to do anything. Unfortunately, this isn’t the case: ChatGPT doesn’t understand a single word it produces. It doesn’t know that 2 + 2 = 4; it just sees that the most common token to follow “2 + 2 =” is “4”. Yet more evidence for the limited scope of GPT-based ‘AI’ can be found in the way human babies learn. When infants develop language skills, they receive a wide range of cues along with any new words they are taught. A baby’s ability to learn a language is severely limited when given a single input source, such as TV or audio, and deprived of social interaction and its accompanying cues. ChatGPT receives massive amounts of training data yet has no real context for this input, and, even if it did, it would not be able to interpret it. This is a major issue when trying to create human-like intelligence; as it stands, ChatGPT is little more than a parrot of pre-existing information: it cannot generalize or come up with new knowledge.

So are ChatGPT and its ilk a dead end? Are they doomed to be a novelty: something lazy students use to write their essays and nothing more? That seems too simplistic a view. Although unable to create new information or understand the text it produces, AI, if used correctly, can greatly improve workers’ efficiency. AI is already massively impacting the job market: if you need proof, research shows that 47% of business leaders are already looking to use AI instead of hiring new employees. Other research suggests that, by 2030, generative AI will account for nearly 30% of jobs lost to automation. Anthony Seldon, a British education expert, claims that robots will replace teachers in the classroom by 2027. Although plenty of this may simply be fear-mongering, the impact that AI has had, and will have, on society cannot be overstated.

At this point, ensuring that AI is developed safely and ethically seems critically important. Well, remember how OpenAI restructured itself into a for-profit company in 2019? Coincidentally, that was also about the last time the company was remotely transparent about its latest models. OpenAI, founded as an open-source non-profit competitor to tech giants like Google, is now a closed-source partner of Microsoft and has become increasingly secretive about its technology. In recent news, Sam Altman, the long-time CEO of OpenAI, was fired and then rehired alongside a brand new board of directors. The old board members, whose purpose was to rein in the for-profit side of OpenAI and ensure the company maintained its founding principles, have almost all resigned and been replaced by more corporate-minded, profit-focused business leaders.

AI development is at an inflection point. It is set to continue at a breakneck pace, and its growth will drastically impact our lives. Perhaps this technology, developed in secret by massive corporations worth more than entire countries, will make work more efficient and improve everyone’s lives. Or AI might simply provide a new frontier for already dominant corporations to expand into, allowing the elite one percent to better enforce their techno-feudal world order upon the remaining 99%. But that’s not going to happen; everyone knows big tech has your best interests at heart.