April 5, 2023

AI Experts On “AI Pause” Letter: Let’s Focus On Real And Present Problems, Not Imaginary Powerful Minds

Last week, notable tech figures, including Elon Musk, Apple co-founder Steve Wozniak, and politician Andrew Yang, signed an open letter from the Future of Life Institute calling on all AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

Commotion ensued. Some signatories withdrew their signatures, other signatures were exposed as fake, and AI experts worldwide criticized the letter’s ethos, claims, and demands.

Here’s what you need to know.

What’s so special about GPT-4?

In an interview on March 15, OpenAI’s chief scientist said, “we are now reaching a point where the language of psychology is starting to be appropriate for understanding the behavior of neural networks.” Soon after, a study found evidence that GPT-4 (the technology behind ChatGPT) had “theory of mind-like” capabilities. It performed as well as an average human adult on a series of psychological tests for theory of mind, causing some to wonder how far we are from “conscious machines.”

These advances have sparked discussions about Artificial General Intelligence (AGI). AI systems in use today are “narrow AI”: they can perform one task, or a narrow range of tasks, very well. An AGI would be an AI that can understand or learn to perform any intellectual task.

The Future of Life Institute, a non-profit organization concerned with preventing risks that could wipe out humanity, believes that AGI, if not designed and managed carefully, could be one of these risks. Its open letter, published on March 22, included signatures from many big names in the world of tech, including Turing Award winner Yoshua Bengio.

What the open letter got wrong

1. Hype and fear

The Future of Life Institute believes that the time for action is now, but there is no consensus on when AGI will arrive: experts’ predictions range from within the next five years, to the next 100 years, to never. The language models driving the current AI hype are good at imitating human text, but whether they “know” or “understand” what they talk about is still up for debate. Critics have argued that the language used in this letter creates unnecessary fear about what these systems can do, and that the letter may be a tactic to generate hype and attract further investment.

2. Longtermism

Longtermism is the philosophy that the problems and lives of the present matter little compared with protecting the future of humanity, whose members will greatly outnumber us. It sounds good on paper, but in practice it means neglecting today’s issues in favor of work that we cannot be sure will ever produce anything beneficial. It is also a convenient excuse that billionaires can use to justify pursuing their own interests.

“Saving a life in a rich country is substantially more important than saving a life in a poor country”

Nick Beckstead, an early contributor to longtermism

Nick Beckstead, an early contributor to longtermism, argued that “saving a life in a rich country is substantially more important than saving a life in a poor country” because richer countries have more innovation and their workers are more economically productive. The Future of Life Institute focuses on these long-term problems, and its letter distracts from pressing issues in AI that are affecting people now.

3. Pressing issues

There are many urgent problems in the field of AI. For example, AI systems are prone to algorithmic bias, their development exploits workers and causes environmental damage, and these systems are dangerous in the wrong hands.

Although these problems are less catastrophic than a hypothetical rogue AGI, they are happening right now, so they deserve our immediate attention. Princeton computer science professor Arvind Narayanan said that these existential risks “are valid long-term concerns, but they’ve been repeatedly strategically deployed to divert attention from present harms.”

4. Logistically impossible

Pausing AI research beyond GPT-4 for six months would be logistically very difficult. Many companies all over the world are actively working toward this goal, and pausing them all would require global cooperation.

You would have a better chance solving AI alignment than reaching human alignment.

Dustin Tran, AI researcher at Google

Shortly after the letter was published, many of its signatures turned out to be fake, including that of Xi Jinping, the president of China. His name was likely added to mock the idea that this kind of global cooperation is achievable. Dustin Tran, an AI researcher at Google, made the same point: “you would have a better chance solving AI alignment than reaching human alignment.”

5. Slow and steady

Currently, OpenAI is leading progress toward AGI, but its competitors are not far behind, and a pause may allow them to catch up. If a single organization is far ahead, it can easily slow down or stop its research when it thinks it is getting close. But if competitors catch up and an AGI “arms race” breaks out, one of these organizations might hurtle into AGI unprepared in an attempt to stay ahead of the competition.

Alternatively, a pause could mean that many advancements are made but not put into practice. Then, once the pause ends, deploying them all at once would be a dangerous leap into the unknown.

What the open letter got right

The letter does make suggestions intended to deal with some of AI’s present issues, for example using watermarks to differentiate between human-made and AI-generated media. However, these are overshadowed by talk of “powerful digital minds” with “human-competitive intelligence.”

It is indeed time to act: but the focus of our concern should not be imaginary “powerful digital minds.”

Timnit Gebru, Distributed Artificial Intelligence Research Institute

The letter exaggerates the capabilities of large language models even as it references “On the Dangers of Stochastic Parrots,” an academic paper that criticizes these systems, highlighting their tendency to output fictitious statements.

Timnit Gebru, a leader in ethical AI research and a co-author of that paper, co-wrote a statement responding to the open letter.

The authors write: “It is indeed time to act: but the focus of our concern should not be imaginary ‘powerful digital minds.’ Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities.”

Christian Ilube

Arts and Sciences undergraduate student at University College London, passionate about tackling algorithmic bias.