AI ‘Godfather’ Warns Of Better Form Of Intelligence As He Quits Google, But Black Women Said It First
Dr Geoffrey Hinton, dubbed the ‘godfather of AI’, recently resigned from Google, echoing ethical concerns raised by AI experts like Timnit Gebru.
After more than a decade at the tech giant, Hinton quit over fears about the dangers of AI, though he told the BBC that his age also played a role: “I’m 75, so it’s time to retire.”
In 2012, Hinton and two of his graduate students at the University of Toronto created a neural network that laid the foundation for AI technologies like ChatGPT and Google Bard. Neural networks are mathematical systems that loosely mimic the way the human brain learns and processes information. They enable AI systems to learn from experience, much as a person would – an approach known as deep learning.
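To make that idea concrete, here is a minimal, hypothetical sketch of what “learning from experience” means in practice: a tiny network with one hidden layer, written in Python with NumPy, that adjusts its connection weights from examples via gradient descent. It illustrates the general technique only – it is not a reconstruction of Hinton’s 2012 model.

```python
# A toy neural network "learning from experience" via gradient descent.
# Hypothetical illustration only -- not Hinton's actual 2012 model.
import numpy as np

rng = np.random.default_rng(0)

# Training examples: inputs and target outputs (the XOR pattern,
# a classic task that requires a hidden layer to solve).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights and biases for a 2-8-1 network.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10_000):
    # Forward pass: the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every connection to shrink the error.
    delta2 = (pred - y) * pred * (1 - pred)
    delta1 = (delta2 @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ delta2
    b2 -= lr * delta2.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ delta1
    b1 -= lr * delta1.sum(axis=0, keepdims=True)

# After training, the predictions should be close to the targets 0, 1, 1, 0.
print(pred.round(2))
```

Large language models rest on the same basic idea – adjusting weights to reduce prediction error – but at a scale of hundreds of billions of connections.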
However, recent developments by companies like Google and OpenAI have left Hinton questioning his life’s work. He cited concerns about AI’s potential to surpass human intelligence, take away jobs, and be used for immoral purposes.
His concerns add to those raised by leading AI ethicist Dr Timnit Gebru, who was forced out of her position as the co-head of Google’s AI ethics team after raising issues of workplace discrimination.
Similarly, Black technologists like Dr Joy Buolamwini of the Algorithmic Justice League, Yeshimabeit Milner of Data for Black Lives, and Harvard’s Rediet Abebe have long highlighted the need for regulation and equity in AI development.
Eclipsing human intelligence
“The idea that this stuff could actually get smarter than people — a few people believed that,” he told the New York Times. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Hinton is concerned that AI will soon eclipse human intelligence. He pointed out that GPT-4, for example, eclipses humans in terms of general knowledge and can already do simple reasoning.
“Our brains have 100 trillion connections,” Hinton told MIT Technology Review. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”
“It’s a completely different form of intelligence. A new and better form of intelligence.”
Bad actors
He added that smart machines will soon be able to create their own subgoals to carry out a task, something that could be exploited by bad actors.
“Don’t think for a moment that Putin wouldn’t make hyper-intelligent robots with the goal of killing Ukrainians,” he told MIT Technology Review.
“He wouldn’t hesitate. And if you want them to be good at it, you don’t want to micromanage them—you want them to figure out how to do it.”
Should we pause AI development?
Hinton’s resignation adds to calls for regulation in the AI industry. Until last year, he believed that Google was a “proper steward” of the technology, but his opinion has changed as the competition between Google, Microsoft, and others escalates – and regulation fails to keep up.
In March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month pause on the development of new AI systems because of “profound risks to society and humanity.”
Among the signatories was Yoshua Bengio, another so-called godfather of AI, who along with Hinton and Yann LeCun won the 2018 Turing Award for their work on deep learning.
However, Hinton did not sign the letter. He told the BBC that, in the short term, AI would deliver more benefits than risks, and that international competition would make a pause on AI development difficult.
He also did not want to publicly criticize Google before resigning. He told MIT Technology Review: “There’s a lot of good things about Google that I want to say, and they’re much more credible if I’m not at Google anymore.”
Timnit Gebru also addressed AI concerns in a recent statement co-authored with Emily M. Bender (University of Washington), Angelina McMillan-Major (University of Washington), and Margaret Mitchell (Hugging Face): “It is indeed time to act: but the focus of our concern should not be imaginary ‘powerful digital minds.’”
“Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities.”
A new era
“There is no question that machines will become smarter than humans—in all domains in which humans are smart—in the future,” Yann LeCun, Meta’s chief AI scientist, told MIT Technology Review. “It’s a question of when and how, not a question of if.”
However, LeCun has a more positive outlook than Hinton. “I believe that intelligent machines will usher in a new renaissance for humanity, a new era of enlightenment. I completely disagree with the idea that machines will dominate humans simply because they are smarter, let alone destroy humans.”
“Even within the human species, the smartest among us are not the ones who are the most dominating. And the most dominating are definitely not the smartest. We have numerous examples of that in politics and business.”