Tech Elite’s AI Ideologies Have Racist Foundations, Say AI Ethicists
More and more prominent tech figures are voicing concerns about superintelligent AI and risks to the future of humanity. But as leading AI ethicist Timnit Gebru and researcher Émile P. Torres point out, these ideologies have deeply racist foundations.
“So another ‘godfather’ of AI, Turing Award winner Yoshua Bengio, has decided to FULLY align himself with the #TESCREAL bundle, writing about ‘rogue AI’ and prominently citing people like Nick Bostrom,” tweeted Gebru, founder of the Distributed AI Research Institute (DAIR).
TESCREAL, coined by Émile Torres and Gebru, stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. For a breakdown of each term, check out this article in The Washington Spectator.
According to Torres, these overlapping emergent belief systems all trace their lineage back to the first-wave Anglo-American eugenics tradition, and underlying them all is a kind of techno-utopianism and a sense that one is genuinely saving the world. Moreover, these communities tend to overlap: many who identify with one of the letters in “TESCREAL” also identify with others.
The TESCREAL ideologies are hugely influential among AI researchers, and that influence is a significant problem.
Whose life matters?
Let’s take longtermism, for example. Longtermism is the philosophy that the problems and lives of the present are unimportant compared to the problem of protecting the future of humanity, who will greatly outnumber us. It is the idea that what matters most is for earth-originating intelligent life to fulfill its potential in the cosmos.
Nick Beckstead, an early contributor to longtermism, said that “saving a life in a rich country is substantially more important than saving a life in a poor country because richer countries have more innovation, and their workers are more economically productive.”
Oxford University professor Nick Bostrom, the ‘father’ of longtermism, is particularly concerned about “dysgenic pressures” as an existential threat. Essentially, he’s worried that less intelligent people might out-breed more intelligent people, resulting in a net loss of species-wide intelligence. Who are these less intelligent people?
Well, in an email in which he used the n-word, Bostrom said he believed it was “true” that “Blacks are more stupid than whites.” Although Bostrom issued an apology, he did not retract the racial slur or explicitly address the substance of his comments on relative IQ.
Silicon Valley’s elites
Nevertheless, longtermism continues to gain popularity among Silicon Valley elites. Figures like Sam Altman, Elon Musk, and Sam Bankman-Fried have expressed support for Bostrom’s ideas.
OpenAI co-founder and CEO Sam Altman praised Bostrom’s book as the “best thing” he’s seen written on AI risk. Elon Musk called the philosophy a “close match” to his own beliefs and retweeted Bostrom’s work on longtermism. Musk’s company Neuralink reportedly plans to “kickstart transhuman evolution” with “brain-hacking” tech. Disgraced former FTX CEO Sam Bankman-Fried was known for his die-hard longtermist beliefs.
The Future of Life Institute recently published an open letter calling for a six-month pause on AI development, which was rooted in longtermism. Turing Award winner Yoshua Bengio, one of the many signatories of the letter, has since written a blog post about “existential risk” that references Bostrom.
Let’s sort out the racism first
In a Twitter thread, Gebru wrote: “Maybe stop your racist ass field first…before trying to ‘save humanity.’ Why don’t you start by cleaning your own damn house and stop platforming the actual humans whose ideologies are threats to actual living humans right now? The entire foundation of this field is the most racist shit I’ve ever seen.”
While discussions about AI regulation gain traction, it is crucial not to overlook the real and present harms caused by exploitative practices, the perpetuation of oppressive systems, widening social inequalities, the environmental impact of AI, and the concentration of power in the hands of a few.
As discussions of AI regulation take center stage, it is important that the fantasies of the (white male) tech elite, which are built on racist foundations, don’t eclipse the need for transparency, accountability, and preventing harm to actual present-day people.