Part of the Series: Movement Memos
“It’s really important for people to understand what this bundle of ideologies is, because it’s become so hugely influential, and is shaping our world right now, and will continue to shape it for the foreseeable future,” says philosopher and historian Émile P. Torres. In this episode of “Movement Memos,” host Kelly Hayes and Torres discuss what activists should know about longtermism and TESCREAL.
Music by Son Monarcas & David Celeste
TRANSCRIPT
Note: This is a rush transcript and has been lightly edited for clarity. Copy may not be in its final form.
Kelly Hayes: Welcome to “Movement Memos,” a Truthout podcast about solidarity, organizing and the work of making change. I’m your host, writer and organizer Kelly Hayes. Today, we are talking about longtermism, and I can already hear some of you saying, “What the hell is that,” which is why it’s so important that we have this conversation. Longtermism is a school of thought that has gained popularity in Silicon Valley and much of the tech world, and it’s an ideology that’s come a long way in a fairly short period of time. Proponents say it’s “a set of ethical views concerned with protecting and improving the long-run future,” which sounds innocuous, or even good, really. But as we’ll discuss today, the ideas invoked by longtermists are cultish and often devalue the lives and liberty of people living in the present, and unfortunately, there is currently a lot of power and money behind them. In fact, computer scientist Timnit Gebru has argued that longtermism has become to the tech world what Scientology has long been to Hollywood — an almost inescapable network of influence that can govern the success or failure of adherents and non-adherents.
One of the reasons I find longtermism frightening is that while it has gained a whole lot of financial, political and institutional momentum, some of the smartest people I know still don’t understand what it is, or why it’s a threat. I wanted to do something about that. So our first block of episodes this season will be an exploration of tech issues, including what activists and organizers need to know about artificial intelligence, and a discussion of the kind of storytelling we’ll need in order to resist the cult-ish ideas coming out of Silicon Valley and the tech world.
Today, we will be hearing from Émile P. Torres. Émile is a philosopher and historian whose work focuses on existential threats to civilization and humanity. They have published on a wide range of topics, including machine superintelligence, emerging technologies, and religious eschatology, as well as the history and ethics of human extinction. Their forthcoming book is Human Extinction: A History of the Science and Ethics of Annihilation. I am a big fan of Émile’s work and I am excited for you all to hear their analysis of longtermism, and why we urgently need to educate fellow organizers about it. I feel sure that once most people understand what longtermism and the larger TESCREAL bundle of ideologies are all about (and we’ll explain what we mean by TESCREAL in just a bit), a lot of people will be concerned, appalled, or just plain disgusted, and understand that these ideas must be opposed. But when a movement backed by billionaires is gaining so much political, financial and institutional momentum without the awareness of most activists and everyday people, we’re basically sitting ducks. So I’m hoping that this episode can help us begin to think strategically about what it means to actively oppose these ideas.
It’s great to be back, by the way, after a month-long break, during which I did a lot of writing. I also had the opportunity to support our friends in the Stop Cop City movement during their week of action in Atlanta. I am grateful to Truthout, for a schedule that allows me to balance my activism and my other writing projects with this show, which is so dear to my heart. This podcast is meant to serve as a resource for activists, organizers and educators, so that we can help arm people with the knowledge and analysis they need to make transformative change happen. As our longtime listeners know, Truthout is a union shop, we have not laid anyone off during the pandemic, and we have the best family and sick leave policies in the industry. So if you would like to support that work, you can subscribe to our newsletter or make a donation at truthout.org. You can also support the show by sharing your favorite episodes, leaving reviews on the streaming platforms you use, and by subscribing to Movement Memos wherever you get your podcasts. I also want to give a special shout out to our sustainers, who make monthly donations to support us, because you all are the reason I still have a job, and I love you for it.
And with that, I am so grateful that you’re all back here with us, for our new season, and I hope you enjoy the show.
[musical interlude]
Émile P. Torres: My name is Émile P. Torres, and I’m a philosopher and historian who’s based in Germany at Leibniz University. And my pronouns are they/them. Over the past decade, plus a few extra years, my work has focused on existential threats to humanity and civilization.
For the longest time, I was very much aligned with a particular worldview, which I would now describe in terms of the TESCREAL bundle of ideologies. Over the past four or five years, I’ve become a quite vocal critic of this cluster of ideologies.
So longtermism is an ideology that emerged out of the effective altruism community. The main aim of effective altruism is to maximize the amount of good that one does in the world. Ultimately, it’s to positively influence the greatest number of lives possible. So longtermism arose when effective altruists realized that humanity could exist in the universe for an extremely long amount of time. On earth, for example, we could persist for another billion years or so. The future number of humans, if that happens, could be really enormous.
Carl Sagan in 1983 estimated that if we survive for just another 10 million years, there could be 500 trillion future people. That’s just an enormous number. Compare that to the number of people who have existed so far in human history: the estimate is about 117 billion. That’s it. So 500 trillion is just a much larger number, and that’s just the next 10 million years. On earth, we potentially have another billion years during which we could survive.
If we spread into space, and especially if we spread into space and become digital beings, then the future number of people could be astronomically larger. One estimate is that within the Milky Way alone, there could be 10 to the 54[th power] digital people. 10 to the 54, that’s a one followed by 54 zeros. If we go beyond the Milky Way to other galaxies and spread throughout the entire accessible universe, a lower-bound estimate is 10 to the 58. Again, a one followed by 58 zeros, that’s how many future people there could be.
So if you’re an effective altruist and your goal is to positively influence the greatest number of people possible, and if most people who could exist will exist in the far future, once we colonize space and create these digital worlds in which trillions and trillions of people live, then you should be focused on the very far future. Even if there’s only a small probability that you’ll positively influence 1% of these 10 to the 58 digital people in the future, that still is just a much greater value in expectation, so much greater expected value, than focusing on current people and contemporary problems.
To put this in perspective, once again, there are 1.3 billion people today in multidimensional poverty. So lifting them out of poverty would be really good, but influencing in some beneficial way, 1% of 10 to the 58 future digital people in the universe, that is a much, much larger number. Longtermism was this idea that okay, maybe the best way to do the most good is to pivot our focus from contemporary issues towards the very far future.
That’s not to say that contemporary issues should be ignored entirely. We should focus on them, only insofar as doing so might influence the very far future. Ultimately, it’s just a numbers game. That’s really the essence of longtermism.
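[Editor’s note: To make the arithmetic in the argument above concrete, here is a rough back-of-the-envelope sketch in Python. The 10 to the 58 figure, the 1 percent example, and the 1.3 billion figure come from the conversation; treating the comparison as a simple expected-value calculation, and the variable names used here, are illustrative assumptions rather than anything drawn from a formal longtermist model.]

```python
# A rough sketch of the longtermist "numbers game" described above.
# All figures are taken from the conversation or are illustrative.

people_in_poverty_today = 1.3e9      # people in multidimensional poverty
future_digital_people = 1e58         # lower-bound estimate cited above

# Suppose an action has only a 1% chance of benefiting the far future.
probability_of_success = 0.01
expected_beneficiaries = probability_of_success * future_digital_people  # 1e56

print(f"Expected far-future beneficiaries: {expected_beneficiaries:.0e}")
print(f"People in poverty today:           {people_in_poverty_today:.0e}")
print(f"Ratio: {expected_beneficiaries / people_in_poverty_today:.0e}")
# Even discounted to a 1% chance, 1e56 exceeds 1.3e9 by roughly 47 orders
# of magnitude, which is how, on this logic, present-day suffering gets
# crowded out of the calculation.
```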
KH: We are going to dive more deeply into the implications of this longtermist idea, that we need to focus on and prioritize outcomes in the deep future, but first, we are going to talk a bit about the TESCREAL bundle of ideologies. TESCREAL is an acronym that can help us understand how longtermism connects with some of the other ideologies and concepts that are driving the new space race, as well as the race to create artificial general intelligence. It’s important to note that the concept of artificial general intelligence, or AGI, bears no relation to the products and programs that are currently being described as AI in the world today. In fact, Emily Tucker, the Executive Director of the Center on Privacy & Technology, has argued that programs like ChatGPT should not be characterized as artificial intelligence at all. Tucker writes that our public adoption of AI, as a means to describe current technologies, is the product of “marketing campaigns, and market control,” and of tech companies pushing products with a “turbocharged” capacity for extraction. Using massive data sets, Large Language Models like ChatGPT string together words and information in ways that often make sense, and sometimes don’t.
The branding of these products as AI has helped create the illusion that AGI is just around the corner. Artificial general intelligence lacks a standard definition, but it usually refers to an AI system whose cognitive abilities would either match or exceed those of human beings. An artificial superintelligence would be a system that profoundly exceeds human capacities. As we will discuss in our next episode, we are about as close to developing those forms of AI as we are to colonizing Mars — which is to say, Elon Musk’s claims that we will colonize Mars by the 2050s are complete science fiction. Having said that, let’s get into what we mean when we use the acronym TESCREAL, and why it matters.
ET: I coined the acronym TESCREAL while writing an article with Dr. Timnit Gebru, who’s this world-renowned computer scientist who used to work for Google, and then was fired after sounding the alarm about algorithmic bias. We were trying to understand why it is that artificial general intelligence, or AGI, has become the explicit aim of companies like OpenAI and DeepMind, which are backed by billions of dollars from huge corporations.
DeepMind is owned by Google. OpenAI gets a lot of its funding from Microsoft, I think $11 billion so far. Why is it that they are so obsessed with AGI? I think part of the explanation is the obvious one, which is that Microsoft and Google believe that AGI is going to yield huge profits. There’s just going to be billions of dollars in profits as a result of creating these increasingly so-called powerful artificial intelligence systems. But I think that explanation is incomplete.
One really has to recognize the influence of this TESCREAL bundle of ideologies. The acronym stands for Transhumanism, Extropianism — it’s a mouthful — Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. The way I’ve described it is that transhumanism is the backbone of the bundle, and longtermism is kind of the galaxy brain atop the bundle. It sort of binds together a lot of the themes and important ideas that are central to these other ideologies.
Transhumanism in its modern form emerged in the late 1980s and 1990s. The central aim of transhumanism is to develop advanced technologies that would enable us to radically modify, or they would say radically enhance, ourselves to ultimately become a posthuman species. So by becoming posthuman, we could end up living forever. We could maybe abolish all suffering, radically enhance our cognitive systems, augment our cognitive systems so that we ourselves become super intelligent, and ultimately usher in this kind of utopian world of immortality and endless pleasure.
Some transhumanists even refer to this as paradise engineering. In fact, the parallels between transhumanism and traditional religion are really quite striking. That’s really not a coincidence. If you look at the individuals who initially developed the transhumanist ideology, they were explicit that this is supposed to be a replacement for traditional religion. It’s a secular replacement.
And so, AGI was always pretty central to this vision. Once we create AGI, if it is controllable, so if it behaves in a way that aligns with our intentions, then we could instruct it to solve all of the world’s problems. We could just delegate it the task of curing the so-called problem of aging. Maybe it takes a minute to think about it. After that minute, because it’s super intelligent, it comes up with a solution. Same goes for the problem of scarcity.
It would potentially be able to immediately introduce this new world of radical abundance. So AGI is sort of the most direct route from where we are today to this techno-utopian world in the future that we could potentially create. Sam Altman himself, the CEO of OpenAI, has said that without AGI, space colonization is probably impossible. Maybe we could make it to Mars, but getting to the next solar system, which is much, much, much further than Mars, that’s going to be really difficult.
So we probably need AGI for that. So from the start, when the bundle really just consisted of transhumanism, AGI was very important. It was already very central to this worldview. Then over time, transhumanism took on a number of different forms. There was extropianism, which was the first organized transhumanist movement. Then you had singularitarianism, which emphasized the so-called technological singularity. It’s this future moment when the pace of scientific and technological development accelerates to the point where we just simply cannot comprehend the rapidity of new innovations. Perhaps that could be triggered by the creation of AGI. Since AGI is, by definition, at least as smart as humans, and since the task of designing increasingly powerful AI systems is an intellectual task, if we have this system that just has our level of “intelligence,” then it could take over that task of designing better and better machines.
You’d get this, what they would call recursive self-improvement, a positive feedback loop, whereby the more capable the AI system becomes, the better positioned it is to create even more capable AI systems, and so on and so on. That’s another notion of the singularity. So for the singularitarian version of transhumanism, AGI really is just right there, center stage. Then you have cosmism, which is another variant of transhumanism, which is just even broader and even grander, you might even say even more grandiose, than transhumanism, because it’s about spreading into space, reengineering galaxies, engaging in things that they call spacetime engineering. The creation of “scientific magic” is another term that they use. This particular view of the future has become really central to longtermism.
So to get to the other letters in the acronym real fast, rationalism is basically a spinoff of the transhumanist movement. That is based on this idea that, okay, we’re going to create this techno-utopian future in the world, that’s going to require a lot of “smart people doing very smart things.” So let’s take a step back and try to figure out the best ways to optimize our smartness, in other words, to become maximally rational. That’s the heart of rationalism. Then EA [effective altruism] is what I had mentioned before, which actually was greatly influenced by rationalism. Whereas rationalists focus on optimizing our rationality, effective altruists focus on optimizing our morality.
Again, if you’re an EA who’s trying to optimize your morality by increasing the amount of good you do in the world, once you realize that the future could be huge, that we could colonize space and create these vast computer simulations in which trillions and trillions of digital people supposedly live happy lives, then it’s only rational to focus on the very far future rather than on the present. That’s the TESCREAL bundle in a nutshell.
Again, transhumanism is the backbone. Longtermism is the galaxy brain that sits atop the bundle, and it’s this bundle of ideologies that has become hugely influential in Silicon Valley and the tech world more generally. Elon Musk calls longtermism “a close match for my philosophy.” Sam Altman is a transhumanist whose vision of the future aligns very closely with cosmism and longtermism. According to a New York Times profile of Sam Altman, he’s also a product of the effective altruist and rationalist communities.
So this ideology is everywhere. It’s even infiltrating major international governing bodies like the United Nations. There was a UN Dispatch article from just last year that noted that foreign policy circles in general and the United Nations in particular are increasingly embracing the longtermism ideology. If you embrace longtermism, there is a sense in which you embrace the core commitments of many of the other TESCREAL ideologies.
It’s really, really important for people to understand what this bundle of ideologies is, because it’s become so hugely influential, and is shaping our world right now, and will continue to shape it for the foreseeable future.
KH: Something that I was eager to discuss with Émile was how they became interested in longtermism and the larger bundle of TESCREAL ideologies. In following Émile’s work, I learned that they once subscribed to transhumanist ideas. I wanted to understand how and why they were pulled into that ideology, because, if we are going to counter these ideas in the world, we need to understand how and why they appeal to people who aren’t tech bros trying to take over the world.
ET: My background with this bundle of ideologies is that I discovered transhumanism, I think around 2005, as a result of Ray Kurzweil’s book, which was published in 2005, called The Singularity is Near.
And to be honest, my initial reaction to transhumanism was horror, in part because the very same individuals who were promoting the development of these advanced technologies, like synthetic biology, molecular nanotechnology, advanced artificial superintelligence, and so on, also acknowledged that these technologies would introduce unprecedented threats to human survival.
So on the TESCREAL view, failing to create these technologies means we never get to utopia. We have no option except to develop them. There’s only one way forward, and it’s by way of creating these technologies, but they’re going to introduce extraordinary hazards to every human being on earth. Consequently, what we need to do is create this field, which is called existential risk studies, to study these risks, figure out how to neutralize them. That way, we can have our technological cake and eat it too.
So my initial thought was that the safer option would be just to not develop these technologies in the first place. Ray Kurzweil himself says that we have probably a better than 50 percent chance of surviving the 21st century. Those odds are dismal. That’s alarming. He’s a techno-optimist, widely known as a techno-optimist. He says, “Okay, we have probably a better than 50 percent chance of not all dying.” I thought it’s just better to never develop these technologies. In fact, this was a view that was proposed and defended by a guy named Bill Joy in a famous 2000 article published in Wired magazine, called Why the Future Doesn’t Need Us.
Bill Joy was a co-founder of Sun Microsystems. He’s not a Luddite, he’s not anti-technology, but he had basically the same response that I had: these technologies are just way too dangerous. Transhumanists and the early TESCREALists said, “No, no, no, standing still is not an option. We have to develop them because they’re our vehicle to tech-utopia in the far future, or maybe the very near future.”
And so, over time, I became convinced that the enterprise of technology probably can’t be stopped. There probably are no brakes on this train that we all find ourselves sitting on. So consequently, the best thing to do is to join them, and to try to do what one can to change the trajectory of civilizational development into the future, in ways that are as good as possible. So that’s how I ended up in the transhumanist movement.
And I would say that over time, for probably about six years, I came to not just reluctantly join this movement, but actually to become enthusiastic about it. I think part of that is that I was raised in a really religious community. There was a lot of talk about the future of humanity, in particular end times events like the rapture, and the rise of the antichrist, and this seven-year period of just absolute terror called the tribulation, during which the antichrist reigns.
Then ultimately, Jesus descends. That’s the second coming. There’s the battle of Armageddon, and it’s all just dark and bleak. But once the clouds clear, then there would be paradise with God forever. So I mention this because I started to lose my faith when I was around 19 or 20. What was left behind was a religion-shaped hole, and transhumanism fit that very nicely. I mentioned before that the individuals who developed the idea of transhumanism in the first place all were explicit that it is a secular replacement for traditional religion.
For example, one of the first times that this idea of transhumanism was developed was in a 1927 book by Julian Huxley, a very prominent eugenicist from the 20th century. The book was revealingly called Religion Without Revelation. So instead of relying on supernatural agency to usher in paradise forever, and immortality, and radical abundance, and so on, let’s try to figure out how to do this on our own. And by using technology, by employing the tools of science and eugenics, and through increasingly sophisticated innovations, we can devise means ourselves to create heaven on earth, and maybe even heaven in the heavens if we spread beyond earth, which we should. So transhumanism really fit, by intention, this void that was left behind when I lost my faith in Christianity.
And so the more I found myself in the transhumanist community, the more convinced I was that actually, maybe it’s possible to develop these technologies and usher in utopia by ourselves, to use radical life extension technologies to enable us to live indefinitely long lives, use these technologies like brain computer interfaces to connect our brains to the internet, thereby making us much more intelligent than we currently are, and so on. It just seems like maybe actually this is technologically feasible.
That’s what led me to focus on studying existential risk. Again, existential risk is any event that would prevent us from creating this techno-utopian world in the future. And so if we mitigate those threats, then we are simultaneously increasing the likelihood that we will live in this utopian world. Really, there were two things that changed my mind about all of this. One is very embarrassing, and it’s that I actually started to read scholars who aren’t white men. I got a completely different perspective, over several years of diving into this literature by scholars who aren’t white men, on what the future could look like.
That was a bit of an epiphany for me: that actually, the vision of utopia that is at the heart of the TESCREAL bundle is deeply impoverished. I now believe that its realization would be catastrophic for most of humanity. If you look in the TESCREAL literature, you will find virtually zero reference to ideas about what the future ought to look like from non-Western perspectives, such as Indigenous, Muslim, Afrofuturist, feminist, disability rights, and queer perspectives, and so on.
There’s just no reference to what the future might look like from these alternative vantage points. Consequently, you just end up with this very homogenized, like I said, just deeply impoverished view of what the future should be. We just need to go out into space, create these vast computer simulations, where there are just trillions and trillions of digital people who are all, for some reason, living these happy lives, being productive, maximizing economic productivity.
In the process, we subjugate nature. We plunder the cosmos for all of its resources. This is what longtermists call our cosmic endowment of negentropy, which is just negative entropy. It’s just energy that is usable to us, in order to create value structures like human beings. That’s literally how longtermists refer to future people, just value structures. And so, I thought, okay, it’s really impoverished. So increasingly, this utopian vision became kind of a non-starter for me.
A lot of people can agree on what dystopia would look like, but few people can agree about what utopia should be. And I really think if the vision of utopia at the heart of the TESCREAL bundle were laid out in all its details to the majority of humanity, they’d say, “That is not a future I want.” Beyond that, though, I also became convinced that longtermism, and TESCREALism more generally, could be super dangerous. And this is because I started to study the history of utopian movements that became violent.
And I noticed that at the core of a lot of these movements were two components, a utopian vision of the future, and also a broadly utilitarian mode of moral reasoning. So this is a kind of reasoning according to which the ends justify the means, or at least the ends can justify the means. And when the ends are literal utopia, what is off the table for ensuring that we reach that utopia? In the past, these two components, when smashed together, have led to all sorts of violent acts, even genocides.
I mean, in World War II, Hitler promised the German people a thousand-year Reich. He was very much drawing from the Christian tradition of utopia. This thousand-year Reich is a period when Germany’s going to reign supreme and everything for the Aryan people is going to be marvelous. That’s partly what justified, at least for true believers in this particular vision of the future, extreme actions, even genocidal actions. At the heart of longtermism are just these two components.
It became increasingly clear to me that longtermism itself could be profoundly dangerous. If there are true believers out there who really do expect there to be this utopian future among the heavens, full of astronomical amounts of value, 10 to the 58 happy digital people, then it’s not difficult to imagine them in a situation where they justify to themselves the use of extreme force, maybe even violence, maybe even something genocidal, in order to achieve those ends.
When I initially wrote about this concern, in a 2021 essay published in Aeon, the concern was merely hypothetical. My claim was not that there are actual longtermists out there who are saying that engaging in violence and so on is in fact justified, but rather that this ideology itself is dangerous. And if you fast-forward two years into the future up to the present, these hypothetical concerns that I expressed are now really quite concrete.
For example, Eliezer Yudkowsky, the founder of rationalism, former extropian transhumanist singularitarian, who also is greatly influential among effective altruists and longtermists, he believes that if we create artificial general intelligence in the near future, it will kill everybody. So as a result, this techno-utopian future will be erased forever. He also believes that an all-out thermonuclear war would not kill everybody on the planet.
In fact, the best science today supports that. An all-out thermonuclear war probably would not kill everybody. There was a paper published in 2022 that found that an exchange between Russia and the US would kill about 5 billion people. That’s just an enormous catastrophe, but it leaves behind a reassuring 3 billion or so to carry on civilization, and ultimately develop this posthuman future by colonizing space, subjugating nature, plundering the cosmos, and so on.
Yudkowsky, looking at these two possibilities, argues that we should do everything we can to prevent AGI from being developed in the near future because we’re just not ready for it yet. We should even risk an all-out thermonuclear war, because again, a thermonuclear war probably is not going to kill everybody, whereas AGI in the foreseeable future is going to.
When he was asked on Twitter “How many people are allowed to die to prevent AGI in the near future,” his response was, “So long as there are enough people, maybe this is just a few thousand, maybe it’s 10,000 or so. As long as there are enough people to survive the nuclear holocaust and then rebuild civilization, then maybe we can still make it to the stars someday.”
That was his response. It’s exactly that kind of reasoning that I was screaming about two years ago. It’s really dangerous. Here you see people in that community, expressing the very same extremist views.
KH: I started hearing about longtermism last year, around the time that Elon Musk launched his bid to acquire Twitter. Some of you may recall Jack Dorsey, the former CEO of Twitter, justifying Musk’s takeover of the platform by saying, “Elon is the singular solution I trust. I trust his mission to extend the light of consciousness.” That bit about extending “the light of consciousness” piqued my interest. I assumed Dorsey was referring to Musk’s space fetish, but I couldn’t figure out what that had to do with allowing a man with terrible politics, and a history of bullshitting on a grand scale, to take over one of the most important social media platforms in the world. We are talking about a man who recently tweeted that he wants to have “a literal dick-measuring contest” with Mark Zuckerberg. So any investment in his larger vision is just baffling to me.
Well, an investigative journalist named Dave Troy broke things down in a blog post on Medium, in which he explained that Dorsey and Musk both subscribe to longtermist ideas, and that Musk’s takeover of Twitter was not so much a business venture as an ideological maneuver. According to Troy, Musk was angling to disempower so-called “woke” people and ideas that he claims are “destroying civilization,” for the sake of his larger political agenda.
So why does Musk view people and movements emphasizing the well-being of marginalized people, or the environment, in the here and now, as threats to civilization? The longtermist philosophy dictates that the only threats that matter are existential threats, which could interfere with the realization of the utopian, interplanetary future longtermists envision. The only goals that matter are the advancement of AI and the new space race, because those two pursuits will supposedly allow us to maximize the number of happy future people, including vast numbers of digital people, to such a degree that our concern for those enormous future communities should outweigh any concern we have for people who are suffering or being treated unfairly today. As Émile explains, it’s a numbers game.
ET: Everybody counts for one, but there could be 10 to the 58 digital people in the future, whereas there are only 8 billion of us right now. So by virtue of the multitude of potential future people, they deserve our moral attention more than contemporary people. That’s really the key idea.
And the reason many TESCREALists, Longtermists in particular, are obsessed with our future being digital is that, you can cram more digital people per unit of space than you can biological people. That’s one reason. If you want to maximize the total amount of value in the universe, that’s going to require increasing the total human population. The more happy people there are, the more value there’s going to be in total within the universe as a whole.
You have a moral obligation to increase the human population. If it’s the case that you can create more digital people in the future than biological people, then you should create those digital people. That’s one reason they’re obsessed with this particular view. Also, if we want to colonize space in the first place, we’re almost certainly going to need to become digital.
Like I mentioned before, spreading to Mars, that may be possible if we’re biological, but getting to the next solar system, much less the next galaxy, the Andromeda Galaxy, that is going to take an enormous amount of time, and the conditions of outer space are extremely hostile. Biological tissue is just not very conducive to these sorts of multi-million year trips to other solar systems or other galaxies. We really need to become digital. This notion that the future is digital is, I think, just really central to the longtermist, and more generally, the TESCREAList worldview.
Maybe it’s just worth noting that longtermism sounds really good. There’s something very refreshing about the word itself, because there is a huge amount of short termism in our society. In fact, it’s baked into our institutions. There are quarterly reports that discourage thinking about the very long term. Our election cycles are four or six years, and consequently, politicians are not going to be campaigning on promoting policies that consider the welfare of people hundreds, or thousands, or even more years into the future.
So short termism is pervasive. Myopia is the standard perspective on the future. In fact, there was a study out from probably about a decade ago, in which a scholar named Bruce Kahn surveyed people about their ability to foresee the future. He found that our vision of what is to come tends not to extend further than 10 years or so. That’s sort of the horizon that most people find to be comprehensible. Beyond that, it’s just too abstract to think clearly about.
So any shift towards thinking about the longer term future of humanity seems, at least at first glance, to be very attractive. After all, the catastrophic effects of climate change will persist for another 10,000 years or so. That’s a much longer time than civilization has so far existed. Civilization is maybe 6,000 years old. So what we’re doing right now, and what we’ve done since the Industrial Revolution, will shape the livability of our planet for many millennia, on the order of 10,000 years.
Surely one would want a kind of long-term, you might say longtermist, perspective on these things. But the longtermist ideology goes so far beyond long-term thinking. There are ideological commitments within longtermism that, I think, most people, probably the large majority of people who care about long-term thinking, would find very off-putting. One is what I gestured at a moment ago: that there’s a moral imperative to increase the human population. So long as people are on average happy, they will bring value into the universe.
On even a moderate interpretation of longtermism, we have a moral obligation to increase the amount of value in the universe as a whole. That means bigger is better. To quote William MacAskill in his book from last year called What We Owe The Future, “bigger is better.” He even writes that on this account, there is a moral case for space settlement. If bigger is better, and if the surface of earth is finite, which it is, then we need to spread beyond earth. That’s the reason he concludes that there’s a moral case for space settlement.
That is a very radical idea. Once again, if you can create a bigger population by creating digital people, by replacing the biological substrate with some kind of digital hardware, then we should do that. Ultimately, that is how we fulfill our long-term potential in the universe. That’s one sense in which these ideologies are counterintuitive and quite radical.
Another, rationalism, also might seem to have a certain appeal, because surely we want to be more rational as individuals. Actually, if you look at the understanding of rationality that is most popular within the rationalist community, it leads to all sorts of very strange conclusions. Here’s one example: Eliezer Yudkowsky, who more or less founded the rationalist community, is a transhumanist and singularitarian who participated in the extropian movement.
His views these days are very closely aligned to effective altruism and longtermism. He has a foot in just about all of these ideologies within the TESCREAL bundle. He has suggested that morality should be much more about number crunching than a lot of us would naturally suspect. For example, he published a blog post on a website called LessWrong, which he founded in 2009. That’s sort of the online epicenter of the rationalist community.
In this blog post, he asked a question: What would be worse? One individual being tortured mercilessly for 50 years straight, just endless, interminable suffering for this one individual? Or some extremely large number of individuals who experience the almost imperceptible discomfort of having an eyelash in their eye? Which of these would be worse?
Well, if you crunch the numbers, and if the number of individuals who experience this eyelash in their eye is large enough, then you should choose to have the individual being tortured for 50 years, rather than this huge number of individuals being slightly bothered by just a very small amount of discomfort in their eye. It’s just a numbers game. And so he refers to this as the heuristic of shut up and multiply.
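[Editor’s note: Here is a minimal sketch, in Python, of the “shut up and multiply” aggregation described above. The specific disutility numbers and the population size below are hypothetical placeholders chosen for illustration; they are not figures from Yudkowsky’s post.]

```python
# Illustrative "shut up and multiply" comparison: the total harm of one
# person tortured for 50 years versus a vast number of people each mildly
# irritated by an eyelash. The magnitudes here are made up for illustration.

torture_harm_per_person = 1_000_000.0   # hypothetical units of suffering
eyelash_harm_per_person = 0.000001      # hypothetical units of suffering
people_with_eyelash = 10 ** 15          # any sufficiently large number

total_torture_harm = 1 * torture_harm_per_person                    # 1e6
total_eyelash_harm = people_with_eyelash * eyelash_harm_per_person  # 1e9

# Pure aggregation picks whichever option has less total harm.
if total_torture_harm < total_eyelash_harm:
    print("Aggregation says: accept the torture")   # this branch runs here
else:
    print("Aggregation says: accept the eyelashes")
# With a large enough population, the single torture victim always "wins"
# the calculation, which is the conclusion being criticized above.
```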
Over time, he’s gotten a little less dogmatic about it. He suggested that maybe there are situations in which shut up and multiply doesn’t always hold. This sort of gives you a sense of how extreme this approach to trying to optimize our rationality can be. Some of the conclusions of this community have been really quite radical and problematic. Another example is there have been a number of individuals in the rationalist community who have also been quite sympathetic with eugenics.
So if we want to realize this tech-utopian future, then we’re going to need some number of sufficiently “intelligent” individuals in society. Consequently, if the number of people who have lower “intelligence” outbreed their more intellectually capable peers, then the average intelligence of humanity is going to fall. This is what they argue. This is a scenario called dysgenics. That is a term that goes back to the early 20th century eugenicists, many of whom were motivated or inspired by certain racist, ableist, classist, sexist, and otherwise elitist views.
Those views are still all over the place in the rationalist community. Even more, I think, this notion of eugenics and the anxiety surrounding the possibility of dysgenic pressures are still pretty pervasive. Another example would be Nick Bostrom’s paper published in 2002, in which he introduces the notion of existential risk. Existential risk is any event that would prevent us from realizing this techno-utopian posthuman future among the stars, full of astronomical amounts of value.
He lists a number of existential risk scenarios. Some of them are really quite obvious, like a thermonuclear war. Maybe if the U.S., Russia, India, Pakistan, and all the other nuclear nations were involved in an all-out thermonuclear exchange, then the outcome could potentially be human extinction. But there are also various survivable scenarios that could preclude the realization of this tech-utopian world in the future. He explicitly identifies one of them as dysgenic pressures.
Again, this is where so-called less intelligent people outbreed their more intelligent peers. Consequently, humanity becomes insufficiently smart to develop the technologies needed to get us to utopia. Perhaps that might include artificial general intelligence. And as a result, this utopia is never realized, and that would be an existential catastrophe. This gives you a sense of how radical and extreme some of the views are in this community.
KH: The idea of human value, and maximizing that value, is a prominent concept in longtermist ideology. In a 2003 paper called Astronomical Waste — a document that is foundational to longtermism — Nick Bostrom put forward the idea that any delay in space colonization is fundamentally harmful, because such delays would reduce the number of potential happy humans in the distant future. Bostrom wrote that “the potential for one hundred trillion potential human beings is lost for every second of postponement of colonization of our supercluster.” Last year, Elon Musk retweeted a post that called Astronomical Waste “Likely the most important paper ever written.” This understanding of human value as a numbers game, the expansion of which is dependent on space colonization, allows longtermists to dismiss many social concerns.
I think many of us would take issue with the idea that because something has value, our priority should be the mass production and mass proliferation of that thing, but how does one even define human value? This was one of the topics I found most confusing, as I dug into longtermist ideas.
ET: What is the value that they’re referring to when they talk about maximizing value? It depends on who you ask. Some would say that what we need to do as a species right now is first and foremost, address the problem of mitigating existential risk. Once we do that, we end up in a state of existential security. That gives us some breathing room. We figure out how to eliminate the threat of thermonuclear war, to develop nanotechnology, or different kinds of synthetic biology in a way that is not going to threaten our continued survival.
Once we have obtained existential security, then there is this epoch, this period that they call the long reflection. This could be centuries or millennia, when we all get together, all humans around the world, and just focus on trying to solve some of the perennial problems in philosophy: What do we value? What should we value? What is this thing that we should try to maximize in the future? Is it happiness?
One of the main theories of value within philosophy is called hedonism. This states that the only intrinsically valuable thing in the whole universe is happiness or pleasure. There are other theories that say no, it’s actually something like satisfying desires, and there are still other theories that would say it’s things like knowledge, and friendship, and science, and the arts, in addition to happiness.
How exactly we understand value is a bit orthogonal to the longtermist argument, because what they will say is that the future could be huge. If we colonize space and we create all of these digital people, the future could be enormous. That means that whatever it is you value, there could be a whole lot more of it in the future. So the real key idea is this notion of value maximization. You could ask the question, “What is the appropriate response to intrinsic value?” Whatever it is that has intrinsic value, what’s the right way to respond to that?
The longtermists would say, “Maximize it.” If you value walks on the beach, then two walks on the beach is going to be twice as good as one. If you value great works of art, then 100 great works of art is going to be twice as good as just 50 great works of art. Whatever it is you value, there should be more of it. This idea that value should be maximized historically arose pretty much around the same time as capitalism.
It’s associated with a particular ethical theory called utilitarianism. I don’t think it’s a coincidence that utilitarianism and capitalism arose at the same time, because utilitarianism is really a very quantitative way of thinking about morality. The fundamental precept is that value should be maximized. Consequently, there are all sorts of parallels between it and capitalism. You could think of utilitarianism as kind of a branch of economics: whereas capitalists are all about maximizing the bottom line, which is profit, utilitarians take the bottom line that should be maximized to be just value in a more general and impersonal sense.
That’s really the key idea. It’s worth noting that there are other answers to the question, “What is the appropriate response to value?” You could say, “Well actually, what you should do is when presented with something that is intrinsically valuable, you should treasure it or cherish it, love it, protect it, preserve it, sustain it.” There’s any number of possible answers here that don’t involve just increasing the total number of instances of that thing in the universe.
I think if you ask a lot of longtermists, they’ll say that we should probably embrace some kind of pluralistic view of value. We don’t really know what value is. It probably includes happiness. The more instances of happiness there are in the future, the better the universe becomes as a whole. But ultimately, this is something we can decide during the long reflection.
By the way, this notion of the long reflection I find to be a complete non-starter. When you think about all people around the world just sort of hitting pause on everything, sitting around, joining hands for a couple centuries or millennia to solve these perennial philosophical problems, to figure out what value is, that seems just absolutely entirely implausible.
Nonetheless, this is part of the longtermist blueprint for the future. So yeah, the key idea is that whatever it is we value, there just needs to be more of it.
KH: The idea of massive numbers of future happy people, digital and non-digital, as a maximization of human value, is one that I have heard a lot, from people whose worldviews fall within the TESCREAL bundle, and it’s a concept I find quite laughable. Because if I were to ask all of you listening or reading right now, “What is happiness,” I would get a lot of wildly different answers. I can’t even answer the question, “What is happiness?” Is it how I felt on my wedding day? Is it how I feel when I eat an edible, or see a Nazi get punched? Happiness is not a concrete concept that can be measured, so how can it be a defining feature of longtermism’s notion of human value, and how can it be effectively maximized?
ET: I think one thing that’s completely missing from the longtermist, or I’d say the TESCREAL literature more generally, is any kind of philosophically serious analysis of the meaning of life. What makes life meaningful? The focus is really just maximizing value in the universe as a whole, but you could potentially maximize value while rendering lives meaningless. For me, understanding what makes a life meaningful is just much more important than maximizing happiness.
Also, I think you’re totally right that this sort of quantitative notion of happiness is really bizarre. Utilitarians have this term for a unit of happiness. It’s called a util, and that comes from the word utility. Utility is more or less interchangeable with value. You want to maximize value, that means you want to maximize utility. Consequently, the more utils there are in the universe, the better the universe becomes. What exactly a util is, I have no idea.
I have no idea how many utils I introduced into the universe yesterday. I don’t know if I’ve created more utils today than a week ago. It’s all just very strange, and it’s trying to understand this extraordinarily complex and rich domain, which is morality, in a kind of quantitative, procrustean way. I think when a lot of people understand that this is the kind of philosophical foundations of a large part of the TESCREAL worldview, they will immediately recoil.
KH: The concept of the util has me wondering how many utils I introduce by eating an edible, but we won’t dwell on that.
Anyway, the TESCREAL bundle is a relatively new concept. I personally find the acronym quite useful, when exploring these ideas. But Émile and Timnit Gebru have received some fierce criticism for their ideas. In June, PhD student Eli Sennesh and James J. Hughes, who is the Executive Director of the Institute for Ethics and Emerging Technologies, published a Medium post called Conspiracy Theories, Left Futurism, and the Attack on TESCREAL, which levels some harsh critiques of Émile and Timnit Gebru’s work, including the idea that the ideologies in the TESCREAL acronym can be bundled together at all. I had a lot of issues with this piece, which we don’t have time to dive into today, but I wanted to give Émile a chance to respond to some of the piece’s criticisms of their work and ideas.
ET: James Hughes and another individual published this Medium article, in which they argued that the TESCREAL bundle is essentially a conspiracy theory. I do not find their arguments to be very compelling at all. In fact, I can’t even recall what the central thrust of their argument is exactly. This notion that TESCREALism is just conspiratorial thinking is completely untenable. It’s not a conspiracy theory if there’s a huge amount of evidence supporting it.
There is an enormous amount of evidence that corroborates this notion that there is this bundle of ideologies that really do belong together. They do. They constitute a kind of single entity or single organism, extending from the late 1980s up to the present, and that this bundle is very influential within Silicon Valley. One thing I should point out is these movements and these ideologies, they share a whole lot of the same ideological real estate.
Historically, they emerged one out of the other. You could think about this as a suburban sprawl. It started with transhumanism. Extropianism was the first organized transhumanist movement. Many extropians then went on to participate in singularitarianism. The founder of modern cosmism himself was an extropian transhumanist. Then there’s Nick Bostrom, more or less the founder of longtermism, who was hugely influential among rationalists and EAs, and who was also one of the original transhumanists, a participant in the extropian movement.
These ideologies and the movements are overlapping and interconnected in all sorts of ways. In fact, one of the individuals who retweeted the criticism, the article critiquing the TESCREAL concept by James Hughes, was an individual named Anders Sandberg. He was, as far as I can tell, endorsing this objection that TESCREALism is just a conspiracy theory. I found this to be quite amusing, because Anders Sandberg is a transhumanist who participated in the extropian movement, who’s written about the singularity, and in fact hopes that the singularity will come about.
He’s a singularitarian, essentially. He has been very closely associated with the founder of cosmism, Ben Goertzel. He participates in the rationalist community, is hugely influential among effective altruists, and is a longtermist. He exemplifies all of these different ideologies, maybe with the exception of cosmism, at least not explicitly. Although, again, the central idea, the vision of cosmism, is very much alive within longtermism.
So there’s an individual who is basically at the very heart of the TESCREAL movement, who’s retweeting this claim that TESCREALism is just a conspiracy theory. I feel like this gestures at the way someone should take this criticism of the term and concept that Timnit Gebru and I came up with: with a bit of a chuckle.
KH: The article also argues that some of these ideologies have progressive wings or progressive underpinnings, and that we disregard or wrong those progressive adherents, who could be our allies, when we engage in some of the generalizations that they claim are inherent in the bundling of TESCREAL. This is a terrible argument on its face, because there have always been oppressive and reactionary ideas that have spread on both the left and right, even if they were concentrated on the right, and that remains true today. Transphobic ideas are a prime example, in our time, as those ideas are primarily pushed by the right, but can also be found among liberals and people who identify as leftists.
ET: It’s also worth noting that eugenics, throughout the 20th century, is something we associate with the fascists, with Nazi Germany, but it was hugely popular among progressives as well as fascists, across the whole political spectrum. There were individuals along the entire political spectrum who were all gung ho about eugenics. Just because something is progressive doesn’t mean that it’s not problematic.
Also, I would say that the genuinely progressive wings of the transhumanist movement (the extropian movement is an exception, because that’s very libertarian) are just not nearly as influential. And so, this notion of the TESCREAL bundle absolutely makes room for all sorts of nuances. I’m not saying that all transhumanists are TESCREALists. I’m not saying that all EAs are TESCREALists. There are plenty of EAs who think longtermism is nuts, and want nothing to do with longtermism.
They just want to focus on eliminating factory farming and alleviating global poverty. But there’s a growing and very powerful part of EA which is longtermist. This idea of the TESCREAL bundle absolutely makes plenty of room for variation within the communities corresponding to each letter in the acronym. So that was another reason I found the article to kind of miss the target, because my and Gebru’s claim is not that every transhumanist and every singularitarian is a TESCREAList.
It’s just that the most powerful figures within these communities are TESCREALists. I think that’s really the key idea.
KH: You may be hearing all of this and thinking, “Well, I don’t have any sway in Silicon Valley, so what the hell am I supposed to do about all of this?” If you’re feeling that way, I want you to stop and think about how most people would feel about longtermism, as a concept, if they had any notion of what we’ve discussed here today. Simply shining a light on these ideas is an important start. We have to know our enemies, and not enough people know or understand what we are up against when it comes to longtermism or TESCREAL.
ET: I think it’s really important for people to understand what this ideology is, how influential it is, and why it could potentially be dangerous. This is what I’ve been writing about, one of the things I’ve been writing about, at least for the past year, and hoping to just alert the public that there’s this bundle of ideologies out there and behind the scenes, it has become massively influential. It’s infiltrating the UN, it’s pervasive in Silicon Valley.
I think, for all of the talk among TESCREALists of existential risks, including risks arising from artificial general intelligence, there is a really good argument to make that one of the most significant threats facing us is the TESCREAL bundle itself. After all, you have hugely influential people in this TESCREAL community, like Eliezer Yudkowsky, who is writing in major publications like Time magazine that we should risk thermonuclear war to prevent the AGI apocalypse, that we should utilize military force to strike data centers that could be used by nations to create an artificial general intelligence. This is really incredibly dangerous stuff, and it’s exactly what I was worried about several years ago. And to my horror, we’re now in a situation where my fear, that extreme violence might end up being viewed as justified in order to prevent an existential catastrophe, has come true. My worries have been validated. And I think that’s a really scary thing.
KH: Émile also has an upcoming book that will be released in July, which I’m really excited about.
ET: So the upcoming book is called Human Extinction: A History of the Science and Ethics of Annihilation, and it basically traces the history of thinking about human extinction throughout the Western tradition, from the ancient Greeks, all the way up to the present.
Then it also provides really the first comprehensive analysis of the ethics of human extinction. And so this is out July 14th. You could find it on Amazon or the website of the publisher, which is Routledge. It’s a bit pricey, but hopefully if you do buy it, it’ll be worth the money.
KH: I got so much out of this conversation, and I hope our readers and listeners have as well. For me, the bottom line is that, in longtermism, we have an ideology where suffering, mass death, and even extreme acts of violence, can all be deemed acceptable, if these actions support the project of extending the light of consciousness, through space colonization and the development of AGI. It’s important to understand just how disposable you and I, and everyone suffering under white supremacy, imperialism and capitalism are according to this ideology. And while longtermism may be relatively new to many of us, the truth is, its adherents have been working for years to infiltrate educational institutions, policy-making organizations, and government structures, so we are talking about a social and political project that is well underway. As people who would oppose this work, we have a lot of catching up to do.
I also think Émile’s point about transhumanism having, at one time, filled a religion-sized hole in their life is an important one for activists to consider, because a whole lot of people are walking around with a religion-sized hole in their worldview. We are living through uncertain, catastrophic times, and as Mike Davis described in Planet of Slums, people can become more vulnerable to cults and hyper-religious movements during times of collapse. We are social animals, and as such, we are always searching for leadership, for someone who is bigger, stronger and faster, who has a plan. Some people find those comforts within the realm of faith, but many of us are not religious, and many more do not subscribe to literal interpretations of their faiths, and are therefore still searching for scientific and philosophical answers. People are extremely vulnerable in these times, and if we, as organizers and activists, do not attempt to fill the religion-sized hole in people’s lives with meaningful pursuits and ideas, destructive and dehumanizing ideas will fill that empty space. We have to welcome people into movements that address their existential, epistemic and relational needs, so that they are less likely to fall victim to cultish ideas and ideologies.
I realize that’s easier said than done, but in upcoming episodes we are going to talk a bit about what kinds of stories and ideas will be helpful to us in countering the religiosity of the new space race and the race for AGI. For now, I would just like to remind us of Ruth Wilson Gilmore’s words, which are forever in my heart: “where life is precious, life is precious.” That means that here and now, life is precious if we, as human beings, are precious to each other. That is a decision that we make every day, and I consider it a political commitment. We do not need to maximize ourselves by having as many children as possible or by creating infinite hordes of digital people, as the pronatalists and AI-obsessed tech bros insist. We need to cherish one another, and all of the beings and things that make our existence possible. That’s what it means to fight for each other, and for the future. Whatever human value might be, its preservation will not be realized in some sci-fi fantasyland spread across the galaxy. The fight for everything that is worth preserving about who and what we are is happening here and now, as we combat inequality, extraction, militarism, and every other driver of climate chaos and dehumanization in our times. We are that fight, if we cherish each other enough to act.
I want to thank Émile P. Torres for joining me today. I learned so much, and I am so grateful for this conversation. Don’t forget to check out Émile’s upcoming book, Human Extinction: A History of the Science and Ethics of Annihilation. And do be sure to join us for our next couple of episodes, in which we will discuss artificial intelligence and the kind of storytelling we’ll need to wage the ideological battles ahead of us.
I also want to thank our listeners for joining us today. And remember, our best defense against cynicism is to do good, and to remember that the good we do matters. Until next time, I’ll see you in the streets.
Show Notes
- Don’t forget to check out Émile’s book, Human Extinction: A History of the Science and Ethics of Annihilation.
Referenced:
- Artifice and Intelligence (Center on Privacy & Technology)
- Towards artificial general intelligence via a multimodal foundation model by Nanyi Fei, Zhiwu Lu, Yizhao Gao, Guoxing Yang, Yuqi Huo, Jingyuan Wen, Haoyu Lu, Ruihua Song, Xin Gao, Tao Xiang, Hao Sun & Ji-Rong Wen
- How “Longtermism” is Shaping Foreign Policy | Will MacAskill by Mark Leon Goldberg
- Why the Future Doesn’t Need Us by Bill Joy
- Humiliation Machine: 10 Broken Promises From Elon Musk by Thomas Germain
- Elon Musk goes low against Zuckerberg as Twitter-Threads spat intensifies by Martin Pengelly
- No, Elon and Jack are not “competitors.” They’re collaborating by Dave Troy
- Understanding TESCREAL with Dr. Timnit Gebru and Émile Torres by Dave Troy
- Astronomical Waste: The Opportunity Cost of Delayed Technological Development by Nick Bostrom
- Existential risks: analyzing human extinction scenarios and related hazards by Nick Bostrom
- Conspiracy Theories, Left Futurism, and the Attack on TESCREAL by James J. Hughes PhD and Eli Sennesh