If Anyone Builds It, Everyone Dies - Yudkowsky at his Yudkowskiest
Don’t hold back guys, tell us how you really feel.
If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All
By: Eliezer Yudkowsky and Nate Soares
Published: 2025
272 Pages
Briefly, what is this book about?
This book makes the AI doomer case at its most extreme. It asserts that if we build artificial superintelligence (ASI) then that ASI will certainly kill all of humanity.
Their argument in brief: the ASI will have goals. These goals are very unlikely to be in alignment with humanity’s goals. This will bring humanity and the ASI into conflict over resources. Since the ASI will surpass us in every respect it will have no reason to negotiate with us. Its superhuman abilities will also leave us unable to stop it. Taken together this will leave the ASI with no reason to keep us around and many reasons to eliminate us—thus the “Everyone Dies” part of the title.
What’s the author’s angle?
Yudkowsky is the ultimate AI doomer. No one is more vocally worried about misaligned ASI than he. Soares is Robin to Yudkowsky’s Batman.1
Who should read this book?
For those familiar with the argument I don’t think the book covers much in the way of new territory.
For those unfamiliar with the argument I might recommend Superintelligence by Nick Bostrom instead. It makes the same points without being quite so tendentious.
What Black Swans does it reveal?
I don’t know that it “reveals” any exactly. More that it makes the case for treating danger from ASI as less of a Black Swan and more as an inevitability.
Specific thoughts: The parable of the alchemists and the unfairness of life
The authors start each of their chapters with a story of one form or another. There’s one in particular I’d like to emphasize. They used it to begin Chapter 11. It was designed to illustrate the foolishness of pursuing research that could lead to ASI.
The story takes place in a medieval kingdom, at a time when the secrets of alchemy were still being pursued, and it was rumored that some alchemists had managed to turn lead into gold. The ruler of this kingdom has heard these rumors and he wishes to find out once and for all whether they're true. In pursuit of this goal the king offers vast wealth to any alchemist who presents himself at the royal palace and successfully demonstrates this ability. Should an alchemist turn lead into gold, the king promises to make not only the alchemist wealthy, but everyone in his village as well. However, there's a catch. If, after some reasonable amount of time, the alchemist fails to turn lead into gold, then he and everyone in his village will be killed.
Of course we know that lead cannot be turned into gold.2 And even the citizens of the medieval kingdom believe that it's going to be very, very difficult. Given this, and the terrible consequences which attach to failure, it's clear that no alchemist, however skilled he thinks he is, should journey to the castle and attempt this transmutation. Still, the vast wealth makes it very tempting. The vast wealth also makes it very easy to convince a talented alchemist that he's close to figuring it out. The story in the book features just such a talented alchemist, who wishes to journey to the palace. His wiser sister tells him this would be very foolish, and that the only sensible course of action would be for him to remain at home. And that, furthermore, the two of them should, with all possible speed, convince all of the other alchemists in the village to stay at home as well.
In the book, this is where the story ends. You can probably identify all of the elements acting as stand-ins for our own situation. The alchemists are AI companies. Journeying to the palace is pushing forward on the development of ASI. The sister represents the authors of this book. Etc. The story is fine as far as it goes, and it gets their point across, but it understates the difficulties. I think that by adding some details and changing others we can make the story much closer to the real-world situation described by the authors.
The king isn’t going to kill just the village that sent the alchemist; he’s going to kill everyone everywhere if any alchemist tries and fails. So just preventing the alchemists in our own village from going would not be sufficient to eliminate the danger; we would need to keep all alchemists everywhere from going.
How are we supposed to do that? We have to threaten war on any village anywhere that even looks like it’s going to send an alchemist.
But this is not all. There are rumors that some alchemists left a while ago, and they’re just about to reach the castle. And we’re too far away to stop them.3
Also, persuading the villagers of the danger is going to be very difficult. There’s a whole group of people who claim the King can’t possibly be serious when he says he’s going to kill everyone. And there’s another group of people who have been claiming for years that the King is going to kill everyone unless we stop using so much wood.4 And the old folks still remember decades ago when people were saying that everyone was going to die because we were going to run out of food.5 Finally there’s the largest group of all, the ones who don’t care about alchemy at all (but they’re sick of the people trying to get them to stop burning wood).
Once you add in those elements you begin to see the difficulty of truly stopping all alchemical experiments… Which is not to say that Yudkowsky and company should give up, but you can see how daunting the task is. Obviously I’m not a mind reader, but I suspect they left out the elements I added because they didn’t want to portray the situation as entirely futile. But if we accept their assumptions it easily could be. A fact which they allude to at the start of the book.
In the book’s intro the authors take aim at complacency. In the parable, this is represented by the people who don’t care about alchemy and figure that life in their medieval village will continue as it always has. The authors assume a great many people might reject the ideas of this book because it sounds too dramatic, too quick, and too all-encompassing—and beyond all that too ridiculous. To refute that idea they offer up some examples of rapid and dramatic changes from the historical record. One example in particular jumped out at me:
Adopting a historical perspective can help us appreciate what is so hard to see from the perspective of our own short lifespans: Nature permits disruption. Nature permits calamity. Nature permits the world to never be the same again.
Once upon a time, 2.5 billion years ago, an event occurred that biologists call the Oxygen Catastrophe: A new life form learned to use the energy of sunlight to strip valuable carbon out of air. That life form exhaled a dangerously toxic and reactive chemical as waste, poisonous to most existing life: a chemical we now call “oxygen.” It began to build up in the atmosphere. Most life—including most of the bacteria exhaling that oxygen—could not handle its reactivity, and died. A lucky few lines of cells adapted, and eventually evolved into organisms that use oxygen as fuel. But things never went back to the old normal. The world was never the same again.
They assume we are facing a similar catastrophe. They also assume we have a chance to escape this catastrophe. That we can avoid the fate of the bacteria, and not be destroyed by our own creation. But also, as they clearly point out, nature doesn’t care. There’s nothing in the rules of nature preventing the development of a species smart enough to build its own replacement, but too dumb to see that it’s going to be replaced.
Of course when you’re talking about the eventual fate of all humanity, you can easily end up in some interesting philosophical places. One of the reasons why I’m less concerned about AI than Yudkowsky and Soares is that I believe that there’s a God. And I would actually contend that they should as well, just based on their priors. But getting there is going to require a pretty deep dive, so you’ll have to wait until the next post to see how I pull it off.
If this book is correct we’re only a few years from being completely wiped out as a species. If that’s the case you should almost certainly be doing something other than reading this blog (or any blog). And perhaps that’s secretly why I’ve been downplaying AI risk all this time. It has nothing to do with faith and everything to do with protecting my subscriber numbers. With that in mind it should hardly be surprising for me to ask any who haven’t already subscribed to do so now.
This is probably unfair to Soares, but I couldn’t resist.
At least not by any means available to a medieval kingdom. The Large Hadron Collider has turned very small amounts of lead into gold.
This represents the idea that ASI may be 2-3 years away. See for instance the AI-2027 site. Should ASI really be this close, we’re too late to stop it.
I.e. global warming alarmists.
Paul Ehrlich’s book The Population Bomb, but also many others.