Does Your Assessment of AI Risk Depend on Your Answer to Fermi's Paradox?
A meditation on technological divinity...
I.
Some people are certain AI will kill us all. Some people are not certain, but nevertheless worry about the possibility. Some people are just looking for the resolution to the cliff-hanger I ended my last post on. To all of these people I offer hope. Hope from an unlikely source.
My last post was about this sort of doom. It was a review of If Anyone Builds It, Everyone Dies (IABIED). The book contends we’re close to building artificial superintelligence (ASI). This ASI will view humanity as an impediment to whatever alien goals it inevitably ends up pursuing. The end result is a superhuman entity who can trivially dispose of us, and has no compunction about doing just that. Thus, “Everyone dies”. So far this is well-trodden doomer territory; it’s what happens next (according to the book) that’s interesting.
The world doesn’t end when you die. But it doesn’t last much longer. The matter of Earth, along with all the other solid planets, is converted into factories, solar panels, power generators, computers—and probes, sent out to other stars and galaxies.
The distant stars and planets will get repurposed, too. Someday, distant alien life forms will also die, if their star is eaten by the thing that ate Earth before they have a chance to build a civilization of their own.
And if the distant aliens were able to solve their own version of the AI alignment problem, and build superintelligences that shared their values? Then in time their probes will run into a wall of galaxies already claimed by the thing that ate Earth.
These paragraphs represent an answer to Fermi’s Paradox. Fermi is supposed to have asked, “Where is everybody?” IABIED answers, “They are either out there waiting to be eaten by our eventual superintelligence, or they are in possession of a well-aligned ASI and protected from being consumed.” This is a strange position to take. It imagines that we are not alone, that there is no Great Filter, but also that our civilization is early enough that we still have a chance of constructing a superintelligence rather than being devoured by one. In other words, the authors seem to be leaving out other possibilities they should have considered:
1- We are alone. Life is super rare and we got lucky. We’ll talk about this option, but it’s one that IABIED appears to reject. They assert that “Someday, distant alien life forms will also die”. Perhaps they meant to say “Someday, if distant life forms exist, they will also die”, and I’m reading too much into their assertion. But the plain text of the book indicates that they believe we are not alone. Which leads us to the next possibility…
2- We are not alone, but we’re among the first. Soon the galaxies will be the domain of warring superintelligences, but it’s not that way yet. I’ll explain why I think this is improbable, but this appears to be the IABIED position.
3- We’re not alone, and we’re not even close to being first. Superintelligences have already spread across the galaxy. Given that our star has not yet been eaten, as per IABIED, the ASI in charge of this portion of the galaxy must have been aligned such that it decided not to destroy us.1
II.
I think we can discuss possibilities one and two at the same time, particularly since IABIED seems to already grant that we’re not alone. On this we’re very much in agreement. I’ve written rather extensively on Fermi’s Paradox in the past. (See for example here and here, just as a start.) And there are some nearly insuperable challenges to possibility one. But those same challenges also apply to possibility two.
Our best estimate is that three quarters of terrestrial planets are older than Earth, and that the average such planet is approximately two billion years older than Earth. A two-billion-year head start is a lot of time to develop intelligent life that goes on to develop an ASI that eats our star. And remember, two billion years is the average head start these other planets have. It could be a lot greater.2
I know there are a lot of people who believe that we most likely are alone (the so-called “rare earth” solution to the paradox). I disagree, but let’s set that disagreement, and, if necessary, your own beliefs about the solution to the paradox, aside for the moment. I have no illusions about changing anyone’s mind about the paradox. Rather, I ask you to just consider IABIED’s own stated premises. Based on what the authors say, they believe in possibility two. I consider this unlikely for the reasons mentioned above. I think if we take their premises seriously, possibility three is far more likely. It’s unclear if they ever considered this possibility, but it’s the one that’s really interesting.
III.
At the end of my review of IABIED I wrote:
One of the reasons why I’m less concerned about AI than Yudkowsky and Soares [the authors of IABIED] is that I believe that there is a God. And I would actually contend that they should as well, just based on their priors. But getting there is going to require a pretty deep dive, so you’ll have to wait until the next post to see how I pull it off.
Welcome to the bottom of that dive. Possibility three imagines that we do in fact live under the rule of a benevolent deity. The whole reason IABIED is so worried about ASI is that it will effectively have god-like abilities. If there are aliens out there, and they’ve got a two-billion-year (or longer) head start on us, they should have already developed a god-like ASI. If that ASI were poorly aligned, we should already have been swallowed. The fact that we’re still here leaves only the option of a benevolent (or at least an apathetic) ASI.
I’m not worried about AI doom because I believe that God exists, and whatever else His plans might be, they certainly don’t include complete human extinction from an AI.3 My more controversial claim is that, based on the arguments made in IABIED, Yudkowsky and Soares should also believe in the protection of a god-like ASI. Moreover, given their own arguments about the vast danger to all of existence from a new unaligned ASI, the current ASI “ruler” of this area of space should be very interested in protecting us from that as well.
There is one other possibility, one I’ve saved for the end, one that IABIED doesn’t cover. It’s possible that developing an ASI doesn’t create something which expands out until it conquers the galaxy, but rather something which destroys its host civilization without a trace. This possibility is equally fatal to the IABIED project, because it means that well-aligned ASIs are effectively impossible. It means that every civilization that created AI has been destroyed by it, and that not even the AI survived.4 If so, then we are destined to follow the same path, and meet with the same fate.
In the end I’m not sure how much hope I’ve actually offered. My hope comes from my faith in an actual God. For those who have rejected that hope for one reason or another, perhaps this different form of hope, based on an ersatz deity emerging from the assumptions of IABIED, will be of some comfort. Because at this point, I think we need all the hope we can get.
If one were looking for evidence of this ersatz deity, I might point them at the recent UFO sightings. I’m not personally a huge UFO believer, but despite that I have to admit that the timing is interesting. In some respects this entire blog is built around interesting timing, and interesting times. If the interestingness of the times is of interest to you, consider subscribing, or sharing, or commenting, or writing a brutal critique of everything I’ve ever said.
1. There are obviously other possibilities, like “It’s impossible to build a superintelligence” or “We’re living in a simulation.” But I’m limiting it to possibilities that fit into IABIED’s core assumptions.
2. I’ve already mentioned the Great Filter, and I know Hanson has moved on to a “Hard Steps” approach. However, despite reading everything he’s written on the subject, I never came across a compelling explanation for why we should have surmounted all the Hard Steps so much more quickly than life on similar planets which are far older than Earth. At least no explanation which didn’t rely on a variant of the Anthropic Principle. Though I admit it’s entirely possible I’m missing something.
3. People who’ve read drafts of this piece argue that He has nevertheless allowed lots of bad things to happen, which is true, but trying to also cover the entire subject of theodicy in this discussion would be insane, or at the very least prohibitive.
4. You’ll note that I changed from ASI to AI. AI could end up destroying us without ever needing to reach ASI. And any civilization advanced enough to develop AI is also probably advanced enough to develop nuclear weapons, and to engineer bioweapons, and a whole host of other things.


