
"They turn a 99% probability of something happening into 100% probability, and take no precautions for the 1% chance that it won’t happen. As bad as this is, it only covers the predictions people have thought to make. There’s a whole universe of potential events which get entirely ignored by superforecasters."

This seems a bit off. If you know there's a 1% chance of something bad happening, it is prudent to take some steps to prevent it or deal with it if it does happen. If that chance drops to 0.001%, that should rationally reduce how much prudence you deploy against it. Likewise, if you discover the odds are actually 25%, that should dramatically increase your prudence. You might have a lightning rod on your house even though the odds of it helping you are really low, but you don't have a system to protect you from a plane crashing into your roof in a near-90-degree straight-down descent. (Actually you probably don't have a lightning rod on your house; they are not often installed these days because the odds are too low. But I bet you can be more easily talked into a lightning-rod investment than into plane-proofing your roof.)
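The prudence-scales-with-probability point is just expected-value arithmetic. A minimal sketch, with a hypothetical `worth_it` helper and made-up dollar figures purely for illustration:

```python
def worth_it(p_event, loss, cost):
    """Simple rule of thumb: take a precaution when its cost is
    below the expected loss it prevents."""
    return cost < p_event * loss

HOUSE = 400_000  # assumed home value, purely illustrative

# 1% annual fire risk vs. a $1,500 precaution: expected loss $4,000
print(worth_it(0.01, HOUSE, 1_500))      # True: prudence warranted

# Same precaution against a 0.001% risk: expected loss only $4
print(worth_it(0.00001, HOUSE, 1_500))   # False: not worth it
```

The same inequality explains the lightning rod versus plane-proofing contrast: the precaution's cost stays roughly fixed while the probability, and hence the expected loss, collapses.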

It doesn't really matter how you know this. If the superforecaster is able to produce a more accurate risk profile, that is no different from improving your estimate of the risk by building very sensitive engineering models updated with lots of data from tests you performed.

Is the difference here between a better forecaster and an Oracle? The Oracle would be defined as someone who isn't refining the odds calculation but actually saying what will happen. If you know for certain that the 1% risk your home will burn down next year is actually 0%, you can cancel your fire insurance even if it only costs you $1.

Oracles differ from refined probability estimates in that they are probably near-supernatural. If they are just better oddsmakers, well, maybe no big deal? An oracle knows the outcome of the coin flip; she doesn't actually change its 50-50 odds. If oracles are possible, they are not simply doing probability but something else.

"To be fair, prediction markets are something of a different beast, but still nevertheless closely aligned with the discipline of superforecasting."

Well, to be fair, prediction markets seem perhaps a bit too open to Black Swans. At this moment, at least one such market gives Jesus Christ a 3% chance of returning in 2025 (https://polymarket.com/event/will-jesus-christ-return-in-2025). I personally think Jordan Peterson, after his latest 'debate' stunt, has a 15% chance of declaring himself Christ in 2025, so I'm going to have to think about how I can hedge these two against each other.

But here's an idea:

Let's take some AIs and have them back-trade prediction markets. By that I mean have them pretend to buy and sell contracts using only information that was available at past points in time. This process can be repeated over and over even though we have a finite amount of information and a finite past. Still, the idea would be to build wickedly good AI-driven predictors. Note that you don't need historical prediction-market prices: you can have AIs trade against each other over the outcome of, say, the Civil War just by reading old newspaper accounts each day.
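The one hard rule in this setup is that no future information leaks into a simulated trading day. A minimal sketch of that constraint, with an invented `Document` record, invented newspaper snippets, and a deliberately silly keyword-counting "agent" standing in for the real thing:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Document:
    published: date
    text: str

# Toy archive: two fabricated dispatches around Gettysburg.
ARCHIVE = [
    Document(date(1863, 7, 1), "Heavy fighting reported near Gettysburg"),
    Document(date(1863, 7, 4), "Lee's army withdraws south of the Potomac"),
]

def visible(archive, today):
    """Enforce the back-trading rule: agents only see documents
    published on or before the simulated trading day."""
    return [d for d in archive if d.published <= today]

def naive_agent(docs):
    """Toy forecaster: price the 'Union wins' contract from keyword
    counts in the visible news. Purely illustrative."""
    text = " ".join(d.text for d in docs).lower()
    score = text.count("withdraws") - text.count("fighting")
    return 0.5 + 0.1 * score

# On July 2 the agent sees only the first dispatch, so its estimate
# cannot be contaminated by the July 4 news of Lee's withdrawal.
print(naive_agent(visible(ARCHIVE, date(1863, 7, 2))))
```

Everything interesting lives in `visible`: however sophisticated the agents become, the filter on `published` dates is what makes the repeated replays honest.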

Now here's the Black Swan angle. Once the agents get very good, let them essentially keep a trader's diary. The purpose is to articulate what is behind their strategy.

Here's what we'd be looking for. Agents that short the stock market before 9/11 because they are picking up patterns of overbought stocks would be the first type of superforecaster you are concerned about: really good at refining predictions from 'normal' data, when it is probably the Black Swans that are more important. But suppose you got an AI that shorts the market before 9/11 because it has a hunch that something major is going to happen. Remember, these trades happen in a 'holodeck' where the AIs are only allowed to process information that was available in past time periods. If you had an AI that picked up on big bad things like 9/11, Trump's election, or the release of The Rise of Skywalker from backdated information, you'd have something interesting, if it could do it consistently.

If we did this a lot, a few billion times at least, we will have AIs that predict Black Swans correctly for the simple reason that, out of billions, a few will predict a 9/11 for literally every day on which one could happen; so, just like a lucky managed mutual fund, some will seem to predict things they shouldn't be able to predict. The question is whether we can reliably find an oracle: an AI that has found a system which predicts known Black Swans more reliably than the laws of chance would admit. If we can, I'd say that would raise even more profound questions than the risks of over-trusting superforecasters.
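The lucky-mutual-fund effect is easy to see in a toy simulation. Here every "agent" is pure noise, randomly shouting "crash tomorrow" about once a year; the agent counts and alarm rate are made up for illustration:

```python
import random

random.seed(0)

N_AGENTS = 1_000_000   # stand-in for the billions of backtested AIs
P_ALARM = 1 / 365      # each noise-agent randomly calls a crash ~once a year

# How many purely random agents happen to be shouting on the one day
# before the Black Swan? Expected: roughly N_AGENTS / 365.
called_once = sum(random.random() < P_ALARM for _ in range(N_AGENTS))

# How many also call a second, independent Black Swan years later?
# Per agent that chance is P_ALARM squared, so only a handful survive.
called_twice = sum(random.random() < P_ALARM ** 2 for _ in range(N_AGENTS))

print(called_once, called_twice)
```

Thousands of noise-agents "call" the first event, so a single hit proves nothing; the filter for a genuine oracle is repeated hits at a rate the second number shows chance cannot sustain.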
