More on Foreseeing Existential Risks
But Fairchy requires more from its measure of existential risks than refuge markets1 alone can deliver. It needs to measure all significant existential risks, not just the ones that I am thinking of now.
Add more metrics later? Not so simple.
It's tempting to answer "We'll add those things later when we think of them". But who counts as "we"2? Once the system starts, there will be all sorts of players in it. It is not likely that they would all simultaneously agree to a redesign.
You might suppose that an existential risk would be so universally appreciated that everyone would agree to measure it well. History suggests otherwise. For example, wherever you stand on global warming, you can agree that one side or the other resists honest measurement of that existential risk.
What sort of mechanism?
Since we will need to add new existential risk metrics but can't expect to just all agree, we need a mechanism for adding them. This mechanism must have these properties:
- Vested interests in seeing the risk as large or small must not affect the outcome.
- It should measure the risks with reasonable information economy: it should neither starve for information nor spend more than the expected value of perfect information (EVPI) measuring them.
- It must be flexible enough to "see" new risks; beyond that, it should aggregate understanding about new risks. This suggests a decision market solution.
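The information-economy criterion can be made concrete. Here is a toy sketch, with all numbers hypothetical: EVPI is the gap between the best expected outcome achievable with perfect information and the best achievable without it, so a risk-measurement mechanism should never cost more than that gap.

```python
# Toy illustration of the expected value of perfect information (EVPI).
# Hypothetical scenario: decide whether to fund a mitigation costing 40,
# against a catastrophe with probability 0.1 and loss 1000 if unmitigated.

P_CATASTROPHE = 0.1
MITIGATION_COST = 40
CATASTROPHE_LOSS = 1000

# Expected cost of each action under current (imperfect) information:
cost_if_mitigate = MITIGATION_COST                         # 40
cost_if_ignore = P_CATASTROPHE * CATASTROPHE_LOSS          # 100
best_without_info = min(cost_if_mitigate, cost_if_ignore)  # 40

# With perfect information, we pay for mitigation only in the
# fraction of cases where the catastrophe would actually occur:
best_with_info = P_CATASTROPHE * MITIGATION_COST           # 4

# EVPI: the most it is rational to spend on measurement.
evpi = best_without_info - best_with_info
print(evpi)  # 36.0
```

Spending more than this on measuring the risk would cost more than the information is worth; spending far less risks deciding blind.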
This implies that there is some sort of overarching perspective on existential risks that this mechanism leans on. But that is a circular situation: if we can't measure the specific existential risks, how can we hope to measure the general risk? We can hardly hope to make "our continued existence" an issue in a prediction market. For similar reasons, we can't make continued existence part of the general utility metric.
Not an answer: The personal utility metric
You might suppose, based on the central role of the individual satisfaction reports in Fairchy, that the answer simply falls out: people, preferring of course to live, would proxy part of their satisfaction report to measures of existential risk. But this does not satisfy any of the three properties above. It's really no wiser than voting.
Not an answer: Last minute awareness
There is one general source of information about existential risks: last-minute awareness.
The idea is that at the last minute, doomed people would know either "we saw it coming" or "we never saw it coming". Too late, of course. But beforehand, a prediction market could have bet on those outcomes. Using that, we would predict not only whether we will have seen it coming, but whether the existential risk metrics contemplated would have captured the risk.
But even though bets would technically be settled before the end of the world, settling them a few days before the end of the world is not much better.
One might say that, since we contemplate refuge markets, the people in the refuges could spend their winnings. But that misses the point. The whole reason we need multiple measures of existential risk is that refuges would save people in some situations and not in others - think of an underground bomb shelter in a flood. And for each refuge, the situations in which it would save people are exactly the situations that a refuge market can already measure.
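The coverage argument can be put schematically. In this sketch the refuges, catastrophes, and coverage sets are all hypothetical; the point is only that the union of what refuge markets can price still leaves some risks unmeasured.

```python
# Which refuges would survive which catastrophes (assumed for illustration):
coverage = {
    "underground bomb shelter": {"nuclear war"},
    "floating platform": {"flood"},
}

catastrophes = {"nuclear war", "flood", "engineered pandemic"}

# A refuge market can only price the catastrophes its refuge survives,
# so the measurable risks are the union of all refuges' coverage:
measurable = set().union(*coverage.values())

# Risks that no refuge market can see:
unmeasured = catastrophes - measurable
print(unmeasured)  # {'engineered pandemic'}
```

Adding refuges widens `measurable`, but any risk outside every refuge's coverage stays invisible to refuge markets by construction.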
So last minute awareness adds nothing.
So what are we missing?
But we humans are aware of existential risks. Collectively we're aware of a great many, some serious, some not. We know about them right now, with no special social mechanism helping us. Of course, sometimes we're way off; see whichever side of the global warming debate you disagree with. But in principle, if not in widespread social practice, we can understand many of these existential risks.
If it's so hard to predict existential risks in general, how do we do it now?
The answer is that we use analysis and logic, of course. We (some of us) think rationally about these things.
Of course, it's not as simple as exhorting Fairchy citizens to "be rational". Nearly everybody thinks they are already quite rational - and sensible, reasonable, and possessed of every other mental virtue besides.
Nor can we simply require that analysis be "scientific" or presented in the form of a scholarly paper. See Wrong by David Freedman for why experts are frequently just plain wrong in spite of all scientific posturing. For analysis to work, it must not be something that a priesthood does and presents to the rest of us.
So I believe that analysis (and its "molecular building block", logic) must be intrinsic to the decision system. If we have that, we can simply3 add analysis-predicted survival as a component of the utility function.
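What "adding analysis-predicted survival as a component of the utility function" might look like can be sketched minimally. The function name, the weighting scheme, and the weight value below are all my hypothetical choices; the weight is one of the "free parameters" of footnote 3 - a preference, settable via the individual satisfaction metric, not a prediction.

```python
def utility(satisfaction: float, p_survival: float,
            survival_weight: float = 0.5) -> float:
    """Combine reported satisfaction with analysis-predicted survival.

    satisfaction    -- aggregate satisfaction reports, in [0, 1]
    p_survival      -- analysis-predicted probability of survival, in [0, 1]
    survival_weight -- preference parameter trading one off against the other
    """
    return (1 - survival_weight) * satisfaction + survival_weight * p_survival

# With equal weighting, a policy that halves survival odds scores worse
# than a slightly less satisfying policy that keeps survival near certain:
print(utility(1.0, 0.5))   # 0.75
print(utility(0.8, 0.99))  # 0.895
```

A simple weighted sum is only one option; the essential point is that survival enters the metric the decision market optimizes, rather than being left to each bettor's private conscience.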
Adding logic to the picture
Even if you follow the field, you probably haven't heard logic mentioned in connection with prediction markets or decision markets before. Analysis is seen as something that bettors do privately before placing their bets. It's not seen as something the system ought to support.
I thought about this topic years ago - starting about 1991 when Robin Hanson first told me his idea of prediction markets. I think I know how to do it. I call the idea "argument markets". In the next few posts I hope to describe this idea fully.
1 My version, fixing Robin Hanson's design of them.
2 A good general rule I use in thinking about Fairchy is to not picture myself in charge of it all. I don't picture my friends and political allies in charge, either. I picture the dumbest, craziest, and evillest people I know pushing their agendas with all the tools available to them, and I picture a soulless AI following its programming to its logical conclusion. I always assume there are fools, maniacs, villains, and automatons in the mix. So it doesn't appeal to me to make it all up as "we" go along.
3 There are a few free parameters, but those are preferences, not predictions, so they can be set via the individual satisfaction metric or similar.