Though the utility functions for FAI and futarchy have the same
requirements, thinking about them has developed very differently. The
FAI (Singularity Institute) idea seems to be that earlier AIs would
think up the right utility function. But there's no way to test that
the AI got it right, or even that it got something reasonable.
In contrast, in talking about futarchy it's been clear that a
pre-determined utility function is needed. So much more thought has
gone into it from the futarchy side. In all modesty, I have to take a
lot of the credit for that myself. However, I credit Robin Hanson
with originally proposing using GDP. GDP as such won't work, of
course, but it is at least pointed in the right general direction.
My thinking about the utility function is more than can be easily
summed up here. But to give you a general flavor of it: the problem
isn't defining the utility function itself; it's designing a secure,
measurable proxy for it. Now I think it should comprise:
- Physical metrics (health, death, etc.)
- Economic metrics
- Satisfaction surveys.
  - To be taken in physical circumstances similar to secret-ballot
    voting, with similar measures against vote-selling, coercion, and
    so forth.
  - Ask about overall satisfaction, so nothing falls into the cracks
    between the categories.
  - Phrase it to compare satisfaction across time intervals, rather
    than attempting an absolute measure.
  - Compare multiple overlapping intervals, for robustness.
- Existential metrics
- Metrics of the security of the other metrics.
- Citizens' proxy metrics. Citizens could pre-commit part of their
  measured satisfaction metric according to any specific other metric
  they chose.
  - This is powerful:
    - It neatly handles personal identity issues such as mind
      uploading and last wills.
    - It gives access to firmer metrics, instead of the soft metric
      of reported satisfaction.
    - It lets individuals who favor a different blend of utility
      components effect that blend in their own case.
    - It may provide a level of control when we transition from
      physical-body-based life to whatever life will be in the
      distant future.
    - All in all, it puts stronger control in individual hands.
  - But it's also dangerous. There must be no way to compel anyone
    to proxy in a particular way.
    - Proxied metrics should be silently revocable. Citizens should
      be encouraged, if they were coerced, to revoke and report.
    - It should be impossible to confirm that a citizen has made a
      certain proxy.
    - Citizens should not be able to proxy all of their satisfaction
      metric.
- (Not directly a utility component) Advisory markets
  - Measure the effectiveness of various possible proxies
  - Intended to help citizens deploy proxies effectively.
  - Parameterized on facets of individual circumstance so individuals
    may easily adapt them to their situations and tastes.
  - These markets' own utility function is based on satisfaction
    surveys.
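To make the proxy mechanism concrete, here is a minimal sketch of how a
citizen's proxied satisfaction weight might be blended into their utility
contribution. All names, weights, and the specific cap value are my own
illustrative assumptions, not part of the proposal itself; the only
constraints taken from the text are that proxies are silently revocable
and that a citizen cannot proxy all of their satisfaction metric.

```python
from dataclasses import dataclass, field

# Hypothetical cap: citizens may not proxy ALL of their satisfaction
# metric. The 0.5 value is illustrative only.
MAX_PROXY_FRACTION = 0.5

@dataclass
class Citizen:
    satisfaction: float                          # survey result in [0, 1]
    proxies: dict = field(default_factory=dict)  # metric name -> fraction

    def add_proxy(self, metric: str, fraction: float) -> None:
        """Pre-commit part of the satisfaction weight to another metric."""
        if sum(self.proxies.values()) + fraction > MAX_PROXY_FRACTION:
            raise ValueError("cannot proxy that much of the satisfaction metric")
        self.proxies[metric] = self.proxies.get(metric, 0.0) + fraction

    def revoke(self, metric: str) -> None:
        """Silently revocable: removal leaves no record of the old proxy."""
        self.proxies.pop(metric, None)

def citizen_utility(c: Citizen, metrics: dict) -> float:
    """Blend reported satisfaction with whatever metrics were proxied to."""
    proxied = sum(c.proxies.values())
    u = (1.0 - proxied) * c.satisfaction
    for metric, fraction in c.proxies.items():
        u += fraction * metrics[metric]
    return u

# Usage: one citizen shifts 30% of their weight onto a health metric.
metrics = {"health": 0.9, "economic": 0.6}
alice = Citizen(satisfaction=0.5)
alice.add_proxy("health", 0.3)
print(citizen_utility(alice, metrics))  # 0.7*0.5 + 0.3*0.9
```

Note that the anti-coercion requirements (no way to confirm a proxy
exists, encouragement to revoke and report) are institutional properties,
not something a data structure can provide; the sketch only captures the
accounting.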
This isn't future-proof, of course. For instance, the part about
physical circumstances won't still work in 100 years. It is, however,
something that an AI could learn from and learn with.