14 February 2012

"Hard To Vary" and personal identity

"Hard To Vary" and personal identity

Previously

I read and sorta-lightly-reviewed David Deutsch's The Beginning Of Infinity. But in this post I'm going to talk about a tangential question to which it suggested an answer. So this post is my thoughts evoked by Deutsch's book.

Transporters as abattoirs

As a way of introducing the question, I'm going to recount a conversation that I sometimes hear in nerdspace. It starts with someone observing that transporters a la Star Trek "are really death machines". Why? Because they make a copy and destroy the original. "They kill you and make an identical twin".

Someone else (usually me) asks socratically, if this twin is so completely identical, what have you lost? Describe any test for this lost thing, other than where the guy is now standing.

The next point in the usual exchange is fraught with exasperation. I'll paraphrase it as this: In our normal experience if the body that we walk around in and whose eyes we see out of is destroyed, that's the end of us.

Then someone observes that in the normal course of events, every atom in our bodies is periodically replaced. Some faster than others, but after a few months, we have been mostly replaced with new material.

"But that's different, it's gradual"

"What's so magical about change being gradual?"

"There's continuity."

"In the transporter, there's continuity too, of information. All the relevant information reaches the other end. Otherwise it wouldn't know to how rebuild the guy there."

"But you're always conscious"

"What about when you're asleep?"

"You're alive the whole time."

"With Heisenberg uncertainty, on a short enough time-scale you're not really continuously anything."

Various other points are made. Usually this debate gets repetitive and exasperated and ends without a meeting of minds, but with a feeling that "transporters kill you" is simplistic.

The question

And a question is left hanging in the air: Then what exactly is it that we want preserved? We value personal survival. What is X, that if we have it, we have this valuable personal survival, and if we don't have X, we don't?

It's not being materially unchanged. We change atoms all the time.

It's not physically breathing or heart-beating - we all know about coma patients.

It has something to do with being faithfully copied. But it isn't being 100% unchanged. If you could never learn anything new, that wouldn't be perfect, ideal personal survival; it'd be scarcely better than death.

Personal identity is the hardest to vary (to ourselves)

I've already given away a big chunk of the answer. We value the "hard to vary" parts of ourselves. Our atoms aren't hard to vary. Our good parts are. Almost any oxygen will do for breathing. No other set of friends or childhood memories is a suitable replacement for our own.

With art, we had to ask what design terrain it was hard to vary in, and the answer seemed to be the audience's perceptive powers. But with personal identity, we are both the art and the audience.

So the criterion is self-referential. Art and explanation didn't have self-referentiality, at least not in the all-consuming way that personal identity does.

So our "hard to vary" criterion has a lot of chicken-and-egg-ness to it. We value aspects of ourselves because we appreciate them in contrast to the possible variants that we perceive - but those perceptions were in turn informed and molded by what we value. It's a path-dependent metric.

Does it fit?

It's a quick stab at a deep problem, so there's plenty of room for this idea to be misguided. But it seems about right. It doesn't fall into the trap of valuing our atoms or our continuous wakefulness, or making a frozen-in-carbonite body our ideal.

The path-dependency fits. Personal identity is full of path-dependent phenomena. Our friends and families are irreplaceable to us, but we're not seriously under the impression that, had our lives been different and we had met another random-ish set of people, those putative other people would all have been second-rate, unlike our actual friends and family.

It seems compatible with Wei Dai's observation (can't find the link) that people of all cultures have an abundance of apparently terminal values. At least, it's not obviously suspicious for there to be many hard-to-vary values. On the other hand it doesn't fall into the deontic trap of stipulating a list of terminal values, leaving us to ask "Why those?".

So I find this to be a promising theory of the value in personal identity.

04 February 2012

The Beginning Of Infinity

David Deutsch's The Beginning Of Infinity

(Cover image: images/boi_cover_large.jpg) http://beginningofinfinity.com/

At first, I was disappointed in this book. I had liked his earlier book The Fabric Of Reality and I had high expectations. The Beginning Of Infinity seemed pedestrian after that - at first.

His main topic, the central intellectual value of good explanations, was interesting in principle, but I'd already got that from his earlier book. He describes good explanations as "hard to vary".

Then he examined themes that I was already familiar with: Evolution as unintelligent design (a la Dennett). Many-worlds. Memetics. Infinity (a la Cantor). It's hard to get excited about stuff I already knew.

Why Are Flowers Beautiful?

Chapter 14 "Why Are Flowers Beautiful?" was the first exciting offering in the book, at least to my eyes. Good art, he says is also hard to vary, just like explanation and design.

He makes his case by talking about flowers. You may think flowers peaceable creatures, but they are the product of a sort of arms race. Flowering plants are symbiotic with pollinating insects, which need to recognize them. But if their flower designs were too easy to imitate, other flowers with poorer nectar would look like them. The insects would sometimes visit the poorer flowers instead, undesirable for both the insects and the proper flowers, benefitting only the free-loading flowers.

So each flower species has an appearance that's hard to imitate. Since no flower has a monopoly on any color or shape, a free-loading flower could easily get the gross appearance right; what it can't easily copy are the finer details. So flowers have appearances that are "hard to vary": getting the appearance only roughly right won't fool the pollinating insects.

patchworkZombie points out that this is unlikely; more likely the sincere flowers try to be memorable while free-loading flowers try to be forgettable.

For flowering plants, it's a vital evolutionary design; for us, a pretty sight. This is the nexus Deutsch finds between design and art.

What does he mean, "hard to vary?"

I had to mentally fill in what he means by "hard to vary". By this point in the book I think I had basically got it, but he never says it in so many words. So here's my guess as to what "hard to vary" means as it applies to art.

What is it about art that he's saying is hard to vary? You could easily (say) play a wrong note in a Beethoven piano sonata or paint a stupid moustache on the Mona Lisa. That's not hard.

So is he saying it's hard to vary on the receiver's side? That by itself makes no sense. Of course art doesn't vary on the receiver's side; it's the artist who can vary it, not the audience.

But if I understand rightly, it is nevertheless the audience that delineates what is "hard to vary". We can perceive some sensations and patterns easily, some with difficulty, and some not at all. We are far more sophisticated than insects in many ways, but we have real perceptual powers and real perceptual limitations nonetheless.

So an oeuvre is hard to vary if it has cornered a niche in perceptual space from which the easily-made variations produce something not much like the oeuvre. The easy variations give wrong notes and not new tunes, as it were.

It almost seems circular. Varying an oeuvre that's hard to vary produces one that is less hard to vary. That's not a good criterion for "hard to vary".

But it's really about the interaction between ease of variation and subtle perceptual powers. Varying an oeuvre in an easy way, say by changing the pitch of one note, produces something that our subtler perceptual powers see as grossly different, say, by messing up an otherwise good match to an established motif and not leading anywhere.

So if I understand right, he's saying that quality in art is precisely the same thing as being hard to vary in light of the audience's perceptual powers.

This is a good theory

It adds up to the first compelling theory of art that I have seen. It lets subjective perception into the picture, but avoids the post-modern notion that it's all subjective and "ugly is the new pretty". It's properly grounded; the concepts that it's built from are universal, not parochial, and can't be accused of being merely disguised synonyms for beauty. And most importantly, when I hold it in mind while listening to music, it seems to apply reasonably to what I'm hearing.

01 January 2012

GDP Up, Happiness Down

Thoughts on GDP Up, Happiness Down

Previously

In the past I've talked about decision markets, more on the Futarchy discussion group I created than on this blog. One idea in Futarchy is that GDP, or some extension thereof, is a reasonable proxy for happiness. (There's much more to it; I'm greatly short-changing it here.)

Now

Today an article on Science Daily 1 says that while GDP has risen in the past two years, "happiness" as measured by the researchers has fallen. Uh-oh.

"After a gradual upward trend [in happiness] that ran from January to April, 2009, the overall time series has shown a gradual downward trend, accelerating somewhat over the first half of 2011."

Is that comparison right?

The article didn't say what they denominated GDP in, or I missed it. It's surely in the paper but I just read the article 2 minutes before starting this blog post, so I haven't checked.

But if their GDP is denominated in dollars, the apparent paradox may just mean that the dollar is worth less now. No surprise there.

How they measured happiness

It was interesting how they tried to measure happiness: by analyzing tweets. They had a database, built with Amazon's Mechanical Turk, of the emotional positiveness of English words. They graded 46 billion words' worth of tweets on emotional positiveness.
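
Roughly, that's a weighted average of per-word scores. Here's a toy Python version of that kind of scoring; the three word scores below are made up by me, standing in for their much larger Mechanical Turk table:

    # Toy word-level happiness scoring; the scores here are invented.
    word_happiness = {"love": 8.4, "suicidal": 1.3, "lunch": 5.9}

    def tweet_happiness(tweets):
        # Average the scores of every scored word across all the tweets.
        scores = [word_happiness[w]
                  for tweet in tweets
                  for w in tweet.lower().split()
                  if w in word_happiness]
        return sum(scores) / len(scores) if scores else None

    print(tweet_happiness(["Love my lunch", "I feel suicidal"]))   # about 5.2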

Is it representative? Not very

The researchers admit that their happiness metric is less than representative: "It does skew toward younger people and people with smartphones and so on - but Twitter is nearly universal now," Dodds said, "Every demographic is represented."

Could that metric work for futarchy? Not like that.

We might want to use such a metric for two reasons:

  • We'd rather measure happiness than dollars, which we were treating as a proxy for something like happiness anyway.
  • Having more good metrics is always better.

But that's about the most spoofable metric ever. If I had money riding on "Happiness will go down", I would have an army of webbots out there tweeting variations of "I feel suicidal" over and over and over.

The researchers would catch on, you say? Sure, and then there'd be an arms race. I gave you the cheap quip version, but off the top of my head I can think of half a dozen ways to fake more human-like miserable tweets. Ultimately I am confident that the fakes would win the arms race.

So it's another example of Goodhart's Law: proxy measures only work OK when nothing important depends on them.

Could that sort of idea work at all?

Well, can we design around Goodhart's Law with this? Perhaps only choose the proxy after the fact, and reject ones that look like they've gotten distorted.

It might work in some situations:

  • If there are so many choosable proxies that, beforehand, the expected cost of distorting them, spread over many proxies, is more than the expected gain of distorting the right one(s). (See the toy calculation after this list.)
  • If the procedure for rejecting distorted ones is not itself gameable.
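
To make the first condition concrete, here's a toy calculation (every number is invented). If the proxy is drawn at random from N candidates only after the fact, someone who distorts k of them profits only if the expected gain beats the cost of distorting those k:

    N = 50                   # choosable proxies (invented)
    cost_per_proxy = 1000.0  # cost to distort one proxy (invented)
    gain = 20000.0           # payoff if the proxy actually chosen was distorted (invented)

    def expected_profit(k):
        # Distort k of the N proxies; one is chosen uniformly at random afterwards.
        return (k / N) * gain - k * cost_per_proxy

    for k in (1, 10, 50):
        print(k, expected_profit(k))
    # Distorting is unprofitable for every k whenever gain / N < cost_per_proxy.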

So the idea isn't that promising, but is worth a mention.

Footnotes:

1 University of Vermont (2011, December 16). GDP up, happiness down: From Twitter, scientists measure mood. ScienceDaily. Retrieved January 1, 2012, from http://www.sciencedaily.com/releases/2011/12/111216174440.htm

24 December 2011

Trade Logic - what mechanisms would a system need?

The mechanisms that a Trade Logic system would need

Previously

I introduced and specified Trade Logic, and posted a fair bit about it.

Now

Here's another post that's mostly a bullet-point list. In this post I try to nail down, at least roughly, all the mechanisms TL needs in order to function as intended. Some of it, notably the treatment of selector bids, is still only partly defined. After the list, I sketch the trading rules in code.

The mechanisms that a TL system needs

  • Pre-market
    • Means to add definitions (understood to include selectors and bettable issues)
      • Initiated by a user
      • A definition is only accepted if it completely satisfies the typechecks
      • Definitions that are accepted persist.
  • In active market
    • Holdings
      • A user's holdings are persistent except as modified by the other mechanisms here.
    • Trading
      • Initiated by a user
      • User specifies:
        • What to trade
        • Buy or sell
        • Price
        • How much
        • How long the order is to persist.
      • A trade is accepted just if:
        • The user holds that amount or more of what he's selling, and
        • either it can be met from booked orders, or it is to be retained as a booked order
      • Booked orders persist until either:
        • Acted on
        • Cancelled
        • Timed out
    • Issue conversion
      • Initiated by a user
      • User specifies:
        • What to convert from
        • What to convert to
        • How much
      • A conversion is accepted just if:
        • It's licensed by one of the conversion rules
        • User has sufficient holdings to convert from
  • Selectors - where the system meets the outside world. Largely TBD.
    • As part of each selector issue, there is a defined mechanism that can be invoked.
      • Its actions when invoked are to:
        • Present itself to the outside world as invoked.
          • Also present its invocation parameters.
          • But not reveal whether it was invoked for settlement (randomly) or as a challenge (user-selected values)
        • Be expected to act in the outside world:
          • Make a unique selection
          • Further query that unique result. This implies that selector issues are parameterized on the query, but that's still TBD.
          • Translate the result(s) of the query into language the system understands.
          • Present itself to the system as completed.
        • Accept from the real world the result(s) of that query.
          • How it is described to the system is TBD. Possibly the format it is described in is yet another parameter of a selector.
      • How an invocable mechanism is described to the system is still to be decided.
      • (NB, a selector's acceptability is checked by the challenge mechanism below. Presumably those selectors that are found viable will have been defined in such a way as to be randomly inspectable etc; that's beyond my scope in this spec)
    • Challenge mechanism to settle selector bets
      • Initiated by challengers
        • (What's available to pro-selector users is TBD, but probably just uses the betting mechanism above)
      • Challenger specifies:
        • The value that the choice input stream should have. It is basically a stream of bits, but:
          • Where split-random-stream is encountered, it will split and the challenger will specify both branches.
          • Challenger can specify that from some point on, random bits are used.
        • Probably some form of good-faith bond or bet on the outcome
      • The selector is invoked with the given values.
      • Some mechanism queries whether exactly one individual was returned (partly undecided).
      • The challenge bet is settled accordingly.
    • There may possibly also be provision for multi-turn challenges ("I bet you can't specify X such that I can't specify f(X)") but that's to be decided.
  • Settling (Usually post-market)
    • Public fair random-bit generating
      • Initiated as selectors need it.
      • For selectors, some interface for retrieving bits just as needed, so that we may use pick01 etc without physically requiring infinite bits.
        • This interface should not reveal whether it's providing random bits or bits given in a challenge.
    • Mechanism to invoke selectors wrt a given bet:
      • It decides:
        • Whether to settle it at the current time.
        • If applicable, "how much" to settle it, for statistical bets that admit partial settlement
      • Probably a function of:
        • The cost of invoking its selectors
        • Random input
        • An ancillary market judging the value of settlement
        • Logic that relates the settlement of convertible bets. (To be decided, probably something like shares of the ancillary markets of convertible bets being in turn convertible)
      • Being activated, it in fact invokes particular selectors with fresh random values, querying them.
      • In accordance with the results of the query:
        • Shares of one side of the bet are converted to units
        • Shares of the other side are converted to zeros.
        • But there will also be allowance for partial results:
          • Partial settlement
          • Sliding payoffs
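
As promised above, here's a rough sketch in Python of the trading rules: an order is accepted just if the user holds enough of what he's selling, and it is then either met from the book or, if so requested, retained as a booked order. This is only my illustration; all the names and data structures are invented, and settlement of holdings on each fill is omitted.

    # My own illustrative sketch, not part of the TL spec.
    from dataclasses import dataclass

    @dataclass
    class Order:
        user: str
        issue: str           # what to trade
        side: str            # "buy" or "sell"
        price: float
        amount: int
        keep_on_book: bool   # retain any unmet remainder as a booked order?

    class Market:
        def __init__(self):
            self.holdings = {}   # (user, issue) -> amount held
            self.book = []       # resting (booked) orders

        def accept(self, order):
            # The user must hold at least the amount of what he's selling.
            if order.side == "sell":
                if self.holdings.get((order.user, order.issue), 0) < order.amount:
                    return False
            self.fill_from_book(order)
            if order.amount > 0:            # not fully met from booked orders
                if not order.keep_on_book:
                    return False            # neither met nor retained: rejected
                self.book.append(order)     # retain the remainder as a booked order
            return True

        def fill_from_book(self, order):
            # Match against compatible resting orders at an acceptable price.
            for resting in list(self.book):
                if resting.issue != order.issue or resting.side == order.side:
                    continue
                ok = (order.price >= resting.price if order.side == "buy"
                      else order.price <= resting.price)
                if not ok:
                    continue
                filled = min(order.amount, resting.amount)
                order.amount -= filled
                resting.amount -= filled
                if resting.amount == 0:
                    self.book.remove(resting)
                if order.amount == 0:
                    return

    m = Market()
    m.holdings[("alice", "rain-tomorrow")] = 10
    print(m.accept(Order("bob", "rain-tomorrow", "buy", 0.60, 10, True)))     # True, booked
    print(m.accept(Order("alice", "rain-tomorrow", "sell", 0.55, 10, False))) # True, matched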

Trade Logic pick-boolean

Trade Logic pick-boolean

Previously

I initially said that the basic choice operator in TL was pick01, which outputs a random scalar between 0 and 1.

What's wrong with pick01 as basic

It occurs to me that picking a random scalar is not the most basic thing we could do.

It also occurred to me that pick01 as defined needs to provide infinite precision, which of course isn't physically realizable.

It couldn't be sensibly defined to generate both exact 0 and exact 1.

And the fact that I forget whether its interval was left-closed or right-closed suggests that there's a problem there.

So pick-boolean

So from infinite to minimal: pick-boolean is analogous to pick01, except that it outputs a bit, not a scalar.

But it shouldn't be unary

The temptation is to make pick-boolean unary. But if we did, it wouldn't be like other unary predicates. When two instances of pick-boolean have the same inputs (ie, none), the outputs are not necessarily the same. This lack of referential transparency potentially infects the outputs of any predicate that uses pick-boolean. This would restrict and complicate the conversions we are allowed to make.

So instead, let's provide a system random stream. A "random output" predicate such as pick-boolean will actually be ternary; its args will be:

  • Input bitstream
  • Output bitstream
  • Output random

This has the nice property that it gives us an upper bound on a proposition's sample space: It's the difference in position between the input stream and the output stream. Of course, that is not always predetermined.
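
Here's a rough Python sketch of the shape I mean, representing the system random stream as a position into a shared bit sequence (my own toy encoding, not part of the spec). Note that the same input stream always yields the same bit, and the difference in stream positions is the number of bits consumed:

    # Sketch only: a "stream" is just (shared bit sequence, current position).
    def pick_boolean(stream):
        # Ternary in spirit: input stream -> (output stream, output random bit).
        bits, pos = stream
        return (bits, pos + 1), bits[pos]

    bits = [1, 0, 1, 1, 0, 0, 1, 0]    # supplied by the system, or by a challenger
    s0 = (bits, 0)
    s1, a = pick_boolean(s0)
    s2, b = pick_boolean(s1)
    print(a, b)                        # 1 0
    print(pick_boolean(s0)[1] == a)    # True: same input stream, same output bit
    print(s2[1] - s0[1])               # 2 bits consumed: sample space is at most 2**2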

We said earlier that bettable propositions had no arguments, and we need these to be bettable, so we have to revise slightly: a definition is bettable if its only inputs and outputs are system random streams (which will be a type of their own).

There's another way this treatment is helpful: Sometimes we want to sample to settle issues (population bets). Such issues can sample by using the random stream argument, which feeds any selectors that are involved.

Random stream arguments also help validate selectors by challenge - the challenger specifies the bitstream. In this one case, the output bitstream is useful: it gives the various random calls an ordering; otherwise the challenger couldn't specify the bitstream.

Stream splitting

But sometimes a single stream doesn't work nicely. A predicate like pick01 may need to use arbitrarily many bits to generate arbitrary precision. So what can it do? It shouldn't consume the entire stream for itself, and it shouldn't break off just a finite piece and pass the rest on.

So let's provide split-random-stream. It inputs a random stream and outputs two random streams. We'll provide some notation for a challenger to split his specified stream accordingly.
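
Continuing the same toy encoding, one simple way to split is to deal the remaining bits out alternately to the two child streams; for a truly random stream the halves are still independent. A real implementation would be lazy; the finite list here is just for illustration:

    # Sketch: split one stream into two by dealing its bits out alternately.
    def split_random_stream(stream):
        bits, pos = stream
        rest = bits[pos:]
        return (rest[0::2], 0), (rest[1::2], 0)

    left, right = split_random_stream(([1, 0, 1, 1, 0, 0], 0))
    print(left[0], right[0])           # [1, 1, 0] [0, 1, 0]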

Can't be built by users

We will provide no way to build an object of this new type from first principles. Users can get two of them from one by using split-random-stream, but can't get one from none. So the only way to have one is to get it from an input. Ultimately any proposition that uses a random stream must get it from system mechanisms of proposition resolution such as sampling and challenging.

New pieces

So let's define these built-ins:

  • Type: random-stream, the type of the system's random stream. No built-in predicate has any mode that outputs this type but doesn't also input it.
  • split-random-stream, ternary predicate that inputs a random stream and outputs two random streams.
  • pick-boolean, a ternary predicate outputting a bit.
  • Type: lazy scalar. A lazy scalar can only be compared with given precision, not compared absolutely. So any comparison predicates that take this also manage precision, possibly by inputting an argument giving the required precision. (There's a sketch of this after the list.)
  • pick01, ternary, outputs a lazy scalar.
  • Convenience
    • Maybe pick-positive-rational, similar but picking binarily from a Stern-Brocot tree.
    • Maybe pick-from-bell-curve, similar but picking lazily from the standard bell curve.
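
And a toy sketch of the lazy-scalar idea mentioned in the list: pick01 reveals binary digits on demand (drawing them from pick-boolean), and comparison takes a precision argument rather than being absolute. Again, my own invention for illustration only:

    # Sketch: a lazy scalar in [0, 1) whose binary digits are drawn on demand,
    # plus a comparison that only answers to a requested precision.
    def pick_boolean(stream):              # same toy encoding as the sketch further up
        bits, pos = stream
        return (bits, pos + 1), bits[pos]

    def lazy_scalar(stream):
        digits, state = [], [stream]
        def digit(i):
            while len(digits) <= i:
                state[0], b = pick_boolean(state[0])
                digits.append(b)
            return digits[i]
        return digit

    def less_than(x, y, precision_bits):
        # Compare two lazy scalars, looking at no more than precision_bits digits.
        for i in range(precision_bits):
            if x(i) != y(i):
                return x(i) < y(i)
        return None                        # indistinguishable at this precision

    bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
    x = lazy_scalar((bits, 0))
    y = lazy_scalar((bits, 5))
    print(less_than(x, y, 4))              # False: x starts 1..., y starts 0...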

09 December 2011

Kernel WYSIWYG Digraphs

Kernel WYSIWYG rooted digraphs

Previously

I said in comments that I view backquote as a WYSIWYG tree ctor, analogous to `list'.

That's not fully general

But backquote doesn't suffice to construct the most general form of open structure of pairs, a rooted digraph. Those require the ability to share objects between multiple parents.

Things that don't do the job

One could write shared objects by naming objects in a `let' form and including them by name. That's not WYSIWYG, though. WYSIWYG would at least present the objects in place once, as #N# / #N= reader notation does.

But that's in the reader. It's not available to general code.

What to do

One could:

  • Enclose the whole construction in an empty `let'.
  • Provide a combiner expanding to ($sequence ($define! x DEF) x)

Signals, continuations, and constraint-handling

Signals, continuations, constraint-handling, and more

Previously

I wrote about mutability and signals and automatically forcing promises where they conflict with required types.

Signals for Constraint-handling

Constraint-handling, and specifically the late-ordered evaluation it uses, could be handled with signals. Something like this (a rough code sketch follows the list):

  • Promises are our non-ground objects. They are, if you think about it, analogous to Prolog variables.
    • They can be passed around like objects.
    • But they're not "really" objects yet.
    • For them to "succeed", there must be some value that they can take.
    • That value can be determined after they are passed.
    • Everything else is either ground or a container that includes one or more of them.
  • As I already do in Klink, use of a non-ground argument is detected where it's unacceptable.
    • Generally, they're unacceptable where a type is expected (eg they're acceptable as arguments to cons)
  • Unlike what I do now, non-ground unacceptables send signals to a scheduler.
    • This implies that each evaluation is uniquely related to (owned by) one scheduler.
    • That signal passes as objects
      • the continuation as object.
      • The promise to be forced.
  • It is the job of the scheduler to ensure that the resulting computation completes.
    • In appropriate circumstances, eg with resource-quotas, or when doing parallel searches, it is reasonable to not complete.
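
Here's the rough sketch promised above, in Python. Generators stand in for continuations, and yielding a promise stands in for the "non-ground argument where a type was expected" signal; everything here is invented to show the shape, not how Klink actually does it:

    # Rough sketch, all names invented.
    class Promise:
        def __init__(self):
            self.ground, self.value = False, None
        def force(self, value):
            self.ground, self.value = True, value

    class Scheduler:
        def run(self, *evaluations):
            runnable = list(evaluations)
            blocked = []                        # (continuation, promise) pairs
            while runnable or blocked:
                # Wake any evaluation whose promise has since been forced.
                for item in list(blocked):
                    if item[1].ground:
                        blocked.remove(item)
                        runnable.append(item[0])
                if not runnable:
                    raise RuntimeError("stuck: every evaluation waits on a promise")
                cont = runnable.pop(0)
                try:
                    signal = next(cont)         # run the evaluation one step
                except StopIteration:
                    continue                    # that evaluation completed
                if signal is None:
                    runnable.append(cont)       # it merely yielded control
                else:
                    blocked.append((cont, signal))  # it signalled: wait on that promise

    p = Promise()

    def consumer():
        if not p.ground:
            yield p                             # signal: need p to be ground first
        print("consumer got", p.value)

    def producer():
        yield                                   # unrelated work first
        p.force(42)

    Scheduler().run(consumer(), producer())     # prints: consumer got 42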

An analogy

Computations whose flow can't be changed are analogous to constant objects, and flow-changeable ones are analogous to mutable ones.

Another use: Precision

This mechanism could also be useful for "dataflow" precision management. Here the objects are not exactly promises, but they are incomplete in a very precise way. They are always numbers but their precision can be improved by steps.

Combiners that require more precision could signal to their generators, which could signal back their new values, propagating back their own requirements as needed. This is very manageable.
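
A toy sketch of what I mean, with intervals standing in for numbers of limited precision (all names invented): a consumer's request for precision propagates back through a derived node to its generators.

    # Toy dataflow precision; everything here is invented for illustration.
    class Interval:
        # A number known only to lie in [lo, hi]; refine() narrows it one step.
        def __init__(self, lo, hi, refine=None):
            self.lo, self.hi, self.refine = lo, hi, refine
        def width(self):
            return self.hi - self.lo
        def request_precision(self, eps):
            # The consumer's signal: narrow me until my width is at most eps.
            while self.width() > eps and self.refine is not None:
                self.refine()

    def leaf(value, lo, hi):
        # A generator that can always halve its own uncertainty around value.
        iv = Interval(lo, hi)
        def refine():
            mid = (iv.lo + iv.hi) / 2
            if value < mid:
                iv.hi = mid
            else:
                iv.lo = mid
        iv.refine = refine
        return iv

    def add(x, y):
        # A derived value: to refine itself it propagates the request upstream.
        out = Interval(x.lo + y.lo, x.hi + y.hi)
        def refine():
            x.request_precision(x.width() / 2)
            y.request_precision(y.width() / 2)
            out.lo, out.hi = x.lo + y.lo, x.hi + y.hi
        out.refine = refine
        return out

    s = add(leaf(1 / 3, 0.0, 1.0), leaf(2 / 7, 0.0, 1.0))
    s.request_precision(0.01)
    print(s.lo, s.hi)    # brackets 1/3 + 2/7 to within 0.01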

Apply-continuation and signals can mutually define

apply-continuation, while it could be used to implement signals, can also be implemented by signals between evaluations. It signals a scheduler to schedule the given continuation and deschedule the current continuation.

Done this way, it gives a scheduler flexibility. This might appear to make analysis more difficult. But a simple (brain-dead) scheduler could do it exactly the continuation way: exactly one evaluation is "in control" at all times. This might be the default scheduler.

In both cases this functionality is a neither-fish-nor-fowl hybrid between objects and computational flow. ISTM it has to be so.

signals allow indeterminate order

One mixed blessing about signals is that since they are "broadcast" 1-to-N, it can be indeterminate what is to be evaluated in what order.

  • Pro: This gives schedulers flexibility.
  • Con: Computations can give different results.
    • Continuation-based scheduling doesn't necessarily do better. It could consult "random" to determine the order of computations.
  • Pro: Signals and their schedulers give us another handle to manage evaluation order dependency. A scheduler could co-operate with analysis by:
    • Promising that evaluations will behave "as if" done in some particular order.
    • Taking into account analysis
    • Providing information that makes analysis easier, short of simulating the scheduler.
      • Probably promising that no signals escape the scheduler itself, or only a few well-behaved ones do.
  • Pro: Regardless of the scheduler, analysis can treat signals as a propagating fringe. Everything not reached by the fringe is unaffected.

Should signal-blockers cause copies?

I suggested earlier that combiners might promise non-mutation by blocking mutation signals out.

One way to code this is to make such combiners deep-copy their arguments. Should this situation automatically cause copies to be made? ISTM yes.
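
For concreteness, a tiny hypothetical wrapper (my own sketch in Python, nothing to do with Klink's actual code): a combiner that blocks mutation signals gets deep copies of its arguments, so the caller's objects can't be changed through it.

    # Hypothetical illustration only.
    import copy

    def blocks_mutation(combiner):
        # The wrapped combiner only ever sees deep copies of its arguments.
        def wrapped(*args):
            return combiner(*copy.deepcopy(args))
        return wrapped

    @blocks_mutation
    def sneaky_sum(xs):
        xs.append(999)          # would mutate the caller's list...
        return sum(xs)

    nums = [1, 2, 3]
    print(sneaky_sum(nums))     # 1005
    print(nums)                 # [1, 2, 3]: the caller's list is unchanged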

  • Not an issue: User confusion. There is no case where a user reasonably expects different functionality than he gets.
    • He might expect a function to mutate an object and it doesn't - but why is he expecting that of a function that doesn't claim to mutate anything and moreover explicitly blocks that from happening? Bad docs could mislead him, but that's a documentation issue.
  • Not an issue: Performance.
    • Performance is always secondary to correctness.
    • Performance can be rescued by
      • copy-on-write objects
      • immutable objects
      • Analysis proving that no mutation actually occurs.