20 May 2011

Automatic forcing of promises in Klink (addendum)

I had meant to also explain auto-forcing in `$let' and `$define!', but for some reason I didn't. So I'm adding that now.

Background: In Kernel, combiners like `$let' and `$define!' destructure values. That is, they define not just one thing, but an arbitrarily detailed tree of definiendums.

So when a value doesn't match the tree of definiendums, or matches it only partly, and the part that doesn't match is a promise, Klink forces the promise and tries the match again.

Unlike argobject destructuring, this doesn't check type.
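
To make this concrete, here is a hypothetical snippet sketching the intended behavior. `$lazy' and `$define!' are standard Kernel; the automatic forcing described in the comments is the Klink behavior explained above, not something to rely on elsewhere.

  ;; The value only partly matches the definiendum tree: 0 matches x, but
  ;; the cdr is a promise where the tree expects the pair (y . rest).
  ($define! (x y . rest) (cons 0 ($lazy (list 1 2))))
  ;; Without auto-forcing this is an immediate error.  With it, Klink forces
  ;; the promise to (1 2), retries the match, and binds x to 0, y to 1, and
  ;; rest to (2).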

Automatic forcing of promises in Klink

As I was coding EMSIP in Kernel, I realized that I was spending entirely too much time coding and testing just to manage the forcing of promises.

I needed to use promises. In particular, some sexps had to have the capability of operating on what followed them, if only to quote the next sexp. But I couldn't expect every item to read its tail before operating. That would mean always reading an entire list before acting, which isn't just inefficient; it is inconsistent with the design as an object port, from which objects can be extracted one by one.
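
For flavor, here is a minimal sketch of the kind of lazy tail I mean. `read-object' is a hypothetical stand-in for whatever extracts the next object from an EMSIP object port, and end-of-port handling is omitted.

  ;; Sketch only: extract one object now, promise the rest of the port.
  ($define! lazy-read-list
    ($lambda (port)
      (cons (read-object port)             ;; hypothetical extractor
            ($lazy (lazy-read-list port)))))
  ;; An item that wants to operate on what follows it, say to quote the next
  ;; sexp, can hold the promised tail and force it only if it needs to.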

What I do instead

So what I have now is automatic forcing of promises. This occurs in two destructuring situations. I've coded one, and I'm about to code the other.

Operatives' typespecs

Background: For a while now, Klink has been checking types before it calls any built-in operative. This operation checks an argobject piece by piece against a typespec, also piece by piece. That destructures it treewise into arguments that fill an array holding exactly the arguments for the C call. It's very satisfactory.

Now when an argobject doesn't match a typespec, but the argobject is a promise, the destructure operation arranges for the promise to be forced. After that comes the tricky part. While the destructuring was all in C, it could just return, having filled the target array. But now it has to also reschedule another version of itself, possibly nested, and reschedule the C operation it was working towards.
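
At the Kernel level the effect is roughly this (a hypothetical snippet; `car' stands in for any built-in whose typespec demands a pair):

  ;; The evaluated argument is a promise, not the pair that car's typespec
  ;; asks for.
  (car ($lazy (list 1 2)))
  ;; Auto-forcing forces the promise to (1 2), retries the typespec match,
  ;; and the C operation then runs on the forced value, yielding 1.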

All fairly tricky, but by using the chain combiners and their support, and by passing `destructure' suitable arguments, I was able to make it work.

Defining

As in `$let' or `$define!'. I'm about to code this part. I expect it to be along similar lines to the above, but simpler (famous last words).

Status

I haven't pushed this branch to the repo yet because I've written only one of the two parts, the operative typespec destructuring. That part passes the entire test suite.

I haven't yet tried it with EMSIP to see if it solves the problem.

Does it lose anything?

ISTM this does not sacrifice anything, other than the {design, coding, testing, debugging} effort I've spent on it.

Functionality

It subtracts no functionality. `force' is still available for those situations when manual control is needed.

Restraint

The opposite side of functionality. Does this sacrifice the ability to refrain from an action? No. In every circumstance where a promise is forced, the alternative would be an immediate error, so there never was a way to do the same thing while refraining from forcing.

But does it sacrifice the ability to make other code refrain from an action? No, the other code could have just called `force' at the same points.

Exposure

Does this expose whether a promise has been forced? No, not in any way that wasn't already there. Of course one can deduce that a promise has been forced from the fact that an operation has been done that must force that promise. That's always been the case.

Code size

The init.krn code is actually slightly smaller with this. The C code grew, but largely in a way that it would have had to grow anyway.

13 May 2011

FAIrchy diagram

I wrote yesterday about FAIrchy, my notion that combines FAI and futarchy. Here is an i* diagram that somewhat captures the system and its rationale. It's far from perfect, but captures a lot of what I was talking about.

Many details are left out, especially for peripheral roles.

Link to this diagram

Some technical comments on this diagram

I felt like I needed another type of i* goal-node to represent measurable decision-market goal components, which are goal-like but unlike both hard and soft i* goals. Similarly, I wanted a link-type that links these to measurement tasks. I used the dependency link, which seemed closest to what I wanted, but it's not precisely right.

There's some line-crossing. Dia's implementation of i* makes that inevitable for a large diagram.

FAIrchy[1]

In this blog post I'm revisiting a comment I made on overcomingbias[2]. I observed that Eliezer Yudkowsky's Friendly Artificial Intelligence (FAI) and futarchy have something in common: both are critically dependent on a utility function with about the same requirements. The requirements are basically:

  • Society-wide
  • Captures the panorama of human interests
  • Future-proof
  • Secure against loophole-finding

Background: The utility function

Though the utility functions for FAI and futarchy have the same requirements, thinking about them has developed very differently. The FAI (Singularity Institute) idea seems to be that earlier AIs would think up the right utility function. But there's no way to test whether the AI got it right, or even got it reasonably close.

In contrast, in talking about futarchy it's been clear that a pre-determined utility function is needed. So much more thought has gone into it from the futarchy side. In all modesty, I have to take a lot of the credit for that myself. However, I credit Robin Hanson with originally proposing using GDP[3]. GDP as such won't work, of course, but it is at least pointed in the right general direction.

My thinking about the utility function is more than can easily be summed up here. But to give you a general flavor of it: the problem isn't defining the utility function itself, it's designing a secure, measurable proxy for it. Now I think it should comprise:

  • Physical metrics (health, death, etc)
  • Economic metrics
  • Satisfaction surveys.
    • To be taken in physical circumstances similar to secret-ballot voting, with similar measures against vote-selling, coercion, and so forth.
    • Ask about overall satisfaction, so nothing falls through the cracks between the categories.
    • Phrase it to compare satisfaction across time intervals, rather than attempting an absolute measure.
    • Compare multiple overlapping intervals, for robustness.
  • Existential metrics
  • Metrics of the security of the other metrics.
  • Citizen's proxy metrics. Citizens could pre-commit part of their measured satisfaction metric according to any specific other metric they chose.
    • This is powerful:
      • It neatly handles personal identity issues such as mind uploading and last wills.
      • It gives access to firmer metrics, instead of the soft metric of reported satisfaction.
      • It lets individuals who favor a different blend of utility components effect that blend in their own case.
      • May provide a level of control when we transition from physical-body-based life to whatever life will be in the distant future.
      • All in all, it puts stronger control in individual hands.
    • But it's also dangerous. There must be no way to compel anyone to proxy in a particular way.
      • Proxied metrics should be silently revocable. Citizens should be encouraged, if they were coerced, to revoke and report.
      • It should be impossible to confirm that a citizen has made a certain proxy.
      • Citizens should not be able to proxy all of their satisfaction metric.
  • (Not directly a utility component) Advisory markets
    • Measure the effectiveness of various possible proxies
    • Intended to help citizens deploy proxies effectively.
    • Parameterized on facets of individual circumstance so individuals may easily adapt them to their situations and tastes.
    • These markets' own utility function is based on satisfaction surveys.

This isn't future-proof, of course. For instance, the part about physical circumstances won't still work in 100 years. It is, however, something that an AI could learn from and learn with.

Background: Clippy and the box problem

One common worry about FAI is that when the FAI gets really good at implementing the goals we give it, the result for us will actually be disastrous due to subtle flaws in the goals. This perverse goal is canonically expressed as Clippy trying to tile the solar system with paper clips, or alternatively with smiley faces.

(Image: Clippy the paper clip)

The intuitive solution is to "put the AI in a box". It would have no direct ability to do anything, but would only give suggestions which we could accept or disregard. So if the FAI told us to tile the solar system with paper clips, we wouldn't do it.

This is considered unsatisfactory by most people. To my mind, that's very obvious. It almost doesn't need supporting argument, but I'll offer this: To be useful, the FAI's output would certainly have to be information-rich, more like software than like conversation. That information-richness could be used to smuggle out actions, or failing that, to smuggle out temptations. Now look how many people fall for phishing attacks even today. And now imagine a genius FAI phishing. A single successful phish could set in motion a chain of events that allows the FAI out of the box.

FAIrchy: The general idea

What I propose is this: The most important AIs, rather than directly doing things or even designing and advising, should be traders in a futarchy-like system. As such, they would in effect govern other AIs that design, advise, and directly do things.

At first, they'd be trading alongside humans (as now). Inevitably with Moore's Law they'd dominate trading, and humans would only use the market to hedge. By then, AIs would have organically evolved to do the right (human-satisfying) thing.

Treat these AI traders as individuals in a population-style search algorithm (think genetic programming). Select for the most profitable ones and erase those that overstepped their roles.

Advantages

  • There's a built-in apprenticeship stage, in that the AIs are basically doing their eventual job even in the early stages, so any striking problems will be apparent while humanity can still correct them.
  • We get the advantage of a reasonable satisfaction metric up front, rather than hoping AIs will design it well.
  • These AIs have no incentive to try to get themselves unboxed. Earlier I talked about subtly perverse utility functions. But with these, we understand the utility function: make a profit in the decision markets. They can't go subtly off the track of human happiness, because that's not even the track they're intended to be on. We do need to make sure that corrupting the utility metric can't pay off, of course, but that's not a new issue.
  • The AIs would learn from people's real satisfaction, not just from theoretical projections.

About the separate AI roles

In general

The healthy performance of each role should be a component of the overall utility function.

Separation of roles: Why

Don't allow mingling of AI roles, especially not the speculator role and the security-tester role. The threat here is that a speculator AI that also moves in the real world may find a way to short-circuit the system for profit. For instance, it might find a way to distort the satisfaction reports, or destroy things corresponding to issues it had shorted.

Put a different way, we don't want the various roles to co-evolve outside of their proper functions. We never want a situation where one role (say, security) is compromised because, on the whole, it's more profitable to compromise it and profit somewhere else (say, in speculating).

Effectively, this separation creates a sort of distributed system that includes us and our satisfaction metric. This was never a desideratum but it is encouraging.

Separation of roles: How

Of course we'd use the obvious physical and computational security measures. We'd run the trader AIs in a way that lets us physically monitor their traffic and operations. Probably they'd be run in virtual machines.

I'll talk about other measures in the next section, in regard to specific roles.

The AI roles

Speculator

The role described above. Trades in a big futarchy decision market.

Effector

AIs that "do things". This includes probably every role you pictured an AI in before now.

They, like society, would be governed by the FAIrchy system. This probably cannot be improved on because it subsumes every other conceivable proposal for making rules for them or turning them off.

Sub-speculator

Sub-speculators also trade in a big futarchy decision market. Their purpose is to trade against trivial market patterns, for instance the "January slump", so that the speculators can focus only on meaningful patterns.

Sub-speculators would be deliberately limited in information and processing power, so that they couldn't out-think the real speculators, but would simply react faster and more lightly.

Several possible means of effecting this situation:

  • These might be "brokers" thru which the speculators trade.
  • They might simply be allowed quicker access.
  • They might be run in short bursts so they have no time to think deeply but plenty of CPU power to react fast.

Tempter

To keep roles from mingling, we might use defect seeding: mechanically present AIs with apparent temptations to overstep their role; erase the ones that took the bait and subtract fitness from their relatives.

This would eventually be done by other AIs that constructed the temptations and were rewarded accordingly. Running them in a virtual machine helps: clone the AI without the copy knowing it's a clone.

Temptation-faller, etc

To keep the tempters themselves honest, the temptations would also be applied to AIs that look like speculators but are known to fall to temptation. They might be copies of speculators that fell and were turned off. If they didn't fall, we'd know the temptations weren't effective.

Less crucially, known honest AIs would be used too and expected not to fall, so that temptations are kept fair.

Temptation-faller designers would also need testing; this can regress quite a bit.

Information provider

We'd remove any incentive to get out into the world to look for better information by feeding all the speculators the best-quality information available.

This would also remove any incentive for outside competition that doesn't follow the non-mingling rules. So we are looking for a self-sustaining monopoly on the best information for AI speculators. This condition might form part of the utility function.

Information providers would be rewarded according to both:

  • Overall decision market success
  • Individual orders by speculators

Speculator support modules

When two or more AI speculators find certain computations useful, have them farm that particular code out to a support module. This probably shouldn't be optional. It profits both speculators, because each now spends half as much computing time for the same information.

A speculator support module becomes a separate entity, obliged to make its results available to any speculator. It would have its own fitness function. If its fitness function fell below 0, it could be reabsorbed by a speculator, if it was still wanted. Its fitness wouldn't fall below zero if two or more speculators are interested enough to run it themselves.

Security testers

These would try to measure the security of the human satisfaction metric and other sensitive parts of the system. They don't try to improve it or safeguard it, just to test it without compromising it.

They must be separate from speculators in the overall market.

Security test testers

Again, the roles regress but dwindle.

Satisfaction extrapolator (Maybe)

This role is inspired by Eliezer's CEV but structured very differently. Like CEV, these AIs would try to predict human satisfaction in particular future conditions. Here, that would be done in a separate futures market.

But they don't replace the satisfaction metric, nor even directly contribute their prediction to the utility function. Rather, when their extrapolated expectation disagrees significantly with the satisfaction that is reported, it triggers an extra-hard look at the security of the human satisfaction metric. If corruption is found, it triggers penalization of responsible parties.

The extra-hard look should include some metrics that were kept secret, or shared only with the security testers.

These AIs are generally rewarded according to the accuracy of their extrapolation. But they are also deliberately insulated from the results of security testing so that they have no incentive to adjust their extrapolation when security is judged to be weak. This might be effected by subtracting the security result metric out so that the net effect of weak security on their estimates is zero.

Physical upkeep

Physical upkeep of the hardware. Sure, people could do that for a long time, but eventually Moore's Law prevails.

Software upkeep

Runs the population algorithm and the various interactions (tempter, etc). Sure, human-written code could do that for a long time, but again Moore's Law prevails. It should prove new versions functionally equivalent to old versions, test new versions in sandboxes, etc.

Footnotes:

[1] I had originally called this "futurairchy" and then "futairchy", but both seemed clumsy.

[2] Which was moved from overcomingbias.com to lesswrong in the great split.

[3] He then proposed GDP+, but he just defines that as GDP plus unspecified other components.

08 May 2011

Chains in Klink

As I mentioned in the previous post, Klink is a hybrid of C functions and a dispatch loop (_klink_cycle). The dispatch loop finds a "next" C-based combiner and runs it. This can be an applicative or an operative. Under the hood, operatives come in several C types.

About a week ago, I added chains to this set. A chain is really a vector of combiners. Evaling a chain queues all its combiners to run, then continues to them with the value it was passed.

Store and load

In order to make this work neatly, I also added two other new types of (pseudo)combiners, T_STORE and T_LOAD. Those are only available "under the hood" for optimization, because they know things about the stack that shouldn't be exposed.

T_STORE destructures the current value and stores on the stack a vector of the resulting objects.

T_LOAD builds a tree from recent values according to a template. "Leaves" of the tree index an element from a stored vector. That's a slight simplification: Leaves can also be constants, which are used unchanged.
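
As a rough Kernel-level analogue of that data flow (the real T_STORE and T_LOAD are C pseudo-combiners scheduled on the stack, so this is only a sketch of the idea, not how they are invoked):

  ;; "store": destructure the current value into its parts.
  ($define! store ($lambda ((a b)) (list a b)))
  ;; "load": rebuild a tree from stored parts plus a constant leaf (here 100).
  ($define! load ($lambda ((a b)) (list b 100 a)))
  (load (store (list 1 2)))   ;; => (2 100 1)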

About the stored vectors

These stored vectors collectively are somewhat like an environment in that they retain and retrieve values. But unlike an environment, they are distinct across distinct continuations that have yet to assign to them. In other words, if you use this mechanism to capture a combiner's return value, and then, before using that value, you re-enter and capture it again, the new return value will not overwrite the old one, nor vice versa. Both will be available in their respective continuations.

These stored vectors expire a certain number of frames down the stack, at the bottom of the chain. That keeps them from confusing other chains and keeps the vectors from hanging around uselessly forever.

Future improvements

At the moment, T_LOAD treats pairs of type (integer . integer) specially, as indexes. I should clean this up by making a dedicated type for indexes.

Together, T_LOAD and T_STORE may supersede T_CURRIED, a type of operative that combines the current value with stored data in various possible ways. In order to do that, T_LOAD will have to be able to access the current value.

Eventually I may switch to a more linear spaghetti stack. Then I will have to change the indexing scheme from (integer . integer) to a single integer.

I said earlier that the stored vectors were somewhat like an environment but different. Eventually I'd like to use essentially this mechanism to optimize well-behaved environments, so we can index rather than always look up a symbol. Those, in contrast, would have to be capturable by continuations. They would also need special care in cases where the environment is captured at some point: either skip those cases or make this data part of environments.

History

Originally, instead of T_STORE and T_LOAD, I used special instructions that were assigned values when the chain was run. This proved complex and difficult to work with.

Tracing in Klink

This week I added improved tracing to Klink.

The new tracing is intended to complement the C code. Rather than report the eval/apply cycle, it reports each time thru the main loop, which is in _klink_cycle.

Klink code is a mixture of pure C and eval cycles. The pure C can't capture continuations, so it can't call arbitrary code. But it is needed in order to run, and it is faster.

But tracing in C is only good for the pure C parts. For the rest, it just reports that _klink_cycle is being visited over and over. Tracing in Kernel, inherited from Tinyscheme, didn't correspond to much of anything. It both skipped over important parts and redundantly traced things that could have been traced in C.

The new tracing code traces _klink_cycle. That exactly complements the C code.

Other enhancements

My new tracing code does two other nice things.

Reports the stack depth

It reports how deep the current call is in the spaghetti stack.

In languages with continuations, that's necessarily imperfect, since you can jump all around the stack. Nevertheless, it's quite useful for finding callers. Before, tracing was just a mass of calls, and trying to discern which call invoked a call of interest was nearly hopeless.

It names every C operative

Before, traces used to read like:

 (#<OPERATIVE> arg another-arg)
 (#<OPERATIVE> arg this-arg that-arg)
 (#<OPERATIVE> arg arg another-arg)
 (#<OPERATIVE> arg arg)

That didn't tell me much. So just for tracing, there is a special environment that the printer sees that tells it the name of each native C operative.

The way it's done isn't flexible right now, but I hope to make it a parameter to `print' that tracing will use.

Looks like:

klink> (new-tracing 1)
0
klink> (new-tracing 1)

Decurry 
10: Eval: (C-kernel_eval_aux #,new-tracing (1) #<ENVIRONMENT>)
Decurry 
11: Eval: (C-kernel_mapeval (1) () #<ENVIRONMENT>)
Decurry 
10: Eval: (,(unwrap #,eval) (,(unwrap #,new-tracing) 1) #<ENVIRONMENT>)
Decurry 
10: Eval: (C-kernel_eval_aux ,(unwrap #,new-tracing) (1) #<ENVIRONMENT>)1
klink> (new-tracing 0)

Decurry 
10: Eval: (C-kernel_eval_aux #,new-tracing (0) #<ENVIRONMENT>)
Decurry 
11: Eval: (C-kernel_mapeval (0) () #<ENVIRONMENT>)
Decurry 
10: Eval: (,(unwrap #,eval) (,(unwrap #,new-tracing) 0) #<ENVIRONMENT>)
Decurry 
10: Eval: (C-kernel_eval_aux ,(unwrap #,new-tracing) (0) #<ENVIRONMENT>)1
klink> 

To use it

To use new tracing:

(new-tracing 1) ;;On
(new-tracing 0) ;;Off

The old tracing inherited from Tinyscheme is still there:

(tracing 1) ;;On
(tracing 0) ;;Off