12 September 2013

My opinion, literally

Lexicography

Previously

I wrote this a few weeks back, in response to something my friend Michael wrote, but I was lazy about posting it.

It's about the Merriam-Webster dictionary defining "literally" to mean "figuratively". See Slate.

My opinion

The dictionary has to be descriptive, not prescriptive. It should reflect how people have actually spoken and used words. Ultimately, that's what language is.

But by the same token, people actually use dictionaries prescriptively. They turn to dictionaries for authority on the "right" meaning of words.

I would take a middle position. A dictionary must eventually track usage, but there's no need for it to rush to anoint every popular solecism.

What lexicographers do is collect a corpus of contemporary usage and then group the words according to word sense, as they see it. I'm not surprised that they found so many hyperbolic "literally"s. I'm sure they also had access to literally tons of people who felt figurative "literally" to be a solecism.

There's merit in characterizing these groups of words in more sophisticated ways, as some lexicographers do. Hyperbolic senses can be noted. So can loose senses ("sarcasm" that lacks irony) and apparent substitutions ("effect" where "affect" is meant).

It's too bad Merriam-Webster stopped before doing that, and I think they deserve all the criticism for it.

Patent Fix 1

Patent Fix 1

Previously

Jan Wolfe blogs "Patent defendants aren't copycats. So who's the real inventor here?"

Robin Hanson also writes about this. While his central illustration is somewhat implausible, it nevertheless puts the issue on a concrete footing. Briefly, a hypothetical business finds better routes for city drivers for a price. Then they want to forbid all the drivers in the city from driving "their" route without a license. (This example is apparently set in a world where nobody ever heard of Google Maps, Mapquest, or even paper maps.)

Also see Defending reinvention, So How Do We Fix The Patent System?, and A Call for an Independent Inventor Defense.

Recent changes in patent law don't seem to have addressed reinvention.

Some points of consensus

We all seem to agree that the biggest issue is patent protection vs re-invention. If re-invention was somehow metaphysically impossible, the patent situation would be more defensible1.

We also seem to agree that software patents aggravate this problem.

This is a problem that I have been kicking around for a while too. I have, I think, a different analysis to offer and something of a solution.

The deep problem

The deep problem, as I see it, is one common to all intellectual pseudoproperty. First the discoverer sells it - to an individual, or to society via the social contract of patenting. Only afterwards can the buyer see and evaluate the invention.

For the individual, such as the driver licensing a better route in Robin's example, this makes the route an experience good - he can only tell whether it's worth what he paid after he receives it. Then he can't return the good by unlearning it.

He may find that it's not worth what he paid because it's bad. More relevant to patent re-invention, he may find that he already knew of that route. Perhaps he sometimes uses it but mostly prefers a slightly longer scenic route. He shouldn't lose the use of a route he already knew just because he "bought" the same route from this business and then returned it.

This at least could be solved by contract - perhaps the driver can get a refund if he proves that he already knew that route. For society, it's worse. Patent law basically throws up its hands at the question of what the loss of opportunity for re-invention costs. It deems the cost to be zero, which is simply insane.

Why it's tricky to solve

It's sometimes said that the invention of the eraser on the pencil was obvious - after it was invented; before that, nobody could think of it. As it turns out, that's questionable2 for the pencil-eraser, but the general point stands.

So we don't want re-invention to consist of stating, after one has seen the invention, "Why, it's obvious to anybody". That's hindsight bias. That's cheating. We want to measure problem hardness in foresight. How hard did it appear before you knew the answer?

So how can we measure problem hardness?

For any patent, there is some problem that it solves. This isn't just philosophical; a patent application must state the problem it solves. It's part of one of the less formal sections of an application, but even so, it's made explicit for every patent the USPTO ever granted.

Imagine a public blackboard where anybody can write down a problem. Our would-be inventor writes down the problem that his as-yet undisclosed invention solves. He leaves the problem on the board for a year. After a year, nobody has written down the solution.

Our inventor then says "Given an entire year, nobody has solved this problem except me. Therefore even if my invention is granted patent protection, nobody has lost the chance to invent it themselves. Since opportunity for re-invention was the only thing anybody really stood to lose, I should be granted patent protection. I stand ready to reveal my solution, which in fact solves the problem. If I'm granted patent protection, everybody wins."

We're not assuming that our inventor has already invented the invention. He could pose a problem that he feels pretty sure he can solve and only start working on it later in the year.
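To make the bookkeeping concrete, here is a minimal sketch of the blackboard in Python. Everything here is illustrative - the class and method names are mine, not part of any real system:

    import time

    class Blackboard:
        """Toy model of the public problem board."""

        def __init__(self):
            self.problems = {}   # problem text -> (poser, posting time)
            self.solutions = {}  # problem text -> list of (solver, time)

        def post(self, problem, poser):
            self.problems[problem] = (poser, time.time())

        def claim_solution(self, problem, solver):
            self.solutions.setdefault(problem, []).append((solver, time.time()))

        def unchallenged(self, problem, period_seconds):
            """True if nobody but the poser claimed a solution within the period."""
            poser, posted_at = self.problems[problem]
            deadline = posted_at + period_seconds
            return not any(solver != poser and t <= deadline
                           for solver, t in self.solutions.get(problem, []))

If unchallenged still returns True after the year is up, that is the evidence our inventor points to when arguing that nobody lost an opportunity to re-invent.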

There are of course holes in that solution. Let's try to fill them.

Issues

A dialog about attention

"Not so fast". (Now we need names. I will call the first inventor Alice and the new voice Bob)

"Not so fast" says Bob. "I could have solved that problem, but I was working on something else more important. I'm confident that when the day comes that I actually need a solution, I can solve it myself. I like that better than paying your license fee. So your patent protection does deprive me of the opportunity for re-invention."

"You haven't proved you can do that." replies Alice. "Nobody answered my challenge, so why should I believe you?"

"Correction: they didn't answer it for free." says Bob. "It does take non-zero time and attention - and by the way, so would using your patent even if the license fee was $0"

Bob continues "Maybe other possible inventors felt the same as I did. Or maybe they're just not paying attention to that blackboard. If everybody has to pay attention to everything you write on that blackboard, that imposes a public cost too."

"I'll tell you what: Offer a reasonable bounty to make it worth my time, say $5000, and I will show you my solution to your problem."

"I haven't got $5000 to spend on that" says Alice, "I'm a struggling independent inventor with one killer idea in my pocket. And if I did have $5000 then I'd just be paying you to reveal your invention. I already have one, so the value to me is $0. If you're not bluffing, I'll be down $5000."

"If you don't have to offer a reward, I can play that game too," replied Bob, "but I won't leave it at one problem. I'll write write down thousands of them. Then I'll own all the problems that nobody paid attention to, many of which will be actually quite easy, just ignored. I'll solve some of them and nobody can use the solutions without licensing them from me."

"I see where that's going" says Alice. "I'd do the same." A look of absolute horror crossed her face. "We'd be back in the Dark Ages of the US patent system!"

How do we direct attention reasonably?

Collectively, Alice and Bob have a pretty good idea what's easy to solve and what's not. The trick is truthfully revealing that knowledge at a practical cost.

One possible solution quickly suggests itself to those acquainted with idea markets: We can adapt Hanson's old lottery trick.

I'll start by explaining the lottery. We'll go beyond that later. The idea is that Alice and everybody else who writes a problem on the blackboard pays a small posting fee. The whole amount - let's say it's still $5000 - is offered on one problem chosen by lottery. Maybe it's offered on a few of them, but in any case on a small fraction of the problems posted.

That's great for that one problem, but it leaves the other problems unmeasured.

That's where decision markets come in. Investors place bets on whether a problem, should it be chosen for a bounty, will in fact be solved. Then one particular problem is chosen by lottery. The bets about its hardness are settled while the other bets are called off. It's easy for the system to see whether someone has claimed the bounty. We won't tackle the quality of the solution until later in this post.

The hardness-market price determines whether the problem is considered easy or not - there may be finer gradations but we won't get into that here.

So we've amplified the signal. By offering a bounty on one problem, we've measured the hardness of all the problems. We'll improve on this later.
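For those who like to see the mechanics spelled out, here is a rough sketch of the lottery and settlement logic in Python. The bets are simple yes/no stakes on "will this problem be solved if it gets the bounty"; the even-money payout and all the names are mine, purely illustrative:

    import random

    def choose_bounty_problem(problem_ids):
        """The lottery: pick one posted problem to receive the pooled bounty."""
        return random.choice(problem_ids)

    def settle_bets(bets, chosen, was_solved):
        """bets: list of (bettor, problem_id, predicts_solved, stake).
        Bets on the chosen problem pay off; bets on every other problem
        are simply called off and the stake is returned."""
        payouts = {}
        for bettor, pid, predicts_solved, stake in bets:
            if pid != chosen:
                payout = stake              # called off
            elif predicts_solved == was_solved:
                payout = 2 * stake          # won (toy even-money odds)
            else:
                payout = 0                  # lost
            payouts[bettor] = payouts.get(bettor, 0) + payout
        return payouts

The number we actually read off as the hardness estimate is the market price of the "will be solved" side on each problem before the lottery is drawn - including the problems that never get the bounty.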

More dialog

So this scheme is implemented. A few weeks later, Bob comes storming in.

"Somebody wrote on the blackboard the exact problem that I've been working on!"

"That's odd." says Alice

"Did you do this to get back at me, Alice?"

"How does that inconvenience you? You could earn $5000."

"My invention is worth millions! Now I have to disclose it for a one-time payment of $5000? That doesn't even cover my research and development costs!"

"Well, you can hardly expect anyone to offer a million dollars for a bounty."

"That's true. Still, this is very wrong. Since you wrote the problem, you should be required to have a solution too."

"It just so happens that I do, but if I disclosed it, you'd just copy it. You should have to show your solution first."

"Then you'd copy it. See, Alice, I don't trust you either."

"I really had no idea you were working on it too, Bob. But if you really do have a million-dollar invention too, why should either of us sell it for $5000? As far as we know, only the two of us have it. Why should the two of us collectively get less than it's worth?"

"Luckily we found out in time. That's just sheer luck. We could agree to ignore the bounty and split the rewards of the patent between us."

"50/50"

The two inventors shook hands and then compared notes.

"Hey!" exclaimed Bob "I shouldn't have assumed that just because your invention solved the same problem, it was as good as mine. It's not! Mine's cheaper to manufacture. I'd have got about 95% of the market share."

"No, I'd have beat you. Mine's more stylish and easier to use."

The two inventors glared at each other, each convinced they had gotten the worst of the deal.

Bugs

So we have the following bugs:

  • The bounty acted like a price, but wasn't really a sensible price. In fact, it didn't even try; it just set one fixed price for everything.
  • The bounty overrode the market payoffs, which are a better measure of invention quality.
  • Relating the hardness to the number of times the bounty is claimed measures the wrong thing. What we need is solution unobviousness. This is trickier, since we have to look past the expected number of solutions and see how many actual solutions overlap.
  • If the first inventor to disclose gets a patent monopoly, it's unfair to Alice, who posed the problem and paid to do so. It shuts her out of her own invention.
  • If the first to disclose doesn't get a patent monopoly, for inventions whose expected market value is more than the bounty, the likelihood of the bounty being accepted will be too low. We'll see fewer claims than we should and therefore we'll underestimate obviousness.

Second draft

In order to fix those bugs, we're going to redo the bounty system:

  • Claiming the bounty doesn't require openly disclosing an invention. It requires only an unforgeable commit, probably as a secure hash of the disclosure document. The reveal and the payment will come later. NB this allows the original problem-poser to participate in the race for the solution. (A minimal sketch of the commit step appears after this list.)
  • After a predetermined period of time:
    • The claimed inventions are to be disclosed (all at once)
    • Inventions not disclosed within a given time-frame are ineligible.
    • Disclosures that don't match the prior commit are ineligible.
    • Unworkable inventions are ineligible.
    • Each distinct workable invention gets patent protection
      • Technically, that's each distinct patent claim - a USPTO patent application typically contains many related patent claims.
      • If multiple inventors invented the same invention, they each get an equal share. This can be gamed with sockpuppets, though.
      • Here I'm assuming that novelty against prior art isn't an issue. It won't be much of one, because why would Alice pay money to pose a problem whose solution is already publicly known? We can just say that the existence of a prior invention voids a patent claim, just like now.
    • Each distinct workable invention gets an equal fraction of the bounty.
  • Non-bountied inventions are treated essentially the same way, just minus the bounty and the bounty time period.
  • The market pays off in proportion to the count of unique workable solutions, rather than the count of all solutions.
    • We don't want to use some sort of "density of uniqueness", because that's too easily spoofed by submitting the same solution many times.
  • To estimate solution unobviousness:
    • For bountied problems, we use a count of solutions, backed off somewhat towards the market's prior estimate.
    • For non-bountied problems, we directly use the market's prior estimate.

Measuring invention workability and quality

We still need a way to measure invention workability. This is traditionally the job of a patent examiner. However, we've piled more work onto them, and there are concerns about how good a job they do. This post is already long, though, so I won't try to design a better mechanism for that here.

Progressively increasing bounties

One way to aim bounties more efficiently is to start by offering small bounties. Then for some of those problems whose bounties were not claimed, raise the bounty. That way we are not overpaying for easy answers.
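A sketch of that escalation loop, with placeholder amounts and a hypothetical offer_bounty callback that runs one bounty period at the given amount and reports whether anyone claimed it:

    def escalate_bounty(problem, offer_bounty, schedule=(500, 1000, 2000, 5000)):
        """Offer increasing bounties until someone claims one; amounts are placeholders."""
        for amount in schedule:
            if offer_bounty(problem, amount):
                return amount        # roughly what the problem was actually worth paying
        return None                  # unclaimed even at the top offer

The amount at which the bounty is finally claimed also doubles as a crude hardness signal.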

Time frame

We haven't addressed the question of how long the problem period should be. We may want it to work differently when there is a bounty, since then we need a standard measuring period.

It's not hard to propose. I could say "six months", but I want to leave it open to more flexible rules, so I won't propose anything here.

Relates to how long a patent monopoly should be in force

This mechanism also provides a somewhat more principled way of deciding how long a patent should be in force: it should relate to how long the problem period is. Perhaps the two should be linearly proportional.

Bug: We charge for posing a problem

Posing a problem well is valuable and is a task in and of itself. Yet we've charged the problem-poser for the privilege. This isn't good, and I'd like it to be the other way.

We could try to recurse, with the problem being: "What are some unsolved problems in field X?" but then the solution is no longer in a standard form as formal patent applications are.

This post is already long, so I will leave it at that.

Footnotes:

1 I will mention in passing that AIUI re-invention is allowed but only under stringent conditions that are only practical for well-heeled institutions. The slightest exposure to a patent "taints" a would-be rediscoverer forever. IANAL so take this with a grain of salt.

2 That patent was later disallowed because it was a mere combination of two things, which is not patentable. See Eraser. Regardless, the general point stands.

05 December 2012

Causal Dynamical Triangulation

Causal Dynamical Triangulation

I've been reading up on Causal Dynamical Triangulation (CDT) (by Loll, Ambjoern, and Jurkiewicz). It's an approach to quantum gravity related to Loop Quantum Gravity (LQG), which you may have read the Scientific American article on a few years back.
What it (like LQG) has to recommend it is that the structure of space emerges from the theory itself. Basically, it proposes a topological substrate (spin-foam) made of simplexes (lines, triangles, tetrahedrons, etc). Spatial curvature emerges from how those simplexes can join together.

Degeneration and the arrow of time

The big problem for CDT in its early form was that the space that emerged was not our space. What emerged was one of two degenerate forms: either it had infinite dimensions or it had just one. The topology went to one of two extremes of connectedness.
The key insight for CDT was that space emerges correctly if edges of simplexes can only be joined when their arrows of time are pointing in the same direction.

So time doesn't emerge?

But some like to see the "arrow of time" as emergent. The view is that it's not so much that states only mix (unmix) along the arrow of time. It's the other way around: "time" has an arrow of time because it has an unmixed state at one end (or point) and a mixed state at the other.
To say the same thing in a different way, the rule isn't that the arrow of time makes entropy increase, it's that when you have an entropy gradient along a time-like curve, you have an arrow of time.
The appeal is that we don't have to say that the time dimension has special rules such as making entropy increase in one direction. Also, both QM and relativity show us a time-symmetrical picture of fundamental interactions and emergent arrow-of-time doesn't mess that picture up.

Observables and CDT

So I immediately had to wonder, could the "only join edges if arrows of time are the same" behavior be emergent?
In quantum mechanics, you can only observe certain aspects of a wavefunction, called Observables. Given a superposition of arrow-matched and arrow-mismatched CDT states, is it the case that only the arrow-matched state is observable? Ie that any self-adjoint operator must be only a function of arrow-matched states?
I frankly don't know CDT remotely well enough to say, but it doesn't sound promising and I have to suspect that Loll et al already looked at that.

A weaker variant

So I'm pessimistic about a theory where mismatched arrows are simply always cosmically censored.
But as far as my limited understanding of CDT goes, with all due humility, there's room for them to be mostly censored. Like, arrow-mismatched components are strongly suppressed in all observables in cases where there's a strong arrow of time.

Degeneration: A feature, not a bug?

It occurred to me that the degeneration I described earlier might be a feature and not a bug.
Suppose for a moment that CDT is true but that the "only join edges if arrows of time are the same" behavior is just emergent, not fundamental. What happens in the far future, the heat death of the universe, when entropy has basically maxed out?
Space degenerates. It doesn't even resemble our space. It's either an infinite-dimensional complete graph or a one-dimensional line.

The Boltzmann Brain paradox

What's good about that is that it may solve the Boltzmann Brain paradox. Which is this:
What's the likelihood that a brain (and mind) just like yours would arise from random quantum fluctuations in empty space? Say, in a section of interstellar space a million cubic miles in volume which we observe for one minute?
Very small. Very, very small. But it's not zero. Nor does it even approach zero as the universe ages and gets less dense, at least not if the cosmological constant is non-zero. The probability has a lower limit.
Well, multiplying that by an infinite span of time gives an infinite number of expected cases of Boltzmann Brains exactly like our own. The situation should be utterly dominated by those cases. But that's the opposite of what we see.
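To make the arithmetic explicit (a rough sketch; p_min, V and T are symbols I'm introducing for illustration): if the probability of such a fluctuation per unit volume per unit time is bounded below by some p_min > 0, then the expected number of Boltzmann Brains in a region of volume V watched for time T is at least

    E[N] >= p_min * V * T

which diverges as T goes to infinity.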

Degeneracy to the rescue

But if CDT and emergent time are true, the universe would have degenerated long before that time. Waving my hands a bit, I doubt that a Boltzmann Brain could exist even momentarily in that sort of space. Paradox solved.

Is that the Big Rip?

(The foregoing was speculative and hand-waving, but this will be far more so)
Having described that degeneration, I can't help noticing its resemblance to the Big Rip, the hypothesized future event when cosmological expansion dominates the universe and tears everything apart.
That makes me wonder if the accelerating expansion of space that we see could be explained along similar lines. Like, the emergent arrow-of-time-matching isn't quite 100% perfect, and when it "misses", space expands a little.
This would fit with the weaker variant proposed above.

Problems

For one thing, it's not clear how it could explain the missing 72.8% of the universe's mass as dark energy was hypothesized to.

End

Now my hands are tired from all the hand-waving I'm doing, so I'll stop.

Edit: dynamic -> dynamical

Meaning 2

Meaning 2

Previously

I relayed the definition of "meaning" that I consider best, which is generally accepted in semiotics:
X means Y just if X is a reliable indication of Y
Lameen Souag asked a good question:
how would [meaning as reliable indication] account for the fact that lies have a meaning?

Lies

"Reliable" doesn't mean foolproof. Good liars do abuse reliable indicators.
Second, when we have seen through a lie, we do use the term "meaning" in that way. When you know that someone is a liar, you might say "what she says doesn't mean anything" (doesn't reliably indicate anything). Or you might speak of a meaning that has little to do with the lie's literal words, but accords with what it reliably indicates: "When he says `trust me', that means you should keep your wallet closed."

Language interpretation

Perhaps you were speaking of a more surface sense of the lie's meaning? Like, you could say "Sabrina listed this item on Ebay as a 'new computer', but it's actually a used mop." Even people who considered her a liar and her utterances unreliable could understand what her promise meant; that's how they know she told a lie. They extract a meaning from an utterance even though they know it doesn't reliably indicate anything. Is that a fair summation of your point?
To understand utterances divorced from who actually says them, we use a consensus of how to transform from words and constructions to indicators; a language.
Don't throw away the context, though. We divorced the utterance from its circumstances and viewed it thru other people's consensus. We can't turn around and treat what we get thru that process as things we directly obtained from the situation; they weren't.
If Sabrina was reliable in her speech (wouldn't lie etc), we could take a shortcut here, because viewing her utterance thru others' consensus wouldn't change what it means. But she isn't, so we have to remember that the reliable-in-the-consensus indicators are not reliable in the real circumstances (Sabrina's Ebay postings).
So when interpreting a lie, we get a modified sense of meaning. "Consensus meaning", if you will. It's still a meaning (reliable indication), but we mustn't forget how we obtained it: not from the physical situation itself but via a consensus.

The consensus / language

NB, that only works because the (consensus of) language transforms words and constructions in reliable ways. If a lot of people used language very unreliably, it wouldn't. What if (say) half the speakers substituted antonyms on odd-numbered days, or when they secretly flipped a coin and it came up tails? How could you extract much meaning from what they said?

Not all interpretations are created equal

This may sound like All Interpretations Are Created Equal, and therefore you can't say objectively that Sabrina committed fraud; that's just your interpretation of what she said; there could be others. But that's not what I mean at all.
For instance, we can deduce that she committed fraud (taking the report as true).
At the start of our reasoning process, we only know her locutionary act - the physical expression of it, posting 'new computer for sale'. We don't assume anything about her perlocutionary act - convincing you (or someone) that she offers a new computer for sale.
  1. She knows the language (Assumption, so we can skip some boring parts)
  2. You might believe what she tells you (Assumption)
  3. Since the item is actually an old mop, making you believe that she offers a new computer is fraud. (Assumption)
  4. Under the language consensus, 'new computer' reliably indicates new computer (common vocabulary)
  5. Since she knows the language, she knew 'new computer' would be transformed reliably-in-the-consensus to indicate new computer (by 1&4)
  6. Reliably indicating 'new computer' to you implies meaning new computer to you. (by definition) (So now we begin to see her perlocutionary act)
  7. So by her uttering 'new computer', she has conveyed to you that she is offering a new computer (by 5&6)
  8. She thereby attempts the perlocutionary act of persuading you that she offers a new computer (by 2&7)
  9. She thereby commits fraud (by 3&8)
I made some assumptions for brevity, but the point is that with no more than this definition of meaning and language-as-mere-consensus, we can make interesting, reasonable deductions.

(Late edits for clarity)

30 August 2012

Fairchy 3: Ultimate secure choice

Ultimate secure choice

Previously

I wrote about Fairchy, an idea drawn from both decision markets and FAI that I hope offers a way around the Clippy problem and the box problem that FAI has.

Measuring human satisfaction without human frailties

One critical component of the idea is that (here comes a big mental chunk) the system predictively optimizes a utility function that's partly determined by surveying citizens. It's much like voting in an election, but it measures each citizen's self-reported satisfaction.
But for that, human frailty is a big issue. There are any number of potential ways to manipulate such a poll. A manipulator could (say) spray oxytocin into the air at a polling place, artificially raising the reported satisfaction. And it can only get worse in the future. If elections and polls are shaky now, how meaningless would they be with nearly godlike AIs trying to manipulate the results?
But measuring the right thing is crucial here, otherwise it won't optimize the right thing.

Could mind uploading offer a principled solution?

It doesn't help non-uploads

I'll get this out of the way immediately: The following idea will do nothing to help people who are not uploaded. Which right now is you and me and everyone else. That's not its point. Its point is to arrive before super-intelligent AIs do.
This seems like a reasonable expectation. Computer hardware probably has to get fast enough to "do" human-level intelligence before it can do super-human intelligence.
It's not a sure thing, though. It's conceivable that running human-level intelligence via upload-and-emulating, even with shortcuts, could be much slower than running a programmed super-human AI.

First part: Run a verified mind securely

Enough caveats. On to the idea itself.
The first part of the idea is to run uploaded minds securely.
  • Verify that the mind data is what was originally uploaded.
  • Verify that the simulated environment is a standard environment, one designed not to prejudice the voter. This environment may include a random seed.
  • Poll the mind in the secure simulated environment.
  • Output the satisfaction metric.
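A bare-bones sketch of that loop, treating the mind data and the standard environment as byte strings we can hash. Real secure computation would do far more than compare hashes; poll_in_environment and the other names are mine:

    import hashlib

    def sha256(data):
        return hashlib.sha256(data).hexdigest()

    def run_poll(mind_data, env_data, registered_mind_hash, standard_env_hash,
                 poll_in_environment):
        """Returns (verified, satisfaction); satisfaction is None if verification fails."""
        if sha256(mind_data) != registered_mind_hash:
            return False, None      # not the mind that was originally uploaded
        if sha256(env_data) != standard_env_hash:
            return False, None      # not the standard, non-prejudicial environment
        satisfaction = poll_in_environment(mind_data, env_data)   # run the mind, ask the question
        return True, satisfaction

If the environment includes a random seed, the seed would presumably be supplied alongside env_data rather than folded into the standard hash.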
This seems doable. There's been a fair amount of work on secure computation on untrusted machines, and there's sure to be further development. That will probably be secure even in the face of obscene amounts of adversarial computing power.
And how I propose to ensure that this is actually done:
One important aspect of secure computation is that it provides hard-to-forge evidence of compliance. With this in hand, FAIrchy gives us an easy answer: Make this verification a component of the utility function (Further on, I assume this connection is elaborated as needed for various commit logs etc).
This isn't primarily meant to withhold reward from manipulators, but to create incentive to keep the system running and secure. To withhold reward from manipulators, when a failure to verify is seen, the system might escrow a proportionate part of the payoff until the mind in question is rerun and the computation verifies.

Problems

  • It's only as strong as strong encryption.
  • How does the mind know the state of the world, especially of his personal interests? If we have to teach him the state of the world:
    • It's hard to be reasonably complete wrt his interests
    • It's very very hard to do so without creating opportunities for distortion and other adverse presentation.
    • He can't have and use secret personal interests
  • Dilemma:
    • If the mind we poll is the same mind who is "doing the living":
      • We've cut him off from the world to an unconscionable degree.
      • Were he to communicate, privacy is impossible for him.
      • We have to essentially run him all the time forever with 100% uptime, making maintenance and upgrading harder and potentially unfair.
      • Presumably everyone runs with the same government-specified computing horsepower, so it's not clear that individuals could buy more; in this it's socialist.
      • Constant running makes verification harder, possibly very much.
    • If it isn't, his satisfaction can diverge from the version(s) of him that are "doing the living". In particular, it gives no incentive for anyone to respect those versions' interests, since they are not reflected in the reported satisfaction.
  • On failure to verify, how do we retry from a good state?
  • It's inefficient. Everything, important or trivial, must be done under secure computation.
  • It's rigidly tied to the original state of the upload. Eventually it might come to feel like being governed by our two-year-old former selves.

Strong encryption

The first problem is the easy one. Being only as strong as strong encryption still puts it on very strong footing.
  • Current encryption is secure even under extreme extrapolations of conventional computing power.
  • Even though RSA (prime-factoring) encryption may fall to Shor's Algorithm when quantum computing becomes practical, some encryption functions are not expected to.
  • Even if encryption doesn't always win the crypto "arms race" as it's expected to, it gives the forces of legitimacy an advantage.

Second part: Expand the scope of action

ISTM the solution to these problems is to expand the scope of this mechanism. No longer do we just poll him, we allow him to use this secure computation as a platform to:
  • Exchange information
    • Surf-wise, email-wise, etc. Think ordinary net connection.
    • Intended for:
      • News and tracking the state of the world
      • Learning about offers.
      • Negotiating agreements
      • Communicating and co-ordinating with others, perhaps loved ones or coworkers.
      • Anything. He can just waste time and bandwidth.
  • Perform legal actions externally
    • Spend money or other possessions
    • Contract to agreements
    • Delegate his personal utility metric, or some fraction of it. Ie, that fraction of it would then be taken from the given external source; presumably there'd be unforgeable digital signing involved. Presumably he'd delegate it to some sort of external successor self or selves.
    • Delegate any other legal powers.
    • (This all only goes thru if the computation running him verifies, but all attempts are logged)
  • Commit to alterations of his environment and even of his self.
    • This includes even committing to an altered self created outside the environment.
    • Safeguards:
      • This too should only go thru if the computation running him verifies, and attempts should be logged.
      • It shouldn't be possible to do this accidentally.
      • He'll have opportunity and advice to stringently verify its correctness first.
      • There may be some "tryout" functionality whereby his earlier self will be run (later or in parallel) to pass judgement on the goodness of the upgrade.
  • Verify digital signatures and similar
    • Eg, to check that external actions have been performed as represented.
    • (This function is within the secure computation but external to the mind. Think running GPG at will)
The intention is that he initially "lives" in the limited, one-size-fits-all government-issue secure computing environment, but uses these abilities to securely move himself outwards to better secure environments. He could entirely delegate himself out of the standard environment or continue to use it as a home base of sorts; I provided as much flexibility there as I could.

Problems solved


This would immediately solve most of the problems above:
  • He can know the state of the world, especially of his personal interests, by surfing for news, contacting friends, basically using a net connection.
  • Since he is the same mind who is "doing the living" except as he delegates otherwise, there's no divergence of satisfaction.
  • He can avail himself of more efficient computation if he chooses, in any manner and degree that's for sale.
  • He's not rigidly tied to the original state of the upload. He can grow, even in ways that we can't conceive of today.
  • His inputs and outputs are no longer cut off from the world even before he externalizes.
  • Individuals can buy more computing horsepower (and anything else), though they can only use it externally. Even that restriction seems unnecessary, but that's a more complex design.
Tackling the remaining problems:
  • Restart: Of course he'd restart from the last known good state.
    • Since we block legal actions for unverified runs, a malicious host can't get him into any trouble.
    • We minimize ambiguity about which state is the last known good state to make it hard to game on that.
      • The verification logs are public or otherwise overseen.
      • (I think there's more that has to be done. Think Bitcoin blockchains as a possible model)
  • Running all the time:
    • Although he initially "lives" there, he has reasonable other options, so ISTM the requirements are less stringent:
      • Uneven downtime, maintenance, and upgrading is less unfair.
      • Downtime is less unconscionable, especially after he has had a chance to establish a presence outside.
    • The use of virtual hosting may make this easier to do and fairer to citizens.
  • Privacy of communications:
    • Encrypt his communications.
    • Obscure his communications' destinations. Think Tor or Mixmaster.
  • Privacy of self:
    • Encrypt his mind data before it's made available to the host
    • Encrypt his mind even as it's processed by the host (http://en.wikipedia.org/wiki/Homomorphic_computing). This may not be practical, because it's much slower than normal computing. Remember, we need this to be fast enough to be doable before super-intelligent AIs are.
    • "Secret-share" him to many independent hosts, which combine their results. This may fall out naturally from human brain organization. Even if it doesn't, it seems possible to introduce confusion and diffusion.
    • (This is a tough problem)
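Here is the simplest possible n-of-n sharing scheme (XOR-based), just to show the flavor. A real deployment would likely want a threshold scheme such as Shamir's, and sharing says nothing yet about how the hosts compute on the shares, which is the genuinely tough part:

    import secrets

    def xor_all(chunks):
        """XOR a list of equal-length byte strings together."""
        out = bytes(len(chunks[0]))
        for c in chunks:
            out = bytes(a ^ b for a, b in zip(out, c))
        return out

    def share_bytes(data, n_hosts):
        """Split data into n_hosts shares; any n_hosts - 1 shares alone reveal nothing."""
        random_shares = [secrets.token_bytes(len(data)) for _ in range(n_hosts - 1)]
        final_share = xor_all(random_shares + [data])
        return random_shares + [final_share]

    def reconstruct(shares):
        """XOR of all the shares recovers the original data."""
        return xor_all(shares)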

Security holes

The broader functionality opens many security holes, largely about providing an honest, empowering environment to the mind. I won't expand on them in this post, but I think they are not hard to close with creative thinking.
There's just one potential exploit I want to focus on: A host running someone multiple times, either in succession or staggered in parallel. If he interacts with the world, say by reading news, this introduces small variations which may yield different results. Not just different satisfaction results, but different delegations, contracts, etc. A manipulator would then choose the most favorable outcome and report that as the "real" result, silently discarding the others.
One solution is to make a host commit so often that it cannot hold multiple potentially-committable versions very long.
  • Require a certain pace of computation.
  • Use frequent unforgeable digital timestamps so a host must commit frequently.
  • Sign and log the citizen's external communications so that any second stream of them becomes publicly obvious. This need not reveal the communications' content.
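To sketch the "commit frequently" idea: each commit includes the hash of the previous one, so a host quietly maintaining a second, divergent stream of someone would show up as a fork in the public log. The names are mine, and a real system would use an external unforgeable timestamping service rather than the host's own clock:

    import hashlib, json, time

    def commit_entry(prev_hash, state_digest):
        """One link in the public commit chain for a hosted citizen."""
        entry = {
            "prev": prev_hash,          # hash of the previous entry (None for the first)
            "state": state_digest,      # hash of the citizen's current state
            "time": time.time(),        # stand-in for an unforgeable external timestamp
        }
        entry_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode("utf-8")).hexdigest()
        return entry, entry_hash

    def verify_chain(entries, max_gap_seconds):
        """Check the hash links and that commits keep to the required pace."""
        prev_hash, prev_time = None, None
        for entry, entry_hash in entries:
            recomputed = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode("utf-8")).hexdigest()
            if entry["prev"] != prev_hash or recomputed != entry_hash:
                return False
            if prev_time is not None and entry["time"] - prev_time > max_gap_seconds:
                return False            # host fell behind the required pace
            prev_hash, prev_time = entry_hash, entry["time"]
        return True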

Checking via redundancy

Unlike the threat of a host running multiple diverging copies of someone, running multiple non-diverging copies on multiple independent hosts may be desirable, because:
  • It makes the "secret-share" approach above possible
  • A citizen's computational substrate is not controlled by any one entity, which follows a general principle in security to guard against exploits that depend on monopolizing access.
  • It is likely to detect non-verification much earlier.
However, the CAP theorem makes the ideal case impossible. We may have to settle for softer guarantees like Eventual Consistency.

(Edit: Fixed stray anchor that Blogspot doesn't handle nicely)

09 August 2012

Parallel Dark Matter

Parallel Dark Matter 9

Previously

I have been blogging about a theory I call Parallel Dark Matter (and here and here), which I may not be the first to propose, though I seem to be the first to flesh the idea out.

In particular, I mentioned recent news that the solar system appears devoid of dark matter, something that PDM predicted and no other dark matter theory did.

Watch that title!

So I was very surprised to read Plenty of Dark Matter Near the Sun (or here). It appeared to contradict not only the earlier success of PDM but also the recent observations.

But when I got the paper that the article is based on (here and from the URL it looks like arXiv has it too), the abstract immediately set the record straight.

By "near the sun", they don't mean "in the solar system" like you might think. They mean the stellar neighborhood. It's not immediately obvious just how big a chunk of stellar neighborhood they are talking about, but you may get some idea from the fact that their primary data is photometric distances to a set of K dwarf stars.

The paper

Silvia Garbari, Chao Liu, Justin I. Read, George Lake. A new determination of the local dark matter density from the kinematics of K dwarfs. Monthly Notices of the Royal Astronomical Society, 9 August 2012; 2012arXiv1206.0015G (here)

But that's not the worst

science20.com got it worse: "Lots Of Dark Matter Near The Sun, Says Computer Model". No and no. They used a simulation of dark matter to calibrate their mass computations. They did not draw their conclusions from it.

And the Milky Way's halo may not be spherical

The most interesting bit IMO is that their result "is at mild tension with extrapolations from the rotation curve that assume a spherical halo. Our result can be explained by a larger normalisation for the local Milky Way rotation curve, an oblate dark matter halo, a local disc of dark matter, or some combination of these."

04 August 2012

Plastination 3

Plastination 3

Previously

I blogged about Plastination, a potential alternative to cryonics, and suggested storing, along with the patient, an EEG of their healthy brain activity.

Why?

Some people misunderstood the point of doing that. It is to provide a potential cross-check. I won't try to guess how future simulators might best use the cross-check.

And it isn't intended to rule out storing fMRI or MEG data also, although neither seems practical to get every six months or so.

MEG-MRI

But what to my wondering eyes should appear a few days after I wrote that? MEG-MRI, a technology that claims unprecedented accuracy in measuring brain activity.

So I wrote this follow-up post to note MEG-MRI as another potential source of cross-checking information.