16 December 2013

Spontaneous Dimensional Reduction

Previously

I have been reading a few papers by Steven Carlip on spontaneous dimensional reduction; essentially the same material appears here, and most recently here.
Carlip is probably best known for his article in Scientific American on similar themes. There he played with a 2+1-dimensional "Flatland" universe; here he is seriously proposing a 1+1-dimensional one.
It's not as crazy as it sounds. In fact, I find it quite promising.

In a nutshell

Spontaneous dimensional reduction is his idea that at the very smallest scales, space is 1-dimensional (so spacetime is 1+1-dimensional). He brings together various lines of evidence that support this, including his own treatment of the Wheeler-DeWitt equation at extremely small scales.
Discussing the last point, he suggests that spacetime at small scales "spends most of its time" in or near a Kasner solution, an anisotropic solution to general relativity that applies in 3 or more spatial dimensions. He argues that Kasner solutions favor 1 dimension - strongly so if contracting, less strongly if expanding.
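For reference, the vacuum Kasner metric in 3+1 dimensions is (textbook material, my addition, nothing specific to Carlip's papers):

  ds^2 = -dt^2 + t^{2p_1} dx^2 + t^{2p_2} dy^2 + t^{2p_3} dz^2, \qquad \sum_i p_i = \sum_i p_i^2 = 1

Apart from the degenerate case (p_1, p_2, p_3) = (1, 0, 0), those two constraints force exactly one exponent to have the opposite sign from the other two, so one axis always behaves differently from the rest. That built-in anisotropy is what singles out a preferred direction.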
Elsewhere he argues that focusing effects dominate, albeit in a slightly different context. This would imply that the contracting state dominates, which is basically what he needs for this to work. To my knowledge he hasn't explicitly applied this to 1+1 dimensions - that puzzles me, since his two ideas seem to fit each other nicely.
Kasner solutions are vacuum solutions - solutions that apply only to empty space. Carlip argues that, viewed at extremely small scales, spacetime is effectively flattened - effectively empty - so vacuum solutions are the relevant ones.
At larger scales, he says that expansion and contraction alternate repeatedly and chaotically, the general idea being a Mixmaster universe or a BKL singularity. The familiar 3 spatial dimensions are built from 1-dimensional pieces, rather like Tinkertoy structures are built from rods.

Features

Carlip doesn't appear to cover some of the nice features of 1+1 dimensionality, but I will.

Scalar propagator for gravity

The first one he does mention: in 1+1 dimensions, the gravitational propagator is a scalar. All the problems with renormalizing gravity come from its having a non-scalar propagator - in fact, a rank-2 tensor, where the other fundamental forces have rank-1 tensor propagators, i.e. vectors. With a scalar propagator, those problems should all go away.
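One quick way to see why 1+1 dimensions is so friendly here is standard power counting (again textbook material, my addition). From the Einstein-Hilbert action,

  S = \frac{1}{16\pi G} \int d^d x \, \sqrt{-g}\, R \qquad \Rightarrow \qquad [G] = (\text{mass})^{2-d}

In d = 4 spacetime dimensions the coupling has negative mass dimension, the classic signature of a non-renormalizable theory; in d = 2 it is dimensionless.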
My guess is that the other fundamental forces might also see a solution without renormalization from this. Nobody really likes renormalization; it's just been a necessary evil in quantum field theory. Presumably that would happen at an intermediate scale that has 2+1 dimensions.

The hierarchy problem

The hierarchy problem asks: why is gravity so much weaker than the other forces? For instance, you can lift a brick against the pull of the entire Earth. The electromagnetic forces of the molecular bonds in your hand and arm exceed the gravitational force exerted by the 6.6-sextillion-ton Earth.
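To put a number on that weakness (a standard figure, my addition): for two protons,

  \frac{F_{\mathrm{em}}}{F_{\mathrm{grav}}} = \frac{e^2}{4\pi\epsilon_0\, G m_p^2} \approx 1.2 \times 10^{36}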
This offers an answer. By way of background, gravity requires 3 spatial dimensions in order to propagate: 1 dimension of travel and 2 transverse dimensions. That's because it's a spin-2 force, which is also why its propagator is a rank-2 tensor.
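The usual count of graviton polarizations makes the same point (my addition, not Carlip's). A massless spin-2 field in D spacetime dimensions has

  N_{\mathrm{pol}} = \frac{D(D-3)}{2}

polarizations: 2 for D = 4, 0 for D = 3, and no propagating modes at all for D = 2.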
So spontaneous dimensional reduction says that gravity can't propagate at all at small scales, only at large scales. This may be enough to explain the hierarchy problem. (That's my conjecture, not Carlip's.)
"But wait", you say. "If it can't propagate at small scales, how does it get anywhere at larger scales? That's like saying, I can't walk three feet but I can walk a mile. Surely the big journey is made of little journeys?"
Well, what Carlip suggests elsewhere (here he may be summarizing others' work) is that in reduced dimensions, gravity instead rearranges the topology of space, presumably affecting the BKL or Mixmaster behavior. This may be enough to let it propagate at larger scales.

The self-energy problem

In a nutshell, the self-energy problem is this: if forces like the Coulomb force go as 1/r² and therefore diminish with distance, then at small distances they grow without bound, becoming infinite at r=0.
But (me again) in a 1-dimensional space, that doesn't happen. Forces go as 1/r⁰, which is to say they are insensitive to distance. No self-energy problem.
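A back-of-the-envelope version (my addition): in d spatial dimensions, Gauss's law gives a field E ∝ 1/r^(d-1), so the energy stored in the field near the origin goes as

  U_{\mathrm{self}} \propto \int_0 E^2\, r^{d-1}\, dr \propto \int_0 r^{1-d}\, dr

which diverges at r → 0 for d ≥ 2 but is perfectly finite there for d = 1.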
Further, there's helpful logic in the other direction. Why does spacetime do this at small distances? Why a Kasner-like solution instead of a simpler isotropic one? Because if it didn't, there would be infinite forces at small distances. If we don't need renormalization, we can simply adopt the principle that energies can't be infinite, and then we find that a 1+1-dimensional Kasner-like spacetime is needed at small scales.

Potential for insight about dark energy

Everybody's heard this for years, so I'll be brief: dark energy, whatever it is, is making the universe expand faster.
If Carlip's theory regarding Kasner solutions is true, then at small scales space is constantly expanding and contracting. This suggests (me again) some relation to dark energy. Maybe it's as simple as whether contraction or expansion dominates at a given scale, and by how much.

18 November 2013

Heidegger 2

Previously

I previously blogged my answer to Martin Heidegger's deep question, "Why is there something rather than nothing?"

I just wrote it up for a friend. It says basically the same thing the earlier post does, but in a more accessible form.

What's not a good answer

First, let me say what is not a good answer. For instance, it's not a good answer to talk about quantum fluctuations creating matter out of empty space. That may or may not follow from the rules of quantum mechanics, but those rules are a "something" too. Why do they exist? So to my mind, that doesn't really answer the question.

The full flavor of the question

Heidegger's question is deeper than that. What it asks us to explain is not why there is matter, or why there is quantum mechanics, but why there is anything at all. Why does the world have any structure whatsoever?

My insight

My insight was that the question still assumes one little thing: that it's one or the other, either/or, obeying the law of the excluded middle. Which I know sounds like simple common sense, but consider this: any evidence it could possibly be based on is a something too, and so is the law of the excluded middle. Even the law of non-contradiction is a something about which we can ask why it exists.

So take a deep intellectual breath and imagine for a moment that it could be both ways. Imagine that you can see both a world of nothingness and a normal world. Doesn't matter how. If you like, you can imagine some sort of blend of a something-world and a nothing-world, or a split-screen of both worlds, or perhaps you gaze alternately on one world and the other, or teleport between them.

What would the nothingness look like? Seems like nothingness wouldn't make much of an impression. It wouldn't even mark its absence by the passage of time or an empty reach of space. It hasn't got time and space or anything else. It hasn't got its half of the split-screen you may have imagined. It hasn't even got a you in it to do the perceiving. Seems to me nothingness makes absolutely zero impression of any kind.

Now add up the impressions of both worlds. You get all the impressions from the normal world of somethings, plus zero. So you see just the normal world.

So that's my anthropic, multiple-worlds answer to Heidegger's question. Even if you start with no assumption of something-ness, you end up seeing a world of somethings - a world with some propositions about it that aren't both-true-and-false or neither-true-nor-false. QED.

T's to cross and i's to dot

There are some philosophical t's to cross and i's to dot, but AFAICT they cross and dot easily. (For example: are there other ways to aggregate the impressions of two worlds that give a different result? No; by definition, aggregating X with nothing gives X.)

12 September 2013

My opinion, literally

Lexicography

Previously

I wrote this a few weeks back, in response to something my friend Michael wrote, but I was lazy about posting it.

It's about the Merriam-Webster dictionary defining "literally" to mean "figuratively". See Slate.

My opinion

The dictionary has to be descriptive, not prescriptive. It should reflect how people have actually spoken and used words. Ultimately, that's what language is.

But by the same token, people actually use dictionaries prescriptively. They turn to dictionaries for authority on the "right" meaning of words.

I would take a middle position. A dictionary must eventually track usage, but there's no need for it to rush to anoint every popular solecism.

What lexicographers do is collect a corpus of contemporary usage and then group the words according to word sense, as they see it. I'm not surprised that they found so many hyperbolic "literally"s. I'm sure they also had access to literally tons of people who felt figurative "literally" to be a solecism.

There's merit - and some lexicographers do this - in characterizing these groups of words in more sophisticated ways. Hyperbolic senses can be noted. So can loose senses ("sarcasm" that lacks irony) and apparent substitutions ("effect" where "affect" is meant).

It's too bad Merriam-Webster stopped before doing that, and I think they deserve all the criticism for it.

Patent Fix 1

Previously

Jan Wolfe blogs "Patent defendants aren't copycats. So who's the real inventor here?"

Robin Hanson also writes about this. While his central illustration is somewhat implausible, it nevertheless puts the issue on a concrete footing. Briefly, a hypothetical business finds better routes for city drivers, for a price. Then they want to forbid all the drivers in the city from driving "their" route without a license. (This example is apparently set in a world where nobody ever heard of Google Maps, MapQuest, or even paper maps.)

Also see Defending reinvention, So How Do We Fix The Patent System?, and A Call for an Independent Inventor Defense.

Recent changes in patent law don't seem to have addressed reinvention.

Some points of consensus

We all seem to agree that the biggest issue is patent protection vs re-invention. If re-invention were somehow metaphysically impossible, the patent situation would be more defensible1.

We also seem to agree that software patents aggravate this problem.

This is a problem that I have been kicking around for a while too. I have, I think, a different analysis to offer and something of a solution.

The deep problem

The deep problem, as I see it, is one common to all intellectual pseudoproperty. First the discoverer sells it - to an individual, or to society via the social contract of patenting. Only afterwards can the buyer see and evaluate the invention.

For the individual, such as the driver licensing a better route in Robin's example, this makes the route an experience good - he can only tell whether it's worth what he paid after he receives it. Then he can't return the good by unlearning it.

He may find that it's not worth what he paid because it's bad. More relevant to patent re-invention, he may find that he already knew of that route. Perhaps he sometimes uses it but mostly prefers a slightly longer scenic route. He shouldn't lose the use of a route he already knew just because he "bought" the same route from this business and then returned it.

This, at least, could be solved by contract - perhaps the driver gets a refund if he proves that he already knew the route. For society, it's worse. Patent law basically throws up its hands at the question of what the lost opportunity for re-invention costs. It deems the cost to be zero, which is simply insane.

Why it's tricky to solve

It's sometimes said that the eraser on the end of the pencil was obvious - after it was invented; before that, nobody had thought of it. As it turns out, that's questionable2 for the pencil-eraser, but the general point stands.

So we don't want re-invention to consist of stating, after one has seen the invention, "Why, it's obvious to anybody". That's hindsight bias. That's cheating. We want to measure problem hardness in foresight. How hard did it appear before you knew the answer?

So how can we measure problem hardness?

For any patent, there is some problem that it solves. This isn't just philosophical; a patent application must state that problem. It's part of one of the less formal sections of an application, but even so, it's made explicit for every patent the USPTO has ever granted.

Imagine a public blackboard where anybody can write down a problem. Our would-be inventor writes down the problem that his as-yet undisclosed invention solves. He leaves the problem on the board for a year. After a year, nobody has written down the solution.

Our inventor then says "Given an entire year, nobody has solved this problem except me. Therefore even if my invention is granted patent protection, nobody has lost the chance to invent it themselves. Since opportunity for re-invention was the only thing anybody really stood to lose, I should be granted patent protection. I stand ready to reveal my solution, which in fact solves the problem. If I'm granted patent protection, everybody wins."

We're not assuming that our inventor has already invented the invention. He could pose a problem that he feels pretty sure he can solve and only start working on it later in the year.
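To make that concrete, here is a minimal sketch of the blackboard in Python. It's my own illustration; every name in it (ProblemBoard, CHALLENGE_PERIOD, and so on) is invented for this post, and it ignores identity, trusted timestamps, and everything else a real system would need.

  # A toy model of the public problem blackboard.
  from dataclasses import dataclass, field
  from datetime import datetime, timedelta

  CHALLENGE_PERIOD = timedelta(days=365)  # "leaves the problem on the board for a year"

  @dataclass
  class Problem:
      poser: str
      statement: str
      posted: datetime
      solutions: list = field(default_factory=list)  # (solver, solution) pairs

  class ProblemBoard:
      def __init__(self):
          self.problems = []

      def post(self, poser, statement):
          problem = Problem(poser, statement, datetime.now())
          self.problems.append(problem)
          return problem

      def solve(self, problem, solver, solution):
          problem.solutions.append((solver, solution))

      def eligible_for_patent(self, problem, now=None):
          # The poser may patent only if a full year has passed and
          # nobody else has written down a solution.
          now = now or datetime.now()
          expired = now - problem.posted >= CHALLENGE_PERIOD
          others = [s for s, _ in problem.solutions if s != problem.poser]
          return expired and not others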

There are of course holes in that solution. Let's try to fill them.

Issues

A dialog about attention

"Not so fast". (Now we need names. I will call the first inventor Alice and the new voice Bob)

"Not so fast" says Bob. "I could have solved that problem, but I was working on something else more important. I'm confident that when the day comes that I actually need a solution, I can solve it myself. I like that better than paying your license fee. So your patent protection does deprive me of the opportunity for re-invention."

"You haven't proved you can do that." replies Alice. "Nobody answered my challenge, so why should I believe you?"

"Correction: they didn't answer it for free." says Bob. "It does take non-zero time and attention - and by the way, so would using your patent even if the license fee was $0"

Bob continues: "Maybe other possible inventors felt the same as I did. Or maybe they're just not paying attention to that blackboard. If everybody has to pay attention to everything you write on that blackboard, that imposes a public cost too."

"I'll tell you what: Offer a reasonable bounty to make it worth my time, say $5000, and I will show you my solution to your problem."

"I haven't got $5000 to spend on that" says Alice, "I'm a struggling independent inventor with one killer idea in my pocket. And if I did have $5000 then I'd just be paying you to reveal your invention. I already have one, so the value to me is $0. If you're not bluffing, I'll be down $5000."

"If you don't have to offer a reward, I can play that game too," replied Bob, "but I won't leave it at one problem. I'll write write down thousands of them. Then I'll own all the problems that nobody paid attention to, many of which will be actually quite easy, just ignored. I'll solve some of them and nobody can use the solutions without licensing them from me."

"I see where that's going" says Alice. "I'd do the same." A look of absolute horror crossed her face. "We'd be back in the Dark Ages of the US patent system!"

How do we direct attention reasonably?

Collectively, Alice and Bob have a pretty good idea what's easy to solve and what's not. The trick is truthfully revealing that knowledge at a practical cost.

One possible solution quickly suggests itself to those acquainted with idea markets: We can adapt Hanson's old lottery trick.

I'll start by explaining the lottery; we'll go beyond it later. The idea is that Alice and everybody else who writes a problem on the blackboard pays a small posting fee. The whole pot - let's say it's still $5000 - is offered as a bounty on one problem chosen by lottery. Maybe it's offered on a few of them, but in any case on a small fraction of all the problems.

That's great for that one problem, but it leaves the other problems unmeasured.

That's where decision markets come in. Investors place bets on whether a problem, should it be chosen for a bounty, will in fact be solved. Then one particular problem is chosen by lottery. The hardness bets on that problem are settled, while the bets on all the others are called off. It's easy for the system to see whether someone has claimed the bounty. (We won't tackle the quality of the solution until later in this post.)
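Here's a toy version of that settlement rule in Python, assuming fixed even-odds bets rather than a real market-clearing mechanism; the names are mine and purely illustrative.

  # Bets are (bettor, stake, predicts_solved) triples per problem.
  import random

  class HardnessMarket:
      def __init__(self, problem_ids):
          self.bets = {pid: [] for pid in problem_ids}

      def bet(self, pid, bettor, stake, predicts_solved):
          self.bets[pid].append((bettor, stake, predicts_solved))

      def price(self, pid):
          # Fraction of money betting "will be solved": a crude hardness
          # signal (a low price means the market thinks the problem is hard).
          total = sum(stake for _, stake, _ in self.bets[pid])
          solved = sum(stake for _, stake, p in self.bets[pid] if p)
          return solved / total if total else None

      def settle(self, was_solved):
          # The lottery picks one problem; only its bets are settled.
          chosen = random.choice(list(self.bets))
          payouts = {}
          for pid, bets in self.bets.items():
              for bettor, stake, predicts_solved in bets:
                  if pid != chosen:
                      # Bets on every other problem are called off: refund.
                      payouts[bettor] = payouts.get(bettor, 0) + stake
                  elif predicts_solved == was_solved(chosen):
                      # Even-odds simplification: winners double their stake.
                      payouts[bettor] = payouts.get(bettor, 0) + 2 * stake
          return chosen, payouts

Here was_solved is a callback reporting whether the bounty on the chosen problem was in fact claimed, and price is the hardness signal the rest of this post relies on.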

The hardness-market price determines whether a problem is considered easy or hard - there may be finer gradations, but we won't get into that here.

So we've amplified the signal. By offering a bounty on one problem, we've measured the hardness of all the problems. We'll improve on this later.

More dialog

So this scheme is implemented. A few weeks later, Bob comes storming in.

"Somebody wrote on the blackboard the exact problem that I've been working on!"

"That's odd." says Alice

"Did you do this to get back at me, Alice?"

"How does that inconvenience you? You could earn $5000."

"My invention is worth millions! Now I have to disclose it for a one-time payment of $5000? That doesn't even cover my research and development costs!"

"Well, you can hardly expect anyone to offer a million dollars for a bounty."

"That's true. Still, this is very wrong. Since you wrote the problem, you should be required to have a solution too."

"It just so happens that I do, but if I disclosed it, you'd just copy it. You should have to show your solution first."

"Then you'd copy it. See, Alice, I don't trust you either."

"I really had no idea you were working on it too, Bob. But if you really do have a million-dollar invention too, why should either of us sell it for $5000? As far as we know, only the two of us have it. Why should the two of us collectively get less than it's worth?"

"Luckily we found out in time. That's just sheer luck. We could agree to ignore the bounty and split the rewards of the patent between us."

"50/50"

The two inventors shake hands and then compare notes.

"Hey!" exclaimed Bob "I shouldn't have assumed that just because your invention solved the same problem, it was as good as mine. It's not! Mine's cheaper to manufacture. I'd have got about 95% of the market share."

"No, I'd have beat you. Mine's more stylish and easier to use."

The two inventors glare at each other, each convinced they have gotten the worst of the deal.

Bugs

So we have the following bugs:

  • The bounty acted like a price, but wasn't really a sensible price. In fact, it didn't even try, it just set one fixed price for everything.
  • The bounty overrode the market payoffs, which are a better measure of invention quality.
  • Relating the hardness to the number of times the bounty is claimed measures the wrong thing. What we need is solution unobviousness. This is trickier, since we have to look past the expected number of solutions and see how many actual solutions overlap.
  • If the first inventor to disclose gets a patent monopoly, it's unfair to Alice, who posed the problem and paid to do so. It shuts her out of her own invention.
  • If the first to disclose doesn't get a patent monopoly, then for inventions whose expected market value exceeds the bounty, the likelihood of the bounty being claimed will be too low. We'll see fewer claims than we should, and therefore we'll underestimate obviousness.

Second draft

In order to fix those bugs, we're going to redo the bounty system:

  • Claiming the bounty doesn't require openly disclosing an invention. It requires only an unforgeable commitment, probably a secure hash of the disclosure document. The reveal and the payment come later. NB this allows the original problem-poser to participate in the race for the solution. (A minimal commit-reveal sketch in code appears after this list.)
  • After a predetermined period of time:
    • The claimed inventions are to be disclosed (all at once)
    • Inventions not disclosed within a given time-frame are ineligible.
    • Disclosures that don't match the prior commit are ineligible.
    • Unworkable inventions are ineligible.
    • Each distinct workable invention gets patent protection
      • Technically, that's each distinct patent claim - a USPTO patent application typically contains many related patent claims.
      • If multiple inventors invented the same invention, they each get an equal share. This is suboptimal vs sockpuppets, though.
      • Here I'm assuming that novelty against prior art isn't an issue. It won't be much of one, because why would Alice pay money to pose a problem whose solution is already publicly known? We can just say that the existence of a prior invention voids a patent claim, just like now.
    • Each distinct workable invention gets an equal fraction of the bounty.
  • Non-bountied inventions are treated essentially the same way, just minus the bounty and the bounty time period.
  • The market pays off in proportion to the count of unique workable solutions, rather than the count of all solutions.
    • We don't want to use some sort of "density of uniqueness", because that's too easily spoofed by submitting the same solution many times.
  • To estimate solution unobviousness:
    • For bountied problems, we use a count of solutions, backed off somewhat towards the market's prior estimate.
    • For non-bountied problems, we directly use the market's prior estimate.
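
Here is a minimal commit-reveal sketch of the second draft in Python, using the standard library's sha256. The workability test is a stub, since judging workability is deferred to the next section, and all the names are mine.

  # Commit now, reveal later; settle the bounty among distinct
  # workable inventions.
  import hashlib
  import secrets

  def commit(disclosure):
      # The inventor publishes the digest now and keeps
      # (disclosure, salt) secret until the reveal.
      salt = secrets.token_hex(16)
      digest = hashlib.sha256((salt + disclosure).encode()).hexdigest()
      return digest, salt

  def reveal_matches(digest, disclosure, salt):
      # Disclosures that don't match the prior commit are ineligible.
      return hashlib.sha256((salt + disclosure).encode()).hexdigest() == digest

  def settle_bounty(bounty, claims, is_workable):
      # claims: list of (inventor, digest, disclosure, salt),
      # all revealed at once after the predetermined period.
      valid = [(inventor, disclosure)
               for inventor, digest, disclosure, salt in claims
               if reveal_matches(digest, disclosure, salt)
               and is_workable(disclosure)]
      # Group identical disclosures: co-inventors of the same invention
      # split that invention's share equally (suboptimal vs sockpuppets).
      by_invention = {}
      for inventor, disclosure in valid:
          by_invention.setdefault(disclosure, []).append(inventor)
      payouts = {}
      for inventors in by_invention.values():
          # Each distinct workable invention gets an equal fraction.
          share = bounty / len(by_invention) / len(inventors)
          for inventor in inventors:
              payouts[inventor] = payouts.get(inventor, 0) + share
      return payouts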

Measuring invention workability and quality

We still need a way to measure invention workability. This is traditionally the job of a patent examiner. However, we've piled more work onto them, and there are concerns about how good a job they do. This post is already long, though, so I won't try to design a better mechanism for that here.

Progressively increasing bounties

One way to aim bounties more efficiently is to start by offering small bounties. Then, for some of the problems whose bounties were not claimed, raise the bounty. That way we don't overpay for easy answers. (A tiny sketch of the escalation rule follows.)
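As a sketch (the multiplier and cap are arbitrary placeholders of mine):

  def next_bounty(current, claimed, multiplier=2, cap=100_000):
      # Raise the bounty on an unclaimed problem; stop once it's
      # claimed or the cap is reached.
      if claimed or current >= cap:
          return current
      return min(current * multiplier, cap)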

Time frame

We haven't addressed the question of how long the problem period should be. We may want it to work differently when there is a bounty, since then we need a standard measuring period.

It's not hard to propose. I could say "six months", but I want to leave it open to more flexible rules, so I won't propose anything here.

Relates to how long a patent monopoly should be in force

This mechanism also provides a somewhat more principled way of deciding how long a patent should be in force: it should relate to how long the problem period is. Perhaps the two should be linearly proportional.

Bug: We charge for posing a problem

Posing a problem well is valuable and is a task in and of itself. Yet we've charged the problem-poser for the privilege. This isn't good, and I'd like it to be the other way around.

We could try to recurse, making the problem "What are some unsolved problems in field X?", but then the solution is no longer in a standard form the way formal patent applications are.

This post is already long, so I will leave it at that.

Footnotes:

1 I will mention in passing that AIUI re-invention is allowed, but only under stringent conditions that are practical only for well-heeled institutions. The slightest exposure to a patent "taints" a would-be rediscoverer forever. IANAL, so take this with a grain of salt.

2 That patent was later disallowed because it was a mere combination of two things, which is not patentable. See Eraser. Regardless, the general point stands.