tag:blogger.com,1999:blog-59835637760194779792024-03-05T13:26:56.925-08:00TehomTehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.comBlogger143125tag:blogger.com,1999:blog-5983563776019477979.post-14599296999968770712014-01-01T18:25:00.001-08:002014-01-01T18:25:24.609-08:00Set Up Mathjax
<div xmlns='http://www.w3.org/1999/xhtml'>
<div class='outline-2' id='outline-container-1'>
<h2 id='sec-1'>Previously</h2>
<div id='text-1' class='outline-text-2'>
<p>
I occasionally tried to post equations here, but gave up when they
rendered incomprehensibly. The previews my software showed me
rendered fine, but they depended on resources on my local machine.
</p>
</div>
<div class='outline-3' id='outline-container-1-1'>
<h3 id='sec-1-1'>My editing toolset</h3>
<div id='text-1-1' class='outline-text-3'>
<p>
FWIW, I write and post this stuff using:
</p><ul>
<li><a href='http://www.emacswiki.org/emacs/GnuEmacs'>emacs</a>
</li>
<li><a href='http://www.emacswiki.org/emacs/OrgMode'>org-mode</a> (by Carsten Dominik)
</li>
<li><a href='http://www.emacswiki.org/emacs/Org2BlogAtom'>org2blog:atom</a>, which I wrote (it was previously called org2blog
until we realized there were two programs called org2blog)
</li>
<li><a href='http://www.emacswiki.org/emacs/GoogleClient'>gclient</a> (by T V Raman, used by org2blog:atom)
</li>
<li><b>Not</b> Blogger's online editor, which I don't like because it
isolates me from emacs and everything else.
</li>
</ul>
</div>
</div>
</div>
<div class='outline-2' id='outline-container-2'>
<h2 id='sec-2'>So I set up MathJax</h2>
<div id='text-2' class='outline-text-2'>
<p>
Fixing math display was on my todo list for an embarrassingly long
time. I finally got around to fixing it when I became aware of
MathJax. I have arXiv.org to thank for the pointer to MathJax.
</p>
</div>
</div>
<div class='outline-2' id='outline-container-3'>
<h2 id='sec-3'>Not many problems</h2>
<div id='text-3' class='outline-text-2'>
<p>
MathJax was actually quite easy to set up. Only two caveats:
</p>
<ul>
<li>Of course you've got to use the TeX delimiters around the equations.
</li>
<li>The include recommended by the MathJax site didn't seem to handle
single-dollar-sign delimiters. Fortunately, <a href='http://irrep.blogspot.com/2011/07/mathjax-in-blogger-ii.html'>this site</a> provides a
version that works.
</li>
</li>
</ul>
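<p>
For reference, a typical MathJax 2.x include of that era looks like the
following. The CDN URL and config bundle here are illustrative, not
necessarily the exact version the linked post gives; the point is the
<code>tex2jax.inlineMath</code> setting, which extends the default
delimiters so single dollar signs are processed:
</p>

```html
<script type="text/x-mathjax-config">
  MathJax.Hub.Config({
    tex2jax: {
      // enable $...$ inline math in addition to the default \( ... \)
      inlineMath: [['$','$'], ['\\(','\\)']],
      // let \$ stand for a literal dollar sign in ordinary text
      processEscapes: true
    }
  });
</script>
<script type="text/javascript"
  src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
```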
</div>
</div>
</div>
Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com0tag:blogger.com,1999:blog-5983563776019477979.post-28669552102456641322013-12-16T13:08:00.001-08:002013-12-16T14:07:35.727-08:00Spontaneous Dimensional Reduction<div xmlns="http://www.w3.org/1999/xhtml">
<div class="outline-2" id="outline-container-1">
<h2 id="sec-1">
Spontaneous Dimensional Reduction</h2>
<div class="outline-text-2" id="text-1">
</div>
<div class="outline-3" id="outline-container-1-1">
<h3 id="sec-1-1">
Previously</h3>
<div class="outline-text-3" id="text-1-1">
I have been reading a few papers by Steven Carlip on <a href="http://arxiv.org/abs/0909.3329">spontaneous dimensional reduction</a>, also essentially the same <a href="http://arxiv.org/abs/1009.1136">here</a> and most
recently <a href="http://arxiv.org/abs/1207.4503">here</a>.
<br />
Carlip is probably best known for his article in Scientific American
on similar themes. There he played with a 2+1 dimension "Flatland"
universe; here he is seriously proposing a 1+1 one.
<br />
It's not as crazy as it sounds. In fact, I find it quite promising.
</div>
</div>
<div class="outline-3" id="outline-container-1-2">
<h3 id="sec-1-2">
In a nutshell</h3>
<div class="outline-text-3" id="text-1-2">
Spontaneous dimensional reduction is his idea that at the very
smallest scales, space is 1-dimensional (so space-time is 1+1
dimensional). He brings together various lines of evidence that
support this, including his own treatment of the <a href="http://en.wikipedia.org/wiki/Wheeler-deWitt_equation">Wheeler-deWitt equation</a> at extremely small scales.
<br />
Discussing the last point, he suggests that spacetime at small scales
"spends most of its time" in or near a <a href="http://en.wikipedia.org/wiki/Kasner_metric">Kasner solution</a>, an anisotropic
solution to general relativity that applies in 3 dimensions or more.
He argues that Kasner solutions favor 1 dimension - strongly so if
contracting, less strongly if expanding.
<br />
<a href="http://arxiv.org/abs/1103.5993">Elsewhere</a> he argues that focusing effects dominate, albeit in a
slightly different context. This would imply that the contracting state dominates, which is basically what he needs for this to work. To my knowledge he hasn't explicitly
applied this to 1+1 dimensions, which puzzles me, since his two ideas
seem to fit together nicely.<br />
Kasner solutions are vacuum solutions - solutions that only apply to
empty space. Carlip argues that, at extremely small scales, spacetime
is effectively flattened.
<br />
At larger scales, he says that expansion and contraction change
repeatedly and chaotically, the general idea being a <a href="http://en.wikipedia.org/wiki/Mixmaster_universe">Mixmaster universe</a> or a <a href="http://en.wikipedia.org/wiki/BKL_singularity">BKL singularity</a>. The familiar 3 spatial dimensions are
built up from 1-dimensional pieces, not unlike Tinkertoys.
</div>
</div>
</div>
<div class="outline-2" id="outline-container-2">
<h2 id="sec-2">
Features</h2>
<div class="outline-text-2" id="text-2">
Carlip doesn't appear to cover some of the nice features of 1+1
dimensionality, but I will.
</div>
<div class="outline-3" id="outline-container-2-1">
<h3 id="sec-2-1">
Scalar propagator for gravity</h3>
<div class="outline-text-3" id="text-2-1">
The first one he does mention: In 1+1, the gravitational propagator is
a scalar. All the problems with renormalizing gravity come from it
having a non-scalar propagator (in fact, a rank 2 tensor; the other
fundamental forces have rank 1 tensor propagators, i.e. vectors).
With a scalar propagator, they should all go away.
<br />
My guess is that the other fundamental forces might also see a
solution without renormalization from this. Nobody really likes
renormalization; it's just been a necessary evil in quantum field
theory. Presumably that'd happen at an intermediate scale that has 2+1 dimensions.</div>
</div>
<div class="outline-3" id="outline-container-2-2">
<h3 id="sec-2-2">
The hierarchy problem</h3>
<div class="outline-text-3" id="text-2-2">
The hierarchy problem asks, why is gravity so much weaker than the
other forces? For instance, you can lift a brick against the pull of
the entire earth. The electromagnetic forces of the molecular bonds
in your hand and arm exceed the gravitational forces levied by the 6.6
sextillion ton earth.
<br />
This offers an answer. By way of background, gravity requires 3
dimensions in order to propagate: 1 dimension of travel and 2
transverse dimensions. That's because it's a spin 2 force, which is
why its propagator is a rank 2 tensor.
<br />
So spontaneous dimensional reduction says that gravity can't propagate
at all at small scales, only at large scales. This may be enough to
explain the hierarchy problem. (That's me conjecturing, not Carlip.)
<br />
"But wait", you say. "If it can't propagate at small scales, how does
it get anywhere at larger scales? That's like saying, I can't walk
three feet but I can walk a mile. Surely the big journey is made of
little journeys?"
<br />
Well, what Carlip suggests <a href="http://arxiv.org/abs/gr-qc/0409039">elsewhere</a> (here he may be summarizing
others' work) is that for reduced dimensions, what happens instead is
that gravity rearranges the topology of space, presumably affecting
the BKL or Mixmaster behavior. This may be enough to let it propagate.
</div>
</div>
<div class="outline-3" id="outline-container-2-3">
<h3 id="sec-2-3">
The self-energy problem</h3>
<div class="outline-text-3" id="text-2-3">
In a nutshell, the self-energy problem is that if forces like the
Coulomb force go as 1/r<sup>2</sup> and therefore diminish with distance, then
at small distances they grow without bound, becoming
infinite at r=0.
<br />
But (me again) in a 1-dimensional space, that doesn't happen. Forces
go as 1/r<sup>0</sup>, which is to say they are insensitive to distance. No
self-energy problem.
<br />
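To make the dimension-dependence concrete (this is my illustration, not
anything from Carlip's papers): by the usual Gauss's-law argument, the
field of a point source in d spatial dimensions falls off as
1/r<sup>d-1</sup>, since the flux spreads over a surface whose area
grows as r<sup>d-1</sup>. A tiny sketch:

```python
def field_strength(r, d):
    """Relative field of a point source at distance r in d spatial
    dimensions, in units where the constant prefactor is 1.
    d=3 gives the familiar 1/r**2 law; d=1 gives a constant force."""
    return 1.0 / r ** (d - 1)

# In 3 dimensions the force diverges as r -> 0:
#   field_strength(0.5, 3) == 4.0;  field_strength(0.25, 3) == 16.0
# In 1 dimension it is independent of r, so nothing blows up at r = 0:
#   field_strength(0.5, 1) == field_strength(0.25, 1) == 1.0
```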
Further, there's helpful logic in the other direction. Why does
spacetime do this at small distances? Why a Kasner-like solution
instead of a simpler isotropic solution? Because if it didn't, then
there <b>would</b> be infinite forces at small distances. If we don't need
renormalization, we can just say as a principle that energies can't be
infinite and then we'll find that 1+1 dimensional Kasner-like
spacetime is needed at small scales.
</div>
</div>
<div class="outline-3" id="outline-container-2-4">
<h3 id="sec-2-4">
Potential for insight about dark energy</h3>
<div class="outline-text-3" id="text-2-4">
Everybody's heard this for years so I'll be brief: Dark energy,
whatever it is, is making the universe expand faster.
<br />
If Carlip's theory wrt Kasner solutions is true, then at small scales space is constantly expanding and contracting. This suggests
(me again) some relation to dark energy. Maybe it's as simple as
whether contraction or expansion dominates at that scale, and by how
much.
</div>
</div>
</div>
</div>
Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com3tag:blogger.com,1999:blog-5983563776019477979.post-67588472499987513722013-11-18T10:44:00.001-08:002013-11-18T10:44:22.224-08:00Heidegger 2
<div xmlns='http://www.w3.org/1999/xhtml'>
<div class='outline-2' id='outline-container-1'>
<h2 id='sec-1'>Heidegger 2</h2>
<div id='text-1' class='outline-text-2'>
</div>
<div class='outline-3' id='outline-container-1-1'>
<h3 id='sec-1-1'>Previously</h3>
<div id='text-1-1' class='outline-text-3'>
<p>I <a href='http://tehom-blog.blogspot.com/2011/08/superpositionality-answers-heidegger.html'>previously blogged</a> my answer to Martin Heidegger's deep question,
"Why is there something rather than nothing?"
</p>
<p>
I just wrote it up for a friend. It says basically the same thing the
earlier post does, but in a more accessible form.
</p>
</div>
</div>
<div class='outline-3' id='outline-container-1-2'>
<h3 id='sec-1-2'>What's not a good answer</h3>
<div id='text-1-2' class='outline-text-3'>
<p>
First, I like to say what is not a good answer. For instance, it's
not a good answer to talk about quantum fluctuations creating matter
out of empty space. That may or may not follow from the rules of
quantum mechanics, but those rules are a "something" too. Why do they
exist? So to my mind, that doesn't really answer the question.
</p>
</div>
</div>
<div class='outline-3' id='outline-container-1-3'>
<h3 id='sec-1-3'>The full flavor of the question</h3>
<div id='text-1-3' class='outline-text-3'>
<p>
Heidegger's question is deeper than that. What it asks to explain is
not why is there matter, or why there is quantum mechanics, but why is
there anything at all. Why does the world have any structure
whatsoever?
</p>
</div>
</div>
</div>
<div class='outline-2' id='outline-container-2'>
<h2 id='sec-2'>My insight</h2>
<div id='text-2' class='outline-text-2'>
<p>
My insight was that the question still assumes one little thing: that
it's one or the other, either/or, obeying the law of the excluded
middle. Which I know sounds like simple common sense, but consider
this: any evidence it could possibly be based on is a something too,
and so is the law of the excluded middle. Even the law of
non-contradiction is a something about which we can ask why it
exists.
</p>
<p>
So take a deep intellectual breath and imagine for a moment that it
could be both ways. Imagine that you can see both a world of
nothingness and a normal world. Doesn't matter how. If you like, you
can imagine some sort of blend of a something-world and a
nothing-world, or a split-screen of both worlds, or perhaps you gaze
alternately on one world and the other, or teleport between them.
</p>
<p>
What would the nothingness look like? Seems like nothingness wouldn't
make much of an impression. It wouldn't even mark its absence by the
passage of time or an empty reach of space. It hasn't got time and
space or anything else. It hasn't got its half of the split-screen
you may have imagined. It hasn't even got a you in it to do the
perceiving. Seems to me nothingness makes absolutely zero impression
of any kind.
</p>
<p>
Now add up the impressions of both worlds. You get all the
impressions from the normal world of somethings, plus zero. So you
see just the normal world.
</p>
<p>
So that's my anthropic, multiple-worlds answer to Heidegger's
question. Even if you start with no assumption of something-ness, you
end up seeing a world of somethings, a world with some propositions
about it that aren't both true and false or neither. QED.
</p>
</div>
</div>
<div class='outline-2' id='outline-container-3'>
<h2 id='sec-3'>T's to cross and i's to dot</h2>
<div id='text-3' class='outline-text-2'>
<p>
There are some philosophical t's to cross and i's to dot, but AFAICT
they cross and dot easily. (Like, are there other ways to
aggregate the impressions of two worlds that give a different result?
No; by definition, aggregating X with nothing gives X.)
</p></div>
</div>
</div>
Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com0tag:blogger.com,1999:blog-5983563776019477979.post-1411921561994483872013-09-12T13:27:00.001-07:002013-09-12T13:27:04.059-07:00My opinion, literally
<div xmlns='http://www.w3.org/1999/xhtml'>
<div class='outline-2' id='outline-container-1'>
<h2 id='sec-1'>Lexicography </h2>
<div id='text-1' class='outline-text-2'>
</div>
<div class='outline-3' id='outline-container-1_1'>
<h3 id='sec-1_1'>Previously </h3>
<div id='text-1_1' class='outline-text-3'>
<p>
I wrote this a few weeks back, in response to something my friend
<a href='http://michaelrmcguire.blogspot.com/'>Michael</a> wrote, but I was lazy about posting it.
</p>
<p>
It's about the Merriam-Webster dictionary defining "literally" to mean
"figuratively". See <a href='http://www.slate.com/articles/life/the_good_word/2011/04/the_nonplussed_problem.html'>slate</a>.
</p>
</div>
</div>
</div>
<div class='outline-2' id='outline-container-2'>
<h2 id='sec-2'>My opinion </h2>
<div id='text-2' class='outline-text-2'>
<p>
The dictionary has to be descriptive, not prescriptive. It should
reflect how people have actually spoken and used words. Ultimately,
that's what language is.
</p>
<p>
But by the same token, people actually use dictionaries
prescriptively. They turn to dictionaries for authority on the "right"
meaning of words.
</p>
<p>
I would take a middle position. A dictionary must eventually track
usage, but there's no need for it to rush to anoint every popular
solecism.
</p>
<p>
What lexicographers do is collect a corpus of contemporary usage and
then group the words according to word sense, as they see it. I'm not
surprised that they found so many hyperbolic "literally"s. I'm sure
they also had access to literally tons of people who felt figurative
"literally" to be a solecism.
</p>
<p>
There's merit in characterizing these groups of words in more
sophisticated ways, as some lexicographers do. Hyperbolic senses can be
noted. So can loose senses ("sarcasm" that lacks irony) and apparent
substitutions ("effect" where "affect" is meant).
</p>
<p>
It's too bad Merriam-Webster stopped before doing that, and I think
they deserve all the criticism for it.
</p></div>
</div>
</div>
Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com0tag:blogger.com,1999:blog-5983563776019477979.post-78577879922718281572013-09-12T13:07:00.001-07:002013-09-12T13:07:14.971-07:00Patent Fix 1
<div xmlns='http://www.w3.org/1999/xhtml'>
<div class='outline-2' id='outline-container-1'>
<h2 id='sec-1'>Patent Fix 1 </h2>
<div id='text-1' class='outline-text-2'>
</div>
<div class='outline-3' id='outline-container-1_1'>
<h3 id='sec-1_1'>Previously </h3>
<div id='text-1_1' class='outline-text-3'>
<p>
Jan Wolfe blogs <a href='http://thepriorart.typepad.com/the_prior_art/2009/02/copying-in-patent-law.html'>Patent defendants aren't copycats. So who's the real inventor here?</a>.
</p>
<p>
<a href='http://www.overcomingbias.com/2013/09/let-re-discovery-evade-patents.html'>Robin Hanson</a> also writes about this. While his central illustration
is somewhat implausible, it nevertheless puts the issue on a concrete
footing. Briefly, a hypothetical business finds better routes for
city drivers for a price. Then they want to forbid all the drivers in
the city from driving "their" route without a license. (This example
is apparently set in a world where nobody ever heard of Google Maps,
Mapquest, or even paper maps.)
</p>
<p>
Also see <a href='http://marginalrevolution.com/marginalrevolution/2012/02/defending-independent-invention.html'>Defending reinvention</a>, <a href='http://www.techdirt.com/articles/20110819/14021115603/so-how-do-we-fix-patent-system.shtml'>So How Do We Fix The Patent System?</a>,
and <a href='http://www.stephankinsella.com/2009/11/common-misconceptions-about-plagiarism-and-patents-a-call-for-an-independent-inventor-defense/'>A Call for an Independent Inventor Defense</a>.
</p>
<p>
<a href='http://gigaom.com/2013/03/18/first-to-file-patent-law-starts-today-what-it-means-in-plain-english/'>Recent changes in patent law</a> don't seem to have addressed reinvention.
</p>
</div>
</div>
</div>
<div class='outline-2' id='outline-container-2'>
<h2 id='sec-2'>Some points of consensus </h2>
<div id='text-2' class='outline-text-2'>
<p>
We all seem to agree that the biggest issue is patent protection vs
re-invention. If re-invention was somehow metaphysically impossible,
the patent situation would be more defensible<sup><a href='#fn.1' name='fnr.1' class='footref'>1</a></sup>.
</p>
<p>
We also seem to agree that software patents aggravate this problem.
</p>
<p>
This is a problem that I have been kicking around for a while too. I
have, I think, a different analysis to offer and something of a
solution.
</p>
</div>
</div>
<div class='outline-2' id='outline-container-3'>
<h2 id='sec-3'>The deep problem </h2>
<div id='text-3' class='outline-text-2'>
<p>
The deep problem, as I see it, is a problem common to all
intellectual pseudoproperty. First the discoverer sells it - to an
individual or to society via the social contract of patenting. Only
afterwards can the invention be seen by the buyer and evaluated.
</p>
<p>
For the individual, such as the driver licensing a better route in
Robin's example, this makes the route an <a href='http://en.wikipedia.org/wiki/Experience_good'>experience good</a> - he can only
tell whether it's worth what he paid after he receives it. Then he
can't return the good by unlearning it.
</p>
<p>
He may find that it's not worth what he paid because it's bad. More
relevant to patent re-invention, he may find that he already knew of
that route. Perhaps he sometimes uses it but mostly prefers a
slightly longer scenic route. He shouldn't lose use of a route he
already knew just because he "bought" the same from this business and
then returned it.
</p>
<p>
This at least could be solved by contract - perhaps the driver can get
a refund if he proves that he already knew that route. For society,
it's worse. Patent law basically throws up its hands at the question
of what the loss of opportunity for re-invention costs. It deems the
cost to be zero, which is simply insane.
</p>
</div>
<div class='outline-3' id='outline-container-3_1'>
<h3 id='sec-3_1'>Why it's tricky to solve </h3>
<div id='text-3_1' class='outline-text-3'>
<p>
It's sometimes said that the invention of the eraser on the pencil was
obvious - <b>after</b> it was invented; before then, nobody could think of
it. As it turns out, that's questionable<sup><a href='#fn.2' name='fnr.2' class='footref'>2</a></sup> for the
pencil-eraser but the general point stands.
</p>
<p>
So we don't want re-invention to consist of stating, after one has
seen the invention, "Why, it's obvious to anybody". That's hindsight
bias. That's cheating. We want to measure problem hardness in
foresight. How hard did it appear before you knew the answer?
</p>
</div>
</div>
</div>
<div class='outline-2' id='outline-container-4'>
<h2 id='sec-4'>So how can we measure problem hardness? </h2>
<div id='text-4' class='outline-text-2'>
<p>
For any patent, there is some problem that it solves. This isn't just
philosophical; a patent application must contain this. It's part of
one of the less formal sections of an application, but even so, it's
made explicit for every patent the USPTO ever granted.
</p>
<p>
Imagine a public blackboard where anybody can write down a problem.
Our would-be inventor writes down the problem that his as-yet
undisclosed invention solves. He leaves the problem on the board for
a year. After a year, nobody has written down the solution.
</p>
<p>
Our inventor then says "Given an entire year, nobody has solved this
problem except me. Therefore even if my invention is granted patent
protection, nobody has lost the chance to invent it themselves. Since
opportunity for re-invention was the only thing anybody really stood
to lose, I should be granted patent protection. I stand ready to
reveal my solution, which in fact solves the problem. If I'm granted
patent protection, everybody wins."
</p>
<p>
We're not assuming that our inventor has already invented the
invention. He could pose a problem that he feels pretty sure he can
solve and only start working on it later in the year.
</p>
<p>
There are of course holes in that solution. Let's try to fill them.
</p>
</div>
</div>
<div class='outline-2' id='outline-container-5'>
<h2 id='sec-5'>Issues </h2>
<div id='text-5' class='outline-text-2'>
</div>
<div class='outline-3' id='outline-container-5_1'>
<h3 id='sec-5_1'>A dialog about attention </h3>
<div id='text-5_1' class='outline-text-3'>
<p>
"Not so fast". (Now we need names. I will call the first inventor
Alice and the new voice Bob)
</p>
<p>
"Not so fast" says Bob. "I could have solved that problem, but I was
working on something else more important. I'm confident that when the
day comes that I actually need a solution, I can solve it myself. I
like that better than paying your license fee. So your patent
protection does deprive me of the opportunity for re-invention."
</p>
<p>
"You haven't proved you can do that," replies Alice. "Nobody answered
my challenge, so why should I believe you?"
</p>
<p>
"Correction: they didn't answer it for free," says Bob. "It does take
non-zero time and attention - and by the way, so would using your
patent even if the license fee was $0."
</p>
<p>
Bob continues "Maybe other possible inventors felt the same as I did.
Or maybe they're just not paying attention to that blackboard. If
everybody has to pay attention to everything you write on that
blackboard, that imposes a public cost too."
</p>
<p>
"I'll tell you what: Offer a reasonable bounty to make it worth my
time, say $5000, and I will show you my solution to your problem."
</p>
<p>
"I haven't got $5000 to spend on that" says Alice, "I'm a struggling
independent inventor with one killer idea in my pocket. And if I did
have $5000 then I'd just be paying you to reveal your invention. I
already have one, so the value to me is $0. If you're not bluffing,
I'll be down $5000."
</p>
<p>
"If you don't have to offer a reward, I can play that game too,"
replies Bob, "but I won't leave it at one problem. I'll write
down thousands of them. Then I'll own all the problems that nobody
paid attention to, many of which will be actually quite easy, just
ignored. I'll solve some of them and nobody can use the solutions
without licensing them from me."
</p>
<p>
"I see where that's going," says Alice. "I'd do the same." A look of
absolute horror crosses her face. "We'd be back in the Dark Ages of
the US patent system!"
</p>
</div>
</div>
<div class='outline-3' id='outline-container-5_2'>
<h3 id='sec-5_2'>How do we direct attention reasonably? </h3>
<div id='text-5_2' class='outline-text-3'>
<p>
Collectively, Alice and Bob have a pretty good idea what's easy to
solve and what's not. The trick is truthfully revealing that
knowledge at a practical cost.
</p>
<p>
One possible solution quickly suggests itself to those acquainted with
idea markets: We can adapt Hanson's old lottery trick.
</p>
<p>
I'll start by explaining the lottery. We'll go beyond that later.
The idea is that Alice and everybody else who writes a problem on the
blackboard pays a small license fee. The whole amount - let's say
it's still $5000 - is offered on one problem chosen by lottery. Maybe
it's offered on a few of them, but in any case a small fraction of
what there is.
</p>
<p>
That's great for that one problem, but it leaves the other problems
unmeasured.
</p>
<p>
That's where decision markets come in. Investors place bets on
whether a problem, should it be chosen for a bounty, will in fact be
solved. Then one particular problem is chosen by lottery. The bets
about its hardness are settled while the other bets are called off.
It's easy for the system to see whether someone has claimed the
bounty. We won't tackle the quality of the solution until later in
this post.
</p>
<p>
The hardness-market price determines whether the problem is considered
easy or not - there may be finer gradations but we won't get into that
here.
</p>
<p>
So we've amplified the signal. By offering a bounty on one problem,
we've measured the hardness of all the problems. We'll improve on
this later.
</p>
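<p>
The call-off mechanism can be sketched in a few lines. This is a toy
model of my own (real decision markets would use proper market odds,
not the flat even-odds payout assumed here):
</p>

```python
import random

def settle_bets(bets, solved, choose=random.choice):
    """bets: list of (bettor, problem, stake, predicts_solved) tuples.
    One problem is chosen by lottery; bets on it are settled at even
    odds, and bets on every other problem are called off (refunded).
    Returns (chosen_problem, payouts_by_bettor)."""
    problems = sorted({problem for _, problem, _, _ in bets})
    chosen = choose(problems)
    payouts = {}
    for bettor, problem, stake, predicts_solved in bets:
        if problem != chosen:
            paid = stake                    # bet called off: stake refunded
        elif predicts_solved == (problem in solved):
            paid = 2 * stake                # correct prediction wins
        else:
            paid = 0                        # wrong prediction loses the stake
        payouts[bettor] = payouts.get(bettor, 0) + paid
    return chosen, payouts
```

So one lottery draw settles bets on a single problem while prices on
every other problem still reveal the market's hardness estimates.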
</div>
</div>
<div class='outline-3' id='outline-container-5_3'>
<h3 id='sec-5_3'>More dialog </h3>
<div id='text-5_3' class='outline-text-3'>
<p>
So this scheme is implemented. A few weeks later, Bob comes storming
in.
</p>
<p>
"Somebody wrote on the blackboard the exact problem that I've been
working on!"
</p>
<p>
"That's odd," says Alice.
</p>
<p>
"Did you do this to get back at me, Alice?"
</p>
<p>
"How does that inconvenience you? You could earn $5000."
</p>
<p>
"My invention is worth millions! Now I have to disclose it for a
one-time payment of $5000? That doesn't even cover my research and
development costs!"
</p>
<p>
"Well, you can hardly expect anyone to offer a million dollars for a
bounty."
</p>
<p>
"That's true. Still, this is very wrong. Since you wrote the
problem, you should be required to have a solution too."
</p>
<p>
"It just so happens that I do, but if I disclosed it, you'd just copy
it. You should have to show your solution first."
</p>
<p>
"Then <b>you'd</b> copy it. See, Alice, I don't trust you either."
</p>
<p>
"I really had no idea you were working on it too, Bob. But if you
really do have a million-dollar invention too, why should either of us
sell it for $5000? As far as we know, only the two of us have it.
Why should the two of us collectively get less than it's worth?"
</p>
<p>
"Luckily we found out in time. That's just sheer luck. We could
agree to ignore the bounty and split the rewards of the patent between
us."
</p>
<p>
"50/50"
</p>
<p>
The two inventors shook hands and then compared notes.
</p>
<p>
"Hey!" exclaimed Bob "I shouldn't have assumed that just because your
invention solved the same problem, it was as good as mine. It's not!
Mine's cheaper to manufacture. I'd have got about 95% of the market
share."
</p>
<p>
"No, I'd have beat you. Mine's more stylish and easier to use."
</p>
<p>
The two inventors glared at each other, each convinced they had gotten
the worst of the deal.
</p>
</div>
</div>
<div class='outline-3' id='outline-container-5_4'>
<h3 id='sec-5_4'>Bugs </h3>
<div id='text-5_4' class='outline-text-3'>
<p>
So we have the following bugs:
</p><ul>
<li>
The bounty acted like a price, but wasn't really a sensible price.
In fact, it didn't even try: it set one fixed price for
everything.
</li>
<li>
The bounty overrode the market payoffs, which are a better measure
of invention quality.
</li>
<li>
Relating the hardness to the number of times the bounty is claimed
measures the wrong thing. What we need is <b>solution unobviousness</b>. This is trickier, since we have to look past the
expected number of solutions and see how many actual solutions
overlap.
</li>
<li>
If the first inventor to disclose gets a patent monopoly, it's
unfair to Alice, who posed the problem and paid to do so. It shuts
her out of her own invention.
</li>
<li>
If the first to disclose doesn't get a patent monopoly, for
inventions whose expected market value is more than the bounty, the
likelihood of the bounty being accepted will be too low. We'll see
fewer than we should and therefore we'll underestimate obviousness.
</li>
</ul>
</div>
</div>
<div class='outline-3' id='outline-container-5_5'>
<h3 id='sec-5_5'>Second draft </h3>
<div id='text-5_5' class='outline-text-3'>
<p>
In order to fix those bugs, we're going to redo the bounty system:
</p>
<ul>
<li>
Claiming the bounty doesn't require openly disclosing an invention.
It requires only an unforgeable commit, probably as a secure hash
of the disclosure document. The reveal and the payment will come
later. NB this allows the original problem-poser to participate in
the race for the solution.
</li>
<li>
After a predetermined period of time:
<ul>
<li>
The claimed inventions are to be disclosed (all at once)
</li>
<li>
Inventions not disclosed within a given time-frame are
ineligible.
</li>
<li>
Disclosures that don't match the prior commit are ineligible.
</li>
<li>
Unworkable inventions are ineligible.
</li>
<li>
Each distinct workable invention gets patent protection
<ul>
<li>
Technically, that's each distinct patent <b>claim</b> - a USPTO
patent application typically contains many related patent
claims.
</li>
<li>
If multiple inventors invented the same invention, they each
get an equal share. This is vulnerable to sockpuppets, though.
</li>
<li>
Here I'm assuming that novelty against prior art isn't an
issue. It won't be much of one, because why would Alice pay
money to pose a problem whose solution is already publicly
known? We can just say that the existence of a prior invention
voids a patent claim, just like now.
</li>
</ul>
</li>
<li>
Each distinct workable invention gets an equal fraction of the
bounty.
</li>
</ul>
</li>
<li>
Non-bountied inventions are treated essentially the same way, just
minus the bounty and the bounty time period.
</li>
<li>
The market pays off in proportion to the count of <b>unique</b> workable
solutions, rather than the count of all solutions.
<ul>
<li>
We don't want to use some sort of "density of uniqueness",
because that's too easily spoofed by submitting the same solution
many times.
</li>
</ul>
</li>
<li>
To estimate solution unobviousness:
<ul>
<li>
For bountied problems, we use a count of solutions, backed off
somewhat towards the market's prior estimate.
</li>
<li>
For non-bountied problems, we directly use the market's prior
estimate.
</li>
</ul>
</li>
</ul>
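<p>
The unforgeable-commit step above can be sketched with a hash
commitment. This is a minimal illustration (the function names and the
choice of SHA-256 are mine, not part of any standard); a production
scheme would also mix a random salt into the commit so that short or
guessable documents can't be brute-forced from their digests:
</p>

```python
import hashlib

def commit(disclosure: bytes) -> str:
    """Publish this digest at claim time; the document itself stays secret."""
    return hashlib.sha256(disclosure).hexdigest()

def reveal_matches(disclosure: bytes, published_digest: str) -> bool:
    """At reveal time, anyone can check the disclosure against the commit."""
    return hashlib.sha256(disclosure).hexdigest() == published_digest

# The claimant publishes only the digest, then later reveals the document;
# a disclosure that doesn't match its earlier commit is ineligible.
digest = commit(b"my disclosure document")
assert reveal_matches(b"my disclosure document", digest)
assert not reveal_matches(b"a different document", digest)
```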
</div>
<div class='outline-4' id='outline-container-5_5_1'>
<h4 id='sec-5_5_1'>Measuring invention workability and quality </h4>
<div id='text-5_5_1' class='outline-text-4'>
<p>
We still need a way to measure invention workability. This is
traditionally the job of a patent examiner. However, we've piled more
work onto them, and there are concerns about how good a job they do.
This post is already long, though, so I won't try to design a better
mechanism for that here.
</p>
</div>
</div>
<div class='outline-4' id='outline-container-5_5_2'>
<h4 id='sec-5_5_2'>Progressively increasing bounties </h4>
<div id='text-5_5_2' class='outline-text-4'>
<p>
One way to aim bounties more efficiently is to start by offering small
bounties. Then for some of those problems whose bounties were not
claimed, raise the bounty. That way we are not overpaying for easy
answers.
</p>
</div>
</div>
<div class='outline-4' id='outline-container-5_5_3'>
<h4 id='sec-5_5_3'>Time frame </h4>
<div id='text-5_5_3' class='outline-text-4'>
<p>
We haven't addressed the question of how long the problem period
should be. We may want it to work differently when there is a bounty,
since then we need a standard measuring period.
</p>
<p>
It's not hard to propose. I could say "six months", but I want to
leave it open to more flexible rules, so I won't propose anything
here.
</p>
</div>
<div class='outline-5' id='outline-container-5_5_3_1'>
<h5 id='sec-5_5_3_1'>Relates to how long a patent monopoly should be in force </h5>
<div id='text-5_5_3_1' class='outline-text-5'>
<p>
This mechanism also provides a somewhat more principled way of
deciding how long a patent should be in force: it should relate to how
long the problem period is. Perhaps the two should be linearly
proportional.
</p>
</div>
</div>
</div>
<div class='outline-4' id='outline-container-5_5_4'>
<h4 id='sec-5_5_4'>Bug: We charge for posing a problem </h4>
<div id='text-5_5_4' class='outline-text-4'>
<p>
Posing a problem well is valuable and is a task in and of itself. Yet
we've charged the problem-poser for the privilege. This isn't good,
and I'd like it to be the other way.
</p>
<p>
We could try to recurse, with the problem being: "What are some
unsolved problems in field X?" but then the solution is no longer in a
standard form as formal patent applications are.
</p>
<p>
This post is already long, so I will leave it at that.
</p>
</div>
</div>
</div>
</div>
<div id='footnotes'>
<h2 class='footnotes'>Footnotes: </h2>
<div id='text-footnotes'>
<p class='footnote'><sup><a href='#fnr.1' name='fn.1' class='footnum'>1</a></sup> I will mention in passing that AIUI re-invention is allowed but only
under stringent conditions that are only practical for well-heeled
institutions. The slightest exposure to a patent "taints" a would-be
rediscoverer forever. IANAL so take this with a grain of salt.
</p>
<p class='footnote'><sup><a href='#fnr.2' name='fn.2' class='footnum'>2</a></sup> That patent was <a href='http://answers.yahoo.com/question/index?qid=20061024121019AAxH6WH'>later disallowed</a> because it was a mere
combination of two things, which is not patentable. See <a href='http://en.wikipedia.org/wiki/Eraser'>Eraser</a>.
Regardless, the general point stands.
</p>
</div>
</div>
</div>
Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com0tag:blogger.com,1999:blog-5983563776019477979.post-33295258998824592012-12-05T19:45:00.001-08:002012-12-08T11:52:53.067-08:00Causal Dynamical Triangulation<div xmlns="http://www.w3.org/1999/xhtml">
<div class="outline-2" id="outline-container-1">
<h2 id="sec-1">
Causal Dynamical Triangulation </h2>
<div class="outline-text-2" id="text-1">
I've been reading up on <a href="http://en.wikipedia.org/wiki/Causal_dynamical_triangulation">Causal Dynamical Triangulation</a> (CDT) (by Loll,
Ambjoern, and Jurkiewicz). It's an attempted unified field theory
related to <a href="http://en.wikipedia.org/wiki/Loop_quantum_gravity">Loop Quantum Gravity</a> (LQG), which you may have read the
Scientific American article on a few years back.
<br />
What it (like LQG) has to recommend it is that the structure of space
emerges from the theory itself. Basically, it proposes a topological
substrate (spin-foam) made of <a href="http://en.wikipedia.org/wiki/Simplex">simplexes</a> (lines, triangles,
tetrahedrons, etc). Spatial curvature emerges from how those
simplexes can join together.
</div>
</div>
<div class="outline-2" id="outline-container-2">
<h2 id="sec-2">
Degeneration and the arrow of time </h2>
<div class="outline-text-2" id="text-2">
The big problem for CDT in its early form was that the space that
emerged was not our space. What emerged was one of two degenerate
forms. It either had infinite dimensions or just one; the topology
went to one of two extremes of connectedness.
<br />
The key insight for CDT was that space emerges correctly if edges of
simplexes can only be joined when their arrows of time are pointing in
the same direction.
</div>
</div>
<div class="outline-2" id="outline-container-3">
<h2 id="sec-3">
So time doesn't emerge? </h2>
<div class="outline-text-2" id="text-3">
But some like to see the "arrow of time" as emergent. The view is
that it's not so much that states only mix (unmix) along the arrow of
time. It's the other way around: "time" has an arrow of time because
it has an unmixed state at one end (or point) and a mixed state at the
other.
<br />
To say the same thing in a different way, the rule isn't that the arrow
of time makes entropy increase, it's that when you have an entropy
gradient along a time-like curve, you have an arrow of time.
<br />
The appeal is that we don't have to say that the time dimension has
special rules such as making entropy increase in one direction. Also,
both QM and relativity show us a time-symmetrical picture of
fundamental interactions and emergent arrow-of-time doesn't mess that
picture up.
</div>
<div class="outline-3" id="outline-container-3_1">
<h3 id="sec-3_1">
Observables and CDT </h3>
<div class="outline-text-3" id="text-3_1">
So I immediately had to wonder, could the "only join edges if arrows
of time are the same" behavior be emergent?
<br />
In quantum mechanics, you can only observe certain aspects of a
wavefunction, called <a href="http://en.wikipedia.org/wiki/Observable">Observables</a>. Given a superposition of
arrow-matched and arrow-mismatched CDT states, is it the case that
only the arrow-matched state is observable? Ie, that any self-adjoint
operator must be only a function of arrow-matched states?
<br />
I frankly don't know CDT remotely well enough to say, but it doesn't
sound promising and I have to suspect that Loll et al already looked
at that.
</div>
</div>
<div class="outline-3" id="outline-container-3_2">
<h3 id="sec-3_2">
A weaker variant </h3>
<div class="outline-text-3" id="text-3_2">
So I'm pessimistic of a theory where mismatched arrows are simply
always cosmically censored.
<br />
But as far as my limited understanding of CDT goes, with all due
humility, there's room for them to be mostly censored. Like,
arrow-mismatched components are strongly suppressed in all observables
in cases where there's a strong arrow of time.
</div>
</div>
</div>
<div class="outline-2" id="outline-container-4">
<h2 id="sec-4">
Degeneration: A feature, not a bug? </h2>
<div class="outline-text-2" id="text-4">
It occurred to me that the degeneration I described earlier might be a
feature and not a bug.
<br />
Suppose for a moment that CDT is true but that the "only join edges if
arrows of time are the same" behavior is just emergent, not
fundamental. What happens in the far future, the <a href="http://en.wikipedia.org/wiki/Heat-death_of_the_Universe">heat death of the universe</a>, when entropy has basically maxed out?
<br />
Space degenerates. It doesn't even resemble our space. It's either
an infinite-dimensional complete graph or a one-dimensional line.
</div>
<div class="outline-3" id="outline-container-4_1">
<h3 id="sec-4_1">
The Boltzmann Brain paradox </h3>
<div class="outline-text-3" id="text-4_1">
What's good about that is that it may solve the <a href="http://en.wikipedia.org/wiki/Boltzmann_brain">Boltzmann Brain</a>
paradox. Which is this:
<br />
What's the likelihood that a brain (and mind) just like yours would
arise from random quantum fluctuations in empty space? Say, in a
section of interstellar space a million cubic miles in volume which we
observe for one minute?
<br />
Very small. Very, very small. But it's not zero. Nor does it even
approach zero as the universe ages and gets less dense, at least not
if the cosmological constant is non-zero. The probability has a lower
limit.
<br />
Well, multiplying an infinite span of time times that gives an
infinite number of expected cases of Boltzmann Brains exactly like our
own. The situation should be utterly dominated by those cases. But
that's the opposite of what we see.
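In symbols (a back-of-the-envelope restatement of the argument, not taken from any source):

```latex
% Expected number of Boltzmann Brains over a time span T, given a
% fluctuation rate r(t) with a positive lower bound r_min > 0
% (which a nonzero cosmological constant provides):
E[N] = \int_0^T r(t)\,dt \;\ge\; r_{\min}\, T
\;\longrightarrow\; \infty \quad \text{as } T \to \infty .
```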
</div>
</div>
<div class="outline-3" id="outline-container-4_2">
<h3 id="sec-4_2">
Degeneracy to the rescue </h3>
<div class="outline-text-3" id="text-4_2">
But if CDT and emergent time are true, the universe would have
degenerated long before that time. Waving my hands a bit, I doubt
that a Boltzmann Brain could exist even momentarily in that sort of
space. Paradox solved.
</div>
</div>
</div>
<div class="outline-2" id="outline-container-5">
<h2 id="sec-5">
Is that the Big Rip? </h2>
<div class="outline-text-2" id="text-5">
(The foregoing was speculative and hand-waving, but this will be far
more so)
<br />
Having described that degeneration, I can't help noticing its
resemblance to the <a href="http://en.wikipedia.org/wiki/Big_Rip">Big Rip</a>, the hypothesized future event when
<a href="http://en.wikipedia.org/wiki/Metric_expansion_of_space">cosmological expansion</a> dominates the universe and tears everything
apart.
<br />
That makes me wonder if the <a href="http://en.wikipedia.org/wiki/Accelerating_universe">accelerating expansion of space</a> that we
see could be explained along similar lines. Like, the emergent
arrow-of-time-matching isn't quite 100% perfect, and when it "misses",
space expands a little.
<br />
This would fit with the <a href="#sec-3_2">weaker variant</a> proposed above.
</div>
<div class="outline-3" id="outline-container-5_1">
<h3 id="sec-5_1">
Problems</h3>
<div class="outline-text-3" id="text-5_1">
For one thing, it's not clear how it could explain the missing 72.8%
of the universe's mass-energy as <a href="http://en.wikipedia.org/wiki/Dark_energy">dark energy</a> was hypothesized to.
</div>
</div>
</div>
<div class="outline-2" id="outline-container-6">
<h2 id="sec-6">
End </h2>
<div class="outline-text-2" id="text-6">
Now my hands are tired from all the hand-waving I'm doing, so I'll
stop.
<br />
<br />
Edit: dynamic -> dynamical</div>
</div>
</div>
Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com0tag:blogger.com,1999:blog-5983563776019477979.post-88104658200558754532012-12-05T19:25:00.001-08:002012-12-08T12:00:19.838-08:00Meaning 2<div xmlns="http://www.w3.org/1999/xhtml">
<div class="outline-2" id="outline-container-1">
<h2 id="sec-1">
Meaning 2 </h2>
<div class="outline-text-2" id="text-1">
</div>
<div class="outline-3" id="outline-container-1_1">
<h3 id="sec-1_1">
Previously </h3>
<div class="outline-text-3" id="text-1_1">
I relayed the definition of "meaning" that I consider best, which is
generally accepted in semiotics:
<br />
<pre class="example">X means Y just if X is a reliable indication of Y
</pre>
Lameen Souag asked a good question:
<br />
<blockquote>
how would [meaning as reliable indication] account for the fact that
lies have a meaning?
</blockquote>
</div>
</div>
</div>
<div class="outline-2" id="outline-container-2">
<h2 id="sec-2">
Lies </h2>
<div class="outline-text-2" id="text-2">
"Reliable" doesn't mean foolproof. Good liars do abuse reliable
indicators.
<br />
Second, when we have seen through a lie, we do use the term "meaning"
in that way. When you know that someone is a liar, you might say
"what she says doesn't mean anything" (doesn't reliably indicate
anything). Or you might speak of a meaning that has little to do with
the lie's literal words, but accords with what it reliably indicates:
"When he says `trust me', that means you should keep your wallet
closed."
</div>
</div>
<div class="outline-2" id="outline-container-3">
<h2 id="sec-3">
Language interpretation </h2>
<div class="outline-text-2" id="text-3">
Perhaps you were speaking of a more surface sense of the lie's
meaning? Like, you could say "Sabrina listed this item on Ebay as a
'new computer', but it's actually a used mop." Even people who
considered her a liar and her utterances unreliable could understand
what her promise meant; that's how they know she told a lie. They
extract a meaning from an utterance even though they know it doesn't
reliably indicate anything. Is that a fair summation of your point?
<br />
To understand utterances divorced from who actually says them, we use
a consensus of how to transform from words and constructions to
indicators; a language.
<br />
Don't throw away the context, though. We divorced the utterance from
its circumstances and viewed it thru other people's consensus. We
can't turn around and treat what we get thru that process as things we
directly obtained from the situation; they weren't.
<br />
If Sabrina was reliable in her speech (wouldn't lie etc), we could
take a shortcut here, because viewing her utterance thru others'
consensus wouldn't change what it means. But she isn't, so we have to
remember that the reliable-in-the-consensus indicators are not
reliable in the real circumstances (Sabrina's Ebay postings).
<br />
So when interpreting a lie, we get a modified sense of meaning.
"Consensus meaning", if you will. It's still a meaning (reliable
indication), but we mustn't forget how we obtained it: not from the
physical situation itself but via a consensus.
</div>
</div>
<div class="outline-2" id="outline-container-4">
<h2 id="sec-4">
The consensus / language </h2>
<div class="outline-text-2" id="text-4">
NB, that only works because the (consensus of) language transforms
words and constructions in reliable ways. If a lot of people used
language very unreliably, it wouldn't. What if (say) half the
speakers substituted antonyms on odd-numbered days, or when they
secretly flipped a coin and it came up tails? How could you extract
much meaning from what they said?
</div>
</div>
<div class="outline-2" id="outline-container-5">
<h2 id="sec-5">
Not all interpretations are created equal </h2>
<div class="outline-text-2" id="text-5">
This may sound like All Interpretations Are Created Equal, and
therefore you can't say objectively that Sabrina committed fraud;
that's just your interpretation of what she said; there could be
others. But that's not what I mean at all.
<br />
For instance, we can deduce that she committed fraud (taking the
report as true).
<br />
At the start of our reasoning process, we only know her locutionary
act - the physical expression of it, posting 'new computer for sale'.
We don't assume anything about her perlocutionary act - convincing you
(or someone) that she offers a new computer for sale.
<br />
<ol>
<li>
She knows the language (Assumption, so we can skip some boring
parts)
</li>
<li>
You might believe what she tells you (Assumption)
</li>
<li>
Since the item is actually an old mop, making you believe that
she offers a new computer is fraud. (Assumption)
</li>
<li>
Under the language consensus, 'new computer' reliably indicates new
computer (common vocabulary)
</li>
<li>
Since she knows the language, she knew 'new computer' would be
transformed reliably-in-the-consensus to indicate new computer (by
1&4)
</li>
<li>
Reliably indicating 'new computer' to you implies meaning new
computer to you. (by definition) (So now we begin to see her
perlocutionary act)
</li>
<li>
So by her uttering 'new computer', she has conveyed to you that
she is offering a new computer (by 5&6)
</li>
<li>
She thereby attempts the perlocutionary act of persuading you that
she offers a new computer (by 2&7)
</li>
<li>
She thereby commits fraud (by 3&8)
</li>
</ol>
I made some assumptions for brevity, but the point is that with no
more than this definition of meaning and language-as-mere-consensus,
we can make interesting, reasonable deductions.
<br />
<br />
(Late edits for clarity)</div>
</div>
</div>
Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com0tag:blogger.com,1999:blog-5983563776019477979.post-5291546363564942662012-08-30T12:22:00.001-07:002012-12-08T12:08:52.942-08:00Fairchy 3: Ultimate secure choice<div xmlns="http://www.w3.org/1999/xhtml">
<div class="outline-2" id="outline-container-1">
<h2 id="sec-1">
Ultimate secure choice </h2>
<div class="outline-text-2" id="text-1">
</div>
<div class="outline-3" id="outline-container-1_1">
<h3 id="sec-1_1">
Previously </h3>
<div class="outline-text-3" id="text-1_1">
I wrote about <a href="http://tehom-blog.blogspot.com/2011/05/fairchy.html">Fairchy</a>, an idea drawn from both decision markets and FAI
that I hope offers a way around the <a href="http://tehom-blog.blogspot.com/2011/05/fairchy.html#ID-ed8b092a-8343-4077-9f10-53bf140292fe">Clippy and the box problem</a> that
FAI has.
</div>
</div>
</div>
<div class="outline-2" id="outline-container-2">
<h2 id="sec-2">
Measuring human satisfaction without human frailties </h2>
<div class="outline-text-2" id="text-2">
One critical component of the idea is that (here comes a big mental
chunk) the system predictively optimizes a utility function that's
partly determined by surveying citizens. It's much like voting in an
election, but it measures each citizen's self-reported satisfaction.
<br />
But for that, human frailty is a big issue. There are any number of
potential ways to manipulate such a poll. A manipulator could (say)
spray oxytocin into the air at a polling place, artificially raising
the reported satisfaction. And it can only get worse in the future.
If elections and polls are shaky now, how meaningless would they be
with nearly godlike AIs trying to manipulate the results?
<br />
But measuring the right thing is crucial here, otherwise it won't
optimize the right thing.
</div>
</div>
<div class="outline-2" id="outline-container-3">
<h2 id="sec-3">
Could mind uploading offer a principled solution? </h2>
<div class="outline-text-2" id="text-3">
</div>
<div class="outline-3" id="outline-container-3_1">
<h3 id="sec-3_1">
It doesn't help non-uploads </h3>
<div class="outline-text-3" id="text-3_1">
I'll get this out of the way immediately: The following idea will do
nothing to help people who are not uploaded. Which right now is you
and me and everyone else. That's not its point. Its point is to
arrive before super-intelligent AIs do.
<br />
This seems like a reasonable expectation. Computer hardware probably
has to get fast enough to "do" human-level intelligence before it can
do super-human intelligence.
<br />
It's not a sure thing, though. It's conceivable that running
human-level intelligence via upload-and-emulating, even with
shortcuts, could be much slower than running a programmed super-human
AI.
</div>
</div>
<div class="outline-3" id="outline-container-3_2">
<h3 id="sec-3_2">
First part: Run a verified mind securely </h3>
<div class="outline-text-3" id="text-3_2">
Enough caveats. On to the idea itself.
<br />
The first part of the idea is to run uploaded minds securely.
<br />
<ul>
<li>
Verify that the mind data is what was originally uploaded.
</li>
<li>
Verify that the simulated environment is a standard environment,
one designed not to prejudice the voter. This environment may
include a random seed.
</li>
<li>
Poll the mind in the secure simulated environment.
</li>
<li>
Output the satisfaction metric.
</li>
</ul>
This seems doable. There's been a fair amount of work on <a href="http://en.wikipedia.org/wiki/Cloud_computing_security">secure computation on untrusted machines</a>, and there's sure to be further
development. That will probably be secure even in the face of obscene
amounts of adversarial computing power.
<br />
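A minimal sketch of those verification steps, with plain hashes standing in for real attested secure computation (all names here are my own assumptions):

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def secure_poll(mind_data, expected_mind_digest,
                environment, expected_env_digest, poll):
    # 1. Verify the mind data is what was originally uploaded.
    if digest(mind_data) != expected_mind_digest:
        raise ValueError("mind data does not verify")
    # 2. Verify the simulated environment is the standard one
    #    (it may include a random seed; that would be part of the data).
    if digest(environment) != expected_env_digest:
        raise ValueError("environment does not verify")
    # 3. Poll the mind in the environment; 4. output the metric.
    return poll(mind_data, environment)
```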
And how I propose to ensure that this is actually done:
<br />
One important aspect of secure computation is that it provides
hard-to-forge evidence of compliance. With this in hand, FAIrchy
gives us an easy answer: Make this verification a component of the
utility function (Further on, I assume this connection is elaborated
as needed for various commit logs etc).
<br />
This isn't primarily meant to withhold reward from manipulators, but
to create incentive to keep the system running and secure. To
withhold reward from manipulators, when a failure to verify is seen,
the system might escrow a proportionate part of the payoff until the
mind in question is rerun and the computation verifies.
</div>
</div>
<div class="outline-3" id="outline-container-3_3">
<h3 id="sec-3_3">
Problems </h3>
<div class="outline-text-3" id="text-3_3">
<ul>
<li>
It's only as strong as strong encryption.
</li>
<li>
How does the mind know the state of the world, especially of his
personal interests? If we have to teach him the state of the
world:
<ul>
<li>
It's hard to be reasonably complete wrt his interests
</li>
<li>
It's very very hard to do so without creating opportunities for
distortion and other adverse presentation.
</li>
<li>
He can't have and use secret personal interests
</li>
</ul>
</li>
<li>
Dilemma:
<ul>
<li>
If the mind we poll is the same mind who is "doing the living":
<ul>
<li>
We've cut him off from the world to an unconscionable degree.
</li>
<li>
Were he to communicate, privacy is impossible for him.
</li>
<li>
We have to essentially run him all the time forever with 100%
uptime, making maintenance and upgrading harder and potentially
unfair.
</li>
<li>
Presumably everyone runs with the same government-specified
computing horsepower, so it's not clear that individuals could
buy more; in this it's socialist.
</li>
<li>
Constant running makes verification harder, possibly very much.
</li>
</ul>
</li>
<li>
If it isn't, his satisfaction can diverge from the version(s) of
him that are "doing the living". In particular, it gives no
incentive for anyone to respect those versions' interests, since
they are not reflected in the reported satisfaction.
</li>
</ul>
</li>
<li>
On failure to verify, how do we retry from a good state?
</li>
<li>
It's inefficient. Everything, important or trivial, must be done
under secure computation.
</li>
<li>
It's rigidly tied to the original state of the upload. Eventually
it might come to feel like being governed by our two-year-old
former selves.
</li>
</ul>
</div>
</div>
<div class="outline-3" id="outline-container-3_4">
<h3 id="sec-3_4">
Strong encryption </h3>
<div class="outline-text-3" id="text-3_4">
The first problem is the easy one. Being only as strong as strong
encryption still puts it on very strong footing.
<br />
<ul>
<li>
Current encryption is secure even under extreme extrapolations of
conventional computing power.
</li>
<li>
Even though RSA (prime-factoring) encryption may fall to <a href="http://en.wikipedia.org/wiki/Shor%27s_algorithm">Shor's Algorithm</a> when quantum computing becomes practical, some
encryption functions are not expected to.
</li>
<li>
Even if encryption doesn't always win the crypto "arms race" as
it's expected to, it gives the forces of legitimacy an advantage.
</li>
</ul>
</div>
</div>
<div class="outline-3" id="outline-container-3_5">
<h3 id="sec-3_5">
Second part: Expand the scope of action </h3>
<div class="outline-text-3" id="text-3_5">
ISTM the solution to these problems is to expand the scope of this
mechanism. No longer do we just poll him, we allow him to use this
secure computation as a platform to:
<br />
<ul>
<li>
Exchange information
<ul>
<li>
Surf-wise, email-wise, etc. Think ordinary net connection.
</li>
<li>
Intended for:
<ul>
<li>
News and tracking the state of the world
</li>
<li>
Learning about offers.
</li>
<li>
Negotiating agreements
</li>
<li>
Communicating and co-ordinating with others, perhaps loved ones
or coworkers.
</li>
<li>
Anything. He can just waste time and bandwidth.
</li>
</ul>
</li>
</ul>
</li>
<li>
Perform legal actions externally
<ul>
<li>
Spend money or other possessions
</li>
<li>
Contract to agreements
</li>
<li>
Delegate his personal utility metric, or some fraction of it.
Ie, that fraction of it would then be taken from the given
external source; presumably there'd be unforgeable digital
signing involved. Presumably he'd delegate it to some sort of
external successor self or selves.
</li>
<li>
Delegate any other legal powers.
</li>
<li>
(This all only goes thru if the computation running him verifies,
but all attempts are logged)
</li>
</ul>
</li>
<li>
Commit to alterations of his environment and even of his self.
<ul>
<li>
This includes even committing to an altered self created outside
the environment.
</li>
<li>
Safeguards:
<ul>
<li>
This too should only go thru if the computation running him
verifies, and attempts should be logged.
</li>
<li>
It shouldn't be possible to do this accidentally.
</li>
<li>
He'll have opportunity and advice to stringently verify its
correctness first.
</li>
<li>
There may be some "tryout" functionality whereby his earlier
self will be run (later or in parallel) to pass judgement on
the goodness of the upgrade.
</li>
</ul>
</li>
</ul>
</li>
<li>
Verify digital signatures and similar
<ul>
<li>
Eg, to check that external actions have been performed as
represented.
</li>
<li>
(This function is within the secure computation but external to
the mind. Think running GPG at will)
</li>
</ul>
</li>
</ul>
The intention is that he initially "lives" in the limited,
one-size-fits-all government-issue secure computing environment, but
uses these abilities to securely move himself outwards to better
secure environments. He could entirely delegate himself out of the
standard environment or continue to use it as a home base of sorts; I
provided as much flexibility there as I could.
</div>
</div>
<div class="outline-3" id="outline-container-3_6">
<h3 id="sec-3_6">
Problems solved
</h3>
<div class="outline-text-3" id="text-3_6">
This would immediately solve most of the problems above:
<br />
<ul>
<li>
He can know the state of the world, especially of his personal
interests, by surfing for news, contacting friends, basically using
a net connection.
</li>
<li>
Since he is the same mind who is "doing the living" except as he
delegates otherwise, there's no divergence of satisfaction.
</li>
<li>
He can avail himself of more efficient computation if he chooses,
in any manner and degree that's for sale.
</li>
<li>
He's not rigidly tied to the original state of the upload. He can
grow, even in ways that we can't conceive of today.
</li>
<li>
His inputs and outputs are no longer cut off from the world even
before he externalizes.
</li>
<li>
Individuals can buy more computing horsepower (and anything else),
though they can only use it externally. Even that restriction
seems not necessary, but that's a more complex design.
</li>
</ul>
Tackling the remaining problems:
<br />
<ul>
<li>
Restart: Of course he'd restart from the last known good state.
<ul>
<li>
Since we block legal actions for unverified runs, a malicious
host can't get him into any trouble.
</li>
<li>
We minimize ambiguity about which state is the last known good
state to make it hard to game on that.
<ul>
<li>
The verification logs are public or otherwise overseen.
</li>
<li>
(I think there's more that has to be done. Think <a href="http://en.wikipedia.org/wiki/Bitcoin#Blockchain">Bitcoin blockchains</a> as a possible model)
</li>
</ul>
</li>
</ul>
</li>
<li>
Running all the time:
<ul>
<li>
Although he initially "lives" there, he has reasonable other
options, so ISTM the requirements are less stringent:
<ul>
<li>
Uneven downtime, maintenance, and upgrading is less unfair.
</li>
<li>
Downtime is less unconscionable, especially after he has had a
chance to establish a presence outside.
</li>
</ul>
</li>
<li>
The use of virtual hosting may make this easier to do and fairer
to citizens.
</li>
</ul>
</li>
<li>
Privacy of communications:
<ul>
<li>
Encrypt his communications.
</li>
<li>
Obscure his communications' destinations. Think Tor or
Mixmaster.
</li>
</ul>
</li>
<li>
Privacy of self:
<ul>
<li>
Encrypt his mind data before it's made available to the host
</li>
<li>
Encrypt his mind even as it's processed by the host
(<a href="http://en.wikipedia.org/wiki/Homomorphic_computing">http://en.wikipedia.org/wiki/Homomorphic_computing</a>). This may
not be practical, because it's much slower than normal computing.
Remember, we need this to be fast enough to be doable before
super-intelligent AIs are.
</li>
<li>
"Secret-share" him to many independent hosts, which combine their
results. This may fall out naturally from human brain
organization. Even if it doesn't, it seems possible to introduce
confusion and diffusion.
</li>
<li>
(This is a tough problem)
</li>
</ul>
</li>
</ul>
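The "secret-share" idea can be illustrated with the simplest possible scheme, n-out-of-n XOR sharing: each host holds one random-looking share, and only the combination of all of them reconstructs the data. (A toy; a real system would want a threshold scheme like Shamir's.)

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list[bytes]:
    # n-1 shares are uniformly random; the last one is chosen so
    # that the XOR of all n shares equals the secret.
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def combine(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)
```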
</div>
</div>
<div class="outline-3" id="outline-container-3_7">
<h3 id="sec-3_7">
Security holes </h3>
<div class="outline-text-3" id="text-3_7">
The broader functionality opens many security holes, largely about
providing an honest, empowering environment to the mind. I won't
expand on them in this post, but I think they are not hard to close
with creative thinking.
<br />
There's just one potential exploit I want to focus on: A host running
someone multiple times, either in succession or staggered in parallel.
If he interacts with the world, say by reading news, this introduces
small variations which may yield different results. Not just
different satisfaction results, but different delegations, contracts,
etc. A manipulator would then choose the most favorable outcome and
report that as the "real" result, silently discarding the others.
<br />
One solution is to make a host commit so often that it cannot hold
multiple potentially-committable versions very long.
<br />
<ul>
<li>
Require a certain pace of computation.
</li>
<li>
Use frequent <a href="http://en.wikipedia.org/wiki/Trusted_timestamping">unforgeable digital timestamps</a> so a host must commit
frequently.
</li>
<li>
Sign and log the citizen's external communications so that any
second stream of them becomes publicly obvious. This need not
reveal the communications' content.
</li>
</ul>
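A toy version of the frequent-commitment idea: a hash chain over the host's committed states, standing in for real trusted timestamping. Because each entry chains over the previous one, a host cannot quietly keep two alternative histories open and pick the favorable one later. (All names here are my own assumptions.)

```python
import hashlib, json, time

def commit(log, state_digest, now=None):
    # Append a commitment that chains over the previous entry.
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"prev": prev, "state": state_digest,
             "time": now if now is not None else time.time()}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    # Recompute every hash; any tampering or fork breaks the chain.
    prev = "0" * 64
    for e in log:
        body = {k: e[k] for k in ("prev", "state", "time")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True
```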
</div>
</div>
<div class="outline-3" id="outline-container-3_8">
<h3 id="sec-3_8">
Checking via redundancy </h3>
<div class="outline-text-3" id="text-3_8">
Unlike the threat of a host running multiple diverging copies of
someone, running multiple <b>non-diverging</b> copies on multiple
independent hosts may be desirable, because:
<br />
<ul>
<li>
It makes the "secret-share" approach <a href="#sec-3_6">above</a> possible
</li>
<li>
A citizen's computational substrate is not controlled by any one
entity, which follows a general principle in security to guard
against exploits that depend on monopolizing access.
</li>
<li>
It is likely to detect non-verification much earlier.
</li>
</ul>
However, the <a href="http://en.wikipedia.org/wiki/CAP_theorem">CAP theorem</a> makes the ideal case impossible. We may have
to settle for softer guarantees like <a href="http://en.wikipedia.org/wiki/Eventual_consistency">Eventual Consistency</a>.
<br />
<br />
(Edit: Fixed stray anchor that Blogspot doesn't handle nicely)</div>
</div>
</div>
</div>
Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com0tag:blogger.com,1999:blog-5983563776019477979.post-73279931222639452262012-08-09T19:01:00.001-07:002012-08-09T19:01:22.012-07:00Parallel Dark Matter
<div xmlns='http://www.w3.org/1999/xhtml'>
<div class='outline-2' id='outline-container-1'>
<h2 id='sec-1'>Parallel Dark Matter 9 </h2>
<div id='text-1' class='outline-text-2'>
</div>
<div class='outline-3' id='outline-container-1_1'>
<h3 id='sec-1_1'>Previously </h3>
<div id='text-1_1' class='outline-text-3'>
<p>
I have been blogging about a theory I call <a href='http://tehom-blog.blogspot.com/2011/08/crazy-idea-parallel-dark-matter.html'>Parallel Dark Matter</a> (and
<a href='http://tehom-blog.blogspot.com/2011/08/simulated-evolution-of-dark-matter.html'>here</a> and <a href='http://tehom-blog.blogspot.com/2011/09/prediction-from-pdm.html'>here</a>), which I <a href='http://tehom-blog.blogspot.com/2012/05/i-may-not-be-first-to-propose-pdm.html'>may not be the first</a> to propose, though I seem
to be the first to flesh the idea out.
</p>
<p>
In particular, I <a href='http://tehom-blog.blogspot.com/2012/05/parallel-dark-matter-predicted-recent.html'>mentioned</a> recent news that <a href='http://www.nature.com/news/survey-finds-no-hint-of-dark-matter-near-solar-system-1.10494'>the solar system appears devoid of dark matter</a>, something that PDM predicted and no other
dark matter theory did.
</p>
</div>
</div>
</div>
<div class='outline-2' id='outline-container-2'>
<h2 id='sec-2'>Watch that title! </h2>
<div id='text-2' class='outline-text-2'>
<p>
So I was very surprised to read <a href='http://www.sciencedaily.com/releases/2012/08/120809090423.htm'>Plenty of Dark Matter Near the Sun</a> (or
<a href='http://www.mediadesk.uzh.ch/articles/2012/wo-licht-ist-ist-auch-viel-dunkle-materie_en.html'>here</a>). It appeared to contradict not only the earlier success of PDM
but also the recent observations.
</p>
<p>
But when I got the paper that the article is based on (<a href='http://adsabs.harvard.edu/abs/2012arXiv1206.0015G'>here</a> and from
the URL it looks like arXiv has it too), the abstract immediately set
the record straight.
</p>
<p>
By "near the sun", they don't mean "in the solar system" like you
might think. They mean the stellar neighborhood. It's not
immediately obvious just how big a chunk of stellar neighborhood they
are talking about, but you may get some idea from the fact that their
primary data is photometric distances to a set of K dwarf stars.
</p>
</div>
<div class='outline-3' id='outline-container-2_1'>
<h3 id='sec-2_1'>The paper </h3>
<div id='text-2_1' class='outline-text-3'>
<p>
Silvia Garbari, Chao Liu, Justin I. Read, George Lake. A new
determination of the local dark matter density from the kinematics of
K dwarfs. Monthly Notices of the Royal Astronomical Society, 9 August,
2012; 2012arXiv1206.0015G (<a href='http://adsabs.harvard.edu/abs/2012arXiv1206.0015G'>here</a>)
</p>
</div>
</div>
</div>
<div class='outline-2' id='outline-container-3'>
<h2 id='sec-3'>But that's not the worst </h2>
<div id='text-3' class='outline-text-2'>
<p>
science20.com got it even worse: "Lots Of Dark Matter Near The Sun, Says
Computer Model". No and no. They used a simulation of dark matter to
calibrate their mass computations. They did not draw their
conclusions from it.
</p>
</div>
</div>
<div class='outline-2' id='outline-container-4'>
<h2 id='sec-4'>And the Milky Way's halo may not be spherical </h2>
<div id='text-4' class='outline-text-2'>
<p>
The most interesting bit IMO is that their result "is at mild tension
with extrapolations from the rotation curve that assume a spherical
halo. Our result can be explained by a larger normalisation for the
local Milky Way rotation curve, an oblate dark matter halo, a local
disc of dark matter, or some combination of these."
</p>
</div>
</div>
</div>
Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com1tag:blogger.com,1999:blog-5983563776019477979.post-83002373498746776452012-08-04T17:08:00.001-07:002012-08-04T17:08:35.661-07:00Plastination 3
<div xmlns='http://www.w3.org/1999/xhtml'>
<div class='outline-2' id='outline-container-1'>
<h2 id='sec-1'>Plastination 3 </h2>
<div id='text-1' class='outline-text-2'>
</div>
<div class='outline-3' id='outline-container-1_1'>
<h3 id='sec-1_1'>Previously </h3>
<div id='text-1_1' class='outline-text-3'>
<p>
I blogged about <a href='http://tehom-blog.blogspot.com/2012/07/plastination-new-cryonics.html'>Plastination</a>, a potential alternative to cryonics, and
suggested storing, along with the patient, an EEG of their healthy
brain activity.
</p>
</div>
</div>
</div>
<div class='outline-2' id='outline-container-2'>
<h2 id='sec-2'>Why? </h2>
<div id='text-2' class='outline-text-2'>
<p>
Some people misunderstood the point of doing that. It is to provide a
potential cross-check. I won't try to guess how future simulators
might best use the cross-check.
</p>
<p>
And it isn't intended to rule out storing fMRI or MEG data also,
although neither seems practical to get every six months or so.
</p>
</div>
</div>
<div class='outline-2' id='outline-container-3'>
<h2 id='sec-3'>MEG-MRI </h2>
<div id='text-3' class='outline-text-2'>
<p>
But what to my wondering eyes should appear a few days after I wrote
that? <a href='http://www.sciencedaily.com/releases/2012/07/120726102756.htm'>MEG-MRI</a>, a technology that claims unprecedented accuracy in
measuring brain activity.
</p>
<p>
So I wrote this follow-up post to note MEG-MRI as another
potential source of cross-checking information.
</p></div>
</div>
</div>
Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com0tag:blogger.com,1999:blog-5983563776019477979.post-7863727632717796702012-07-19T10:28:00.001-07:002012-07-19T10:28:19.416-07:00Plastination 2
<div xmlns='http://www.w3.org/1999/xhtml'>
<div class='outline-2' id='outline-container-1'>
<h2 id='sec-1'>Plastination 2 </h2>
<div id='text-1' class='outline-text-2'>
</div>
<div class='outline-3' id='outline-container-1_1'>
<h3 id='sec-1_1'>Previously </h3>
<div id='text-1_1' class='outline-text-3'>
<p>I blogged about <a href='http://tehom-blog.blogspot.com/2012/07/plastination-new-cryonics.html'>Plastination</a>, a potential alternative to cryonics.
</p>
<p>
Luke's comment got me to write more (always a risk commenters take)
</p>
</div>
</div>
</div>
<div class='outline-2' id='outline-container-2'>
<h2 id='sec-2'>The biggest problem </h2>
<div id='text-2' class='outline-text-2'>
<p>
The big problem with plastination is that it is hit-or-miss. What it
preserves, it seems to preserve well, but at the current state of the
art, whole sections of the brain might go unpreserved. The
researchers who developed it didn't care about bringing their lab
rats back from the dead, so that was considered good enough.
</p>
<p>
From a layman's POV, infusing the whole brain with plastic doesn't
look any harder than infusing it with cryoprotectant, as cryonics
does, but there could be all sorts of technical details that make me
wrong.
</p>
</div>
</div>
<div class='outline-2' id='outline-container-3'>
<h2 id='sec-3'>So which wins, plastination or cryonics? </h2>
<div id='text-3' class='outline-text-2'>
<p>
A lot depends on which you judge more likely in a reasonable
time-frame: repair nanobots or emulation. I'd judge emulation much
more likely. We can <a href='http://en.wikipedia.org/wiki/Mind_uploading#Current_research'>already</a> emulate roundworms and have partly
emulated fruit flies. So I suspect Moore's law makes human emulation
in a reasonable time-frame much more likely than not.
</p>
</div>
</div>
<div class='outline-2' id='outline-container-4'>
<h2 id='sec-4'>Can we prove it? </h2>
<div id='text-4' class='outline-text-2'>
<p>
One thing I like about plastination-to-emulation is that we could
prove it out now. Teach a fruit fly some trick, or let it learn
something meaningful to a fruit fly - maybe the identity of a rival,
if fruit flies learn that.
</p>
<p>
Plastinate its brain, emulate it. Does it still know what it learned?
And know it equally well? If so, we can justifiably place some
confidence in this process. If not, we've just found a bug to fix.
</p>
<p>
So with plastination-to-emulation, we have the means to drive a
debugging cycle. That's very good.
</p>
</div>
</div>
<div class='outline-2' id='outline-container-5'>
<h2 id='sec-5'>Difference in revival population dynamics </h2>
<div id='text-5' class='outline-text-2'>
<p>
One difference that I don't know what to make of: If they work, the
population dynamics of revival would probably be quite different.
</p>
<p>
In plastination-to-emulation, revival becomes possible for everybody
at the same time. If you can scan in one plastinated brain, you can
scan any one.
</p>
<p>
In cryonics-to-cure-and-thaw, I expect there'd be waves as the various
causes of death were solved. Like, death from sudden heart attack
might be cured long before Alzheimer's disease became reversible, if
ever.
</p></div>
</div>
</div>
Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com0tag:blogger.com,1999:blog-5983563776019477979.post-29806243083460065372012-07-11T20:47:00.001-07:002012-07-12T09:35:43.021-07:00Plastination - the new cryonics?<div xmlns="http://www.w3.org/1999/xhtml">
<div class="outline-2" id="outline-container-1">
<h2 id="sec-1">
Plastination - an alternative to cryonics </h2>
<div class="outline-text-2" id="text-1">
</div>
<div class="outline-3" id="outline-container-1_1">
<h3 id="sec-1_1">
Previously </h3>
<div class="outline-text-3" id="text-1_1">
I'll assume that everyone who reads my blog has heard of <a href="http://en.wikipedia.org/wiki/Cryopreservation">cryonics</a>.
</div>
</div>
<div class="outline-3" id="outline-container-1_2">
<h3 id="sec-1_2">
Trending </h3>
<div class="outline-text-3" id="text-1_2">
<a href="http://en.wikipedia.org/wiki/Chemical_brain_preservation">Chemopreservation</a> has been known for some time, but has recently
received some attention as a credible alternative to cryonics. These
pages (<a href="http://www.gwern.net/plastination">PLASTINATION VERSUS CRYONICS</a>, <a href="http://www.evidencebasedcryonics.org/2008/02/25/better-biostasis-through-chemosuspension/">Biostasis through chemopreservation</a>) make the case well. They also explain some nuances
that I won't go into. But basically, chemopreservation stores you
more robustly by turning your brain into plastic. There's no liquid
nitrogen required, no danger of defrosting. With chemopreservation,
they can't just fix what killed you and "wake you up", you'd have to
be <a href="http://en.wikipedia.org/wiki/Mind_uploading">scanned and uploaded</a>.
</div>
</div>
<div class="outline-3" id="outline-container-1_3">
<h3 id="sec-1_3">
Are thawing accidents likely? Yes. </h3>
<div class="outline-text-3" id="text-1_3">
You might object that cryonics organizations such as Alcor would
never let you thaw, because they take their mission very seriously.
<br />
Without casting any aspersions on cryonics organizations' competence and integrity,
consider that recently, 150 autistic brains being stored for research
at McLean Hospital were accidentally allowed to thaw (<a href="http://theautismnews.com/2012/06/11/freezer-failure-at-brain-bank-hampers-autism-research">here</a>, <a href="http://www.clinicalpsychiatrynews.com/news/adult-psychiatry/single-article/brains-thaw-after-freezer-fails/3edcaf20ad5887860e2a2ef88ebf7bee.html">here</a>,
<a href="http://now.msn.com/now/0611-brain-freeze-autism.aspx">here</a>). McLean and Harvard presumably take their mission just as
seriously as Alcor and have certain organizational advantages.
</div>
</div>
</div>
<div class="outline-2" id="outline-container-2">
<h2 id="sec-2">
My two cents: Store EEG data too </h2>
<div class="outline-text-2" id="text-2">
In the cryonics model, storing your EEGs didn't make much sense.
When (if) resuscitation "restarted your motor", your brainwaves would
come back on their own. Why keep a reference for them?
<br />
But plastination assumes from the start that revival consists of
scanning your brain in and <a href="http://en.wikipedia.org/wiki/Mind_uploading">emulating it</a>. Reconstructing you would
surely be done computationally, so any source of information could be
fed into the reconstruction logic.
<br />
Ideally the plastinated brain would preserve all the information that
is you, and preserve it undistorted. But what if it preserved enough
information but garbled it? Like, the information that got through was
ambiguous. There would be no way to tell the difference between the
one answer that reconstructs your mind correctly and many other
answers that construct something or someone else.
<br />
Having a reference point in a different modality could help a lot. I
won't presume to guess how it would best be used in the future, but
from an info-theory stance, there's a real chance that it might
provide crucial information to reconstruct your mind correctly.
<br />
And having an EEG reference could provide something less crucial but
very nice: verification.
</div>
</div>
</div>Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com2tag:blogger.com,1999:blog-5983563776019477979.post-67380165779286722752012-06-20T12:41:00.001-07:002012-06-20T12:41:14.675-07:00Parallel Dark Matter - make that five
<div xmlns='http://www.w3.org/1999/xhtml'>
<div class='outline-2' id='outline-container-1'>
<h2 id='sec-1'>Hold that last brane </h2>
<div id='text-1' class='outline-text-2'>
</div>
<div class='outline-3' id='outline-container-1_1'>
<h3 id='sec-1_1'>Previously </h3>
<div id='text-1_1' class='outline-text-3'>
<p>
I have been blogging about a theory I call <a href='http://tehom-blog.blogspot.com/2011/08/crazy-idea-parallel-dark-matter.html'>Parallel Dark Matter</a> (and
<a href='http://tehom-blog.blogspot.com/2011/08/simulated-evolution-of-dark-matter.html'>here</a> and <a href='http://tehom-blog.blogspot.com/2011/09/prediction-from-pdm.html'>here</a>), which I <a href='http://tehom-blog.blogspot.com/2012/05/i-may-not-be-first-to-propose-pdm.html'>may not be the first</a> to propose, though I seem
to be the first to flesh the idea out.
</p>
<p>
Recently I posted (<a href='http://tehom-blog.blogspot.com/2012/06/brown-dwarfs-may-support-pdm.html'>Brown dwarfs may support PDM</a>) that wrt brown
dwarfs, the ratio between the number we see by visual observation and
the number that we seem to see by gravitational microlensing, 1/5, is
similar to what PDM predicts.
</p>
<p>
I had another look and it turns out I was working from bad data. The
ratio is not just similar, it's the same.
</p>
<p>
Dark matter accounts for 23% of the universe's mass, while visible
matter accounts for 4.6% (the remainder is dark energy). Ie, exactly
1/5. I don't know why I accepted a source that put it as 1/6; lazy, I
guess.
</p>
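The division itself, as a quick check (nothing PDM-specific here):

```python
# Mass fractions quoted above: dark matter 23%, visible matter 4.6%.
dark = 23.0
visible = 4.6

# If each brane carries the same amount of matter as ours, the number
# of dark branes is just the ratio of the two fractions.
dark_branes = dark / visible
print(round(dark_branes))  # 5
```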
<p>
That implies 5 dark branes rather than 6. I have updated my old PDM
posts accordingly.
</p>
</div>
</div>
</div>
</div>
Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com0tag:blogger.com,1999:blog-5983563776019477979.post-9451473859980005962012-06-11T10:38:00.001-07:002012-07-16T13:08:26.654-07:00Brown dwarfs may support PDM<div xmlns="http://www.w3.org/1999/xhtml">
<div class="outline-2" id="outline-container-1">
<h2 id="sec-1">
Some evidence from brown dwarfs may support PDM </h2>
<div class="outline-text-2" id="text-1">
</div>
<div class="outline-3" id="outline-container-1_1">
<h3 id="sec-1_1">
Previously </h3>
<div class="outline-text-3" id="text-1_1">
I have been blogging about a theory I call <a href="http://tehom-blog.blogspot.com/2011/08/crazy-idea-parallel-dark-matter.html">Parallel Dark Matter</a> (and
<a href="http://tehom-blog.blogspot.com/2011/08/simulated-evolution-of-dark-matter.html">here</a> and <a href="http://tehom-blog.blogspot.com/2011/09/prediction-from-pdm.html">here</a>), which I <a href="http://tehom-blog.blogspot.com/2012/05/i-may-not-be-first-to-propose-pdm.html">may not be the first</a> to propose, though I seem
to be the first to flesh the idea out.
</div>
</div>
</div>
<div class="outline-2" id="outline-container-2">
<h2 id="sec-2">
We see fewer brown dwarfs than we expected </h2>
<div class="outline-text-2" id="text-2">
In recent news, <a href="http://www.jpl.nasa.gov/news/news.cfm?release=2012-164">here</a> and <a href="http://www.sciencedaily.com/releases/2012/06/120608183648.htm">here</a>, a visual survey of brown dwarfs
(Wide-field Infrared Survey Explorer, or WISE) shows far fewer of them
than astronomers expected.
<br />
<blockquote>
Previous estimates had predicted as many brown dwarfs as typical
stars, but the new initial tally from WISE shows just one brown dwarf
for every six stars.
</blockquote>
Note the ratio between observed occurrence and predicted occurrence:
1/6. That's not the last word, though. Davy Kirkpatrick of WISE says
that:
<br />
<blockquote>
the results are still preliminary: it is highly likely that WISE will
discover additional Y dwarfs, but not in vast numbers, and probably
not closer than the closest known star, Proxima Centauri. Those
discoveries could bring the ratio of brown dwarfs to stars up a bit,
to about 1:5 or 1:4, but not to the 1:1 level previously anticipated
</blockquote>
</div>
</div>
<div class="outline-2" id="outline-container-3">
<h2 id="sec-3">
But gravitational lensing appeared to show that they were common </h2>
<div class="outline-text-2" id="text-3">
But <a href="http://en.wikipedia.org/wiki/Gravitational_microlensing">gravitational microlensing</a> events suggested that <a href="http://www.universetoday.com/29600/brown-dwarfs-could-be-more-common-than-we-thought/">brown dwarfs are common</a>; if they weren't, it'd be unlikely that we'd see gravitational
microlensing by them to that degree.
<br />
While I don't have the breadth of knowledge to properly survey the
argument for brown dwarf commonness, it's my understanding that this
was the main piece of evidence for it.
</div>
</div>
<div class="outline-2" id="outline-container-4">
<h2 id="sec-4">
This is just what PDM would predict </h2>
<div class="outline-text-2" id="text-4">
PDM predicts that we would "see" gravity from all six branes, but
only visually see the brown dwarfs from our own brane.
<br />
The ratio isn't exact but seems well within the error bars. They
found 33, so leaving out other sources of uncertainty, you'd expect
only a 68% chance that the "right" figure - ie, if it were exactly the
same as the average over the universe - would be between 27 and 38.
<br />
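That 68% band is the usual one-sigma Poisson counting interval, sqrt(N) around a count of N. A quick check of the arithmetic (just a sketch; the paper's own error analysis is surely more careful):

```python
# One-sigma (68%) Poisson counting interval around the WISE tally.
n = 33                      # brown dwarfs found
sigma = n ** 0.5            # counting error, about 5.7
low, high = n - sigma, n + sigma
print(int(low), int(high))  # 27 38
```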
Note that PDM predicts a 1/6 ratio between <b>gravitational observations</b> and <b>visual observations</b>. I emphasize that because in
the quotes above, the ratios were between something different, visual
observations of brown dwarfs vs visible stars.
</div>
</div>
</div>Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com0tag:blogger.com,1999:blog-5983563776019477979.post-61569451719141664852012-05-18T19:52:00.001-07:002012-05-18T19:52:09.308-07:00Emtest
<div xmlns='http://www.w3.org/1999/xhtml'>
<div class='outline-2' id='outline-container-1'>
<h2 id='sec-1'>Emtest </h2>
<div id='text-1' class='outline-text-2'>
</div>
<div class='outline-3' id='outline-container-1_1'>
<h3 id='sec-1_1'>Previously </h3>
<div id='text-1_1' class='outline-text-3'>
<p>
Some years back, I wrote a testing framework for emacs called Emtest.
It lives in a repo hosted on <a href='http://savannah.nongnu.org/projects/emtest'>Savannah</a>, mirrored <a href='https://github.com/emacsmirror/emtest'>here</a>, doc'ed <a href='http://www.emacswiki.org/emacs/Emtest'>here</a>.
</p>
</div>
</div>
</div>
<div class='outline-2' id='outline-container-2'>
<h2 id='sec-2'>Cucumber </h2>
<div id='text-2' class='outline-text-2'>
<p>
Recently a testing framework called <a href='http://cukes.info'>Cucumber</a> came to my attention. I
have multiple reactions to it:
</p>
</div>
<div class='outline-3' id='outline-container-2_1'>
<h3 id='sec-2_1'><a id='ID-e09a1955-2fca-4ec6-9bff-c5b16392a86b' name='ID-e09a1955-2fca-4ec6-9bff-c5b16392a86b'/>They somewhat adopted my approach of table-driven testing. </h3>
<div id='text-2_1' class='outline-text-3'>
<p>
Hooray! They somewhat adopted my approach of table-driven testing.
When I started using table-driven testing and made it available in
Emtest, nobody was doing that. Back then, factory methods were the
big thing.
</p>
<p>
I created it because I saw a dilemma. Often one is testing
functionality that builds an output from a related input. Before,
there were no good options to relate input and output. You could:
</p>
<ul>
<li>
Repeat yourself by writing both the inputs and the outputs that
often contained the same values. It's a huge error opportunity,
along with all the other vices of repeating yourself in source
code.
</li>
<li>
Write a test that constructed or deconstructed objects. Such tests
are typically almost as complex as the function they test.
</li>
<li>
Build the output examples from the input examples by name. But
juggling dozens of very similar names this way clutters the
namespace and is a huge PITA. This was the actual impetus for me
to invent a better way to do it.
</li>
</ul>
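To illustrate the idea (a sketch in Python rather than Emacs Lisp, with a made-up function under test; this is not Emtest syntax):

```python
# Table-driven testing: each row pairs an input with its expected
# output, so related values sit side by side instead of being
# repeated or juggled by name.
def capitalize_words(s):
    # Hypothetical function under test.
    return " ".join(w.capitalize() for w in s.split())

TABLE = [
    # (name,      input,          expected)
    ("empty",     "",             ""),
    ("one word",  "hello",        "Hello"),
    ("two words", "hello world",  "Hello World"),
]

for name, inp, expected in TABLE:
    assert capitalize_words(inp) == expected, name
print("all rows pass")
```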
</div>
</div>
<div class='outline-3' id='outline-container-2_2'>
<h3 id='sec-2_2'>But they left important parts unadopted </h3>
<div id='text-2_2' class='outline-text-3'>
<p>
But they didn't really adopt table testing in its full power. There
are a number of things I have found important for table-driven testing
that they apparently have not contemplated:
</p>
<dl>
<dt>N/A fields</dt><dd>
These are unprovided fields. A test detects them,
usually skipping over rows that lack a relevant
field. This is more useful than you might think.
Often you are defining example inputs to a function
that usually produces output (another field) but
sometimes ought to raise error. For those cases, you
need to provide inputs but there is nothing sensible
to put in the output field.
</dd>
<dt>Constructed fields</dt><dd>
Often you want to construct some fields in
terms of other fields in the same row. The <a href='#sec-2_1'>rationale</a> above
leads directly there.
</dd>
<dt>Constructed fields II</dt><dd>
And often you want to construct examples
in terms of examples that are used in other tests. You know
those examples are right because they are part of working tests.
If they had some subtle stupid mistake in them, it'd have
already shown up there. Reuse is nice here.
</dd>
<dt>Persistent fields</dt><dd>
This idea is not originally mine, it comes
from an article on Gamasutra<sup><a href='#fn.1' name='fnr.1' class='footref'>1</a></sup>. I did expand it a lot,
though. The author looked for a way to test image generation
(scenes) and what he did was at some point, capture a "good"
image the same image generator. Then from that point on, he
could automatically compare the output to a known good image.
<ul>
<li>
He knew for sure when it passed.
</li>
<li>
When the comparison failed, he could diff the images and see
where and how badly; it might be unnoticeable dithering or the
generator might have omitted entire objects or shadows.
</li>
<li>
He could improve the reference image as his generator got better.
</li>
</ul>
</dd>
</dl>
<p>
I've found persistent fields indispensable. I use them for basically
anything that's easier to inspect than it is to write examples of.
For instance, about half of the <a href='http://repo.or.cz/r/Klink.git'>Klink</a> tests use it.
</p>
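A minimal sketch of how persistent fields work (in Python, with a hypothetical file layout; Emtest's own mechanism differs in detail): the first run captures the output as the reference, later runs compare against it, and you can deliberately re-capture when the output improves.

```python
import json
import os
import tempfile

def check_against_reference(name, value, ref_dir, accept=False):
    """Compare value to a stored known-good copy, capturing it if absent.

    Pass accept=True to re-capture the reference, e.g. after
    inspecting an improved output by hand."""
    os.makedirs(ref_dir, exist_ok=True)
    path = os.path.join(ref_dir, name + ".json")
    if accept or not os.path.exists(path):
        with open(path, "w") as f:
            json.dump(value, f)
        return True  # nothing to compare against yet
    with open(path) as f:
        reference = json.load(f)
    return value == reference

ref_dir = tempfile.mkdtemp()
assert check_against_reference("scene-1", [1, 2, 3], ref_dir)      # captured
assert check_against_reference("scene-1", [1, 2, 3], ref_dir)      # matches
assert not check_against_reference("scene-1", [1, 2, 4], ref_dir)  # regression
```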
</div>
</div>
<div class='outline-3' id='outline-container-2_3'>
<h3 id='sec-2_3'>They didn't even mention me </h3>
<div id='text-2_3' class='outline-text-3'>
<p>
AFAICT neither Cucumber nor Gherkin credits me at all. Maybe they're
honestly unaware of the lineage of the ideas they're using. Still, it
gets tiresome not getting credit for stuff that AFAICT I invented and
gave freely to everybody in the form of working code.
</p>
</div>
</div>
<div class='outline-3' id='outline-container-2_4'>
<h3 id='sec-2_4'>They don't use TESTRAL or anything like it. </h3>
<div id='text-2_4' class='outline-text-3'>
<p>
TESTRAL is the format I defined for reporting tests. Without going
into great detail, TESTRAL is better than anything else out there.
Not just better than the brain-dead <code>ad hoc</code> formats, but better than
TestXML.
</p>
</div>
</div>
<div class='outline-3' id='outline-container-2_5'>
<h3 id='sec-2_5'>BDD is nice </h3>
<div id='text-2_5' class='outline-text-3'>
<p>
Still, I think they have some good ideas, especially regarding
<a href='http://en.wikipedia.org/wiki/Behavior_Driven_Development'>Behavior Driven Development</a>. IMO that's much better than Test-Driven
Development<sup><a href='#fn.2' name='fnr.2' class='footref'>2</a></sup>.
</p>
<p>
In TDD, you're expected to test down to the fine-grained units. I've
gone that route, and it's a chore. Yes, you get a nice regression
suite, but pretty soon you just want to say "just let me write code!"
</p>
<p>
In contrast, where TDD is bottom-up, BDD is top-down. Your tests
come from use-cases (which are structured the way I structure inline
docstrings in tests, which is nice, and just how much did you Cucumber
guys borrow?) BDD looks like a good paradigm for development.
</p>
</div>
</div>
</div>
<div class='outline-2' id='outline-container-3'>
<h2 id='sec-3'>Not satisfied with Emtest tables, I replaced them </h2>
<div id='text-3' class='outline-text-2'>
<p>
But my "I was first" notwithstanding, I'm not satisfied with the way I
made Emtest do tables. At the time, because nobody anywhere had
experience with that sort of thing, I adopted the most flexible
approach I could see. This was tag-based, an idea I borrowed from
Carsten Dominik's org-mode<sup><a href='#fn.3' name='fnr.3' class='footref'>3</a></sup>.
</p>
<p>
However, over the years the tag-based approach has proved too
powerful.
</p>
<ul>
<li>
It takes a lot of clever code behind the scenes to make it work.
</li>
<li>
Maintaining that code is a PITA. Really, it's been one of the most
time-consuming parts of Emtest, and always had the longest todo list.
</li>
<li>
In front of the scenes, there's too much power. That's not as good
as it sounds, and led to complex specifications because too many
tags needed management.
</li>
<li>
Originally I had thought that a global tag approach would work
best, because it would make the most stuff available. That was a
dud, which I fixed years ago.
</li>
</ul>
</div>
<div class='outline-3' id='outline-container-3_1'>
<h3 id='sec-3_1'>So, new tables for Emtest </h3>
<div id='text-3_1' class='outline-text-3'>
<p>
So this afternoon I coded a better table package for Emtest. It's
available on Savannah right now; rather, the new Emtest with it is
available. It's much simpler to use:
</p><dl>
<dt>emt:tab:make</dt><dd>
define a table, giving arguments:
<dl>
<dt>docstring</dt><dd>
A docstring for the entire table.
</dd>
<dt>headers</dt><dd>
A list of column names. For now they are simply
symbols, later they may get default initialization
forms and other help
</dd>
<dt>rows</dt><dd>
The remaining arguments are rows. Each begins with a
namestring.
</dd>
</dl>
</dd>
<dt>emt:tab:for-each-row</dt><dd>
Evaluate <code>body</code> once for each row, with the
row bound to <code>var-sym</code>
</dd>
<dt>emt:tab</dt><dd>
Given a table row and a field symbol, get the value of
the respective field
</dd>
</dl>
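To make those semantics concrete, here is the same three-operation interface mirrored in Python (an illustration of the behavior described above, not Emtest code; the real API is the Emacs Lisp one):

```python
class Table:
    """A named-row test table: a docstring, column headers, and rows."""
    def __init__(self, docstring, headers, *rows):
        self.docstring = docstring
        self.headers = headers
        # Each row begins with a namestring, then one value per header.
        self.rows = [(row[0], dict(zip(headers, row[1:]))) for row in rows]

    def for_each_row(self, body):
        """Call body once per row, passing the row's name and fields."""
        for name, fields in self.rows:
            body(name, fields)

def tab(fields, field_name):
    """Get the value of a row's field, like emt:tab."""
    return fields[field_name]

table = Table(
    "Doubling examples",
    ["input", "expected"],
    ("zero", 0, 0),
    ("two", 2, 4),
)
results = []
table.for_each_row(
    lambda name, row: results.append(tab(row, "input") * 2 == tab(row, "expected")))
assert all(results)
```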
<p>
I haven't added Constructed fields or Persistent fields yet. I will
when I have to use them.
</p>
</div>
</div>
</div>
<div class='outline-2' id='outline-container-4'>
<h2 id='sec-4'>Also added foreign-tester support </h2>
<div id='text-4' class='outline-text-2'>
<p>
Emtest also now supports foreign testers. That is, it can communicate
with an external process running a tester, and then report that
tester's results and do all the bells and whistles (persistence,
organizing results, expanding and collapsing them, point-and-shoot
launching of tests, etc) So the external tester can be not much more
than "find test, run test, build TESTRAL result".
</p>
<p>
It communicates in Rivest-style canonical s-expressions, which is as
simple a structured format as they come. It's as expressive as XML,
and interconverters exist.
</p>
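Canonical s-expressions really are that simple: atoms are length-prefixed byte strings, lists are parenthesized, and there is no whitespace, so each structure has exactly one encoding. A sketch of an encoder (my own illustration, not code from Emtest or gclient):

```python
def csexp(obj):
    """Encode nested lists of byte strings as a Rivest-style
    canonical s-expression: each atom is <length>:<bytes>, each
    list is wrapped in parentheses, with no whitespace anywhere."""
    if isinstance(obj, bytes):
        return str(len(obj)).encode() + b":" + obj
    return b"(" + b"".join(csexp(x) for x in obj) + b")"

# e.g. a message reporting that test "foo" passed:
print(csexp([b"test-result", b"foo", b"pass"]))
# b'(11:test-result3:foo4:pass)'
```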
<p>
I did this with the idea of using it for the Functional Reactive
Programming <a href='http://tehom-blog.blogspot.com/2012/05/mutability-and-signals-3.html'>stuff</a> I was talking about before, if in fact I make a test
implementation for it (Not sure).
</p>
</div>
</div>
<div class='outline-2' id='outline-container-5'>
<h2 id='sec-5'>And renamed to tame the chaos </h2>
<div id='text-5' class='outline-text-2'>
<p>
At one time I had written Emtest so that the function and command
prefixes were all modular. Originally they were written-out, like
<code>emtest/explorer/fileset/launch</code>. That was huge and unwieldy, so I
shortened their prefixes to module unique abbreviations like <code>emtl:</code>
</p>
<p>
But when I looked at it again now, that was chaos! So now
</p><ul>
<li>
Everything the user would normally use is prefixed <code>emtest</code>
<ul>
<li>
Main entry point <code>emtest</code>
</li>
<li>
Code-editing entry point <code>emtest:insert</code>
</li>
<li>
"Panic" reset command <code>emtest:reset</code>
</li>
<li>
etc
</li>
</ul>
</li>
<li>
Everything else is prefixed <code>emt:</code> followed by a 2 or 3 letter
abbreviation of its module.
</li>
</ul>
<p>
I haven't done this to the define and testhelp modules, though, since
the old names are probably still in use somewhere.
</p>
</div>
</div>
<div id='footnotes'>
<h2 class='footnotes'>Footnotes: </h2>
<div id='text-footnotes'>
<p class='footnote'><sup><a href='#fnr.1' name='fn.1' class='footnum'>1</a></sup> See, when I borrow ideas, I <b>credit</b> the people it came from,
even if I have improved on it. Can't find the article but I did look;
it was somewhat over 5 years ago, one of the first big articles on
testing there.
</p>
<p class='footnote'><sup><a href='#fnr.2' name='fn.2' class='footnum'>2</a></sup> Kent Beck's. Again, crediting the originator.
</p>
<p class='footnote'><sup><a href='#fnr.3' name='fn.3' class='footnum'>3</a></sup> Again credit where it's due. He didn't invent tags, of course,
and I don't know who was upstream from him wrt that.
</p>
</div>
</div>
</div>
Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com2tag:blogger.com,1999:blog-5983563776019477979.post-30986547766626621562012-05-12T15:09:00.001-07:002012-05-12T15:09:51.890-07:00Mutability And Signals 3
<div xmlns='http://www.w3.org/1999/xhtml'>
<div class='outline-2' id='outline-container-1'>
<h2 id='sec-1'>Mutability And Signals 3 </h2>
<div id='text-1' class='outline-text-2'>
</div>
<div class='outline-3' id='outline-container-1_1'>
<h3 id='sec-1_1'>Previously </h3>
<div id='text-1_1' class='outline-text-3'>
<p>
I have a crazy notion of using signals to fake mutability, thereby
putting a sort of functional reactive programming on top of formally
immutable data. (<a href='http://tehom-blog.blogspot.com/2011/10/mutability-and-signals.html'>here</a> and <a href='http://tehom-blog.blogspot.com/2011/12/signals-continuations-and-constraint.html'>here</a>)
</p>
</div>
</div>
</div>
<div class='outline-2' id='outline-container-2'>
<h2 id='sec-2'>Now </h2>
<div id='text-2' class='outline-text-2'>
<p>
So recently I've been looking at how that might be done. Which
basically means using <a href='http://en.wikipedia.org/wiki/Persistent_data_structure'>fully persistent data structures</a>. Other major
requirements:
</p>
<ul>
<li>
Cheap deep-copy
</li>
<li>
Support a mutate-in-place strategy (which I'd default to, though
I'd also default to immutable nodes)
</li>
<li>
Means to propagate signals upwards in the overall digraph (ie,
propagate in its transpose)
</li>
</ul>
</div>
</div>
<div class='outline-2' id='outline-container-3'>
<h2 id='sec-3'>Fully persistent data promises much </h2>
<div id='text-3' class='outline-text-2'>
<ul>
<li>
As mentioned, signals formally replacing mutability.
</li>
<li>
Easily keep functions that shouldn't mutate objects outside
themselves from doing so, even in the presence of keyed dynamic
variables. For instance, type predicates.
</li>
<li>
From the above, cleanly support typed slots and similar.
</li>
<li>
Trivial undo.
</li>
<li>
Real <a href='http://en.wikipedia.org/wiki/Functional_reactive_programming'>Functional Reactive Programming</a> in a Scheme. Implementations
like Cell and FrTime are interesting but "bolted on" to languages
that disagree with them. Flapjax certainly caught my interest but
it's different (behavior based).
</li>
<li>
I'm tempted to implement logic programming and even constraint
handling on top of it. Persistence does some major heavy lifting
for those, though we'd have to distinguish "immutable",
"mutate-in-place", and "constrain-only" versions.
</li>
<li>
If constraint handling works, that basically gives us partial
evaluation.
</li>
<li>
And I'm tempted to implement <a href='http://en.wikipedia.org/wiki/Software_transactional_memory'>Software Transactional Memory</a> on it.
Once you have fully persistent versioning, STM just looks like
merging versions if they haven't collided or applying a failure
continuation if they have. Detecting in a fine-grained way whether
they have is the remaining challenge.
</li>
</ul>
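<p>
The last point can be sketched concretely. This is only an illustrative Python toy with a coarse-grained version clock, not DSST's fine-grained versioning: commit is a merge when the read set is clean, and a failure continuation otherwise.
</p>

```python
# Sketch of STM on top of versioning: a transaction snapshots a version,
# logs reads and writes, and commits only if nothing it read has changed
# since the snapshot. Coarse-grained and illustrative only.

class VersionedStore:
    def __init__(self):
        self.data = {}        # key -> value
        self.version = {}     # key -> clock value at last write
        self.clock = 0

    def begin(self):
        return {'start': self.clock, 'reads': set(), 'writes': {}}

    def read(self, tx, key):
        tx['reads'].add(key)
        return tx['writes'].get(key, self.data.get(key))

    def write(self, tx, key, value):
        tx['writes'][key] = value

    def commit(self, tx, on_fail=None):
        # Collision check: did any key we read change after our snapshot?
        if any(self.version.get(k, 0) > tx['start'] for k in tx['reads']):
            return on_fail() if on_fail else False   # failure continuation
        self.clock += 1                              # merge the versions
        for k, v in tx['writes'].items():
            self.data[k] = v
            self.version[k] = self.clock
        return True

store = VersionedStore()
t1 = store.begin(); t2 = store.begin()
store.write(t1, 'x', 1)
assert store.read(t2, 'x') is None       # t2's snapshot predates t1's write
assert store.commit(t1) is True
assert store.commit(t2) is False         # t2's read of 'x' collided
```

<p>
The interesting (and here elided) part is exactly what the post says: detecting collisions in a fine-grained way, rather than with one global clock.
</p>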
</div>
</div>
<div class='outline-2' id='outline-container-4'>
<h2 id='sec-4'>DSST: Great but yikes </h2>
<div id='text-4' class='outline-text-2'>
<p>
So for fully persistent data structures, I read the Driscoll, Sarnak,
Sleator and Tarjan paper (and others, but only DSST gave me the
details). On the one hand, it basically gave me what I needed to
implement this, if in fact I do. On the other hand, there were a
number of "yikes!" moments.
</p>
<p>
The first was discovering that their solution did not apply to
arbitrary digraphs, but to digraphs with a constant upper bound <code>p</code> on
the number of incoming pointers. So the <code>O(1)</code> cost they reported is
misleading. <code>p</code> "doesn't count" because it's a constant, but really
we <b>do</b> want in-degree to be arbitrarily large, so it does count. I
don't think it will be a big deal because the typical node in-degree
is small in all the code I've seen, even in some relentlessly
self-referring monstrosities that I expect are the high-water mark for
this.
</p>
<p>
Second yikes was a gap between the version-numbering means they refer
to (Dietz et al) and their actual needs for version-numbering. Dietz
et al just tell how to efficiently renumber a list when there's no
room to insert a new number.
</p>
<p>
Figured that out: I have to use a level of indirection for the real
indexes. Everything (version data and persistent data structure) holds
indirect indexes and looks up the real index when it needs it. The
version-renumbering strategy is not crucial.
</p>
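<p>
A sketch of that indirection in Python, with invented names (this is not Dietz et al's renumbering algorithm, just the extra level of lookup): holders keep stable handles, and renumbering rewrites only the handle table.
</p>

```python
# Sketch of the indirection trick: holders keep stable handles; only the
# handle->index table changes when the version list is renumbered, so the
# renumbering strategy itself stops mattering to the holders.

class VersionOrder:
    def __init__(self):
        self.handles = []          # handles in version order
        self.index = {}            # handle -> current real index

    def insert_after(self, handle, new_handle):
        pos = self.index[handle] if handle is not None else -1
        self.handles.insert(pos + 1, new_handle)
        self._renumber()           # renumbering touches only this table
        return new_handle

    def _renumber(self):
        self.index = {h: i for i, h in enumerate(self.handles)}

    def precedes(self, a, b):
        return self.index[a] < self.index[b]

order = VersionOrder()
v0 = order.insert_after(None, 'v0')
v2 = order.insert_after(v0, 'v2')
v1 = order.insert_after(v0, 'v1')   # insert between v0 and v2
assert order.precedes(v0, v1) and order.precedes(v1, v2)
```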
<p>
Third: Mutation boxes. DSST know about them and provide space for
them, but then totally ignore them when describing the algorithm.
Including them would make the description much more complex, they
explain. True, it would. But the reader is left staring at a
gratuitously costly operation instead.
</p>
<p>
But I don't want to sound like I'm down on them. Their use of
version-numbering was indispensable. Once I read and understood that,
the whole thing suddenly seemed practical.
</p>
</div>
</div>
<div class='outline-2' id='outline-container-5'>
<h2 id='sec-5'>Deep copy </h2>
<div id='text-5' class='outline-text-2'>
<p>
But that still didn't implement a cheap deep copy on top of
mutate-in-place. You could freeze a copy of the whole digraph,
everywhere, but then you couldn't hold both that and a newer copy in a
single structure. You'd see either two copies of version A or two
copies of version B, but never A and B together.
</p>
<p>
Mixing versions tends to call up thoughts of confluent persistence,
but IIUC this is a completely different thing. Confluent persistence
IIUC tries to merge versions for you, which limits its generality.
That would be like (say) finding every item that was in some database
either today or Jan 1; that's different.
</p>
<p>
What I need is to hold multiple versions of the same structure at the
same time, otherwise deep-copy is going to be very misleading.
</p>
<p>
So I'd introduce "version-mapping" nodes, transparent single-child
nodes that, when they are<sup><a href='#fn.1' name='fnr.1' class='footref'>1</a></sup> accessed as one version, their child
is explored as if a different version. Explore by one path, it's
version A, by another it's version B.
</p>
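<p>
A toy sketch of the idea in Python (hypothetical names; the real thing would sit on DSST fat nodes): the mapper swaps the version stamp on the way down, so the same child reads differently along different paths.
</p>

```python
# Sketch of version-mapping: a transparent node that redirects lookups
# below it to a different version, so one structure can hold version A
# along one path and version B along another. Illustrative names only.

class FatNode:
    """A node storing one value per version stamp."""
    def __init__(self, values=None):
        self.values = dict(values or {})   # version -> value

    def get(self, version):
        return self.values[version]

class VersionMap:
    """Transparent single-child node: below here, read as `target_version`."""
    def __init__(self, child, target_version):
        self.child = child
        self.target_version = target_version

    def get(self, version):                # incoming version stamp is swapped
        return self.child.get(self.target_version)

leaf = FatNode({'A': 'old', 'B': 'new'})
as_a = VersionMap(leaf, 'A')
as_b = VersionMap(leaf, 'B')
assert as_a.get('B') == 'old'    # explored by this path, it's version A
assert as_b.get('A') == 'new'    # by the other path, it's version B
```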
</div>
</div>
<div class='outline-2' id='outline-container-6'>
<h2 id='sec-6'>Signals </h2>
<div id='text-6' class='outline-text-2'>
<p>
Surprisingly, one part of what I needed for signals just fell out of
DSST: parent pointers, kept up to date.
</p>
<p>
Aside from that, I'd:
</p><ul>
<li>
Have signal receiver nodes. Each is constructed with a combiner and
an arbitrary data object; it evaluates that combiner when anything
below it is mutated, passing the old copy, new copy, receiver object,
and path. This argobject looks very different under the hood: old
and new copy are recovered from the receiver object plus version
stamps, so it's almost free.
</li>
<li>
When signals cross the mappers I added above, change the version
stamps they hold. This is actually trivial.
</li>
<li>
As an optimization, so we wouldn't be sending signals when there's
no possible receiver, I'd flag parent pointers as to whether
anything above them wants a signal.
</li>
</ul>
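<p>
A minimal Python sketch of the receiver scheme (all names invented; the version-stamp recovery is elided here, so the combiner just gets old and new values directly):
</p>

```python
# Sketch of signal receivers: a mutation walks the parent pointers
# (the digraph's transpose) and fires each ancestor's combiner with
# (old, new, receiver_data, path). wants_signal prunes the common
# case where nothing above is listening.

class Node:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)
        self.parent = None
        for c in self.children:
            c.parent = self
        self.receiver = None           # (combiner, data) when listening
        self.wants_signal = False      # True if a receiver sits at/above here

    def listen(self, combiner, data):
        self.receiver = (combiner, data)
        stack = [self]                 # flag the whole subtree below us
        while stack:
            n = stack.pop()
            n.wants_signal = True
            stack.extend(n.children)

    def mutate(self, new_value):
        old, self.value = self.value, new_value
        if not self.wants_signal:      # optimization: no receiver above
            return
        n, path = self, []
        while n is not None:           # propagate upward (the transpose)
            path.append(n)
            if n.receiver is not None:
                combiner, data = n.receiver
                combiner(old, new_value, data, list(path))
            n = n.parent

log = []
root = Node('root', [Node('kid')])
root.listen(lambda old, new, data, path: log.append((old, new, data)), 'rcv')
root.children[0].mutate('KID')
assert log == [('kid', 'KID', 'rcv')]
```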
</div>
</div>
<div class='outline-2' id='outline-container-7'>
<h2 id='sec-7'>Change of project </h2>
<div id='text-7' class='outline-text-2'>
<p>
If I code this, and that's a big <b>if</b>, it will likely be a different
project than Klink, my <a href='http://tehom-blog.blogspot.com/2010/10/kernel-better-scheme.html'>Kernel</a> interpreter, though I'll borrow code
from it.
</p>
<ul>
<li>
It's such a major change that it hardly seems right to call it a
Kernel interpreter.
</li>
<li>
With experience, there are any number of things I'd do differently.
So if I restart, it'll be in C++ with fairly heavy use of templates
and inheritance.
</li>
<li>
It's also an excuse to use EMSIP.
</li>
</ul>
</div>
</div>
<div id='footnotes'>
<h2 class='footnotes'>Footnotes: </h2>
<div id='text-footnotes'>
<p class='footnote'><sup><a href='#fnr.1' name='fn.1' class='footnum'>1</a></sup> Yes, I believe in using resumptive pronouns when it makes a
sentence flow better.
</p>
</div>
</div>
</div>
Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com0tag:blogger.com,1999:blog-5983563776019477979.post-72209108896471184722012-05-12T10:31:00.001-07:002012-05-12T10:56:20.757-07:00Review Inside Jokes 1<div xmlns="http://www.w3.org/1999/xhtml">
<div class="outline-2" id="outline-container-1">
<h2 id="sec-1">
Review Inside Jokes 1 </h2>
<div class="outline-text-2" id="text-1">
</div>
<div class="outline-3" id="outline-container-1_1">
<h3 id="sec-1_1">
Previously </h3>
<div class="outline-text-3" id="text-1_1">
I am currently reading <a href="http://insidejokesbook.com/"><span style="text-decoration: underline;">Inside Jokes</span></a> by Matthew M. Hurley, Daniel
C. Dennett, and Reginald B. Adams Jr. So far, the book has been
enlightening.
</div>
</div>
</div>
<div class="outline-2" id="outline-container-2">
<h2 id="sec-2">
Brief summary </h2>
<div class="outline-text-2" id="text-2">
Their theory, which seems likely to me, is that humor occurs when you
retract an active, committed, covertly entered belief.
<br />
<dl>
<dt>Active</dt>
<dd>It's active in your mind at the moment. They base this
on a Just-In-Time <a href="http://en.wikipedia.org/wiki/Spreading_activation">Spreading Activation</a> model.
</dd>
<dt>Covertly entered</dt>
<dd>Not a belief that you consciously came to. You
assumed it "automatically".
</dd>
<dt>Committed</dt>
<dd>A belief that you're sure about, as opposed to a
"maybe". To an ordinary degree, not necessarily to a
metaphysical certitude.
</dd>
</dl>
And a blocking condition: Strong negative emotions block humor.
</div>
<div class="outline-3" id="outline-container-2_1">
<h3 id="sec-2_1">
Basic humor </h3>
<div class="outline-text-3" id="text-2_1">
What they call "basic" humor is purely in your own "simple" (my word)
mental frame. That frame is not interpersonal, doesn't have a theory
of mind. Eg, when you suddenly realize where you left your car keys
and it's a place that you foolishly ruled out before, which is often
funny, that's basic humor.
</div>
</div>
<div class="outline-3" id="outline-container-2_2">
<h3 id="sec-2_2">
Non-basic humor </h3>
<div class="outline-text-3" id="text-2_2">
Non-basic humor occurs in other mental frames. These frames have to
include a theory of mind. Ie, we can't joke about clams - normal
clams, not anthropomorphized in some way. I expect this follows from
the requirement of retracting a belief in that frame.
</div>
</div>
</div>
<div class="outline-2" id="outline-container-3">
<h2 id="sec-3">
Did they miss a trick? </h2>
<div class="outline-text-2" id="text-3">
They say that in third-person humor, the belief we retract is in
our frame of how another person is thinking, what I might call an
"empathetic frame".
<br />
I think that's a mis-step. A lot of jokes end with the butt of the
joke plainly unenlightened. It's clear to everyone that nothing has
been retracted in his or her mind. ISTM this doesn't fit at all.
</div>
<div class="outline-3" id="outline-container-3_1">
<h3 id="sec-3_1">
Try social common ground instead. </h3>
<div class="outline-text-3" id="text-3_1">
I think they miss a more likely frame, one which I'd call social
common ground. (More about it <a href="#sec-4">below</a>)
<br />
We can't just unilaterally retract a belief that exists in social
common ground. "Just disbelieving it" would be simply not doing
social common ground. And we as social creatures have a great deal of
investment in it.
<br />
To retract a belief in social common ground, something has to license
us to do so, and it generally also impels us to. ISTM the need to
create that license/impulse explains why idiot jokes are the way they
are.
<br />
This also explains why the butt of the joke not "getting it" doesn't
prevent a joke from being funny, and even enhances the mirth. His or
her failure to "get it" doesn't block social license to retract.
<br />
Covert entry fits naturally here too. As social creatures, we also
have a great deal of experience and habit regarding social common
ground. This gives plenty of room for covert entry.
</div>
</div>
</div>
<div class="outline-2" id="outline-container-4">
<h2 id="sec-4">
<a href="" id="ID-f13ee0dc-a8b2-49f7-9a4d-c87a5dd894bb" name="ID-f13ee0dc-a8b2-49f7-9a4d-c87a5dd894bb">What's social common ground? </a></h2>
<div class="outline-text-2" id="text-4">
</div>
<div class="outline-3" id="outline-container-4_1">
<h3 id="sec-4_1">
<a href="" id="ID-f13ee0dc-a8b2-49f7-9a4d-c87a5dd894bb" name="ID-f13ee0dc-a8b2-49f7-9a4d-c87a5dd894bb">Linguistic common ground </a></h3>
<div class="outline-text-3" id="text-4_1">
"Common ground" is perhaps more easily explained in linguistics. If I
mention (say) the book <span style="text-decoration: underline;">Inside Jokes</span>, then you can say "it" to refer
to it, even though you haven't previously mentioned the book yourself.
But neither of us can just anaphorically<sup><a class="footref" href="#fn.1" name="fnr.1">1</a></sup> refer to "it" when we
collectively haven't mentioned it before.
<br />
We have a sort of shared frame that we both draw presuppositions from.
Of course, it's not really, truly shared. It's a form of co-operation
and it can break. But normally it's shared.
</div>
</div>
<div class="outline-3" id="outline-container-4_2">
<h3 id="sec-4_2">
From language common ground to social common ground </h3>
<div class="outline-text-3" id="text-4_2">
I don't think it's controversial to say that:
<br />
<ul>
<li>
A similar common ground frame always holds socially, even outside
language.
</li>
<li>
Normal people maintain a sense of this common ground during social
interactions.
</li>
<li>
Sometimes they do so even at odds with their wishes, the same way
they can't help understanding speech in their native language.
</li>
</ul>
</div>
</div>
</div>
<div id="footnotes">
<h2 class="footnotes">
Footnotes: </h2>
<div id="text-footnotes">
<div class="footnote">
<sup><a class="footnum" href="#fnr.1" name="fn.1">1</a></sup> Pedantry: There are also non-anaphoric "it"s, such as "It's
raining."
</div>
</div>
</div>
</div>Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com5tag:blogger.com,1999:blog-5983563776019477979.post-54783978847234829652012-05-05T20:14:00.001-07:002012-06-20T12:06:24.153-07:00I may not be the first to propose PDM<div xmlns="http://www.w3.org/1999/xhtml">
<div class="outline-2" id="outline-container-1">
<h2 id="sec-1">
I may not be the first to propose PDM </h2>
<div class="outline-text-2" id="text-1">
</div>
<div class="outline-3" id="outline-container-1_1">
<h3 id="sec-1_1">
Previously </h3>
<div class="outline-text-3" id="text-1_1">
Previously I advanced <a href="http://tehom-blog.blogspot.com/2011/08/crazy-idea-parallel-dark-matter.html">Parallel Dark Matter</a>, the theory that dark matter
is actually normal matter that "lives" on one of 5 "parallel universes"
that exchange only gravitational force with the visible universe. I
presumptively call these parallel universes "branes" because they fit
with braneworld cosmology.
</div>
</div>
</div>
<div class="outline-2" id="outline-container-2">
<h2 id="sec-2">
Spergel and Steinhardt proposed it earlier </h2>
<div class="outline-text-2" id="text-2">
They may have proposed it in 2000, and in exactly one sentence.
<br />
It's not exactly the same: They don't explicitly propose that it
simply is ordinary matter on another brane, and they do not propose
multiple branes accounting for the ratio of dark matter to visible
matter. But it's close enough that in good conscience I have to let
everyone know that they said this first.
<br />
AFAICT they and everyone else paid no further attention to it.
<br />
The relevant sentence is on page 2: "M-theory and superstrings, for
example, suggest the possibility that dark matter fields reside on
domain walls with gauge fields separated from ordinary matter by an
extra (small) dimension".
</div>
</div>
</div>Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com0tag:blogger.com,1999:blog-5983563776019477979.post-65572577691176733452012-05-04T15:51:00.001-07:002012-05-04T15:51:10.586-07:00The nature of Truth
<div xmlns='http://www.w3.org/1999/xhtml'>
<div class='outline-2' id='outline-container-1'>
<h2 id='sec-1'>The nature of Truth </h2>
<div id='text-1' class='outline-text-2'>
</div>
<div class='outline-3' id='outline-container-1_1'>
<h3 id='sec-1_1'>Previously </h3>
<div id='text-1_1' class='outline-text-3'>
<p>
I recently finished reading <span style='text-decoration:underline;'>A User's Guide To Thought And Meaning</span> by
Ray Jackendoff. In it, he asks "What is truth?" and mentions several
problems with what we might call the conventional view.
</p>
<p>
He didn't really answer the question, but on reading it, a surprising
answer occurred to me.
</p>
</div>
</div>
</div>
<div class='outline-2' id='outline-container-2'>
<h2 id='sec-2'>T=WVP </h2>
<div id='text-2' class='outline-text-2'>
<p>
<b>Truth is just what valid reasoning preserves</b>.
</p>
<p>
No more and no less. I'll abbreviate it T=WVP.
</p>
</div>
</div>
<div class='outline-2' id='outline-container-3'>
<h2 id='sec-3'>Not "about the world" </h2>
<div id='text-3' class='outline-text-2'>
<p>
The conventional view is that truths are <b>about the world</b>, and valid
reasoning merely doesn't drop the ball. I'll abbreviate it CVOT. To
illustrate CVOT, consider:
</p>
<table frame='hsides' rules='groups' cellpadding='6' cellspacing='0' border='2'>
<caption/>
<colgroup><col class='left'/>
</colgroup>
<thead>
<tr><th class='left' scope='col'>All elephants are pink</th></tr>
<tr><th class='left' scope='col'>Nelly is an Elephant</th></tr>
</thead>
<tbody>
<tr><td class='left'>Nelly is pink</td></tr>
</tbody>
</table>
<p>
where the reasoning is valid but the major premiss is false, and so is
the conclusion.
</p>
<p>
Since "about the world" plays no part in my definition, I feel the
need to justify why it needn't and shouldn't.
</p>
</div>
<div class='outline-3' id='outline-container-3_1'>
<h3 id='sec-3_1'>"About the world" isn't really about the world </h3>
<div id='text-3_1' class='outline-text-3'>
<p>
Consider the above example. Presumably you determined that "All
elephants are pink" is false because at some point you saw an elephant
and it was grey<sup><a href='#fn.1' name='fnr.1' class='footref'>1</a></sup>.
</p>
<p>
And how did you determine that what you were seeing was an elephant
and it wasn't pink? Please don't stop at "I saw it and I just knew".
I know that readers of this blog have more insight into their thinking
than that. Your eyes and your brain interpreted something as seeing a
greyish elephant. I'm not saying it wasn't one, mind you. But you
weren't born knowing all about elephants. You had to learn about
them. You even had to learn the conventional color distinctions -
other cultures distinguish the named colors differently.
</p>
<p>
So you used reasoning to determine that this sensory input indicated
an elephant. Not conscious reasoning - the occipital lobe does an
enormous amount of processing without conscious supervision, and not
declarative facts - more like skills to interpret sights correctly.
But consciously or not, you used a type of reasoning.
</p>
<p>
So the major premiss ("All elephants are pink") wasn't directly about
the world after all. We reached it by reasoning. So on this level at
least, T=WVP looks unimpeachable and CVOT looks problematic.
</p>
</div>
</div>
<div class='outline-3' id='outline-container-3_2'>
<h3 id='sec-3_2'>Detour: Reasoning and valid deductive reasoning </h3>
<div id='text-3_2' class='outline-text-3'>
<p>
I'll go back in a moment and finish that argument, but first I must
clarify something.
</p>
<p>
My sharp-eyed readers will have noticed that I first talked about
valid reasoning, but above I just said "reasoning" and meant something
much broader than conscious deductive reasoning. I'm referring to two
different things.
</p>
<p>
Deductive reasoning is the type of reasoning involved in the
definition, because only deductive reasoning can be valid. But other
types of reasoning too can be characterized by how well or poorly they
preserve truth in some salient context, even while we define truth
only by reference to valid reasoning. Truth-preservation is not the
only virtue that reasoning can have. For instance, one can also ask
how well it finds promising hypotheses or explores ramifications.
Truth-preservation is just the aspect that's relevant to this
definition.
</p>
<p>
One might object that evolutionarily, intuitive reasoning is not
motivated by agreeing with deductive reasoning, but by usefulness.
Evolution provided us with reasoning tools not because it has great
respect for deductive reasoning, but because they are "good tricks"
and saved the lives of our remote ancestors. In some cases useful
mental activity and correct mental activity part company, for instance
a salesperson convincing himself or herself that the line of products
really is a wonderful bargain, the better to persuade the customers,
when honestly it's not.
</p>
<p>
True. It's a happy accident that evolutionary "good tricks" gave us
tools that strongly tend to agree with deductive reasoning. But
accident or not, we can sensibly characterize other acts of reasoning
by how well or poorly they preserve truth.
</p>
</div>
</div>
<div class='outline-3' id='outline-container-3_3'>
<h3 id='sec-3_3'>Can something save CVOT? </h3>
<div id='text-3_3' class='outline-text-3'>
<p>
I said that "on this level at least, T=WVP looks unimpeachable and
CVOT looks problematic."
</p>
<p>
Well, couldn't we extend CVOT one level down? Yes we could, but the
same situation recurs. The inputs, which look at first like truths or
falsities about the world, turn out on closer inspection to be the
products of yet more reasoning (in the broad sense). And not
necessarily our own reasoning; they could be "pre-packaged" by
somebody else. This gives us no better reason to expect that they
truthfully describe the real world.
</p>
<p>
Can we save CVOT by looking so far down the tower<sup><a href='#fn.2' name='fnr.2' class='footref'>2</a></sup> of mental
levels that there's just no reasoning involved? We must be careful
not to stop prematurely, for instance at "I just <b>see</b> an elephant".
Although nobody taught us how to see and we didn't consciously reason
it out, there is a reasoning work being done underneath there.
</p>
<p>
What if we look so far down that no living creature has mentally
operated on the inputs? For instance, when we smell a particular
chemical, say formaldehyde, because our smell receptors match the
chemical's shape?
</p>
<p>
Is that process still about the world? Yes, but not the way the color
of elephants was. It tells you that there are molecules of
formaldehyde at this spot at this time. That's much more limited.
</p>
<p>
CVOT can't stop here. It wouldn't be right to treat this process as
magically perceiving the world. A nerve impulse is not a molecule of
formaldehyde. To save CVOT, truth about the world still has to enter
the picture somehow. There's still a mediating process from inputs (a
molecule of formaldehyde is nearby) to outputs (sending an impulse).
</p>
<p>
But by now you can see the dilemma for CVOT: in trying to find inputs
that are true but aren't mediated by reasoning, we have to keep
descending further, but in doing so, we sacrifice aboutness and still
face the same problem of inputs.
</p>
<p>
Can CVOT just stop descending at some point? Can we save it by
positing that the whole process (chemical, cell, impulse) produces an
output that's true about the world, and furthermore that this truth is
achieved other than by correctly processing true inputs about the
world?
</p>
<p>
Yes for the first part, no for the second. If we fool the smell
receptor, for instance by triggering it with electricity instead of
formaldehyde, it will happily communicate a falsehood about the world,
because it will have correctly processed false inputs.
</p>
<p>
So we do need to be concerned about the truth of the inputs, so CVOT
does need to keep descending. It has to descend to natural selection
at this point. Since I believe in the unity of design space, I think
this change of destination makes no difference to the argument, so I
merely mention it in passing.
</p>
<p>
Since we must descend as long as there are inputs, where will it end?
What has outputs but no inputs? What can be directly sensed without
any mediation?
</p>
<p>
If there is such a level to land at, I can only imagine it as a level
of pointillistic experiences. Like Euclid's points, they have no
part. One need not assemble them from lower inputs because they have
no structure to require assembly.
</p>
<p>
If such pointillistic experiences exist, they aren't about anything
because they don't have any structure. At best, a pointillistic
experience indicates transiently, without providing further context, a
single interaction in the world. Not being about anything, they can't
be truths about the world.
</p>
<p>
So CVOT is not looking good. It needs its ultimate inputs to have
aboutness and they don't, not properly anyways.
</p>
</div>
</div>
<div class='outline-3' id='outline-container-3_4'>
<h3 id='sec-3_4'>Does T=WVP do better? </h3>
<div id='text-3_4' class='outline-text-3'>
<p>
If CVOT has problems, that doesn't neccessarily mean that T=WVP
doesn't. Can T=WVP offer a coherent view of truth, one that doesn't
need magically true inputs?
</p>
<p>
I believe it can. I said earlier that truth-preservation is not the
only virtue that reasoning can have. Abductive reasoning can (under
felicitous conditions) find good explanations and inductive reasoning
can supply probable facts even in the absence of inputs. Bear in mind
that I include unconscious, frozen, and tacit processes here, just as
long as they are doing any reasoning work.
</p>
<p>
So while deductive reasoning doesn't drop the ball, other types of
reasoning can actually improve the ball. Could they improve the ball
so much that really, as processed thru this grand and mostly
unconscious tower of reasoning, they actually <b>create</b> the ball?
Could they incrementally transform initial inputs that aren't even
properly about the world into truth as we know it? I contend that
this is exactly how it happens.
</p>
</div>
</div>
<div class='outline-3' id='outline-container-3_5'>
<h3 id='sec-3_5'>Other indications that "about the world" just doesn't belong </h3>
<div id='text-3_5' class='outline-text-3'>
<p>
Consider the following statements<sup><a href='#fn.3' name='fnr.3' class='footref'>3</a></sup>:
</p>
<ol>
<li>
Sherlock Holmes was a detective
</li>
<li>
Sherlock Holmes was a chef
</li>
</ol>
<p>
Notice I didn't say "fictional". You can figure out that they're
talking about fiction, but that's not in the statements themselves.
</p>
<p>
I assume your intuition, like mine, is that (1) is true (or true-ish)
and (2) is false (or false-ish).
</p>
<p>
In CVOT, they're the same, because they're both meaningless (or
indeterminate or falsely presupposing). (1) can't naturally be
privileged over (2) in CVOT.
</p>
<p>
In T=WVP, (1) is privileged over (2), as it should be. Both are
reasoning about Arthur Conan Doyle's fiction. (1) proceeds from
healthy, unexceptional reasoning about them, while (2) somehow
imagines Holmes serving the hound of the Baskervilles to dinner
guests. (1) clearly proceeds from better reasoning than (2), and in
T=WVP this justifies its superior truth status.
</p>
<p>
CVOT could be awkwardly salvaged by saying that we allow accommodation,
so we map "Sherlock Holmes" to the fictional detective by adding the
qualifier "fictional" to the statements. But then why can't we fix
(2) with accommodation too? Doyle never wrote "Cookin' With Sherlock",
but it's likely that someone somewhere has. Why can't we accommodate
to that too? And if we accommodate to anything anyone ever wrote,
including (say) Alice In Wonderland and Bizzaro world, being about the
world means almost nothing.
</p>
<p>
Furthermore, if we accept accommodation as truth-preserving, we risk
finding that "All elephants are pink" is true too<sup><a href='#fn.4' name='fnr.4' class='footref'>4</a></sup> because "by
pink, you must mean ever so slightly pinkish grey" or "by elephant,
you must mean a certain type of mouse".
</p>
<p>
I could <code>reductio</code> further, but I think I've belabored it enough.
</p>
</div>
</div>
</div>
<div class='outline-2' id='outline-container-4'>
<h2 id='sec-4'>Circularity avoided in T=WVP </h2>
<div id='text-4' class='outline-text-2'>
<p>
Rather than defining truth as what valid reasoning preserves, it's
more usual to define valid reasoning as truth-preserving operations.
Using both definitions together would make a circular definition.
</p>
<p>
But we can define valid reasoning in other ways. For instance, in
terms of tautologies - statements that are always true no matter what
value their variables take. A tautology whose top functor is "if"
(material implication) describes a valid reasoning operation. For
instance:
</p><pre class='example'>
(a & (a -> b)) -> b
</pre>
<p>
In English, "If you have A and you also have "A implies B", then you
have B". That's <code>modus ponens</code> and it's valid reasoning.
</p>
<p>
I said tautologies are "statements that are always true", which is the
conventional definition of them, but it contains "true". Again I need
to avoid a circular definition. So I just define tautology and the
logical operations in terms of a matrix of enumerated values (a
truth-table). We don't need to know the nature of truth to construct
such a matrix or to examine it. We can construct operations
isomorphic to the usual logical operations simply in terms of opaque
symbols:
</p>
<table frame='hsides' rules='groups' cellpadding='6' cellspacing='0' border='2'>
<caption/>
<colgroup><col class='left'/><col class='left'/><col class='left'/>
</colgroup>
<thead>
<tr><th class='left' scope='col'>X</th><th class='left' scope='col'>Y</th><th class='left' scope='col'>X AND Y</th></tr>
</thead>
<tbody>
<tr><td class='left'>true</td><td class='left'>true</td><td class='left'>true</td></tr>
<tr><td class='left'>true</td><td class='left'>false</td><td class='left'>false</td></tr>
<tr><td class='left'>false</td><td class='left'>true</td><td class='left'>false</td></tr>
<tr><td class='left'>false</td><td class='left'>false</td><td class='left'>false</td></tr>
</tbody>
</table>
<table frame='hsides' rules='groups' cellpadding='6' cellspacing='0' border='2'>
<caption/>
<colgroup><col class='left'/><col class='left'/><col class='left'/>
</colgroup>
<thead>
<tr><th class='left' scope='col'>X</th><th class='left' scope='col'>Y</th><th class='left' scope='col'>X OR Y</th></tr>
</thead>
<tbody>
<tr><td class='left'>true</td><td class='left'>true</td><td class='left'>true</td></tr>
<tr><td class='left'>true</td><td class='left'>false</td><td class='left'>true</td></tr>
<tr><td class='left'>false</td><td class='left'>true</td><td class='left'>true</td></tr>
<tr><td class='left'>false</td><td class='left'>false</td><td class='left'>false</td></tr>
</tbody>
</table>
<table frame='hsides' rules='groups' cellpadding='6' cellspacing='0' border='2'>
<caption/>
<colgroup><col class='left'/><col class='left'/>
</colgroup>
<thead>
<tr><th class='left' scope='col'>X</th><th class='left' scope='col'>NOT X</th></tr>
</thead>
<tbody>
<tr><td class='left'>true</td><td class='left'>false</td></tr>
<tr><td class='left'>false</td><td class='left'>true</td></tr>
</tbody>
</table>
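<p>
To make the point concrete, here is a small Python sketch that treats the entries purely as opaque symbols and checks that modus ponens holds in every row of the matrix:
</p>

```python
# Validity as a property of the matrix alone: T and F are opaque
# tokens, and "tautology" means the formula yields T on every row.

from itertools import product

T, F = 'true', 'false'

def AND(x, y):
    return T if (x, y) == (T, T) else F

def IF(x, y):
    # Material implication: false only on the row (T, F).
    return F if (x, y) == (T, F) else T

def tautology(f):
    return all(f(a, b) == T for a, b in product((T, F), repeat=2))

# (a & (a -> b)) -> b : modus ponens, valid reasoning
assert tautology(lambda a, b: IF(AND(a, IF(a, b)), b))
# ((a -> b) & b) -> a : affirming the consequent, not valid
assert not tautology(lambda a, b: IF(AND(IF(a, b), b), a))
```

<p>
Nothing here appeals to the nature of truth; the check only examines the matrix, which is the point of defining the operations over opaque symbols.
</p>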
</div>
</div>
<div class='outline-2' id='outline-container-5'>
<h2 id='sec-5'>Some other virtues of this definition </h2>
<div id='text-5' class='outline-text-2'>
<p>
Briefly:
</p>
<ul>
<li>
It recovers the Quinean disquotation sense of truth. Ie, a quoted
true statement, interpreted competently, is true.
</li>
<li>
It recovers our ordinary sense of truth (I hinted at this above)
</li>
<li>
It recovers the property that truth has where the chain is as
strong as its weakest link.
</li>
</ul>
</div>
</div>
<div id='footnotes'>
<h2 class='footnotes'>Footnotes: </h2>
<div id='text-footnotes'>
<p class='footnote'><sup><a href='#fnr.1' name='fn.1' class='footnum'>1</a></sup> Or you trusted somebody else who told you they saw a grey
elephant. In which case, read the argument as applying to them.
</p>
<p class='footnote'><sup><a href='#fnr.2' name='fn.2' class='footnum'>2</a></sup> I'm talking as if it was a tower of discrete levels only for
expository convenience. I don't think it's all discrete levels, I
think it's the usual semi-fluid, semi-defined situation that natural
selection creates.
</p>
<p class='footnote'><sup><a href='#fnr.3' name='fn.3' class='footnum'>3</a></sup> Example borrowed from Ray Jackendoff
</p>
<p class='footnote'><sup><a href='#fnr.4' name='fn.4' class='footnum'>4</a></sup> Strictly speaking, we would only do this for presuppositions,
but if the speaker mentions "the pink elephant" at some point the
<code>reductio</code> is good to go.
</p>
</div>
</div>
</div>
Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com2tag:blogger.com,1999:blog-5983563776019477979.post-73804028372049205012012-04-27T10:35:00.001-07:002012-04-27T10:35:59.302-07:00Review: Ray Jackendoff's User's Guide To Thought And Meaning
<div xmlns='http://www.w3.org/1999/xhtml'>
<div class='outline-2' id='outline-container-1'>
<h2 id='sec-1'>A User's Guide To Thought And Meaning </h2>
<div id='text-1' class='outline-text-2'>
</div>
<div class='outline-3' id='outline-container-1_1'>
<h3 id='sec-1_1'>Previously </h3>
<div id='text-1_1' class='outline-text-3'>
<p>
I just finished <span style='text-decoration:underline;'>A User's Guide To Thought And Meaning</span> by Ray
Jackendoff, a linguist best known for X-bar theory.
</p>
</div>
</div>
<div class='outline-3' id='outline-container-1_2'>
<h3 id='sec-1_2'>Summary </h3>
<div id='text-1_2' class='outline-text-3'>
<p>
I wasn't impressed with it. Although he starts off credibly if
pedestrianly, the supporting arguments for his main thesis are fatally
flawed. As I got further into the book, I found it annoying to see him
building on a foundation that I considered unproven and wrong.
</p>
<p>
His main thesis can be summarized by a quote from the last chapter:
</p><blockquote>
<pre class='example'>
What we experience as rational thinking consists of thoughts linked
to language. The thoughts themselves aren't conscious.
</pre>
</blockquote>
</div>
</div>
<div class='outline-3' id='outline-container-1_3'>
<h3 id='sec-1_3'>A strange mistake </h3>
<div id='text-1_3' class='outline-text-3'>
<p>
The foregoing quote leads me to the strangest assumption in the book.
He says that our mental tools are exactly our language tools. He does
allow at one or two points that visual thinking might qualify too.
</p>
<p>
That may be true of Ray, but I know for a fact that it's not true of
me. I often have the experience of designing some piece of source
code in my head, often when I'm either falling asleep or waking up.
Then later I go to code it, and I realize that I have to think of good
<b>names</b> for the various variables and functions. I hadn't used names
before when I handled them mentally because I wasn't handling them by
language (as we know it). I wasn't handling them by visual imagery
either. Of course I was mentally handling them as concepts.
</p>
<p>
There are other indicators that we think in concepts: The
tip-of-the-tongue experience and words like "Thingamajig" and
"whatchamacallit". In the chapter <span style='text-decoration:underline;'>Some phenomena that test the Unconscious Meaning Hypothesis</span>, Ray mentions these but feels that his
hypothesis survives them. It's not clear to me why he concludes that.
</p>
<p>
What is clear to me is that we (at least some of us) think with all
sorts of mental tools and natural language is only one of them.
</p>
<p>
If he meant "language" in a broad sense that includes all possible
mental tools, which he never says, that would make his thesis rather
meaningless.
</p>
</div>
</div>
<div class='outline-3' id='outline-container-1_4'>
<h3 id='sec-1_4'>Shifting ground </h3>
<div id='text-1_4' class='outline-text-3'>
<p>
Which brings me to a major problem of the book. Although he proposes
that all meaning is unconscious, his support usually goes to show that
<b>some</b> meaning (or mental activity) is unconscious. That's not good
enough. It's not even surprising; of course foundational mental
activity is unconscious.
</p>
<p>
To be fair, I will relate where he attempts to prove that <b>all</b>
meaning is unconscious, from the chapter <span style='text-decoration:underline;'>What's it Like To Be Thinking Rationally?</span> He does this by quoting neuropsychologist Karl
Lashley:
</p>
<blockquote>
<pre class='example'>
No activity of mind is ever conscious. This sounds like a paradox but
it is nonetheless true. There are order and arrangement, but there is
no experience of the creation of that order. I could give numberless
examples, for there is no exception to the rule.
</pre>
</blockquote>
<p>
Unfortunately, Lashley's quote fails to support this; again he gives
examples and takes himself to have proven the general case. Aside
from this, he simply pronounces his view repeatedly and forcefully.
Jackendoff says "I think this observation is right on target" and he's
off.
</p>
<p>
One is tempted to ask, what about:
</p><ul>
<li>
Consciously deciding what to think about.
</li>
<li>
Introspection
</li>
<li>
Math and logic, where we derive a meaning by consciously
manipulating symbols? Jackendoff had talked about what
philosophers call the Regression Problem earlier in the chapter,
and I think he takes himself to have proven that symbolic logic is
unconscious too, but that's silly. He also talks about the other
senses of "all" being misleading in syllogisms, but that's a fact
about natural language polysemy, not about consciousness.
</li>
</ul>
<p>
None of this is asked, but one is left with the impression that all of
these "don't count". It makes me want to ask, "What <b>would</b> count?
If nothing counts as conscious thought, then you really haven't said
anything."
</p>
</div>
</div>
<div class='outline-3' id='outline-container-1_5'>
<h3 id='sec-1_5'>One last thing </h3>
<div id='text-1_5' class='outline-text-3'>
<p>
In an early chapter <span style='text-decoration:underline;'>Some Uses of <code>mean</code> and <code>meaning</code></span>, he tries to
define <code>meaning</code>. Frustratingly, he seems unaware of the definition I
consider best, which is generally accepted in semiotics:
</p><pre class='example'>
X means Y just if X is a reliable indication of Y
</pre>
<p>
Essentially all of the disparate examples he gives fall under this
definition, either directly or metonymically.
</p>
<p>
Since the meaning of "meaning" is central to his book, failure to
find and use this definition gives one pause.
</p>
</div>
</div>
</div>
</div>
Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com2tag:blogger.com,1999:blog-5983563776019477979.post-67679221123776152442012-03-01T10:59:00.001-08:002012-03-01T10:59:18.908-08:00Digrasp - The options for representing digraphs with pairs
<div xmlns='http://www.w3.org/1999/xhtml'>
<div class='outline-2' id='outline-container-1'>
<h2 id='sec-1'>Digrasp 3 </h2>
<div id='text-1' class='outline-text-2'>
</div>
<div class='outline-3' id='outline-container-1_1'>
<h3 id='sec-1_1'>Previously </h3>
<div id='text-1_1' class='outline-text-3'>
<p>This is a long-ish answer to John's comment on <a href='http://tehom-blog.blogspot.com/2012/02/how-are-dotted-graphs-second-class.html'>How are dotted graphs second-class?</a>, where he asks how I intend to represent digraphs
using pairs.
</p>
</div>
</div>
</div>
<div class='outline-2' id='outline-container-2'>
<h2 id='sec-2'>The options for representing digraphs with pairs </h2>
<div id='text-2' class='outline-text-2'>
<p>
I'm not surprised that it comes across as unclear. I'm deliberately
leaving it open which of several possible approaches is "right". ISTM
it would be premature to fix on one right now.
</p>
<p>
As I see it, the options include:
</p>
<ol>
<li>
Unlabelled n-ary rooted digraph. Simplest in graph theory,
strictest in Kernel: Cars are nodes, cdrs are edges (arcs) and may
only point to pairs or nil. With this, there is no way to make
dotted graphs or lists, so there is no issue of their standing nor
any risk of "deep" conversion to dotted graphs. It loses or
alters some functionality, alists in particular.
</li>
<li>
Labelled binary rooted digraph: More natural in Kernel, but more
complex and messier in graph theory. Cars and cdrs are both
edges, and are labelled (graph theory wise) as "car" or "cdr".
List-processing operations are understood as distinguishing the
two labels and expecting a pair in the cdr. They can encounter
unexpected dotted ends, causing errors.
</li>
<li>
Dynamic hybrid: Essentially as now. Dottedness can be checked for,
much like with `proper-list?' but would also be checkable
recursively. There's risk of "deep" conversion from one to the
other; list-processing operations may raise errors.
</li>
<li>
Static hybrid: A type similar to pair (undottable-pair) can only
contain unlabelled n-ary digraphs, recursively. List operations
require that type and always succeed on it. There's some way to
structurally copy conformant "classic" pair structures to
undottable-pair structures.
</li>
<li>
Static hybrid II: As above, but an undottable-pair may hold a
classic pair in its car but not its cdr, and that's understood as
not part of the digraph.
</li>
</ol>
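<p>
To make the dynamic hybrid (option 3) concrete, here's a rough sketch
in Python rather than Kernel; the names (<code>Pair</code>,
<code>dotted_anywhere</code>) are mine, not Kernel's. It treats cars
and cdrs as edges of the digraph and checks dottedness recursively
("deep"), tolerating cycles:
</p>

```python
class Pair:
    """A mutable cons cell; both car and cdr are edges of the digraph."""
    def __init__(self, car, cdr):
        self.car = car
        self.cdr = cdr

NIL = None  # stand-in for Kernel's ()

def dotted_anywhere(pair, seen=None):
    """Recursive dottedness check (option 3, the dynamic hybrid).

    Return True if any pair reachable from `pair` has a cdr that is
    neither a pair nor nil. Cycles are legal, so visited pairs are
    tracked by identity."""
    if seen is None:
        seen = set()
    if id(pair) in seen:
        return False
    seen.add(id(pair))
    # A dotted end is a cdr that is neither a pair nor nil.
    if pair.cdr is not NIL and not isinstance(pair.cdr, Pair):
        return True
    # Follow car edges into sub-structures too -- a dotted tail may
    # hide inside a car ("deep" conversion is what option 3 risks).
    if isinstance(pair.car, Pair) and dotted_anywhere(pair.car, seen):
        return True
    return isinstance(pair.cdr, Pair) and dotted_anywhere(pair.cdr, seen)
```

<p>
Under option 1 the car branch would be dropped, since only cdrs are
edges there; under options 4 and 5 the check would instead be baked
into the <code>undottable-pair</code> type at construction time.
</p>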
</div>
</div>
<div class='outline-2' id='outline-container-3'>
<h2 id='sec-3'>And there's environments / labelled digraphs </h2>
<div id='text-3' class='outline-text-2'>
<p>
By DIGRASP, I also mean fully labelled digraphs in which the nodes are
environments and the labels are symbols. But they have little to do
with the list-processing combiners.
</p>
</div>
</div>
</div>
Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com0tag:blogger.com,1999:blog-5983563776019477979.post-7797057953471261082012-02-27T10:05:00.001-08:002012-02-27T10:05:34.039-08:00How are dotted graphs second class?
<div xmlns='http://www.w3.org/1999/xhtml'>
<div class='outline-2' id='outline-container-1'>
<h2 id='sec-1'>Digrasp </h2>
<div id='text-1' class='outline-text-2'>
</div>
<div class='outline-3' id='outline-container-1_1'>
<h3 id='sec-1_1'>Previously </h3>
<div id='text-1_1' class='outline-text-3'>
<p>
I said that <a href='http://tehom-blog.blogspot.com/2012/02/digrasp.html'>dotted graphs seem to be second class objects</a> and John
asked me to elaborate.
</p>
</div>
</div>
</div>
<div class='outline-2' id='outline-container-2'>
<h2 id='sec-2'>How are dotted graphs second class? </h2>
<div id='text-2' class='outline-text-2'>
<p>
A number of combiners in the <a href='http://web.cs.wpi.edu/~jshutt/kernel.html'>spec</a> accept cyclic but not dotted lists.
These are:
</p>
<ul>
<li>
All the type predicates
</li>
<li>
map and for-each
</li>
<li>
list-neighbors
</li>
<li>
append and append!
</li>
<li>
filter
</li>
<li>
reduce
</li>
<li>
"Constructably circular" combiners like $sequence
</li>
</ul>
<p>
So they accept any undotted graph, but not general dotted graphs.
This occurs often enough to make dotted graphs seem second-class.
</p>
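<p>
For concreteness, here's the top-level distinction these combiners
care about, sketched in Python rather than Kernel (the names are
mine). Following cdrs with a seen-set lets cyclic lists terminate:
</p>

```python
class Pair:
    """A cons cell standing in for Kernel's pair type."""
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

def classify(obj):
    """Follow the cdr chain and classify it: 'proper' if it ends in
    nil (None here), 'cyclic' if the chain loops, 'dotted' if it ends
    in any other object. The combiners listed above accept the first
    two shapes but not the third."""
    seen = set()
    while isinstance(obj, Pair):
        if id(obj) in seen:
            return "cyclic"
        seen.add(id(obj))
        obj = obj.cdr
    return "proper" if obj is None else "dotted"
```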
</div>
</div>
<div class='outline-2' id='outline-container-3'>
<h2 id='sec-3'>Could it be otherwise? </h2>
<div id='text-3' class='outline-text-2'>
</div>
<div class='outline-3' id='outline-container-3_1'>
<h3 id='sec-3_1'>The "no way" cases </h3>
<div id='text-3_1' class='outline-text-3'>
<p>
For some combiners I think there is no sane alternative, like `pair?'
and the appends.
</p>
</div>
</div>
<div class='outline-3' id='outline-container-3_2'>
<h3 id='sec-3_2'>The "too painful" cases </h3>
<div id='text-3_2' class='outline-text-3'>
<p>
For others, like filter or list-neighbors, the dotted end could have
been treated like an item, but it seems klugey and irregular, and they
can't do anything sane with a "unary dotted list", ie a non-list.
</p>
<p>
$sequence etc. seem to belong here.
</p>
</div>
</div>
<div class='outline-3' id='outline-container-3_3'>
<h3 id='sec-3_3'>map and for-each </h3>
<div id='text-3_3' class='outline-text-3'>
<p>For map and for-each, dotted lists at the top level have the same
problem as above, but ISTM "secondary" dotted lists and lists of
varying length could work.
</p>
<p>
Those could be accommodated by passing another combiner argument
(<code>proc2</code>) that, when any list runs out, is given the remaining tails
isomorphically to Args, and its return is used as the tail of the
return list. In other words, map over a "rectangle" of list-of-list
and let proc2 work on the irregular overrun.
</p>
<p>
The existing behavior could be recaptured by passing a proc2 that, if
it gets all nils, returns nil, and otherwise raises error. Other
useful behaviors seem possible, such as continuing with default
arguments or governing the length of the result by the shortest list.
</p>
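<p>
Here's a sketch of that <code>proc2</code> idea in Python, using
ordinary Python lists to stand in for Kernel lists (dotted tails
don't arise in this simplified model, and all the names are mine):
</p>

```python
def map_with_overrun(proc, proc2, *lists):
    """Map `proc` elementwise over the common prefix (the "rectangle")
    of the argument lists; when any list runs out, hand the remaining
    tails to `proc2`, whose return value becomes the tail of the
    result."""
    n = min(len(lst) for lst in lists)
    head = [proc(*items) for items in zip(*(lst[:n] for lst in lists))]
    tails = [lst[n:] for lst in lists]
    return head + proc2(*tails)

def strict_tail(*tails):
    """Recapture the existing behavior: if all tails are empty, return
    nil; otherwise the lists had unequal lengths, so raise an error."""
    if any(tails):
        raise ValueError("lists of unequal length")
    return []
```

<p>
Passing <code>lambda *tails: []</code> instead of
<code>strict_tail</code> gives the "shortest list governs the result"
behavior; a <code>proc2</code> that pads with default arguments is
equally easy.
</p>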
</div>
</div>
<div class='outline-3' id='outline-container-3_4'>
<h3 id='sec-3_4'>Reduce </h3>
<div id='text-3_4' class='outline-text-3'>
<p>Reduce puzzles me. After a cyclic list's cycle is collapsed to a
single item, it resembles a dotted tail, and that is legal. Does that
imply that a dotted list should be able to shortcut to that stage?
</p>
</div>
</div>
</div>
</div>
Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com4tag:blogger.com,1999:blog-5983563776019477979.post-27857907876186677272012-02-25T14:28:00.001-08:002012-02-25T14:28:48.375-08:00Digrasp
<div xmlns='http://www.w3.org/1999/xhtml'>
<div class='outline-2' id='outline-container-1'>
<h2 id='sec-1'>Digrasp </h2>
<div id='text-1' class='outline-text-2'>
</div>
<div class='outline-3' id='outline-container-1_1'>
<h3 id='sec-1_1'>Previously </h3>
<div id='text-1_1' class='outline-text-3'>
<p>I have often blogged about <a href='http://web.cs.wpi.edu/~jshutt/kernel.html'>Kernel</a>, <a href='http://www.cs.wpi.edu/~jshutt/'>John Shutt</a>'s Scheme-like language.
</p>
</div>
</div>
</div>
<div class='outline-2' id='outline-container-2'>
<h2 id='sec-2'>Lisp becomes Digrasp? </h2>
<div id='text-2' class='outline-text-2'>
<p>
One interesting thing about Kernel is that it treats pairs rather than
lists as fundamental. Consequently, digraphs constructed from pairs
have a certain fundamental status too. Most operations in Kernel
allow arbitrary digraphs if they allow pairs. OK, dotted graphs seem
to be second class objects. But as long as every cdr points to a pair
or nil, you can pass it almost anywhere that accepts a pair.
</p>
<p>
So rather than LISt Processing, it's like DIrected GRAph ProceSsing.
OK, the acronym's not perfect, but it sounds better than DIGRAP and
echoes LISP.
</p>
</div>
</div>
</div>
Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com3tag:blogger.com,1999:blog-5983563776019477979.post-78775022669713255772012-02-24T19:35:00.002-08:002012-02-24T19:39:07.014-08:00Review Beginning Of Infinity 3<div xmlns='http://www.w3.org/1999/xhtml'>
<div class='outline-2' id='outline-container-1'>
<h2 id='sec-1'>Review Beginning Of Infinity 3 </h2>
<div id='text-1' class='outline-text-2'>
</div>
<div class='outline-3' id='outline-container-1_1'>
<h3 id='sec-1_1'>Been busy </h3>
<div id='text-1_1' class='outline-text-3'>
<p>
I've been busy adding a major feature to <a href='http://sourceforge.net/projects/rosegarden/'>Rosegarden</a>, so I've let this
go for a while. But I fixed the last known bug today, so I may
already be done (or not).
</p>
</div>
</div>
<div class='outline-3' id='outline-container-1_2'>
<h3 id='sec-1_2'>Previously </h3>
<div id='text-1_2' class='outline-text-3'>
<p>
So now that I have a little time again, this has been jangling around
in my mind. Patchwork Zombie compared hard-to-vary to peaks on a
fitness landscape, in order to make the concept more obvious.
</p>
</div>
</div>
</div>
<div class='outline-2' id='outline-container-2'>
<h2 id='sec-2'>How much is hard-to-vary like a fitness landscape? </h2>
<div id='text-2' class='outline-text-2'>
<p>
A pointy landscape is definitely part of the picture. The layout of
the landscape corresponds in the familiar way to the dimensions of
variation.
</p>
<p>
But it's not a fitness landscape, because hard-to-vary is itself the
fitness condition. Or to be tiresomely pedantic, Deutsch appeals to
it as being the relevant fitness condition on various topics. So
height can't also be the fitness condition.
</p>
<p>
That much I'm sure of. Now comes the part where I have to relate what
he "surely must have meant". ISTM that height on the landscape
corresponds to some perceptual dimension. Sharp peaks which fall off
very steeply are hard to vary and rounded peaks aren't.
</p>
<p>
And I bet you noticed, where I said "some perceptual dimension", that
there wasn't just one perceptual dimension in the previous posts. Right. A landscape could
have many height dimensions / perceptual dimensions. Steepness on all
of them would count; presumably it's something like the norm of the
gradient.
</p>
</div>
</div>
<div class='outline-2' id='outline-container-3'>
<h2 id='sec-3'>Deutsch's motivating example </h2>
<div id='text-3' class='outline-text-2'>
<p>
I'll relate how Deutsch introduced hard-to-vary, which may make it
clearer.
</p>
<p>
He initially talks about hard-to-vary by comparing two ways of copying
things. Both are like "telephone", the children's game where one
person tells a secret to the next, who tells it to the next, to the
next, and the last person tells it aloud, and you see how much it has
changed.
</p>
<dl>
<dt>Analog</dt><dd>
Each person sees a picture of a Chinese junk, and draws
it, and then shows that drawing to the next person.
Every generation of copy is a little less faithful to the
original. Probably no copy is very much worse than the
previous, but the result at the end scarcely resembles
the picture at the start of the chain.
</dd>
<dt>Digital</dt><dd>
Origami (paper-folding). Each person is shown how to
fold a Chinese junk. If an intermediate guy makes a
sloppy copy, the next guy may still understand what he
was trying to do; his copy won't inherit the sloppiness.
Or the next guy may fail to understand the intent, and
then his copy will not be much like the original at all,
and everyone further down the line will inherit his
mistake. Every generation of copy is either basically
the same as the original or very wrong.
</dd>
</dl>
<p>
The "digital" copying, Deutsch says, is the one that's hard to vary.
Variations either disappear or they change the design into some
grossly different design.
</p>
</div>
</div>
</div>Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com2tag:blogger.com,1999:blog-5983563776019477979.post-20356673427618136912012-02-14T20:52:00.001-08:002012-02-14T20:52:58.971-08:00"Hard To Vary" and personal identity
<div xmlns='http://www.w3.org/1999/xhtml'>
<div class='outline-2' id='outline-container-1'>
<h2 id='sec-1'>"Hard To Vary" and personal identity </h2>
<div id='text-1' class='outline-text-2'>
</div>
<div class='outline-3' id='outline-container-1_1'>
<h3 id='sec-1_1'>Previously </h3>
<div id='text-1_1' class='outline-text-3'>
<p>
I read and sorta-lightly-reviewed David Deutsch's <a href='http://tehom-blog.blogspot.com/2012/02/beginning-of-infinity.html'>The Beginning Of Infinity</a>. But in this post I'm going to talk about a tangential
question to which it suggested an answer. So this post is my
thoughts evoked by Deutsch's book.
</p>
</div>
</div>
</div>
<div class='outline-2' id='outline-container-2'>
<h2 id='sec-2'>Transporters as abattoirs </h2>
<div id='text-2' class='outline-text-2'>
<p>
As a way of introducing the question, I'm going to recount a
conversation that I sometimes hear in nerdspace. It starts with
someone observing that transporters <i>à la</i> <span style='text-decoration:underline;'>Star Trek</span> "are really
death machines". Why? Because they make a copy and destroy the
original. "They kill you and make an identical twin".
</p>
<p>
Someone else (usually me) asks socratically, if this twin is so
completely identical, what have you lost? Describe any test for this
lost thing, other than where the guy is now standing.
</p>
<p>
The next point in the usual exchange is fraught with exasperation.
I'll paraphrase it as this: In our normal experience if the body that
we walk around in and whose eyes we see out of is destroyed, that's
the end of us.
</p>
<p>
Then someone observes that in the normal course of events, every atom
in our bodies is periodically replaced. Some faster than others, but
after a few months, we have been mostly replaced with new material.
</p>
<p>
"But that's different, it's gradual"
</p>
<p>
"What's so magical about change being gradual?"
</p>
<p>
"There's continuity."
</p>
<p>
"In the transporter, there's continuity too, of information. All the
relevant information reaches the other end. Otherwise it wouldn't
know how to rebuild the guy there."
</p>
<p>
"But you're always conscious"
</p>
<p>
"What about when you're asleep?"
</p>
<p>
"You're alive the whole time."
</p>
<p>
"With Heisenberg uncertainty, on a short enough time-scale you're not
really continuously anything."
</p>
<p>
Various other points are made. Usually this debate gets repetitive
and exasperated and ends without a meeting of minds, but with a
feeling that "transporters kill you" is simplistic.
</p>
</div>
</div>
<div class='outline-2' id='outline-container-3'>
<h2 id='sec-3'>The question </h2>
<div id='text-3' class='outline-text-2'>
<p>
And a question is left hanging in the air: Then what exactly is it
that we want preserved? We value personal survival. What is X, that
if we have it, we have this valuable personal survival, and if we
don't have X, we don't?
</p>
<p>
It's not being materially unchanged. We change atoms all the time.
</p>
<p>
It's not physically breathing or heart-beating - we all know about
coma patients.
</p>
<p>
It has something to do with being faithfully copied. But it isn't
being 100% unchanged. If you could never learn anything new, that
wouldn't be perfect, ideal personal survival, it'd be scarcely better
than death.
</p>
</div>
</div>
<div class='outline-2' id='outline-container-4'>
<h2 id='sec-4'>Personal identity is the hardest to vary (to ourselves) </h2>
<div id='text-4' class='outline-text-2'>
<p>
I've already given away a big chunk of the answer. We value the "hard
to vary" parts of ourselves. Our atoms aren't hard to vary. Our good
parts are. Almost any oxygen will do for breathing. No other set of
friends or childhood memories are suitable replacements for our own.
</p>
<p>
With <a href='http://tehom-blog.blogspot.com/2012/02/beginning-of-infinity.html'>art</a>, we had to ask what design terrain it was hard to vary in,
and the answer seemed to be the audience's perceptive powers. But
with personal identity, we are both the art and the audience.
</p>
<p>
So the criterion is self-referential. Art and explanation didn't have
self-referentiality, at least not in the all-consuming way that
personal identity does.
</p>
<p>
So our "hard to vary" criterion has a lot of chicken-and-egg-ness to
it. We value aspects of ourselves because we appreciate them in
contrast to the possible variants that we perceive - but those
perceptions were in turn informed and molded by what we value. It's
a path-dependent metric.
</p>
</div>
</div>
<div class='outline-2' id='outline-container-5'>
<h2 id='sec-5'>Does it fit? </h2>
<div id='text-5' class='outline-text-2'>
<p>
It's a quick stab at a deep problem, so there's plenty of room for
this idea to be misguided. But it seems about right. It doesn't fall
into the trap of valuing our atoms or our continuous wakefulness, or
making a frozen-in-carbonite body our ideal.
</p>
<p>
The path-dependency fits. Personal identity is full of path-dependent
phenomena. Our friends and families are irreplaceable to us, but we're
not seriously under the impression that had our lives been different
and we met another random-ish set of people, those putative other
people would all have been second-rate unlike our actual friends and
family.
</p>
<p>
It seems compatible with Wei Dai's observation (can't find the link)
that people of all cultures have an abundance of apparently terminal
values. At least, it's not obviously suspicious for there to be many
hard-to-vary values. On the other hand it doesn't fall into the
deontic trap of stipulating a list of terminal values, leaving us to
ask "Why those?".
</p>
<p>
So I find this to be a promising theory of the value in personal
identity.
</p>
</div>
</div>
</div>
Tehomhttp://www.blogger.com/profile/14836581076251384864noreply@blogger.com1