19 July 2012

Plastination 2

Previously

I blogged about Plastination, a potential alternative to cryonics.

Luke's comment got me to write more (always a risk commenters take).

The biggest problem

The big problem with plastination is that it is hit-or-miss. What it preserves, it seems to preserve well, but at the current state of the art, whole sections of the brain might go unpreserved. The researchers who developed it didn't care about bringing their lab rats back from the dead, so that was considered good enough.

From a layman's POV, infusing the whole brain with plastic doesn't look any harder than infusing the whole brain with cryoprotectant, as cryonics does, but there could be all sorts of technical details that prove me wrong.

So which wins, plastination or cryonics?

A lot depends on which you judge more likely in a reasonable time-frame: repair nanobots or emulation. I'd judge emulation much more likely. We can already emulate roundworms and have partly emulated fruit flies. So I suspect Moore's law makes human emulation in a reasonable time-frame much more likely than not.

Can we prove it?

One thing I like about plastination-to-emulation is that we could prove it out now. Teach a fruit fly some trick, or let it learn something meaningful to a fruit fly - maybe the identity of a rival, if fruit flies learn that.

Plastinate its brain, emulate it. Does it still know what it learned? And know it equally well? If so, we can justifiably place some confidence in this process. If not, we've just found a bug to fix.

So with plastination-to-emulation, we have the means to drive a debugging cycle. That's very good.

Difference in revival population dynamics

One difference that I don't know what to make of: If they work, the population dynamics of revival would probably be quite different.

In plastination-to-emulation, revival becomes possible for everybody at the same time. If you can scan in one plastinated brain, you can scan any one.

In cryonics-to-cure-and-thaw, I expect there'd be waves as the various causes of death were solved. Like, death from sudden heart attack might be cured long before Alzheimer's disease became reversible, if ever.

11 July 2012

Plastination - the new cryonics?

Plastination - an alternative to cryonics

Previously

I'll assume that everyone who reads my blog has heard of cryonics.

Trending

Chemopreservation has been known for some time, but has recently received some attention as a credible alternative to cryonics. These pages (PLASTINATION VERSUS CRYONICS, Biostasis through chemopreservation) make the case well. They also explain some nuances that I won't go into. But basically, chemopreservation stores you more robustly by turning your brain into plastic. There's no liquid nitrogen required and no danger of defrosting. With chemopreservation, they can't just fix what killed you and "wake you up"; you'd have to be scanned and uploaded.

Are thawing accidents likely? Yes.

You might think cryonics organizations such as Alcor would simply never let you thaw, because they take their mission very seriously.
Without casting any aspersions on cryonics organizations' competence and integrity, consider that recently, 150 autistic brains being stored for research at McLean Hospital were accidentally allowed to thaw (here, here, here). McLean and Harvard presumably take their mission just as seriously as Alcor and have certain organizational advantages.

My two cents: Store EEG data too

In the cryonics model, storing your EEGs didn't make much sense. When (if) resuscitation "restarted your motor", your brainwaves would come back on their own. Why keep a reference for them?
But plastination assumes from the start that revival consists of scanning your brain in and emulating it. Reconstructing you would surely be done computationally, so any source of information could be fed into the reconstruction logic.
Ideally the plastinated brain would preserve all the information that is you, and preserve it undistorted. But what if it preserved enough information but garbled it? Say the information that got through was ambiguous: there would be no way to tell the difference between the one answer that reconstructs your mind correctly and the many other answers that construct something or someone else.
Having a reference point in a different modality could help a lot. I won't presume to guess how it would best be used in the future, but from an info-theory stance, there's a real chance that it might provide crucial information to reconstruct your mind correctly.
And having an EEG reference could provide something less crucial but very nice: verification.

20 June 2012

Parallel Dark Matter - make that five

Hold that last brane

Previously

I have been blogging about a theory I call Parallel Dark Matter (and here and here), which I may not be the first to propose, though I seem to be the first to flesh the idea out.

Recently I posted (Brown dwarfs may support PDM) that wrt brown dwarfs, the ratio between the number we see by visual observation and the number that we seem to see by gravitational microlensing, 1/5, is similar to what PDM predicts.

I had another look and it turns out I was working from bad data. The ratio is not just similar, it's the same.

Dark matter accounts for 23% of the universe's mass, while visible matter accounts for 4.6% (the remainder is dark energy). Ie, exactly 1/5. I don't know why I accepted a source that put it as 1/6; lazy, I guess.
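Spelled out, the arithmetic is just the ratio of those two mass fractions (writing them as Ω, as cosmologists do):

\[
\frac{\Omega_{\text{dark}}}{\Omega_{\text{visible}}} = \frac{23\%}{4.6\%} = 5,
\qquad
\frac{\Omega_{\text{visible}}}{\Omega_{\text{dark}}} = \frac{1}{5}.
\]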

That implies 5 dark branes rather than 6. I have updated my old PDM posts accordingly.

11 June 2012

Brown dwarfs may support PDM

Some evidence from brown dwarfs may support PDM

Previously

I have been blogging about a theory I call Parallel Dark Matter (and here and here), which I may not be the first to propose, though I seem to be the first to flesh the idea out.

We see fewer brown dwarfs than we expected

In recent news, here and here, a visual survey of brown dwarfs (Wide-field Infrared Survey Explorer, or WISE) shows far fewer of them than astronomers expected.
Previous estimates had predicted as many brown dwarfs as typical stars, but the new initial tally from WISE shows just one brown dwarf for every six stars.
Note the ratio between observed occurrence and predicted occurrence: 1/6. That's not the last word, though. Davy Kirkpatrick of WISE says that:
the results are still preliminary: it is highly likely that WISE will discover additional Y dwarfs, but not in vast numbers, and probably not closer than the closest known star, Proxima Centauri. Those discoveries could bring the ratio of brown dwarfs to stars up a bit, to about 1:5 or 1:4, but not to the 1:1 level previously anticipated

But gravitational lensing appeared to show that they were common

But gravitational microlensing events suggested that brown dwarfs are common; if they weren't, it'd be unlikely that we'd see gravitational microlensing by them to that degree.
While I don't have the breadth of knowledge to properly survey the argument for brown dwarf commonness, it's my understanding that this was the main piece of evidence for it.

This is just what PDM would predict

PDM predicts that we would "see" gravity from all six branes, but only visually see the brown dwarfs from our own brane.
The ratio isn't exact but seems well within the error bars. They found 33, so leaving out other sources of uncertainty, you'd expect only a 68% chance that the "right" figure - ie, if it were exactly the same as the average over the universe - would be between 27 and 38.
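That error bar is presumably just Poisson counting error on the 33 detections: the one-sigma (68%) interval is the count plus or minus its square root,

\[
33 \pm \sqrt{33} \approx 33 \pm 5.7,
\qquad
\text{i.e. roughly } 27.3 \text{ to } 38.7.
\]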
Note that PDM predicts a 1/6 ratio between gravitational observations and visual observations. I emphasize that because in the quotes above, the ratios were between something different, visual observations of brown dwarfs vs visible stars.

18 May 2012

Emtest

Previously

Some years back, I wrote a testing framework for emacs called Emtest. It lives in a repo hosted on Savannah, mirrored here, doc'ed here.

Cucumber

Recently a testing framework called Cucumber came to my attention. I have multiple reactions to it:

But they left important parts unadopted

But they didn't really adopt table testing in its full power. There are a number of things I have found important for table-driven testing that they apparently have not contemplated:

N/A fields
These are unprovided fields. A test detects them, usually skipping over rows that lack a relevant field. This is more useful than you might think. Often you are defining example inputs to a function that usually produces output (another field) but sometimes ought to raise an error. For those cases, you need to provide inputs, but there is nothing sensible to put in the output field.
Constructed fields
Often you want to construct some fields in terms of other fields in the same row. The rationale above leads directly there.
Constructed fields II
And often you want to construct examples in terms of examples that are used in other tests. You know those examples are right because they are part of working tests. If they had some subtle stupid mistake in them, it'd have already shown up there. Reuse is nice here.
Persistent fields
This idea is not originally mine; it comes from an article on Gamasutra1. I did expand it a lot, though. The author was looking for a way to test image generation (scenes), and what he did was, at some point, capture a "good" image from the same image generator. Then from that point on, he could automatically compare the output to a known good image.
  • He knew for sure when it passed.
  • When the comparison failed, he could diff the images and see where and how badly; it might be unnoticeable dithering or the generator might have omitted entire objects or shadows.
  • He could improve the reference image as his generator got better.

I've found persistent fields indispensable. I use them for basically anything that's easier to inspect than it is to write examples of. For instance, about half of the Klink tests use them.
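As a rough illustration of the golden-reference idea (my own toy sketch, not Emtest's actual persistence mechanism; the function name and file handling here are made up):

  ;; Compare OUTPUT against a stored "known good" reference.
  ;; If no reference exists yet, store OUTPUT so a human can inspect
  ;; and bless it; from then on the comparison is automatic.
  (defun my-check-against-reference (output ref-file)
    "Return non-nil if OUTPUT matches REF-FILE, creating it if absent."
    (if (file-exists-p ref-file)
        (string= output
                 (with-temp-buffer
                   (insert-file-contents ref-file)
                   (buffer-string)))
      (with-temp-file ref-file (insert output))
      (message "No reference yet; wrote %s for inspection" ref-file)
      nil))

When the comparison fails, you still have both the output and the reference on disk to diff, which is where the "see where and how badly" step comes from.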

They didn't even mention me

AFAICT neither Cucumber nor Gherkin credits me at all. Maybe they're honestly unaware of the lineage of the ideas they're using. Still, it gets tiresome not getting credit for stuff that AFAICT I invented and gave freely to everybody in the form of working code.

They don't use TESTRAL or anything like it.

TESTRAL is the format I defined for reporting tests. Without going into great detail, TESTRAL is better than anything else out there. Not just better than the brain-dead ad hoc formats, but better than TestXML.

BDD is nice

Still, I think they have some good ideas, especially regarding Behavior Driven Development. IMO that's much better than Test-Driven Development2.

In TDD, you're expected to test down to the fine-grained units. I've gone that route, and it's a chore. Yes, you get a nice regression suite, but pretty soon you just want to say "just let me write code!"

In contrast, where TDD is bottom-up, BDD is top-down. Your tests come from use-cases (which are structured the way I structure inline docstrings in tests, which is nice - and just how much did you Cucumber guys borrow?). BDD looks like a good paradigm for development.

Not satisfied with Emtest tables, I replaced them

But my "I was first" notwithstanding, I'm not satisfied with the way I made Emtest do tables. At the time, because nobody anywhere had experience with that sort of thing, I adopted the most flexible approach I could see. This was tag-based, an idea I borrowed from Carsten Dominik's org-mode3.

However, over the years the tag-based approach has proved too powerful.

  • It takes a lot of clever code behind the scenes to make it work.
  • Maintaining that code is a PITA. Really, it's been one of the most time-consuming parts of Emtest, and always had the longest todo list.
  • In front of the scenes, there's too much power. That's not as good as it sounds, and led to complex specifications because too many tags needed management.
  • Originally I had thought that a global tag approach would work best, because it would make the most stuff available. That was a dud, which I fixed years ago.

So, new tables for Emtest

So this afternoon I coded a better table package for Emtest. It's available on Savannah right now; rather, the new Emtest with it is available. It's much simpler to use:

emt:tab:make
Define a table, giving these arguments:
  docstring
  A docstring for the entire table.
  headers
  A list of column names. For now they are simply symbols; later they may get default initialization forms and other help.
  rows
  The remaining arguments are rows. Each begins with a namestring.
emt:tab:for-each-row
Evaluate a body once for each row, with the row bound to var-sym.
emt:tab
Given a table row and a field symbol, get the value of the respective field.

I haven't added Constructed fields or Persistent fields yet. I will when I have to use them.
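To make the shape of the API concrete, here's a usage sketch. The row syntax, the argument order, and the helper names (my-parse-int, my-check, my-table) are my guesses from the description above, not the real Emtest documentation, so treat it as illustrative only:

  ;; A hypothetical table of examples for a parser.
  (emt:tab:make
    "Examples for my-parse-int"          ; docstring for the whole table
    (input expected)                     ; headers: the column names
    ("simple"   "12"  12)                ; each row begins with a namestring
    ("negative" "-3"  -3)
    ("garbage"  "abc" nil))

  ;; Hypothetical loop over the rows, reading fields with emt:tab.
  (emt:tab:for-each-row my-table row
    (my-check (my-parse-int (emt:tab row 'input))
              (emt:tab row 'expected)))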

Also added foreign-tester support

Emtest also now supports foreign testers. That is, it can communicate with an external process running a tester, and then report that tester's results and do all the bells and whistles (persistence, organizing results, expanding and collapsing them, point-and-shoot launching of tests, etc). So the external tester need be not much more than "find test, run test, build TESTRAL result".

It communicates in Rivest-style canonical s-expressions, which are about as simple as structured formats get. They are just as expressive as XML, and interconverters exist.
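For concreteness, a canonical s-expression is just nested lists of length-prefixed byte strings. A toy encoder (my own sketch, not the code Emtest actually uses) might look like this:

  (defun my-csexp-encode (sexp)
    "Encode SEXP, a tree of symbols, strings and lists, as a canonical s-expression."
    (if (listp sexp)
        (concat "(" (mapconcat #'my-csexp-encode sexp "") ")")
      (let ((s (if (symbolp sexp) (symbol-name sexp) sexp)))
        (format "%d:%s" (string-bytes s) s))))

  ;; (my-csexp-encode '(result (name "my-test") (status "passed")))
  ;;   => "(6:result(4:name7:my-test)(6:status6:passed))"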

I did this with the idea of using it for the Functional Reactive Programming stuff I was talking about before, if in fact I make a test implementation for it (Not sure).

And renamed to tame the chaos

At one time I had written Emtest so that the function and command prefixes were all modular. Originally they were written out, like emtest/explorer/fileset/launch. That was huge and unwieldy, so I shortened their prefixes to module-unique abbreviations like emtl:

But when I looked at it again now, it was chaos! So now:

  • Everything the user would normally use is prefixed emtest
    • Main entry point emtest
    • Code-editing entry point emtest:insert
    • "Panic" reset command emtest:reset
    • etc
  • Everything else is prefixed emt: followed by a 2 or 3 letter abbreviation of its module.

I haven't done this to the define and testhelp modules, though, since the old names are probably still in use somewhere.

Footnotes:

1 See, when I borrow ideas, I credit the people they came from, even if I have improved on them. I can't find the article now, but I did look; it was somewhat over 5 years ago, one of the first big articles on testing there.

2 Kent Beck's. Again, crediting the originator.

3 Again credit where it's due. He didn't invent tags, of course, and I don't know who was upstream from him wrt that.

12 May 2012

Mutability And Signals 3

Previously

I have a crazy notion of using signals to fake mutability, thereby putting a sort of functional reactive programming on top of formally immutable data. (here and here)

Now

So recently I've been looking at how that might be done. Which basically means by fully persistent data structures. Other major requirements:

  • Cheap deep-copy
  • Support a mutate-in-place strategy (which I'd default to, though I'd also default to immutable nodes)
  • Means to propagate signals upwards in the overall digraph (ie, propagate in its transpose)

Fully persistent data promises much

  • As mentioned, signals formally replacing mutability.
  • Easily keep functions that shouldn't mutate objects outside themselves from doing so, even in the presence of keyed dynamic variables. For instance, type predicates.
  • From the above, cleanly support typed slots and similar.
  • Trivial undo.
  • Real Functional Reactive Programming in a Scheme. Implementations like Cell and FrTime are interesting but "bolted on" to languages that disagree with them. Flapjax certainly caught my interest but it's different (behavior based).
  • I'm tempted to implement logic programming and even constraint handling on top of it. Persistence does some major heavy lifting for those, though we'd have to distinguish "immutable", "mutate-in-place", and "constrain-only" versions.
  • If constraint handling works, that basically gives us partial evaluation.
  • And I'm tempted to implement Software Transactional Memory on it. Once you have fully persistent versioning, STM just looks like merging versions if they haven't collided or applying a failure continuation if they have. Detecting in a fine-grained way whether they have is the remaining challenge.

DSST: Great but yikes

So for fully persistent data structures, I read the Driscoll, Sarnak, Sleator and Tarjan paper (and others, but only DSST gave me the details). On the one hand, it basically gave me what I needed to implement this, if in fact I do. On the other hand, there were a number of "yikes!" moments.

The first was discovering that their solution did not apply to arbitrary digraphs, but to digraphs with a constant upper bound p on the number of incoming pointers. So the O(1) cost they reported is misleading. p "doesn't count" because it's a constant, but really we do want in-degree to be arbitrarily large, so it does count. I don't think it will be a big deal, because typical node in-degree is small in every codebase I've seen, even in some relentlessly self-referring monstrosities that I expect are the high-water mark for this.

Second yikes was a gap between the version-numbering means they refer to (Dietz et al) and their actual needs for version-numbering. Dietz et al just tell how to efficiently renumber a list when there's no room to insert a new number.

Figured that out: I have to use a level of indirection for the real indexes. Everything (version data and the persistent data structure) holds indirect indexes and looks up the real index when it needs it. The version-renumbering strategy is not crucial.

Third: Mutation boxes. DSST know about them, provide space for them, but then when they talk about the algorithm, totally ignore them. That would make the description much more complex, they explain. Yes, true, it would. But the reader is left staring at a gratuitously costly operation instead.

But I don't want to sound like I'm down on them. Their use of version-numbering was indispensable. Once I read and understood that, the whole thing suddenly seemed practical.
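To make the version-stamp idea concrete, here is a toy sketch. All the names and details are mine, not DSST's; real DSST uses bounded fat nodes with node copying, and an order-maintenance structure for version stamps (per Dietz et al), since versions form a tree rather than a line. Each field keeps (version . value) pairs, newest first, and a read at version v takes the newest entry not after v:

  (require 'seq)

  (defun my-fat-set (field version value)
    "Return FIELD with VALUE recorded at VERSION (integer versions, assigned in increasing order)."
    (cons (cons version value) field))

  (defun my-fat-get (field version)
    "Return the value FIELD had as of VERSION."
    (cdr (seq-find (lambda (entry) (<= (car entry) version)) field)))

  ;; (setq f (my-fat-set nil 1 'a))   ; at version 1 the field is a
  ;; (setq f (my-fat-set f   3 'b))   ; at version 3 it becomes b
  ;; (my-fat-get f 2)                 ; => a   (version 2 still sees a)
  ;; (my-fat-get f 5)                 ; => b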

Deep copy

But that still didn't implement a cheap deep copy on top of mutate-in-place. You could freeze a copy of the whole digraph, everywhere, but then you couldn't hold both that and a newer copy in a single structure. Either you'd see two copies of version A or two copies of version B, but never A and B.

Mixing versions tends to call up thoughts of confluent persistence, but IIUC this is a completely different thing. Confluent persistence IIUC tries to merge versions for you, which limits its generality. That would be like (say) finding every item that was in some database either today or Jan 1; that's different.

What I need is to hold multiple versions of the same structure at the same time, otherwise deep-copy is going to be very misleading.

So I'd introduce "version-mapping" nodes, transparent single-child nodes that, when they are1 accessed as one version, their child is explored as if a different version. Explore by one path, it's version A, by another it's version B.

Signals

Surprisingly, one part of what I needed for signals just fell out of DSST: parent pointers, kept up to date.

Aside from that, I'd:

  • Have signal receiver nodes. Constructed with a combiner and an arbitrary data object, such a node evaluates that combiner when anything below it is mutated, taking old copy, new copy, receiver object, and path (a rough sketch follows this list). This argobject looks very different under the hood. Old and new copy are recovered from the receiver object plus version stamps; it's almost free.
  • When signals cross the mappers I added above, change the version stamps they hold. This is actually trivial.
  • As an optimization, so we wouldn't be sending signals when there's no possible receiver, I'd flag parent pointers as to whether anything above them wants a signal.
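A very rough sketch of the receiver-and-flag idea above (all names are mine; it drops the version stamps and the old/new copies to keep things short): each node carries parent pointers and a flag saying whether anything above it wants a signal, and a mutation walks upward, calling any combiner it meets.

  (require 'cl-lib)

  (cl-defstruct my-node
    parents        ; list of parent nodes, kept up to date (as DSST provides)
    signal-wanted  ; non-nil if some ancestor holds a receiver
    combiner)      ; function called as (COMBINER NODE PATH), or nil

  (defun my-signal-upward (node path)
    "Propagate a mutation signal from NODE toward the roots, accumulating PATH."
    (dolist (parent (my-node-parents node))
      (when (my-node-signal-wanted parent)
        (when (my-node-combiner parent)
          (funcall (my-node-combiner parent) parent (cons node path)))
        (my-signal-upward parent (cons node path)))))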

Change of project

If I code this, and that's a big if, it will likely be a different project than Klink, my Kernel interpreter, though I'll borrow code from it.

  • It's such a major change that it hardly seems right to call it a Kernel interpreter.
  • With experience, there are any number of things I'd do differently. So if I restart, it'll be in C++ with fairly heavy use of templates and inheritance.
  • It's also an excuse to use EMSIP.

Footnotes:

1 Yes, I believe in using resumptive pronouns when it makes a sentence flow better.

Review Inside Jokes 1

Previously

I am currently reading Inside Jokes by Matthew M. Hurley, Daniel C. Dennett, and Reginald B. Adams Jr. So far, the book has been enlightening.

Brief summary

Their theory, which seems likely to me, is that humor occurs when you retract an active, committed, covertly entered belief.
Active
It's active in your mind at the moment. They base this on a Just-In-Time Spreading Activation model.
Covertly entered
Not a belief that you consciously came to. You assumed it "automatically".
Committed
A belief that you're sure about, as opposed to a "maybe". Sure to an ordinary degree, not necessarily to a metaphysical certitude.
And a blocking condition: Strong negative emotions block humor.

Basic humor

What they call "basic" humor is purely in your own "simple" (my word) mental frame. That frame is not interpersonal, doesn't have a theory of mind. Eg, when you suddenly realize where you left your car keys and it's a place that you foolishly ruled out before, which is often funny, that's basic humor.

Non-basic humor

Non-basic humor occurs in other mental frames. These frames have to include a theory of mind. Ie, we can't joke about clams - normal clams, not anthropomorphized in some way. I expect this follows from the requirement of retracting a belief in that frame.

Did they miss a trick?

They say that in third-person humor, the belief we retract is in our frame of how another person is thinking, what I might call an "empathetic frame".
I think that's a mis-step. A lot of jokes end with the butt of the joke plainly unenlightened. It's clear to everyone that nothing has been retracted in his or her mind. ISTM this doesn't fit at all.

Try social common ground instead.

I think they miss a more likely frame, one which I'd call social common ground. (More about it below)
We can't just unilaterally retract a belief that exists in social common ground. "Just disbelieving it" would be simply not doing social common ground. And we as social creatures have a great deal of investment in it.
To retract a belief in social common ground, something has to license us to do so, and it generally also impels us to. ISTM the need to create that license/impulse explains why idiot jokes are the way they are.
This also explains why the butt of the joke not "getting it" doesn't prevent a joke from being funny, and even enhances the mirth. His or her failure to "get it" doesn't block social license to retract.
Covert entry fits naturally here too. As social creatures, we also have a great deal of experience and habit regarding social common ground. This gives plenty of room for covert entry.

What's social common ground?

Linguistic common ground

"Common ground" is perhaps more easily explained in linguistics. If I mention (say) the book Inside Jokes, then you can say "it" to refer to it, even though you haven't previously mentioned the book yourself. But neither of us can just anaphorically1 refer to "it" when we collectively haven't mentioned it before.
We have a sort of shared frame that we both draw presuppositions from. Of course, it's not really, truly shared. It's a form of co-operation and it can break. But normally it's shared.

From language common ground to social common ground

I don't think it's controversial to say that:
  • A similar common ground frame always holds socially, even outside language.
  • Normal people maintain a sense of this common ground during social interactions.
  • Sometimes they do so even at odds with their wishes, the same way they can't help understanding speech in their native language.

Footnotes:

1 Pedantry: There are also non-anaphoric "it"s, such as "It's raining."