18 May 2012

Emtest

Previously

Some years back, I wrote a testing framework for emacs called Emtest. It lives in a repo hosted on Savannah, mirrored here, doc'ed here.

Cucumber

Recently a testing framework called Cucumber came to my attention. I have multiple reactions to it:

But they left important parts unadopted

But they didn't really adopt table testing in its full power. There are a number of things I have found important for table-driven testing that they apparently have not contemplated:

N/A fields
These are unprovided fields. A test detects them, usually skipping over rows that lack a relevant field. This is more useful than you might think. Often you are defining example inputs to a function that usually produces output (another field) but sometimes ought to raise an error. For those cases, you need to provide inputs, but there is nothing sensible to put in the output field. (See the sketch below.)
Constructed fields
Often you want to construct some fields in terms of other fields in the same row. The rationale above leads directly there.
Constructed fields II
And often you want to construct examples in terms of examples that are used in other tests. You know those examples are right because they are part of working tests. If they had some subtle stupid mistake in them, it'd have already shown up there. Reuse is nice here.
Persistent fields
This idea is not originally mine; it comes from an article on Gamasutra1. I did expand it a lot, though. The author looked for a way to test image generation (scenes), and what he did was, at some point, capture a "good" image from the same image generator. Then from that point on, he could automatically compare the output to a known good image.
  • He knew for sure when it passed.
  • When the comparison failed, he could diff the images and see where and how badly; it might be unnoticeable dithering or the generator might have omitted entire objects or shadows.
  • He could improve the reference image as his generator got better.

I've found persistent fields indispensable. I use them for basically anything that's easier to inspect than it is to write examples of. For instance, about half of the Klink tests use them.
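To make N/A fields concrete, here is a minimal sketch in Emacs Lisp. It is not Emtest's actual API; the rows, field names, and helper names are invented for illustration. Error-case rows simply omit the output field, and the test skips them:

    ;; A sketch only: plists standing in for table rows.
    (defvar my-example-rows
      '((:name "plain"     :input "abc" :output "ABC")
        (:name "empty"     :input ""    :output "")
        ;; This input ought to raise an error, so there is nothing
        ;; sensible to put in :output -- the field is simply N/A.
        (:name "bad input" :input nil))
      "Example rows for a function that upcases strings.")

    (dolist (row my-example-rows)
      ;; A test that checks :output skips rows lacking that field.
      (when (plist-member row :output)
        (unless (equal (upcase (plist-get row :input))
                       (plist-get row :output))
          (error "Row %s failed" (plist-get row :name)))))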

They didn't even mention me

AFAICT neither Cucumber nor Gherkin credits me at all. Maybe they're honestly unaware of the lineage of the ideas they're using. Still, it gets tiresome not getting credit for stuff that AFAICT I invented and gave freely to everybody in the form of working code.

They don't use TESTRAL or anything like it.

TESTRAL is the format I defined for reporting tests. Without going into great detail, TESTRAL is better than anything else out there. Not just better than the brain-dead ad hoc formats, but better than TestXML.

BDD is nice

Still, I think they have some good ideas, especially regarding Behavior Driven Development. IMO that's much better than Test-Driven Development2.

In TDD, you're expected to test down to the fine-grained units. I've gone that route, and it's a chore. Yes, you get a nice regression suite, but pretty soon you just want to say "just let me write code!"

In contrast, where TDD is bottom-up, BDD is top-down. Your tests come from use-cases (which are structured the way I structure inline docstrings in tests, which is nice; just how much did you Cucumber guys borrow?). BDD looks like a good paradigm for development.

Not satisfied with Emtest tables, I replaced them

But my "I was first" notwithstanding, I'm not satisfied with the way I made Emtest do tables. At the time, because nobody anywhere had experience with that sort of thing, I adopted the most flexible approach I could see. This was tag-based, an idea I borrowed from Carsten Dominik's org-mode3.

However, over the years the tag-based approach has proved too powerful.

  • It takes a lot of clever code behind the scenes to make it work.
  • Maintaining that code is a PITA. Really, it's been one of the most time-consuming parts of Emtest, and always had the longest todo list.
  • In front of the scenes, there's too much power. That's not as good as it sounds, and led to complex specifications because too many tags needed management.
  • Originally I had thought that a global tag approach would work best, because it would make the most stuff available. That was a dud, which I fixed years ago.

So, new tables for Emtest

So this afternoon I coded a better table package for Emtest. It's available on Savannah right now; rather, the new Emtest with it is available. It's much simpler to use:

emt:tab:make
Define a table, giving arguments:
docstring
A docstring for the entire table.
headers
A list of column names. For now they are simply symbols; later they may get default initialization forms and other help.
rows
The remaining arguments are rows. Each begins with a namestring.
emt:tab:for-each-row
Evaluate body once for each row, with the row bound to var-sym.
emt:tab
Given a table row and a field symbol, get the value of the respective field.
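Going only by the descriptions above, usage would read something like this sketch. The argument order and the test body are my guesses, not verbatim from the package:

    ;; A guessed sketch of the new table package in use; the real
    ;; signatures may differ in detail.
    (let ((table
           (emt:tab:make "Examples for an upcasing function"
             (input expected)
             ("plain" "abc" "ABC")
             ("empty" ""    ""))))
      (emt:tab:for-each-row table row
        (unless (equal (upcase (emt:tab row 'input))
                       (emt:tab row 'expected))
          (error "Table row failed"))))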

I haven't added Constructed fields or Persistent fields yet. I will when I have to use them.

Also added foreign-tester support

Emtest also now supports foreign testers. That is, it can communicate with an external process running a tester, and then report that tester's results and provide all the bells and whistles (persistence, organizing results, expanding and collapsing them, point-and-shoot launching of tests, etc.). So the external tester can be not much more than "find test, run test, build TESTRAL result".

It communicates in Rivest-style canonical s-expressions, which is as simple a structured format as anything ever. It's as expressive as XML, and interconverters exist.
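For a sense of how simple the format is, here is a toy encoder in Emacs Lisp (my own sketch, not Emtest's actual code). Every atom is length-prefixed, so no quoting or whitespace rules are needed:

    ;; Toy encoder for Rivest-style canonical s-expressions.
    (defun my-csexp-encode (x)
      (if (listp x)
          (concat "(" (mapconcat #'my-csexp-encode x "") ")")
        (let ((s (format "%s" x)))
          (format "%d:%s" (string-bytes s) s))))

    (my-csexp-encode '(report (test 1 passed)))
    ;; => "(6:report(4:test1:16:passed))"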

I did this with the idea of using it for the Functional Reactive Programming stuff I was talking about before, if in fact I make a test implementation for it (not sure yet).

And renamed to tame the chaos

At one time I had written Emtest so that the function and command prefixes were all modular. Originally they were written out, like emtest/explorer/fileset/launch. That was huge and unwieldy, so I shortened the prefixes to module-unique abbreviations like emtl:

But when I looked at it again now, it was chaos! So now:

  • Everything the user would normally use is prefixed emtest
    • Main entry point emtest
    • Code-editing entry point emtest:insert
    • "Panic" reset command emtest:reset
    • etc
  • Everything else is prefixed emt: followed by a 2 or 3 letter abbreviation of its module.

I haven't done this to the define and testhelp modules, though, since the old names are probably still in use somewhere.

Footnotes:

1 See, when I borrow ideas, I credit the people they came from, even if I have improved on them. I can't find the article, but I did look; it was somewhat over 5 years ago, one of the first big articles on testing there.

2 Kent Beck's. Again, crediting the originator.

3 Again credit where it's due. He didn't invent tags, of course, and I don't know who was upstream from him wrt that.

12 May 2012

Mutability And Signals 3

Previously

I have a crazy notion of using signals to fake mutability, thereby putting a sort of functional reactive programming on top of formally immutable data. (here and here)

Now

So recently I've been looking at how that might be done, which basically means fully persistent data structures. Other major requirements:

  • Cheap deep-copy
  • Support a mutate-in-place strategy (which I'd default to, though I'd also default to immutable nodes)
  • Means to propagate signals upwards in the overall digraph (ie, propagate in its transpose)

Fully persistent data promises much

  • As mentioned, signals formally replacing mutability.
  • Easily keep functions that shouldn't mutate objects outside themselves from doing so, even in the presence of keyed dynamic variables. For instance, type predicates.
  • From the above, cleanly support typed slots and similar.
  • Trivial undo.
  • Real Functional Reactive Programming in a Scheme. Implementations like Cell and FrTime are interesting but "bolted on" to languages that disagree with them. Flapjax certainly caught my interest but it's different (behavior based).
  • I'm tempted to implement logic programming and even constraint handling on top of it. Persistence does some major heavy lifting for those, though we'd have to distinguish "immutable", "mutate-in-place", and "constrain-only" versions.
  • If constraint handling works, that basically gives us partial evaluation.
  • And I'm tempted to implement Software Transactional Memory on it. Once you have fully persistent versioning, STM just looks like merging versions if they haven't collided or applying a failure continuation if they have. Detecting in a fine-grained way whether they have is the remaining challenge (sketched schematically below).
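Schematically, and assuming the fine-grained detection exists, commit would look like this; collides-p and merge-versions are hypothetical names, not anything implemented:

    ;; STM on top of full persistence, schematically.  A transaction
    ;; forks BASE into MINE; commit merges or fails.
    (defun my-stm-commit (base mine current fail-k)
      ;; `collides-p' is hypothetical: did CURRENT change anything
      ;; MINE touched since BASE?  That detection is the hard part.
      (if (collides-p base mine current)
          (funcall fail-k)
        (merge-versions current mine)))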

DSST: Great but yikes

So for fully persistent data structures, I read the Driscoll, Sarnak, Sleator and Tarjan paper (and others, but only DSST gave me the details). On the one hand, it basically gave me what I needed to implement this, if in fact I do. On the other hand, there were a number of "yikes!" moments.

The first was discovering that their solution did not apply to arbitrary digraphs, but to digraphs with a constant upper bound p on the number of incoming pointers. So the O(1) cost they reported is misleading: p "doesn't count" because it's a constant, but really we do want in-degree to be arbitrarily large, so it does count. I don't think it will be a big deal, because typical node in-degree is small in all the code I've seen, even in some relentlessly self-referring monstrosities that I expect are the high-water mark for this.

Second yikes was a gap between the version-numbering means they refer to (Dietz et al) and their actual needs for version-numbering. Dietz et al just tell how to efficiently renumber a list when there's no room to insert a new number.

Figured that out: I have to use a level of indirection for the real indexes. Everything (version data and persistent data structure) holds indirect indexes and looks up the real index when it needs it. The version-renumbering strategy is not crucial.
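A sketch of that indirection, with invented names: everything stores stable ids, and a single vector maps ids to the current order keys, so renumbering touches only the vector:

    ;; Version ids are stable; only this vector changes on renumbering.
    (defvar my-real-index (make-vector 1024 nil)
      "Maps indirect version ids to current order keys.")

    (defun my-version-key (id)
      (aref my-real-index id))

    (defun my-renumber (ids)
      ;; Reassign order keys to IDS in order, leaving gaps for later
      ;; insertions.  Nothing that stores an id has to change.
      (let ((key 0))
        (dolist (id ids)
          (aset my-real-index id (setq key (+ key 10))))))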

Third: mutation boxes. DSST know about them and provide space for them, but when they talk about the algorithm, they totally ignore them. That would make the description much more complex, they explain. Yes, true, it would. But the reader is left staring at a gratuitously costly operation instead.

But I don't want to sound like I'm down on them. Their use of version-numbering was indispensable. Once I read and understood that, the whole thing suddenly seemed practical.

Deep copy

But that still didn't implement a cheap deep copy on top of mutate-in-place. You could freeze a copy of the whole digraph, everywhere, but then you couldn't hold both that and a newer copy in a single structure. Either you'd see two copies of version A or two copies of version B, but never A and B.

Mixing versions tends to call up thoughts of confluent persistence, but IIUC this is a completely different thing. Confluent persistence IIUC tries to merge versions for you, which limits its generality. That would be like (say) finding every item that was in some database either today or Jan 1; that's different.

What I need is to hold multiple versions of the same structure at the same time, otherwise deep-copy is going to be very misleading.

So I'd introduce "version-mapping" nodes, transparent single-child nodes that, when they are1 accessed as one version, their child is explored as if a different version. Explore by one path, it's version A, by another it's version B.
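In code, such a node might look like this sketch (invented names; my-node-fields stands in for an ordinary DSST fat-node lookup):

    ;; A transparent node that remaps the version stamp on the way down.
    (require 'cl-lib)
    (cl-defstruct my-vmap child from-version to-version)

    (defun my-deref (node version)
      (if (my-vmap-p node)
          (my-deref (my-vmap-child node)
                    (if (equal version (my-vmap-from-version node))
                        (my-vmap-to-version node)
                      version))
        ;; Hypothetical: read NODE's fields as of VERSION from its
        ;; DSST slots.
        (my-node-fields node version)))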

Signals

Surprisingly, one part of what I needed for signals just fell out of DSST: parent pointers, kept up to date.

Aside from that, I'd:

  • Have signal receiver nodes. Constructed with a combiner and an arbitrary data object, it evaluates that combiner when anything below it is mutated, taking old copy, new copy, receiver object, and path. This argobject looks very different under the hood. Old and new copy are recovered from the receiver object plus version stamps; it's almost free. (See the sketch after this list.)
  • When signals cross the mappers I added above, change the version stamps they hold. This is actually trivial.
  • As an optimization, so we wouldn't be sending signals when there's no possible receiver, I'd flag parent pointers as to whether anything above them wants a signal.
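A sketch of the receiver part, reusing my-deref from the sketch above (again invented names, not working project code):

    (cl-defstruct my-receiver combiner object)

    (defun my-signal (receiver node path old-version new-version)
      ;; Old and new copies are the same structure seen under two
      ;; version stamps, so recovering them is almost free.
      (funcall (my-receiver-combiner receiver)
               (my-deref node old-version)
               (my-deref node new-version)
               (my-receiver-object receiver)
               path))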

Change of project

If I code this, and that's a big if, it will likely be a different project than Klink, my Kernel interpreter, though I'll borrow code from it.

  • It's such a major change that it hardly seems right to call it a Kernel interpreter.
  • With experience, there are any number of things I'd do differently. So if I restart, it'll be in C++ with fairly heavy use of templates and inheritance.
  • It's also an excuse to use EMSIP.

Footnotes:

1 Yes, I believe in using resumptive pronouns when it makes a sentence flow better.

Review Inside Jokes 1

Previously

I am currently reading Inside Jokes by Matthew M. Hurley, Daniel C. Dennett, and Reginald B. Adams Jr. So far, the book has been enlightening.

Brief summary

Their theory, which seems likely to me, is that humor occurs when you retract an active, committed, covertly entered belief.
Active
It's active in your mind at the moment. They base this on a Just-In-Time Spreading Activation model.
Covertly entered
Not a belief that you consciously came to. You assumed it "automatically".
Committed
A belief that you're sure about, as opposed to a "maybe". To an ordinary degree, not necessarily to a metaphysical certitude.
And a blocking condition: Strong negative emotions block humor.

Basic humor

What they call "basic" humor is purely in your own "simple" (my word) mental frame. That frame is not interpersonal, doesn't have a theory of mind. Eg, when you suddenly realize where you left your car keys and it's a place that you foolishly ruled out before, which is often funny, that's basic humor.

Non-basic humor

Non-basic humor occurs in other mental frames. These frames have to include a theory of mind. Ie, we can't joke about clams - normal clams, not anthropomorphized in some way. I expect this follows from the requirement of retracting a belief in that frame.

Did they miss a trick?

They say that that in third-person humor, the belief we retract is in our frame of how another person is thinking, what I might call an "empathetic frame".
I think that's a mis-step. A lot of jokes end with the butt of the joke plainly unenlightened. It's clear to everyone that nothing has been retracted in his or her mind. ISTM this doesn't fit at all.

Try social common ground instead.

I think they miss a more likely frame, one which I'd call social common ground. (More about it below)
We can't just unilaterally retract a belief that exists in social common ground. "Just disbelieving it" would be simply not doing social common ground. And we as social creatures have a great deal of investment in it.
To retract a belief in social common ground, something has to license us to do so, and it generally also impels us to. ISTM the need to create that license/impulse explains why idiot jokes are the way they are.
This also explains why the butt of the joke not "getting it" doesn't prevent a joke from being funny, and even enhances the mirth. His or her failure to "get it" doesn't block social license to retract.
Covert entry fits naturally here too. As social creatures, we also have a great deal of experience and habit regarding social common ground. This gives plenty of room for covert entry.

What's social common ground?

Linguistic common ground
"Common ground" is perhaps more easily explained in linguistics. If I mention (say) the book Inside Jokes, then you can say "it" to refer to it, even though you haven't previously mentioned the book yourself. But neither of us can just anaphorically1 refer to "it" when we collectively haven't mentioned it before.
We have a sort of shared frame that we both draw presuppositions from. Of course, it's not really, truly shared. It's a form of co-operation and it can break. But normally it's shared.

From language common ground to social common ground

I don't think it's controversial to say that:
  • A similar common ground frame always holds socially, even outside language.
  • Normal people maintain a sense of this common ground during social interactions.
  • Sometimes they do so even at odds with their wishes, the same way they can't help understanding speech in their native language.

Footnotes:

1 Pedantry: There are also non-anaphoric "it"s, such as "It's raining."

05 May 2012

I may not be the first to propose PDM

Previously

Previously I advanced Parallel Dark Matter, the theory that dark matter is actually normal matter that "lives" on one of 5 "parallel universes" that exchange only gravitational force with the visible universe. I presumptively call these parallel universes "branes" because they fit with braneworld cosmology.

Spergel and Steinhardt proposed it earlier

They may have proposed it in 2000, and in exactly one sentence.
It's not exactly the same: They don't explicitly propose that it simply is ordinary matter on another brane, and they do not propose multiple branes accounting for the ratio of dark matter to visible matter. But it's close enough that in good conscience I have to let everyone know that they said this first.
AFAICT they and everyone else paid no further attention to it.
The relevant sentence is on page 2: "M-theory and superstrings, for example, suggest the possibility that dark matter fields reside on domain walls with gauge fields separated from ordinary matter by an extra (small) dimension".

04 May 2012

The nature of Truth

The nature of Truth

Previously

I recently finished reading A User's Guide To Thought And Meaning by Ray Jackendoff. In it, he asks "What is truth?" and mentions several problems with what we might call the conventional view.

He didn't really answer the question, but on reading it, a surprising answer occurred to me.

T=WVP

Truth is just what valid reasoning preserves.

No more and no less. I'll abbreviate it T=WVP.

Not "about the world"

The conventional view is that truths are about the world, and valid reasoning merely doesn't drop the ball. I'll abbreviate it CVOT. To illustrate CVOT, consider:

All elephants are pink
Nelly is an elephant
Nelly is pink

where the reasoning is valid but the major premiss is false, and so is the conclusion.

Since "about the world" plays no part in my definition, I feel the need to justify why it needn't and shouldn't.

"About the world" isn't really about the world

Consider the above example. Presumably you determined that "All elephants are pink" is false because at some point you saw an elephant and it was grey1.

And how did you determine that what you were seeing was an elephant and it wasn't pink? Please don't stop at "I saw it and I just knew". I know that readers of this blog have more insight into their thinking than that. Your eyes and your brain interpreted something as seeing a greyish elephant. I'm not saying it wasn't one, mind you. But you weren't born knowing all about elephants. You had to learn about them. You even had to learn the conventional color distinctions - other cultures distinguish the named colors differently.

So you used reasoning to determine that this sensory input indicated an elephant. Not conscious reasoning - the occipital lobe does an enormous amount of processing without conscious supervision, and not declarative facts - more like skills to interpret sights correctly. But consciously or not, you used a type of reasoning.

So the major premiss ("All elephants are pink") wasn't directly about the world after all. We reached it by reasoning. So on this level at least, T=WVP looks unimpeachable and CVOT looks problematic.

Detour: Reasoning and valid deductive reasoning

I'll go back in a moment and finish that argument, but first I must clarify something.

My sharp-eyed readers will have noticed that I first talked about valid reasoning, but above I just said "reasoning" and meant something much broader than conscious deductive reasoning. I'm referring to two different things.

Deductive reasoning is the type of reasoning involved in the definition, because only deductive reasoning can be valid. But other types of reasoning too can be characterized by how well or poorly they preserve truth in some salient context, even while we define truth only by reference to valid reasoning. Truth-preservation is not the only virtue that reasoning can have. For instance, one can also ask how well it finds promising hypotheses or explores ramifications. Truth-preservation is just the aspect that's relevant to this definition.

One might object that evolutionarily, intuitive reasoning is not motivated by agreeing with deductive reasoning, but by usefulness. Evolution provided us with reasoning tools not because it has great respect for deductive reasoning, but because they are "good tricks" and saved the lives of our remote ancestors. In some cases useful mental activity and correct mental activity part company, for instance a salesperson convincing himself or herself that the line of products really is a wonderful bargain, the better to persuade the customers, when honestly it's not.

True. It's a happy accident that evolutionary "good tricks" gave us tools that strongly tend to agree with deductive reasoning. But accident or not, we can sensibly characterize other acts of reasoning by how well or poorly they preserve truth.

Can something save CVOT?

I said that "on this level at least, T=WVP looks unimpeachable and CVOT looks problematic."

Well, couldn't we extend CVOT one level down? Yes we could, but the same situation recurs. The inputs, which look at first like truths or falsities about the world, turn out on closer inspection to be the products of yet more reasoning (in the broad sense). And not necessarily our own reasoning; they could be "pre-packaged" by somebody else. This gives us no better reason to expect that they truthfully describe the real world.

Can we save CVOT by looking so far down the tower2 of mental levels that there's just no reasoning involved? We must be careful not to stop prematurely, for instance at "I just see an elephant". Although nobody taught us how to see and we didn't consciously reason it out, there is reasoning work being done underneath.

What if we look so far down that no living creature has mentally operated on the inputs? For instance, when we smell a particular chemical, say formaldehyde, because our smell receptors match the chemical's shape?

Is that process still about the world? Yes, but not the way the color of elephants was. It tells you that there are molecules of formaldehyde at this spot at this time. That's much more limited.

CVOT can't stop here. It wouldn't be right to treat this process as magically perceiving the world. A nerve impulse is not a molecule of formaldehyde. To save CVOT, truth about the world still has to enter the picture somehow. There's still a mediating process from inputs (a molecule of formaldehyde is nearby) to outputs (sending an impulse).

But by now you can see the dilemma for CVOT: in trying to find inputs that are true but aren't mediated by reasoning, we have to keep descending further, but in doing so, we sacrifice aboutness and still face the same problem of inputs.

Can CVOT just stop descending at some point? Can we save it by positing that the whole process (chemical, cell, impulse) produces an output that's true about the world, and furthermore that this truth is achieved other than by correctly processing true inputs about the world?

Yes for the first part, no for the second. If we fool the smell receptor, for instance by triggering it with electricity instead of formaldehyde, it will happily communicate a falsehood about the world, because it will have correctly processed false inputs.

So we do need to be concerned about the truth of the inputs, so CVOT does need to keep descending. It has to descend to natural selection at this point. Since I believe in the unity of design space, I think this change of destination makes no difference to the argument, so I merely mention it in passing.

Since we must descend as long as there are inputs, where will it end? What has outputs but no inputs? What can be directly sensed without any mediation?

If there is such a level to land at, I can only imagine it as a level of pointillistic experiences. Like Euclid's points, they have no part. One need not assemble them from lower inputs because they have no structure to require assembly.

If such pointillistic experiences exist, they aren't about anything because they don't have any structure. At best, a pointillistic experience indicates transiently, without providing further context, a single interaction in the world. Not being about anything, they can't be truths about the world.

So CVOT is not looking good. It needs its ultimate inputs to have aboutness and they don't, not properly anyways.

Does T=WVP do better?

If CVOT has problems, that doesn't necessarily mean that T=WVP doesn't. Can T=WVP offer a coherent view of truth, one that doesn't need magically true inputs?

I believe it can. I said earlier that truth-preservation is not the only virtue that reasoning can have. Abductive reasoning can (under felicitous conditions) find good explanations, and inductive reasoning can supply probable facts even in the absence of inputs. Bear in mind that I include unconscious, frozen, and tacit processes here, just as long as they are doing any reasoning work.

So while deductive reasoning doesn't drop the ball, other types of reasoning can actually improve the ball. Could they improve the ball so much that really, as processed thru this grand and mostly unconscious tower of reasoning, they actually create the ball? Could they incrementally transform initial inputs that aren't even properly about the world into truth as we know it? I contend that this is exactly how it happens.

Other indications that "about the world" just doesn't belong

Consider the following statements3:

  1. Sherlock Holmes was a detective
  2. Sherlock Holmes was a chef

Notice I didn't say "fictional". You can figure out that they're talking about fiction, but that's not in the statements themselves.

I assume your intuition, like mine, is that (1) is true (or true-ish) and (2) is false (or false-ish).

In CVOT, they're the same, because they're both meaningless (or indeterminate or falsely presupposing). (1) can't naturally be privileged over (2) in CVOT.

In T=WVP, (1) is privileged over (2), as it should be. Both are reasoning about Arthur Conan Doyle's fiction. (1) proceeds from healthy, unexceptional reasoning about them, while (2) somehow imagines Holmes serving the hound of the Baskervilles to dinner guests. (1) clearly proceeds from better reasoning than (2), and in T=WVP this justifies its superior truth status.

CVOT could be awkwardly salvaged by saying that we allow accommodation, so we map "Sherlock Holmes" to the fictional detective by adding the qualifier "fictional" to the statements. But then why can't we fix (2) with accommodation too? Doyle never wrote "Cookin' With Sherlock", but it's likely that someone somewhere has. Why can't we accommodate to that too? And if we accommodate to anything anyone ever wrote, including (say) Alice In Wonderland and Bizarro world, being about the world means almost nothing.

Furthermore, if we accept accommodation as truth-preserving, we risk finding that "All elephants are pink" is true too4, because "by pink, you must mean ever so slightly pinkish grey" or "by elephant, you must mean a certain type of mouse".

I could reductio further, but I think I've belabored it enough.

Circularity avoided in T=WVP

Rather than defining truth as what valid reasoning preserves, it's more usual to define valid reasoning as truth-preserving operations. Using both definitions together would make a circular definition.

But we can define valid reasoning in other ways. For instance, in terms of tautologies - statements that are always true no matter what value their variables take. A tautology whose top functor is "if" (material implication) describes a valid reasoning operation. For instance:

(a & (a -> b)) -> b

In English, "If you have A and you also have "A implies B", then you have B". That's modus ponens and it's valid reasoning.

I said tautologies are "statements that are always true", which is the conventional definition of them, but it contains "true". Again I need to avoid a circular definition. So I just define tautology and the logical operations in terms of a matrix of enumerated values (a truth-table). We don't need to know the nature of truth to construct such a matrix or to examine it. We can construct operations isomorphic to the usual logical operations simply in terms of opaque symbols:

X      Y      X AND Y
true   true   true
true   false  false
false  true   false
false  false  false

X      Y      X OR Y
true   true   true
true   false  true
false  true   true
false  false  false

X      NOT X
true   false
false  true
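To see the definition at work, here is a brute-force check in Emacs Lisp that modus ponens is a tautology, enumerating the same matrix as above:

    ;; Material implication, then enumeration over all truth values.
    (defun my-implies (a b) (or (not a) b))

    (let ((tautology t))
      (dolist (a '(t nil))
        (dolist (b '(t nil))
          (unless (my-implies (and a (my-implies a b)) b)
            (setq tautology nil))))
      tautology)  ; => t, so (a & (a -> b)) -> b holds in every row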

Some other virtues of this definition

Briefly:

  • It recovers the Quinean disquotation sense of truth. Ie, a quoted true statement, interpreted competently, is true.
  • It recovers our ordinary sense of truth (I hinted at this above)
  • It recovers the property of truth that a chain is only as strong as its weakest link.

Footnotes:

1 Or you trusted somebody else who told you they saw a grey elephant. In which case, read the argument as applying to them.

2 I'm talking as if it was a tower of discrete levels only for expository convenience. I don't think it's all discrete levels, I think it's the usual semi-fluid, semi-defined situation that natural selection creates.

3 Example borrowed from Ray Jackendoff

4 Strictly speaking, we would only do this for presuppositions, but if the speaker mentions "the pink elephant" at some point the reductio is good to go.