05 May 2012

I may not be the first to propose PDM

Previously

Previously I advanced Parallel Dark Matter (PDM), the theory that dark matter is actually normal matter that "lives" on one of five "parallel universes" which exchange only gravitational force with the visible universe. I presumptively call these parallel universes "branes" because they fit with braneworld cosmology.

Spergel and Steinhardt proposed it earlier

They arguably proposed it back in 2000, and in exactly one sentence.
It's not exactly the same: They don't explicitly propose that it simply is ordinary matter on another brane, and they do not propose multiple branes accounting for the ratio of dark matter to visible matter. But it's close enough that in good conscience I have to let everyone know that they said this first.
AFAICT they and everyone else paid no further attention to it.
The relevant sentence is on page 2: "M-theory and superstrings, for example, suggest the possibility that dark matter fields reside on domain walls with gauge fields separated from ordinary matter by an extra (small) dimension".

04 May 2012

The nature of Truth

Previously

I recently finished reading A User's Guide To Thought And Meaning by Ray Jackendoff. In it, he asks "What is truth?" and mentions several problems with what we might call the conventional view.

He didn't really answer the question, but on reading it, a surprising answer occurred to me.

T=WVP

Truth is just what valid reasoning preserves.

No more and no less. I'll abbreviate it T=WVP.

Not "about the world"

The conventional view is that truths are about the world, and valid reasoning merely doesn't drop the ball. I'll abbreviate it CVOT. To illustrate CVOT, consider:

All elephants are pink
Nelly is an elephant
Nelly is pink

where the reasoning is valid but the major premiss is false, and so is the conclusion.

Since "about the world" plays no part in my definition, I feel the need to justify why it needn't and shouldn't.

"About the world" isn't really about the world

Consider the above example. Presumably you determined that "All elephants are pink" is false because at some point you saw an elephant and it was grey[1].

And how did you determine that what you were seeing was an elephant and it wasn't pink? Please don't stop at "I saw it and I just knew". I know that readers of this blog have more insight into their thinking than that. Your eyes and your brain interpreted something as seeing a greyish elephant. I'm not saying it wasn't one, mind you. But you weren't born knowing all about elephants. You had to learn about them. You even had to learn the conventional color distinctions - other cultures distinguish the named colors differently.

So you used reasoning to determine that this sensory input indicated an elephant. Not conscious reasoning - the occipital lobe does an enormous amount of processing without conscious supervision, and not declarative facts - more like skills to interpret sights correctly. But consciously or not, you used a type of reasoning.

So the major premiss ("All elephants are pink") wasn't directly about the world after all. We reached it by reasoning. So on this level at least, T=WVP looks unimpeachable and CVOT looks problematic.

Detour: Reasoning and valid deductive reasoning

I'll go back in a moment and finish that argument, but first I must clarify something.

My sharp-eyed readers will have noticed that I first talked about valid reasoning, but above I just said "reasoning" and meant something much broader than conscious deductive reasoning. I'm referring to two different things.

Deductive reasoning is the type of reasoning involved in the definition, because only deductive reasoning can be valid. But other types of reasoning too can be characterized by how well or poorly they preserve truth in some salient context, even while we define truth only by reference to valid reasoning. Truth-preservation is not the only virtue that reasoning can have. For instance, one can also ask how well it finds promising hypotheses or explores ramifications. Truth-preservation is just the aspect that's relevant to this definition.

One might object that evolutionarily, intuitive reasoning is not motivated by agreeing with deductive reasoning, but by usefulness. Evolution provided us with reasoning tools not because it has great respect for deductive reasoning, but because they are "good tricks" and saved the lives of our remote ancestors. In some cases useful mental activity and correct mental activity part company, for instance a salesperson convincing himself or herself that the line of products really is a wonderful bargain, the better to persuade the customers, when honestly it's not.

True. It's a happy accident that evolutionary "good tricks" gave us tools that strongly tend to agree with deductive reasoning. But accident or not, we can sensibly characterize other acts of reasoning by how well or poorly they preserve truth.

Can something save CVOT?

I said that "on this level at least, T=WVP looks unimpeachable and CVOT looks problematic."

Well, couldn't we extend CVOT one level down? Yes we could, but the same situation recurs. The inputs, which look at first like truths or falsities about the world, turn out on closer inspection to be the products of yet more reasoning (in the broad sense). And not necessarily our own reasoning; they could be "pre-packaged" by somebody else. This gives us no better reason to expect that they truthfully describe the real world.

Can we save CVOT by looking so far down the tower[2] of mental levels that there's just no reasoning involved? We must be careful not to stop prematurely, for instance at "I just see an elephant". Although nobody taught us how to see and we didn't consciously reason it out, there is reasoning work being done underneath.

What if we look so far down that no living creature has mentally operated on the inputs? For instance, when we smell a particular chemical, say formaldehyde, because our smell receptors match the chemical's shape?

Is that process still about the world? Yes, but not the way the color of elephants was. It tells you that there are molecules of formaldehyde at this spot at this time. That's much more limited.

CVOT can't stop here. It wouldn't be right to treat this process as magically perceiving the world. A nerve impulse is not a molecule of formaldehyde. To save CVOT, truth about the world still has to enter the picture somehow. There's still a mediating process from inputs (a molecule of formaldehyde is nearby) to outputs (sending an impulse).

But by now you can see the dilemma for CVOT: in trying to find inputs that are true but aren't mediated by reasoning, we have to keep descending further, but in doing so, we sacrifice aboutness and still face the same problem of inputs.

Can CVOT just stop descending at some point? Can we save it by positing that the whole process (chemical, cell, impulse) produces an output that's true about the world, and furthermore that this truth is achieved other than by correctly processing true inputs about the world?

Yes for the first part, no for the second. If we fool the smell receptor, for instance by triggering it with electricity instead of formaldehyde, it will happily communicate a falsehood about the world, because it will have correctly processed false inputs.

So we do need to be concerned about the truth of the inputs, and CVOT does need to keep descending. It has to descend to natural selection at this point. Since I believe in the unity of design space, I think this change of destination makes no difference to the argument, so I merely mention it in passing.

Since we must descend as long as there are inputs, where will it end? What has outputs but no inputs? What can be directly sensed without any mediation?

If there is such a level to land at, I can only imagine it as a level of pointillistic experiences. Like Euclid's points, they have no part. One need not assemble them from lower inputs because they have no structure to require assembly.

If such pointillistic experiences exist, they aren't about anything because they don't have any structure. At best, a pointillistic experience indicates transiently, without providing further context, a single interaction in the world. Not being about anything, they can't be truths about the world.

So CVOT is not looking good. It needs its ultimate inputs to have aboutness, and they don't, not properly anyway.

Does T=WVP do better?

If CVOT has problems, that doesn't necessarily mean that T=WVP doesn't. Can T=WVP offer a coherent view of truth, one that doesn't need magically true inputs?

I believe it can. I said earlier that truth-preservation is not the only virtue that reasoning can have. Abductive reasoning can (under felicitous conditions) find good explanations and inductive reasoning can supply probable facts even in the absence of inputs. Bear in mind that I include unconscious, frozen, and tacit processes here, just as long as they are doing any reasoning work.

So while deductive reasoning doesn't drop the ball, other types of reasoning can actually improve the ball. Could they improve the ball so much that really, as processed through this grand and mostly unconscious tower of reasoning, they actually create the ball? Could they incrementally transform initial inputs that aren't even properly about the world into truth as we know it? I contend that this is exactly how it happens.

Other indications that "about the world" just doesn't belong

Consider the following statements[3]:

  1. Sherlock Holmes was a detective
  2. Sherlock Holmes was a chef

Notice I didn't say "fictional". You can figure out that they're talking about fiction, but that's not in the statements themselves.

I assume your intuition, like mine, is that (1) is true (or true-ish) and (2) is false (or false-ish).

In CVOT, they're the same, because they're both meaningless (or indeterminate or falsely presupposing). (1) can't naturally be privileged over (2) in CVOT.

In T=WVP, (1) is privileged over (2), as it should be. Both involve reasoning about Arthur Conan Doyle's fiction. (1) proceeds from healthy, unexceptional reasoning about those stories, while (2) somehow imagines Holmes serving the hound of the Baskervilles to dinner guests. (1) clearly proceeds from better reasoning than (2), and in T=WVP this justifies its superior truth status.

CVOT could be awkwardly salvaged by saying that we allow accommodation, so we map "Sherlock Holmes" to the fictional detective by adding the qualifier "fictional" to the statements. But then why can't we fix (2) with accommodation too? Doyle never wrote "Cookin' With Sherlock", but it's likely that someone somewhere has. Why can't we accommodate to that too? And if we accommodate to anything anyone ever wrote, including (say) Alice In Wonderland and Bizarro World, being about the world means almost nothing.

Furthermore, if we accept accommodation as truth-preserving, we risk finding that "All elephants are pink" is true too[4], because "by pink, you must mean ever so slightly pinkish grey" or "by elephant, you must mean a certain type of mouse".

I could reductio further, but I think I've belabored it enough.

Circularity avoided in T=WVP

Rather than defining truth as what valid reasoning preserves, it's more usual to define valid reasoning as truth-preserving operations. Using both definitions together would make a circular definition.

But we can define valid reasoning in other ways. For instance, in terms of tautologies - statements that are always true no matter what value their variables take. A tautology whose top functor is "if" (material implication) describes a valid reasoning operation. For example:

(a & (a -> b)) -> b

In English: "If you have A, and you also have 'A implies B', then you have B." That's modus ponens, and it's valid reasoning.

I said tautologies are "statements that are always true", which is the conventional definition of them, but it contains "true". Again I need to avoid a circular definition. So I just define tautology and the logical operations in terms of a matrix of enumerated values (a truth-table). We don't need to know the nature of truth to construct such a matrix or to examine it. We can construct operations isomorphic to the usual logical operations simply in terms of opaque symbols:

X      Y      X AND Y
true   true   true
true   false  false
false  true   false
false  false  false

X      Y      X OR Y
true   true   true
true   false  true
false  true   true
false  false  false

X      NOT X
true   false
false  true
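
To make the matrix idea concrete, here is a minimal sketch in Python (my own illustration, not anything from the post's sources; the tokens T and F are arbitrary opaque symbols). It builds the operations from the matrices above and checks that (a & (a -> b)) -> b comes out as the designated symbol in every row:

  # A matrix of opaque symbols; nothing here presupposes what truth "is".
  from itertools import product

  T, F = "T", "F"            # arbitrary tokens
  VALUES = (T, F)

  def AND(x, y): return T if (x, y) == (T, T) else F
  def OR(x, y):  return F if (x, y) == (F, F) else T
  def NOT(x):    return F if x == T else T
  def IMPLIES(x, y):         # material implication: F only in the (T, F) row
      return F if (x, y) == (T, F) else T

  def is_tautology(formula):
      # "Tautology" = the designated symbol T appears in every row of the matrix.
      return all(formula(a, b) == T for a, b in product(VALUES, repeat=2))

  # Modus ponens, (a & (a -> b)) -> b, read off the matrices:
  modus_ponens = lambda a, b: IMPLIES(AND(a, IMPLIES(a, b)), b)

  print(is_tautology(modus_ponens))   # prints True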

Some other virtues of this definition

Briefly:

  • It recovers the Quinean disquotational sense of truth. I.e., a quoted true statement, interpreted competently, is true.
  • It recovers our ordinary sense of truth (I hinted at this above).
  • It recovers the property of truth that a chain of reasoning is only as strong as its weakest link.

Footnotes:

[1] Or you trusted somebody else who told you they saw a grey elephant. In which case, read the argument as applying to them.

[2] I'm talking as if it were a tower of discrete levels only for expository convenience. I don't think it's all discrete levels; I think it's the usual semi-fluid, semi-defined situation that natural selection creates.

[3] Example borrowed from Ray Jackendoff.

[4] Strictly speaking, we would only do this for presuppositions, but if the speaker mentions "the pink elephant" at some point the reductio is good to go.

27 April 2012

Review: Ray Jackendoff's User's Guide To Thought And Meaning

A User's Guide To Thought And Meaning

Previously

I just finished A User's Guide To Thought And Meaning by Ray Jackendoff, a linguist best known for X-bar theory.

Summary

I wasn't impressed with it. Although he starts off credibly if pedestrianly, the supporting arguments for his main thesis are fatally flawed. I found it annoying as I got further into the book to see him building on a foundation that I considered unproven and wrong.

His main thesis can be summarized by a quote from the last chapter:

What we experience as rational thinking consists of thoughts linked
to language.  The thoughts themselves aren't conscious.

A strange mistake

The foregoing quote leads me to the strangest assumption in the book. He says that our mental tools are exactly our language tools. He does allow at one or two points that visual thinking might qualify too.

That may be true of Ray, but I know for a fact that it's not true of me. I often have the experience of designing some piece of source code in my head, often when I'm either falling asleep or waking up. Then later I go to code it, and I realize that I have to think of good names for the various variables and functions. I hadn't used names before when I handled them mentally because I wasn't handling them by language (as we know it). I wasn't handling them by visual imagery either. Of course I was mentally handling them as concepts.

There are other indicators that we think in concepts: The tip-of-the-tongue experience and words like "Thingamajig" and "whatchamacallit". In the chapter Some phenomena that test the Unconscious Meaning Hypothesis, Ray mentions these but feels that his hypothesis survives them. It's not clear to me why he concludes that.

What is clear to me is that we (at least some of us) think with all sorts of mental tools and natural language is only one of them.

If he meant "language" in a broad sense that includes all possible mental tools, which he never says, it makes his thesis rather meaningless.

Shifting ground

Which brings me to a major problem of the book. Although he proposes that all meaning is unconscious, his support usually goes to show that some meaning (or mental activity) is unconscious. That's not good enough. It's not even surprising; of course foundational mental activity is unconscious.

To be fair, I will relate where he attempts to prove that all meaning is unconscious, from the chapter What's it Like To Be Thinking Rationally? He does this by quoting neuropsychologist Karl Lashley:

No activity of mind is ever conscious.  This sounds like a paradox but
it is nonetheless true.  There are order and arrangement, but there is
no experience of the creation of that order.  I could give numberless
examples, for there is no exception to the rule.  

Unfortunately, Lashley's quote fails to support this; again, he gives examples and takes himself to have proven the general case. Aside from that, Jackendoff simply pronounces his view repeatedly and forcefully. He says "I think this observation is right on target" and he's off.

One is tempted to ask, what about:

  • Consciously deciding what to think about.
  • Introspection
  • Math and logic, where we derive a meaning by consciously manipulating symbols? Jackendoff had talked about what philosophers call the Regression Problem earlier in the chapter, and I think he takes himself to have proven that symbolic logic is unconscious too, but that's silly. He also talks about the other senses of "all" being misleading in syllogisms, but that's a fact about natural language polysemy, not about consciousness.

None of this is asked, but one is left with the impression that all of these "don't count". It makes me want to ask, "What would count? If nothing counts as conscious thought, then you really haven't said anything."

One last thing

In an early chapter, Some Uses of "mean" and "meaning", he tries to define meaning. Frustratingly, he seems unaware of the definition I consider best, which is generally accepted in semiotics:

X means Y just if X is a reliable indication of Y

Essentially all of the disparate examples he gives fall under this definition, either directly or metonymically.

Since the meaning of "meaning" is central to his book, failure to find and use this definition gives one pause.

01 March 2012

Digrasp - The options for representing digraphs with pairs

Digrasp 3

Previously

This is a long-ish answer to John's comment on How are dotted graphs second-class?, where he asks what I have in mind for representing digraphs using pairs.

The options for representing digraphs with pairs

I'm not surprised that it comes across as unclear. I'm deliberately leaving it open which of several possible approaches is "right". ISTM it would be premature to fix on one right now.

As I see it, the options include the following (a rough sketch of the first two appears after the list):

  1. Unlabelled n-ary rooted digraph. Simplest in graph theory, strictest in Kernel: Cars are nodes, cdrs are edges (arcs) and may only point to pairs or nil. With this, there is no way to make dotted graphs or lists, so there is no issue of their standing nor any risk of "deep" conversion to dotted graphs. It loses or alters some functionality, alists in particular.
  2. Labelled binary rooted digraph: More natural in Kernel, but more complex and messier in graph theory. Cars and cdrs are both edges, and are labelled (graph theory wise) as "car" or "cdr". List-processing operations are understood as distinguishing the two labels and expecting a pair in the cdr. They can encounter unexpected dotted ends, causing errors.
  3. Dynamic hybrid: Essentially as now. Dottedness can be checked for, much like with `proper-list?' but would also be checkable recursively. There's risk of "deep" conversion from one to the other; list-processing operations may raise error.
  4. Static hybrid: A type similar to pair (undottable-pair) can only contain unlabelled n-ary digraphs, recursively. List operations require that type and always succeed on it. There's some way to structurally copy conformant "classic" pair structures to undottable-pair structures.
  5. Static hybrid II: As above, but an undottable-pair may hold a classic pair in its car but not its cdr, and that's understood as not part of the digraph.
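
To make options 1 and 2 more concrete, here is a rough sketch in Python (the class and function names are mine and purely illustrative; Kernel itself is not Python and does not use these names):

  # Option 1: a strict pair whose cdr may only be another pair or nil,
  # so dotted ends simply cannot be constructed.
  class StrictPair:
      def __init__(self, car, cdr=None):
          assert cdr is None or isinstance(cdr, StrictPair), "cdr must be a pair or nil"
          self.car, self.cdr = car, cdr

  # Option 2: the cdr is just another labelled edge and may hold anything;
  # list operations must check for dotted ends and may raise errors.
  class LoosePair:
      def __init__(self, car, cdr):
          self.car, self.cdr = car, cdr

  def loose_to_list(p):
      # Walk the cdr chain; error on a dotted end (a cdr that is neither a pair nor nil).
      # (Cycle detection is omitted; this is only a sketch.)
      items = []
      while isinstance(p, LoosePair):
          items.append(p.car)
          p = p.cdr
      if p is not None:
          raise ValueError("dotted end: %r" % (p,))
      return items

  print(loose_to_list(LoosePair(1, LoosePair(2, None))))   # [1, 2]
  # loose_to_list(LoosePair(1, 2)) would raise: dotted end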

And there's environments / labelled digraphs

By DIGRASP, I also mean fully labelled digraphs in which the nodes are environments and the labels are symbols. But they have little to do with the list-processing combiners.

27 February 2012

How are dotted graphs second class?

Digrasp

Previously

I said that dotted graphs seem to be second class objects and John asked me to elaborate.

How are dotted graphs second class?

A number of combiners in the spec accept cyclic but not dotted lists. These are:

  • All the type predicates
  • map and for-each
  • list-neighbors
  • append and append!
  • filter
  • reduce
  • "Constructably circular" combiners like $sequence

So they accept any undotted graph, but not general dotted graphs. This occurs often enough to make dotted graphs seem second-class.

Could it be otherwise?

The "no way" cases

For some combiners I think there is no sane alternative, like `pair?' and the appends.

The "too painful" cases

For others, like filter or list-neighbors, the dotted end could have been treated like an item, but that seems klugey and irregular, and they can't do anything sane with a "unary dotted list", i.e. a non-list.

$sequence etc seem to belong here.

map and for-each

For map and for-each, dotted lists at the top level have the same problem as above, but ISTM "secondary" dotted lists and lists of varying length could work.

Those could be accommodated by passing another combiner argument (proc2) that, when any list runs out, is given the remaining tails (arranged isomorphically to the original arguments), and whose return value is used as the tail of the return list. In other words, map over a "rectangle" of list-of-lists and let proc2 work on the irregular overrun.

The existing behavior could be recaptured by passing a proc2 that, if it gets all nils, returns nil, and otherwise raises error. Other useful behaviors seem possible, such as continuing with default arguments or governing the length of the result by the shortest list.
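
Here is a sketch of that idea in Python (proc2 is the name used above; everything else, including the function name, is my own invention and only approximates the pairs-and-tails picture with Python lists):

  def map_with_overrun(proc, proc2, *lists):
      # Map proc over the "rectangular" part, where every list still has an item.
      result = []
      tails = [list(l) for l in lists]
      while all(tails):
          result.append(proc(*[t[0] for t in tails]))
          tails = [t[1:] for t in tails]
      # Hand the ragged leftover tails to proc2; its return becomes the tail of the result.
      return result + proc2(*tails)

  # Recovers the existing behavior: error unless every list ran out together.
  def strict_tail(*tails):
      if any(tails):
          raise ValueError("lists of unequal length")
      return []

  # Another behavior mentioned above: let the shortest list govern the result.
  shortest_tail = lambda *tails: []

  print(map_with_overrun(lambda a, b: a + b, shortest_tail, [1, 2, 3], [10, 20]))
  # prints [11, 22]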

Reduce

Reduce puzzles me. After a cyclic list's cycle is collapsed to a single item, the result resembles a dotted tail, and that is legal. Does that imply that a dotted list should be able to shortcut to that stage?

25 February 2012

Digrasp

Previously

I have often blogged about Kernel, John Shutt's Scheme-like language.

Lisp becomes Digrasp?

One interesting thing about Kernel is that it treats pairs rather than lists as fundamental. Consequently, digraphs constructed from pairs have a certain fundamental status too. Most operations in Kernel allow arbitrary digraphs if they allow pairs. OK, dotted graphs seem to be second class objects. But as long as every cdr points to a pair or nil, you can pass it almost anywhere that accepts a pair.
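
Here is a tiny sketch in Python (Pair and nil are my stand-ins, not Kernel's) of why pair structure is already a directed graph: two lists can share a tail, giving one node two incoming edges, something plain list notation hides:

  class Pair:
      def __init__(self, car, cdr):
          self.car, self.cdr = car, cdr

  nil = None
  shared = Pair("c", nil)              # one tail reachable from two places
  xs = Pair("a", Pair("b", shared))    # reads as (a b c)
  ys = Pair("x", shared)               # reads as (x c)

  print(xs.cdr.cdr is ys.cdr)          # True: a shared node, i.e. a digraph, not a tree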

So rather than LISt Processing, it's like DIrected GRAph ProceSsing. OK, the acronym's not perfect, but it sounds better than DIGRAP and echoes LISP.

24 February 2012

Review Beginning Of Infinity 3

Been busy

I've been busy adding a major feature to Rosegarden, so I've let this go for a while. But I fixed the last known bug today, so I may already be done (or not).

Previously

So now that I have a little time again, this has been jangling around in my mind. Patchwork Zombie compared hard-to-vary to peaks on a fitness landscape, in order to make the concept more obvious.

How much is hard-to-vary like a fitness landscape?

A pointy landscape is definitely part of the picture. The layout of the landscape corresponds in the familiar way to the dimensions of variation.

But it's not a fitness landscape, because hard-to-vary is itself the fitness condition. Or to be tiresomely pedantic, Deutsch appeals to it as being the relevant fitness condition on various topics. So height can't also be the fitness condition.

That much I'm sure of. Now comes the part where I have to relate what he "surely must have meant". ISTM that height on the landscape corresponds to some perceptual dimension. Sharp peaks which fall off very steeply are hard to vary and rounded peaks aren't.

And I bet you noticed, where I said "some perceptual dimension", that there wasn't just one perceptual dimension in the previous posts. Right. A landscape could have many height dimensions / perceptual dimensions. Steepness on all of them would count; presumably it's something like the norm of the gradient.

Deutsch's motivating example

I'll relate how Deutsch introduced hard-to-vary, which may make it clearer.

He initially talks about hard-to-vary by comparing two ways of copying things. Both are like "telephone", the children's game where one person tells a secret to the next, who tells it to the next, to the next, and the last person tells it aloud, and you see how much it has changed.

Analog
Each person sees a picture of a Chinese junk, draws it, and then shows that drawing to the next person. Every generation of copy is a little less faithful to the original. Probably no copy is very much worse than the previous one, but the result at the end scarcely resembles the picture at the start of the chain.
Digital
Origami (paper-folding). Each person is shown how to fold a Chinese junk. If an intermediate guy makes a sloppy copy, the next guy may still understand what he was trying to do; his copy won't inherit the sloppiness. Or the next guy may fail to understand the intent, and then his copy will not be much like the original at all, and everyone further down the line will inherit his mistake. Every generation of copy is either basically the same as the original or very wrong.

The "digital" copying, Deutsch says, is the one that's hard to vary. Variations either disappear or they change the design into some grossly different design.