20 September 2010

Why not use stow for a distro?

Why not base a distro on stow?

What Stow is

Stow is a Perl package by Bob Glickstein that helps install packages cleanly. More cleanly than you might think possible, if you're familiar with traditional installation. It works like this:

  • You install the package entirely inside one directory, usually a subdirectory of /usr/local/stow. No part of it goes into /bin or /usr/bin or /usr/doc etc.; everything lives in that one directory.
    • We'll say it's in /usr/local/stow/foo-1.0
  • You command:
    cd /usr/local/stow/ 
    stow foo-1.0
  • That makes symlinks from /usr/doc, /usr/bin, etc. into the stow/foo-1.0 directory tree.
  • Now the package is available just as if it had been installed.
  • Want it gone? Just
    stow -D foo-1.0

This is neat in every sense of the word. It can manage multiple versions of a package neatly too.
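The symlinking idea is simple enough to sketch in a few lines. Here's a naive Python simulation of it (this is not real stow, which handles tree folding, conflicts, and much else; all paths and names here are made up for illustration):

```python
import os
import tempfile

def stow(stow_dir, pkg, target):
    """Symlink every file under stow_dir/pkg into target (naive sketch)."""
    pkg_root = os.path.join(stow_dir, pkg)
    for dirpath, _dirs, files in os.walk(pkg_root):
        rel = os.path.relpath(dirpath, pkg_root)
        dest_dir = target if rel == "." else os.path.join(target, rel)
        os.makedirs(dest_dir, exist_ok=True)
        for name in files:
            os.symlink(os.path.join(dirpath, name), os.path.join(dest_dir, name))

def unstow(stow_dir, pkg, target):
    """Remove every symlink in target that points into stow_dir/pkg."""
    pkg_root = os.path.join(stow_dir, pkg)
    for dirpath, _dirs, files in os.walk(target):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.islink(path) and os.readlink(path).startswith(pkg_root):
                os.remove(path)

# Demo: a fake /usr/local under a temp directory.
root = tempfile.mkdtemp()
stow_dir = os.path.join(root, "stow")
target = os.path.join(root, "local")
os.makedirs(os.path.join(stow_dir, "foo-1.0", "bin"))
with open(os.path.join(stow_dir, "foo-1.0", "bin", "foo"), "w") as f:
    f.write("#!/bin/sh\n")

stow(stow_dir, "foo-1.0", target)
print(os.path.islink(os.path.join(target, "bin", "foo")))   # True

unstow(stow_dir, "foo-1.0", target)
print(os.path.exists(os.path.join(target, "bin", "foo")))   # False
```

Deinstallation is just deleting the symlinks that point into the package's tree, which is why stow -D is so clean: the package's own files were never scattered in the first place.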

Why they fit together

One task that a distro such as Debian or Red Hat puts a lot of work into is package management. New packages typically put files in many places such as /usr/bin, and a distro's package manager has to track where they go so that it knows what to remove when uninstalling or upgrading. It has to use various tricks, such as renaming old versions of config files.

But stow's way is cleaner. Why not do it that way?

Also, traditional distros aren't too happy with source distributions. They can compile them when they have a source package in their own format, but that mostly misses the point.

Mostly you compile in order to get the latest (stable or bleeding edge) version of something. For instance, I compiled GEDA today. I wanted the latest because the version in the Debian lenny distro doesn't handle everything that gschem outputs.

There was no Debian package of the latest geda-gaf source, and I didn't really expect there to be. So I downloaded the latest and compiled it. Now Debian/dpkg/aptitude not only didn't help me, it was actually in conflict with what I was doing. Of course dpkg then couldn't manage the prerequisites for me, so I had to do that chore manually. Worse, the distro has its own idea of what is provided and what isn't, and what I'm compiling doesn't count in its eyes.

I didn't want to install on top of a version that aptitude installed, because that would confuse it. So I told aptitude to remove the package. That breaks dependencies, and there is no reasonable way to tell aptitude that what I just compiled satisfies the dependency - to do that I'd have to create a Debian package for it, and even then aptitude would treat it as a suspicious local package. So I had to remove easyspice, which I didn't want to. I'll probably have to fetch and compile it - even though I have it. Debian package management thinks it knows better than me.

And it occurred to me that it didn't have to be that way. The biggest reason why the stuff I compiled didn't have equal standing was the need to manage where packages put their files. It can't easily mix distro'd packages and compiled source because there's nothing to tell it which files the source "owns." Stow does that cleanly, and could do so even for a mix of distro'd packages and compiled source.


And would it be so hard to make autoconf's ./configure tell a package manager about missing prerequisites? Most of them were just the *-dev parts of packages I already had. I'm not too familiar with autoconf, but the only fundamental problem I see is a lack of a common agreed-on format.

Downside: Stow needs to bootstrap

Obviously this can't work for every package, since stow itself depends on a number of packages including a kernel and a Perl interpreter. Aside from Perl's intrinsic yukkiness, that also means that the Perl interpreter and its sizable standard libraries must be available.

There are, happily, a number of variants or offshoots: Graft, lnlocal, Reflect, Sencap, Toast. However they mostly seem to be in Perl as well. One exception is Reflect, which requires only bash and coreutils. Unfortunately, Reflect appears to be abandoned.

But they don't agree on install-time config management

Stow won't replace install-time config management, though. Compiling from source typically doesn't have that at all; things are configged at build time with ./configure.

That's a problem. Distro'd packages and compiled source just don't agree on when to config.

Stow should have been done years ago

Years back, I thought that package management via symlinks and dedicated trees would be a neat idea. I never did anything about it. Bob Glickstein did. He's also done a number of other neat forward-looking things, including sregex for emacs.

17 September 2010

Sweet Dreams

Thoughts On Sweet Dreams

Sweet Dreams: Philosophical Obstacles to a Science of Consciousness is a 2005 book by philosopher Daniel Dennett.


Dennett's understanding of consciousness

Dennett builds on his earlier ideas about consciousness, in particular the Multiple Drafts Model. He argues for a definition of consciousness as analogous to fame1. Thoughts that we are aware of are like famous people, while thoughts that we don't notice are like unknown wanna-bes. Here I say "thoughts", but that's just my term for convenience and brevity; Dennett makes it clearer what he means but I can't sum it up in a few words.

But don't imagine little mental homunculi as fans of the "famous" thoughts. The analogy doesn't go as far as that. The "audience" are simple mental modules. They may be made of even simpler modules. At the bottom, it's just tiny mental robots.

He says that an important point of the analogy is that what makes thoughts part of one's consciousness or not are their sequelae2. He argues this by asking us to imagine a situation where an up-and-coming author was about to hit it big - new book coming out with much publicity, big TV interviews lined up, maybe even already taped - and on the day that he would have gotten famous, some natural disaster occurred and the news was all about that, eclipsing the hopeful author. That wouldn't be fame, even though fame would be the normal consequence, because the normal sequelae of fame did not occur. Similarly, Dennett argues, thoughts that are otherwise the same as normal conscious thoughts but don't become mentally "famous" - say because one was distracted at the time - are not conscious because they lack the sequelae that would normally make them conscious.

Mental rehearsal as uniquely human

Dennett also adds some thoughts about mental rehearsal, "our habit of immediately reviewing or rehearsing whatever grabs our attention strongly". He speculates that mental rehearsal:

  • may be what makes a conscious thought stay conscious rather than lapsing into obscurity.
  • may be a uniquely human activity (vs animals)
  • its lack may account for infantile amnesia, ie why we don't remember our very early years.

My thoughts

So are computers conscious?

Following Dennett's definition leads me to the surprising conclusion that not only are computers conscious, they are super-conscious. Computer behavior not only fits the definition, it fits it far better than ours does.

Computers can, if suitably instructed, call up any piece of data in their RAM and send it essentially anywhere in themselves: to the CPU, to the peripherals, to the larger world via the net. (Add many "etc"s here to cover the various possibilities) They do the echoes/reverberating/recollectability thing much more perfectly than we do.

Maybe it makes more sense to say that computers are extremely conscious, just not at all self-willed.

What if fame and consciousness are really the same?

As I said above, Dennett makes it clear that his fame analogy is not literal; "famous" thoughts are appreciated by mechanical mental modules, not by an audience of tiny people. But of course at the sub-human granularity he's talking about, there couldn't be a human audience. At the coarser granularity of human communication, that doesn't apply.

What if we take the fame = consciousness analogy as actually correct?

  • Consider famous thoughts - perhaps the phrases of Shakespeare or the equations of Newton. Do their continued sequelae make their thinkers still conscious?

    I'd say no. Obviously the thoughts are part of some consciousnesses, but not part of Newton's no-longer-functioning consciousness.

  • Contrariwise, consider a thought that never has public sequelae - a thought that never gets out into the world at all. Perhaps the thinker dies without ever communicating it by word or by deed. So those thoughts have no public sequelae, which we're assuming are of a piece with mental sequelae. Are those thoughts conscious or not?

    It misses the point to answer, "Of course! If you had asked him, he would have said so, and been miffed at you for doubting it". We just said that that never happened, and the real3, actual sequelae are at issue, not hypothetical ones.

  • If at some dark point in the future humankind is all destroyed, then there will have been no ultimate sequelae to any fame. By our hypothesis, neither will there have been any to our thoughts. Does that imply that nobody was ever conscious?

    In my view, yes in the large cosmic view, but we observers live within the smaller view. The (ultimately doomed) culture surrounds us and we can see it just fine. We may reasonably answer no, while still knowing that some day it will all be gone.

On being misunderstood

Daniel Dennett writes in a very gentle style. Phrases like "These idiots did not understand what I was saying" are not to be found in his books. Nevertheless, I get the impression that his patience is sometimes tried by the misunderstandings he is responding to.


1 He suggests "influence" as another inexact word for what he is describing, but does not expand on that.

2 Again, I can't do justice to sequelae in a few words, but basically he means "consequences", in a slightly technical philosopher's sense. See here for more.

3 This is not to contradict the many-worlds interpretation. If this concerns you, then just read "real" as "in the same branch as the observer".

New edition of Grieg's Lyric Pieces

I bought Lyric Pieces. It's the sheet music to all of Grieg's Lyric Pieces. There have been editions of this before. I had borrowed the Bertha Feiring Tapper edition before, but at c. $80 for one book, I couldn't justify buying it. I was happy when this new edition came out at a reasonable price.

Also, the fingerings here are cleaner in a few places. I have in mind Vogelein, where Tapper had some strange fingerings that may have been mistakes.

14 September 2010

Alternatives to No Mind Hair

Hanson says: Gods Near or No Mind Hair

Robin Hanson speculates that U-evolution (aka fecund universes) implies that either:

  • Imprinting minds on baby universes is impossible.
  • One such mind is imprinted on our universe and probably could be found.

In this post I propose some other possibilities.

Let's calculate the right probabilities

But first I want to address his calculation.

A self-reproducing universe would have a chance p of evolving intelligence, which would then birth an expected number N of similar baby universes, such that p*N > 1.

This calculation isn't calculating the right thing. It gives us an estimated number of mindful universes for a given number of U-generations. That tells us nothing. Under the U-evolution assumption, there are infinitely many universes that we might have been in.

What we need to estimate is the likelihood that a universe selected at random is mindful or mindless. We could model it with an infinitely long Markov chain. So either:

  • The MINDFUL state is persistent and the MINDLESS state is transient, so the probability of picking MINDFUL is 1.0
  • The MINDLESS state is persistent and the MINDFUL state is transient, so the probability of picking MINDFUL is 0.0
  • Both MINDLESS and MINDFUL transition to each other with positive probability. Then the probability of picking MINDFUL depends on the transition probabilities.
  • Or MINDLESS never transitions to MINDFUL - but we know that's not so.
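To illustrate the third case with a toy calculation (the transition probabilities here are invented, purely for shape): for a two-state chain, the long-run fraction of MINDFUL universes is just the chain's stationary probability of MINDFUL.

```python
def stationary(p_less_to_ful, p_ful_to_less):
    """Stationary probability of MINDFUL in a 2-state Markov chain.

    p_less_to_ful = P(MINDLESS -> MINDFUL), p_ful_to_less = P(MINDFUL -> MINDLESS).
    Solves pi_ful * p_ful_to_less = pi_less * p_less_to_ful, with pi_ful + pi_less = 1.
    """
    return p_less_to_ful / (p_less_to_ful + p_ful_to_less)

# If mindful universes rarely beget mindful children (see next section) and
# mindless universes rarely evolve imprinters, MINDFUL stays rare in the long run:
print(round(stationary(0.01, 0.9), 3))   # 0.011
```

The point is that the answer comes from the transition probabilities, not from counting mindful universes over some number of U-generations.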

Mindless because mindless black holes are useful

Would the U-children of a mindful universe all be mindful universes? Or even mostly?

Though of course I can't estimate the probability with any real confidence, if absolutely forced to answer, I would say probably not. That would imply that an advanced civilization would imprint on nearly every black hole it made. But black holes are probably very useful to them for other purposes. Much has been written on the potential uses of black holes; I won't repeat it here.

In other words, the transition probability MINDFUL -> MINDFUL is probably fairly low. So I would have to estimate the probability that we live in a mind-imprinted universe as fairly low.


Many years ago on rec.arts.sf.science I argued a similar issue with regard to the Fermi Paradox1. The issue was not the Fermi Paradox itself, but what a certain variant of the Anthropic Principle2 had to say about it.

The Anthropic Principle has often been used to answer philosophical questions about probable universes by ruling out the uninhabitable ones, thus making our own habitable universe seem more probable, even if a priori it is calculated to be improbable3.

I suggested that the question "Are we alone in the universe?" was similar to the Anthropic question, and like the Anthropic question, implied an affirmative answer. I speculated that if advanced extraterrestrials had visited us, we would not be asking whether we were alone in the universe, not even to proceed to answer the question in the negative. They would almost surely dominate us so thoroughly that we wouldn't have a separate identity to ask about. We would ask "Are humans-and-aliens alone in the universe?" and we would still answer yes.

For comparison, I don't think that dogs would ask "Are dogs alone in the universe?", even if they could formulate the thought. They'd ask "Are dogs-and-people alone in the universe?" Another comparison: I don't seriously ask "Am I myself alone in the universe?". It'd be a silly question! I only seriously ask "alone?" about the largest extension of "us" that I know of, all humanity.

By this argument, it is not very surprising that we find the largest extension of "us" to be alone in the universe. It could hardly be otherwise.

First movers and families of mindful universes

Other people speculated that, given the Anthropic Principle, we should expect to be in a universe with nearly as many observers as possible. But the argument above provides a counterargument: Almost no matter our state of development, we'll find that "we" are alone. On that interpretation, it's not that surprising that "we" should be in some intermediate stage of growth and appear likely to be the first movers in the next stage of growth.

In a similar vein, even if there are familial chains of mindful universes, perhaps it is still not surprising that we find ourselves to be potentially first movers.

Or maybe it's us

Suppose that the universe was in fact imprinted and will one day become mindful. What form should we expect the mechanics of it to take?

Of course it's very hard to say, but one answer builds on the idea that "Ontogeny Recapitulates Phylogeny". That is, the way an organism develops loosely repeats the way it evolved in the first place. For instance, at one point in a fetus' development, it has gills. Yes, you and I had gills. What for? They're not functional, so why do we grow them and then lose them? Because we (and all vertebrates) evolved from fish.

Why is it still like that? Because the only way that our DNA knows how to build a human is by doing the tried-and-true way that worked before, with small variations. Apparently all the variations that didn't make gills4 also omitted something important, so the gill-making stays.

It could well be that our most distant U-ancestor-inhabitants came to be in a way broadly similar to how we did: Evolving chemically from non-life in some stable situation fed by a stellar energy source. I'll call that idea Evolving-on-planets. If so, "Ontogeny Recapitulates Phylogeny" implies that when they imprinted mindfulness on our universe, they would use evolving-on-planets. They'd make, more or less, us.

This is not to suggest that our U-ancestor-inhabitants were short-sighted like DNA is. Even if they could design other methods, they might still prefer mindfulness thru evolving-on-planets:

  • Having no feedback from the black holes they so imprinted, they must design "blindly". Maybe they would trust only the most familiar way of coming to mindfulness, the same one that spawned them.
  • Maybe they would have a sentimental attachment to evolving-on-planets.
  • Maybe evolving-on-planets is a technologically appealing way for a U-imprinter to make a universe mindful.


1 The Fermi Paradox is: Since there are so many stars in so many galaxies, and each has some probability of bearing life, where is everybody?

2 The Anthropic Principle states that we can only find ourselves in a habitable universe, because otherwise who would be there to notice its uninhabitability?

3 But beware the Boltzmann Brains answer. The Anthropic Principle cannot be used to overcome arbitrarily high adverse probabilities. If the adverse probability is too steep, it becomes more likely that an observer would find himself to be an improbable thing in a probable but inhospitable universe (A Boltzmann Brain) than a normal thing in an improbable but hospitable universe.

4 Or that reduced gill-making to less than it is now, to be very picky.

11 September 2010

Scheme setters

Where looking for a generalized set! led me

Objections I read about SRFI-17

It "associates properties with Scheme symbols"

  • It wants to associate properties with Scheme symbols. This is considered not in the spirit of Scheme
  • In SRFI-17, setters pertain to symbols rather than procedures. That can cause problems when a symbol is rebound.

But that turns out not to be the case. It takes some careful reading to discover it, and reading the reference implementation helps. SRFI-17 associates setters to procs, not to symbols.

That bears repeating: what it associates a setter to is really a proc that you can pass to map etc. It is as if every proc becomes a double-proc of getter-and-setter form.

Its syntax

Some have proposed that set! should only apply to a symbol; that a generalized set should have some other name. I don't find the argument convincing and I don't agree. ISTM that when set! has a symbol as its first argument, that's a natural case of the generalized set!. So I prefer keeping the SRFI-17 syntax:

;;More general set!
(set! (proc args) value)
;;Fully general set!
(set! form value)
;;Set! as we know it:
(set! var-name value)


So I think SRFI-17 is just about right.
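The proc-not-symbol point is easy to illustrate outside Scheme. Here's a rough Python analogue (all names here are mine, and this is only a simulation of the idea, not SRFI-17 itself): the setter hangs off the getter proc, so a generalized set dispatches on the proc, never on a symbol.

```python
def setter(proc, new_setter=None):
    """Get, or attach, the setter associated with a procedure (SRFI-17 style)."""
    if new_setter is not None:
        proc.setter = new_setter
    return getattr(proc, "setter", None)

def generalized_set(proc, args, value):
    """Like (set! (proc . args) value): delegate to proc's associated setter."""
    s = setter(proc)
    if s is None:
        raise TypeError("no setter for %r" % proc)
    return s(*args, value)

# Example: car/set-car! on a mutable pair, represented here as a list.
def car(pair):
    return pair[0]

def set_car(pair, value):
    pair[0] = value

setter(car, set_car)

pair = [1, 2]
generalized_set(car, (pair,), 99)   # like (set! (car pair) 99)
print(pair)                          # [99, 2]
```

Rebinding the name `car` wouldn't disturb anything: the setter travels with the proc object, which is exactly the property the objections above missed.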

Andy Gaynor's proposal

I also looked at Andy Gaynor's 2000 proposal, http://srfi.schemers.org/srfi-17/mail-archive/msg00077.html

But IMO it has some drawbacks of its own:

  • I don't like the multiple arguments facility. `set!' shouldn't take multiple arguments and figure out what to do with them.
  • In some ways it does too much. I'd prefer no `define-setter'. I'd rather define getter and setter together. If a pre-existing lambda's setter is to be defined, it would seem better to use SRFI-17's proposal of:
    (set! (setter x) proc).
  • I'd also prefer no support for directly calling the setter. It can be done but shouldn't be part of the implementation. Setters are all about using a parallel expression for destructuring what is to be set. Using a setter explicitly is just calling a function. Why support that specially?

What else might have a setter?

One interesting candidate for having a setter is `assoc'. There are different interpretations:

  • If the associated item is not found, add it; otherwise mutate it.
  • If the associated item is not found, error; otherwise mutate it. So the set of keys is immutable.
  • If the associated item is not found, add it; otherwise error. So the existing mappings are immutable.
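The three interpretations can be sketched in Python (using a dict where Scheme would use an alist; the function names are mine):

```python
def set_assoc_upsert(table, key, value):
    """Not found: add it; otherwise mutate it."""
    table[key] = value

def set_assoc_fixed_keys(table, key, value):
    """Not found: error; otherwise mutate it. The set of keys is immutable."""
    if key not in table:
        raise KeyError(key)
    table[key] = value

def set_assoc_write_once(table, key, value):
    """Found: error; otherwise add it. Existing mappings are immutable."""
    if key in table:
        raise KeyError("already bound: %r" % (key,))
    table[key] = value
```

Which one `(set! (assoc key alist) value)` should mean is exactly the design question; each is reasonable for a different use of the table.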

More fields

If we're going to allow these double-lambdas to have two fields, why not more? Some other possibilities:

  • "pusher" and "popper", useful when the underlying type is a container.
  • docstring, a string or a function which yields documentation.
  • "accessor-doc", a function that describes the accessor, suitable for combining to tell where a thing is located.
  • "matcher" - allowing constructor functions to have counterparts for pattern-matching.
    • It would be nice to be able to use this for formals in arglists.
  • "type" of the proc
    • Min/max arity
    • Argument types
    • Promises about it
    • Other type information
  • "evaluation-type"
    • normal
    • macro
    • built-in
  • Optimization information
    • "merge", which tries to advantageously merge a given call to the function with other calls immediately before or after it.
      • How: It is passed this call and preceding calls, returns a merge or #f.
      • A smart merge-manager may know what functions it can hop over.
    • Specializations. List of the available specializations.
    • Hidden read/clobbered arguments.
    • compiler-macro as in Common Lisp
    • Profiling data

Lambda as an extensible type

So this extended lambda would be an object type. Since the type will be so extensible, we'll want a base type that provides this minimally and can be extended. So that base type must be recognized by the eval loop as applyable.
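As a sketch of what that base type might look like (in Python rather than Scheme; the class and field names are invented): a callable object, recognizable as applyable, carrying an open-ended table of extra fields.

```python
class XLambda:
    """A callable with an extensible table of extra fields (setter, docstring, ...)."""
    def __init__(self, proc, **fields):
        self.proc = proc
        self.fields = dict(fields)   # open-ended: matcher, arity, merge, ...

    def __call__(self, *args):
        # The "eval loop" only needs to know XLambda is applyable.
        return self.proc(*args)

first = XLambda(lambda pair: pair[0],
                setter=lambda pair, v: pair.__setitem__(0, v),
                docstring="Return the first element.")

p = [1, 2]
print(first(p))                  # 1
first.fields["setter"](p, 99)
print(p)                         # [99, 2]
```

Nothing here is specific to setters: pushers, matchers, arity declarations, or optimization hints would just be more entries in the table.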

Scheme generalized letting

Generalized letting

Imagine for a moment that Scheme bindings could bind not just values but all sorts of properties that a field might have. Think CLOS. Here are some of the possibilities:

(let-plus name
      (
       ;;Equivalent to an ordinary binding
       (id1 'value value-1)
       ;;An immutable object
       (id2 'value value-2 'setter #f)
       ;;An immutable object with a getter.
       (id3 'value value-3 'getter get-id3 'setter #f)
       ;;A "called" object, with both getter and setter.
       (id4 'value value-4 'getter get-id4 'setter set-id4)
       ;;An uninitialized "called" object
       (id5 'getter get-id5 'setter set-id5)
       ;;A type-checked object.
       (id6 'value value-6 'satisfies my-type-pred?))
   (list id1 id2 id3 id4 id5 id6)
   (set! id1 12)
   (set! id2 12)
   (set! id3 12)
   (set! id4 12)
   (set! id5 12)
   (set! id6 12))

Equivalent to

(let name ((id1 value-1)
           (id2 value-2)
           (id3 value-3)
           (id4 value-4)
           (id6 value-6))
   (list id1 id2
      (get-id3 id3)
      (get-id4 id4)
      (error "Uninitialized id5")
      id6)
   (set! id1 12)
   (error "Can't set id2")
   (error "Can't set id3")
   (set! id4 (set-id4 12))
   (set! id5 (set-id5 12))
   (let ((new-val 12))
      (if (my-type-pred? new-val)
         (set! id6 new-val)
         (error "Wrong type"))))

What this provides

This mechanism could provide:

  • immutable fields
  • effectively constant environments
  • enforced type restrictions
  • succinctly enforced controlled access to fields (controlled access can be succinctly provided now, but not succinctly enforced)

How this can be minimal and extensible

Looking at the above, you can see that there are already a fair number of fields that might be set, and one would like the user to be able to extend it further. Surely we don't want to build anything so complex into the core language.

So what is a modest construct that can support this? A mere syntactic transformation won't do: it won't apply everywhere the current environment is used, which we need.

So I propose a primitive binding construct that knows a getter function. That is, when the associated symbol is evaluated, the value is the return value of the getter function when passed the (actual) value associated with that symbol.

In SRFI-17, any procedure can have a setter, that is, an associated procedure that sets the object's value; set! will use it. So controlled setters would just fall out.

(controlled-let my-guard
   ((a 12)(b 144))
   (list a b))

is equivalent to:

(let
   ((a 12)(b 144))
   (list (my-guard a) (my-guard b)))

In order to implement the generalized binding constructs on top of this, one would:

  • Predetermine a suitable object type T
  • Predetermine a getter that expects its argument to be of type T
    • Similarly for the setter, if provided.
  • Write a macro that:
    • Takes a list of generalized binding specs and a body
    • Interprets each binding spec as a symbol plus the arguments to a constructor for an object of type T.
    • Uses controlled-let, giving it:
      • the predetermined getter
      • the zip of the bound symbols and the list of constructor forms
      • the body argument
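The steps above can be sketched in Python (in place of Scheme macros; the Cell type and its field names are invented for illustration): each binding is an object of "type T" whose getter is applied on every read and whose setter, if any, guards writes.

```python
class Cell:
    """Hypothetical binding object (the 'type T' above)."""
    def __init__(self, value=None, getter=None, setter=True, satisfies=None):
        self._value = value
        self._getter = getter          # applied on every read, like controlled-let's guard
        self._setter = setter          # False => immutable; callable => transforms the value
        self._satisfies = satisfies    # optional type predicate

    def get(self):
        return self._getter(self._value) if self._getter else self._value

    def set(self, new_val):
        if self._setter is False:
            raise TypeError("immutable binding")
        if self._satisfies and not self._satisfies(new_val):
            raise TypeError("wrong type")
        self._value = self._setter(new_val) if callable(self._setter) else new_val

id1 = Cell(value=10)                                      # ordinary binding
id2 = Cell(value=20, setter=False)                        # immutable
id6 = Cell(value=30, satisfies=lambda v: isinstance(v, int))  # type-checked

id1.set(12)
print(id1.get())   # 12
```

The macro's job is then only bookkeeping: turn each binding spec into a Cell constructor call and route every read and write through get/set.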

09 September 2010

Windbelts 3

A few days ago I suggested that large-scale windbelts might do more with less by transferring energy between strings, so that an installation would need only one magnet-and-coils generator, or at least fewer. Today, just for fun, I sketched what a large freestanding multi-string windbelt might look like if it used this idea. The circle in the middle would contain the generator.


org2blog now can upload photos conditionally


I added a new interactive function org2blog-dir-add-new-photos, which uploads just the new photos from a given directory.

In doing this, I factored g-client's gphoto-directory-add-photos. Now gphoto-directory-add-photos-x does almost the same thing but takes a test predicate which is called with each photo's filename.

07 September 2010

The King's Singers: Watching the White Wheat (Review)

Watching the White Wheat (Folksongs of the British Isles)

The title song by itself is worth the price of the album. This one has got all the bases covered: It's a beautiful tune, the arrangement is gorgeous, and to nobody's surprise the King's Singers do a fine job with it.

This CD resembles the Annie Laurie CD. It's a mix of folksongs from the British Isles. Some are fast, fun tunes, some slow and soulful. And for some reason, on both CDs the best track is the only one sung in Welsh (the rest are in English).

I liked the rendition of Danny Boy on this CD better than the one on Annie Laurie; they have no other songs in common. The arrangements tend to be less distracting than those on Annie Laurie. One exception: the solos on the first two verses of O Waly, Waly. However, the remaining verses are gorgeous.

On the whole, I like this CD better.

Good tracks:

  • Early One Morning
  • Watching the White Wheat (Bugeilo'r Gwenith Gwyn)
  • Danny Boy (Londonderry Air)
  • O my love is [like] a red, red rose.
  • There's nae luck about the house

Pretty good:

  • O Waly, Waly
  • Migildi Magildi

Windbelts Again

Windbelts Again

Where else to put them

I blogged about windbelts a week ago. I suggested using them on bridges and tall buildings. I completely forgot about power lines. Like bridges and tall buildings, they already have basically all the physical and electrical infrastructure in place.

How to make them do as much with less

One potential obstacle to large-scale deployment of windbelts is that in order to generate a serious amount of electricity, each installation would need not just one string but many strings. Naively done, you'd use one magnet and several coils for each string. This would multiply some of the material requirements linearly.

But there may be a better way. If you're familiar with the acoustics of the piano, you know that the individual notes1 have not one but three strings. The vibration of any of the strings is transmitted to the other two2. This is the source of a piano's sustained resonance, for one thing.

One could use the same principle with windbelts to couple a range of strings together. Then only one of the group of strings would need a magnet and coils, even though power was collected over all the strings.


1 More precisely, most notes. Often below bass F there's just one string, then there's an octave of double strings, and the rest is triple strings.

2 And to the soundboard and the other strings etc.

04 September 2010

Question Forums

Question Forums


In online discussions, people talk past each other. It's a fact of online life. Much more so in discussions about politics or ideology, but it crops up even in friendly discussions.

A few years ago I had an idea about how to fix it - or at least ameliorate it.

The idea

What if there was a forum where in order to answer a post, you had to, well, answer it? That is, a post could formally ask questions. Those who wanted to reply to the post would have to answer the questions first.

Software could enforce this rule. It could be much like online polling. The answers could be displayed associated to the replies, and in other ways.

Potential problems

He asked too many questions!

Another potential means of abuse: Participants who ask, not a modest few questions, but a boatload of questions. After all, they have a captive audience!

Participants might be just indulging themselves, or might use it as a strategy to block replies.

I'd propose a limit to the number of formal questions a post may ask. There might be some flexibility here; the limit might depend on such factors as:

  • How long a participant has been participating
  • Whether he is in good standing.
  • How much "question-juice" he has stored up - but this has its own problems.
  • A strategic tradeoff whereby the replier need only answer any N of the questions. Asking too many questions then just takes some control away from the questioner.
    • "Soft limits" on the above, where the replier who answers fewer than N questions still can reply but their reply is automatically modded down.

He just replies in another thread!

Participants might adopt a strategy of circumventing the mechanism by just replying somewhere else.

They'd have to answer the questions there too, but:

  • Not if they started a fresh thread
  • On answering one post, they might put in their replies to many posts in that single post.
  • Or they might congregate on a thread that asks questions more to their liking.

To some extent that problem contains its own solution. If respondents put their answers in strange places, the posts that they "answer" will appear to go unanswered. This generally is not what posters want.

I'd also propose some limitation on people starting new threads. Most forums have that anyways. I don't see right now how anything can be done about the other two problems.

Figuring out a question's exact flaw is too much work!

In the solutions above, I proposed that there be a spectrum of answer options: the software won't just give one option for "Question is flawed", it will suggest flavors.

But here we have a dilemma. OT1H, we want participants to be specific about what flaws they claim a question has. OTOH, we don't want to require participants to do a lot of work taxonomizing the exact flaw of each flawed question they see.

I think a reasonable balance is achievable, but how can we achieve it? Well, I don't think the answer is to predetermine a particular level of detail. Discussion should be more flexible than that. So:

  • We'd have a taxonomy of question flaw types.
  • Repliers could specify the flaw in as much or as little detail as they chose to.
  • Feedback mechanisms would exist, possibly involving challenges to insufficiently detailed objections.

The flip side of questions that assume

Above, questions that assume too much were considered a problem. But that's only half of the picture. Normal, healthy discourse does assume a lot. But in healthy discourse the presuppositions are already believed by all participants. Nobody is trying to sneak presuppositions in.

And the following scenario would be just wrong:

  • Participant A asserts position X
  • Participant B asks a question that presupposes X
  • Participant A then objects that the question is flawed because it assumes X.

I have some ideas about that which involve structuring questions further, but it's late.

How to carve up answer-space?

Other forms of flawed question

  • Off topic questions
  • Incomprehensible questions
  • Questions that assume too much. This is a slightly more general case of what was discussed above.

Further ideas

I have further ideas, but it's late. I will try to post more another day.

Some theory

Why is this a good idea? That is, not the motivation or the mechanics, but the theory behind it. Why should questions in particular be so great? Why should one sentential mood be favored above the others? Let's look at them:

  • Questions: Questions are naturally interactive. Which is not to say that they are always really used that way - there are rhetorical questions, and browbeating questions, and so forth. But notice, even those work by superficially appearing to be interactive.
  • Statements: Statements are less interactive. They don't naturally leave a place for your interlocutor to respond. That's not to say that people don't answer statements - it happens all the time. It's just to say that statements don't naturally invite it.
  • Imperatives: The imperative mood ("Do this!") could be viewed as interactive too, but in a different way. You're commanding your interlocutor to do something. A group of people telling each other what to do isn't going to improve the quality of communication, nor the tone.

01 September 2010

The paradox of existence preferences

The paradox

  • Really obvious fact: Most people prefer to live. They prefer existing to not existing, and they demonstrate this continuously by not committing suicide.
  • Adam Ozimek argues that if so, "there is a huge market failure whereby the unborn are unable to contract with their potential parents to pay for life".
  • Robin Hanson then proposes (or at least toys with the idea of) solving this market failure, in effect optimizing "who should exist". "Nonexistent people" would in effect pay to exist. He claimed that his proposal was Pareto efficient - that is, it makes everybody better off.
  • Most observers seemed to disagree. I liked Carl Shulman's and Wei Dai's answers, and I felt that Robin's calculation of value ignored a person's interest in their own existence.

But we're left with a paradox. Why doesn't the conclusion follow?

  • Because we're too squeamish to think about it? I hope not, and I doubt that's it. I'm not, and I doubt the overcomingbias readers whose answers I liked are either.
  • Because the proposed solution isn't actually Pareto efficient? Part of the discussion focused on whether it was. I think it's fair to say that under the original assumptions it is, but under other reasonable assumptions it isn't.

"No nontransitive preferences" doesn't extend this far

"Nontransitive preferences" are when your preferences contradict each other. For instance, if you prefer an apple to a banana, an orange to an apple, and a banana to an orange, you have nontransitive preferences. An unscrupulous grocer could trade you an apple for a banana plus $.05, an orange for an apple plus $.05, and a banana for an orange plus $.05, and make $.15 from you without improving your situation in any way.
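The grocer's trick can be worked through mechanically. This is just the arithmetic from the paragraph above in code form; the nickel fee and the fruit cycle are straight from the example.

```python
# The nontransitive cycle from the example: apple > banana,
# orange > apple, banana > orange.
prefers = {("apple", "banana"), ("orange", "apple"), ("banana", "orange")}

def trade(holding, offered, cash, fee=0.05):
    # You accept a trade (and pay the grocer's fee) whenever you
    # prefer what's offered to what you hold.
    if (offered, holding) in prefers:
        return offered, cash - fee
    return holding, cash

holding, cash = "banana", 1.00
for offered in ["apple", "orange", "banana"]:
    holding, cash = trade(holding, offered, cash)

# After three trades you hold a banana again, fifteen cents poorer.
```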

For most purposes, nontransitive preferences are irrational. But maybe this doesn't cover people's preference to exist. If you'll indulge some ontological looseness for a moment, maybe it's entirely reasonable for "nonexistent people" to have little or no preference to exist, but for existing people to have a strong preference to continue to exist.

One could say that what counts is state transitions rather than states. That is, people prefer not to change state from existent to nonexistent (or as we uneducated masses say, to "die") but have no strong preference whether to change state from nonexistent to existent.

I don't think this holds up well. Most people would prefer that, if they die on the operating table, they be brought back to life if possible. There are exceptions, but they seem mostly due to poor quality of life from poor health.

One might counter by claiming that a person who dies on the operating table doesn't really cease to exist, since medicine can resuscitate him. I don't think that rebuttal works. Even if a person could entirely cease to exist and be brought back - think Star Trek transporters - I expect people would again prefer re-existing.

So we really are talking about state preferences, not transition preferences.

Comparison to the Ontological Argument

I think a better answer to the paradox is that it has essentially the same flaw as in the ontological argument. Existence is not a predicate. And existence isn't an asset that one can have in one's portfolio. Nobody can say "I have some assets but I don't have existence".

So the concept of the "nonexistent people" preferring to trade some of their assets for the asset "existence" is incoherent. It seems to me this undercuts the reasoning behind Hanson's proposal, and thus solves the paradox.

org2blog again

While I'm thinking about it

Just used my org2blog to upload the last post. Including all the pictures exercised it fairly well. While it's fresh in my mind I have a few comments:

How to use it with pictures

  • (Of course) Write a file that includes pictures.
    • Capture the pictures. It's convenient to put them in a sub-directory of your blogging directory.
    • Use C-u C-c C-l to make links to them.
  • Upload the pictures
    • If you have a Blogger account, you probably have a Picasa account; you certainly do if you have ever put a picture in your blog. If you don't, you'll have to set one up.
    • Call gphoto-albums twice. The first time, g-client will get an error due to a missing entity; that's really an upstream problem, so I haven't looked into it.
      • If my acute shortage of round tuits ends, I'll make gphoto-choose-album do this automatically.
    • Upload the photos. Usually gphoto-directory-add-photos is most convenient. For single photos, use gphoto-photo-add.
    • org2blog will automatically remember which remote URLs they correspond to.
  • Upload the blog entry with M-x org2blog-post

I made gphoto-directory-add-photos easier

Now it picks album names from a list.

Glitch in gphoto

Somewhere in gphoto there was a "bad byte code" glitch. That seems to have been solved by reloading. I'm not sure how serious it is.


I'm not sure what TV Raman planned for this function. It doesn't look like it would work, and I've never used it. However, I fixed what looks like a scoping error that always gave me byte-compilation problems. No guarantee from me that it works.

org export can't naturally flow text & pictures

Not a major failing, since it's not really meant as a page designer. But I'd have liked to flow the text around the pictures, and I couldn't without writing the HTML by hand.



What a windbelt is


A windbelt is an alternative energy source invented by Shawn Frayne. It uses the principle of aeroelastic flutter. Basically it recovers power from a string that vibrates in the wind, like an Aeolian harp.

See also here



  • They are cheap. That's undeniable. Less than $10 for the parts for a working model.
  • It works. He's selling them1 and they generate power. There are videos.
  • Less moving parts than a windmill. That's "less", not "fewer". It's got as many moving parts - more if there are multiple strings. But they don't move as much. You could hold your finger an inch away from the moving part and not get smacked by it.
  • Scalable? Apparently a 10 meter version is in the works.
  • Efficient? Shawn claims that a windbelt generates 10 to 30 times as much power as a microturbine under certain conditions; it makes 40 milliwatts in a 10-mph breeze.

    That's promising, and it suits his plan of marketing it in the third world. But for more conventional use, I'm left with questions:

    • 10 mph is a low speed for conventional windmills. They get better at higher wind speeds. How does the efficiency compare at higher wind speeds?
    • At higher wind speeds, microturbines are not the most efficient means. One would want to at least compare against conventional windmills.
  • Noisy? Actually, no. According to Shawn's site, the windbelt is actually fairly quiet.
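As a sanity check on that 40-milliwatt figure, here's a back-of-envelope estimate using the standard wind-power formula P = 1/2 * rho * A * v^3. The belt's frontal area below is my own guess for a roughly meter-long prototype, not a number from Shawn's site, so the implied efficiency is only illustrative.

```python
# Power available in a 10-mph breeze through a small frontal area,
# and the capture efficiency implied by a 40 mW output.
rho = 1.225            # air density, kg/m^3
v = 10 * 0.44704       # 10 mph converted to m/s
area = 0.01            # ASSUMED frontal area, m^2 (about 1 m x 1 cm)

available = 0.5 * rho * area * v**3    # watts flowing past the belt
efficiency = 0.040 / available         # fraction captured at 40 mW out

print(f"available: {available:.2f} W, implied efficiency: {efficiency:.0%}")
```

Under that assumed area, roughly half a watt passes the belt and 40 mW is a single-digit-percent capture fraction, which at least makes the claim plausible for so simple a device.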

Could it be made a little more efficient?

Two ideas:

  • (Not by me. A commenter on Youtube suggested it first AFAIK)

    Put the generator at or near the center of the string, where it is exposed to more vibration.

    Or if it's difficult to give it a stiff platform there, at least put it somewhat away from the end, which is the part of a string that vibrates the least.

  • Generate power from the whole range of transverse motion. From the demonstration, it appears that the windbelt generates power from vibration in only one transverse plane. But a string can vibrate transversely in two independent planes, effectively doubling the opportunity for power generation.

    Addendum: I see that more advanced versions do essentially that: http://lh6.ggpht.com/_jtvOCEq74z4/TH6yj73M22I/AAAAAAAAAF4/bPvhnV3iVfo/humsix_triangle_27.gif

Where might it be used

Shawn basically talks about using it in third world countries that can't afford windmills. But I think one could aim higher than that.

In place of conventional windmills


Could they be used instead of conventional windmills? There are two reasons that make me think maybe:

  • NIMBY. After the Ted Kennedy / Narragansett Bay brouhaha, it looks like conventional windmills aren't politically easy (Don't ask me why, I thought they were scenic). Might windbelts be a little easier on the view?
  • Cross-section. A conventional windmill goes to heroic lengths to sweep its working surface over a large cross-section, but even so, a lot of the wind goes through it without ever coming near the windmill's working surface. For a windbelt with (say) 2 meters between strings, the air is never more than 1 meter from the working surface.

On suspension bridges

In particular, vertically on suspension bridges. It's a great location:

  • The physical support is already there.
  • The electrical infrastructure is partly there - they are wired for road lighting and other electrical use.
  • There's a huge cross-section exposed to the wind.


On tall buildings

There is a lot of wind around tall buildings. They block the wind so it is all funnelled around them. It's actually a big problem, and can suck windows out and make it uncomfortable for pedestrians to walk nearby.

It's a problem many people have tried to solve, and tried to generate power from. But with conventional windmills, it's not easy. Where can you safely swing the blades? If you duct the wind to a turbine or similar, you waste a lot of power and it's noisy.

But a windbelt doesn't have that problem. And once again, having the physical and electrical support already in place is nice.


1 Individually, not mass produced. See his FAQ.