20 September 2010

Why not use stow for a distro?

Why not base a distro on stow?

What Stow is

Stow is a Perl package by Bob Glickstein that helps install packages cleanly. More cleanly than you might think possible, if you're familiar with traditional installation. It works like this:

  • You install the package entirely inside one directory, usually a subdirectory of /usr/local/stow. No part of it goes into /bin or /usr/bin or /usr/doc etc; it's all in the one directory.
    • We'll say it's in /usr/local/stow/foo-1.0
  • You command:
    cd /usr/local/stow/ 
    stow foo-1.0
    
  • That makes symlinks from /usr/local/doc, /usr/local/bin, etc. into the stow/foo-1.0 directory tree.
  • Now the package is available just as if it had been installed.
  • Want it gone? Just
    stow -D foo-1.0
    

This is neat in every sense of the word. It can manage multiple versions of a package neatly too.
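
For a package built from source, the whole cycle is short. Here's a minimal sketch, assuming an autoconf-style package and /usr/local/stow as the stow directory (foo-1.0 is a placeholder name):

# Point the install prefix into the stow tree, so that "make install"
# puts every file under /usr/local/stow/foo-1.0
./configure --prefix=/usr/local/stow/foo-1.0
make
make install

# Activate it: stow symlinks it into /usr/local/bin, /usr/local/share, etc.
cd /usr/local/stow
stow foo-1.0

# Deactivate it again:
stow -D foo-1.0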

Why they fit together

One task that a distro such as Debian or Red Hat puts a lot of work into is package management. A new package typically puts files in many places such as /usr/bin, and the distro's package manager has to track where they all go so it knows what to remove when uninstalling or upgrading. It has to use various tricks, such as renaming old versions of config files.

But stow's way is cleaner. Why not do it that way?

Also, traditional distros aren't too happy with source distributions. They can compile them when they have a source package in their own format, but that mostly misses the point.

Mostly you compile in order to get the latest (stable or bleeding edge) version of something. For instance, I compiled GEDA today. I wanted the latest because the version in the Debian lenny distro doesn't handle everything that gschem outputs.

There was not a Debian package of the latest geda-gaf source, and I didn't really expect there to be. So I downloaded the latest and compiled it. At that point Debian/dpkg/aptitude not only didn't help me, it was actively in conflict with what I was doing. dpkg couldn't manage the prerequisites for me, so I had to do that chore manually. Worse, the distro has its own idea of what is provided and what isn't, and what I compiled doesn't count in its eyes.

I didn't want to install on top of a version that aptitude had installed, because that would confuse it. So I told aptitude to remove the package. That breaks dependencies, and there is no reasonable way to tell aptitude that what I just compiled satisfies them - to do that I'd have to create a Debian package for it, and even then aptitude would treat it as a suspicious local package. So I had to remove easyspice, which I didn't want to do. I'll probably have to fetch and compile it too - even though I already have it. Debian package management thinks it knows better than I do.

And it occurred to me that it didn't have to be that way. The biggest reason why the stuff I compiled didn't have equal standing was the need to manage where packages put their files. It can't easily mix distro'd packages and compiled source because there's nothing to tell it which files the source "owns." Stow does that cleanly, and could do so even for a mix of distro'd packages and compiled source.

Digression

And would it be so hard to make autoconf's ./configure tell a package manager about missing prerequisites? Most of them were just the *-dev parts of packages I already had. I'm not too familiar with autoconf, but the only fundamental problem I see is a lack of a common agreed-on format.

Downside: Stow needs to bootstrap

Obviously this can't work for every package, since stow itself depends on a number of packages, including a kernel and a Perl interpreter. Aside from Perl's intrinsic yukkiness, that means that the Perl interpreter and its sizable standard libraries must already be available.

There are, happily, a number of variants or offshoots: Graft, lnlocal, Reflect, Sencap, Toast. However they mostly seem to be in Perl as well. One exception is Reflect, which requires only bash and coreutils. Unfortunately, Reflect appears to be abandoned.

But they don't agree on install-time config management

Stow won't replace install-time config management, though. Compiling from source typically doesn't have that at all; things are configged at build time with ./configure.

That's a problem. Distro'd packages and compiled source just don't agree on when to config.

Stow should have been done years ago

Years back, I thought that package management via symlinks and dedicated trees would be a neat idea. I never did anything about it. Bob Glickstein did. He's also done a number of other neat forward-looking things, including sregex for emacs.

17 September 2010

Sweet Dreams

Thoughts On Sweet Dreams

Sweet Dreams: Philosophical Obstacles to a Science of Consciousness is a 2005 book by philosopher Daniel Dennett.

https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjR5-_qOIvGiFpt_pghfWSvVToogufFSIfUzNy108EmhbHBeA9e88Vf_plsY2I_Du9gq-jLr7WhIoAsfiHD7Aa1jypeMG2oNEnAU7c1UnVlkUoUb-YUsM2lGOkvJsS-ha7i6_saOcKOr0U/

Dennett's understanding of consciousness

Dennett builds on his earlier ideas about consciousness, in particular the Multiple Drafts Model. He argues for a definition of consciousness as analogous to fame1. Thoughts that we are aware of are like famous people, while thoughts that we don't notice are like unknown wanna-bes. Here I say "thoughts", but that's just my term for convenience and brevity; Dennett makes it clearer what he means but I can't sum it up in a few words.

But don't imagine little mental homunculi as fans of the "famous" thoughts. The analogy doesn't go as far as that. The "audience" are simple mental modules. They may be made of even simpler modules. At the bottom, it's just tiny mental robots.

He says that an important point of the analogy is that what makes thoughts part of one's consciousness or not is their sequelae2. He argues this by asking us to imagine an up-and-coming author who was about to hit it big - new book coming out with much publicity, big TV interviews lined up, maybe even already taped - and on the day that he would have become famous, some natural disaster occurred and the news was all about that, eclipsing the hopeful author. That wouldn't be fame, even though fame would be the normal consequence, because the normal sequelae of fame did not occur. Similarly, Dennett argues, thoughts that are otherwise the same as normal conscious thoughts but don't become mentally "famous" - say because one was distracted at the time - are not conscious, because they lack the sequelae that would normally make them conscious.

Mental rehearsal as uniquely human

Dennett also adds some thoughts about mental rehearsal, "our habit of immediately reviewing or rehearsing whatever grabs our attention strongly". He speculates that mental rehearsal:

  • may be what makes a conscious thought stay conscious rather than lapsing into obscurity.
  • may be a uniquely human activity (vs animals)
  • its absence may account for infantile amnesia, ie why we don't remember our very early years.

My thoughts

So are computers conscious?

Following Dennett's definition leads me to the surprising conclusion that not only are computers conscious, they are super-conscious. Computer behavior not only fits the definition, it fits it far better than ours does.

Computers can, if suitably instructed, call up any piece of data in their RAM and send it essentially anywhere in themselves: to the CPU, to the peripherals, to the larger world via the net. (Add many "etc"s here to cover the various possibilities) They do the echoes/reverberating/recollectability thing much more perfectly than we do.

Maybe it makes more sense to say that computers are extremely conscious, just not at all self-willed.

What if fame and consciousness are really the same?

As I said above, Dennett makes it clear that his fame analogy is not literal; "famous" thoughts are appreciated by mechanical mental modules, not by an audience of tiny people. But of course at the sub-human granularity he's talking about, there couldn't be a human audience. At the coarser granularity of human communication, that doesn't apply.

What if we take the fame = consciousness analogy as actually correct?

  • Consider famous thoughts - perhaps the phrases of Shakespeare or the equations of Newton. Do their continued sequelae make their thinkers still conscious?

    I'd say no. Obviously the thoughts are part of some consciousnesses, but not part of Newton's no-longer-functioning consciousness.

  • Contrariwise, consider a thought that never has public sequelae - a thought that never gets out into the world at all. Perhaps the thinker dies without ever communicating it by word or by deed. So those thoughts have no public sequelae, which we're assuming are of a piece with mental sequelae. Are those thoughts conscious or not?

    It misses the point to answer, "Of course! If you had asked him, he would have said so, and been miffed at you for doubting it". We just said that that never happened, and the real3, actual sequelae are at issue, not hypothetical ones.

  • If at some dark point in the future humankind is all destroyed, then there will have been no ultimate sequelae to any fame. By our hypothesis, neither will there have been any to our thoughts. Does that imply that nobody was ever conscious?

    In my view, yes in the large cosmic view, but we observers live within the smaller view. The (ultimately doomed) culture surrounds us and we can see it just fine. We may reasonably answer no, while still knowing that some day it will all be gone.

On being misunderstood

Daniel Dennett writes in a very gentle style. Phrases like "These idiots did not understand what I was saying" are not to be found in his books. Nevertheless, I get the impression that his patience is sometimes tried by the misunderstandings he is responding to.

Footnotes:

1 He suggests "influence" as another inexact word for what he is describing, but does not expand on that.

2 Again, I can't do justice to sequelae in a few words, but basically he means "consequences", in a slightly technical philosopher's sense. See here for more.

3 This is not to contradict the many-worlds interpretation. If this concerns you, then just read "real" as "in the same branch as the observer".

New edition of Grieg's Lyric Pieces

I bought Lyric Pieces, the sheet music to all of Grieg's Lyric Pieces. There have been editions of this before; I had borrowed the Bertha Feiring Tapper edition, but at c. $80 for one book I couldn't justify buying it. I was happy when this new edition came out at a reasonable price.

Also, the fingerings here are cleaner in a few places. I have in mind Vöglein, where Tapper had some strange fingerings that may have been mistakes.

14 September 2010

Alternatives to No Mind Hair

Hanson says: Gods Near or No Mind Hair

Robin Hanson speculates that U-evolution (aka fecund universes) implies that either:

  • Imprinting minds on baby universes is impossible.
  • One such mind is imprinted on our universe and probably could be found.

In this post I propose some other possibilities.

Let's calculate the right probabilities

But first I want to address his calculation.

A self-reproducing universe would have a chance p of evolving intelligence, which would then birth an expected number N of similar baby universes, such that p*N > 1.

This calculation isn't computing the right thing. It gives us an expected number of mindful universes after a given number of U-generations. That tells us nothing by itself. Under the U-evolution assumption, there are infinitely many universes that we might have been in.

What we need to estimate is the probability that a universe selected at random is mindful or mindless. We could model the succession of U-generations as a two-state Markov chain run for infinitely many generations. So either:

  • The MINDFUL state is persistent and the MINDLESS state is transient, so the probability of picking MINDFUL is 1.0.
  • The MINDLESS state is persistent and the MINDFUL state is transient, so the probability of picking MINDFUL is 0.0.
  • Each state transitions to the other with some nonzero probability, so the chain is irreducible. Then the probability of picking MINDFUL depends on the transition probabilities (see the sketch below).
  • Or MINDLESS never transitions to MINDFUL - but we know that's not so, since our own universe evolved minds.
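
In the irreducible case we can say more: the long-run fraction of MINDFUL universes is the stationary probability of the two-state chain, q/(p+q), where p is the chance a mindful universe's child is mindless and q the reverse. A minimal sketch in Scheme; the numbers are made-up placeholders, not estimates:

;; p = P(MINDFUL -> MINDLESS), q = P(MINDLESS -> MINDFUL).
;; Stationary probability of MINDFUL in a two-state Markov chain:
(define (stationary-mindful p q)
  (/ q (+ p q)))

;; Made-up numbers: mindless universes rarely birth mindful ones
;; (q = 0.01) and mindful ones mostly birth mindless ones (p = 0.9):
(display (stationary-mindful 0.9 0.01))  ; prints about 0.011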

Mindless because mindless black holes are useful

Would the U-children of a mindful universe all be mindful universes? Or even mostly?

Though of course I can't estimate the probability with any real confidence, if absolutely forced to answer I would say probably not. All-mindful children would imply that an advanced civilization imprints on nearly every black hole it makes. But black holes are probably very useful to such a civilization for other purposes. Much has been written on the potential uses of black holes; I won't repeat it here.

In other words, the transition probability MINDFUL -> MINDFUL is probably fairly low. So I would have to estimate the probability that we live in a mind-imprinted universe as fairly low.

Co-opting

Many years ago on rec.arts.sf.science I argued a similar issue with regard to the Fermi Paradox1. The issue was not the Fermi Paradox itself, but what a certain variant of the Anthropic Principle2 had to say about it.

The Anthropic Principle has often been used to answer philosophical questions about probable universes by ruling out the uninhabitable ones, thus making our own habitable universe seem more probable, even if a priori it is calculated to be improbable3.

I suggested that the question "Are we alone in the universe?" was similar to the Anthropic question, and like the Anthropic question, implied an affirmative answer. I speculated that if advanced extraterrestrials had visited us, we would not be asking whether we were alone in the universe, not even to proceed to answer the question in the negative. They would almost surely dominate us so thoroughly that we wouldn't have a separate identity to ask about. We would ask "Are humans-and-aliens alone in the universe?" - and the implied answer would again be yes.

For comparison, I don't think that dogs would ask "Are dogs alone in the universe?", even if they could formulate the thought. They'd ask "Are dogs-and-people alone in the universe?" Another comparison: I don't seriously ask "Am I myself alone in the universe?". It'd be a silly question! I only seriously ask "alone?" about the largest extension of "us" that I know of, all humanity.

By this argument, it is not very surprising that we find the largest extension of "us" to be alone in the universe. It could hardly be otherwise.

First movers and families of mindful universes

Other people speculated that, given the Anthropic Principle, we should expect to be in a universe with nearly as many observers as possible. But the argument above provides a counterargument: Almost no matter our state of development, we'll find that "we" are alone. On that interpretation, it's not that surprising that "we" should be in some intermediate stage of growth and appear likely to be the first movers in the next stage of growth.

In a similar vein, even if there are familial chains of mindful universes, perhaps it is still not surprising that we find ourselves to be potentially first movers.

Or maybe it's us

Suppose that the universe was in fact imprinted and will one day become mindful. What form should we expect the mechanics of it to take?

Of course it's very hard to say, but one answer builds on the idea that "Ontogeny Recapitulates Phylogeny". That is, the way an organism develops loosely repeats the way it evolved in the first place. For instance, at one point in a fetus' development, it has gills. Yes, you and I had gills. What for? They're not functional, so why do we grow them and then lose them? Because we (and all vertebrates) evolved from fish.

Why is it still like that? Because the only way our DNA knows how to build a human is by doing it the tried-and-true way that worked before, with small variations. Apparently all the variations that didn't make gills4 also omitted something important, so the gill-making stays.

It could well be that our most distant U-ancestor-inhabitants came to be in a way broadly similar to how we did: evolving chemically from non-life in some stable situation fed by a stellar energy source. I'll call that idea evolving-on-planets. If so, "Ontogeny Recapitulates Phylogeny" implies that when they imprinted mindfulness on our universe, they would use evolving-on-planets. They'd make, more or less, us.

This is not to suggest that our U-ancestor-inhabitants were short-sighted the way DNA is. Even if they could design other methods, they might still prefer mindfulness through evolving-on-planets:

  • Having no feedback from the black holes they so imprinted, they must design "blindly". Maybe they would trust only the most familiar way of coming to mindfulness, the same one that spawned them.
  • Maybe they would have a sentimental attachment to evolving-on-planets.
  • Maybe evolving-on-planets is a technologically appealing way for a U-imprinter to make a universe mindful.

Footnotes:

1 The Fermi Paradox is: Since there are so many stars in so many galaxies, and each has some probability of bearing life, where is everybody?

2 The Anthropic Principle states that we can only find ourselves in a habitable universe, because otherwise who would be there to notice its uninhabitability?

3 But beware the Boltzmann Brains answer. The Anthropic Principle cannot be used to overcome arbitrarily high adverse probabilities. If the adverse probability is too steep, it becomes more likely that an observer would find himself to be an improbable thing in a probable but inhospitable universe (A Boltzmann Brain) than a normal thing in an improbable but hospitable universe.

4 Or that reduced gill-making to less than it is now, to be very picky.

11 September 2010

Scheme setters

Where looking for a generalized set! led me

Objections I read about SRFI-17

It "associates properties with Scheme symbols"

  • It wants to associate properties with Scheme symbols. This is considered not in the spirit of Scheme.
  • In SRFI-17, setters pertain to symbols rather than procedures. That can cause problems when a symbol is rebound.

But that turns out not to be the case. It takes some careful reading to discover it, and reading the reference implementation helps. SRFI-17 associates setters to procs, not to symbols.

That bears repeating: what it associates a setter with is really a proc, one that you can pass to map etc. It is as if every proc became a double-proc of getter-and-setter form.
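
A minimal sketch of that behavior, assuming a Scheme with SRFI-17 loaded (e.g. (use-modules (srfi srfi-17)) in Guile):

(define ls (list 1 2 3))

;; (set! (proc args ...) v) dispatches to ((setter proc) args ... v).
;; The setter of car is set-car!.
(set! (car ls) 99)
ls                     ; => (99 2 3)

;; The setter travels with the procedure object, not with the symbol:
(define my-car car)
(set! (my-car ls) 7)   ; still works - my-car is the very same proc
ls                     ; => (7 2 3)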

Its syntax

Some have proposed that set! should only apply to a symbol, and that a generalized set should have some other name. I don't find that argument convincing. ISTM that when set! has a symbol as its first argument, that's just a natural case of the generalized set!. So I prefer keeping the SRFI-17 syntax:

;;More general set!
(set! (proc args) value)
;;Fully general set!
(set! form value)
;;Set! as we know it:
(set! var-name value)

Conclusion

So I think SRFI-17 is just about right.

Andy Gaynor's proposal

I also looked at Andy Gaynor's 2000 proposal, http://srfi.schemers.org/srfi-17/mail-archive/msg00077.html

But IMO it has some drawbacks of its own:

  • I don't like the multiple arguments facility. `set!' shouldn't take multiple arguments and figure out what to do with them.
  • In some ways it does too much. I'd prefer no `define-setter'. I'd rather define getter and setter together. If a pre-existing lambda's setter is to be defined, it would seem better to use SRFI-17's proposal of:
    (set! (setter x) proc)
    
  • I'd also prefer no support for directly calling the setter. It can be done but shouldn't be part of the implementation. Setters are all about using a parallel expression for destructuring what is to be set. Using a setter explicitly is just calling a function. Why support that specially?

What else might have a setter?

One interesting candidate for having a setter is `assoc'. There are different interpretations:

  • pushnew: If the associated item is not found, add it; otherwise mutate it.
  • mutate-only: If the associated item is not found, signal an error; otherwise mutate it. So the set of keys is immutable.
  • new-only: If the associated item is not found, add it; otherwise signal an error. So the existing mappings are immutable.
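
Here is a sketch of the pushnew interpretation. The alist lives in a one-slot box so that the not-found case can extend it; all names here are hypothetical:

;; The box is a one-slot vector holding the alist.
(define (make-alist-box) (vector '()))

(define (alist-ref box key)
  (let ((hit (assq key (vector-ref box 0))))
    (and hit (cdr hit))))

;; The pushnew interpretation: mutate if found, add if not.
(define (alist-set! box key value)
  (let ((hit (assq key (vector-ref box 0))))
    (if hit
        (set-cdr! hit value)
        (vector-set! box 0
                     (cons (cons key value) (vector-ref box 0))))))

;; With SRFI-17 one would then register the setter,
;;   (set! (setter alist-ref) alist-set!)
;; after which (set! (alist-ref box 'k) v) does a pushnew.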

More fields

If we're going to allow these double-lambdas to have two fields, why not more? Some other possibilities:

  • "pusher" and "popper", useful when the underlying type is a container.
  • docstring, a string or a function which yields documentation.
  • "accessor-doc", a function that describes the accessor, suitable for combining to tell where a thing is located.
  • "matcher" - allowing constructor functions to have counterparts for pattern-matching.
    • It would be nice to be able to use this for formals in arglists.
  • "type" of the proc
    • Min/max arity
    • Argument types
    • Promises about it
    • Other type information
  • "evaluation-type"
    • normal
    • macro
    • built-in
  • Optimization information
    • "merge", which tries to advantageously merge a given call to the function with other calls immediately before or after it.
      • How: It is passed this call and preceding calls, returns a merge or #f.
      • A smart merge-manager may know what functions it can hop over.
    • Specializations. List of the available specializations.
    • Hidden read/clobbered arguments.
    • compiler-macro as in Common Lisp
    • Profiling data

Lambda as an extensible type

So this extended lambda would be an object type. Since the type will be so extensible, we'll want a base type that provides this minimally and can be extended. So that base type must be recognized by the eval loop as applyable.
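
As a first approximation, the base type could be an ordinary R7RS record. Standard records aren't applyable, so this portable sketch funnels application through an explicit xcall; the names are hypothetical:

(define-record-type <xlambda>
  (make-xlambda getter setter docstring)
  xlambda?
  (getter    xlambda-getter)
  (setter    xlambda-setter)
  (docstring xlambda-docstring))

;; Portable stand-in for "the eval loop recognizes it as applyable":
(define (xcall xl . args)
  (apply (xlambda-getter xl) args))

(define xcar (make-xlambda car set-car! "Access the car of a pair."))
(xcall xcar (list 1 2 3))  ; => 1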

Scheme generalized letting

Generalized letting

Imagine for a moment that Scheme bindings could bind not just values but all sorts of properties that a field might have. Think CLOS. Here are some of the possibilities:

(let-plus name
   (  
      ;;Equivalent to an ordinary binding
      (id1 'value value-1)
      ;;An immutable object
      (id2 'value value-2 'setter #f) 
      ;;An immutable object with a getter.
      (id3 'value value-3 'getter get-id3 'setter #f) 
      ;;A "called" object, with both getter and setter.
      (id4 'value value-4 'getter get-id4 'setter set-id4) 
      ;;An uninitialized "called" object
      (id5 'getter get-id5 'setter set-id5) 
      ;;A type-checked object.
      (id6 'value value-6 'satisfies my-type-pred?))
   (list id1 id2 id3 id4 id5 id6)
   (set! id1 12)
   (set! id2 12)
   (set! id3 12)
   (set! id4 12)
   (set! id5 12)
   (set! id6 12))

Equivalent to

(let name
   (  
      (id1 value-1)
      (id2 value-2) 
      (id3 value-3) 
      (id4 value-4) 
      id5
      (id6 value-6))
   (list id1 id2 
      (get-id3 id3) 
      (get-id4 id4) 
      (error "Uninitialized id5") 
      id6)
   (set! id1 12)
   (error "Can't set id2")
   (error "Can't set id3")
   (set! id4 (set-id4 12))
   (set! id5 (set-id5 12))
   (let
      ((new-val 12))
      (if (my-type-pred? new-val)
         (set! id6 new-val)
         (error "Wrong type"))))

What this provides

This mechanism could provide:

  • immutable fields
  • effectively constant environments
  • enforced type restrictions
  • succinctly enforced controlled access to fields (controlled access can be succinctly provided now, but not succinctly enforced)

How this can be minimal and extensible

Looking at the above, you can see that there are already a fair number of fields that might be set, and one would like the user to be able to extend the set further. Surely we don't want to build anything so complex into the core language.

So what is a modest construct that can support this? A mere syntactic transformation won't do; it wouldn't apply the getter everywhere the current environment is used, which is what we need.

So I propose a primitive binding construct that knows a getter function. That is, when the associated symbol is evaluated, the value is the return value of the getter function when passed the (actual) value associated with that symbol.

In SRFI-17, any procedure can have a setter - an associated procedure that sets the object's value, which set! will use - so controlled setters would simply fall out.

(controlled-let my-guard
   ((a 12)(b 144))
   (list a b))

is equivalent to:

(let 
   ((a 12)(b 144))
   (list (my-guard a) (my-guard b)))

In order to implement the generalized binding constructs on top of this, one would do the following (a code sketch follows the list):

  • Predetermine a suitable object type T
  • Predetermine a getter that expects its argument to be of type T
    • Similarly for the setter, if provided.
  • Write a macro that:
    • Takes a list of generalized binding specs and a body
    • Interprets each binding spec as a symbol plus the arguments to a constructor for an object of type T.
    • Uses controlled-let, giving it:
      • getter: the predetermined getter
      • bindings: the zip of the bound symbols and the list of constructor forms
      • body: the body argument
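
Putting that together, a minimal sketch of the macro, assuming controlled-let exists as proposed, with hypothetical make-T and T-get as the predetermined constructor and getter (the named-let variant is omitted for brevity):

(define-syntax let-plus
  (syntax-rules ()
    ((_ ((id spec ...) ...) body ...)
     ;; Each binding spec becomes the argument list of a constructor
     ;; call; controlled-let routes every evaluation of an id through
     ;; the predetermined getter.
     (controlled-let T-get
        ((id (make-T spec ...)) ...)
        body ...))))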

09 September 2010

Windbelts 3

A few days ago I suggested that large-scale windbelts might do more with less by transferring energy between strings, so that the whole assembly would need only one magnet-and-coils generator, or at least fewer. Today, just for fun, I sketched what a large freestanding multi-string windbelt might look like if it used this idea. The circle in the middle would contain the generator.

https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBI25UyyVKZBFs76WymALejeJK0_x1SANcUw2hRmUSVf2t_G5MeNaYhno9vjLC-FO8bSXd5f6Xs0H9IWuK4gWp_kv0HtAlq_0vs_nTgr79-pwUXZsLcXKBkx4ruWpEgZKCSq5hunok4AM/