29 January 2011


My extension to filesets: fileset-whole


I was frustrated with the project manager I was using. It didn't do everything I wanted, and it tended to mess up settings in other buffers. I had made a nice (and tricky to code!) plugin that let a certain key sequence launch project commands from any affected buffer. But it wasn't stable. I'd use it for a while, and then find it had disappeared. Or I'd use it on a second project, and it would complain that it couldn't find data about that project. Unstable.

So even though I had contributed a fair bit of code to it and it was the best of the project managers, I was looking for a better way.

Yesterday it dawned on me that emacs' built-in filesets functionality is actually fairly close to what I want. It is a group of files, which is the core of what I wanted. It launches commands wrt them. It organizes them well for use, and allegedly co-operates with other software such as dired+. And it's stable and apparently not buggy.

So I tried filesets out. It didn't serve, because it insisted on certain ways of doing things. Nevertheless, it was tantalizingly close. If only it could be persuaded to do just a few things differently: eval args to commands, not insist on running each command on each file, and keep a little freeform data in addition to the file names, things like that. Minor things, but woven deeply into the fiber of filesets.

I turned it over in my mind most of yesterday, wondering if I should fork the filesets development, or make changes and submit them (often a frustrating experience because it places you at the mercy of someone else's attention and comprehension), or make a personal version (and get out of sync with other development). None appealing.

The answer

I realized last night that there was a way to extend filesets instead of wrestling with its assumptions. All I had to do was define another alist from fileset name to data, this one holding general key-value data instead of essentially just file names. Then I could make it store all sorts of associated data with filesets. And of course, I had to rewrite the command runner and some other stuff.

So today I wrote fileset-whole. It basically adds twins of some of filesets' features that didn't do all I wanted:

  • An alist of key-value data associated with each fileset, a near-twin of filesets-data
  • A list of commands that can be launched on a fileset as a whole. Like filesets-commands, but I do args differently for various reasons.
  • A launcher for commands

One other neat feature: it remembers which fileset an open buffer is associated with (in the buffer-local variable `fileset-whole-name-here'). So already I am not prompted for fileset names nearly as often.

Current state

So far, no serious problems. The commands work, as far as I've tried them. Now I'm going to switch back to my usual project, klink, but using `fileset-whole'.

I haven't published it yet. I'll probably create a git repo for it after I've tried it out myself and fixed anything serious that comes up.

24 January 2011

cords, emacs, and lone-COWs


Quick recap

I've been blogging about how I'd like to combine emacs and cords. My time is already committed to Klink and Emtest, and the time I don't spend on them is coveted by other projects. But the idea has been jangling around in my head for some time, so I blog about it.

I've already talked about the basics, the text structure emacs wants, and how cords might be adapted to support it. This led me to imagine dynamic cords that mirrored the state of objects, and to ask how their updates could cause redisplay in a reasonable way.

The problem

Propagating updates

How might mutation of an object cause redisplay? Here's a very naive approach: When anything mutates an object, we set a dirty flag in the window(s) (if any) that are displaying that object.

But this is not robust. In emacs, only a few operations intrinsically require redisplay: inserting into a buffer and changing certain text properties. Most operations do not directly require redisplay; those that indirectly require it call one or more of the direct operations. So failing to force redisplay is mostly not an issue. (Even that is not completely robust.1) But for us, mutations to arbitrary objects would require redisplay. That is far harder to track.

We'd also need to propagate the dirty flag to exactly the windows that are displaying that object, ie windows whose displayed cord is transitively built from it.

When a redisplay was appropriate, each window would check whether any mutated object contributed to it, and if so, recalculate the cord(s) that object contributed to and redisplay them in their proper context.

This is a problem, because a cord node doesn't naturally know about other cords built from it, and because it gives the window no hint what changed, so it has to redisplay everything.
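To make the cost of the naive approach concrete, here's a rough sketch in Python-as-pseudocode (all names here are my own invention, nothing is a real emacs or cords API):

```python
# A sketch of the naive dirty-flag scheme described above.
# All names are hypothetical illustrations, not real emacs or cords APIs.

class Window:
    def __init__(self, displayed_objects):
        self.displayed_objects = set(displayed_objects)
        self.dirty = False

    def maybe_redisplay(self):
        # No hint about WHAT changed, so we must redisplay everything.
        if self.dirty:
            self.redisplay_all()
            self.dirty = False

    def redisplay_all(self):
        pass  # recompute every cord and repaint the whole window

def mutate(obj, windows):
    # EVERY mutation of ANY object must find and flag the windows
    # transitively displaying it -- this is the hard, fragile part.
    for w in windows:
        if obj in w.displayed_objects:
            w.dirty = True
```

The sketch makes the two weaknesses visible: `mutate' has to know about all windows, and a dirty window can only redisplay everything.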

A new cord each time?

Cords are basically immutable. Mutating a cord is contrary to the spirit of cords; they are supposed to stay constant.

So new text means a new cord object. If the display should change, there should be a new cord to display.

Or maybe not?

But there are problems with making a new cord every time. The process is driven by object mutation, and every relevant object mutation wants to make a new chain of ancestors for the affected cord. That would be grossly inefficient. We should only make the cords that are required for display.

Another problem is that it makes it more work to track cord contributions, because cords' identities keep changing. If we keep lists of them, we have to keep erasing old entries and adding new ones.

And we usually don't need to

And consider dynamic cords made from objects when those objects have since mutated. It is possible that something might be interested in their previous value, but it's unlikely. Otherwise, they don't really mean anything. There is no live object that they are describing, except perhaps by pure co-incidence. Nothing wants to display them.


Introducing lone-COWs

So let's treat dynamic cords and their ancestors as a sort of copy-on-write object. Only we'll optimize them for the common case where a cord is released from its old use before a new use is wanted.

In fact, we can imagine a fairly general facility for objects like this, objects that are dynamic but are treated as constant building blocks. I'll call them "lone-COWs", because they are copy-on-write (COW) objects that are often unique, but not always (otherwise I'd have called them "unique objects" and the COW part wouldn't be needed).

General nature

Lone-COWs, like COWs, are a type of reference. Technically they are handles, that is, references only pointed to by other references. A lone-COW holds an object OBJ.

When OBJ is to be mutated, a lone-COW figures out whether anything is interested in OBJ's old value. If anything is, it (being a COW) copies itself and OBJ and lets one OBJ be mutated while the other stays constant. In any case, the lone-COW sends notifications to interested parties.


There are these types of reference to lone-COWs:

  • "Owner" references that want to treat the lone-COW as constant. If an owner actually wants to mutate it, it isn't really an owner but a co-mutator (or has become one). It'd be a special type of reference that holds a lone-COW and decrements the count on finalization, but I won't explore that here.

    A lone-COW only needs a count of these. We expect this count to usually be 0.

  • "Tracker" references that want to track the lone-COW across its mutations.
  • "Co-mutator" references that want to (pretend to) copy some other object whenever the lone-COW copies (or pretends to copy) itself. I will call the other object a "co-mutator".

External operations on lone-COWs

  • Copying OBJ. This is just making an "owner" reference, which increments a reference count.
  • Co-mutating with OBJ. That means that an object A wants to, next time OBJ is mutated, effect something like:
    (set! A (make-a 'field-1 OBJ' 'field-2 (a-field-2 A) etc))

    This is expected to chain from child to parent to grandparent etc.

  • Tracking OBJ. That means that an object A wants to co-mutate with OBJ but A has not yet processed OBJ's earlier mutations. A wants OBJ to go ahead with any further mutations and A will catch up to it later. If the lone-COW splits, A wants to know about it.
  • Mutating OBJ. This must notify the lone-COW beforehand.

Internal operations

  • Copying the lone-COW. This is done if OBJ will be mutated and there are any "owner" references to it (unless there is just one and there are no co-mutator references, but OBJ shouldn't be set up to be mutated this way; owners expect to hold constant objects).
    • This notifies every tracker and co-mutator that now the lone-COW lives at the new location.
    • All the "owner" references continue pointing at the old copy.
  • Anticipating that OBJ will mutate. Operations:
    • Copy the lone-COW if it needs it
    • (No-op) Now there are certainly no owners
    • Put the lone-COW in a state where nothing can add co-mutator references or owners to it.
    • Notify every co-mutator that OBJ will mutate. Remove it from the list of co-mutators. The onus is on the co-mutators to arrange further relations such as still co-mutating or becoming a tracker.
    • (No-op) Trackers don't care about mutation.
    • Put the lone-COW in a state where it accepts co-mutator references and owners.
    • (No-op) The mutation can now proceed.

Notifying the references

  • Owner references don't ever need to be notified.
  • Tracker references are notified only if the lone-COW copies. We just change a pointer in them.
  • Co-mutator references, when notified, call back to act on an arbitrary object.

    A co-mutator that is a lone-COW itself will consider itself notified of mutation, thus propagating the notification wherever it belongs.

    A co-mutator reference would hold:

    • Idiosyncratic info. The callback interprets this. Generally this would include a weak reference to an interested object, effectively a backlink.
    • A callback for notifying the interested object that the lone-COW copied. It is to be called with these params:
      • the idiosyncratic info
      • the lone-COW

To make it easy to notify the references, a lone-COW keeps a list of tracker and co-mutator references to it.


A lone-COW has this data:

  • OBJ, a mutable object of arbitrary type.
  • A count of "owner" references
  • A list of tracker references
  • A list of co-mutator references
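Pulling the data above together with the pre-mutation protocol, a minimal sketch in Python-as-pseudocode might look like this (the class and all its names are my own illustration, not an existing implementation):

```python
import copy

class LoneCOW:
    """Copy-on-write handle around a mutable object OBJ (sketch only)."""

    def __init__(self, obj):
        self.obj = obj           # uniquely owned by this handle
        self.owner_count = 0     # owners treat OBJ as constant; usually 0
        self.trackers = []       # references that follow us across copies
        self.co_mutators = []    # (info, callback) notification links

    def add_owner(self):
        # Externally, "copying" OBJ is just this increment.
        self.owner_count += 1
        return self

    def before_mutate(self):
        """Call BEFORE mutating OBJ; returns the handle whose OBJ may mutate."""
        live = self
        if self.owner_count > 0:
            # Copy the lone-COW: owners keep the old, constant copy,
            # while trackers and co-mutators move to the new location.
            live = LoneCOW(copy.deepcopy(self.obj))
            live.trackers, self.trackers = self.trackers, []
            live.co_mutators, self.co_mutators = self.co_mutators, []
            for t in live.trackers:
                t.target = live
        # Notify co-mutators, erasing each link as it is used (this is
        # what makes circular or repeated notification harmless).
        pending, live.co_mutators = live.co_mutators, []
        for info, callback in pending:
            callback(info, live)
        return live
```

The sketch glosses over the frozen state during notification, but it shows the essential asymmetry: owners are a bare count and stay with the old copy, while trackers and co-mutators are lists of links that follow the mutable copy.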



A lone-COW must be notified before its mutable object changes. It has to be before, not after, just in case there are any "owner" references to it.

Ways to satisfy this requirement:

  • All its components being either immutable objects or lone-COWs themselves.
  • Some low-level mechanism that intercepts attempts to mutate OBJ or merely to arrange to mutate it.
    • Possibly this creates lone-COWs that hold parts of the object.
  • Or other code has the onus to notify it.
Uniquely own OBJ

OBJ should be referred to only by the lone-COW. This could be guaranteed by copying it when making the lone-COW.

Co-mutator and tracker references must set themselves up

The onus is on co-mutator and tracker references to arrange to be found by the lone-COW.

Copying must understand lone-COWs

Functions that copy objects must increment a lone-COW's reference-count instead of physically copying it.

Some co-mutator strategies

Lazy lone-COW

When it receives notification, it:

  • Creates a "tracker" reference to the first lone-COW.
  • Arranges to conform itself later, possibly by setting a dirty flag.
  • (Of course it considers itself mutated; see above.)

When it updates, it:

  • Makes a co-mutator reference from the tracker reference
  • Drops the tracker reference.
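As a rough sketch of the lazy strategy (hypothetical names; `conform_to' stands in for whatever real work conformance involves):

```python
# Sketch of the lazy co-mutator strategy (all names hypothetical).
# On notification it downgrades itself to a mere tracker and sets a
# dirty flag; real conformance is deferred until update() is called.

class LazyComutator:
    def __init__(self, lone_cow):
        self.target = lone_cow   # starts life as a co-mutator reference
        self.dirty = False

    def on_notify(self, lone_cow):
        self.target = lone_cow   # now just track it across copies
        self.dirty = True        # conform later, not now

    def update(self):
        if self.dirty:
            self.conform_to(self.target)  # re-derive our own state
            self.dirty = False
            # ...and re-register as a co-mutator of self.target here.

    def conform_to(self, lone_cow):
        pass  # e.g. rebuild a cord, or schedule a window redisplay
```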
Co-mutator that is not a lone-COW

So it gets notified but itself has nothing to notify. Probably this means it is some sort of user interface, say a window. When it receives notification, it arranges to update the display in the near future, possibly the next time it's waiting for a command. Until then, it just tracks the lone-COW.

So it uses the lazy lone-COW strategy except that it ultimately updates a display.

Co-mutator with parts

When something tries to get a reference to a mutable part, give it a lone-COW reference and arrange to be a co-mutator of that lone-COW.



Circular references

Circular lone-COW references are not a problem, because we erase each notification link as it is used. So even if a notification transitively reaches the place it started, it doesn't continue circling.

Repeated notification

Repeated notification is not a problem, again because we erase each co-mutator notification link as it is used.

Backdoor changes

Owners can be "cheated" if OBJ is mutated in some backdoor manner. This can't happen if the first requirement above is satisfied; of course, nothing guarantees that it is.

Long-distance owners

Do new "owning" references to the top of the graph cause problems when the lower objects are mutated? No, because we copy before mutation (that's what "copy on write" (COW) means). We don't need child lone-COWs to have non-zero reference counts. When a lone-COW splits, we copy OBJ. If OBJ has lone-COW components, "copying" them will increase their "owner" reference count, which will cause them to split if the mutation came from within them. So as long as the first requirement above is satisfied, we're untouchable.

Multiple tracking?

Should there be multiple tracking & co-mutating? We've assumed throughout that all the mutations are of interest to all the trackers and co-mutators and to none of the owners. For our immediate purposes, single tracking is sufficient. And the lack of multiple tracking does not create problems; it's just a missing feature.

Should owners get the copy?

Should the treatment of trackers and owners be reversed? Maybe. I did say we want to optimize for the common case where we have a lone object being mutated and no owners. Yet the fact that the lone-COW copies means that we do have owner references, which we didn't expect to have often. For now, I don't want to muddy the waters by using the reverse polarity from what normal COWs use.

The semantics of new owners or co-mutators during notification

At first I thought they'd apply before the mutation. But when they are notified, we've already committed to the mutation. The notification period is just meant for housekeeping actions, not for functional action. Then I tried applying them afterwards, but that was wrong too. If something re-notified the lone-COW of this mutation, it could cause them to be applied early. So we can't always control whether they are applied early or late. That's troublesome. So it seemed best to just block them.

We could try to control whether they are applied early or late, but that seems potentially complex.

Ramifications for our use

So we've come back to the design to merge emacs and cords. In this design, we'd use lone-COWs for:

  • dynamic cords
  • The arbitrary mutable objects dynamic cords are built from.
  • cords built from dynamic cords
  • Windows as displays
  • Windows as containers that track cords

Using a chain of lone-COWs has the effect of propagating any object mutations to all the windows that are displaying it, without further intervention. That's basically problem solved.


1 Eg, auto-insert used to not cause redisplay, so you would only see the inserted template after you started typing.

23 January 2011



Quick catchup

About a year ago I blogged an idea about combining cords and emacs1. Now that it has jangled around in my head for another year, I have some further thoughts. Yesterday I talked about how emacs wants to structure text and how it pretends to.

Now I want to talk about the ramifications of structured text on my wild idea of combining emacs and cords.

Don't ask buffer to hold a list etc

One thing I talked about yesterday was how some modes want the buffer to be a list or tree of objects. It's tempting, then, to imagine that some buffers would hold a list or tree of printable objects instead of text.

But it wouldn't work well. Even when printable objects fully control their own representation, as widgets and ewocs do, they generally do not want to be the whole printed representation. They want accoutrements, often headers, footers, and separators.

So just one object

So a buffer would hold at most one object: either (as now) a text string, or a printable object that manages both its own sub-objects (the model) and the cord that it presents for display (the view).

Presumably the design is basically recursive, something like this: The root object supplies a template and farms most of its representation out to its sub-objects. They in turn farm it out to sub-sub-objects, etc. Each subN-object generates a cord from data that it owns.

Dynamic content

Our subN-objects will create content dynamically. Cords support that with CORD_from_fn. But doing only that has some drawbacks:

  • CORD_fn wants to give one character at a time. That's very inefficient.
  • Each displayable object would have to manage caching etc itself.
  • We'll want different things from it for different purposes:
    • To display
      • It might be blank or abbreviated for invisibility or folding. Emacs does this by looking at a magic text property, but it's really a display concern.
      • It might include ornaments that are not part of the proper text.
    • To search.
      • If an item is folded, we generally still want to search its text.
      • For widgets and similar, we'd like to search on just the real text and not ornaments in the display such as button characters.
    • To save to file.
      • This representation might be completely different from the displayed text, eg as we do for project-buffer-mode buffers.
    • Other lesser purposes, such as:
      • Displaying differently in different windows.
      • Exporting

So let's make these objects' lives a little easier. I suggest adding to the roster of cords types a super-CORD_from_fn, which would contain:

  • clientdata as for CORD_from_fn
  • A cache for each major use-type above (display, search, save)
    • A dirty or uninitialized cache could be indicated as a magic object.
    • Possibly also a catchall cache for other use-types.
  • A method that:
    • Takes client_data
    • Takes an object indicating the use-type
    • Returns a cord
  • Not a length field. Length may differ across use-types; if the cache is dirty it's not known, while if the cache is clean one can just find the length in it.
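Here is a rough sketch of such a node in Python-as-pseudocode (the real cords library is C; `SuperCord', `make_cord', and the use-type keys are all my invented names):

```python
# Sketch of the proposed "super-CORD_from_fn" node (hypothetical names;
# the real cords library is C and has no such type).

DIRTY = object()   # magic marker: cache is dirty or uninitialized

class SuperCord:
    USE_TYPES = ("display", "search", "save")

    def __init__(self, client_data, make_cord):
        self.client_data = client_data
        # make_cord(client_data, use_type) -> cord (here, just a string)
        self.make_cord = make_cord
        self.cache = {use: DIRTY for use in self.USE_TYPES}

    def cord_for(self, use_type):
        # Build the cord for this use-type on demand, then cache it.
        if self.cache[use_type] is DIRTY:
            self.cache[use_type] = self.make_cord(self.client_data, use_type)
        return self.cache[use_type]

    def invalidate(self):
        # No stored length: it differs per use-type, and when a cache
        # is clean the length can be read off the cached cord itself.
        for use in self.USE_TYPES:
            self.cache[use] = DIRTY
```

The point of the sketch is that the displayable object supplies one generator function and gets caching per use-type for free, rather than managing it itself.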

Some quick notes

Display and faces

Displaying in different faces need no longer be a trick. We could add to the roster of cords types a "face" cord that controls the face that text enclosed in it is displayed in.

We could also provide an "image" cord, meaning to show that image inline with the text.
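As a sketch of what a "face" cord might amount to (hypothetical; ordinary strings stand in for leaf cords and lists for concatenation nodes):

```python
# Hypothetical sketch: a "face" cord as a wrapper node that carries a
# face name and encloses a child cord; display walks the tree and
# paints each leaf in the innermost enclosing face.

class FaceCord:
    def __init__(self, face, child):
        self.face = face
        self.child = child

def leaves_with_faces(cord, face="default"):
    """Yield (text, face) pairs for display."""
    if isinstance(cord, str):
        yield (cord, face)
    elif isinstance(cord, FaceCord):
        yield from leaves_with_faces(cord.child, cord.face)
    else:  # a concatenation node: any sequence of sub-cords
        for sub in cord:
            yield from leaves_with_faces(sub, face)
```

An "image" cord would be analogous: a leaf node carrying image data instead of text, which the display walk renders inline.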

How to read such files?

How does one know to use this structuring for a given file? Just about as now: auto-mode-alist tells what mode to use, and a "mode" file-local variable can supplement this mechanism. Just let modes control the read and hand the buffer a super-CORD_from_fn object instead of a plain-text cord object.

A file could be visited as plain text by another command, `find-file-as-text'.


1 Quick summary: Cords are neat and mondo efficient. Emacs, for all its greatness, uses a buffer gap, which is much weaker. But emacs text isn't just text; it has properties and markers and other useful stuff. To make that work, we'd have to add some stuff onto cords, but it's doable.

22 January 2011

Structure of emacs text

The effective structure of emacs text

Emacs wants to structure text

Emacs is not a structural editor, but it structures text a lot. Right now I'm writing in org-mode, and in this one buffer I'm looking at maybe 3 dozen distinct meaningful regions displaying in 8 different faces. And that's just what I've written so far, here at the third paragraph.

Furthermore, there are outlines, ewocs, widgets, overlays, and stretches of text that have their own keymaps, or faces, or actions before and after insertion, etc. On the emacs-devel list, they were recently proposing "islands" of text that would nearly have their own modes.

But the structure is a trick

Yet except for overlays, they are all tricks. They are faked by careful control over emacs' interaction with the text "inside" the fake objects. This can involve controlling point motion, regions, insertion, marks, display, sticky properties, etc. As an elisp programmer, IMO this need to fake structure contributes more complexity to elisp code than any other single factor does.

It also makes for fragile interactions. A user can accidentally transgress the structural assumptions, for instance by splitting a headline in an outline. Sometimes I do. It's hard to prevent.

Flexibility is important too

On the other hand, there's something to be said for that situation. It's flexible. It doesn't limit you to a predefined set of types and operations on them. It doesn't need to predict what sort of structure will be useful, because it doesn't provide any. It just hands the Elisp programmer a set of basic tools with which to build structure.

The dimensions of useful text structure


Solid objects

On the one hand, there are structures that want to be real objects, with an inside and an outside. For example, an ewoc or a widget.

There's no such thing as half of one of these. Half an ewoc makes no sense. If you kill half the text inside a widget, you're left with a complete widget that has less text.

We don't want to cross scope on these objects at all. A region with (say) point inside a widget and mark outside is not generally useful.

Not solid: Mere stretches of text

On the other hand, there are structures that want to be just stretches of text. For example, ordinary text even if you have marked a region in it, or given some part of it a special face or property.

These have no intrinsic inside and outside, nor any strong intrinsic structure. In fact, these things are scarcely "things". We're just pointing at some length of text, asking whether it's an object, and getting "no".

Semi-solid objects

And there are intermediate cases. For example, an item in an outline. For a less solid example, a word or paragraph in a text mode.

These still want to have an inside and an outside, but it's editable. If you delete the whitespace between two words, you've got just one word.

Holds data?

If there's a text-containing object, can it hold data in addition to the text itself? In emacs, the answer is always "yes". Text can have properties, which can be of any type.

So the question is not "does it hold data?", but "does it hold data in a way that covaries with the (pseudo)type of object it is?"

For ewocs and widgets, the answer is clearly yes. Even for outline items, the answer wants to be yes. For instance, org items can have properties, and all outline items have an implied "depth" property.

Has behavior?

Again, in emacs the answer is always "yes", and of course there's the caveat that behavior is really due to commands and not objects and we need to rule out buggy commands that don't understand the given object.

So again we need to refine the question. "Does it, in conjunction with the set of commands appropriate to it, behave in a way that covaries with the (pseudo)type of object it is?"

Again, for the solid objects the answer is clearly yes, and "yes" also seems correct for the semi-solid objects.



Lightweight objects

For instance, words and paragraphs. Emacs deals specially with those in various ways. But it often does so without remembering the lightweight object as an entity. For instance, word constituents are defined by the syntax table.


Heavyweight objects

For instance, widgets. Also ewocs.

In between

Of course there's a whole spectrum of weight in between widgets and words.

Impact on buffer structure


(This is not a category). The other dimensions were keyed by object type, but for this dimension ISTM it makes more sense to key by mode type.


Fully structured

In some modes, the buffer wants to be structured from beginning to end. For instance, dired or gnus.

Yanking in unstructured text would just confuse the mode. Often the buffer is read-only, even if user operations modify it, so that one can't mess up the structured text at all.


Unstructured

Some modes expect essentially no structure. For instance, fundamental mode. One can yank in arbitrary text; it's nothing special. Now the buffer has more text in it.

Text-mode is another example, now a little more structured. It can be viewed as structured into paragraphs, but any text whatsoever qualifies as zero or more paragraphs.

Partly structured

Other modes are intermediate. For instance, outline mode, org mode, or almost any source code mode.

These modes generally try to cope with arbitrary text. They don't try to prevent killing or yanking. But they also treat some text specially or give it extra meaning - for outline modes, it's headlines and stars; for source code, it generally includes comments, code, and strings.

So the user is free to kill or yank, but needs to be somewhat careful and needs to understand the meaning of various types of text.

The picture

So as I see it, the picture is one of wanna-be objects of varying sizes floating around in emacs buffers. The heavier ones are trying to be real objects, the lighter ones aren't (much). Some modes want to be made up of text, others really want to be a list or tree of objects.

In the next post, I plan to build on that.

16 January 2011

Followup to an idea provoked by Crisis Economics


This is a followup to an idea I recently posted provoked by Roubini and Mihm's Crisis Economics.

Is there another lurking moral hazard?

This was prompted by James' comment. James asked about the terms of the insurance, and I answered that it's basically "if bank X dies, mail checks to its account holders".

But that made me think, isn't there a moral hazard there? When a bank seems soon to fail, what keeps bankers and customers from colluding to inflate the amount of the deposits? Then insurance pays the customer money that he never actually lost, and the customer promises to secretly split it with the banker.

It's a type of fraud, so it'd be illegal. But we'd want structural protections too.

Perhaps the answer is to set the terms so that very recent deposits are not covered, or are less covered. Alternatively, recent deposits might be charged higher rates.

Can the premium rates zoom high?

I said that the premiums were set by auction, and the auction recurs. Presumably as it becomes clear that a bank will fail, the bids quickly rise to rates that clear out the accounts in the time remaining.

But this would hardly be fair to depositors. Rates need to be held at previous levels at least long enough for a consumer to see the new prices and possibly leave.

Why is banking different than most consumer decisions?

This was prompted by Scott's comments.

We trust consumers to make their own decisions on many purchases and services. Arguably we should trust them much further. So why shouldn't we ditch all government intervention, even as minimal as I proposed, and just leave it all to individual consumers' judgement?

Some reasons:

Banking is a credence good

Opening a bank account isn't like buying gas (a search good). It's not even like buying a car (an experience good). It's a credence good - you don't know whether it's bad till it burns you.

(Setting aside the fact that because of the govt, banking consumers don't actually get burned, taxpayers do)

Banking does not alert the consumer

Buying a car, because it obviously costs you a fair chunk of money, alerts most reasonable consumers to carefully gather information and weigh their options.

Putting the first $100 into a bank account does not. Neither does the next single check you put in, and so forth. So there's at least a "frog in boiling water" factor.

There's also the fact that, by their very nature, banks try to assure customers that they are extremely secure. That's why they have all that expensive furniture. Clearly this successfully reassures many people, and being reassured is the opposite of being alerted.

You could say that pages of legal documents alert the consumer. Sure, but that only goes so far. Even car purchases, my example of a very alert purchase, are felt to require lemon laws that restrain or override the legalese "agreement".

The consumer will lose an obfuscation arms-race

It should come as no surprise that the financial community can create enormous amounts of obfuscation that conceal far more than they reveal. Non-experts should not be required to penetrate it.

14 January 2011

Very Verbose Mode

Very, very, very verbose-mode

A lot of command line programs have what's called verbosity. That controls how much the program tells you about what it's doing. It usually looks something like this:


     Send verbose output to standard error describing what [Program]
     is doing.  Using `-v' or `--verbose' increases the verbosity by
     one; using `--verbose=N' sets it to N.

If the verbosity is low, it might print out something like:


or even nothing at all, but if it's set high, it might print out something like:

Initializing arrays 
Initializing globals  
Reading global init file /etc/acme
File ~/.acmerc not found

and so forth.

So I wondered, just how high can you set it? Only one way to find out: try it! So:

% acme --verbose=100

and here's the output:

Welcome to the Acme utility!

First thing I'm going to do is initialize.  Haven't done it yet, I'm
just telling you what I'm going to do.  Here goes!

I'm initializing.

OK, so far so good.  

Now that I've started initializing, first thing I'm going to do is set
up the arrays.  But first I'm gonna tell you a little story about
three cowboys in the 1890's.  Now these three cowpokes was on the
Wyoming trail - you know, that's a long trail.  Runs clear from, well,
I don't rightly know, but anyways, it's pretty long.  Now one cowboy
says to the others

and so forth. Lots and lots of so forth.

11 January 2011

A thought provoked by Crisis Economics

An idea provoked by Crisis Economics

I'm reading a book called Crisis Economics by Nouriel Roubini and Stephen Mihm. It's about the economic crisis and the bailout. So far, it's unimpressive and tends anti-capitalist and anti-Austrian. But it gave me an idea.

The motivating problem

In the subchapter "Moral Hazard" (page 68 ff) they describe why various participants in the world of finance had no incentive to do the right thing: first the principal-agent problem, then the moral hazard experienced by shareholders, who were happy to gamble with mostly other people's money. Finally they say (page 70):

In theory, one final firewall exists to keep moral hazard in check: the people who lend money to banks and other financial firms. If any party has a strong incentive to monitor banks, they do. After all, they stand to lose their shirts if the bank does something stupid. Unfortunately, this is another example of the law of unintended consequences. Funds lent to most ordinary banks come in the form of deposits. However, most deposits are subject to deposit insurance. So even if a bank recklessly gambles with depositors' money, the depositors can sleep well at night knowing that deposit insurance will make them whole. That removes any incentive for them to take actions that might punish the bank for its bad decisions.

In other words (this is me talking now), FDIC is concealing a price signal. Had this signal existed and discomfited depositors, the problem would have been essentially self-correcting. But FDIC is also needed to prevent bank runs. It got me thinking, is there any way to have that price signal and yet mostly have protection against bank runs?

The idea

And then I thought, we need a real market signal. And it can't come from the bank, because the case in which it must pay, bank failure, is exactly the case in which it can't pay. That leaves just two interested parties: depositors (in aggregate) and insurers.

So instead of FDIC, let private insurers compete. They would bid the lowest rate they're willing to charge to insure a given bank's deposits, for instance "0.56% per year". The winner gets cash flow at that rate and the obligation to make deposits good if the bank fails. The bank in question is not a party to this auction, and does not set the terms.

ISTM there's no need for them to negotiate directly with individual depositors - that'd be impractical and the situation would be dominated by depositors' individual lack of information and negotiating skill.

The relation between bank and depositors would almost sort itself out as long as depositors aren't locked into long-term arrangements. Of course banks will pass the cost on to depositors. Depositors presumably go elsewhere if terms are too unfavorable.

The rate information will make its way to depositors as a matter of course. Nevertheless, since their use of this information is the whole point of this exercise, it probably should be made public and brought to each depositor's attention, though most will ignore it.
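The core of the mechanism above can be sketched in a few lines. Everything here - the insurer names, the rates, and the function name - is a hypothetical illustration, not part of the proposal's specifics:

```python
# Minimal sketch of the deposit-insurance reverse auction described above.
# Insurer names, rates, and function names are hypothetical.

def run_auction(bids):
    """bids maps insurer name -> annual rate (fraction of deposits).
    The lowest rate wins: that insurer earns the rate on deposits and
    must make depositors whole if the bank fails."""
    winner = min(bids, key=bids.get)
    return winner, bids[winner]

bids = {"Acme Re": 0.0056, "Benthic Mutual": 0.0071, "Cut-Rate Ins": 0.0049}
winner, rate = run_auction(bids)
print(winner, rate)  # Cut-Rate Ins 0.0049
```

Note that, as in the proposal, the bank itself never appears as a party to the auction; it only feels the result through the rate.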

Some issues

Regress: The moral hazard of insurers defaulting

It risks creating a new moral hazard: Insurers that can't actually bear the potential losses. They might be speculators hoping that lightning won't strike on their watch, or their fate might be tied to that bank, so that if the bank fails, so do they.

How are we to know which insurers can actually bear the potential losses? Better, how can we know it without a bureaucracy deciding it?

We appear to have just moved the problem from the bank to the insurer. So let's re-apply the same solution there: let insurance in turn require coverage against defaulting, on basically the same terms. The cash flow to the second insurer comes from the first insurer's cut, the cost is not (directly) borne by the depositor.

Presumably, for unreliable would-be insurers, their own coverage takes so large a cut of their profits that they can't effectively compete in the bidding. For reliable insurers, presumably the situation is the reverse, so they can make a profit. Since there is this diminution in rates, we can be sure that an infinite regress of metaN-insurers does not result in an infinite rate. Instead, for a diminution factor R (with 0 < R < 1), the overall rate is about:

`base rate' * (1 + R + R^2 + ...) = `base rate' / (1 - R)

So we can simply let it regress to extinction.
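The regress of rates is just a geometric series, which is easy to check numerically. The base rate and diminution factor below are made-up figures:

```python
# Overall rate paid when each meta-level insurer's rate is a factor R
# (0 < R < 1) of the level below it. Numbers are made up.

def overall_rate(base_rate, R, levels=None):
    """base_rate * (1 + R + R^2 + ...): closed form base_rate / (1 - R)
    when levels is None, otherwise a finite sum of `levels` terms."""
    if levels is None:
        return base_rate / (1 - R)
    return sum(base_rate * R**k for k in range(levels))

r, R = 0.0056, 0.3
print(overall_rate(r, R))       # closed form, r / 0.7
print(overall_rate(r, R, 50))   # a finite regress converges to the same value
```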

That solves half the problem. But we still might be looking at a circular situation. A bank might (openly or otherwise) be its own insurer and meta-insurer and meta2-insurer etc.

So we'd like to know in advance whether a bank and its insurer are truly independent. Prediction markets may help us. For each bank-insurer pair, there would be a prediction market on whether the insurer survives a bank collapse. It pays "No" if both default, "Yes" if the bank defaults but the insurer doesn't, and is not resolved if the bank does not fail. The market price then gives us the probability that the insurer survives the bank's failure; a figure between 0.0 and 1.0. Call it p(!F_2|F_1).

Now calculate the total coverage required, assuming a single insurer, as:

`total deposits' / p(!F_2|F_1)

In other words, the bank is required to have coverage up to the total amount of deposits, but the amount of coverage from a given insurer only "counts" as:

`face value of coverage' * p(!F_2|F_1)

Similarly, for a meta-insurer, have a prediction market on whether it survives default by both bank and insurer.

Making these markets would add some overhead, but not necessarily much, since at any instant only one such market is required over the bank's entire holdings.
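The coverage-counting rule above can be sketched directly; here p stands for the market-implied probability p(!F_2|F_1), and all numbers are made up:

```python
# Sketch of the coverage-counting rule: an insurer's face value only
# "counts" after discounting by p, the prediction market's probability
# that the insurer survives the bank's failure, p(!F_2|F_1).

def counted_coverage(face_value, p_survive):
    return face_value * p_survive

def required_face_value(total_deposits, p_survive):
    # Counted coverage must reach total deposits, so a single insurer
    # needs face value of total_deposits / p(!F_2|F_1).
    return total_deposits / p_survive

deposits = 1_000_000
p = 0.8  # hypothetical market-implied survival probability
need = required_face_value(deposits, p)
print(need)                       # 1250000.0
print(counted_coverage(need, p))  # counts as the full deposits
```

A shakier insurer (lower p) must post proportionally more face value, which is exactly the price signal the proposal wants.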

No signal until very late

Another worry is that the rates may not reflect the actual situation until very late. Insurers who are aware of a developing problem at a bank might think that they can pull out in time.

The problem is that the contract may have less latency than the situation. So let's add a time rule: a formerly successful bidder who is now outbid retains a continually diminishing fraction of the insurance contract for some time. Ie, it still collects that fraction of the premium flow, and still has to provide coverage for that fraction of total deposits. The length of the phase-out interval is a free parameter in this proposal.

Without such a rule, outcomes would be sensitive to the exact time at which the bank fails; with it, an insurer who sees trouble coming can't escape liability just by pulling out at the last minute.
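Here is one possible phase-out schedule for the time rule. The linear shape and the 90-day window are my assumptions; the proposal deliberately leaves the interval as a free parameter:

```python
# One possible phase-out schedule for the time rule. The linear shape
# and the 90-day window are assumptions; the proposal leaves the
# interval as a free parameter.

def retained_fraction(days_since_outbid, window_days):
    """Fraction of the contract an outbid insurer still holds: it keeps
    collecting this fraction of the premium flow and stays on the hook
    for this fraction of total deposits."""
    if days_since_outbid >= window_days:
        return 0.0
    return 1.0 - days_since_outbid / window_days

WINDOW = 90
print(retained_fraction(0, WINDOW))    # 1.0: just outbid, fully liable
print(retained_fraction(45, WINDOW))   # 0.5: halfway through phase-out
print(retained_fraction(120, WINDOW))  # 0.0: fully phased out
```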

Collusion keeping the little guy out

Having arranged all this, we wouldn't like to see it become the sole preserve of a de facto cartel with captured regulatory agencies keeping others out.

This is apparently at odds with our need to keep lightweight insurers out. But fortunately my solution to the lightweight insurers does not rely on bureaucracy. Of course nothing will keep regulators from regulating, but there's little in this proposal that requires them to do so.

I'd add "explanation" to Taylor and Dennett on causality

I just finished reading Who's Still Afraid of Determinism? Rethinking Causes and Possibilities, a paper by Christopher Taylor and Daniel Dennett on causality.

They give arguments that illuminate and motivate their conclusions. It's well done; I've come to expect this from Dennett.

But I think they missed a better theory of causality. I say that with a certain amount of trepidation. Dennett is very likely the deepest and most sure-footed philosophical thinker of our age. Who am I to tell him what he missed? But nevertheless I will say what I think.

Short summary of what they said

The paper opens with some views that should be uncontroversial. Determinism and causality relate to the idea of counterfactuality ("could things have happened differently?"), as in Judea Pearl's counterfactual conditionals. Counterfactuality relates to possible worlds; not to every possible world, or even to every physically possible world, but to some relevant set of possible worlds (X). They admit the bounds of this set are vague1.

They spend a large part of the paper trying various means of delineating X. Do only the most similar possible worlds count? No. Use the "narrow method", "conditions as they precisely were"? No. The "wiggle method"? They finally settle on that one, but only by process of elimination. And does sufficiency count, or necessity? Usually necessity, sometimes sufficiency.

They end up with mostly a potpourri of heuristics instead of rules. They are not fully satisfied with this situation; they call it "sometimes irksome".

What I'd have changed

For some reason that I don't see, some people feel that a theory of scientific explanation should be built on top of causality. My suggestion is that the derivation should run exactly the opposite way: Derive causality from explanation, not vice versa.

When I say "derive causality from explanation", I mean something like this:

  • Explanation is understood essentially as in the statistical relevance model (SR)
    • But for technical reasons, I say that an explanation structures feature-space, where others say it "partitions" feature-space. Think shades of grey instead of black-and-white.
  • Conditions of explanatory goodness: Explanations are better as they:
    • Are stronger statistical explanations.
    • Apply in more worlds2 (ie, larger X)
    • Apply in a larger feature space.
    • Apply to more phenomena.
    • And have fewer statistical tails. So enlarging the set of worlds, feature-space, or explananda above by cherry-picking is cheating.
  • A good causal model is just:
    • A good explanation
    • Applied in a real world. Ie, there's no causality for things that didn't really happen, nor for platonic objects such as math and logic. There's room to modally mediate this condition, so we can talk about causality in counterfactual worlds, just more carefully.

      NB, it is only the application that is about one real world. The explanation may be considered across many possible worlds.
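The statistical-relevance core of the model can be shown with a toy computation (my example, not from the paper or the SR literature): a factor C is explanatorily relevant to an explanandum E when conditioning on C shifts E's probability:

```python
# Toy statistical-relevance computation (my example, not from the paper).
# Worlds are dicts of boolean features; a factor C is explanatorily
# relevant to E when P(E|C) differs from the baseline P(E).

worlds = [
    {"C": True,  "E": True},
    {"C": True,  "E": True},
    {"C": True,  "E": False},
    {"C": False, "E": False},
    {"C": False, "E": False},
    {"C": False, "E": True},
]

def prob(ws, pred):
    return sum(1 for w in ws if pred(w)) / len(ws)

p_e = prob(worlds, lambda w: w["E"])
p_e_given_c = prob([w for w in worlds if w["C"]], lambda w: w["E"])
print(p_e, p_e_given_c)  # conditioning on C raises P(E) from 0.5 to 2/3
```

The "structures feature-space" refinement above would replace these hard booleans with graded features, but the hard version is enough to show the shape of the comparison.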

Applying it to what they said

I'll come back to why I like my treatment of explanation, but first back to Taylor and Dennett. How does this theory improve on their treatment of causality?

  • It's clearer how to delineate the relevant set of worlds (X). By the conditions of explanatory goodness, we want as large a set as is consistent with a good explanation, but (by "fewer tails") we also rule out cherry-picking just those worlds that help the explanation. An overly large X is ruled out as well, because it would dilute and weaken the statistical relevance.
  • It doesn't appeal to similarity of worlds as such. That protects it from the Nixon Nuke argument, which argued that "pressing the button" could not be said to cause a nuclear exchange because the most similar worlds were those in which, perhaps by electrical fault, no nuclear exchange occurred.
  • It is a not a set of heuristics but a bona fide theory.
  • It provides or improves on all their causality factors (page 9)
    • The deciding condition is no longer a mixture of sufficiency and necessity, but is always statistical relevance.
    • It's consistent with the sharpshooter argument, where a sniper who is a poor marksman is said to cause the death of his victim even though his odds of hitting were low. This was their argument for ranking necessity above sufficiency.
    • It's also consistent with the king-and-mayor argument, which was their argument for why sufficiency was still sometimes the determining factor.
    • "Truth of explanans and explanandum in the real world" - trivial both here and there.
    • The "Independence" condition has always been part of the SR model; it need not be re-introduced.
    • The "Temporal priority" condition and the "Miscellaneous further criteria [or heuristics]" that are mentioned all appear to develop naturally from the explanation-based model.

More about my explanation theory

But isn't causality needed first?

One might be tempted to object that causality is needed to avoid bizarre explanations. Eg since patterns in the future can't possibly cause past phenomena, they can't be good explanations of them. So do we need causality first?

Not so fast. Look at the possible cases for a given would-be explanatory pattern in the future:

  • It is just a fluke.
  • It is not a fluke, but is part of a larger SR pattern that directly encompasses past patterns; the same features are involved.
  • It is not a fluke, but it is part of a larger SR pattern, one that indirectly encompasses past patterns. Different features are involved, but there is still a causal path from part of the pattern to the phenomena.
  • It is none of the above. This is the crucial case; the other cases just delineate it.

Taking the cases individually:

  • It is a fluke. Then you will not generally find strong statistical relevance for it. One could commit a statistical sin and cherry-pick fluke cases, but that's just cheating. No problems with this case.
  • It is part of a larger SR pattern that directly encompasses past patterns; the same features are involved.

    Then there is no problem with causality. I've already said that we favor explanations that apply in more worlds and/or in a larger feature space. No problems with this case.

  • It is part of a larger SR pattern, one that indirectly encompasses past patterns; different features are involved, but there is still a causal path from part of the pattern to the phenomena. Here the causal path is mediated by predictive intelligence. That doesn't make it any less a causal path. No problems with this case.
  • It is none of the above. It is no fluke, but either doesn't encompass past patterns at all, or doesn't in any way that any intelligence could act on. In other words, we're actually seeing the future affecting the past.

    If we actually observed cases like this, it would not mean that this theory had approved a bad explanation. It would mean that we were seeing time travel and must change our ideas of causality. Of course we don't observe any such thing. So no problems with this final case either.

Another benefit of the conditions of explanatory goodness

The conditions of explanatory goodness remove certain objections to SR, such as the barometer problem. The barometer problem is where we supposedly can't tell which is the case:

  1. The approaching storm explains the falling barometer, or
  2. The falling barometer explains the approaching storm.

Case #2 is only on even footing with case #1 in worlds where we can find no case of a storm without a barometer. But that's a very minute sliver of all possible worlds. So the conditions of explanatory goodness defuse the barometer problem.
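This asymmetry can be made concrete with a toy set of worlds (my illustration): once X includes a storm with no barometer present, explanation #2 fails, while explanation #1 still applies in every world that has a barometer:

```python
# Toy version of the barometer asymmetry (my illustration). Once the
# relevant set of worlds includes a storm with no barometer present,
# "falling barometer explains storm" fails, while "storm explains
# falling barometer" still applies in every world that has a barometer.

worlds = [
    {"storm": True,  "has_barometer": True,  "falls": True},
    {"storm": False, "has_barometer": True,  "falls": False},
    {"storm": True,  "has_barometer": False, "falls": False},  # no barometer
]

# Explanation 1: the storm explains the barometer's behavior; it only
# needs to hold where a barometer exists.
exp1_holds = all(w["falls"] == w["storm"] for w in worlds if w["has_barometer"])

# Explanation 2: the falling barometer explains the storm; it must cover
# every storm, and fails in the barometer-less world.
exp2_holds = all(w["falls"] for w in worlds if w["storm"])

print(exp1_holds, exp2_holds)  # True False
```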

1 More than vague, they are ambiguous; more than one X can be right even when describing the same situation. I actually got this from a linguistics paper on counterfactuals. To adapt an example from the paper, suppose that Austin has taken a few cheap-o golf lessons and misses a putt. A watching golf instructor might truly say:

"If Austin had taken the more expensive golf lessons, he would have made that putt."

But Austin might also truly say:

"If I had taken the more expensive golf lessons, I could only have afforded one lesson, so I still would not have made that putt."

So there can be more than one reasonable relevant set of worlds.

2 Or I could say more technically, explanations that apply in a set of possible worlds having a larger measure. But let's not be pompous.