## Dennett on free will

I just finished reading Daniel Dennett's Freedom Evolves. It's about free will and freedom. His overall point is that over evolutionary time, we have acquired more, and more powerful, free choices.

## "Stop that crow!"

Throughout the book, Dennett seems to worry that his ideas on free will will dispel some useful illusion. He often uses the phrase "Stop that crow!", which he borrows from the movie Dumbo, where the elephant can fly as long as he's holding what he thinks is a magic feather.

## Free will

Now, I consider "free will" to be a chimerical concept in its common usage. The term bundles several very different things together:

• Subjectively making choices.
• Lack of physical determinability, as opposed to observing a person's brain in complete detail and predicting the "free" choice he makes later that day.
• Status as an automaton or not.
• Moral responsibility, as opposed to "you can't blame me because somebody or something made me do it".

Dennett never dissects the strands of meaning in this way. But in chapters 2 and 3, he demonstrates that there is no essential connection between free will and lack of physical determinability. He also refutes the (silly IMO) position that quantum indeterminability somehow makes for free will.

He motivates the non-connection between choices and physical determinability with the example of Conway's game of Life.

Although Life is "really" a pixelated rectangle with every pixel on an equal footing, Life experimenters distinguish and name higher-level patterns, "entities" within the game, one might say. Experimenters also make design choices that often include avoiding harm to their creations, e.g. by placing blocks where gliders might smash into them. Avoiding is fundamentally a choice. Entities within the game itself could theoretically be designed to react to incoming disruptions by making avoidance choices.

Now, Life is actually Turing-complete: given a large enough space, you can build a Universal Turing Machine in it. And so Life entities' avoidance choices could theoretically reach any specified standard of intelligence.

And of course Life is fully pre-determined. So entities in Life could make real choices in a fully pre-determined world.
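To make the determinism point concrete, here is a minimal Life stepper in Python (my own illustration, not from the book). The update rule is a pure function of the grid, so the entire future, gliders and all, is fixed by the initial configuration:

```python
# Minimal Conway's Life: the next grid is a pure function of the current
# one, so two runs from the same seed are always identical.
def neighbors(cell):
    x, y = cell
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live):
    """live is a set of (x, y) cells; return the next generation."""
    candidates = live | {n for c in live for n in neighbors(c)}
    return {c for c in candidates
            if len(neighbors(c) & live) == 3
            or (c in live and len(neighbors(c) & live) == 2)}

# A glider: a named higher-level "entity" that the rule knows nothing about.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

# Determinism: the same seed always yields the same history.
run1 = run2 = glider
for _ in range(4):
    run1, run2 = step(run1), step(run2)
assert run1 == run2
# After 4 steps the glider reappears translated by (1, 1).
assert run1 == {(x + 1, y + 1) for (x, y) in glider}
```

The glider is a pattern that we name and track; the rule itself knows nothing about it, which is exactly the two-level view being exploited here.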

## I blogged about chapter 3 without knowing it

Chapter 3 (Thinking about Determinism) basically repeats "Who's Still Afraid of Determinism? Rethinking Causes and Possibilities", a paper on causality that he wrote with Christopher Taylor. By coincidence, I blogged about it earlier¹.

## The Evolution of Moral Agency

The chapter I liked best was chapter 7, The Evolution of Moral Agency. In it he advances a theory, new to me but pulled variously from Robert Frank and George Ainslie, that we have moral emotions in order to resist temptation. And why resist temptation? It's actually in our long-term interests (Frank). And why not just directly act on our long-term interests? Because temptation increases hyperbolically as it gets nearer (Ainslie), and we can counter it best with a competing stimulus, namely moral emotions (Frank), and that works best if it's not under conscious control (Dennett himself).

This theory has the ring of truth to it.
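Ainslie's hyperbolic-discounting claim is easy to see with toy numbers (the function and the figures below are my illustration, not Ainslie's). With hyperbolic discounting, a small-but-soon reward overtakes a larger-but-later one as it gets close, which is the preference reversal that moral emotions are supposed to counter:

```python
def hyperbolic(amount, delay, k=1.0):
    """Ainslie-style hyperbolic discounting: value falls off as 1/(1+k*delay)."""
    return amount / (1.0 + k * delay)

small_soon = 10.0   # the temptation
large_late = 30.0   # the long-term interest, 10 time units after the temptation

# Far in advance, the larger, later reward is preferred...
assert hyperbolic(large_late, 20) > hyperbolic(small_soon, 10)
# ...but just before the temptation arrives, it briefly wins instead.
assert hyperbolic(small_soon, 0.1) > hyperbolic(large_late, 10.1)
```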

## The Future of Human Freedom

I found his final chapter weak (as Dennett goes). He's concerned with "Holding the line against creeping exculpation". I.e., as we learn more and more about why people make the choices they do, must that mean we can blame them less and less?

Given Dennett's past writings, I was surprised that he didn't bring his previous ideas to bear more. Instead, he writes a woolly final chapter, littered with poorly chosen real-world examples that he distances himself from.

### What I would have said

Rather than merely criticize, I will offer my own answer. I would have said, leaning on Dennett's own earlier ideas on design, that we are blameworthy or praiseworthy exactly for occurrences that we designed.

Important points:

• I did not say "declaratively designed". Don't imagine an inventor with a blueprint and a contraption run amok, or anything of the like. I mean "design" in the sense that Dennett has used it previously, the sense in which design permeates life. In particular:
  • Lack of declarative faculty is no bar to responsibility. My dog can't describe or reflect on her choices, but she can properly be praised or blamed. A little. To a doggy standard of responsibility, not a human one.
  • Self-deception is no bar to responsibility.
• An important but subtle distinction: we are responsible for the design that was realized, not for the realized outcome. That means that "wiggle room" between design and realized outcome is no bar to responsibility. In particular, I contemplate:
  • Probabilistic tradeoffs, regardless of which outcome is realized. A drunk driver, even though she happened not to hit anyone, has done a blameworthy thing.
  • Contingent outcomes. So bureaucrats, when one says "I just recommended X" and the other "I just followed the recommendation", don't avoid responsibility. They each partly designed the outcome, contingent on the other's action.

The virtues of this definition:

• It defeats the creeping exculpation that concerns Dennett.
• It doesn't bar exculpation where there's really no mens rea.
• It's fuzzy on just the right things:
  • When it's fuzzy whether an occurrence is the realization of some design.
  • (The flip side of the previous) When it's fuzzy whether a design was realized (again, distinct from whether an outcome was realized).
• It allows multiple responsibility for the same occurrence, in situations like the example that Dennett gives on page 74.
• It gives the right answer when a designer is a surprising entity such as a group, an organization, or a system.
• We don't have to know the responsible person's or group's inner workings in order to assign responsibility; we can treat him/her/it as a black box that effectuations of designs come out of. Black-boxing generally makes analysis easier.
• We can understand "attempts" and "incitement" and similar qualified sins in this framework.
• We can understand lack of responsibility due to being deceived in this framework.
• It distinguishes our responsibility from that of non-choosers (fire) and weak choosers (animals) in a principled, non-question-begging way.

## Footnotes:

1 Short summary of my position: When trying to define causality, we should really be looking at quality of explanation.

## Kernel Suggestions

Having gotten neck-deep in coding Klink, most recently listloop, I've found a few small areas where IMHO the Kernel specification might be improved.

### Let finders return `(found? . value)`

Again, John considered this with `assoc` (6.3.6) and decided the other way. But `assoc`'s case doesn't generalize. Successful `assoc` results can't be nil because they are pairs, but in general, `nil` is a possible object to find.

Already I find myself writing code that wants to use a `(bool . value)` return, like:

```scheme
($define! (found . obj) (some-finder))
($if (and?
        (not? found)
        (some-check))
    ($define! (found . obj) (another-finder))
    #inert)
```

It's very convenient. This regular form also makes it easy to wrap finders in control constructs that understand whether the result of a finder is available. I would ask that finders in general return a `(bool . value)` pair, including:

• `assoc`, or a reserved name associated with `assoc`.
• `assq`, or again a reserved name associated with it.
• `$binds?` + `(eval SYM ENV)`
  • Provided in Klink as `find-binding`.
• The accessors built by `make-keyed-dynamic-variable` and `make-keyed-static-variable`. With this, it is easier to determine whether a known keyed variable is bound or not, but raising an error if it's not requires an extra step, i.e. the reverse of the current definition. Either could be defined in terms of the other.
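The underlying point isn't Kernel-specific. Here is a Python sketch (with hypothetical finder names of my own) of why a bare return value can't distinguish finding nil from finding nothing, while a `(found? . value)` pair can:

```python
def assoc_plain(key, alist):
    """Return the matching value, or None -- ambiguous when None is stored."""
    for k, v in alist:
        if k == key:
            return v
    return None

def assoc_found(key, alist):
    """Return a (found?, value) pair, the analogue of Kernel's (bool . value)."""
    for k, v in alist:
        if k == key:
            return (True, v)
    return (False, None)

alist = [("a", 1), ("b", None)]   # "b" is deliberately bound to nil/None

# The plain finder can't tell these two cases apart:
assert assoc_plain("b", alist) == assoc_plain("missing", alist)
# The (found?, value) finder can:
assert assoc_found("b", alist) == (True, None)
assert assoc_found("missing", alist) == (False, None)
```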

## More on List combiners in Klink

Since the last post, I've implemented a sort of general listloop facility in Klink. It consists of:

• Two new T_ types
• A number of predefined styles (Two for now, easily expanded)
• A C function to create a listloop
• A C function to iterate a listloop

### What it accomplishes

• It takes the place of a number of specialized loop facilities.
• It can potentially turn most map-like operations into streaming operations by changing just a few parameters. I haven't written that part yet, because promises are implemented but not yet available in the C part of Klink.
• It can potentially provide somewhat more powerful list-loop functionality almost directly in C, again by just passing parameters: for instance `assoc`, `find`, and `position`.
• It appears possible to reason about it in a fairly general way, for optimization purposes.

### Dimensions of loop behavior

Last time, I spoke of two dimensions of looping: Family and Argument Arrangement. And a mea culpa: I was wrong to group `$sequence` with `for-each`. They are similar, but `$sequence` returns its final value unless the list is circular; `for-each` never returns a value.
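In Python terms (an analogy of mine, ignoring the circular-list caveat), the distinction looks like this:

```python
def sequence_like(thunks):
    """Evaluate each thunk in order and return the final value (like $sequence)."""
    result = None
    for t in thunks:
        result = t()
    return result

def for_each_like(op, items):
    """Apply op to each item in order, for side effects only (like for-each)."""
    for x in items:
        op(x)
    return None   # the analogue of #inert: never a useful value

log = []
assert sequence_like([lambda: log.append("a"), lambda: 42]) == 42
assert for_each_like(log.append, ["b", "c"]) is None
assert log == ["a", "b", "c"]
```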

On further analysis, I ended up with more dimensions than that. I considered 9 dimensions of loop behavior, and ended up with 6 style parameters and 6 loop instantiation parameters, plus 2 parameters that are always supplied outside.

The dimensions of loop behavior are not independent. There are a few combinations that make no sense.

#### The dimensions themselves

From my design notes:

• Termination control
  • Cases:
    • When countdown is zero
    • When arglist is empty
    • When value is #t / is #f
      • But for this case we also need an empty or countdown condition.
      • Could be generalized to any eq? test, possibly useful for find.
    • (Unlikely to need a case for when arglist is empty except for N elements, since neighbors is figured out by count first)
• The effective operative return value
  • Cases:
    • Map: Cons of value and accumulator
      • Final value is the reversed accumulator
    • Boolean/reduce/sequence: Value itself
      • Final value is value
    • Each: No return value
• Terminator
  • Cases:
    • Usually none
    • Map: Reverse the value
      • That's just reverse
    • For-each: Give #inert
      • Just curry the K_INERT value
  • All can be done without reference to the loop object itself
• Treatment of the arglist(s). Cases:
  • Use a single element (car; cdr remains)
  • Use elements from multiple lists (car across; cdr across remains)
  • Use multiple consecutive elements ((car, cadr); cdr remains)
• Other arguments to the operative
  • Pass the accumulator too (reduce does this)
    • This is really passing the previous value, which is sometimes an accumulator.
  • Pass the index too
  • Pass the "countdown" index too
  • Make the value the first arg in a list
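As a sanity check that these dimensions really do compose, here is a sketch of a loop engine parameterized by a termination test, a value-combination step, and a terminator. All names are my own and the real thing is C, not Python, but instantiating the parameters differently yields map-like and `every?`-like behavior from one engine:

```python
def listloop(items, op, combine, done, finish, acc):
    """A generic list loop parameterized along a few of the dimensions above:
    done    -- termination control: (value, remaining items) -> bool
    combine -- how the operative's value enters the accumulator
    finish  -- the terminator, run once on the final accumulator
    """
    while items:
        value = op(items[0])
        acc = combine(value, acc)
        items = items[1:]
        if done(value, items):
            break
    return finish(acc)

# Map style: cons onto an accumulator, never stop early, reverse at the end.
def map_style(op, items):
    return listloop(items, op,
                    combine=lambda v, acc: [v] + acc,
                    done=lambda v, rest: False,
                    finish=lambda acc: list(reversed(acc)),
                    acc=[])

# every? style: the value itself is the result; stop early on false.
def every_style(op, items):
    return listloop(items, op,
                    combine=lambda v, acc: v,
                    done=lambda v, rest: v is False,
                    finish=lambda acc: acc,
                    acc=True)

assert map_style(lambda x: x * 2, [1, 2, 3]) == [2, 4, 6]
assert every_style(lambda x: x > 0, [1, 2, 3]) is True
assert every_style(lambda x: x > 1, [1, 0, 3]) is False
```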

## Regularities in list combiners in Klink

Yesterday I was writing list combiners for Klink, my interpreter for a Scheme-derived language called Kernel. I couldn't help noticing many regularities among them. Not just obvious regularities, such as `and?` / `$and?` / `or?` / `$or?`, but deeper ones that pervaded the set of list-oriented combiners. They all seemed to fall into one of a small set of families, and one of a small set of argument arrangements.

Here's a table of existing (or planned shortly) combiners that followed this pattern:

| Args \ Family | And | Or | Map | Each |
|---------------|-----|-----|-----|------|
| Non-op | `and?` | `or?` | `copy-list` | |
| Eval | `$and?` | `$or?` | `mapeval`¹ | `$sequence` |
| Operate on donut | `every?` | `some?` | `map` | `for-each` |
| Neighbors | `>?` et al | | `list-neighbors` | |

The families are as follows:

• and: Value must be boolean; stop early on #f.
• or: Value must be boolean; stop early on #t.
• map: Collect values, no specified order.
• each: Evaluate for side-effects, in order.

And the argument arrangements are:

• Non-op: Non-operative on given values.
• Eval: Evaluate the list element as a form.
• Rectangular: Operate on transposed rectangular args.
  • Takes 2 length-counts.
  • Unsafe itself, but used as a worker by safe combiners.
  • Name format: counted-NAME
• Unary: Operate on args to a unary operative.
  • Takes 1 length-count; the other is always 1.
  • Specializes "rectangular".
  • Name format: NAME1
• Donut: Operate on possibly infinite lists ("donuts").
  • Called through apply-counted-proc.
• Neighbors: Operate on list-neighbors.

## Some tempting extensions: argument arrangements

I also envision other argument arrangements that aren't part of the Kernel spec, but which I've found useful over the years:

### Ragged

This is the indeterminate-length variant of "Donut" and "Rectangular". It operates on "ragged" transposed args that need not all be the same length, and may in fact be streams. It produces a stream. I.e., each iteration it returns:

```scheme
(cons value ($lazy (start the next iteration)))
```
It terminates when any input stream or list runs out of arguments. It returns:

```scheme
($lazy (termination-combiner streams))
```


where `termination-combiner` is a parameter. That is done to reveal the final state, which the caller may be interested in.

It is allowed to never terminate, since a caller need not evaluate the entire stream.

Combiners of this type would have the same name as donut combiners plus "-ragged", e.g. `map-ragged`.
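Python generators give a convenient sketch of the intended behavior (names and details are mine, not proposed Kernel ones): produce values lazily until any input runs out, then yield whatever the termination-combiner makes of the leftover state:

```python
import itertools

def map_ragged(op, streams, termination_combiner):
    """Lazily map op across parallel inputs of unequal (possibly unbounded)
    length; when any input is exhausted, yield what the termination
    combiner makes of the leftover iterators, then stop."""
    iters = [iter(s) for s in streams]
    while True:
        row = []
        for it in iters:
            try:
                row.append(next(it))
            except StopIteration:
                # Reveal the final state to the caller.
                yield ("done", termination_combiner(iters))
                return
        yield ("value", op(*row))

# One input is infinite; the loop stops when the finite one runs out.
out = list(map_ragged(lambda a, b: a + b,
                      [[1, 2, 3], itertools.count(10)],
                      termination_combiner=lambda its: "ran out"))
assert out == [("value", 11), ("value", 13), ("value", 15), ("done", "ran out")]
```

Because the generator is lazy, it is also allowed never to terminate: a caller simply stops consuming the stream.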

### Indexed

Operate on transposed args plus an "index" argument. This is just providing an argument that already exists internally. It doesn't constrain the order of operations.

### Blackboard - probably not

Operate on transposed args plus a "blackboard" argument. Intended for providing and collecting results that don't fit neatly as mapped returns. It maintains the overall control structure of visiting each element of a list, but can collect results in more varied ways.

In its favor as an argument arrangement, most of the families combine with it to provide useful behavior: Stop on false, stop on true, or keep going for side effects.

Against it, it's not clear that it can co-operate coherently with the `map` family, which has no specified order of evaluation.

It also seems to require another argument arrangement underneath it.

And finally, it's not clear that it is sufficiently distinct from `reduce` to be worth having. And blackboard is an argument arrangement, while reduce is proposed as a family. That suggests that something is wrong with one or both.

So probably no to "blackboard". Some variant of "reduce" will likely do it instead.

## Some tempting extensions: Families

### Reduce

In reduce, the value cascades through each element. Though superficially related to the list-neighbors argument arrangement, it's not the same thing: it considers the elements one by one, not two by two. Its binary combiner gets two arguments only because one is an accumulator.
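The contrast is concrete enough to demonstrate (a Python sketch with my own naming): reduce's binary combiner pairs each element with a running accumulator, while a neighbors arrangement pairs adjacent elements with each other:

```python
from functools import reduce

xs = [1, 2, 3, 4]

# Reduce: elements considered one by one; the second argument slot is
# an accumulator, not another list element.
assert reduce(lambda acc, x: acc + x, xs, 0) == 10

# Neighbors: elements considered two by two, no accumulator involved.
def list_neighbors(xs):
    return list(zip(xs, xs[1:]))

assert list_neighbors(xs) == [(1, 2), (2, 3), (3, 4)]
# e.g. a >?-style neighbors combiner checks each adjacent pair:
assert all(a < b for a, b in list_neighbors(xs))
```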

In its favor as a family, it already comes in counted and donut versions.

Standing against it is that its counted form is abnormally a non-op arrangement instead of a unary or rectangular arrangement. But non-op can be considered a specialization of `unary` with `identity` as the operative.

And it's not clear that an `eval` arrangement could ever make sense. But again, the `eval` arrangement is a specialization of unary, with `eval` as the operative and the single argument as the form.

So I'll wait and see whether reduce can fit in or not. But it looks promising.

## Footnotes:

1 `mapeval` is internal, used by the C code for the `eval` operation. However, I see no problem exposing it, and it might be useful.