11 January 2011

I'd add "explanation" to Taylor and Dennett on causality

I just finished reading "Who's Still Afraid of Determinism? Rethinking Causes and Possibilities", a paper by Christopher Taylor and Daniel Dennett on causality.

They give arguments that illuminate and motivate their conclusions. It's well done; I've come to expect this from Dennett.

But I think they missed a better theory of causality. I say that with a certain amount of trepidation. Dennett is very likely the deepest and most sure-footed philosophical thinker of our age. Who am I to tell him what he missed? But nevertheless I will say what I think.

Short summary of what they said

The paper opens with some views that should be uncontroversial. Determinism and causality relate to the idea of counterfactuality: could things have happened differently? Counterfactuality is cashed out in counterfactual conditionals, as in Judea Pearl's work. Counterfactuality relates to possible worlds, though not to every possible world, nor even to every physically possible world, but to some relevant set of possible worlds (X). They admit the bounds of this set are vague[1].

They spend a large part of the paper trying various means of delineating X. Do only the most similar possible worlds count? No. Use the "narrow method", taking "conditions as they precisely were"? No. The "wiggle method"? They finally settle on that one, but only by process of elimination. And does necessity count, or sufficiency? Usually necessity, sometimes sufficiency.

They end up with a potpourri of heuristics rather than rules. They are not fully satisfied with this situation; they call it "sometimes irksome".

What I'd have changed

For some reason that I don't see, some people feel that a theory of scientific explanation should be built on top of causality. My suggestion is that the derivation should run exactly the opposite way: Derive causality from explanation, not vice versa.

When I say "derive causality from explanation", I mean something like this:

  • Explanation is understood essentially as in the statistical relevance model (SR)
    • But for technical reasons, I say that an explanation structures feature-space, where others say it "partitions" feature-space. Think shades of grey instead of black-and-white.
  • Conditions of explanatory goodness: Explanations are better as they:
    • Are stronger statistical explanations.
    • Apply in more worlds[2] (ie, larger X)
    • Apply in a larger feature space.
    • Apply to more phenomena.
    • And have fewer statistical tails. So enlarging the set of worlds, feature-space, or explananda above by cherry-picking is cheating.
  • A good causal model is just:
    • A good explanation
    • Applied in a real world. Ie, there's no causality for things that didn't really happen, nor for platonic objects such as math and logic. There's room to modally mediate this condition, so we can talk about causality in counterfactual worlds, just more carefully.

      NB, it is only the application that is about one real world. The explanation may be considered across many possible worlds.
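
To make this concrete, here is a minimal sketch in Python of the core quantity, statistical relevance, measured over a finite set of possible worlds. Everything in it (the World record, the uniform measure over X, the crude goodness score) is my own illustrative scaffolding, not machinery from Taylor and Dennett or from the SR literature; the later sketches in this post reuse it.

    # A toy model: a "possible world" is an assignment of features, and X
    # is a finite list of worlds, weighted uniformly. Statistical relevance
    # asks how much conditioning on the explanans shifts the probability
    # of the explanandum. All names here are illustrative inventions.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class World:
        features: Dict[str, bool]  # feature name -> value in this world

    Event = Callable[[World], bool]

    def prob(worlds: List[World], event: Event) -> float:
        """P(event) under a uniform measure over the world-set X."""
        return sum(1 for w in worlds if event(w)) / len(worlds)

    def relevance(worlds: List[World], explanans: Event,
                  explanandum: Event) -> float:
        """Statistical relevance over X:
        P(explanandum | explanans) - P(explanandum).
        Zero means the explanans makes no difference in these worlds."""
        conditioned = [w for w in worlds if explanans(w)]
        if not conditioned:
            return 0.0  # the explanans never holds anywhere in X
        return prob(conditioned, explanandum) - prob(worlds, explanandum)

    def goodness(worlds: List[World], explanans: Event, explanandum: Event):
        """A crude stand-in for the conditions of explanatory goodness:
        prefer stronger relevance, then application in more worlds.
        (Feature-space size, number of explananda, and the 'fewer tails'
        condition are left out of this toy.)"""
        applies_in = sum(1 for w in worlds if explanans(w))
        return (relevance(worlds, explanans, explanandum), applies_in)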

Applying it to what they said

I'll come back to why I like my treatment of explanation, but first back to Taylor and Dennett. How does this theory improve on their treatment of causality?

  • It's clearer how to delineate the relevant set of worlds (X). By the conditions of explanatory goodness, we want as large a set as is consistent with a good explanation, but (by "fewer tails") we also rule out cherry-picking just those worlds that help the explanation. It likewise rules out an overly large X, because an overly large X would dilute and weaken the statistical relevance (there's a numeric sketch of this after the list).
  • It doesn't appeal to similarity of worlds as such. That protects it from the Nixon Nuke argument, which argued that "pressing the button" could not be said to cause a nuclear exchange because the most similar worlds were those in which, perhaps by electrical fault, no nuclear exchange occurred.
  • It is not a set of heuristics but a bona fide theory.
  • It provides or improves on all their causality factors (page 9)
    • The deciding condition is no longer a mixture of sufficiency and necessity, but is always statistical relevance.
    • It's consistent with the sharpshooter argument, in which a sniper with poor marksmanship is said to cause the death of his victim even though his odds of hitting were low. This was their argument for ranking necessity above sufficiency.
    • It's also consistent with the king-and-mayor argument, which was their argument for why sufficiency was still sometimes the determining factor.
    • "Truth of explanans and explanandum in the real world" - trivial both here and there.
    • The "Independence" condition has always been part of the SR model; it need not be re-introduced.
    • The "Temporal priority" condition and the "Miscellaneous further criteria [or heuristics]" that are mentioned all appear to develop naturally from the explanation-based model.

More about my explanation theory

But isn't causality needed first?

One might be tempted to object that causality is needed to rule out bizarre explanations. Eg, since patterns in the future can't possibly cause past phenomena, they can't be good explanations of them. So do we need causality first?

Not so fast. Look at the possible cases for a given would-be explanatory pattern in the future:

  • It is just a fluke.
  • It is not a fluke, but is part of a larger SR pattern that directly encompasses past patterns; the same features are involved.
  • It is not a fluke, but it is part of a larger SR pattern, one that indirectly encompasses past patterns. Different features are involved, but there is still a causal path from part of the pattern to the phenomena.
  • It is none of the above. This is the crucial case; the other cases just delineate it.

Taking the cases individually:

  • It is a fluke. Then you will not generally find strong statistical relevance for it. One could commit a statistical sin and cherry-pick fluke cases, but that's just cheating. No problems with this case.
  • It is part of a larger SR pattern that directly encompasses past patterns; the same features are involved.

    Then there is no problem with causality. I've already said that we favor explanations that apply in more worlds and/or in a larger feature space. No problems with this case.

  • It is part of a larger SR pattern, one that indirectly encompasses past patterns; different features are involved, but there is still a causal path from part of the pattern to the phenomena. Here the causal path is mediated by predictive intelligence. That doesn't make it any less a causal path. No problems with this case.
  • It is none of the above. It is no fluke, but either doesn't encompass past patterns at all, or doesn't in any way that any intelligence could act on. In other words, we're actually seeing the future affecting the past.

    If we actually observed cases like this, it would not mean that this theory had approved a bad explanation. It would mean that we were seeing time travel and must change our ideas of causality. Of course we don't observe any such thing. So no problems with this final case either.

Another benefit of the conditions of explanatory goodness

The conditions of explanatory goodness remove certain objections to SR, such as the barometer problem. The barometer problem is where we supposedly can't tell which is the case:

  1. The approaching storm explains the falling barometer, or
  2. The falling barometer explains the approaching storm.

Case #2 is on an even footing with case #1 only in sets of worlds where we can find no storm without a barometer. But that's a minute sliver of all possible worlds. So the conditions of explanatory goodness defuse the barometer problem. The sketch below makes the asymmetry concrete.
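
In the same toy model, and again with invented counts: build an X that reflects how most possible worlds look, with plenty of storms but no barometer anywhere nearby. The coverage function is my own crude proxy for the "applies in more worlds" condition.

    # The barometer problem in the toy model. Worlds with a storm but no
    # barometer are common; worlds with a falling barometer but no storm
    # are rare to nonexistent. The counts are invented.

    wide_x = (
        [World({"storm": True, "barometer_falls": True})] * 5
        + [World({"storm": True, "barometer_falls": False})] * 40  # no barometer around
        + [World({"storm": False, "barometer_falls": False})] * 55
    )
    storm = lambda w: w.features["storm"]
    falls = lambda w: w.features["barometer_falls"]

    def coverage(worlds, explanans, explanandum):
        """Of the worlds where the explanandum occurs, in what fraction
        does the proposed explanans hold? A crude proxy for the
        'applies in more worlds' condition."""
        occurs = [w for w in worlds if explanandum(w)]
        return sum(1 for w in occurs if explanans(w)) / len(occurs)

    # Case #1: the approaching storm explains the falling barometer.
    print(coverage(wide_x, storm, falls))  # 1.0: every fall has a storm behind it
    # Case #2: the falling barometer explains the approaching storm.
    print(coverage(wide_x, falls, storm))  # 5/45, about 0.11: most storms go unexplained

Case #1 keeps its force however wide X grows; case #2 keeps pace only in that sliver of worlds where every storm comes with a barometer.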


Footnotes:

[1] More than vague, they are ambiguous; more than one X can be right even when describing the same situation. I actually got this from a linguistics paper on counterfactuals. To adapt an example from the paper, suppose that Austin has taken a few cheap-o golf lessons and misses a putt. A watching golf instructor might truly say, "If Austin had taken the more expensive golf lessons, he would have made that putt." But Austin might also truly say, "If I had taken the more expensive golf lessons, I could only have afforded one lesson, so I still would not have made that putt." So there can be more than one reasonable relevant set of worlds.

[2] Or I could say more technically, explanations that apply in a set of possible worlds having a larger measure. But let's not be pompous.
