The Ethical Stance
I recently read and reviewed Daniel Dennett's The Intentional Stance.
He defines three levels of analysis:
- The physical stance. All things are made up of physical material (atoms and molecules, usually) and obey physical laws. This stance applies to anything.
- The design stance. Some things behave as though selected for a particular purpose. This stance applies to artifacts and creatures - things that are selected, whether by "real" designers or blindly by evolution.
- The intentional stance. Some things behave as if they held beliefs and goals. This stance applies to intentional systems (basically, intelligent living things).
This got me thinking. What if there are further continuations of this pattern? What might the next layer be?
I've already given away the answer in my title. So I'll just jump in and start explaining it.
- The ethical stance. This stance applies to moral systems. A moral system M behaves as if someone (not necessarily M) is obliged to (not) engage in certain behaviors.
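To make that definition a bit more concrete, here's a toy formalization in Python. It's purely my own sketch - the names (`Obligation`, `ethos`, `expect`) are illustrative inventions, not standard terminology:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Obligation:
    """One "ethic": some party is obliged to engage in, or refrain from, a behavior."""
    obliged_party: str   # not necessarily the moral system M itself
    behavior: str
    forbidden: bool      # True = obliged NOT to engage in the behavior

# On this sketch, an ethos is just a collection of obligations.
ethos = [
    Obligation("M", "break promises", forbidden=True),
    Obligation("everyone", "repay debts", forbidden=False),
]

def expect(ethos: list[Obligation], party: str, behavior: str) -> str:
    """Ethical-stance prediction: expect conduct consistent with the ethos."""
    for ob in ethos:
        if ob.obliged_party in (party, "everyone") and ob.behavior == behavior:
            return "refrains from it" if ob.forbidden else "does it"
    return "no prediction at this stance; fall back to a lower stance"

print(expect(ethos, "M", "break promises"))  # refrains from it
```

The `obliged_party` field carries the "not necessarily M" point: a moral system can behave as if third parties are under obligations too.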
Some parallels
Some parallels among the stances other than the physical stance:
- The stances are about predicting the behavior of specific physical entities as "black boxes", i.e. without reference to the specific physical (or lower-stance) mechanisms that make them behave as they do.
- In cases where they apply, higher stances are expected to predict behavior more efficiently, from less input and less computation, than lower ones.
- However, they are expected to predict behavior less reliably, though still fairly well. (See the toy sketch after this list.)
- They are not expected to apply to "broken" systems, which must be understood via lower stances, with the physical stance as a last resort.
- One can relate each stance to a specific type of platonic entity:
  - A design
  - A set of beliefs
  - An ethos
- Those platonic types have very many instances (platonic entities), across very many dimensions of difference, and it's difficult to even characterize that dimensional space. Nor can one always point to discrete platonic individuals - sometimes there's a continuum. But for simplicity I will talk about platonic entities.
- One can kind of associate a given physical entity with a given platonic entity, but always with reservations. With similar reservations, one can associate it in part:
  - That a design satisfies a particular requirement (among others)
  - That a believer holds a particular belief (among other beliefs)
  - That a moral actor holds to a particular ethic (among other ethics)
- It is equivalent to say (always with the reservations) that:
  - A given physical thing and a given platonic entity have such an association.
  - A given thing is an {artifact or creature, intentional system, moral system}.
  - A {design, mindset, ethos} is embodied in a given thing.
- That association is not absolute. If you look hard enough, you can find:
  - Ambiguity about which platonic entity a given thing embodies. At high magnification, as it were, the association is to a fuzzy dot rather than a crisp point.
  - Similarly, ambiguity about the boundaries of the physical thing. "Extended phenotype", for instance.
  - "Mistakes", where natural implications of the association accidentally fail.
  - "Insufficiencies", where far-removed implications of the association systematically fail. For example: you're rational and you know the basic axioms of arithmetic (the Peano axioms), and suppose they necessarily imply an answer to the Goldbach conjecture - so, is the Goldbach conjecture true?
  - Misrepresentations, where explicit claims made (more or less) by the entity itself are unreliable or wrong. More so for the higher stances.
- That association uses structure from the immediately previous stance. When lower-layer associations break, the higher-layer associations tend to become indeterminate (much more indeterminate than usual, anyway).
- That association doesn't need to be explicit or conscious. It can be tacit.
- An embodiment operates with regard to entities (and structures, operations, and properties) interpreted at its own level. With only two previous data points, I'm not sure how this goes, but I think it's something like:
  - Functions (of artifacts or creatures) are about expected entities (which may be historical phantoms)
  - Beliefs are about notional entities
  - Ethics (viewed by moral agents) are about (notional) entities that (again notionally) have or are owed obligations. (This is the one I'm least sure about.)
  - The notional entities (etc.) are nevertheless generally bound to things in the real world, especially in healthy systems.
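Here's the toy sketch promised above (entirely my own illustration, not one of Dennett's examples): two ways of predicting whether a thermostat's heater will be running in an hour. The design-stance predictor needs less input and less computation, but it is silently wrong about a broken unit, which only the lower stance handles.

```python
SETPOINT = 20.0  # degrees C

def physical_stance(temp: float, relay_stuck: bool, minutes: int = 60) -> bool:
    """Step through the low-level dynamics minute by minute: lots of
    input and computation, but it works even on a broken unit."""
    heater = False
    for _ in range(minutes):
        if not relay_stuck:
            heater = temp < SETPOINT  # the relay switches normally
        # toy room physics: the heater warms the room, the room leaks heat
        temp += (0.5 if heater else 0.0) - 0.1 * (temp - 15.0)
    return heater

def design_stance(temp: float) -> bool:
    """Design-stance shortcut: "it's built to hold the room at the setpoint".
    One comparison, no mechanism details. Cheap and usually right, but
    blind to breakage."""
    return temp < SETPOINT

room = 17.0
print(design_stance(room), physical_stance(room, relay_stuck=False))  # True True
print(design_stance(room), physical_stance(room, relay_stuck=True))   # True False
```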
A chart of parallel parts between the stances:
Name | Associated platonic type | Part of p. type | What to apply the stance to | Something that a nonbroken one does
---|---|---|---|---
Physical stance | N/A | N/A | any physical thing | exists
Design stance | design | requirement? | artifacts and life | performs X
Intentional stance | mindset | belief | intentional systems | knows X
Ethical stance | ethos | ethic | moral systems | sees obligation X
Shouldn't "value" and "harm" and "benefit" play a part?
That was my first impulse. Of course value and benefit and harm interact with the ethical stance.
But "values" is already captured by lower stances. To define the ethical stance in terms of valuing, "behaves as if _ values X", would have been a mistake. It would risk capturing quirky personal preferences that have no ethical relevance. It also strains on situations where a moral agent has views about a distant occurence that can't ever affect him.
Much more to be said
There's much more to be said, but I've already spent a lot of time writing this up, so I'm going to stop now.
Comments

It's not obvious to me that the ethical stance should be considered a higher (or is that deeper) level beyond the intentional stance. If these stances are to be ordered at all, perhaps ethics comes before intent, on the grounds that intent has predictive power in situations where there isn't enough data to apply ethics.
Interesting point. I see the line of argument. But it makes me think that I didn't express the point about "less input" well.
I said "less input", which implies "less data". But when Dennett talked about sparse information, it always seemed to be information selected out of a deluge of information. There was no sense of starving for data.
The thing was not so much that someone using the intentional stance (or other non-physical stances) gets good "miles per gallon" out of very little data. It was that they winnow the deluge of information down to less input before thinking hard about it.
"Winnowing the deluge" can be applied to the physical stance too. Eg, in signal processing (let's say of a purely physical phenomenon like sonar echoes off the ocean floor), sometimes engineers decimate the data before processing it.
And where I said "set of beliefs", I actually meant "collection of beliefs". A collection is like a primitive variant of a set. It doesn't necessarily have distinct individuals in it.