## Review of How Pleasure Works by Paul Bloom

This book has some interesting bits, but doesn't live up to its title. Bloom never articulates a coherent theory of how pleasure works. Nor does he really pull the individual chapters together into a unified theory of pleasure.

As he immediately tells you in the preface, he's not talking about simple animal pleasures, food and comfort, that sort of thing. He's talking about uniquely human pleasure, or pleasures with a uniquely human aspect.

The chapters each explore some aspect of how pleasure works. In each case, the chapter's mini-theory or theories explain some pleasure phenomena but can't explain others.

For instance, in chapter 6 (Imagination), the theory is that the pleasure we take in stories is a sort of extra-powerful daydreaming, more vivid than our real daydreams and amplified by the storyteller's skillful choices. But (as he admits) this is far from a complete theory of why we like stories.

When I was done reading this book, I asked myself what in it would explain the pleasure we take in music. And my answer was, none of them. Chapter by chapter:

1. The Essence Of Pleasure. This whole chapter seemed pretty insubstantial to me. "Essence" is far too elastic a term to carry any real explanatory weight.
2. Foodies - about food. Not applicable.
3. Bedtricks - about sex. Not applicable.
4. Irreplaceable - No, essentially all music is copies of something. Not just copies of recordings of a performance, but performances themselves are often copies, too. The few times I've heard improvisation, while in some cases I appreciated the skill, it didn't strike me as particularly great music.

You could say that a given opus is irreplaceable: What could replace Beethoven's 6th symphony or Eine Kleine Nachtmusik? But that seems to be stretching the point beyond reason. Were all (say) roses to perish from existence forever, we'd miss them too, but that doesn't mean a rose is irreplaceable.

5. Performance - the mini-theory here is that our pleasure is an appreciation of skilled performance.

For me, sure, though appreciating virtuosity is hardly the only pleasure I take from music. But many people who have zero musical sophistication enjoy music. Often they don't even know what instrument they are hearing played. How are they appreciating the performer's virtuosity?

Furthermore, music is often used to accompany other performances - background music, overture, dance. Here most audiences are unlikely to appreciate the performer's skill or to even notice the music as performance.

6. Imagination - stories explained as extra-powerful daydreaming. Doesn't seem applicable. One might try to force it to fit by arguing that music sometimes helps you daydream, but I think that's a stretch.
7. Safety And Pain - Again largely about stories. Here the mini-theory is that we get to experience things we couldn't safely experience in real life. It's hard to see how that could apply to music.
8. Why Pleasure Matters - this chapter mostly tries to tie pleasure to religion. Not applicable.

So How Pleasure Works did not explain music. It's often an interesting book, but far from a coherent theory of how pleasure works.

## EMSIP source parser

I split this post in two. I was originally going to talk about EMSIP as a whole, which I do in a separate post (Introducing EMSIP). But right now I want to talk about one of those little parsers, the source code parser, which is designed to handle the main flow of source code.

### Objectives

1. Simplicity
2. Follow EMSIP's framework and spirit
3. Where there exists a compelling "write it without thinking" expression for something, allow it.
• Exception: infix syntax for algebra, which would conflict with (2)
• For instance, allow:
• quoting of expressions
• distinguishing numbers from symbols just because of how they look.
• prefixing numbers with minus sign, eg -6.
• I emphasize that this rationale is limited. If you would have to think even for a moment about how to write an expression, it doesn't fall under this clause. This is only for expressions that people tend to automatically write before thinking. It's not meant for tending-towards-Perl shortcuts.

### The objects to express

As I mentioned, essentially all the complexity of source code would be pushed out to little parsers:

• comments
• strings
• radix numbers such as hex numbers
• complex numbers
• rationals
• bytevectors
• vectors
• character escapes
• (etc)

And obviously they're each pretty simple.

Relieved of the burden of representing every strange object, I think we can get by with just a few simple things:

• decimal numbers, essentially as traditional.
• Introduced by a digit, or by - or + followed by a digit.
• Continued by digits, letters, and some punctuation.
• We'd support just decimal integers and inexact decimal floats. Not much more - maybe exact floats. No attempt to represent complex numbers, rationals, numbers of various precision, etc. Those are to be supported by little parsers.
• symbols, essentially as traditional.
  • Introduced by a letter. Also by -, +, $, or -> followed by a letter. (We'll extend this group lower down.)
  • Continued by digits, letters, and some punctuation.
• Punctuation groups
  • Introduced by quote, even in non-whitespace states. For regularity, I'll say that - or + followed by punctuation can also start a punctuation group.
  • Continued by other punctuation. Not continued by quote; quote starts another punctuation group.
  • Purpose: "funnier" manipulations of source, such as quoting the next sexp or dotting a list.
• Whitespace

So the first character after whitespace, a bracket, or punctuation controls the type, with a few exceptions which must consider their next character. In table form:

| Type | Begun by | Continued by |
| --- | --- | --- |
| number | usually by digit | almost anything |
| symbol | usually by letter | almost anything |
| punctuation | just quote | just the leftovers |
| whitespace | whitespace | whitespace |

### The classes of character

That gives us these classes of character:

• (not brackets): brackets are accounted for by EMSIP main.
• letters: a-z, A-Z
• digits: 0-9
• whitespace: whitespace
• strong punctuation: just single-quote '
• weak constituents: includes definitely +, -, >, <, $, and probably all the other punctuation.
• (illegal characters): characters that just shouldn't appear.
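To make the classification concrete, here is a minimal sketch in Python. The function name, the class names, and the exact membership of the weak-constituent class are my own guesses from the list above, not a normative definition.

```python
# Hypothetical sketch of the character classes described above.
def char_class(ch):
    if ch in "(){}[],":
        return "bracket"           # the seven magic characters: EMSIP main's job
    if ch.isalpha():
        return "letter"
    if ch.isdigit():
        return "digit"
    if ch.isspace():
        return "whitespace"
    if ch == "'":
        return "strong-punctuation"
    if ch.isprintable():
        return "weak-constituent"  # +, -, >, <, $, and other punctuation
    return "illegal"
```

Note that `isalpha` is broader than a-z, A-Z; a strict implementation might restrict it.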

### The parse states

Now we have just these major parse states:

uncommitted
Entered by weak constituents.

This state can be terminated by anything except weak constituents.

Becomes part of something else depending on how it's terminated.

• By digit: becomes part of a number
• By letter: becomes part of a symbol
• By strong punctuation: becomes part of a punctuation group
• By whitespace or bracket: makes a symbol of just itself.
whitespace
Entered by whitespace.

This state can be interrupted by anything except whitespace.

Makes a whitespace object whose tail transformer obliterates it.

punctuation group
Entered by strong punctuation.

Makes a punctuation group.

This state can be terminated by anything except weak constituents.

committed
This is two similar states, number and symbol.

Entered by digit or letter

This state can be interrupted only by whitespace, strong punctuation, or closing bracket.

Makes a number or symbol

In table form:

| State | Makes | Entered by | Interruptible by |
| --- | --- | --- | --- |
| Uncommitted | (Depends) | Weak constituents | All but weak constituents |
| Whitespace | whitespace | whitespace | All but whitespace |
| Punctuation | Punctuation group | Strong punctuation | All but weak constituents |
| Committed | number or symbol | digit or letter | Whitespace, strong punctuation, closing bracket |

### suitable SREs

These SREs1 assume the classes of character as primitives. They assume longest-match parsing, so `uncommitted' will be part of a larger construct when possible.

uncommitted
(* weak-constituent)

whitespace
(* whitespace)

punctuation-state
(seq strong-punctuation (* weak-constituent))

committed-number
(seq digit (* (~ strong-punctuation whitespace)))

committed-symbol
(seq letter (* (~ strong-punctuation whitespace)))

number
(seq (? uncommitted) committed-number)

symbol
(seq (? uncommitted) committed-symbol)

punctuation
(seq (? uncommitted) punctuation-state)

thing
(* (or whitespace number symbol punctuation uncommitted))

(overall)
(* thing)
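As a cross-check of these SREs, here is a hypothetical translation into a single Python regular expression. Alternation order plus greedy repetition approximates longest-match parsing; the exact weak-constituent set is an assumption, as above, and brackets are left to EMSIP main.

```python
import re

# "Weak constituents": a guess at the set; brackets and comma excluded.
WEAK = r"[+\-<>$=*/!?%&^~:@.]"
# "Almost anything": not strong punctuation, whitespace, or a bracket.
CONT = r"[^'\s(){}\[\],]"

TOKEN = re.compile(
    r"(?P<whitespace>\s+)"
    rf"|(?P<number>{WEAK}*[0-9]{CONT}*)"        # (? uncommitted) committed-number
    rf"|(?P<symbol>{WEAK}*[A-Za-z]{CONT}*)"     # (? uncommitted) committed-symbol
    rf"|(?P<punctuation>{WEAK}*'{WEAK}*)"       # (? uncommitted) punctuation-state
    rf"|(?P<uncommitted>{WEAK}+)"               # bare uncommitted run
)

def tokenize(text):
    """Yield (kind, lexeme) pairs; whitespace objects obliterate themselves."""
    pos = 0
    while pos < len(text):
        m = TOKEN.match(text, pos)
        if m is None:
            raise ValueError(f"illegal character at {pos}: {text[pos]!r}")
        if m.lastgroup != "whitespace":
            yield (m.lastgroup, m.group())
        pos = m.end()
```

For instance, `-6` comes out as a number and `->foo` as a symbol, matching the "write it without thinking" objective.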


### Operating on promised tails

One thing about EMSIP parsers is that they include transformers that operate on a promise of their tail. This lets us do some important things neatly, such as skip comments and splice character constants into strings. It also implies that read gives us a stream of objects, rather than returning one object and being called repeatedly. We'd use essentially this mechanism for some business in the source parser.

Whitespace objects use this mechanism to remove themselves. Their tail transformers just return the tail.

Punctuation groups will also operate on promises of their tails. This lets them handle all the "funny business" that traditional sexp syntax does with quote and its friends.

Numbers and symbols don't do anything special, they just cons themselves onto the tail.
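A minimal sketch of this mechanism, with Python thunks standing in for promised tails. All names here are illustrative, not EMSIP's actual API: whitespace returns its tail unchanged, atoms cons themselves on, and a quote punctuation group rewrites the head of its tail.

```python
# Each parsed item is a transformer applied to a thunk (promise) of the
# list of items after it.

def cons(x, tail):
    return [x] + tail

def whitespace_transform(tail_thunk):
    return tail_thunk()                    # obliterate self: just the tail

def atom_transform(value):
    def transform(tail_thunk):
        return cons(value, tail_thunk())   # cons self onto the tail
    return transform

def quote_transform(tail_thunk):
    tail = tail_thunk()                    # assumes a following object exists
    return cons(["quote", tail[0]], tail[1:])

def run(transforms):
    """Fold the transformers over an initially empty tail."""
    def go(i):
        if i == len(transforms):
            return []
        return transforms[i](lambda: go(i + 1))
    return go(0)
```

This is how quoting the next sexp can be handled without any special case in the main parser.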

### And that's it!

And that's all we have to do to get a reasonable source parser under EMSIP.

## Footnotes:

1 For more on SREs, see SRE or a very similar idea, Irregexes

## Introducing EMSIP

### Some motivation

#### I like s-expressions

I like s-expressions. They don't make you memorize new syntax for every new feature someone added. Editors and other tools can almost always make sense of them - with heavier syntaxes, there's always someone who thumps their chest and claims that they "could write a parser for it in an afternoon, no problem!" but when you look for actual existing tools that work on a given heavy syntax, finding them is a different story.

Not only can editors handle s-expressions, you can write code to process them in new ways without ever having to wrestle with a parser. In both Scheme and Common Lisp, the code to read an s-expression is just `read' - can't get much simpler than that.

And I have philosophical reasons for liking them. S-expressions are regular, regular is clean, and clean is good.

#### But they could go further

Quick, where does this s-expression end?

#mynewreadermacro a b c () ) () "["]{} 52:abcde


Of course you can't tell. In Lisp (Common Lisp, that is) there are many dozens of standard reader macros, and new ones can be added.

And it's not just reader macros. There are a few built-in extra bumps:

• Double-quotes for strings
"A string"

• Literal symbols in Common Lisp, like
|One literal symbol|

• Prefix operators such as quote, backquote, and low quote (comma).

#### Scheme is better but headed worse

But surely Scheme is clean and small, right? Well, the impetus to write EMSIP up and post it came from Scheme. I've just been looking at the R6RS spec, and I'm stunned at how complex the Scheme syntax has become. Not up to the complexity of Common Lisp's full syntax spec, but headed in that direction.

### EMSIP

I propose instead an idea that I've been developing for some time. I call it EMSIP. The idea is to have:

• brackets (3 types and a comma)
• A lot of little parsers that cannot violate brackets. They are expected to only parse regular expressions.

Think of it as a case of separation of concerns. No little parser needs to care how any other little parser behaves. It need only interface properly with EMSIP.

It may sound like not enough to implement a proper programming language, but it is. When I get more round tuits, I hope to add EMSIP into Klink, the interpreter I'm writing for Kernel.

Essentially all the complexity would be pushed out to little parsers:

• comments
• strings
• radix numbers such as hex numbers
• complex numbers
• rationals
• bytevectors
• vectors
• character escapes
• (etc)

### EMSIP main syntax

In EMSIP there are only seven magic characters:

• open parenthesis
• close parenthesis
• open curly braces
• close curly braces
• open square brackets
• close square brackets
• comma
{}[](),


Each of them always has the same structural effect, so even a naive processing tool can tell where structures begin and end. All bracket types consume exactly the text from the opening bracket to the matching closing bracket. Never more and never less. So even if an editor or other tool knows nothing about the syntax, it knows when a parser's scope ends.
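That claim can be sketched in a few lines. This sketch assumes exactly the property stated: no inner syntax (strings, comments) can hide a bracket. It is naive in that it treats any closing bracket as matching any opening one.

```python
# A syntax-ignorant tool finding where a parser's scope ends.
# Only the bracket characters matter; everything else is opaque text.
OPENS = set("({[")
CLOSES = set(")}]")

def scope_end(text, start):
    """text[start] must be an opening bracket; return the index just
    past the matching close, with no knowledge of inner syntax."""
    depth = 0
    for i in range(start, len(text)):
        if text[i] in OPENS:
            depth += 1
        elif text[i] in CLOSES:
            depth -= 1
            if depth == 0:
                return i + 1
    raise ValueError("unclosed bracket")
```

Because no little parser can escape its brackets, this computation is sound even over text containing strings or comments, which is exactly the property heavier syntaxes lack.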

The little parsers have their own internal syntaxes which:

• apply only to their scope.
• cannot override brackets, but treat the comma as non-magic.
• Can only do regular expressions.

### More to come

I'll write more about EMSIP later, I hope.

## Thoughts on reading and writing in Kernel

This is an email I sent to John Shutt, designer of Kernel, which I am writing an implementation for. I thought I'd put it up for public scrutiny and comments.

## The email

I am implementing printing and reading for Klink now, and it made me think about printing and reading objects in general, which I know you were looking for a general solution to.

From the partial state of printing, it's already clear that operatives frequently present themselves to be printed as bare objects and not as symbols evaluating to themselves.

So I'm proposing that as a fallback before we resort to printing an object as "#<whatever>" we do the following:

• When printing an object:
• find a visible symbol bound to it (if that's not possible, we can't do this)
• NB, this is for objects, not for symbols that are intended to be captured, eg symbols bound inside a $let.
• print an indication of an environment it can be found in, and that environment's state. I'm thinking of a hash or a standard library path.
  • We print the state to avoid problems mutation could cause. If we printed, rebound the symbol, and read, the result might not be equal? to the object.
  • That environment is not necessarily the environment the symbol is bound in, nor necessarily the current environment. It need only be one in which the symbol is visibly bound to the object. It could be a standard library, because read doesn't bind anything in that environment nor give access to the environment.
  • That indication of environment might encompass a sexp of arbitrary size.
• print an indication of that symbol (not necessarily the same as printing that symbol)
• When reading:
  • know an environment to find symbols in.
  • when reading the symbol indication above, look it up in that environment.
  • In response to environment indications, change to the right reading environment. It's an error if it doesn't exist or it's in a different state.

It has some drawbacks:

• Con: It introduces reverse-association into environments.
  • Mitigating: Introduces only what list-bindings would introduce, because one could just try every binding.
  • Not an issue: Finding multiple bindings for the same object. We can just use the most convenient one.
• Con: Naively implemented, it requires a lot of searching to find an object's symbol.
  • Mitigating: Need not be naively implemented.
• Con: We need to control what counts as the visible environment for read and print.
  • Mitigating: That's easy.
  • Pro: OTOH that gives us some extra control.
  • Pro: G1 implies eliminating the special case of one predetermined reading environment.
• Con: We need to calculate and represent an environment's state uniquely.
  • Calculating it is doable, but it's work.
  • Mitigating: There are ways to represent it: hashes, standard libraries.
  • Not an issue: Causing evaluation during read. This only causes symbol lookup.
• Con: There'd be 2 ways of printing what are now printed as symbols. For someone writing forms, that's an error opportunity and violates G3.
  • Mitigating: That's only an error opportunity for manually written sexps, which generally means forms. But in that case the forms are evalled, so bare symbols will be looked up. This usually gives the same object, and when it doesn't, there really were 2 distinct objects by that name, so there's no getting around needing 2 distinct representations. Further, the easier form to write (bare symbol) and the more likely meaning (local binding of that symbol) coincide.

That's a lot of cons, but ISTM each one is mitigated fairly well.

I'd say this strategy is a lot like paths (mymod:foo) and should be generalized in that direction. So:

• the "indication of symbol" would be a sort of path.
• the "indication of environment" would be contained in something like a $let, except purely syntactic and allowing only environments.

So as a first sketch, it might look something like:

#let-environments
(
((mod-1 (id 34fe93ab5539d9bc8))
(mod-2 (kernel devel)))

;;The object bound to `foo' in `mod-1'
#object (mod-1 foo)
;;Object bound to `bar' in (kernel devel testhelp mocks filebuf)
#object (mod-2 testhelp mocks filebuf bar))


I'm certainly not committed to this syntax, but it gives the idea. I don't like the prefixuality of the sharps.

That gives us this overall strategy for printing objects:

• For literals - Unvarying print logic that does only superficial inspection of data
• I include symbols as honorary literals because when read and written, they have no relation to an environment.
• For containers (at least for transparent ones) - print logic that includes printing their contents, read logic includes calling a constructor that reconstructs them from their contents. The results of write-then-read should satisfy equal?
• For non-containers or opaque containers: The method described above.
• Last resort method: "#<opaque-type ID>" style.
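To illustrate the proposed fallback concretely, here is a toy sketch with Python dicts standing in for Kernel environments. Every name here is an invention for illustration, not Klink's API, and the search is deliberately the naive one whose mitigations are discussed above.

```python
# Toy sketch of the printing fallback for non-containers.

def find_visible_binding(obj, env_chain):
    """Reverse association: find some (env_name, symbol) visibly bound
    to obj, or None. Any binding will do; use the most convenient."""
    for env_name, bindings in env_chain:
        for sym, val in bindings.items():
            if val is obj:
                return (env_name, sym)
    return None

def print_object(obj, env_chain):
    found = find_visible_binding(obj, env_chain)
    if found is not None:
        env_name, sym = found
        return f"#object ({env_name} {sym})"
    return f"#<{type(obj).__name__}>"   # last resort style
```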

What do you think?

Tom Breton (Tehom)

## kernel-library-paths

I've been thinking about library paths in relation to Klink, the Kernel implementation I've been writing.

### Some lessons from the past

• Being familiar with Elisp's horrid treatment of library naming, I'd like to do library-finding right, and not hobble for years with a suboptimal system.
• Scheme's use of lists as library specs is the right way to do it.
• SRFI-103 suffered from a lack of encoding for identifiers that don't map cleanly to file names. Avoid that.
• SRFI-103's attempt to escape some identifiers was overly special, resulting in a non-trivial knowledge burden. Don't do that; use something well-known.
• It is really nice to allow a "main" file in order to group a file with its supporting packages.
• From SRFI-103, though its "main^" syntax was awkward.
• From my experience creating Elisp packages, especially Emtest.
• Want to support at least these parameters:
• An ordered list of directories to search.
• An ordered list of file-name extensions
• From SRFI-103 "to distinguish dialects, formats, and extensions"
• Want to at least optionally choose among available versions of a library.

I don't have to support that in the near future, but it should at least be specified.

• It's nice to allow sub-features in "require" type statements
• From emacs >= 22 `require' statements, and from Redhat rpm.
• This clashes with the previous lesson.
• Traditional major.minor.release version numbers have some problems, especially wrt project branching.
• Git among others.
• This clashes yet further with the previous two.
• I'd really really prefer versioning that doesn't push the version-update treadmill to go any faster than it has to.
• Too many times on the version-update treadmill.
• Reading the first syntactic datum is a wart on the procedure, and I may be able to solve it.
• It is nice and convenient to have a way to abbreviate file locations, something more transparent than calling expand-file-name everywhere.
• From SWI Prolog and from emacs' default-directory
• Init files should be ordinary source files, implying that they should fit into this library scheme. They come in more flavors than you would expect.
• Learned from many places, especially TinyScheme and emacs.
• It's often convenient to be able to associate "support" files with a source file. Their extension should differ so that you can tell them apart.
• Learned from Mercury and from C/C++
• The normal source file extension should be unique. So use resources like Filext to check it.
• From Prolog and Perl's clash over *.pl (Prolog was there first)
• "KRN" is taken. Could take it anyways since it's unrelated, a font for a video game
• "KRL" seems available.
• Sometimes one would like to define multiple submodules within the same file.
• Learned from Mercury and from an experiment I did in Prolog.

## Answer 1: Just borrow URL encoding.

It's very simple and it's in widespread use.

Now, what to do about clashes? Eg, what if files "too%6dany" and "toomany" both exist?

Several answers present themselves:

• Prefer them in alphabetical order?
• Prefer them in almost alphabetical order, but sort the escape sequence to before/after other files?
• Check for the existence of clashing filenames and raise error.
• Don't allow clashes. Enforce a one-to-one mapping of strings to filename encodings.

I like the last answer best. Encodings that are not mandatory are forbidden. In other words, for any file name it must be the case that:

(encode (decode filename))


gives the original filename again.
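Python's standard percent-encoding makes this round-trip law easy to check. A hedged sketch, using the stdlib's `urllib.parse` and treating uppercase hex as the canonical form:

```python
from urllib.parse import quote, unquote

# "No optional escapes": a filename is acceptable only if re-encoding
# its decoded form reproduces it exactly, ie
# (encode (decode filename)) == filename.
def canonical(filename):
    return quote(unquote(filename), safe="") == filename
```

So "toomany" is fine, but "too%6Dany" (an optional escape of "m") is rejected, and the clash question never arises.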

## Answer 2: Encode "main" with a similar escape sequence.

That is, some escape sequence that would otherwise be illegal means "main". Let's use "%main" for that.

## Answer 3: A file is a list of sexps

This will probably be the most controversial answer. Instead of reading the first syntactic datum in the source file, let the whole source file be a list. That is, when you start to load a file, pretend you just read an open bracket, and the matching close bracket is EOF. Each sexp in between is one element in the list.

That even handles the case where the file is just a script: Put Kernel's `$sequence'1 as the first symbol in the file.
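A sketch of this loading rule, using a deliberately tiny reader (parens, symbols, and integers only) standing in for a real one. The function names are inventions for illustration.

```python
# Load a whole file as one list: pretend an open bracket precedes the
# text and the matching close bracket is EOF.

def read_file_as_list(text):
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()
    items, rest = read_seq(tokens)
    if rest:
        raise ValueError("unbalanced brackets")
    return items

def read_seq(tokens):
    """Read sexps until a close bracket or end of input."""
    items = []
    while tokens and tokens[0] != ")":
        item, tokens = read_one(tokens)
        items.append(item)
    return items, tokens

def read_one(tokens):
    tok, rest = tokens[0], tokens[1:]
    if tok == "(":
        items, rest = read_seq(rest)
        if not rest:
            raise ValueError("unclosed bracket")
        return items, rest[1:]          # drop the ")"
    try:
        return int(tok), rest
    except ValueError:
        return tok, rest                # anything else is a symbol
```

A script file beginning with a bare `$sequence` symbol then reads as one big `$sequence` form.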

## Tentative answer 4: Library namespaces for special purpose

These would include at least:

• Standard preludes for an implementation
• Say the implementation is called "acme", the prefix would be:
(implementation acme)

• System rc files for various purposes
• Personal rc files for various purposes

## Tentative answer 5: Resolving the treatment of versioning.

Recapping, the things I wanted were:

1. Want to be able to choose among available versions
2. Use sub-features for version requirement.
3. Version IDs need not be major.minor.release
4. Don't make the version-update treadmill go faster than it has to.

If we were to use traditional major.minor.release version numbers, that would satisfy #1 but scuttle #2 and #3.

My tentative answer is to eschew the false economy of not reading a file before you need it. Our code should just partially evaluate the file and see what it places in a "subfeatures" area2. If that's ambiguous, then we have a problem - but we always would have in that case.

If efficiency is required, do it behind the scenes. Possibly pre-compile data for individual files or even to index the entirety of code available on a given system.

For #4, I have a tentative scheme involving automated tests and code coverage tools, but that's for another time.

## Answer 6:

If a library path resolves partway through to a file rather than a directory, look in that file for submodules. IIRC this is a direct clone of how Mercury does it.

## Footnotes:

1 equivalent to Lisp `progn' or Scheme `begin'.

2 Kernel is basically re-inventing these things with first-class environments. So very likely subfeatures will be an environment accessed by key from the main environment that the library provides.

## COW-stacks

### Why

Klink (like TinyScheme before it) has two representations of its stack:

• As a list.
• Pro: It allows reusable continuations
• Con: It's very inefficient.
• As a resizable array
• Pro: It's much more efficient
• Con: It does not allow reusable continuations.

Neither choice is very appealing.

### The answer

Thinking about the problem, I invented what I call a copy-on-write stack. It's really a set of arbitrarily many stacks. The multiple stacks can share their deeper elements, while cheaply pushing and popping their shallow elements.

I'm not actually going to add COW-stacks to Klink any time soon, but I thought about how they would work and I wrote it down. Stray references below to Klink and continuations are due to the context in which I was thinking about COW-stacks.

### How: COW stacks

Entities, briefly:

SE
Stack element
LS
Linear section (of stack elements)
Slider
The part that's visible from outside; the stack as a container.

Operations, briefly:

• push
• pop
• capture a continuation
• finalize a continuation

#### Overall idea

We hold a tree of linear sections of stack elements. The sliders, representing states of execution, can move along the tree, and can be cloned for continuation capture. This is to work as if each continuation capture had copied the entire stack.

##### Desired properties
• The most common operations go very fast. In particular, when there's only one slider, this should do almost nothing that an ordinary stack would not do
• Allow arbitrarily many partly-shared stacks.
• Minimize copying the stack.
• Each stack works as if the entire stack had been copied

#### Entities in detail

##### Stack element
• Essentially as a dump stack element now
• Parts:
• Next instruction (ie, to be continued to)
• Arguments
• Possibly data
• Rethinking data function types: "data" may just be a special argument that it passes to itself, that our special data-function type gets from its car. Do we ever need to actually store that?
• List of dynamic bindings
• Could cache the identity of the next SE, if it never moves in memory.
##### Linear sections
• A linear section of the tree. Generally associated to one or more sliders. It keeps a list of these sliders roughly in order from shallowest to deepest. Ie, highest index to lowest.
• Parts:
• Reference to its parent linear section
• For the root, this is NULL
• Index into its parent linear section
• A growable array of stack elements
• Length of that array
• Pointer to a slider, understood as the head of a list.
• Can be NULL (for an empty list) if this linear section has become unoccupied.
##### Slider
• The object representing a state of execution.
• Owners:
• Captured continuations
• The interpreter
• Parts:
• Reference to its linear section (its "PO")
• Index into the PO's array of stack elements.
• Reference to the "next" slider, understood to be used as a cell in a list.
• For the last element, this is NULL
• Would be nice for this to be a weak reference, since it shouldn't keep sliders alive.
• States:
surely-shallowest
This element is the head of the list
maybe-shallowest
This element, while not the head of the list, may well be equally as shallow. It's not guaranteed to be, since we mostly don't conform marks when we move other sliders.
deeper-than-this
This element has been moved deeper than its "proper" position in the list. It can remain out of position indefinitely.
dead
This node represents an unreferenced slider and can safely be removed at any time. That's just if we choose strategy (2) for removing sliders.
normal
This node is none of the above.

#### Two strategies for removing a slider

• Traverse the list and when we find it, perform list surgery.
• Mark it dead and when it's encountered, discard it.

I have no strong opinion on which is better.

#### Operations in detail

• pop
• These are the possible situations:
• Decrement within linear section
• When: index > 0
• Action:
• Decrement the index.
• If state is:
surely-shallowest
Make new list of sliders, keeping this slider on list.
(other states)
Set it deeper-than-this.
dead
Shouldn't happen.
• Decrement out of linear section to previous LS.
• When: index == 0 and LS has a parent
• If state is:
surely-shallowest
Make new list of sliders, removing this slider from the list.
(other states)
Do nothing special for this step.
dead
Shouldn't happen.
• Remove it from the list
• Clone a new slider and mark this one dead?
• Extract it from the list?
• Add it to the parent LS
• Index is the index that the child LS had into its parent.
• State: maybe-shallowest
• Decrement out of root linear section
• When: index == 0 and LS has no parent
• Stack is empty. Interpreter is done.
• push
• These are the possible situations:
• State is surely-shallowest
• Remarks: This state does nothing special
• State is maybe-shallowest
• Compare to the actual head's index. If this slider is:
• As deep (or deeper):
• Mark that one maybe-shallowest
• Remove ours from the list.
• Mark ours (or its clone) surely-shallowest
• Place ours (or its clone) at the beginning of the list.
• Continue as surely-shallowest.
• Not as deep
• (We needn't mark ours normal because we're about to remove it)
• Proceed as for normal.
• State is normal or deeper-than-this
• Make a new LS
• Its parent is the old LS
• Its index into parent is our old index
• Remove this slider from the list.
• In each case, having made sure we are safe, we add another SE:
• If the array isn't large enough, enlarge it.
• Write an element to PO at our index
• Increment our index by 1
• capture a continuation
• Make a new slider. Insert it behind the old slider, same index.
• Index: Same as old one
• State
• If old slider is surely-shallowest, make the new one maybe-shallowest.
• Otherwise give the new one the same state as the old one.
• finalize a continuation
• Remove its slider from the list
• Initialization
• Make one linear section
• Initialize its array of SEs.
• Make one slider, surely-shallowest, index 0, pointing at that linear section.
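As a sanity check of the overall idea, here is a much-simplified Python sketch. It keeps the tree of linear sections and copy-on-write pushes, but collapses all the slider-list bookkeeping (shallowest marks, conforming) into a single conservative `shared` flag. It illustrates the sharing behavior; it is not the design above.

```python
# Simplified COW-stack: linear sections share deep elements; sliders
# push into fresh sections rather than mutate possibly-shared storage.

class LinearSection:
    def __init__(self, parent=None, index_in_parent=0):
        self.parent = parent
        self.index_in_parent = index_in_parent  # depth shared with parent
        self.elems = []

class Slider:
    def __init__(self, section, index, shared=False):
        self.section = section
        self.index = index      # live elements of this stack in section
        self.shared = shared    # conservatively: storage may be shared

    def push(self, elem):
        if self.shared or self.index < len(self.section.elems):
            # Can't overwrite shared (or popped-past) storage:
            # start a fresh child section.
            self.section = LinearSection(self.section, self.index)
            self.index = 0
            self.shared = False
        self.section.elems.append(elem)
        self.index += 1

    def pop(self):
        while self.index == 0 and self.section.parent is not None:
            # Decrement out of this linear section into its parent.
            self.index = self.section.index_in_parent
            self.section = self.section.parent
            self.shared = True  # parent storage may be shared
        if self.index == 0:
            raise IndexError("stack is empty")
        self.index -= 1
        return self.section.elems[self.index]

    def capture(self):
        """Capture a continuation: clone the slider; both now share."""
        self.shared = True
        return Slider(self.section, self.index, shared=True)
```

After a capture, each slider behaves as if the whole stack had been copied, while the deep elements are stored once.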

#### Supporting operations

Conform sliders
Conform a first element that moved or was removed.
• Remarks
• This was just intended for when the first element moves. Other elements mostly stay in place, and are only reconsidered if first element happens to see them while it's rebuilding a list.
• We'll examine the next elements, collecting a new list that we'll give to the LS. We'll also conform the portion of the list that we see, and we'll make sure that that portion extends as far as the next-shallowest index.
• Action:
• (Simple case, for comparison) If there's no next element, we're done. New list is this element or nothing.
• If an element is this slider itself, ignore it. We'll insert it when we have surely decided its position.
• If there is one and its state is:
surely-shallowest
This shouldn't ever happen.
dead
Ignore this slider. It won't be part of the new list.
(other)
We'll put in rough order in the collected list.

Possibilities of its index relative to our slider:

• Its index is shallower (presumably only by 1)
• Set our slider's state normal since we can rule out the other states.
• Do we know a definite head yet?
• No:
• This is now the definite head.
• Mark it surely-shallowest
• Record min-index
• Record reference to its cdr, the tail cdr.
• Yes:
• Leave it in its place in the list
• If it's index is:
• Same as min-index
• Mark it maybe-shallowest
• Larger than min-index
• Mark it normal
• Smaller than min-index
• This shouldn't ever happen
• Record reference to its cdr, the new tail cdr.
• Its index is the same as our slider's.
• Remarks: We may or may not want to also mark other nodes, since they may tie for shallowest.
• Construct an intermediate list consisting of:
• Our slider, if we're leaving it on the list
• The new slider
• Append the deeper-than-this list, if any.
• Append the new slider's tail.
• Remark: If there's no deeper-than-this list, this construction amounts to consing our slider to the new slider.
• Do we know a definite head yet?
• No:
• Mark our slider surely-shallowest
• Mark the other slider maybe-shallowest
• Return the intermediate list
• Yes:
• Add it as the tail of the collected list
• Return the collected list
• Its index is larger than our slider's:
• If other node is deeper-than-this:
• Queue it to a deeper-than-this list
• We'll add that list when we construct an intermediate list just before returning.
• It's a queue so nodes stay in order.
• Continue (ie, examine next node)
• Remarks:
• There's no point trying to re-mark it.
• It won't match min-index
• It won't match our slider's index.
• Otherwise:
• Same as for same index.
• If there's no next element (the general case this time):
• Construct an intermediate list consisting of:
• Our slider, if we're leaving it on the list
• Append the deeper-than-this list, if any.
• No tail
• Treat it as in maybe-shallowest, same index.