24 August 2011

git-svn with less pain

How to make an svn clone with git-svn without spending 10 hours

I recently had the mixed experience of cloning the Rosegarden source from its SVN repo with git-svn.

First thing: Do not believe anyone who tells you to just launch:

git svn clone -s REPO-URL 

If somebody tells you to do that, give them a mean glare from me. It will take hours and hundreds of megabytes. I quit after a total of maybe 10 or 11 hours, after maybe a dozen restarts with no end in sight. It had used about 350 megabytes of disk, and it wasn't anywhere near finished downloading.

Second thing: You're going to have to find the latest SVN revision number by hand. At least, I found no way to do it within the git workflow1. You can find it remotely via svn, but if you wanted to use svn you wouldn't be doing this.
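That said, if you do have the svn command-line client handy, the remote lookup is a one-liner (a sketch; exact output formatting varies a bit by svn version):

svn info $URL | grep '^Revision'

The number it prints is the repository's HEAD revision; pick a starting revision somewhat below it.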

What worked for me is:

git svn clone -r $REV:HEAD -s $URL $DIR

where $REV is a fairly recent SVN revision number; $URL and $DIR are obvious.
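To make that concrete, here is the whole shape with made-up values (12000 is purely illustrative; substitute whatever recent revision you looked up):

REV=12000    # made-up example; use a recent revision you found
git svn clone -r $REV:HEAD -s $URL $DIR
cd $DIR
git svn rebase    # later, to pull in any newer SVN revisions

If new revisions land after the clone, git svn rebase (or plain git svn fetch) will pull them in, so there's no harm in starting a little behind HEAD.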

At the risk of making my own post redundant, this Stack Overflow thread showed me how to do it.

Footnotes:

1 I tried just initting the directory with git-svn and launching various informational git-svn commands from inside it; none worked.

17 August 2011

Simulated evolution of dark matter supports PDM

Simulated evolution of dark matter supports Parallel Dark Matter

This is going to be a short one. I just read a link that tends to support PDM. The story was actually out a few weeks ago but I didn't find it until just now.

Dark matter is similar to that of visible matter. ScienceDaily. Retrieved August 17, 2011, from http://www.sciencedaily.com/releases/2011/07/110721102021.htm

Summary

A team of astrophysicists and cosmologists found that "large cosmic structures made up of dark and normal matter evolve along the same lines."

PDM predicts this, and most other dark matter theories don't.

Journal Reference

M. Demianski, A. Doroshkevich, S. Pilipenko, S. Gottloeber. Simulated evolution of the dark matter large-scale structure of the Universe. Monthly Notices of the Royal Astronomical Society, 2011; 414 (3): 1813 DOI: 10.1111/j.1365-2966.2011.18265.x

15 August 2011

Crazy idea? Parallel Dark Matter

General background

A lot of ink and electrons have been spilled over the question of what dark matter is. A theory about it has been kicking around in my head for a few years now. It could be completely wrong. But I haven't seen it out there before, and it doesn't obviously fly in the face of observations, so I'm putting it out there. If it's wrong, so be it.
I think most of my readers are already familiar with dark matter, so I'm just linking to the wiki entry. But I do want to highlight a few points:
  • Whatever dark matter is, it's probably not an undifferentiated swarm of particles (WIMPs usually). This is known from the bullet cluster (1E 0657-558) and similar observations.
  • There's not enough gravitational lensing for it to be mostly or all MACHOs.
  • And despite some neat fits to observations, theories like MOND that modify gravity have extreme theoretical difficulties.
  • While dark matter is generally co-located with visible galaxies, there are seemingly random exceptions in both directions. Eg, NGC 4736 and VIRGOHI21.

Parallel Dark Matter

Probably the easiest way to explain Parallel Dark Matter is to start by pointing at braneworld cosmology. That doesn't mean PDM is committed to Braneworld cosmology, just that they fit neatly.
Braneworld cosmology says that everything we see, everything in the visible universe, is actually stuck to the surface of a brane. The only things that aren't stuck to it are gravitons, and that's used to explain why gravity is so weak. One braneworld theory, the Ekpyrotic universe, proposes two branes which collided and more-or-less made the Big Bang.
PDM proposes that there are 6 branes including ours, all about equally full. It proposes that dark matter is the matter of the other branes, whose gravity alone escapes those branes. Five hidden branes about as full as ours would account, to a first approximation, for the roughly 5:1 ratio of dark to visible matter that observations suggest.

A few immediate objections

But branes+gravitons is more complex than that

Q: Gravitons of low energy might not escape branes, or might not get back in (a graviton might polarize a brane and bounce off). AFAIK, we can't measure that. So 5 hidden branes might actually not give the 5:1 gravitational pull that we see.
A: All true. The simplest form of PDM ignores that, so it gives just a first approximation. To a second approximation, it might be one of the following:
  • The required energy to escape/enter is so low as to be negligible.
  • It's not negligible and there are more than 6 branes.
  • The substrate is something other than branes.
If PDM comes to seem likely, it might let us see which is the case and provide a means for estimating the escape and entry energy.

So where's parallel earth?

Q: Where are the 5 parallel earths?
A: There aren't any. PDM does not propose that the other branes have parallel structure in their details. The neighborhood of our solar system is likely to be about as empty of dark matter as some randomly chosen equal-sized region of interstellar space.

How is it a halo?


Q: Why hasn't the dark matter collapsed into a galactic plane? Why is it still a spherical halo?
A: It will sometimes have collapsed into a plane individually for each brane, but in aggregate it will still appear as a halo. Like a honeycomb paper party ball, each of its planes is flat, while the whole still approximates a sphere.
[Image: honeycomb paper party ball]
Of course the illustration overstates the case. It has a few dozen planes instead of 5 (dark) ones, and their alignment is regular, so it makes an approximate sphere efficiently. In PDM the orientation of galactic planes should be random, so they don't make a sphere as efficiently.
Dark galaxies would probably be concentric with each other and with the associated visible galaxy, but random in orientation.

But that's MACHOs!

Q: But MACHOs were ruled out by lensing observations. That would rule out "parallel brane" MACHOs too.
A: The non-dark universe is mostly not massive compact objects. By mass it's mostly diffuse gas and plasma; stars are a minority, and compact objects a small minority. And the MACHO observations leave room for a fair fraction of dark matter to be MACHOs, just not all or most of it.

How could it be tested?

Some predictions of PDM that seem testable:
  • In a small percentage of spiral galaxies, the dark matter should be not a halo but a disk. I said earlier that, for dark spiral galaxies, their alignment is random with respect to our own and each other. So sometimes by pure chance they will all nearly line up.
    • Further prediction: such disks will have no particular orientation with respect to the visible galaxy's disk.
  • Unseen stellar partners should be more common closer to the galactic center. That's because dark objects should be gravitationally capturable just like visible objects. However, they would tend to move faster relative to visible objects, because they tend to be orbiting in a different plane, so they're still individually less likely to be captured than visible objects.
    • The incidence of unseen stellar partners, in proportion to captured visible stellar partners, should be noticeably higher inside the galactic bulge, where all the dark companion galaxies overlap ours.
    • It should not vary so much within the galactic bulge.
    • It should have a very fuzzy boundary due to drift and to inexact overlap.
  • Unfortunately I've lost the reference for this, so it's going to be vague. I figured I'd still put it out there as a possible disconfirmation. A few years back, astronomers found a flow of dark matter in our own galaxy. If PDM is true, it seems likely that such a flow would be part of a dark galaxy galactic plane. It should be approximately in a plane that goes thru the center of our galaxy.
    • There would be other dark flows in the same direction along the same galactic radius, within a few degrees to account for the dark disk's thickness.
    • There would be dark flows in the opposite direction along the opposite galactic radius, again within a few degrees.

General disclaimer

This is just a hypothesis I'm putting out there. If it's wrong, so be it.

10 August 2011

Trade Logic: Branching quantifiers 2

Branching quantifiers in Trade Logic, part 2

Previously

Earlier I introduced Trade Logic. It's a form of logic that I designed to be suitable for connecting prediction markets. Last time, I built the branching quantifier QH in terms of Trade Logic, and now I'm going to build QL and Q>=N.

QL

QL (the Rescher quantifier) is a branching quantifier that basically means "less than or equal in number". That's one of the neat things that branching quantifiers can do that the classical quantifiers \(\forall\) and \(\exists\) can't.

As the description implies, we're going to need an equivalence predicate to build QL. Trade Logic deliberately doesn't provide a global equivalence predicate, so that will be a parameter.

For convenience, I'm also going to define iff (if and only if) as:

(or (and p q)(and (not p)(not q)))

It's not a predicate, because it takes formulas which are not an exposed type of object. With iff and QH, I can easily build QL. This is really just a transcription of the usual definition.
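For reference, the usual definition being transcribed says, in standard notation (my transcription), that there are no more φs than ψs:

$$ Q_L\,x\,y\,(\phi, \psi) \;\equiv\; Q_H\,x_1\,x_2\,y_1\,y_2\; \bigl[ (x_1 = x_2 \leftrightarrow y_1 = y_2) \wedge (\phi(x_1) \rightarrow \psi(y_1)) \bigr] $$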

(define-predicate (Q_L 
                     U 
                     (the ,(pred thing thing) Equal) 
                     (the ,(pred thing) phi)
                     (the ,(pred thing) psi))
   (Q_H U
      (lambda (x_1 x_2 y_1 y_2)
         (and
            (iff (Equal x_1 x_2) (Equal y_1 y_2))
            (if (phi x_1) (psi y_1))))))

infinite aka Q>=N

We can verbally define infinite sets as sets where no matter how many elements you enumerate, there is always another element. We'll treat that as an existential quantification outside a QL comparison. So I'll define infinite as:
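Spelled out in the same notation (this is my reading of the definition below, not a separate source): φ is infinite just if there is an a in φ such that φ is no more numerous than φ with a removed:

$$ \mathit{infinite}(\phi) \;\equiv\; \exists a \, \bigl[ \phi(a) \wedge Q_L\,x\,y\,\bigl(\phi(x),\; \phi(y) \wedge y \neq a\bigr) \bigr] $$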

(define-predicate (infinite U (the ,(pred thing thing) Equal) phi)
   (let
      ((p 
          (lambda (a)
             (Q_L U Equal phi
                (lambda (y) (and (not (Equal a y)) (phi y)))))))
      ;;"\exists a" around the entire Q_L part.
      (if (best p U +a) (and (phi a)(p a)))))

Side note: The LaTeX formulas

I thought the LaTeX formulas were formatting OK in Blogger because they looked OK in my previews. But apparently they're not. My mistake. I'll see if I can do something about them. But formatting math is not something I know a lot about.

Trade Logic: Branching quantifiers 1

Branching quantifiers in Trade Logic, part 1

Previously

Earlier I introduced Trade Logic. It's a form of logic that I designed to be suitable for connecting prediction markets. It is logic that "lives" within the system, not in individual traders' analyses.

As an experiment, I've been building towards writing the Internal Set Theory axioms1 in Trade Logic formulas.

Branching quantifiers

The Internal Set Theory axioms require the predicate finite. This can be built using branching quantifiers. If we had used the "classical" quantifiers instead of best, we'd need to add more quantifiers now. Fortunately, branching quantifiers can be built with best, though unfortunately not in a straightforward way.

We're also going to have to extend the mode system. Earlier, I said that the mode system required that there exist an ordering in which all variables are bound before they are used. But branching quantifiers bind variables simultaneously, in a sense. This sense is more obvious when using best, which requires applying the target predicate, which means that each branch needs values that are bound in the other branch.

We're going to start by building the simplest Henkin quantifier QH, then QL on top of that, then Q>=N, which means infinite. All will be parameterized on U (a universal set). The latter two are also parameterized on an identity predicate.

QH

The basic idea is that the "test" clause of if is a conjunction of the best's of the two branches, while the formula dependent on all the objects is the "then" clause. In Trade Logic, if is always single-branched; it abbreviates (or (not A) B).

That almost works. The problem is that each branch actually nests two applications of best. We could easily rearrange it to get the outer variable, but we still need a way to retrieve the inner variable.

Each branch, if it were to exist in isolation, would be of the form $$ \forall x \exists y \phi(x,y) $$ which we write as2:

(let
   ;;Curried \exists y \phi(x,y)
   ((psi (lambda (X)
            (if
               (best (lambda (Y) (phi X Y)) U y_1)
               (phi X y_1)))))
   ;;\forall x \psi
   (if (best (lambda (X) (not (psi X))) U x_1) (psi x_1)))

We'll define a slightly altered version of this that extracts the variables, best_AE.

(define-predicate (best_AE phi U +x_out +y_out)
   (let
      ;;Curried \exists y \phi(x,y).  Still consults \phi
      ((psi (lambda (X)
               (if
                  (best (lambda (Y) (phi X Y)) U +y_out)
                  (phi X y_out)))))
      ;;\forall x \psi.  Doesn't consult \phi.
      (best (lambda (X) (not (psi X))) U +x_out)))

Note the resemblance between the formula for $$ \exists y \phi(x,y) $$ and a Skolemized variable. It takes a parameter, the same variable that $$ \forall x $$ binds. That means that when we incorporate this into a multi-branched configuration, that subformula will only "see" the universal binding that dominates it, which is just what we need for branching quantifiers.
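To spell the resemblance out: the standard second-order reading of the Henkin quantifier is a pair of Skolem functions, each depending only on the universal variable of its own branch. (Trade Logic has no functions, so this is just the classical gloss, not a Trade Logic formula.)

$$ Q_H\,x_1\,x_2\,y_1\,y_2\; \phi(x_1, x_2, y_1, y_2) \;\equiv\; \exists f \, \exists g \; \forall x_1 \forall x_2 \; \phi\bigl(x_1, x_2, f(x_1), g(x_2)\bigr) $$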

With best_AE, we can define Q_H. NB, the branches of the conjunction output complementary arguments. First we'll define best_H:

(define-predicate (best_H Phi U +x_1 +x_2 +y_1 +y_2)
   (bind-unordered (x_1 x_2 y_1 y_2)
      (and 
         (best_AE
            (lambda (X Y)
               (Phi X x_2 Y y_2))
            U +x_1 +y_1)
         (best_AE 
            (lambda (X Y)
               (Phi x_1 X y_1 Y))
            U +x_2 +y_2))))

Sharp-eyed readers will notice two things:

  • Contradictory ordering requirements. We have to bind x_1 before we can bind x_2, but also have to bind x_2 before we can bind x_1.
  • New predicate bind-unordered. It works with the mode system and loosens the requirement that there exist an ordering in which all variables are bound before they are used. Within its scope, the ordering rules are suspended among the variables listed in the first argument. I'm not yet sure what the proper rules are for it. It's likely to block decomposition in most cases.

Having defined best_H, we can define Q_H as simply:

(define-predicate
   (Q_H Phi U)
   (if (best_H Phi U +x_1 +x_2 +y_1 +y_2)
      (Phi x_1 x_2 y_1 y_2)))

Footnotes:

1 Why that? Just because I happen to like illimited numbers.

2 I've changed pred to lambda. It means the same. As I wrote it more often, I realized that I preferred the familiar lambda even if it does wrongly suggest a function.

09 August 2011

Trade logic: Objects are sampleable

All objects are sampleable

I mentioned earlier in passing that in trade logic, all things are sampleable. As I put it, "all variables are ultimately instantiatable".

Things can be finite sets of objects with some probability distribution over them. They can be trivially sampleable, such as single objects, where sampling them always gives the same object. It's even possible that they may be measurable collections that have no distinct individuals, from which one selects measured extents.

And the time has come to nail down how Trade Logic is to do sampling (selection).

Population bets

Settling population bets

I'll call a bet whose formula uses pick01 (explicitly or implicitly) a population bet. Population bets are useful for propositions that can easily be resolved for individuals but less easily for distributions.

Population bets that are trying to be resolved1 can pick particular individuals to resolve. Then they can settle probabilistically on the basis of that. It's basically using statistics.

The procedure for selecting an individual is to:

  • Independently choose a value for each pick01 directly or indirectly included in the proposition.
  • Using those values, assess the issue's truth or falsity. (The Bettable selectors logic below should ensure that this step works)
  • Repeat as wanted. We sample with replacement; for simplicity and generality, there is no provision for not looking at the same individual again.
  • Using statistical logic (outside the scope here), settle the bet (maybe sometimes just partly settle it). One toy rule is sketched below.
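Purely as an illustration of that last step (not something this design commits to), the crudest rule would settle at the observed sample fraction: if k of the n sampled individuals make the proposition true, settle at

$$ \hat{p} = k / n $$

A real mechanism would presumably prefer interval estimates and partial settlement over a bare point value.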

best redux

It seems that I didn't adapt epsilon enough when I borrowed it as best. It doesn't make a sampleable output. Rather, its output is the set of all the most suitable objects in the universe. That's not necessarily finite, much less measurable. So there's no good general distribution over that. It would be paradoxical if there were.

So best needs another parameter: a sampleable collection of objects. That can be any object in Trade Logic. This parameter will generally be quite boring; often it's simply the whole universe of discourse. Nevertheless, it's needed.

So (best Pred From To)2 is now true just if:

  • Pred is a unary predicate
  • To is the part of From that satisfies Pred as well as anything else in From does

NB, I have not called To a "member" of From, as if From were a set. In general, From is a collection. To should be a collection in general too. And we don't want to need an identity predicate as another parameter. And finally, we don't try to resolve "sub-part boundaries" here; those are addressed via structurally similar bets that replace calls of best with suitable particularizations. Nevertheless, I am not 100% sure I have supported collections adequately in this mechanism.
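For concreteness, here is how the pseudo-quantification formulas from the earlier post look with the extra parameter, taking the whole universe of discourse U for From (my transcription, using the new in/out argument order), for the universal and existential cases respectively:

(if (best (lambda (Y) (not (-p Y))) U +x) (-p -x))

(if (best -p U +x) (-p -x))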

Footnotes:

1 When a bet is "trying to be resolved" is beyond the scope here, but I have some ideas on the topic.

2 I took the opportunity to put the arguments into in/out order.

Trade Logic: Missed a type ctor

07 August 2011

Pseudo-quantification in Trade Logic

Pseudo-quantification in Trade Logic via best

Previously

Earlier I introduced Trade Logic. It's a form of logic that I designed to be suitable for connecting prediction markets. It is logic that "lives" within the system, not in individual traders' analyses.

Not quantification

Trade Logic doesn't use quantification as such. Quantification would complicate the mode system, which is already adequate to distinguish free and bound variables. I also have theoretical concerns about vacuous quantification: If there were no objects that the system could refer to, the relative valuation of (forall x (p x)) and (some x (p x)) would be reversed.

The built-in predicate best

Instead of directly using quantification, Trade Logic uses the built-in predicate best, which is adapted from Hilbert's epsilon operator. best(A,B) is true just if:

  • B is a unary predicate
  • A satisfies B as well as anything else does

I'll expand a little on that last point. That's not the same as "satisfies B". best can be true if no value could satisfy B. In Trade Logic, best can also be true in fuzzy ways:

  • If B can only be satisfied to a certain degree, and A satisfies B to that degree, then best(A,B) is true (crisply, 100%)
  • if A satisfies B to a certain degree, but a lesser degree than some other value would, then best(A,B) is fuzzily true

Quantifiers can be expressed in terms of best, as they could with epsilon. In standard notation, we would write:

\begin{equation} \forall x\, p(x) \Leftrightarrow \bigl( best(x, \neg p(x)) \rightarrow p(x) \bigr) \end{equation}
\begin{equation} \exists x\, p(x) \Leftrightarrow \bigl( best(x, p(x)) \rightarrow p(x) \bigr) \end{equation}

In Trade Logic, the respective formulas are:

(if (best +x (lambda (Y) (not (-p Y)))) (-p -x))

and

(if (best +x -p) (-p -x)) 

Note the addition of modes, and note that "p" is always an in mode. It must be bound outside this (sub)formula.

The behavior of best

A yes of any issue of the form (best +A -B) can be converted to a yes of (& (-X +A) (-B -A)) for any predicate X. Similarly, a no of (& (-X +A) (-B -A)) can be converted to a no of (best +A -B).

X may select A in an arbitrary way, but it will never be better at satisfying B than (best +A -B) is.

This works because no trader would make this conversion unless he got a better price after the conversion. This ensures that the price of best issues is always in fact the highest price. Effectively, existentially quantified issues are always as high or higher in price than each of their particular instances, and universally quantified issues as low or lower than their particular instances.
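Put as an inequality (my notation, not the post's): for any predicate X,

price of (best +A -B)  >=  price of (& (-X +A) (-B -A))

so the best issue can never trade below any particular instantiation of it.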

Best is adapted from epsilon

Best is adapted from Hilbert's epsilon operator. Epsilon (not best) classically has the following properties:

  • It is a function
    • Usually it comes with an axiom of extensionality, ie that the function's result is unique.
  • It takes one argument, a predicate (as a formula)
  • If that predicate can be satisfied, it returns an object that satisfies the predicate
  • If that predicate can't be satisfied, it returns any object at all.
  • With it, one can build statements equivalent to other statements that contain universal and existential quantifiers.
  • The quantifiers all and some can be expressed using it.
\begin{equation} \forall x\, p(x) \Leftrightarrow p(\varepsilon(x, \neg p(x))) \end{equation}
\begin{equation} \exists x\, p(x) \Leftrightarrow p(\varepsilon(x, p(x))) \end{equation}

But Trade Logic doesn't contain functions. What Trade Logic has are fuzzy predicates1. So we use best instead.

Footnotes:

1 When I first saw Epsilon, I got the impression that, for this reason, it wasn't suitable. But I was wrong; it just needed to be adapted.

Trade Logic: Clarification on guarded conjunction

Clarification on guarded conjunction ("Dynamic")

Previously

Earlier I introduced Trade Logic and I said that the well-formedness criterion for formulas could be dynamic in certain limited circumstances. Essentially, if a subformula is conjoined with an appropriate type-check on an object, the object is statically treated as having that type.

Clarification

I forgot to distinguish this sort of conjunction from the conjunction that "and" implements. It's really not the same thing.

Simple conjunction can't do this right. Simple conjunction is built from 2 decompositions of $1. But there's no reasonable way to decompose the half-box in which the guard fails. Doing so would mean that the type-incorrect subformula was used despite being type-incorrect.

guard

Instead, these guarded conjunctions are headed by guard. Like:

(guard (accepts-type X Y) (X Y))

It decomposes like this. Note that one cannot further decompose the half-box where the guard fails.

[Figure: decomposition of a guarded conjunction] https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjRLiT-JTeGkCUDU4uH0Nw4hpKmPQnqPKC0ZNSRgEuEF2mNyOZZ8MydEWU1jVtcLY9cyFfwgSbsKxzI_K3QSBVKn22wzMHMpE4L6G8FNK8A7lT-5LBDzHLo40cLBsHuvIuncoQSEgssh0g/

06 August 2011

Trade Logic 2

More on Trade Logic

Previously

Earlier I introduced Trade Logic. It's a form of logic that I designed to be suitable for connecting prediction markets. It is logic that "lives" within the system, not in individual traders' analyses.

I planned to define standard as an exercise. But when I started to write it, I realized that there were a number of preliminaries that I needed first.

I also realized I had to change a few things. In the places where this post differs from the previous, it supersedes it.

About definitions

Definitions cannot assert

Definitions in Trade Logic deliberately cannot assert facts. And it would be disastrous if they could. If traders could "prove" things by defining them to be true, the whole system would collapse and be useless.

Instead, definitions simply expand to parameterized formulas. Formulas cannot assert facts.1

A "classical" definition of standard would have implicitly asserted that there is one unique predicate standard. But that implicit assertion is banned in Trade Logic. We (deliberately) wouldn't be able to construct a formula that asserts that.

Instead, we would define a predicate that is true just if its argument "behaves the same as" the predicate standard. That is, is-standard would be true if its argument is a predicate that satisfies the axioms of Internal Set Theory as standard would.

NB, this won't (and shouldn't) imply that is-standard's argument is defined in the system. It doesn't necessarily have a global name in the system.

Definitions cannot raise error

Definitions in Trade Logic are not allowed to raise errors. If they could, that would wreak havoc on the system.

They can be undecidable, which causes no great problems for wise bettors. They are also total, meaning that each of their arguments can be of any type. So our definition won't be able to lean on a strong typechecking mechanism.

Definitions are statically simply type-correct

However, there is a weak typechecking mechanism in Trade Logic. It is a well-formedness condition for formulas and predicates and thus for bettable issues. I'm largely borrowing the mechanism from the simply typed lambda calculus. But the simply typed lambda calculus is functional while Trade Logic is relational, and Trade Logic manages modes as well. So I adapt it slightly.

The exposed primitive types are:

  • thing
  • type
  • predicate

The type definitions are:

thing
Everything is a thing. For expressive purposes, either:
  • A variable.
  • Any object of any of the other types.
  • Extensibly, any other literal the system accepts.
type
Either:
  • One of the exposed primitive types
  • A type literal naming a definition of a type in the system
  • A list of types
  • A dotted list of types
  • Dynamically, anything satisfying is-type
call
Not an exposed type. Of the form (P . Args) where:
  • P is a predicate
  • Args is a (possibly empty) tree of things, ie a Herbrand term.
acceptable call
Not an exposed type. Either:
  • A call where Args is statically accepted by P. Ie, where we can prove that beforehand, using only local information.
  • Dynamically, a call satisfying (accepts-argobject P Args)
  • Dynamically, a call satisfying both
    • (accepts-type P T)
    • (satisfies-type Args T)
wff
Not an exposed type. Either:
  • Any issue (ie, predicate with empty argtree)
  • A combination of wffs as defined in the previous post.
  • An acceptable call.
predicate
Either:
  • A predicate literal naming a built-in predicate in the system
  • A predicate literal naming a definition of a predicate in the system
  • A lambda term - a call where:
    • P is the built-in literal pred
    • Args is a list of 2 elements
    • The first argument is a wff
    • The second argument is a mode-spec
    • The whole is mode-correct, as defined in the previous post.
  • Dynamically, anything satisfying is-predicate

Type acceptance

Type acceptance

A predicate P accepts a type T just if T is the type of P's parameter list or is subsumed by it.

The built-in predicate accepts-type

(accepts-type A B) is a binary predicate. It has two modes:

  • (accepts-type - -)
  • (accepts-type - +)

It is true just if:

  • A is a predicate
  • B is a type
  • B is a type that A accepts. In the second mode, B is exactly the given type of A's argobject.

The built-in predicate satisfies-type

(satisfies-type A B) is a binary predicate. It has one mode:

  • (satisfies-type - -)

It is true just if:

  • B is a type
  • A is of type B.

The built-in predicate accepts-argobject

(accepts-argobject A B) is effectively the conjunction of (accepts-type A C) and (satisfies-type B C)

Dynamic checking

Sometimes we don't know a thing's type statically, but we still want to use it or reason about how to use it. This often occurs when we use higher-order logic.

The clauses that say "Dynamically" are interpreted as allowing formulas to be conjunctions of type-checking operators and sub-formulas that require a specific type. Such formulas are interpreted as false if the type is incorrect, rather than erroneous. It is as if the type-check was examined first and the erroneous clause was skipped. (But in general, Trade Logic formulas do not have a particular order in which sub-terms are examined except as dictated by the mode system)

Definitions, redone

Types and predicates can be given names, ie defined in the system. Some trivial constructions of predicates deliberately can't be given names because it would be silly:

  • The predicate literals - they already have names.
  • The dynamic construction. "This predicate is something that satisfies is-predicate" would be pointless.

I might require a checkable proof for places in a construction that require static correctness. That would add an extra argument to acceptable-call and the pred call.

A named predicate can be bet on just if it accepts the empty type (null).

Footnotes:

1 In particular, Trade Logic does not have functions, it has predicates. This rules out Skolemization.

05 August 2011

Superpositionality answers Heidegger

Heidegger's famous question

Martin Heidegger famously asked "Why is there something rather than nothing?" There have been many attempts to answer it, but every single attempt I have seen has been wrong in some important respect. I will propose an answer (skip ahead if you can't wait).

But first I will try to convince you that the existing answers don't work, and then lay some groundwork for my answer.

Some sources

How do I know that I've covered the field of attempted answers well? Why should you believe I have? As opposed to me inventing strawmen, or covering some attempts but not "the good ones".

So here are some sources that already surveyed the attempted answers:

  • Nothingness (Stanford Encyclopedia of Philosophy) More of a flowing discussion than a list of answer candidates. Section 1 is relevant, the other sections less so.
  • The biggest Big Question of all (Shermer)
    1. God
    2. Wrong Question
    3. Grand Unified Theory
    4. Boom-and-Bust Cycles
    5. Darwinian Multiverse
    6. Inflationary Cosmology
    7. Many-Worlds Multiverse
    8. Brane-String Universes
    9. Quantum Foam Multiverse
    10. M-Theory Grand Design
  • Why Is There Something Rather Than Nothing? The Only Six Options (Patton)
    1. The universe is eternal and everything has always existed.
    2. Nothing exists and all is an illusion
    3. The universe created itself
    4. Chance created the universe
    5. The universe is created by nothing
    6. A transcendent being (God) created all that there is out of nothing.

Survey of attempted answers

Answers that only push the question back one step farther

"God made it all"

Covering what

Shermer's answer (1), Patton's answer (6)

The failure

The circularity of this has already been hashed to death. 'Nuff said.

Spontaneous generation (Science version)

Covering what

Michael Shermer's answers 3 thru 10 all fall into this category. Patton's (3) and (4) seem to belong in here too.

The problem with it

Usually this is tied to quantum phenomena, often to quantum fluctuations of the (hypothesized) inflation field, as in Shermer's (9).

But look at it thru the lens of the original question. "Why does anything exist?" leads directly to "Why does this something, the inflation field, exist? (if it does)" and "Why do these particular rules for it, that it can fluctuate and inflate, exist?" And the space and time that the quantum fluctuations inflate in are somethings too, so we have to ask why they exist too.

Note that if any of these question have ordinary answers, like "spin foam pre-existed and became the space-time", this merely pushes the question back one step, "Why does the spin foam exist?".

One can ask similar questions of the other science spontaneous generation answers. I won't bore you or myself by ringing changes on this theme across all of the science-y answers.

So this entire pattern of answer is a non-answer that can never truly answer "Why does anything exist?"

Probabilistic generation

What it covers

Discussion in Nothingness

Even if "Nothing exists" [is] the uniquely simplest possibility [], why should we expect that possibility to be actual? In a fair lottery, we assign the same probability of winning to the ticket unmemorably designated 321,169,681 as to the ticket memorably labeled 111,111,111.

The problem with it

Here the "something" assumed in the answer is much more subtle. Why should this cosmic roll of the dice cause a world to exist? I roll dice all the time in tabletop RPGs. This has yet to cause the things I roll up to pop into actual existence. Why is this cosmic roll of the dice different? What "breathes life" into it?

Whatever thing breathes life into it constitutes a subtle something that's assumed by the answer. So again we can ask, "Why does that something exist?"

Another issue

The Stanford Nothingness entry notes that the assumption that there's one empty world (nothingness) can be questioned. Is there at most one empty world?

Not too far off though

Nevertheless, this approach does hint at the answer that I give.

Answers that try to change the question

"Why not?"

The problem with it

When it's put as simply as this, it's obvious that it's just dodging the question. Next I'll look at some more sophisticated attempts to undercut the coherence of the question.

The universe has always existed

What it covers

Patton's (1)

The problem with it

It's a sleight of hand. It focusses on a tangential element of the question and then removes that element. The essential question goes unanswered.

Ordinarily when we speak of something existing, there was a moment at which it came into existence, or at least a time-frame in which it did. But that's a misleading intuition pump: easy to imagine, because it's commonplace, but it really doesn't fit the question. The question wasn't "When did stuff come into existence?" or even "Why, when it came into existence, did it do so?"

If the universe has always existed and stretches backwards in time forever, well then, the question becomes why that backwards stretch:

  • contains something rather than nothing.
  • itself exists

"Wrong Question"

What it covers

Shermer's answer (2) at first glance appears to fall here (but it mostly won't)

"Somethingness" is the natural state of things.

The problem with it

Saying that somethingness is more natural than nothingness is saying that there is some meta-rule that favors somethingness over nothingness. Well, that meta-rule is a something. So ask again: why does that something exist? So on closer inspection, this answer is mostly a species of Spontaneous generation (Philosophical version).

But one part of the issue does belong under this subheading, not there. Having asked "Why does the meta-rule exist?", one might answer the same way again: "Its existence - its somethingness - is more natural than its non-existence". Ie, appeal again to the meta-rule itself. So re-raising the original question does not immediately defeat this answer. The fixpoint here is in positive territory, as it were, not in negative territory. Before, in the answers that only push the question back one step farther, the fixpoint was in negative territory.

This answer still has serious problems.

  • It's entirely circular; not necessarily false, but it doesn't resolve anything.
  • One needs to ask why this fixpoint of meta-rules is selected as "real" and capable of self-support, when other fixpoints are not. What breathes life into somethingness-is-natural and not into others? NB, this question is "why choose this?", not "why does anything exist?"
  • And not least, Occam's Razor. I've ignored it thru this whole discussion so far, but it's important. Occam's Razor is completely contrary to somethingness-is-natural and has enormous empirical and intuitive support.

"Everything exists" is as simple as "Nothing exists"

What it covers

Discussion in Nothingness.

As far as simplicity is concerned, there is a tie between the nihilistic rule "Always answer no!" and the inflationary rule "Always answer yes!". Neither rule makes for serious metaphysics.

The problem with it

"Everything exists" not the same as "Something exists". So this argument fails to put "Something exists" on an equal footing with "Nothing exists".

Experiencing nothingness

Experiencing nothingness itself

If nothing existed, what exactly would you notice?

Of course you wouldn't see big dark shadows and hear the hollow echoes of sounds you make. You wouldn't have eyes to see them, or ears, or a brain to appreciate the experience. You would notice exactly nothing.

Experiencing everythingness

Earlier, we saw that "Everything exists" is as simple as "Nothing exists". So Occam's Razor is as favorable to everythingness as it is to nothingness. If my answer is to be reasonable, it can't ignore everythingness just because Heidegger didn't mention it.

So let's ask, in exact parallel: If everything existed, what exactly would you notice?

Of course you wouldn't see a big pink elephant and then a leprechaun dancing with a poodle in a kaleidoscope. That's a chaotic parade of some of the individual things you could possibly see, but it's not experiencing everything at once. Not by a long shot.

What would you experience, if you experienced everything at once, with nothing at all left out?

Well, you couldn't localize it. You couldn't understand it, or pin it down as being some particular thing. You couldn't even pin it down as some particular thing that it wasn't.

What about your eyes and your brain? That's the most bizarre part. You'd have every possible pair of eyes and every possible brain. In everythingness, every question of the form "Does X exist?" gets the answer "yes". "Does brain X exist?" (yes) "Does brain X, additionally having the property of being your very own thinking organ, exist?" (yes)

So I think that what you'd experience in everythingness would be completely formless and indistinct. Essentially the same as the experience of nothingness. And I think that if you experienced normal existence and nothing and everything at the same time, it would again add up to just normal experience.

Superpositionality

Briefly

Superpositionality or "quantum superposition" holds that a system is "really" in a state that is an overlap of all of the possible configurations. By "really", we mean in the view of someone outside the system - in the bird's-eye view, as it were.

Coincidentally, Shermer's answer (7) is about Many-Worlds (M-W), which implies superpositionality. He didn't seem to notice the connection to his question.

M-W also implies something else that I will use in my argument: superpositions are parsimonious. M-W is extremely Occam-friendly. This seems to surprise people who don't understand M-W.

The question assumes too much

Earlier, I chided answers that try to change the question. So I have to be careful not to commit the same sin myself. Nevertheless, if a question assumes too much, it's OK to challenge those assumptions. Just play fair.

There is one very subtle assumption in the question: "Why is there something instead of nothing?"1

The question assumes that one or the other is the case. It's an obvious and mundane assumption, but one that doesn't work in such a basic philosophical question. I propose that it's one assumption too many. I'm going to remove that one assumption and then answer the question.

Misunderstanding averted

I'm not saying that superpositionality gives rise to physical existence. That would be wrong in several ways.

  • It's not that superpositional principles "act on" the nothingness and generate things out of it. The nothingness remains completely intact, as it were.
  • Superpositionality just doesn't do that. It does not create things.
  • And if I said that, I'd be sinning against one of my own pet peeves. By no means am I trying to dazzle anyone with Deep Science. Quite the contrary, I'm building my explanation from reasoning that I hope will already be familiar to my readers.

Footnotes:

1 The same assumption may occur in a more subtle form in "Why does anything exist?" - which can be taken as contrasting with the possibility of that thing not existing, ignoring the possibility of both being the case.

Prosperity defined?

Prosperity defined? (Maybe)

I recently read a paper called Information, Utility & Bounded Rationality, by Pedro A. Ortega and Daniel A. Braun.

They define what they call "free utility", akin to free energy in thermodynamics. They propose that free utility can be used as a variational principle as in control theory, leading to bounded optimal control solutions and recovering some well-known decision theory results.

So they found a connection between decision theory and thermodynamics. And it looks to be a deep one.
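For readers who want the flavor of it, the functional is roughly of the following form (notation mine, simplified from the paper): given a prior policy P0, a candidate policy P, a utility U, and a resource parameter α,

$$ J(P) \;=\; \sum_x P(x)\,U(x) \;-\; \frac{1}{\alpha} \sum_x P(x)\,\log\frac{P(x)}{P_0(x)} $$

which trades expected utility against an information cost, in parallel with free energy in thermodynamics; as α grows without bound the information cost vanishes and plain maximum expected utility is recovered, matching the last line of the abstract below.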

Another thing I liked

Decision theory tends to start by neglecting the cost of making decisions. It adds it on later, as Value Of Perfect Information (VOPI) and bounded rationality. It tends to feel bolted on, heterogeneous.

Free utility OTOH feels like a homogeneous concept. If I had to give that concept a familiar name, it would be "prosperity".

Abstract of the paper

Perfectly rational decision-makers maximize expected utility, but crucially ignore the resource costs incurred when determining optimal actions. Here we employ an axiomatic framework for bounded rational decision-making based on a thermodynamic interpretation of resource costs as information costs. This leads to a variational "free utility" principle akin to thermodynamical free energy that trades off utility and information costs. We show that bounded optimal control solutions can be derived from this variational principle, which leads in general to stochastic policies. Furthermore, we show that risk-sensitive and robust (minimax) control schemes fall out naturally from this framework if the environment is considered as a bounded rational and perfectly rational opponent, respectively. When resource costs are ignored, the maximum expected utility principle is recovered.