24 December 2011

Trade Logic - what mechanisms would a system need?

The mechanisms that a Trade Logic system would need

Previously

I introduced and specified Trade Logic, and posted a fair bit about it.

Now

Here's another post that's mostly a bullet-point list. In it I try to nail down all the mechanisms TL needs in order to function as intended. Some of it, notably the treatment of selector bets, is still only partly defined.

The mechanisms that a TL system needs

  • Pre-market
    • Means to add definitions (understood to include selectors and bettable issues)
      • Initiated by a user
      • A definition is only accepted if it completely satisfies the typechecks
      • Definitions that are accepted persist.
  • In active market
    • Holdings
      • A user's holdings are persistent except as modified by the other mechanisms here.
    • Trading
      • Initiated by a user
      • User specifies:
        • What to trade
        • Buy or sell
        • Price
        • How much
        • How long the order is to persist.
      • A trade is accepted just if (see the sketch after this list):
        • The user holds that amount or more of what he's selling, and
        • Either it can be met from booked orders, or
        • It is to be retained as a booked order
      • Booked orders persist until either:
        • Acted on
        • Cancelled
        • Timed out
    • Issue conversion
      • Initiated by a user
      • User specifies:
        • What to convert from
        • What to convert to
        • How much
      • A conversion is accepted just if:
        • It's licensed by one of the conversion rules
        • User has sufficient holdings to convert from
  • Selectors - where the system meets the outside world. Largely TBD.
    • As part of each selector issue, there is a defined mechanism that can be invoked.
      • Its actions when invoked are to:
        • Present itself to the outside world as invoked.
          • Also present its invocation parameters.
          • But not reveal whether it was invoked for settlement (randomly) or as challenge (user-selected values)
        • Be expected to act in the outside world:
          • Make a unique selection
          • Further query that unique result. This implies that selector issues are parameterized on the query, but that's still TBD.
          • Translate the result(s) of the query into language the system understands.
          • Present itself to the system as completed.
        • Accept from the real world the result(s) of that query.
          • How it is described to the system is TBD. Possibly the format it is described in is yet another parameter of a selector.
      • How an invocable mechanism is described to the system is still to be decided.
      • (NB, a selector's acceptability is checked by the challenge mechanism below. Presumably those selectors that are found viable will have been defined in such a way as to be randomly inspectable etc; that's beyond my scope in this spec)
    • Challenge mechanism to settle selector bets
      • Initiated by challengers
        • (What's available to pro-selector users is TBD, but probably just uses the betting mechanism above)
      • Challenger specifies:
        • The value that the choice input stream should have. It is basically a stream of bits, but:
          • Where split-random-stream is encountered, it will split and the challenger will specify both branches.
          • Challenger can specify that from some point on, random bits are used.
        • Probably some form of good-faith bond or bet on the outcome
      • The selector is invoked with the given values.
      • Some mechanism queries whether exactly one individual was returned (partly undecided).
      • The challenge bet is settled accordingly.
    • There may possibly also be provision for multi-turn challenges ("I bet you can't specify X such that I can't specify f(X)") but that's to be decided.
  • Settling (Usually post-market)
    • Public fair random-bit generating
      • Initiated as selectors need it.
      • For selectors, some interface for retrieving bits just as needed, so that we may use pick01 etc without physically requiring infinite bits.
        • This interface should not reveal whether it's providing random bits or bits given in a challenge.
    • Mechanism to invoke selectors wrt a given bet:
      • It decides:
        • Whether to settle it at the current time.
        • If applicable, "how much" to settle it, for statistical bets that admit partial settlement
      • Probably a function of:
        • The cost of invoking its selectors
        • Random input
        • An ancillary market judging the value of settlement
        • Logic that relates the settlement of convertible bets. (To be decided, probably something like shares of the ancillary markets of convertible bets being in turn convertible)
      • When activated, it invokes particular selectors with fresh random values and queries them.
      • In accordance with the results of the query:
        • Shares of one side of the bet are converted to units
        • Shares of the other side are converted to zeros.
        • But there will also be allowance for partial results:
          • Partial settlement
          • Sliding payoffs
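
To make the trading rules above concrete, here's a rough sketch in Python of the order-acceptance check. All the names (Order, matches, accept, ttl) are mine for illustration, not part of the spec, and the matching rule is simplified to a single price comparison.

# Hypothetical sketch of the order-acceptance rule described above: a sale must
# be covered by holdings, and an order is accepted only if it matches a booked
# order or is itself retained as a booked order.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Order:
    issue: str     # what to trade, e.g. "yes of (a & b)"
    side: str      # "buy" or "sell"
    price: float   # fraction of $1
    amount: float
    ttl: float     # how long the order is to persist; 0 means fill-or-kill

def matches(new: Order, booked: Order) -> bool:
    # A buy matches a booked sell at or below its price, and vice versa.
    if new.issue != booked.issue or new.side == booked.side:
        return False
    buy, sell = (new, booked) if new.side == "buy" else (booked, new)
    return buy.price >= sell.price

def accept(new: Order, holdings: Dict[str, float], book: List[Order]) -> bool:
    if new.side == "sell" and holdings.get(new.issue, 0.0) < new.amount:
        return False                      # seller must hold what he sells
    if any(matches(new, b) for b in book):
        return True                       # can be met from booked orders
    if new.ttl > 0:
        book.append(new)                  # retained as a booked order
        return True
    return False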

Trade Logic pick-boolean

Previously

I initially said that the basic choice operator in TL was pick01, which outputs a random scalar between 0 and 1.

What's wrong with pick01 as basic

It occurs to me that picking a random scalar is not the most basic thing we could do.

It also occurred to me that pick01 as defined needs to provide infinite precision, which of course isn't physically realizable.

It couldn't be sensibly defined to generate both exact 0 and exact 1.

And the fact that I forget whether its interval was left-closed or right-closed suggests that there's a problem there.

So pick-boolean

So from infinite to minimal: pick-boolean is analogous to pick01, except that it outputs a bit, not a scalar.

But it shouldn't be unary

The temptation is to make pick-boolean unary. But if we did, it wouldn't be like other unary predicates: when two instances of pick-boolean have the same inputs (ie, none), the outputs are not necessarily the same. This lack of referential transparency potentially infects the outputs of any predicate that uses pick-boolean, which would restrict and complicate the conversions we are allowed to make.

So instead, let's provide a system random stream. A "random output" predicate such as pick-boolean will actually be ternary; its args will be:

  • Input bitstream
  • Output bitstream
  • Output random

This has the nice property that it gives us an upper bound on a proposition's sample space: It's the difference in position between the input stream and the output stream. Of course, that is not always predetermined.

We said earlier that bettable propositions had no arguments, and we need these to be bettable, so we have to revise slightly: A definition is bettable if its only inputs and outputs are system random streams, which will be a type.

There's another way this treatment is helpful: Sometimes we want to sample to settle issues (population bets). Such issues can sample by using the random stream argument, which feeds any selectors that are involved.

Random stream arguments also help validate selectors by challenge: the challenger specifies the bitstream. In this one case, the output bitstream is useful. It gives the various random calls an ordering; otherwise the challenger couldn't specify the bitstream.

Stream splitting

But sometimes a single stream doesn't work nicely. A predicate like pick01 may need to use arbitrarily many bits to generate arbitrary precision. So what can it do? It shouldn't consume the entire stream for itself, and it shouldn't break off just a finite piece and pass the rest on.

So let's provide split-random-stream. It inputs a random stream and outputs two random streams. We'll provide some notation for a challenger to split his specified stream accordingly.
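
Here's a rough sketch, in Python, of how pick-boolean and split-random-stream might thread a stream around. The splitting scheme (one fresh bit to each branch per pull) is just an assumption for illustration; nothing above fixes it.

# Hypothetical sketch of a random stream with pick-boolean and split-random-stream.
# The interleaved splitting scheme is an assumption, not part of the spec.
import random
from typing import Callable, Tuple

Stream = Callable[[], int]   # a stream is something you can pull one bit from

def system_stream() -> Stream:
    # stands in for the system's public fair random-bit source
    rng = random.Random()
    return lambda: rng.getrandbits(1)

def pick_boolean(stream: Stream) -> Tuple[Stream, int]:
    """Ternary shape: input stream, output (advanced) stream, output random bit."""
    bit = stream()
    return stream, bit

def split_random_stream(stream: Stream) -> Tuple[Stream, Stream]:
    """One stream in, two streams out; the branches draw disjoint bits."""
    buffers = ([], [])
    def branch(i: int) -> Stream:
        def pull() -> int:
            while not buffers[i]:
                buffers[0].append(stream())   # one bit for each branch per pull
                buffers[1].append(stream())
            return buffers[i].pop(0)
        return pull
    return branch(0), branch(1)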

Can't be built by users

We will provide no way to build an object of this new type from first principles. Users can get two of them from one by using split-random-stream, but can't get one from none. So the only way to have one is to get it from an input. Ultimately any proposition that uses a random stream must get it from system mechanisms of proposition resolution such as sampling and challenging.

New pieces

So let's define these built-ins:

  • Type: random-stream, the type of the system's random stream. No built-in predicate has any mode that outputs this type but doesn't also input it.
  • split-random-stream, ternary predicate that inputs a random stream and outputs two random streams.
  • pick-boolean, a ternary predicate outputting a bit.
  • Type: lazy scalar. A lazy scalar can only be compared with given precision, not compared absolutely. So any comparison predicates that take this also manage precision, possibly by inputting an argument giving the required precision.
  • pick01, ternary, outputs a lazy scalar (see the sketch after this list).
  • Convenience
    • Maybe pick-positive-rational, similar but picking binarily from a Stern-Brocot tree.
    • Maybe pick-from-bell-curve, similar but picking lazily from the standard bell curve.
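
And here's a rough sketch of what a lazy scalar might look like (the pick01 bullet above): bits of the binary expansion are drawn only as a comparison demands more precision. The names are mine for illustration only.

# Hypothetical sketch of pick01's output as a lazy scalar in [0, 1]: bits of the
# binary expansion are pulled from the stream only as a comparison needs them.
import random

class LazyScalar:
    def __init__(self, stream):
        self.stream = stream      # callable returning one random bit
        self.bits = []            # bits of the expansion drawn so far

    def bit(self, i):
        while len(self.bits) <= i:
            self.bits.append(self.stream())
        return self.bits[i]

    def less_than(self, other, precision):
        """Compare up to `precision` bits; None means equal to that precision."""
        for i in range(precision):
            a, b = self.bit(i), other.bit(i)
            if a != b:
                return a < b
        return None

rng = random.Random()
x, y = LazyScalar(lambda: rng.getrandbits(1)), LazyScalar(lambda: rng.getrandbits(1))
print(x.less_than(y, precision=16))

pick01 would then be the predicate relating an input stream, an output stream, and such a lazy scalar.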

10 August 2011

Trade Logic: Branching quantifiers 2

Branching quantifiers in Trade Logic, part 2

Previously

Earlier I introduced Trade Logic. It's a form of logic that I designed to be suitable for connecting prediction markets. Last time, I built the branching quantifier QH in terms of Trade Logic, and now I'm going to build QL and Q>=N.

QL

QL (the Rescher quantifier) is a branching quantifier that basically means "less than or equal in number". That's one of the neat things that branching quantifiers can do that the classical quantifiers \(\forall\) and \(\exists\) can't.

As the description implies, we're going to need an equivalence predicate to build QL. Trade Logic deliberately doesn't provide a global equivalence predicate, so that will be a parameter.

For convenience, I'm also going to define iff (if and only if) as:

(or (and p q)(and (not p)(not q)))

It's not a predicate, because it takes formulas, which are not an exposed type of object. With iff and QH, I can easily build QL. This is really just a transcription of the usual definition.
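
For reference, the usual definition I'm transcribing, with the Equal parameter written as =, is:

\begin{equation}
Q_L[\phi, \psi] \Leftrightarrow \begin{pmatrix} \forall x_1 \, \exists y_1 \\ \forall x_2 \, \exists y_2 \end{pmatrix} \big[ (x_1 = x_2 \leftrightarrow y_1 = y_2) \wedge (\phi(x_1) \rightarrow \psi(y_1)) \big]
\end{equation}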

(define-predicate (Q_L 
                     U 
                     (the ,(pred thing thing) Equal) 
                     (the ,(pred thing) phi)
                     (the ,(pred thing) psi))
   (Q_H
      (lambda (x_1 x_2 y_1 y_2)
         (and
            (iff (Equal x_1 x_2) (Equal y_1 y_2))
            (if (phi x_1) (psi y_1))))
      U))

infinite aka Q>=N

We can verbally define infinite sets as sets where no matter how many elements you enumerate, there is always another element. We'll treat that as an existential quantification outside a QL comparison. So I'll define infinite as:

(define-predicate (infinite U (the ,(pred thing thing) Equal) phi)
   (let
      ((p 
          (lambda (a)
             (Q_L U Equal phi
                (lambda (y) (and (not (Equal a y)) (phi y)))))))
      ;;"\exists a" around the entire Q_L part.
      (if (best p U +a) (and (phi a)(p a)))))
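
In the same notation, that definition reads roughly as:

\begin{equation}
\mathit{infinite}[\phi] \Leftrightarrow \exists a \, \big[ \phi(a) \wedge Q_L[\, \phi, \; \lambda y.\, \phi(y) \wedge \neg (y = a) \,] \big]
\end{equation}

That is, φ has an element a such that φ is no more numerous than φ with a removed.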

Side note: The LaTeX formulas

I thought the LaTeX formulas were formatting OK in Blogger because they looked OK in my previews. But apparently they're not. My mistake. I'll see if I can do something about them. But formatting math is not something I know a lot about.

Trade Logic: Branching quantifiers 1

Branching quantifiers in Trade Logic, part 1

Previously

Earlier I introduced Trade Logic. It's a form of logic that I designed to be suitable for connecting prediction markets. It is logic that "lives" within the system, not in individual traders' analyses.

As an experiment, I've been building towards writing the Internal Set Theory axioms1 in Trade Logic formulas.

Branching quantifiers

The Internal Set Theory axioms require the predicate finite. This can be built using branching quantifiers. If we had used the "classical" quantifiers instead of best, we'd need to add more quantifiers now. Fortunately, branching quantifiers can be built with best, unfortunately not in a straightforward way.
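
As a reminder, the simplest branching (Henkin) quantifier, the QH I build below, is usually written

\begin{equation}
\begin{pmatrix} \forall x_1 \, \exists y_1 \\ \forall x_2 \, \exists y_2 \end{pmatrix} \phi(x_1, x_2, y_1, y_2)
\end{equation}

where y_1 may depend only on x_1 and y_2 only on x_2.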

We're also going to have to extend the mode system. Earlier, I said that the mode system required that there exist an ordering in which all variables are bound before they are used. But branching quantifiers bind variables simultaneously, in a sense. This sense is more obvious when using best, which requires applying the target predicate, which means that each branch needs values that are bound in the other branch.

We're going to start by building the simplest Henkin quantifier QH, then QL on top of that, then Q>=N which means infinite. All will be parameterized on U (a universal set). The latter two are also parameterized on an identity predicate.

QH

The basic idea is that the "test" clause of if is a conjunction of the best's of the two branches, while the formula dependent on all the objects is the "then" clause. In Trade Logic, if is always single-branched; it abbreviates (or (not A) B).

That almost works. The problem is that each branch actually nests two applications of best. We could easily rearrange it to get the outer variable, but we still need a way to retrieve the inner variable.

Each branch, if it were to exist in isolation, would be of the form $$ \forall x \exists y \phi(x,y) $$ which we write as2:

(let
   ;;Curried \exists y \phi(x,y)
   ((psi (lambda (X)
            (if
               (best (lambda (Y) (phi X Y)) U y_1)
               (phi X y_1)))))
   ;;\forall x \psi
   (if (best (lambda (X) (not (psi X))) U x_1) (psi x_1)))

We'll define a slightly altered version of this that extracts the variables, best_AE.

(define-predicate (best_AE phi U +x_out +y_out)
   (let
      ;;Curried \exists y \phi(x,y).  Still consults \phi
      ((psi (lambda (X)
               (if
                  (best (lambda (Y) (phi X Y)) U +y_out)
                  (phi X y_out)))))
      ;;\forall x \psi.  Doesn't consult \phi.
      (best (lambda (X) (not (psi X))) U +x_out)))

Note the resemblance between the formula for $$ \exists y \phi(x,y) $$ and a Skolemized variable. It takes a parameter, the same variable that $$ \forall x $$ binds. That means that when we incorporate this into a multi-branched configuration, that subformula will only "see" the universal binding that dominates it, which is just what we need for branching quantifiers.
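
For comparison, the standard second-order (Skolem-function) reading of the Henkin quantifier makes those dependencies explicit:

\begin{equation}
\exists f \, \exists g \, \forall x_1 \, \forall x_2 \; \phi(x_1, x_2, f(x_1), g(x_2))
\end{equation}

Trade Logic has no functions, so best_AE has to play that role instead.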

With best_AE, we can define Q_H. NB, the branches of the conjunction output complementary arguments. First we'll define best_H:

(define-predicate (best_H Phi U +x_1 +x_2 +y_1 +y_2)
   (bind-unordered (x_1 x_2 y_1 y_2)
      (and 
         (best_AE
            (lambda (X Y)
               (Phi X x_2 Y y_2))
            U +x_1 +y_1)
         (best_AE 
            (lambda (X Y)
               (Phi x_1 X y_1 Y))
            U +x_2 +y_2))))

Sharp-eyed readers will notice two things:

  • Contradictory ordering requirements. We have to bind x1 before we can bind x2, but also have to bind x2 before we can bind x1.
  • New predicate bind-unordered. It works with the mode system and loosens the requirement that there exist an ordering in which all variables are bound before they are used. Within its scope, the ordering rules are suspended among the variables listed in the first argument. I'm not yet sure what the proper rules are for it. It's likely to block decomposition in most cases.

Having defined best_H, we can define Q_H as simply:

(define-predicate
   (Q_H Phi U)
   (if (best_H Phi U +x_1 +x_2 +y_1 +y_2)
      (Phi x_1 x_2 y_1 y_2)))

Footnotes:

1 Why that? Just because I happen to like illimited numbers.

2 I've changed pred to lambda. It means the same. As I wrote it more often, I realized that I preferred the familiar lambda even if it does wrongly suggest a function.

09 August 2011

Trade logic: Objects are sampleable

All objects are sampleable

I mentioned earlier in passing that in trade logic, all things are sampleable. As I put it, "all variables are ultimately instantiatable".

Things can be finite sets of objects with some probability distribution over them. They can be trivially sampleable, such as single objects, where sampling them always gives the same object. It's even possible that they may be measurable collections that have no distinct individuals, from which one selects measured extents.

And the time has come to nail down how Trade Logic is to do sampling (selection).

Population bets

Settling population bets

I'll call a bet whose formula uses pick01 (explicitly or implicitly) a population bet. Population bets are useful for propositions that can easily be resolved for individuals but less easily for distributions.

Population bets that are trying to be resolved1 can pick particular individuals to resolve. Then they can settle probabilistically on the basis of that. It's basically using statistics.

The procedure for selecting an individual is as follows (see the sketch after this list):

  • Independently choose a value for each pick01 directly or indirectly included in the proposition.
  • Using those values, assess the issue's truth or falsity. (The Bettable selectors logic below should ensure that this step works)
  • Repeat as wanted. We sample with replacement; for simplicity and generality, there is no provision for not looking at the same individual again.
  • Using statistical logic (outside the scope here), settle the bet (maybe sometimes just partly settle it)
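
Here's a rough Python sketch of that procedure. Settling at the raw observed frequency is only a stand-in for the real statistical logic, and the names are mine for illustration.

# Hypothetical sketch of settling a population bet by sampling.  `proposition`
# decides the issue for one individual, given a fresh value for each pick01 it
# contains; settling at the observed frequency is only a placeholder for the
# real statistical logic.
import random

def settle_population_bet(proposition, n_samples, rng=None):
    rng = rng or random.Random()
    hits = 0
    for _ in range(n_samples):
        pick01 = rng.random          # each call yields an independent value
        if proposition(pick01):      # assess truth or falsity for this sample
            hits += 1
    return hits / n_samples          # payoff per yes share, in $

# Example: "a randomly picked scalar is below 0.3" settles near $0.30 per yes share.
print(settle_population_bet(lambda pick01: pick01() < 0.3, 10_000))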

best redux

It seems that I didn't adapt epsilon enough when I borrowed it as best. It doesn't make a sampleable output. Rather, its output is the set of all the most suitable objects in the universe. That's not necessarily finite, much less measurable. So there's no good general distribution over that. It would be paradoxical if there were.

So best needs another parameter: a sampleable collection of objects. That can be any object in Trade Logic. This parameter will generally be quite boring; often it's simply the whole universe of discourse. Nevertheless, it's needed.

So (best Pred From To)2 is now true just if:

  • Pred is a unary predicate
  • To is the part of From that satisfies Pred as well as anything else in From does

NB, I have not called To a "member" of From, as if From were a set. In general, From is a collection. To should be a collection in general too. And we don't want to need an identity predicate as another parameter. And finally, we don't try to resolve "sub-part boundaries" here; those are addressed via structurally similar bets that replace calls of best with suitable particularizations. Nevertheless, I am not 100% sure I have supported collections adequately in this mechanism.

Footnotes:

1 When a bet is "trying to be resolved" is beyond the scope here, but I have some ideas on the topic.

2 I took the opportunity to put the arguments into in/out order.

Trade Logic: Missed a type ctor

07 August 2011

Pseudo-quantification in Trade Logic

Pseudo-quantification in Trade Logic via best

Previously

Earlier I introduced Trade Logic. It's a form of logic that I designed to be suitable for connecting prediction markets. It is logic that "lives" within the system, not in individual traders' analyses.

Not quantification

Trade Logic doesn't use quantification as such. Quantification would complicate the mode system, which is already adequate to distinguish free and bound variables. I also have theoretical concerns about vacuous quantification: If there were no objects that the system could refer to, the relative valuation of (forall x (p x)) and (some x (p x)) would be reversed.

The built-in predicate best

Instead of directly using quantification, Trade Logic uses the built-in predicate best, which is adapted from Hilbert's epsilon operator. best(A,B) is true just if:

  • B is a unary predicate
  • A satisfies B as well as anything else does

I'll expand a little on that last point. That's not the same as "satisfies B". best can be true if no value could satisfy B. In Trade Logic, best can also be true in fuzzy ways:

  • If B can only be satisfied to a certain degree, and A satisfies B to that degree, then best(A,B) is true (crisply, 100%)
  • if A satisfies B to a certain degree, but a lesser degree than some other value would, then best(A,B) is fuzzily true

Quantifiers can be expressed in terms of best, as they could with epsilon. In standard notation, we would write:

\begin{equation} \forall x \, p(x) \Leftrightarrow \big( best(x, \neg p(x)) \rightarrow p(x) \big) \end{equation}

\begin{equation} \exists x \, p(x) \Leftrightarrow \big( best(x, p(x)) \rightarrow p(x) \big) \end{equation}

In Trade Logic, the respective formulas are:

(if (best +x (lambda (Y) (not (-p Y)))) (-p -x))

and

(if (best +x -p) (-p -x)) 

Note the addition of modes, and note that "p" is always an in mode. It must be bound outside this (sub)formula.

The behavior of best

A yes of any issue of the form (best +A -B) can be converted to a yes of (& (-X +A) (-B -A)) for any predicate X. Similarly, a no of (& (-X +A) (-B -A)) can be converted to a no of (best +A -B).

X may select A in an arbitrary way, but it will never be better at satisfying B than (best +A -B) is.

This works because no trader would make this conversion unless he got a better price after it. Arbitrage thus ensures that the price of a best issue is in fact the highest price available. Effectively, existentially quantified issues always trade at least as high as each of their particular instances, and universally quantified issues at least as low as theirs.

Best is adapted from epsilon

Best is adapted from Hilbert's epsilon operator. Epsilon (not best) classically has the following properties:

  • It is a function
    • Usually it comes with an axiom of extensionality, ie that the function's result is unique.
  • It takes one argument, a predicate (as a formula)
  • If that predicate can be satisfied, it returns an object that satisfies the predicate
  • If that predicate can't be satisfied, it returns any object at all.
  • With it, one can build statements equivalent to other statements that contain universal and existential quantifiers.
  • The quantifiers all and some can be expressed using it.
\begin{equation} \forall x \, p(x) \Leftrightarrow p(\varepsilon(x, \neg p(x))) \end{equation}

\begin{equation} \exists x \, p(x) \Leftrightarrow p(\varepsilon(x, p(x))) \end{equation}

But Trade Logic doesn't contain functions. What Trade Logic has are fuzzy predicates1. So we use best instead.

Footnotes:

1 When I first saw Epsilon, I got the impression that it wasn't suitable for this reason. But I was wrong, it just needed to be adapted.

Trade Logic: Clarification on guarded conjunction

Clarification on guarded conjunction ("Dynamic")

Previously

Earlier I introduced Trade Logic and I said that the well-formedness criterion for formulas could be dynamic in certain limited circumstances. Essentially, if a subformula is conjoined with an appropriate type-check on an object, the object is statically treated as having that type.

Clarification

I forgot to distinguish this sort of conjunction from the conjunction that "and" implements. It's really not the same thing.

Simple conjunction can't do this right. Simple conjunction is built from 2 decompositions of $1. But there's no reasonable way to decompose the half-box in which the guard fails. Doing so would mean that the type-incorrect subformula was used despite being type-incorrect.

guard

Instead, these guarded conjunctions are headed by guard. Like:

(guard (accepts-type X Y) (X Y))

It decomposes like this. Note that one cannot further decompose the half-box where the guard fails.

https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjRLiT-JTeGkCUDU4uH0Nw4hpKmPQnqPKC0ZNSRgEuEF2mNyOZZ8MydEWU1jVtcLY9cyFfwgSbsKxzI_K3QSBVKn22wzMHMpE4L6G8FNK8A7lT-5LBDzHLo40cLBsHuvIuncoQSEgssh0g/

06 August 2011

Trade Logic 2

More on Trade Logic

Previously

Earlier I introduced Trade Logic. It's a form of logic that I designed to be suitable for connecting prediction markets. It is logic that "lives" within the system, not in individual traders' analyses.

I planned to define standard as an exercise. But when I started to write it, I realized that there were a number of preliminaries that I needed first.

I also realized I had to change a few things. In the places where this post differs from the previous, it supersedes it.

About definitions

Definitions cannot assert

Definitions in Trade Logic deliberately cannot assert facts. And it would be disastrous if they could. If traders could "prove" things by defining them to be true, the whole system would collapse and be useless.

Instead, definitions simply expand to parameterized formulas. Formulas cannot assert facts.1

A "classical" definition of standard would have implicitly asserted that there is one unique predicate standard. But that implicit assertion is banned in Trade Logic. We (deliberately) wouldn't be able to construct a formula that asserts that.

Instead, we would define a predicate that is true just if its argument "behaves the same as" the predicate standard. That is, is-standard would be true if its argument is a predicate that satisfies the axioms of Internal Set Theory as standard would.

NB, this won't (and shouldn't) imply that is-standard's argument is defined in the system. It doesn't necessarily have a global name in the system.

Definitions cannot raise error

Definitions in Trade Logic are not allowed to error. If they could, that would wreak havoc on the system.

They can be undecidable, which causes no great problems for wise bettors. They are also total, meaning that each of their arguments can be of any type. So our definition won't be able to lean on a strong typechecking mechanism.

Definitions are statically simply type-correct

However, there is a weak typechecking mechanism in Trade Logic. It is a well-formedness condition for formulas and predicates and thus for bettable issues. I'm largely borrowing the mechanism from the simply typed lambda calculus. But the simply typed lambda calculus is functional while Trade Logic is relational, and Trade Logic manages modes as well. So I adapt it slightly.

The exposed primitive types are:

  • thing
  • type
  • predicate

The type definitions are:

thing
Everything is a thing. For expressive purposes, either:
  • A variable.
  • Any object of any of the other types.
  • Extensibly, any other literal the system accepts.
type
Either:
  • One of the exposed primitive types
  • A type literal naming a definition of a type in the system
  • A list of types
  • A dotted list of types
  • Dynamically, anything satisfying is-type
call
Not an exposed type. Of the form (P . Args) where:
  • P is a predicate
  • Args is a (possibly empty) tree of things, ie a Herbrand term.
acceptable call
Not an exposed type. Either:
  • A call where Args is statically accepted by P. Ie, where we can prove that beforehand and using only local information.
  • Dynamically, a call satisfying (accepts-argobject P Args)
  • Dynamically, a call satisfying both
    • (accepts-type P T)
    • (satisfies-type Args T)
wff
Not an exposed type. Either:
  • Any issue (ie, predicate with empty argtree)
  • A combination of wffs as defined in the previous post.
  • An acceptable call.
predicate
Either:
  • A predicate literal naming a built-in predicate in the system
  • A predicate literal naming a definition of a predicate in the system
  • A lambda term - a call where:
    • P is the built-in literal pred
    • Args is a list of 2 elements
    • The first argument is a wff
    • The second argument is a mode-spec
    • The whole is mode-correct, as defined in the previous post.
  • Dynamically, anything satisfying is-predicate

Type acceptance

A predicate P accepts a type T just if T is the type of P's parameter list or is subsumed by it.

The built-in predicate accepts-type

(accepts-type A B) is a binary predicate. It has two modes:

  • (accepts-type - -)
  • (accepts-type - +)

It is true just if:

  • A is a predicate
  • B is a type
  • B is a type that A accepts. In the second mode, B is exactly the given type of A's argobject.

The built-in predicate satisfies-type

(satisfies-type A B) is a binary predicate. It has one mode:

  • (satisfies-type - -)

It is true just if:

  • B is a type
  • A is of type B.

The built-in predicate accepts-argobject

(accepts-argobject A B) is effectively the conjunction of (accepts-type A C) and (satisfies-type B C)

Dynamic checking

Sometimes we don't know a thing's type statically, but we still want to use it or reason about how to use it. This often occurs when we use higher-order logic.

The clauses that say "Dynamically" are interpreted as allowing formulas to be conjunctions of type-checking operators and sub-formulas that require a specific type. Such formulas are interpreted as false if the type is incorrect, rather than erroneous. It is as if the type-check was examined first and the erroneous clause was skipped. (But in general, Trade Logic formulas do not have a particular order in which sub-terms are examined except as dictated by the mode system)
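
Here's a minimal sketch of that evaluation rule, in the spirit of the guard form from the clarification post above. The names are mine for illustration; the point is only that a failed check makes the whole guarded formula false rather than erroneous.

# Hypothetical sketch of the "dynamic" clauses: a guarded formula is false, not
# erroneous, when its type-check fails, so the body is never evaluated with an
# ill-typed argument.
def guard(type_check, body):
    def formula(*args):
        if not type_check(*args):
            return False       # the guard fails: the whole formula is simply false
        return body(*args)     # the guard holds: the body only sees a correct type
    return formula

# Example: apply an object to itself only where it really is a predicate.
is_predicate = callable
apply_to_self = guard(is_predicate, lambda p: bool(p(p)))
print(apply_to_self(42))                  # False, not a type error
print(apply_to_self(lambda x: True))      # True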

Definitions, redone

Types and predicates can be given names, ie defined in the system. Some trivial constructions of predicates deliberately can't be given names because it would be silly:

  • The predicate literals - they already have names.
  • The dynamic construction. "This predicate is something that satisfies is-predicate" would be pointless.

I might require a checkable proof for places in a construction that require static correctness. That would add an extra argument to acceptable-call and the pred call.

A named predicate can be bet on just if it accepts the empty type (null).

Footnotes:

1 In particular, Trade Logic does not have functions, it has predicates. This rules out Skolemization.

24 June 2011

Trade Logic

Background: Prediction markets

Trade Logic has been kicking around in my head since I learned about Robin Hanson's idea of prediction markets1 around 1991.

A prediction market is a way of aggregating good-faith information about an issue. It's basically:

  • define some issue whose answer isn't known yet but will be,
  • take bets on it
  • later when the answer is known, pay off just the bets that were right.

In prediction markets, the bets always occur in opposing pairs, which I call yes and no. I'll illustrate it like this:

https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheOI4x4XdgqqyvU8WRYvQqxhaLDadBo_4gdUXQNA4zQQvY_CvsZ288nhAZYTIX7YTO-7ZibO1Zfscqr-l_yQO_MGIaqqy8pN9FrGxU1qFqryIcXGhrfXDbclFXZUirwnc_eGXknA4oG88/

In these diagrams, the entire square represents one monetary unit. It's as if the diagram represented a way to partition $1. Not into smaller amounts of money, but into parts called yes and no, which together always add up to $1.

Introducing Trade Logic

As I talked about earlier, I want to add logic operations to prediction markets for certain reasons.

I've already hinted at one piece of the puzzle: each issue represents a separate way to decompose $1. Furthermore, pieces that are the same part of the same decomposition are interchangeable.

In Trade Logic, each issue (each decomposition) also corresponds to a formula. So when I write a or b in the following discussion, it refers to an issue in a prediction market, as well as to a formula and to a way of decomposing $1.

Trade Logic works by assembling the conclusion from parts of the premise(s). There may be parts left over. That is, bettors can buy yes or no of one or more issues and then assemble those pieces in a different way to form another issue.

That's how the logic itself works, but we also have to ask about incentives. Why would traders want to do that? They would if the new issue is trading at too high a price: they can profit by arbitrage. That sort of mispricing is thereby deterred, and if it does arise it is quickly corrected. So, as a consequence of the Efficient Market Assumption, Trade Logic effectively reasons about related issues.

The unary operator not

The unary operator not, in a formula, is equivalent to swapping the sides of the issue. It swaps a yes for a no and vice versa. So a yes of a is the same as a no of ~a.

Basic binary operators to combine issues

Overview

Beyond this, we can combine two issues into a third issue. That's equivalent to combining their formulas under a binary operator. To do so, we must define a corresponding decomposition.

I'll call the issues being combined a and b.

https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhOtWdOG0euX2zaeo4PXprAC9gD3iq_CJ-gdxRyc3RMSKBlvrFvLB878iaeEmDgITKILXnKloOvXddKTCX-BfDXh7iSlo4HMPmPV3KA-1IjJpbtfcZviLLZJCzX5omZWQjz6kTe-yqRrV0/

This is composed of the a diagram above and a b diagram, which is just about the same as the a diagram, except that I have drawn it at right angles to a.

https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8zOSMKUUrXyWZcNHhXBkkJefYPlvqYj1GAGrVMe803XQ_lW0YLz3ljGzYYC9qjJ6nvW_cf7vCvm4noAB77hXpPhpIBxX4rNNbpHfHwo7IbUG6wjOY-BzRwTr0qxUcTlJUzk8sfM5EIBM/

and

The and operator combines a and b into a new issue, a & b. A yes share of a & b pays off exactly when both a and b would pay off yes. A no share of a & b pays off when either a or b is judged no.

https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidzvR1xFz-tqMAJSGyvPMEbf6rbDoZEezJg9f9ZyKrB_lx9YF8BMCIoPeiZOX0MIfrt1Smzzo2C5L-NmsNE_PVJxJ3kq3ywIHWSmUXbS9qTttb8oDETaaizdVItsbauO1uFh0sYP8Ozjg/

Nothing much changes if one of the issues is finalized before the other. A finalized issue just has a fixed value for yes and no payoffs; usually one is worth 1.0 and the other is worth 0.0.

So if, say, b yes was judged true, it's as if the ~b column of the diagram was eliminated and the b column was stretched to cover the entire square.

https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhH78IBbY6CFx9CbYMTj7SQCp8U-O4_O0oLwzvxAlRSWgfQfCWu97j75u7Pg66fu1O16qEuxlUY34Mmg9PVXx7IiaJ7325MoIdKRpbd-jXvSrMiVyPOC79rJqYqsBG1flHg7eZhAAfw_hM/

or

Similarly, the or operator combines a and b into a new issue, a V b, which pays off yes when either a or b pay off yes.

https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCmPKds80CaDrj3Yd_KtMVtKl7JhNSDm9kpedyFPAhyUvymXtDFXGGIvBk6yLEmAAL8cxttbz69_hoTmgBkIPnuzZnPN5DzB4EE22bVoJR1IgwzHtyRW36x9k5MH3F30jr3-HtJTT0zaI/

if

We're interpreting if as material implication, so

a -> b

means the same as

~a V b

So a -> b is just:

https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjnSRNaEvdvdZUgySsTOxCy5iFnrU5Ua_W4896yirqoz3sr3QqdxfWUgxnymh5ZjJOsiyV5ihmkvYyVLjUm7YTNd6BI4HV-EYSGXiXOqpkTitzhf4hM0tjPnMCtrg6vvs0jGOm5lUlKMbI/
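
To make the decompositions concrete, here's a small sketch of what one yes share of each composite issue pays once a and b have settled (True standing in for a yes that pays $1, False for one that pays $0). The function names are mine for illustration.

# Sketch: payoff in $ of one yes share of a composite issue, once the component
# issues a and b have settled (True = yes pays $1, False = yes pays $0).
def pay_yes(settled: bool) -> float:
    return 1.0 if settled else 0.0

def pay_not(a: bool) -> float:            # a yes of ~a is the same as a no of a
    return pay_yes(not a)

def pay_and(a: bool, b: bool) -> float:   # yes of a & b
    return pay_yes(a and b)

def pay_or(a: bool, b: bool) -> float:    # yes of a V b
    return pay_yes(a or b)

def pay_if(a: bool, b: bool) -> float:    # yes of a -> b, i.e. of ~a V b
    return pay_or(not a, b)

# The yes and no sides of any composite issue still add up to exactly $1.
for a in (True, False):
    for b in (True, False):
        assert pay_and(a, b) + pay_yes(not (a and b)) == 1.0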

Definitions and variables

Motivation

To use formulas in a practical way, we'd like to be able to define abbreviations for subformulas that are used frequently. For abbreviations to do the job, they need to be parameterized and the parameters need to be sharable. So we need some sort of definitions and variables.

Variables are instantiatable

We need the issues to be bettable. The motivation is not so much to make them decidable as to give appropriate values to issues that turn out to be fuzzily true.

Consider an issue that has always held true (yes trades at nearly $1.0), until a single exception to the rule is found. Should that issue now trade at $0? No. That would make traders afraid to touch any universal rule, which might collapse to $0 at any time. But if it's trading at a high price and is treated as universally quantified, bettors can cheaply "prove" that the exception is false, though it's true. This situation is a money pump.

To avoid that dilemma, the rule is that all variables are ultimately instantiatable by operations that select examples, rather than being universally quantified or existentially quantified. That doesn't mean that issues have specific examples associated with them; they don't. It means that there is a set way for finding an example that doesn't rely on somebody hand-picking one.

A selection operation would generally mean drawing a random element from a given distribution. A distribution can be just a single example; then it's just a literal. There will be rules about how random number generation should be done, and how many examples to select, and how to settle (or partly settle) an issue for a given number of examples. I won't expand on the various rules right now. That's Crypto and Information Economy and Statistics.

They are instantiatable transitively

I said variables are "ultimately" instantiatable. That means that a given variable might not directly be instantiatable by a selection operation, but it's connected to something that is, or to something that's connected to something (etc).

The treatment here is largely borrowed from Mercury and Prolog.

Predicates have modes. A mode maps each parameter in an argument list to either in or out. A predicate is applied in one of its modes.

For every legal mode of a formula, there must exist an ordering of predicate applications where:

  • At the beginning, every variable is free. It will transition to bound at some point. (If it doesn't, that just means it's irrelevant)
  • For each predicate application:
    • A legal mode for that predicate is used.
    • For every parameter that has instantiation type in in that mode, the associated variable is bound both before and after that application.
    • For every parameter that has instantiation type out in that mode, the associated variable is free before that application and bound after it.
  • For each not operator (and more generally, in negative context):
    • Every variable mentioned outside the scope of the not has the same instantiation before and after the not.

Which ordering to use is not predetermined, but the modes of predicate applications in it are predetermined in order to avoid ambiguity, so that we can't instantiate in two different ways, which would risk getting two different results.

This is a fairly blocky way of treating modes and I expect it will be treated in a more fine-grained way later.
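
Here's a rough sketch of that check in Python: given predicate applications with chosen modes, look for an ordering in which every in variable is already bound and every out variable becomes bound. The representation is mine for illustration, and negative contexts are ignored for brevity.

# Hypothetical sketch of the mode check: each application is (mode, vars), where
# mode maps each argument position to "in" or "out".  We look for an ordering in
# which every "in" variable is already bound and every "out" variable becomes
# bound.  Negative contexts are ignored for brevity.
def find_binding_order(applications):
    bound, order, pending = set(), [], list(applications)
    while pending:
        for app in pending:
            mode, vars_ = app
            ins = {v for m, v in zip(mode, vars_) if m == "in"}
            outs = {v for m, v in zip(mode, vars_) if m == "out"}
            if ins <= bound and not (outs & bound):
                bound |= outs
                order.append(app)
                pending.remove(app)
                break
        else:
            return None          # no legal ordering exists
    return order

# Example: p(+x) must bind x before q(-x, +y) may use x and bind y.
apps = [(("in", "out"), ("x", "y")),   # q(-x, +y)
        (("out",), ("x",))]            # p(+x)
print(find_binding_order(apps))        # p first, then q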

What definitions are

In some systems, definitions are really axioms in disguise. In Trade Logic, that would be an intolerable hole; bettors would quickly learn to exploit it and ruin the system. So we won't go in that direction.

Our definitions will expand names to parameterized formulas. That is, each definition will map a unique identifier to:

  • A formula
  • A parameter list: A non-repeating list of variables.
  • A set of modes, all legal for that formula with respect to the parameter list.

A definition is used in a formula by giving its name and an argument list of the correct arity in a position where a (sub)formula can appear.

So a definition:

  • is a predicate, not a function. It "returns" a fuzzy boolean, not an object.
  • is not a clause. Unlearn your Prolog for this. A name has one definition, you don't add more clauses later.
  • is total. Its argument list accepts any type of object.
  • is used in some particular instantiation mode.

I don't think we need to require that definitions be in a Tarski hierarchy. I expect Trade Logic to be exposed to many other sources of unclarity besides self-reference. Undecidable issues won't ruin the system. However, we may need to use a Tarski hierarchy for decision markets, which want controlled language and decidable issues.

Selection operations

Selection operations are also predicates. They are used the same way as definitions.

I presuppose a set of primitive selection operations. But that's beyond the scope of what I'm talking about right now.

For purposes of issue decomposition, any primitive selection operation that cannot be satisfied behaves as though it trades at $0 (ie, its yes trades at $0). For instance, "Select a living dodo bird". A selection operation that is satisfied behaves as though it trades at $1.

What a wff is

Having said all that, now I can recursively define a well-formed formula (wff) in Trade Logic as:

  • One of the built-in operations applied to the proper number of wffs:
    • The unary operation (not) applied to a single wff.
    • One of the binary operations applied to two wffs.
  • A predicate application, consisting of:
    • The name of a predicate
    • A list of variables whose length matches that predicate's arity
    • A mode of that predicate

Which formulas are the same?

Trade Logic does implication by assembling the conclusion from parts of the premise(s). So we want to know when two formulas are the same.

Two formulas represent the same issue (and decomposition) just if they are structurally the same, except allowing arbitrary permutations under and and or.

In other words, put the subformulas under an and or an or into some canonical order before comparing formulas. Then mostly ignore the variables, except that the same respective variables have to appear in the same respective positions.
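
Here's a rough sketch of that comparison in Python, treating formulas as nested tuples. Renaming variables in order of first appearance is only my guess at how "the same respective positions" would be checked, and it glosses over the interaction between renaming and canonical ordering.

# Hypothetical sketch: two formulas (nested tuples like ("and", f1, f2), with
# predicate applications like ("p", "x", "y")) count as the same issue if they
# are equal after sorting the operands of "and"/"or" and renaming variables in
# order of first appearance.  This simplifies the rule above.
def canonical(formula, renaming=None):
    if renaming is None:
        renaming = {}
    op, *args = formula
    if op in ("and", "or"):
        return (op, *sorted(canonical(a, renaming) for a in args))
    if op == "not":
        return ("not", canonical(args[0], renaming))
    # predicate application: keep the predicate name, rename the variables
    return (op, *[renaming.setdefault(v, f"v{len(renaming)}") for v in args])

def same_issue(f, g):
    return canonical(f) == canonical(g)

print(same_issue(("and", ("p", "x"), ("q", "x", "y")),
                 ("and", ("q", "a", "b"), ("p", "a"))))   # True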

Footnotes:

1 Back then he called it idea futures.