01 January 2014

Set Up MathJax

Previously

I occasionally tried to post equations here, but gave up when they rendered incomprehensibly. The previews my software showed me rendered fine, but they depended on locally installed tools that readers don't have.

My editing toolset

FWIW, I write and post this stuff using:

  • emacs
  • org-mode (by Carsten Dominik)
  • org2blog:atom, which I wrote (it was previously called org2blog, until we realized there were two programs called org2blog)
  • gclient (by T V Raman, used by org2blog:atom)
  • Not Blogger's online editor, which I don't like because it isolates me from emacs and everything else.

So I set up MathJax

Fixing math display was on my todo list for an embarrassingly long time. I finally got around to it when I became aware of MathJax; I have arXiv.org to thank for the pointer.

Not many problems

MathJax was actually quite easy to set up. Only two caveats:

  • Of course you've got to use the TeX delimiters around the equations.
  • The include recommended by the MathJax site didn't seem to handle single-dollar-sign delimiters. Fortunately, this site provides a version that works (sketched below).
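
For concreteness, here's a sketch of the kind of include that does the trick. This is against the MathJax 2.x API of the time (MathJax.Hub.Config and the tex2jax preprocessor are documented there); the CDN URL and combined-config name are the usual ones, but check them against whatever version you actually load. The key part is adding $...$ to inlineMath, which tex2jax omits by default to avoid false positives on ordinary dollar signs:

  <script type="text/x-mathjax-config">
    // Accept $...$ as well as \(...\) for inline math.
    MathJax.Hub.Config({
      tex2jax: { inlineMath: [['$','$'], ['\\(','\\)']] }
    });
  </script>
  <script type="text/javascript"
          src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
  </script>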

16 December 2013

Spontaneous Dimensional Reduction

Previously

I have been reading a few papers by Steven Carlip on spontaneous dimensional reduction; essentially the same material is here, and most recently here.
Carlip is probably best known for his article in Scientific American on similar themes. There he played with a 2+1-dimensional "Flatland" universe; here he is seriously proposing a 1+1-dimensional one.
It's not as crazy as it sounds. In fact, I find it quite promising.

In a nutshell

Spontaneous dimensional reduction is his idea that at the very smallest scales, space is 1-dimensional (so spacetime is 1+1-dimensional). He brings together various lines of evidence that support this, including his own treatment of the Wheeler-DeWitt equation at extremely small scales.
Discussing the last point, he suggests that spacetime at small scales "spends most of its time" in or near a Kasner solution, an anisotropic solution to general relativity that applies in three or more spatial dimensions. He argues that Kasner solutions favor one dimension - strongly so if contracting, less strongly if expanding.
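For reference - standard textbook material, my addition rather than anything from Carlip's papers - the Kasner metric in 3+1 dimensions is $$ds^2 = -dt^2 + t^{2p_1}\,dx^2 + t^{2p_2}\,dy^2 + t^{2p_3}\,dz^2,$$ with exponents constrained by $\sum_i p_i = \sum_i p_i^2 = 1$. Outside the degenerate case $(1,0,0)$, those constraints force exactly one $p_i$ to have the opposite sign from the other two, so a Kasner epoch always singles out one spatial direction.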
Elsewhere he argues that focusing effects dominate, albeit in a slightly different context. This would imply that the contracting state dominates, which is basically what he needs for this to work. To my knowledge he hasn't explicitly applied this to 1+1 dimensions - that puzzles me, since his two ideas seem to fit together nicely.
Kasner solutions are vacuum solutions - solutions that only apply to empty space. Carlip argues that at extremely small scales spacetime is effectively empty, so the vacuum solutions apply.
At larger scales, he says that expansion and contraction change repeatedly and chaotically, the general idea being a Mixmaster universe or a BKL singularity. The familiar 3 spatial dimensions are built from 1-dimensional pieces, not unlike Tinkertoys.

Features

Carlip doesn't appear to cover some of the nice features of 1+1 dimensionality, but I will.

Scalar propagator for gravity

The first one he does mention: in 1+1 dimensions, the gravitational propagator is a scalar. All the problems with renormalizing gravity come from its having a non-scalar propagator (in fact a rank-2 tensor, where the other fundamental forces have rank-1 tensor propagators, i.e. vectors). With a scalar propagator, they should all go away.
My guess is that this might also let the other fundamental forces be handled without renormalization. Nobody really likes renormalization; it's just been a necessary evil in quantum field theory. Presumably that'd happen at an intermediate scale that has 2+1 dimensions.

The hierarchy problem

The hierarchy problem asks: why is gravity so much weaker than the other forces? For instance, you can lift a brick against the pull of the entire Earth. The electromagnetic forces of the molecular bonds in your hand and arm exceed the gravitational force exerted by the 6.6-sextillion-ton Earth.
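To put a number on that (my arithmetic, standard physics): for two protons, the ratio of gravitational attraction to electrostatic repulsion is $$\frac{F_{\mathrm{grav}}}{F_{\mathrm{em}}} = \frac{G m_p^2}{e^2/4\pi\epsilon_0} \approx 8 \times 10^{-37},$$ independent of separation, since both forces fall off as $1/r^2$. That 36-orders-of-magnitude gap is what wants explaining.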
This offers an answer. By way of background, gravity requires 3 dimensions in order to propagate: 1 dimension of travel and 2 transverse dimensions. That's because it's a spin-2 force, which is why its propagator is a rank-2 tensor.
So spontaneous dimensional reduction says that gravity can't propagate at all at small scales, only at large scales. This may be enough to explain the hierarchy problem. (That's my conjecture, not Carlip's.)
"But wait", you say. "If it can't propagate at small scales, how does it get anywhere at larger scales? That's like saying, I can't walk three feet but I can walk a mile. Surely the big journey is made of little journeys?"
Well, what Carlip suggests elsewhere (here he may be summarizing others' work) is that for reduced dimensions, what happens instead is that gravity rearranges the topology of space, presumably affecting the BKL or Mixmaster behavior. This may be enough to let it propagate.

The self-energy problem

In a nutshell, the self-energy problem is this: if forces like the Coulomb force go as $1/r^2$ and therefore diminish with distance, then at small distances they grow without bound, becoming infinite at $r=0$.
But (me again) in a 1-dimensional space, that doesn't happen. Forces go as $1/r^0$, which is to say they are insensitive to distance. No self-energy problem.
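Spelling that out (my gloss): in $d$ spatial dimensions, Gauss's law gives a force $F(r) \propto 1/r^{d-1}$, so the self-energy of a point charge behaves like $$U \propto \int_0^a \frac{dr}{r^{d-1}},$$ which diverges at the lower limit for $d \ge 2$ but is perfectly finite for $d = 1$, where the force is constant and the potential merely grows linearly.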
Further, there's helpful logic in the other direction. Why does spacetime do this at small distances? Why a Kasner-like solution instead of a simpler isotropic solution? Because if it didn't, there would be infinite forces at small distances. If we don't need renormalization, we can adopt as a principle that energies can't be infinite, and then we find that 1+1-dimensional Kasner-like spacetime is needed at small scales.

Potential for insight about dark energy

Everybody's heard this for years so I'll be brief: Dark energy, whatever it is, is making the universe expand faster.
If Carlip's theory wrt Kasner solutions is true, then at small scales space is constantly expanding and contracting. This suggests (me again) some relation to dark energy. Maybe it's as simple as whether contraction or expansion dominates at that scale, and by how much.

18 November 2013

Heidegger 2

Previously

I previously blogged my answer to Martin Heidegger's deep question, "Why is there something rather than nothing?"

I just wrote it up for a friend. It says basically the same thing the earlier post does, but in a more accessible form.

What's not a good answer

First, I'd like to say what is not a good answer. For instance, it's not a good answer to talk about quantum fluctuations creating matter out of empty space. That may or may not follow from the rules of quantum mechanics, but those rules are a "something" too. Why do they exist? So to my mind, that doesn't really answer the question.

The full flavor of the question

Heidegger's question is deeper than that. What it asks us to explain is not why there is matter, or why there is quantum mechanics, but why there is anything at all. Why does the world have any structure whatsoever?

My insight

My insight was that the question still assumes one little thing: that it's one or the other, either/or, obeying the law of the excluded middle. Which I know sounds like simple common sense, but consider this: any evidence it could possibly be based on is a something too, and so is the law of the excluded middle. Even the law of non-contradiction is a something about which we can ask why it exists.

So take a deep intellectual breath and imagine for a moment that it could be both ways. Imagine that you can see both a world of nothingness and a normal world. Doesn't matter how. If you like, you can imagine some sort of blend of a something-world and a nothing-world, or a split-screen of both worlds, or perhaps you gaze alternately on one world and the other, or teleport between them.

What would the nothingness look like? Seems like nothingness wouldn't make much of an impression. It wouldn't even mark its absence by the passage of time or an empty reach of space. It hasn't got time and space or anything else. It hasn't got its half of the split-screen you may have imagined. It hasn't even got a you in it to do the perceiving. Seems to me nothingness makes absolutely zero impression of any kind.

Now add up the impressions of both worlds. You get all the impressions from the normal world of somethings, plus zero. So you see just the normal world.

So that's my anthropic, multiple-worlds answer to Heidegger's question. Even if you start with no assumption of something-ness, you end up seeing a world of somethings, a world with some propositions about it that aren't both true and false or neither. QED.

T's to cross and i's to dot

There are some philosophical t's to cross and i's to dot, but AFAICT they cross and dot easily. (Like, are there other ways to aggregate the impressions of two worlds that would give a different result? No; by definition, aggregating X with nothing gives X.)

12 September 2013

My opinion, literally

Lexicography

Previously

I wrote this a few weeks back, in response to something my friend Michael wrote, but I was lazy about posting it.

It's about the Merriam-Webster dictionary defining "literally" to mean "figuratively". See Slate.

My opinion

The dictionary has to be descriptive, not prescriptive. It should reflect how people have actually spoken and used words. Ultimately, that's what language is.

But by the same token, people actually use dictionaries prescriptively. They turn to dictionaries for authority on the "right" meaning of words.

I would take a middle position. A dictionary must eventually track usage, but there's no need for it to rush to anoint every popular solecism.

What lexicographers do is collect a corpus of contemporary usage and then group the words according to word sense, as they see it. I'm not surprised that they found so many hyperbolic "literally"s. I'm sure they also had access to literally tons of people who felt figurative "literally" to be a solecism.

There's merit in characterizing these groups of words in more sophisticated ways, as some lexicographers do. Hyperbolic senses can be noted. So can loose senses ("sarcasm" that lacks irony) and apparent substitutions ("effect" where "affect" is meant).

It's too bad Merriam-Webster stopped before doing that, and I think they deserve the criticism they've gotten for it.

Patent Fix 1

Previously

Jan Wolfe blogs "Patent defendants aren't copycats. So who's the real inventor here?"

Robin Hanson also writes about this. While his central illustration is somewhat implausible, it nevertheless puts the issue on a concrete footing. Briefly, a hypothetical business finds better routes for city drivers for a price. Then they want to forbid all the drivers in the city from driving "their" route without a license. (This example is apparently set in a world where nobody ever heard of Google Maps, MapQuest, or even paper maps.)

Also see Defending reinvention, So How Do We Fix The Patent System?, and A Call for an Independent Inventor Defense.

Recent changes in patent law don't seem to have addressed reinvention.

Some points of consensus

We all seem to agree that the biggest issue is patent protection vs re-invention. If re-invention were somehow metaphysically impossible, the patent situation would be more defensible1.

We also seem to agree that software patents aggravate this problem.

This is a problem that I have been kicking around for a while too. I have, I think, a different analysis to offer and something of a solution.

The deep problem

The deep problem, as I see it, is one common to all intellectual pseudoproperty. First the discoverer sells it - to an individual, or to society via the social contract of patenting. Only afterwards can the buyer see and evaluate the invention.

For the individual, such as the driver licensing a better route in Robin's example, this makes the route an experience good - he can only tell whether it's worth what he paid after he receives it. Then he can't return the good by unlearning it.

He may find that it's not worth what he paid because it's bad. More relevant to patent re-invention, he may find that he already knew of that route. Perhaps he sometimes uses it but mostly prefers a slightly longer scenic route. He shouldn't lose the use of a route he already knew just because he "bought" the same route from this business and then returned it.

This at least could be solved by contract - perhaps the driver can get a refund if he proves that he already knew that route. For society, it's worse. Patent law basically throws up its hands at the question of what the loss of opportunity for re-invention costs. It deems the cost to be zero, which is simply insane.

Why it's tricky to solve

It's sometimes said that the invention of the eraser on the pencil was obvious - after it was invented; before that, nobody had thought of it. As it turns out, that's questionable2 for the pencil-eraser, but the general point stands.

So we don't want re-invention to consist of stating, after one has seen the invention, "Why, it's obvious to anybody". That's hindsight bias. That's cheating. We want to measure problem hardness in foresight. How hard did it appear before you knew the answer?

So how can we measure problem hardness?

For any patent, there is some problem that it solves. This isn't just philosophical; a patent application must contain this. It's part of one of the less formal sections of an application, but even so, it's made explicit for every patent the USPTO ever granted.

Imagine a public blackboard where anybody can write down a problem. Our would-be inventor writes down the problem that his as-yet undisclosed invention solves. He leaves the problem on the board for a year. After a year, nobody has written down the solution.

Our inventor then says "Given an entire year, nobody has solved this problem except me. Therefore even if my invention is granted patent protection, nobody has lost the chance to invent it themselves. Since opportunity for re-invention was the only thing anybody really stood to lose, I should be granted patent protection. I stand ready to reveal my solution, which in fact solves the problem. If I'm granted patent protection, everybody wins."

We're not assuming that our inventor has already invented the invention. He could pose a problem that he feels pretty sure he can solve and only start working on it later in the year.

There are of course holes in that solution. Let's try to fill them.

Issues

A dialog about attention

"Not so fast". (Now we need names. I will call the first inventor Alice and the new voice Bob)

"Not so fast" says Bob. "I could have solved that problem, but I was working on something else more important. I'm confident that when the day comes that I actually need a solution, I can solve it myself. I like that better than paying your license fee. So your patent protection does deprive me of the opportunity for re-invention."

"You haven't proved you can do that." replies Alice. "Nobody answered my challenge, so why should I believe you?"

"Correction: they didn't answer it for free." says Bob. "It does take non-zero time and attention - and by the way, so would using your patent even if the license fee was $0"

Bob continues "Maybe other possible inventors felt the same as I did. Or maybe they're just not paying attention to that blackboard. If everybody has to pay attention to everything you write on that blackboard, that imposes a public cost too."

"I'll tell you what: Offer a reasonable bounty to make it worth my time, say $5000, and I will show you my solution to your problem."

"I haven't got $5000 to spend on that" says Alice, "I'm a struggling independent inventor with one killer idea in my pocket. And if I did have $5000 then I'd just be paying you to reveal your invention. I already have one, so the value to me is $0. If you're not bluffing, I'll be down $5000."

"If you don't have to offer a reward, I can play that game too," replied Bob, "but I won't leave it at one problem. I'll write write down thousands of them. Then I'll own all the problems that nobody paid attention to, many of which will be actually quite easy, just ignored. I'll solve some of them and nobody can use the solutions without licensing them from me."

"I see where that's going" says Alice. "I'd do the same." A look of absolute horror crossed her face. "We'd be back in the Dark Ages of the US patent system!"

How do we direct attention reasonably?

Collectively, Alice and Bob have a pretty good idea what's easy to solve and what's not. The trick is truthfully revealing that knowledge at a practical cost.

One possible solution quickly suggests itself to those acquainted with idea markets: We can adapt Hanson's old lottery trick.

I'll start by explaining the lottery; we'll go beyond that later. The idea is that Alice and everybody else who writes a problem on the blackboard pays a small fee. The whole amount - let's say it's still $5000 - is offered as a bounty on one problem chosen by lottery. Maybe it's offered on a few of them, but in any case on a small fraction of the problems.

That's great for that one problem, but it leaves the other problems unmeasured.

That's where decision markets come in. Investors place bets on whether a problem, should it be chosen for a bounty, will in fact be solved. Then one particular problem is chosen by lottery. The bets about its hardness are settled while the other bets are called off. It's easy for the system to see whether someone has claimed the bounty. We won't tackle the quality of the solution until later in this post.
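
To make the bookkeeping concrete, here's a toy sketch of that settle-one, call-off-the-rest logic. Everything in it - the names, the types, the pro-rata payout rule - is my hypothetical illustration, not part of any real market design:

  // A bet that a given problem will (or won't) be solved if it gets the bounty.
  interface Bet {
    problemId: number;
    trader: string;
    stake: number;
    predictsSolved: boolean;
  }

  interface Outcome {
    trader: string;
    refund: number;  // stake returned when a bet is called off or settles
    payout: number;  // winnings when a bet settles favorably
  }

  // Settle bets on the lottery-chosen problem; call off all the others.
  function settle(bets: Bet[], chosenProblem: number, wasSolved: boolean): Outcome[] {
    // Bets on unchosen problems are called off: stakes go straight back.
    const calledOff = bets
      .filter(b => b.problemId !== chosenProblem)
      .map(b => ({ trader: b.trader, refund: b.stake, payout: 0 }));

    // Bets on the chosen problem settle: losers forfeit their stakes,
    // and winners split that pot in proportion to what they staked.
    const live = bets.filter(b => b.problemId === chosenProblem);
    const winners = live.filter(b => b.predictsSolved === wasSolved);
    const pot = live
      .filter(b => b.predictsSolved !== wasSolved)
      .reduce((sum, b) => sum + b.stake, 0);
    const winStake = winners.reduce((sum, b) => sum + b.stake, 0);
    const settled = winners.map(b => ({
      trader: b.trader,
      refund: b.stake,
      payout: winStake > 0 ? pot * (b.stake / winStake) : 0,
    }));

    return [...calledOff, ...settled];
  }

Since traders can't know in advance which problem the lottery will pick, they have to price every problem as if it might be the one that settles.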

The hardness-market price determines whether the problem is considered easy or not - there may be finer gradations but we won't get into that here.

So we've amplified the signal. By offering a bounty on one problem, we've measured the hardness of all the problems. We'll improve on this later.

More dialog

So this scheme is implemented. A few weeks later, Bob comes storming in.

"Somebody wrote on the blackboard the exact problem that I've been working on!"

"That's odd." says Alice

"Did you do this to get back at me, Alice?"

"How does that inconvenience you? You could earn $5000."

"My invention is worth millions! Now I have to disclose it for a one-time payment of $5000? That doesn't even cover my research and development costs!"

"Well, you can hardly expect anyone to offer a million dollars for a bounty."

"That's true. Still, this is very wrong. Since you wrote the problem, you should be required to have a solution too."

"It just so happens that I do, but if I disclosed it, you'd just copy it. You should have to show your solution first."

"Then you'd copy it. See, Alice, I don't trust you either."

"I really had no idea you were working on it too, Bob. But if you really do have a million-dollar invention too, why should either of us sell it for $5000? As far as we know, only the two of us have it. Why should the two of us collectively get less than it's worth?"

"Luckily we found out in time. That's just sheer luck. We could agree to ignore the bounty and split the rewards of the patent between us."

"50/50"

The two inventors shook hands and then compared notes.

"Hey!" exclaimed Bob "I shouldn't have assumed that just because your invention solved the same problem, it was as good as mine. It's not! Mine's cheaper to manufacture. I'd have got about 95% of the market share."

"No, I'd have beat you. Mine's more stylish and easier to use."

The two inventors glared at each other, each convinced they had gotten the worst of the deal.

Bugs

So we have the following bugs:

  • The bounty acted like a price, but wasn't really a sensible price. In fact, it didn't even try, it just set one fixed price for everything.
  • The bounty overrode the market payoffs, which are a better measure of invention quality.
  • Relating the hardness to the number of times the bounty is claimed measures the wrong thing. What we need is solution unobviousness. This is trickier, since we have to look past the expected number of solutions and see how many actual solutions overlap.
  • If the first inventor to disclose gets a patent monopoly, it's unfair to Alice, who posed the problem and paid to do so. It shuts her out of her own invention.
  • If the first to disclose doesn't get a patent monopoly, for inventions whose expected market value is more than the bounty, the likelihood of the bounty being accepted will be too low. We'll see fewer than we should and therefore we'll underestimate obviousness.

Second draft

In order to fix those bugs, we're going to redo the bounty system:

  • Claiming the bounty doesn't require openly disclosing an invention. It requires only an unforgeable commit, probably as a secure hash of the disclosure document (see the sketch after this list). The reveal and the payment will come later. NB this allows the original problem-poser to participate in the race for the solution.
  • After a predetermined period of time:
    • The claimed inventions are to be disclosed (all at once)
    • Inventions not disclosed within a given time-frame are ineligible.
    • Disclosures that don't match the prior commit are ineligible.
    • Unworkable inventions are ineligible.
    • Each distinct workable invention gets patent protection
      • Technically, that's each distinct patent claim - a USPTO patent application typically contains many related patent claims.
      • If multiple inventors invented the same invention, they each get an equal share. This is suboptimal against sockpuppets, though: one inventor could pose as several and collect several shares.
      • Here I'm assuming that novelty against prior art isn't an issue. It won't be much of one, because why would Alice pay money to pose a problem whose solution is already publicly known? We can just say that the existence of a prior invention voids a patent claim, just like now.
    • Each distinct workable invention gets an equal fraction of the bounty.
  • Non-bountied inventions are treated essentially the same way, just minus the bounty and the bounty time period.
  • The market pays off in proportion to the count of unique workable solutions, rather than the count of all solutions.
    • We don't want to use some sort of "density of uniqueness", because that's too easily spoofed by submitting the same solution many times.
  • To estimate solution unobviousness:
    • For bountied problems, we use a count of solutions, backed off somewhat towards the market's prior estimate.
    • For non-bountied problems, we directly use the market's prior estimate.
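
Here's a minimal sketch of the commit step from the first bullet above. I'm assuming SHA-256 as the hash (the scheme doesn't name one) and Node's built-in crypto module; the salt matters in practice, since a short or guessable disclosure could otherwise be brute-forced from its published hash:

  import { createHash } from "crypto";

  // Commit phase: publish only the hash; the disclosure document stays private.
  function commit(disclosure: string, salt: string): string {
    return createHash("sha256").update(salt + disclosure, "utf8").digest("hex");
  }

  // Reveal phase: a disclosure is eligible only if it reproduces the prior commit.
  function matchesCommit(disclosure: string, salt: string, priorCommit: string): boolean {
    return commit(disclosure, salt) === priorCommit;
  }

  // E.g. an inventor commits at claim time and reveals at the deadline;
  // a document altered after the fact won't reproduce the committed hash.
  const c = commit("full text of the disclosure ...", "salt-chosen-at-commit-time");
  console.log(matchesCommit("full text of the disclosure ...", "salt-chosen-at-commit-time", c)); // true
  console.log(matchesCommit("a doctored disclosure", "salt-chosen-at-commit-time", c));           // false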

Measuring invention workability and quality

We still need a way to measure invention workability. This is traditionally the job of a patent examiner. However, we've piled more work onto them, and there are concerns about how good a job they do. This post is already long, though, so I won't try to design a better mechanism for that here.

Progressively increasing bounties

One way to aim bounties more efficiently is to start by offering small bounties. Then for some of those problems whose bounties were not claimed, raise the bounty. That way we are not overpaying for easy answers.
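
As a sketch (the numbers are hypothetical), the escalation rule can be as simple as:

  // Start small; if the problem period lapses unclaimed, raise the bounty
  // and run another period, up to some cap.
  function nextBounty(current: number, factor = 2, cap = 50_000): number | null {
    const raised = current * factor;
    return raised <= cap ? raised : null; // null: stop escalating
  }

An unclaimed low bounty is itself weak evidence of hardness, so each escalation round also feeds back into the hardness estimate.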

Time frame

We haven't addressed the question of how long the problem period should be. We may want it to work differently when there is a bounty, since then we need a standard measuring period.

It's not hard to propose. I could say "six months", but I want to leave it open to more flexible rules, so I won't propose anything here.

Relates to how long a patent monopoly should be in force

This mechanism also provides a somewhat more principled way of deciding how long a patent should be in force: it should relate to how long the problem period is. Perhaps the two should be linearly proportional.

Bug: We charge for posing a problem

Posing a problem well is valuable and is a task in and of itself. Yet we've charged the problem-poser for the privilege. This isn't good, and I'd like it to be the other way.

We could try to recurse, with the problem being: "What are some unsolved problems in field X?" but then the solution is no longer in a standard form as formal patent applications are.

This post is already long, so I will leave it at that.

Footnotes:

1 I will mention in passing that AIUI re-invention is allowed but only under stringent conditions that are only practical for well-heeled institutions. The slightest exposure to a patent "taints" a would-be rediscoverer forever. IANAL so take this with a grain of salt.

2 That patent was later disallowed because it was a mere combination of two things, which is not patentable. See Eraser. Regardless, the general point stands.

05 December 2012

Causal Dynamical Triangulation

I've been reading up on Causal Dynamical Triangulation (CDT) (by Loll, Ambjørn, and Jurkiewicz). It's an approach to quantum gravity related to Loop Quantum Gravity (LQG), which you may have read the Scientific American article on a few years back.
What it (like LQG) has to recommend it is that the structure of space emerges from the theory itself. Basically, it proposes a topological substrate (spin foam) made of simplexes (lines, triangles, tetrahedra, etc.). Spatial curvature emerges from how those simplexes can join together.

Degeneration and the arrow of time

The big problem for CDT in its early form was that the space that emerged was not our space. What emerged was one of two degenerate forms: it either had infinite dimensions or just one. The topology went to one of two extremes of connectedness.
The key insight for CDT was that space emerges correctly if edges of simplexes can only be joined when their arrows of time are pointing in the same direction.

So time doesn't emerge?

But some like to see the "arrow of time" as emergent. The view is that it's not so much that states only mix (unmix) along the arrow of time. It's the other way around: "time" has an arrow because it has an unmixed state at one end (or point) and a mixed state at the other.
To say the same thing in a different way: the rule isn't that the arrow of time makes entropy increase; it's that when you have an entropy gradient along a time-like curve, you have an arrow of time.
The appeal is that we don't have to say that the time dimension has special rules, such as making entropy increase in one direction. Also, both QM and relativity show us a time-symmetrical picture of fundamental interactions, and an emergent arrow of time doesn't mess that picture up.

Observables and CDT

So I immediately had to wonder, could the "only join edges if arrows of time are the same" behavior be emergent?
In quantum mechanics, you can only observe certain aspects of a wavefunction, called observables. Given a superposition of arrow-matched and arrow-mismatched CDT states, is it the case that only the arrow-matched state is observable? I.e., that any self-adjoint operator must be a function only of arrow-matched states?
I frankly don't know CDT remotely well enough to say, but it doesn't sound promising and I have to suspect that Loll et al already looked at that.

A weaker variant

So I'm pessimistic about a theory where mismatched arrows are simply always cosmically censored.
But as far as my limited understanding of CDT goes, with all due humility, there's room for them to be mostly censored. Like, arrow-mismatched components are strongly suppressed in all observables in cases where there's a strong arrow of time.

Degeneration: A feature, not a bug?

It occurred to me that the degeneration I described earlier might be a feature and not a bug.
Suppose for a moment that CDT is true but that the "only join edges if arrows of time are the same" behavior is just emergent, not fundamental. What happens in the far future, at the heat death of the universe, when entropy has basically maxed out?
Space degenerates. It doesn't even resemble our space. It's either an infinite-dimensional complete graph or a 1-dimensional line.

The Boltzmann Brain paradox

What's good about that is that it may solve the Boltzmann Brain paradox. Which is this:
What's the likelihood that a brain (and mind) just like yours would arise from random quantum fluctuations in empty space? Say, in a section of interstellar space a million cubic miles in volume which we observe for one minute?
Very small. Very, very small. But it's not zero. Nor does it even approach zero as the universe ages and gets less dense, at least not if the cosmological constant is non-zero. The probability has a lower limit.
Well, multiplying that by an infinite span of time gives an infinite expected number of Boltzmann Brains exactly like our own. The situation should be utterly dominated by those cases. But that's the opposite of what we see.
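In symbols (my gloss): if the fluctuation rate per unit time stays above some $\Gamma_{\min} > 0$, the expected number of such brains over unbounded time is $$\mathbb{E}[N] = \int_0^\infty \Gamma(t)\,dt \ge \int_0^\infty \Gamma_{\min}\,dt = \infty.$$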

Degeneracy to the rescue

But if CDT and emergent time are true, the universe would have degenerated long before that time. Waving my hands a bit, I doubt that a Boltzmann Brain could exist even momentarily in that sort of space. Paradox solved.

Is that the Big Rip?

(The foregoing was speculative and hand-waving, but this will be far more so)
Having described that degeneration, I can't help noticing its resemblance to the Big Rip, the hypothesized future event when cosmological expansion dominates the universe and tears everything apart.
That makes me wonder if the accelerating expansion of space that we see could be explained along similar lines. Like, the emergent arrow-of-time-matching isn't quite 100% perfect, and when it "misses", space expands a little.
This would fit with the weaker variant proposed above.

Problems

For one thing, it's not clear how it could explain the missing 72.8% of the universe's mass-energy, as dark energy was hypothesized to.

End

Now my hands are tired from all the hand-waving I'm doing, so I'll stop.

Edit: dynamic -> dynamical

Meaning 2

Previously

I relayed the definition of "meaning" that I consider best, which is generally accepted in semiotics:
X means Y just if X is a reliable indication of Y
Lameen Souag asked a good question:
how would [meaning as reliable indication] account for the fact that lies have a meaning?

Lies

"Reliable" doesn't mean foolproof. Good liars do abuse reliable indicators.
Second, when we have seen through a lie, we do use the term "meaning" in that way. When you know that someone is a liar, you might say "what she says doesn't mean anything" (doesn't reliably indicate anything). Or you might speak of a meaning that has little to do with the lie's literal words, but accords with what it reliably indicates: "When he says `trust me', that means you should keep your wallet closed."

Language interpretation

Perhaps you were speaking of a more surface sense of the lie's meaning? Like, you could say "Sabrina listed this item on Ebay as a 'new computer', but it's actually a used mop." Even people who considered her a liar and her utterances unreliable could understand what her promise meant; that's how they know she told a lie. They extract a meaning from an utterance even though they know it doesn't reliably indicate anything. Is that a fair summation of your point?
To understand utterances divorced from who actually says them, we use a consensus of how to transform from words and constructions to indicators; a language.
Don't throw away the context, though. We divorced the utterance from its circumstances and viewed it thru other people's consensus. We can't turn around and treat what we get thru that process as things we directly obtained from the situation; they weren't.
If Sabrina was reliable in her speech (wouldn't lie etc), we could take a shortcut here, because viewing her utterance thru others' consensus wouldn't change what it means. But she isn't, so we have to remember that the reliable-in-the-consensus indicators are not reliable in the real circumstances (Sabrina's Ebay postings).
So when interpreting a lie, we get a modified sense of meaning. "Consensus meaning", if you will. It's still a meaning (reliable indication), but we mustn't forget how we obtained it: not from the physical situation itself but via a consensus.

The consensus / language

NB, that only works because the (consensus of) language transforms words and constructions in reliable ways. If a lot of people used language very unreliably, it wouldn't. What if (say) half the speakers substituted antonyms on odd-numbered days, or whenever they secretly flipped a coin and it came up tails? How could you extract much meaning from what they said?

Not all interpretations are created equal

This may sound like All Interpretations Are Created Equal - and therefore you can't say objectively that Sabrina committed fraud; that's just your interpretation of what she said; there could be others. But that's not what I mean at all.
For instance, we can deduce that she committed fraud (taking the report as true).
At the start of our reasoning process, we only know her locutionary act - the physical expression of it, posting 'new computer for sale'. We don't assume anything about her perlocutionary act - convincing you (or someone) that she offers a new computer for sale.
  1. She knows the language (Assumption, so we can skip some boring parts)
  2. You might believe what she tells you (Assumption)
  3. Since the item is actually an old mop, making you believe that she offers a new computer is fraud. (Assumption)
  4. Under the language consensus, 'new computer' reliably indicates new computer (common vocabulary)
  5. Since she knows the language, she knew 'new computer' would be transformed reliably-in-the-consensus to indicate new computer (by 1&4)
  6. Reliably indicating 'new computer' to you implies meaning new computer to you. (by definition) (So now we begin to see her perlocutionary act)
  7. So by her uttering 'new computer', she has conveyed to you that she is offering a new computer (by 5&6)
  8. She thereby attempts the perlocutionary act of persuading you that she offers a new computer (by 2&7)
  9. She thereby commits fraud (by 3&8)
I made some assumptions for brevity, but the point is that with no more than this definition of meaning and language-as-mere-consensus, we can make interesting, reasonable deductions.

(Late edits for clarity)