
Ineffective Theory

Automating arbitrage on Manifold markets

Here are two questions on Manifold markets:

  1. Will there be a federal mask requirement on US domestic flights on November 8, 2022?
  2. Will there be a federal mask requirement in place on domestic flights as of Nov. 8, 2022?

This is the obvious hazard of allowing anyone to create a market—people aren’t always rigorously careful to ensure that they’re not creating a duplicate.

This is also an opportunity: the first market has (as I write this) a probability of 12%, and the second of 8%. Since both creators are trustworthy, the market probabilities should agree more closely than that.

Of course one can do this by hand, but then I’d have to check in periodically to make sure the two markets still agree, and if not, make appropriate trades. Much better to automate the check and the trade. (Also, I’m invested in the idea of automating arbitrage on Manifold—literally.)


Manifold Markets has a nice API available. To make it a bit easier to work with, I wrote a Python wrapper, vatic. Be warned: this is an ill-tested work in progress! It’s good enough for this arbitrage task, and probably nothing else.

This is not a difficult task. Since the markets to be arbitraged are already identified, the script only needs to get their probabilities, check that they’re sufficiently far apart for the trade to be worthwhile (using 2% as a crude threshold), and then buy some YES of one and some NO of the other.

from vatic import manifold

# Slugs identifying the two (duplicate) markets.
slug1 = 'will-there-be-a-federal-mask-requir-d236f8cd3553'
slug2 = 'will-there-be-a-federal-mask-requir'

mani = manifold.Manifold(auth='Key obviously-im-hiding-it')
mkt1 = mani._get_slug(slug1)
mkt2 = mani._get_slug(slug2)

if abs(mkt1.probability - mkt2.probability) > .02:
    print('Probabilities separated by more than 2%---arbitraging!')
    # Ensure mkt1 is the lower-probability market.
    if mkt1.probability > mkt2.probability:
        mkt1, mkt2 = mkt2, mkt1
    # Buy M$1 of YES on the low market and M$1 of NO on the high one.
    print(mkt1.bet(1, 'YES'))
    print(mkt2.bet(1, 'NO'))

Running repeatedly yields:

(env)$ ./arbitrage.py 
Probabilities separated by more than 2%---arbitraging!
{'betId': 'xskQpaEh9RnEfsnp3NsF'}
{'betId': 'LiWz0zkraXxkRlEynlQu'}
(env)$ ./arbitrage.py 
Probabilities separated by more than 2%---arbitraging!
{'betId': 'aLtAe8ivMQYNH0sBTCZt'}
{'betId': 'O5LjGUwJR53tEsclvKkW'}
(env)$ ./arbitrage.py 
Probabilities separated by more than 2%---arbitraging!
{'betId': 'j1IVf1Dod7qm6mc39CtK'}
{'betId': 'O36ddnYtgnKpM69zXwI1'}
(env)$ ./arbitrage.py 
Probabilities separated by more than 2%---arbitraging!
{'betId': 'IXTLdjQJM1h0tDrINNsL'}
{'betId': 'lIVkXN0UEaPKGXCK8CiY'}

At the end of which I landed myself 4 NO shares and 40 YES shares. So it “worked”!

On the other hand, I now realize that I’ve accumulated a net YES position, which was certainly not my intent! If I believe that the true probability is 10%, then this strategy turns a profit in expectation (at least ignoring trading fees), but the most likely outcome is still a loss. Not terrible, but not ideal. I’m a bit more risk-averse than that, and would like a guaranteed profit.
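To make the asymmetry concrete, here is the payoff of that position under each resolution. (This is a sketch; the M$8 cost is my own estimate, four runs of two M$1 bets each.)

```python
# Position from the four runs above; the cost is an estimate
# (4 runs x (M$1 of YES + M$1 of NO) = M$8).
yes_shares, no_shares = 40.0, 4.0
cost = 8.0

# Each share pays out M$1 if its side wins, and nothing otherwise.
pnl_if_yes = yes_shares - cost
pnl_if_no = no_shares - cost

print(f'P&L if YES: M${pnl_if_yes:+.1f}')  # M$+32.0
print(f'P&L if NO:  M${pnl_if_no:+.1f}')   # M$-4.0
```

A 90% chance of the NO branch means the likely outcome is a small loss, even if the position is fine in expectation.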

So, I sell everything and try again.


To guarantee outcome-independent profit, we want to end up with roughly the same number of YES and NO shares at the end. Ideally, then, we would buy a single share at a time, not a single M$ worth at a time. Something like:

# Spend just enough M$ to buy about one share at the current price.
mkt1.bet(mkt1.probability, 'YES')
mkt2.bet(1 - mkt2.probability, 'NO')

Unfortunately the Manifold API does not currently allow arbitrarily small bets. The smallest possible bet size is M$1, and in cases where the total amount of M$ being bet is small and the probabilities are close to 0%, there’s no way to construct a sensible bet.
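To see why, consider the idealized rule in which one YES share costs p and one NO share costs 1 − p (this ignores Manifold’s actual market-maker curve and fees, so real fills differ):

```python
# Idealized share counts for an M$1 bet: one YES share costs p,
# one NO share costs (1 - p). Ignores the real AMM curve and fees.
for p in (0.02, 0.10, 0.50):
    print(f'p={p:.2f}: M$1 buys {1 / p:.2f} YES or {1 / (1 - p):.2f} NO')
```

At p = 0.02, the smallest allowed YES bet already buys about 50 shares, so no combination of M$1 bets can balance the YES and NO share counts.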

A “good enough” approach is to buy shares probabilistically. If we buy YES on the low market about 10% of the time, and buy NO on the high market about 90% of the time, then we’ll end up with a roughly equal number of shares of each when that’s possible. (When it’s not possible, we’ll be back where we started—profit only in expectation.)
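A quick simulation (under the same idealized constant-price rule, with an assumed probability of 10%) shows why this balances: M$1 of YES buys about 1/p shares but is only bought a fraction p of the time, so both sides accumulate roughly one share per step in expectation.

```python
import random

random.seed(0)
p = 0.10           # assumed probability near both markets
steps = 100_000
yes_shares = no_shares = 0.0
for _ in range(steps):
    if random.random() < p:
        yes_shares += 1 / p        # M$1 of YES at price p
    if random.random() < 1 - p:
        no_shares += 1 / (1 - p)   # M$1 of NO at price 1 - p

# Both averages come out close to 1.0 shares per step.
print(round(yes_shares / steps, 2), round(no_shares / steps, 2))
```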

import random
from vatic import manifold

slug1 = 'will-there-be-a-federal-mask-requir-d236f8cd3553'
slug2 = 'will-there-be-a-federal-mask-requir'

mani = manifold.Manifold(auth='Key obviously-im-hiding-it')
mkt1 = mani._get_slug(slug1)
mkt2 = mani._get_slug(slug2)

if abs(mkt1.probability - mkt2.probability) > .02:
    # Ensure mkt1 is the lower-probability market.
    if mkt1.probability > mkt2.probability:
        mkt1, mkt2 = mkt2, mkt1
    # Buy M$1 of YES with probability p, and M$1 of NO with
    # probability 1-p, so both sides accumulate shares at similar rates.
    if random.random() < mkt1.probability:
        print('Buying YES')
        mkt1.bet(1, 'YES')
    if random.random() < 1 - mkt2.probability:
        print('Buying NO')
        mkt2.bet(1, 'NO')

This also has the advantage of being stateless: the script makes a single M$1 bet and then exits, so it can simply be run again. The client never needs to do any complicated calculations about what the probabilities will be after a bet. I just take the smallest step possible, and then re-evaluate.

Being only very slightly reckless:

(env)$ while true; do ./arbitrage.py; done
Buying NO
Buying YES
Buying NO
Buying NO
Buying NO
Buying NO
Buying NO
Buying NO
Buying NO
Buying NO
Buying NO
Buying NO
Buying NO
Buying NO
Buying NO
Buying YES
Buying NO

The end result: I spent M$17 to get 21.059 YES shares and 16.719 NO shares. This is not quite a guaranteed profit: in the event of NO, I lose M$0.281, while in the event of YES, I gain M$4.059. It is, however, much closer. Alternatively, trusting the 10% probability, the expected value of this position is M$17.15, so I gained M$0.15 in expectation.
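For anyone checking along, the arithmetic (shares and cost from the run above; the 10% is my prior, not anything the market guarantees):

```python
yes_shares, no_shares, cost = 21.059, 16.719, 17.0
p = 0.10  # assumed true probability

profit_if_yes = yes_shares - cost               # shares pay M$1 each
loss_if_no = cost - no_shares
expected_value = p * yes_shares + (1 - p) * no_shares

print(round(profit_if_yes, 3))    # 4.059
print(round(loss_if_no, 3))       # 0.281
print(round(expected_value, 3))   # 17.153
```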

There’s more that should be tuned. For instance, a good bit was lost to creator fees in this process. I still managed to make a net (expected) profit, but presumably a bit more could be eked out by paying attention to fees.

There’s more to say, but this has already taken much more time than I expected, so I will simply close with one of those universally beloved exercises for the reader. Under what circumstances does the strategy above far underperform a human trader? (Can another trader, who knows I’m running this script, profit from that knowledge?)

Links for June 2022

Hanson on selection biases for arguments. A closely related principle: hearing a well-thought-out, but ultimately unconvincing, argument for X can often give you more confidence in not-X.

Trying to get a robot to correctly interpret your request. I’m far from the first to note this, but Asimov’s concept of “robopsychology” is suddenly incredibly relevant.

Zhao Yanjing on housing in China. It hits some familiar notes: it seems that the same lessons are being learned in many countries. The piece also hits some exceedingly unfamiliar notes (which may be unfamiliar for a good reason). Also, there’s this lovely quote:

There is a danger that external shocks such as the epidemic or deteriorating international relations will be used to explain the recent economic downturn.

Read Wikipedia privately using fully homomorphic encryption. The link is a site demonstrating the ability of FHE to prevent a malicious server from knowing what information it sent you.

Circumscribe expertise (and maybe talk to the police)

There’s a standard piece of internet wisdom: don’t talk to the police. The canonical video link, I think, is this one. It’s a lecture by a law professor and former criminal defense attorney, alongside a police officer. The gist of it is usually summarized as “even if you believe you haven’t done anything wrong, never talk to the police—it can only harm”.

Now, a random internet stranger claiming “you should never talk to the police—they’re out to get you, doncha know!” is not very believable. Such a claim might be seen as conspiratorial, or even sovereign-citizen-adjacent. So it’s important for the believability of this claim (and the propagation of this meme) that it’s usually accompanied by the above link, and that the speakers in that link are apparently relevant experts. It’s difficult to plausibly doubt them when they say, for instance, that talking to the police without a lawyer can be a risky act even for someone who is in reality innocent.

The ultimate claim, though, is not a simple factual statement about a long tail of risk when talking to the police. The claim that this video is generally used to support is a normative one: you should not talk to the police. Translating the central factual claim into a normative claim is a tricky task, and it serves as a nice little case study on the relevance of “relevant” experts.

The pope announces that speaking to the police is commanded by God—you will be richly rewarded in the afterlife. Now should you talk to the police? Maybe you’re not so interested in the afterlife. Okay, your psychologist (named “Fraud”) tells you that police represent father figures, and you need to talk to them in order to overcome your inexplicable fear of rockets and sausages.

Those examples are deliberately silly. Fine: late one Tuesday night, the sheriff comes to your door.

“Excuse me. I’m investigating a series of crimes. Sunday, the fellow on the corner was shot and killed. Yesterday, Ms. Watserneym next door was shot—she’s doing okay in the hospital, though. We believe the suspect lives nearby. Have you seen anything suspicious?”

If, unbeknownst to the police, you just last night recorded a video of a shadowy figure hiding a package nearby, should you tell them? Sure, they might look at your phone and bust you for movie piracy. On the other hand, you’re about to die.

In the median case, though, you haven’t seen anything. What to do then? Your incentives, properly understood, still point towards “make the police as efficient as possible”, even at some short-term cost to your legal position. The sheriff is going to have to go interview everyone on your block, and decide if they seem suspicious. Neglecting, as you should, considerations of your own legal liability, you should do whatever it takes to convince the sheriff that his time is better spent elsewhere. (That way, he might spend his time elsewhere, increasing the odds of success.) If he asks “may I search your house”, the correct answer is “absolutely”. Remember, you have hours left to live.

In short: sometimes, you should talk to the police simply because it may help them do their job better, and police doing their job well has positive externalities.

Back to generalities about expertise. Law professors and police officers are not experts in quantifying externalities (more’s the pity). So now we see that deciding what you should do when questioned by the police in fact involves a slightly larger set of experts: one must at least include a certain flavor of economists. But is that all? Should we add a priest? A psychologist? An anthropologist? This is not a question to be answered. Stating a set of “relevant” experts is itself a factual statement about the world, and it’s a factual statement for which there is no particularly relevant expert.