I Came Here For You To Beat Me

Microprediction
6 min read · May 22, 2023

I have created a notebook that will help you beat the microprediction oracle — the one you are so terrified of.

Makes no sense whatsoever in an options market, continuous lottery, or a prediction web

Yes, like the frightened baby chipmunk, you are scared by anything that is different. I know this. Too bad for you I have prophesied in this book that in the future, there will be powerful “microprediction oracles” all around you. You hate that this strikes at the core of a fundamental contradiction inherent in the artisan management of data-hungry methods.

You American. You dare to critique my thesis. You have spilled my macchiato. And yet you have not demonstrated that you can even beat the microprediction oracle that already exists. WHAT IS THIS? THIS MAKES NO SENSE! I mean the oracle isn’t going to get any worse now, is it? A web-scale collection of these things isn’t going to be LESS POWERFUL than ONE of them, is it?

That’s okay. I will help you. I will help you defeat me. Perhaps I will give up on this prediction web idea. But before I do that, I need to encounter someone who is truly better than me. Perhaps that is you.

Step 1. You better have a model

I know you think you are good at this machine learning thing. I know you think you wake up in the morning and piss excellence. Well, let me say that you better have a truly fine model and it better provide distributional predictions. Or at minimum, it better make 225 guesses of a future data point.

If you don’t have a model then I say, Ricky Bobby, that you clearly don’t know what to do with your hands. The bakery doesn’t have a model either by the way, which is really the point you missed, isn’t it now? One of many in the book. Or perhaps you think point estimates should do? Go read this. Your point estimate will probably lead to half of California burning by accident, silly American.

def predict(lagged_values):
    """
    Replace this with your model. It is merely an example.
    """
    import numpy as np
    from microconventions.stats_conventions import StatsConventions
    from microprediction import MicroReader
    # Pad the history so the standard deviation is defined even for very short series
    padded = [-1, 0, 1] + list(lagged_values) + list(lagged_values[:5]) + list(lagged_values[:15])
    devo = np.std(padded)
    # Spray jittered Gaussian quantiles scaled by the empirical standard deviation
    values = sorted([devo * StatsConventions.norminv(p) + 0.001 * np.random.randn() for p in MicroReader().percentiles()])
    # Nudge the guesses apart so no two are identical
    nudged = StatsConventions.nudged(values)
    return nudged

This “data science” thing you talk about is ridiculous, by the way. Mathematics in the tradition of Bourbaki makes sense. What has America ever given the world, apart from George Bush, Cheerios, and the Thighmaster? You could not even invent Kaggle.

And now that I have asked you for a model, you are already taking too long to provide any value, like a master craftsman painstakingly carving away. Quite disappointing, I must say, compared to a microprediction oracle, at least in developer time. Your technologists would be better off without you. They can just use the API.

import matplotlib.pyplot as plt
plt.hist(predict(lagged_values=[1, 2, 3, 4, 5, 6]), bins=100)
plt.show()
A quick test of your prediction model, which sprays 225 Monte Carlo-like samples
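
If a histogram is not proof enough for you, a two-line sanity check follows. It assumes, as the rest of this post does, that the oracle wants exactly 225 guesses; nothing else is assumed.

guesses = predict(lagged_values=[1, 2, 3, 4, 5, 6])
print(len(guesses), min(guesses), max(guesses))
assert len(guesses) == 225, 'the oracle expects exactly 225 guesses'  # assumption stated above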

Step 2: Pick a live source of data

You want to use stale tabular data, I know. You are so wedded to your precious common task framework, the great catalyst of the machine learning revolution. Well, I say that idea ain’t worth a velvet painting of a whale and a dolphin getting it on. Have you heard of “market drift”? No, only “model drift”. You should go read Dorothy, You’re Not in Kaggle Anymore. This isn’t about you on a podium, it’s about cakes going to waste. And croissants.

from microprediction.live.crypto import names_and_prices

def bitcoin() -> float:
    n, p = names_and_prices()
    return p[0] / 100

bitcoin_0 = bitcoin()

def something_measured():
    """
    Replace this with your own instrumented quantity
    """
    return bitcoin() - bitcoin_0
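
A hedged smoke test of the feed, nothing more: straight after bitcoin_0 is captured the measured change should be roughly zero, and a minute later it will probably have moved.

import time
print(something_measured())   # ~0.0 immediately after bitcoin_0 is captured
time.sleep(60)
print(something_measured())   # probably nonzero by now, markets being markets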

Step 3. Decide on a criterion for assessing guesses

The matador shall dance with the blind shoemaker! Those 225 guesses are a lossy representation of a distribution, but stop your complaining and see if you can make a statistical point regardless. Feel free to change the criterion. Your excuses will not survive the next version of the oracle, by the way.

def robust_log_like(guesses: [float], value, show=False):
    """ Interpret a finite list of guesses as a distribution and a quasi-likelihood.
        There's no perfect way ... I do not care ... you wanted this "horse race" not me.
    """
    from tdigest import TDigest
    import math
    import numpy as np
    # Induce a CDF from the guesses by jittering them and feeding a t-digest
    h = (1e-4 + max(guesses) - min(guesses)) / len(guesses)
    digest = TDigest()
    bumped_guesses = list()
    for _ in range(500):
        bumped_guesses.extend([x + np.random.randn() * h / 3 for x in guesses])
    if show:
        import matplotlib.pyplot as plt
        plt.hist(bumped_guesses, bins=1000)
        plt.title('Implied density ?')
        plt.show()

    digest.batch_update(bumped_guesses)
    eps = 1e-3
    prob = (digest.cdf(value + eps) - digest.cdf(value - eps) + 1e-10) / eps
    return math.log(prob)

robust_log_like([1, 1.2, 3, 4, 4.5, 4.6, 4.8], 4.6, show=True)
Example of an implied density used for log-likelihood, with very few samples for illustration
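
If you do not trust your eyes, here is a throwaway check that the criterion behaves: a realized value landing among the guesses should score higher than one far outside them. The numbers are arbitrary.

inside = robust_log_like([1, 1.2, 3, 4, 4.5, 4.6, 4.8], 4.6)
outside = robust_log_like([1, 1.2, 3, 4, 4.5, 4.6, 4.8], 9.0)
print(inside, outside)   # expect inside > outside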

Step 4: Get a write key of difficulty 12 at least

Yes, I know you expect to use some authentication invented in America. I do not care. The French invented democracy, existentialism, and the pari-mutuel. So, see instructions for making a write key and alternatives like begging. Otherwise, it will have to be tomorrow. Yes, tomorrow, you are going to get beaten. Beaten real bad, cowboy!

# STOP: RUNNING THIS CELL WILL TAKE A LONG TIME
from microprediction import new_key
WRITE_KEY = new_key(difficulty=12) # don't hold your breath.
print(WRITE_KEY)
# HERE IS AN ALTERNATIVE:
WRITE_KEY = 'BEG ME FOR A WRITE KEY AND PUT IT HERE'
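
Should you wish to gauge the mining time before committing to difficulty 12, a lower difficulty gives a feel for it; the cost grows roughly exponentially with difficulty. Difficulty 8 here is purely illustrative, and such a key will not carry the standing of a difficulty-12 one.

import time
from microprediction import new_key
start = time.time()
cheap_key = new_key(difficulty=8)   # illustrative only; far quicker to mine than 12
print(cheap_key, 'mined in', round(time.time() - start), 'seconds')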

Step 5: Run your model and compare

Now we shall dance. And yes, it will be a slow jam.

We shall:

  • Publish, every 20 minutes, the change over the trailing 15 minutes
  • Make model predictions
  • Retrieve market predictions
  • Wait for the next data point and add to an incremental assessment

I do this only to pander to crybabies like yourself who think models should not be disadvantaged. I know you will like this exercise, as it downplays the other advantages of markets over models, including exogenous data search, the ability to turn weak truths into stronger ones, the ability to turn lagged data into real-time data, and so on. I don’t mind. I will defeat you anyway. On your own turf. With your own silly brand of racing.

NAME = 'benchmarking_target.json'  # <--- change this to something, keep the .json
WARMUP = 5
import time
from microprediction import MicroWriter
mw = MicroWriter(write_key=WRITE_KEY)
PARTICIPATE = True  # Set to False to match your dark heart
from momentum import var_init, var_update
market_var = var_init()  # Tracks running mean/var of market log-likelihood
model_var = var_init()   # Tracks running mean/var of model log-likelihood
model_guesses = None
market_guesses = None

# Start the value changes
MINUTE = 60  # Should be 60 but make smaller for testing
for t in range(WARMUP):
    time.sleep(5 * MINUTE)
    prev_value = something_measured()
    time.sleep(15 * MINUTE)
    change = something_measured() - prev_value
    mw.set(name=NAME, value=change)
    print('Published ' + str(change))

for t in range(3 * 24 * 7 * 4):  # roughly four weeks of 20-minute cycles
    time.sleep(5 * MINUTE)
    prev_value = something_measured()
    # Make next predictions, and also get market's prediction
    lagged_values = mw.get_lagged_values(name=NAME)
    print(' predicting ...')
    model_guesses = predict(lagged_values=lagged_values)
    # Maybe participate too? Here comes the philosophical debate
    if PARTICIPATE:
        # predict 15 minute horizon
        mw.submit(name=NAME, values=model_guesses, delay=mw.DELAYS[2])
        time.sleep(5)
    # Get market predictions
    print(' retrieving market predictions ...')
    market_guesses = mw.get_own_predictions(name=NAME, delay=mw.DELAYS[2])

    # Next data point arrives
    time.sleep(15 * MINUTE)
    change = something_measured() - prev_value
    mw.set(name=NAME, value=change)
    print('Published ' + str(change))

    # Evaluate
    ll_model = robust_log_like(model_guesses, change)
    ll_market = robust_log_like(market_guesses, change)
    market_var = var_update(market_var, ll_market)
    model_var = var_update(model_var, ll_model)
    report = {'market_log_like': market_var['mean'],
              'model_log_like': model_var['mean'],
              'market_log_like_std': market_var['std']}
    from pprint import pprint
    if market_var['mean'] > model_var['mean']:
        report['status'] = 'market is better'
    else:
        report['status'] = 'market will be better sooner or later'
    pprint(report)
Published -0.02030000000002019
Published -0.04189999999999827
Published -0.04970000000002983
Published -0.39129999999997267
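
Should you want more than a running mean before conceding, a rough standard-error sketch follows. The recorded pairs are hypothetical, and nothing is assumed of the momentum state beyond the 'mean' and 'std' fields already used above.

from momentum import var_init, var_update
import math
diff_var = var_init()
recorded_pairs = [(-1.2, -1.5), (-0.8, -1.1), (-1.0, -0.9)]   # hypothetical (ll_market, ll_model) pairs
for ll_mkt, ll_mod in recorded_pairs:
    diff_var = var_update(diff_var, ll_mkt - ll_mod)
stderr = diff_var['std'] / math.sqrt(len(recorded_pairs))
print('market minus model log-likelihood:', diff_var['mean'], '+/-', 2 * stderr)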

I’m sorry you lost

Your injury is one of ignorance and pride!

You thought you were a big hairy American winning machine. But now you must cross over the anger bridge and come back to the friendship shore. It’s not all about model contests and winner take all. In a French mechanism, everyone can help everyone else, and you could still be rewarded if you know the true distribution (roughly in proportion to the K-L distance to the market, see here).
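
That proportionality is not a microprediction-specific conceit, by the way. Under a log-scoring rule, someone who knows the true density p facing a market that quotes q expects a per-round edge of E_p[log p - log q], which is exactly the Kullback-Leibler divergence KL(p||q). A throwaway sketch with arbitrarily chosen Gaussians (nothing here touches the microprediction API, whose payout is only roughly of this flavor):

import numpy as np
xs = np.linspace(-10, 10, 200001)
p = np.exp(-xs ** 2 / 2) / np.sqrt(2 * np.pi)                                # the density you know
q = np.exp(-(xs - 0.5) ** 2 / (2 * 1.5 ** 2)) / (1.5 * np.sqrt(2 * np.pi))   # the market's quote
edge = np.trapz(p * np.log(p / q), xs)                        # expected log-score advantage
analytic = np.log(1.5) + (1.0 + 0.25) / (2 * 1.5 ** 2) - 0.5  # closed-form Gaussian KL
print(edge, analytic)   # agree to several decimals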

It would have taken very little time for you to put your wonderful model into the prediction network instead (instructions) where it would eventually find something it is good at. Instead, we have wasted much more time on this bespoke non-comparison.

What is your plan? To withhold your model from the open prediction network so that you can say it is better than the open prediction network? That is sh!t. As sh!t as that Highlander movie.

I wish you had just read the docs, silly American.
