How to Price Kentucky Derby Exactas

Microprediction
6 min read · Apr 17, 2021

As a kid, one of my introductions to applied mathematics was the racetrack. Specifically, it was Randwick Racecourse in Sydney. I won’t bore you with my various adventures, except to say that for the longest time I have felt a strong pull towards a mathematical problem that arises there — one that seemed not to be well covered in the order statistics literature. I’d go so far as to say I had an overwhelming aesthetic need to construct a self-consistent pricing model for all racetrack wagers.

I’m pleased to say that some work along these lines was published this month in the SIAM Journal on Financial Mathematics, 12 (1), 295–317. Somebody had a Google alert set and noticed it, so it started circulating in the handicapping forums almost immediately. I suspect that sooner or later this will be my most cited paper, which isn’t saying much.

The title is Inferring Relative Ability from Winning Probability in Multientrant Contests, and in this post we’ll use it to construct a coherent model for the upcoming Kentucky Derby. To be honest, I’ve kept this to myself for quite a number of years, but recently I’ve come to realize its importance to things well beyond the racetrack. Not only in obvious places like e-commerce, where lots of little races occur all the time, but also in automated machine learning, bandit-like problems and elsewhere.

Perhaps I’ll write a blog article about those other uses one day but for now, it’s Derby time!

A Coherent Model for the Kentucky Derby

The basic problem is simple enough. Let’s assume that each horse has a performance distribution. That distribution can be anything you want (a marked departure from the prior literature, which you can read about in the paper). However, for concreteness let’s assume the running time is skew-normal.

Our task is to translate these running time distributions until they match prescribed winning probabilities. In the case of the Kentucky Derby, that’s roughly a 15-dimensional optimization problem and thus not difficult. However, my paper reveals a shortcut that is not only much, much faster but scales to races involving a million participants. Here’s the solution:
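If you want to see what “translating the distributions” means in code, here is a minimal, self-contained sketch. It is emphatically not the fast shortcut from the paper, just a brute-force calibration on a grid, and the skew-normal shape, the horse names other than the favorites mentioned below, and the target win probabilities are illustrative placeholders rather than live market numbers.

```python
import numpy as np
from scipy.stats import skewnorm

# Illustrative field and win probabilities (not the live market).
names = ['Tiz the Law', 'Honor A. P.', 'Authentic', 'Thousand Words',
         'Ny Traffic', 'Winning Impression']
targets = np.array([0.57, 0.18, 0.11, 0.06, 0.05, 0.03])

def win_probabilities(offsets, grid, a=1.5, scale=1.0):
    """P(horse i records the lowest running time) when times are skew-normal shifted by offsets."""
    pdfs = np.array([skewnorm.pdf(grid, a, loc=o, scale=scale) for o in offsets])
    sfs = np.array([skewnorm.sf(grid, a, loc=o, scale=scale) for o in offsets])
    probs = []
    for i in range(len(offsets)):
        others_slower = np.prod(np.delete(sfs, i, axis=0), axis=0)  # everyone else runs slower
        probs.append(np.trapz(pdfs[i] * others_slower, grid))
    return np.array(probs)

def calibrate(targets, n_iter=500, lr=2.0):
    """Crude fixed-point iteration: make a horse slower if its model win probability is too high."""
    grid = np.linspace(-10.0, 10.0, 2001)
    offsets = np.zeros(len(targets))
    for _ in range(n_iter):
        p = win_probabilities(offsets, grid)
        offsets += lr * (p - targets)   # too likely to win -> shift its times later
        offsets -= offsets.mean()       # pin down the overall location, which is otherwise arbitrary
    return offsets

offsets = calibrate(targets)
print(dict(zip(names, offsets.round(3))))
```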

If you perform a Monte Carlo simulation using these running time distributions, you will recover exactly the current win probabilities. As you can see, the market currently infers that Tiz the Law is vastly superior to its rivals. This is one way to visualize that ability gap. Here is how the plot is produced.
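A sketch of the plotting step, reusing the names and calibrated offsets from the snippet above; the figure size and labels are my own choices rather than anything from the notebook.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import skewnorm

grid = np.linspace(-6, 6, 1001)
fig, ax = plt.subplots(figsize=(9, 4))
for name, o in zip(names, offsets):   # names and offsets from the calibration sketch above
    ax.plot(grid, skewnorm.pdf(grid, 1.5, loc=o, scale=1.0), label=name)
ax.set_xlabel('relative running time (lower is better)')
ax.set_ylabel('density')
ax.set_title('Implied running time densities')
ax.legend()
plt.show()
```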

The Math

Of course, there are some assumptions. The first step is interpreting the live odds for the Derby as winning probabilities. That topic could occupy us for a long time, but in the interest of simplicity I applied a power law to unwind the so-called longshot effect — the same method I used to beat up on Nate Silver (article) not so long ago. Feel free to modify that as you see fit in the notebook I’ve provided.

I started with these odds:

which I’m sure are out of date as you read this. For those not familiar, the bookmaking convention of quoting Tiz The Law at 0.6 means that the bookmaker risks 60c for every $1 the patron risks. So setting aside normalization and the longshot effect, that would translate into a probability of 1/1.6 = 0.625. After unwinding and normalizing, I assigned Tiz The Law a probability of 1/1.73.

For this we need some rudimentary logic:
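Something along these lines will do. The quoted odds other than Tiz the Law’s 0.6, and the longshot exponent, are placeholders rather than the values in my notebook.

```python
import numpy as np

# Illustrative quoted odds (only Tiz the Law's 0.6 appears in the text above).
quoted = {'Tiz the Law': 0.6, 'Honor A. P.': 4.0, 'Authentic': 7.0,
          'Thousand Words': 14.0, 'Ny Traffic': 19.0, 'Winning Impression': 40.0}

def implied_probabilities(odds, longshot_exponent=1.1):
    """Quoted odds -> win probabilities: invert, dampen longshots, then normalize."""
    raw = np.array([1.0 / (1.0 + o) for o in odds.values()])   # odds o imply probability 1/(1+o)
    adjusted = raw ** longshot_exponent                         # shrink longshots relative to favorites
    adjusted /= adjusted.sum()                                  # remove the bookmaker's overround
    return dict(zip(odds.keys(), adjusted))

print(implied_probabilities(quoted))
```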

The Rule of 1/4 for Show Betting

Bookmakers sometimes use the rule of 1/4 to estimate show probabilities. This means that when laying a horse to finish in the top three positions, they are willing to risk one quarter of what they would risk (to your dollar) when laying the same horse to win. However, they are smart enough not to do this if the favorite is odds-on.

We can use the running time model to see why this could go badly wrong. The most interesting column in this table is the Percentage Diff. This reports the percentage difference in show probability of our coherent model, as compared with the ad-hoc rule of 1/4. The show odds that would be offered are found in the second column and as you can see are 1/4 of the win odds.

For example, the second favorite Honor A.P. is 16% more likely to finish in the top three positions according to our model than the rule of 1/4 would suggest. The longshot Winning Impression is twice as likely to finish in the top three!

Here I wanted to isolate the rule of 1/4 from the longshot effect. However you might prefer to use the rule of 1/4 on the original unadjusted bookmaker odds first, and then subsequently normalize. That will lead to a smaller percentage difference but still a significant one. The Derby is a very low entropy race, one might say. It isn’t often the case that one horse sucks so much oxygen out of the win probabilities.

How are these ratios calculated? Here I’m computing the bookmaker heuristics.
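A sketch of that heuristic, reusing the illustrative odds from the earlier snippet: quote a quarter of the win odds for a show bet and read off the implied probability.

```python
def rule_of_quarter_show_probability(win_odds):
    """Implied show probability when the bookmaker quotes a quarter of the win odds."""
    show_odds = win_odds / 4.0            # risk a quarter of the win payout
    return 1.0 / (1.0 + show_odds)

for name, o in quoted.items():            # 'quoted' from the earlier odds snippet
    print(f"{name:<20s} win odds {o:5.1f}   rule-of-1/4 show prob {rule_of_quarter_show_probability(o):.3f}")
```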

I also need a Monte Carlo simulation of the performance model for comparison. That is, I dare say, mere book-keeping.
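Here is what that book-keeping might look like, assuming the calibrated offsets from the earlier sketch; the simulation size and random seed are arbitrary.

```python
import numpy as np
from scipy.stats import skewnorm

def simulate_show_probabilities(offsets, n_races=200_000, a=1.5, scale=1.0, seed=0):
    """Simulate races and return each horse's frequency of finishing in the top three."""
    rng = np.random.default_rng(seed)
    times = skewnorm.rvs(a, loc=offsets, scale=scale,
                         size=(n_races, len(offsets)), random_state=rng)
    ranks = np.argsort(np.argsort(times, axis=1), axis=1)   # 0 = fastest
    return (ranks < 3).mean(axis=0)                         # fraction of races in the top three

show_probs = simulate_show_probabilities(offsets)            # offsets from the calibration sketch
for name, prob in zip(names, show_probs):
    print(f"{name:<20s} model show prob {prob:.3f}")
```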

The Axiom of Choice, Exactas and Quinellas

You’ve probably come across Luce’s Axiom of Choice before, or reinvented it, without realizing it had a name. It is named for Harvard psychologist R. Duncan Luce — though if you read his reflective essay (here) you’ll see he didn’t really believe it himself.

The axiom relates to Ken Arrow’s principle of independence of irrelevant alternatives. It is normally applied to the study of people’s choices, but at the racetrack it is often applied to nature’s choice — as it were — of the winner of the race. It can also be very simply expressed in the racetrack setting. Luce’s Axiom states that if you remove a horse, you should simply renormalize winning probabilities.

What do I mean by “remove”? I mean condition on that horse winning, and that’s where the subtle difference arises. This use of the Axiom of Choice is named for Harville, though most people would just assume it is “obvious”. It turns out to be both obvious and terribly wrong. But here it is:
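In symbols, Harville prices the exacta as P(i first, j second) = p_i p_j / (1 - p_i): condition on the winner, then renormalize the remaining win probabilities to price second place. A sketch, reusing the illustrative probabilities from the odds snippet:

```python
def harville_exacta(p, i, j):
    """Harville: probability that horse i wins and horse j runs second."""
    return p[i] * p[j] / (1.0 - p[i])

p = list(implied_probabilities(quoted).values())   # win probabilities from the odds snippet
print(harville_exacta(p, 0, 1))                    # favorite over second favorite
```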

We can investigate the failure of the “axiom” of choice easily enough now that we have an internally consistent model for the horse race. The notebook contains an example of simulating the race, computing exacta prices (first two horses in order), and comparing them against the trivially computed Harville exacta prices.
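A sketch of that comparison, built from the illustrative quantities defined in the earlier snippets rather than the notebook itself:

```python
import numpy as np
from scipy.stats import skewnorm

def simulate_exacta_probabilities(offsets, n_races=500_000, a=1.5, scale=1.0, seed=1):
    """Estimate P(i first, j second) for every ordered pair by simulating races."""
    rng = np.random.default_rng(seed)
    n = len(offsets)
    times = skewnorm.rvs(a, loc=offsets, scale=scale, size=(n_races, n), random_state=rng)
    order = np.argsort(times, axis=1)                 # order[:, 0] wins, order[:, 1] runs second
    counts = np.zeros((n, n))
    np.add.at(counts, (order[:, 0], order[:, 1]), 1)
    return counts / n_races

mc = simulate_exacta_probabilities(offsets)           # offsets from the calibration sketch
for i in range(len(names)):
    for j in range(len(names)):
        if i != j:
            ratio = mc[i, j] / harville_exacta(p, i, j)
            print(f"{names[i]:>20s} over {names[j]:<20s}   Monte Carlo / Harville = {ratio:.2f}")
```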

What we find in this low entropy race is that the Harville exacta prices are probably wrong by a factor of two, or even three in some cases. The market is not stupid, but I will leave it to the enterprising reader to form a critical assessment of the efficiency of the exacta market. We are two weeks out from the big race as I write this, so anything I say will be stale quickly.

However, my real point is that when faced with the task of assigning probabilities to combinatorial choices from a set of n objects, or to a race between things that isn’t a race between one radioactive particle and another, it isn’t necessary to stoop to the level of the Axiom of Choice.

You can read more in the paper.
