### Part two: Toward a theory of everything

In part one of this article, @PlusEVAnalytics began his attempt to combine questions about sample size, regression to market, market efficiency and the Kelly Criterion into a betting version of the theory of everything. In part two, he develops this theory and tries to answer those questions. Read on to find out more.

Let’s recap what we covered in part one. We have a theoretical framework for understanding the nature of uncertainty, but we haven’t established any applications. We have a practical adjustment, whether referred to as fractional Kelly or as regression to market, that is useful but imperfect and whose theoretical basis is built on some shaky assumptions. It’s a match made in heaven…let’s put it all together.

### Let’s get Bayesian

To continue the previous example from part one, let’s assume that the market implied probability is 50% and our model probability is 55%. Let’s suppose that we’ve also decided, either subjectively or otherwise, that we’re going to select an initial bet size of 1/4 Kelly. We’re going to convert this assumption into a “prior distribution” for the true probability – an initial picture of what we think it looks like. This initial picture doesn’t have to be perfect, because we’re going to continually update it as we learn from our own bet history.

Note: What follows is my own method for constructing this distribution – it’s not the only way to do it, but I think it strikes a good balance of accuracy and simplicity. More sophisticated readers are encouraged to come up with their own methods!

We can simplify the picture by assuming that the true probability always falls within a certain range. This range is bounded at one end by the market implied probability and at the other end by the model probability. In this example the range is from 50% to 55%. Is it possible that the true probability could be 48%, or 57%? Yes, but it’s unlikely enough that ignoring these possibilities will not make a significant difference.

The only thing missing now is the shape of the prior distribution. I’m going to return to the idea of “regression to market” and model the regression weight using a simplified version of the Beta distribution. The Beta distribution normally has two parameters, but I want a one-to-one relationship between the Kelly multiple and this distribution, so I’m going to use only one parameter and set the other one to 1. The resulting distribution, also known as a “power function distribution”, has probability density function f(x) = k * x^(k-1) for 0 < x < 1, where k is the parameter and true probability = x * (model probability) + (1-x) * (market probability).
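As a quick check of this construction, here is a small numerical sketch (in Python rather than the article’s VBA). The key fact it verifies is that the mean of the power function distribution is k / (k + 1), which is how a desired regression weight gets converted into the parameter k:

```python
# Power function (Beta(k, 1)) prior on the regression weight x in (0, 1):
# density f(x) = k * x**(k - 1), with mean k / (k + 1).

def power_density(x, k):
    """Density of the power function distribution with parameter k."""
    return k * x ** (k - 1)

def mean_by_integration(k, steps=10_000):
    """Midpoint-rule estimate of E[X] = integral of x * f(x) dx over (0, 1)."""
    h = 1.0 / steps
    return sum((i + 0.5) * h * power_density((i + 0.5) * h, k) * h
               for i in range(steps))

k = 0.75  # the parameter derived below for a regression weight of 0.429
print(round(mean_by_integration(k), 3))  # ≈ 0.429 = k / (k + 1)
```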

Given an initial Kelly multiple of 0.25, we can solve for k by converting it to a regression weight of 0.429 as above, and setting the mean of the distribution to that regression weight. Since the mean of the power function distribution is k / (k + 1), a Kelly multiple of 0.25 converts to k = 0.429 / (1 – 0.429) = 0.75. The smaller the initial Kelly fraction, the more the distribution concentrates towards the market probability; the larger it is, the more weight shifts towards the model probability.

Now, suppose you make that first bet at 1/4 Kelly, and it wins. We can learn from this result by applying Bayes’ theorem:

Posterior f(x) conditional on observed results is proportional to (the likelihood of the observed results conditional on x) times prior f(x).

The observed results conditional on x follow a binomial distribution, in this case with 1 success in 1 trial.

Calculating the posterior distribution shows that the new information has updated our understanding of the generator just a little bit – it’s only one bet, after all! We can take the formulas in the “regression to market” section above and apply them in reverse to see that the posterior distribution implies a Kelly fraction of 0.26. So for our next bet, we should increase our bet size from 0.25 Kelly to 0.26 Kelly.

| Prior Kelly Fraction | Wins | Losses | Posterior Kelly Fraction |
|---|---|---|---|
| 0.25 | 1 | 0 | 0.26 |
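The whole update can be sketched in Python as a port of the VBA function in the appendix, specialised to a history summarised as a win/loss count at fixed probabilities. One assumption on my part: a 2.4% bookmaker margin (the appendix’s -105/-105 example), which reproduces the 0.429 regression weight implied by a 0.25 prior Kelly fraction.

```python
# Sketch of the posterior Kelly calculation: a Python port of the appendix's
# VBA function, with the bet history collapsed to a win/loss count. The 2.4%
# margin is an assumption (the appendix's -105/-105 example).

def posterior_kelly_fraction(prior_kelly, market_prob, model_prob, margin,
                             wins, losses, steps=2000):
    # Prior mean of the true probability, as in the appendix.
    prior_mean = model_prob * prior_kelly + (1 - prior_kelly) * market_prob * (1 + margin)
    # Power function prior f(x) = k * x**(k-1) on the regression weight x.
    k = (prior_mean - market_prob) / (model_prob - prior_mean)

    h = 1.0 / steps
    num = den = 0.0
    for i in range(steps):
        x = (i + 0.5) * h                                 # midpoint of the grid cell
        mass = ((i + 1) * h) ** k - (i * h) ** k          # prior mass (CDF difference)
        p = market_prob + x * (model_prob - market_prob)  # true win prob at weight x
        like = p ** wins * (1 - p) ** losses              # binomial likelihood kernel
        num += x * mass * like
        den += mass * like

    post_mean = market_prob + (model_prob - market_prob) * (num / den)
    edge = post_mean / (market_prob * (1 + margin))
    return max(0.0, (edge - 1) / ((model_prob / market_prob) / (1 + margin) - 1))

print(round(posterior_kelly_fraction(0.25, 0.50, 0.55, 0.024, 1, 0), 2))  # 0.26
```

With one win in one bet, the function reproduces the small step from 0.25 to 0.26 Kelly.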

Let’s continue this example – suppose your model has now been active for a year and has made 500 bets at an average projected win probability of 55%. The performance of the model has been poor – 250 wins and 250 losses. Should you continue or quit?

| Prior Kelly Fraction | Wins | Losses | Posterior Kelly Fraction |
|---|---|---|---|
| 0.25 | 250 | 250 | 0.05 |

The math says that you should continue but reduce your bet size to 0.05 Kelly.

Then, your next ten bets all lose. Now what?

| Prior Kelly Fraction | Wins | Losses | Posterior Kelly Fraction |
|---|---|---|---|
| 0.25 | 250 | 260 | 0.00 |

Now there is sufficient evidence that it’s time to quit.

For a happier example, let’s suppose that a brand new model wins its first twenty bets.

| Prior Kelly Fraction | Wins | Losses | Posterior Kelly Fraction |
|---|---|---|---|
| 0.25 | 20 | 0 | 0.47 |

The result is obviously driven at least somewhat by good luck – even if the model is perfect, winning a 55% bet twenty times in a row is still unusual. The Bayesian formula apportions the result between signal and noise, and suggests an increase in the bet size for the 21st bet from 0.25 Kelly to 0.47 Kelly.
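To put a number on “unusual”: even with a true win probability of 55%, the chance of twenty straight wins is 0.55 to the 20th power, which a one-line check confirms is well under one in a hundred thousand:

```python
# Probability that a bet with true win probability 0.55 wins 20 times in a row.
p_streak = 0.55 ** 20
print(f"{p_streak:.1e}")  # about 6.4e-06, roughly 1 in 156,000
```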

### The function

We can generalise the above method to calculate a posterior Kelly fraction as a function of:

- The prior Kelly fraction;

- Model probability, market probability and bookmaker margin for the bet in question;

- History of the model’s bets (model probability, market probability and result for a series of past bets).

VBA code for this function is provided in the appendix.

To better understand the behavior of this function, we can do a couple of sensitivity tests. Each of these tests assumes a string of bets each with market probability = 0.50 and model probability = 0.55.

Test #1: Suppose the emerging results match the model expectation exactly – 11 wins for every 9 losses, an observed win rate of 55%. How does the posterior Kelly fraction look after 20 bets, 40 bets, 60 bets, and so on?

Test #2: Suppose the emerging results match the market expectation exactly – 10 wins for every 10 losses, an observed win rate of 50%. How does the posterior Kelly fraction look after 20 bets, 40 bets, 60 bets, and so on?

In both tests, the model reacts quite differently to positive and negative results depending on the choice of prior Kelly fraction. Your choice here should be informed by the process, data and assumptions that underlie your model, but a choice that’s close to 0.50 (or 1/2 Kelly) will give you the most flexibility to learn from your results as they emerge.
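The two tests can be reproduced with a short script (Python here; it restates the appendix logic so the snippet runs on its own, and the 2.4% margin and 0.25 prior Kelly fraction are my illustrative assumptions):

```python
# Sensitivity sketch: posterior Kelly fraction as the bet history grows, when
# results track the model (55% win rate) versus the market (50% win rate).
# Self-contained restatement of the appendix logic; market probability 0.50,
# model probability 0.55 and a 2.4% margin are assumed throughout.

def posterior_kelly(prior_kelly, wins, losses, market=0.50, model=0.55,
                    margin=0.024, steps=2000):
    prior_mean = model * prior_kelly + (1 - prior_kelly) * market * (1 + margin)
    k = (prior_mean - market) / (model - prior_mean)  # power function prior parameter
    h, num, den = 1.0 / steps, 0.0, 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        mass = ((i + 1) * h) ** k - (i * h) ** k      # prior mass of the grid cell
        p = market + x * (model - market)             # true win probability at weight x
        like = p ** wins * (1 - p) ** losses
        num += x * mass * like
        den += mass * like
    mean = market + (model - market) * (num / den)
    edge = mean / (market * (1 + margin))
    return max(0.0, (edge - 1) / ((model / market) / (1 + margin) - 1))

for n in (20, 100, 200):
    model_match = posterior_kelly(0.25, round(n * 0.55), n - round(n * 0.55))
    market_match = posterior_kelly(0.25, n // 2, n - n // 2)
    print(f"{n} bets: model-match {model_match:.3f}, market-match {market_match:.3f}")
```

Model-matching results push the posterior Kelly fraction steadily above the prior, while market-matching results pull it steadily towards zero.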

### What have we learnt?

Sports betting is a unique type of gambling proposition because it uses natural generators instead of artificial ones, resulting in the need to consider both process uncertainty and parameter uncertainty. To do that, we can take the Kelly Criterion and modify it – first by regressing to market to form an initial picture of the generator, then by learning from a model’s observed results to sharpen the picture.

The resulting “posterior Kelly fraction” retains the most important property of the Kelly Criterion – that it scales the bet size up and down in proportion to the bankroll – but it also scales the bet size up and down in proportion to the degree of confidence in the model’s correct understanding of the generator. It is superior to regular Kelly for the purpose of bet sizing, and it is superior to both CLV and ROI for the purpose of evaluating a model’s results – it accounts for process uncertainty in a way that pure ROI does not, and it does not require the market efficiency assumption that CLV does.

Because the original application of the Kelly Criterion to gambling considered artificial generators only (Ed Thorp and blackjack), the academic literature on Kelly is generally silent on the concept of model error other than as a high-level rationale for using an arbitrarily selected Kelly fraction.

This modified Kelly Criterion explicitly takes model risk into account, and it reflects some of its important properties – that it propagates across multiple bets from the same model in a correlated manner, and that repeatedly observing win/loss results can allow the model error to be “learned” in a Bayesian sense.

Feel free to experiment for yourself, either using my distribution function or coming up with your own. I’m happy to answer any questions on Twitter @PlusEVAnalytics.

Here’s hoping that your parameter variance stays close to zero and your process variance is in your favour!

This guest contribution was made by one of our Twitter followers - @PlusEVAnalytics.

The author wishes to thank Joseph Buchdahl, Dominic Cortis and Rufus Peabody for reviewing a draft of this article and providing valuable feedback.

### Appendix – VBA code to calculate the posterior Kelly fraction

```vba
Function Posterior_kelly_fraction(prior_kelly_fraction As Double, market_prob As Double, model_prob As Double, bookmaker_margin As Double, past_market_probs As Range, past_model_probs As Range, past_results As Range) As Double
    ' VBA code to calculate the posterior Kelly fraction using the following parameters:
    ' prior_kelly_fraction = your initial selected Kelly fraction, must be greater than 0 and less than 1
    ' market_prob = the market implied probability of the bet being considered, must be greater than 0 and less than 1
    ' model_prob = your probability of the bet being considered, must be greater than market_prob and less than 1
    ' bookmaker_margin = the percent margin (hold) being charged by the bookmaker - use 0.024 for -105/-105 lines, 0.045 for -110/-110 lines, etc
    ' past_market_probs, past_model_probs, past_results must be identically-sized ranges containing your bet history for this model. past_results must be Boolean (TRUE/FALSE)
    ' By @PlusEVAnalytics, use at own risk!
    Dim bandwidth As Double, prior_mean As Double, prior_k As Double
    Dim numerator As Double, denominator As Double
    Dim likelihood As Double, prior_prob As Double
    Dim posterior_mean As Double, posterior_edge As Double
    Dim i As Long, j As Long

    bandwidth = 0.0001 'If the function runs too slowly on your computer, try increasing this number.
    prior_mean = (model_prob * prior_kelly_fraction) + (1 - prior_kelly_fraction) * market_prob * (1 + bookmaker_margin)
    prior_k = (prior_mean - market_prob) / (model_prob - prior_mean)
    numerator = 0
    denominator = 0
    For i = 1 To (1 / bandwidth)
        likelihood = 1
        prior_prob = WorksheetFunction.Beta_Dist(i * bandwidth, prior_k, 1, True) - WorksheetFunction.Beta_Dist((i - 1) * bandwidth, prior_k, 1, True)
        For j = 1 To past_market_probs.Count
            If past_results.Cells(j) Then
                likelihood = likelihood * (past_market_probs.Cells(j) + (i - 0.5) * bandwidth * (past_model_probs.Cells(j) - past_market_probs.Cells(j)))
            Else
                likelihood = likelihood * (1 - (past_market_probs.Cells(j) + (i - 0.5) * bandwidth * (past_model_probs.Cells(j) - past_market_probs.Cells(j))))
            End If
        Next j
        numerator = numerator + ((i - 0.5) * bandwidth) * prior_prob * likelihood
        denominator = denominator + prior_prob * likelihood
    Next i
    posterior_mean = market_prob + (model_prob - market_prob) * (numerator / denominator)
    posterior_edge = posterior_mean / (market_prob * (1 + bookmaker_margin))
    Posterior_kelly_fraction = WorksheetFunction.Max(0, (posterior_edge - 1) / ((model_prob / market_prob) / (1 + bookmaker_margin) - 1))
End Function
```

