We already know how to do Bayesian Updating with discrete priors. Today we will learn how to do Bayesian Updating with continuous priors.

## Continuous Priors

To do Bayesian Updating with continuous priors but discrete data – we will look at the case where both are continuous next time – we just change sums to integrals and PMFs to PDFs.

We often denote the hypothesis by $\theta$, where $\theta$ ranges over a continuous interval rather than taking one of a discrete set of values. The hypothesis is then often written as $\theta \in [\theta_0, \theta_0 + d\theta]$, which means that the hypothesis lies in a small interval of width $d\theta$ around $\theta_0$. That is because the probability is given by

$$P(\theta_0 \le \theta \le \theta_0 + d\theta) \approx f(\theta_0)\,d\theta,$$

where $f(\theta)$ is the prior PDF.
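A quick numerical sketch of this approximation: for a small interval width $d\theta$, the exact probability of landing in the interval is close to $f(\theta_0)\,d\theta$. The PDF $f(\theta) = 2\theta$ on $[0,1]$ used below is a hypothetical choice for illustration, not something fixed by the text.

```python
# Sketch: for small dtheta, P(theta0 <= theta <= theta0 + dtheta) ~ f(theta0) * dtheta.
# Illustrative pdf: f(theta) = 2 * theta on [0, 1] (an assumption for this example).
theta0, dtheta = 0.3, 1e-4
f = lambda t: 2 * t

# Exact probability: integral of 2t from theta0 to theta0 + dtheta.
exact = (theta0 + dtheta) ** 2 - theta0 ** 2
# Approximation: pdf value at the left endpoint times the interval width.
approx = f(theta0) * dtheta

print(exact, approx)  # the two agree up to an error of order dtheta**2
```

The error of the approximation shrinks like $d\theta^2$, which is why the identification of $f(\theta_0)\,d\theta$ with a probability becomes exact in the limit.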

Putting all of the above together, we get a Bayesian Updating table that looks like this:

| Hypothesis | Prior | Likelihood | unnormalised Posterior | Posterior |
|---|---|---|---|---|
| $\theta$ | $f(\theta)\,d\theta$ | $p(x \mid \theta)$ | $p(x \mid \theta)\,f(\theta)\,d\theta$ | $\dfrac{p(x \mid \theta)\,f(\theta)\,d\theta}{p(x)}$ |
| Total | 1 | | $p(x) = \int_a^b p(x \mid \theta)\,f(\theta)\,d\theta$ | 1 |

The total $p(x) = \int_a^b p(x \mid \theta)\,f(\theta)\,d\theta$ of the unnormalised posterior column can be hard to calculate, and we often use a computer to do so in case we don’t have a conjugate prior. More about conjugate priors soon.
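When the integral has no closed form, a simple grid approximation is often enough. A minimal sketch, assuming a flat prior $f(\theta) = 1$ on $[0,1]$ and the likelihood $p(x=1 \mid \theta) = \theta$ for one observed head (the same setup as the example below):

```python
import numpy as np

thetas = np.linspace(0, 1, 10_001)   # grid over the hypothesis range [0, 1]
dtheta = thetas[1] - thetas[0]
prior = np.ones_like(thetas)         # flat prior: f(theta) = 1
likelihood = thetas                  # p(x = 1 | theta) = theta (one head)

# Riemann sum approximating p(x) = integral of theta * 1 over [0, 1] = 1/2.
p_x = np.sum(likelihood * prior) * dtheta
print(p_x)
```

The same pattern works for any prior and likelihood you can evaluate on a grid; only the two array definitions change.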

**Example:** Suppose we have a coin with a flat prior on its unknown probability $\theta$ of heads. We flip the coin once and get heads. What is the posterior PDF?

**Answer:** A flat prior means that $f(\theta) = 1$ on the interval $[0,1]$, which means that all hypotheses are equally likely and that the hypotheses lie between 0 and 1. Furthermore, we let $x = 1$ mean that the outcome was heads.

The definition of our hypothesis tells us that $p(x = 1 \mid \theta) = \theta$, because our probability of getting heads is exactly our hypothesis $\theta$. We then have the following Bayesian Updating table:

| Hypothesis | Prior | Likelihood | unnormalised Posterior | Posterior |
|---|---|---|---|---|
| $\theta$ | $1 \cdot d\theta$ | $\theta$ | $\theta\,d\theta$ | $2\theta\,d\theta$ |
| Total | 1 | | $\int_0^1 \theta\,d\theta = \frac{1}{2}$ | 1 |

Our posterior PDF after seeing one heads is then

$$f(\theta \mid x = 1) = \frac{\theta \cdot 1}{1/2} = 2\theta.$$

We can see that Bayesian Updating with continuous priors doesn’t differ much from Bayesian Updating with discrete priors.
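The coin example can also be checked numerically with a grid approximation, a minimal sketch of the same update done on a computer (the grid size is an arbitrary choice):

```python
import numpy as np

thetas = np.linspace(0, 1, 10_001)
dtheta = thetas[1] - thetas[0]
prior = np.ones_like(thetas)                  # flat prior on [0, 1]
likelihood = thetas                           # p(x = 1 | theta) = theta

unnorm = likelihood * prior                   # unnormalised posterior
posterior = unnorm / (unnorm.sum() * dtheta)  # divide by p(x) so it integrates to 1

# At theta = 0.5 the grid posterior should match the analytic answer 2 * theta:
print(posterior[5000], 2 * thetas[5000])
```

The grid posterior agrees with $2\theta$ everywhere on the grid, up to the discretisation error of the Riemann sum.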