Bayes' Theorem describes the probability of an event based on prior knowledge of the conditions related to that event.
This can often be counterintuitive. Say, for example, you take a test for a disease that exists in 10% of the population, and you test positive. If the test has a sensitivity of 90% (true positive rate) and a specificity of 90% (true negative rate), your intuition may tell you that you have a 90% chance of having the disease.
Whilst this feels correct, we have neglected to take the base probability into account: the probability of having the disease in the first place.
In fact, your odds of truly having the disease after testing positive are actually 50% - far from the expected 90%.
This is worked out by multiplying the base rate by the sensitivity to get the chance of a true positive (10% x 90% = 9%), then dividing by the total chance of testing positive - true positives plus false positives (9% + 9% = 18%). That gives 9% / 18% = 50%.
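The calculation above can be sketched in a few lines of Python; the variable names are my own, chosen for illustration:

```python
prevalence = 0.10   # base rate: 10% of the population has the disease
sensitivity = 0.90  # true positive rate
specificity = 0.90  # true negative rate

# Chance of each kind of positive result
p_true_positive = prevalence * sensitivity               # 0.09
p_false_positive = (1 - prevalence) * (1 - specificity)  # 0.09

# Total chance of testing positive at all
p_positive = p_true_positive + p_false_positive          # 0.18

# Bayes' Theorem: P(disease | positive test)
posterior = p_true_positive / p_positive
print(posterior)  # 0.5
```

Notice that because true and false positives are equally likely here (9% each), a positive result is only as good as a coin flip.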
To get a sense of this we can watch how the modelled test below plays out:
[Interactive simulation: each person in the modelled population is marked as Not Infected, True Positive, False Positive, True Negative, or False Negative, with running counts of true positive, false positive, true negative, and false negative tests.]
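For readers without access to the interactive version, a small Monte Carlo simulation along the same lines can be sketched as follows (the population size and seed are arbitrary choices of mine):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

prevalence, sensitivity, specificity = 0.10, 0.90, 0.90
population = 10_000

counts = {"TP": 0, "FP": 0, "TN": 0, "FN": 0}
for _ in range(population):
    infected = random.random() < prevalence
    if infected:
        # An infected person tests positive with probability = sensitivity
        positive = random.random() < sensitivity
        counts["TP" if positive else "FN"] += 1
    else:
        # A healthy person tests negative with probability = specificity
        negative = random.random() < specificity
        counts["TN" if negative else "FP"] += 1

print(counts)
# Of all positive tests, roughly half are true positives:
print(counts["TP"] / (counts["TP"] + counts["FP"]))
```

Running this, the share of positive tests that are genuine hovers around 50%, matching the exact calculation above.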
If Bayes' Theorem still seems a little unclear, Kalid Azad has written a great article which I highly recommend reading: An Intuitive (and Short) Explanation of Bayes' Theorem. For a more complete explanation of the formula, as well as some history, see the Wikipedia page.
If you're ready to see more, however, Andrew Collier has developed a Shiny application illustrating an intuitive visualization of Bayesian updates.

Author: Ryan Nel