Module 12: Probability & Risk

Odds, chance, and how to think about risk

Part A · feel the scale of probability first
What does "1 in N" actually look like?
Each dot below is one person or one event. The red dot is the one that "happens."
1 in 10 — e.g. chance of getting a cold in any given month
1 in 100 — e.g. chance of a serious car accident in a year of driving
1 in 1,000 — e.g. roughly a 35-year-old's chance of dying from any cause this year (UK)
Finding the red dot in this grid takes effort — that's the point.
The crowd test: "1 in 1,000" means one person in a crowd of 1,000, a single section of a sports stadium. "1 in a million" is one person across twenty full 50,000-seat stadiums. "1 in a billion" is one person across 20,000 such stadiums, more people than live in any country except China or India.
Part B · the probability scale — real anchors
~1 in 2
Lifetime cancer diagnosis (Western countries)
Most cancers are treatable. This high number surprises people who think cancer is rare.
~1 in 4
Dying of heart disease (lifetime risk)
The single largest cause of death in most developed countries.
~1 in 100
Being involved in a serious car accident in any given year
People drive daily and feel safe — but across a lifetime of driving, the cumulative risk is substantial.
~1 in 500
Dying in a road accident over a lifetime (UK)
The annual risk is far smaller, roughly 1 in 40,000 in the UK. Lifetime risk is higher in countries with less safe roads.
~1 in 11,000
Dying in a plane crash (lifetime risk)
Flying is ~50–100× safer per km than driving. Yet people fear it far more.
~1 in 1,000,000
Being struck by lightning in a year
Lifetime risk (~1 in 15,000) is much higher — which is why "1 in a million" feels wrong for lightning.
~1 in 14,000,000
Winning a major lottery jackpot (UK National Lottery)
You are ~14× more likely to be struck by lightning this year than to win the jackpot.
Part C · the biggest trap — relative vs absolute risk
The same fact can be presented in two very different ways: as a relative change or as an absolute change.
The rule to remember forever
Whenever you hear a relative risk ("X% more likely"), always ask: "more likely than what?" — what is the base rate?
A 100% increase sounds catastrophic. If the base rate is 0.001%, doubling it gives 0.002% — almost nothing.
A 10% increase sounds small. If the base rate is 30%, adding 3 percentage points matters a lot.
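The two bullets above reduce to one line of arithmetic. A minimal sketch (the function name is illustrative); it also computes the number needed to treat, i.e. how many people must be treated to prevent one event:

```python
def absolute_change(base_rate_pct: float, relative_change: float) -> float:
    # percentage-point change implied by a relative ("X% more likely") claim
    return base_rate_pct * relative_change

# a "100% increase" on a tiny base rate: 0.001% doubles to 0.002%
print(absolute_change(0.001, 1.00))  # 0.001 percentage points
# a "10% increase" on a large base rate: 30% rises to 33%
print(round(absolute_change(30.0, 0.10), 2))  # 3.0 percentage points
# number needed to treat = 1 / absolute risk reduction
# a "50% cut" on a 0.2% annual risk saves 0.1 pp, so 1,000 people per event
print(round(1 / (absolute_change(0.2, 0.5) / 100)))  # 1000
```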
Part D · cognitive biases — why our probability intuition fails

Availability bias

Vivid = likely

We judge probability by how easily we can imagine it. Plane crashes make the news; car crashes don't. So we fear the wrong things. Sharks kill ~5 people/year worldwide. Vending machines kill more.

Base rate neglect

Ignoring the denominator

A test for a rare disease is "99% accurate." You test positive. What's the chance you actually have the disease? If only 1 in 10,000 people have it, probably less than 1% — because false positives outnumber true positives. See Part E.

Gambler's fallacy

Coins have no memory

After 10 heads in a row, the next flip is still 50/50. The coin doesn't "owe" you tails. Each independent event resets to its base probability. Casinos are built on this misunderstanding.
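That coins have no memory is easy to verify by simulation. A quick sketch (the batch count is arbitrary): flip batches of 11 coins, keep only the batches that open with 10 heads, and check how the 11th flip behaves.

```python
import random

random.seed(42)
streaks = 0
tails_after_streak = 0
for _ in range(2_000_000):
    flips = [random.random() < 0.5 for _ in range(11)]  # True = heads
    if all(flips[:10]):              # 10 heads in a row (~1 batch in 1,024)
        streaks += 1
        tails_after_streak += not flips[10]
# the 11th flip is still ~50/50, streak or no streak
print(streaks, tails_after_streak / streaks)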

Conjunction fallacy

Specific ≠ more likely

"She is a feminist bank teller" feels more probable than "she is a bank teller" — but it can't be, because the specific includes the general. Adding detail always reduces probability, never increases it.

Part E · the false positive problem — the most surprising result in probability
What does a positive test really mean? Suppose the disease affects 1 in 100 people and the test is 99% accurate.
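The scenario reduces to a two-line Bayes calculation. A minimal sketch, assuming the quoted accuracy applies equally to true positives and true negatives:

```python
def prob_disease_given_positive(prevalence: float, accuracy: float) -> float:
    true_pos = prevalence * accuracy               # sick and correctly flagged
    false_pos = (1 - prevalence) * (1 - accuracy)  # healthy but wrongly flagged
    return true_pos / (true_pos + false_pos)

# 1-in-100 prevalence, 99% accuracy: a positive result is a coin flip
print(round(prob_disease_given_positive(1 / 100, 0.99), 2))   # 0.5
# rarer disease (1 in 1,000): a positive is right only ~9% of the time
print(round(prob_disease_given_positive(1 / 1000, 0.99), 2))  # 0.09
```

The rarer the disease, the worse a positive result gets, which is the whole false positive paradox in one function.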
Part F · the birthday paradox — probability defies intuition
How many people in a room before two share a birthday?
Most people guess far too high: the probability crosses 50% at just 23 people.
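The crossover point can be computed exactly, assuming 365 equally likely birthdays and ignoring leap years:

```python
def p_shared_birthday(n: int) -> float:
    # P(at least two of n people share a birthday) = 1 - P(all distinct)
    p_distinct = 1.0
    for i in range(n):
        p_distinct *= (365 - i) / 365
    return 1 - p_distinct

# find the smallest room size where the probability passes 50%
n = 2
while p_shared_birthday(n) < 0.5:
    n += 1
print(n, round(p_shared_birthday(n), 3))  # 23 0.507
```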
Part G · risk in real life — the numbers that actually matter

Lifetime risk of dying in a car (UK/EU)

~1 in 240

Yet most people drive daily without conscious fear. We accept familiar risks far more readily than unfamiliar ones — even when unfamiliar risks are lower.

Lifetime risk of dying in a plane crash

~1 in 11,000

About 45× safer than driving per journey. Per km travelled, flying is ~50–100× safer. Fear of flying is one of the most statistically unjustified common fears.

Risk of a serious side effect from a common vaccine

~1 in 100,000

The disease being vaccinated against typically carries 100–10,000× greater risk of the same outcome. Vaccine risk must always be weighed against disease risk, not against zero.

Risk of dying from surgery (routine, healthy adult)

~1 in 100,000

General anaesthesia alone: ~1 in 100,000. Surgical risk rises sharply with age, obesity, and pre-existing conditions.

Part H · test yourself

1. A headline reads: "New drug cuts heart attack risk by 50%." Should you be impressed?

Not necessarily — you need the base rate. If your annual risk of a heart attack was 2%, a 50% relative reduction brings it to 1%. That's a real 1 percentage point benefit — meaningful. But if your base risk was 0.2%, the drug brings it to 0.1% — an absolute reduction of just 0.1 percentage points. The drug would need to treat 1,000 people to prevent one heart attack. Whether that justifies the cost, side effects, and daily pill-taking depends entirely on that absolute number, not the 50% headline.

2. You flip a fair coin and get 7 heads in a row. What is the probability the next flip is tails?

Exactly 50%. Each flip is independent — the coin has no memory of previous outcomes. The probability of 7 heads in a row happening was 1/128 (~0.78%), which was unlikely, but it happened. Now that it has happened, you are simply at flip #8, and the probability of tails is 50%. This is the gambler's fallacy: the feeling that "tails is due" is a cognitive error. Casinos thrive on it. Roulette wheels display recent results precisely to feed this illusion.

3. A disease affects 1 in 1,000 people. A test for it is 99% accurate. You test positive. What is the approximate probability you actually have the disease?

About 9%. This is the false positive paradox. Test 100,000 people: 100 actually have the disease, and the 99% test catches 99 of them (true positives). But 99,900 don't have it, and the 1% error rate gives 999 false positives. So among all ~1,098 positive results, only 99 are real — that's 99/1,098 ≈ 9%. This is why medical screening for rare diseases is complex: even very accurate tests produce mostly false positives when the disease is uncommon. Doctors follow up positive screening tests with confirmatory tests precisely because of this.

4. You are choosing between two routes to work. Route A has a 10% chance of making you 10 minutes late. Route B has a 1% chance of making you 60 minutes late. Which is riskier in terms of expected delay?

Nearly equal, and Route B is in fact slightly lower. Expected delay = probability × impact. Route A: 10% × 10 min = 1 minute of expected delay per trip. Route B: 1% × 60 min = 0.6 minutes per trip. Still, Route A's frequent small delays might be more manageable than Route B's rare hour-long ones. This illustrates that expected value isn't always the right metric: variance (how unpredictable the outcome is) matters too, especially when a bad outcome is truly unacceptable.
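The arithmetic behind the answer, plus the variance it alludes to, in a few lines:

```python
# expected delay = probability x size of delay (minutes per trip)
e_a = 0.10 * 10   # Route A: 10% chance of 10 minutes
e_b = 0.01 * 60   # Route B: 1% chance of 60 minutes
print(e_a, e_b)  # 1.0 0.6

# variance shows how differently the two routes behave trip to trip
var_a = 0.10 * (10 - e_a) ** 2 + 0.90 * (0 - e_a) ** 2
var_b = 0.01 * (60 - e_b) ** 2 + 0.99 * (0 - e_b) ** 2
print(round(var_a, 1), round(var_b, 1))  # 9.0 35.6
```

Route B's expected delay is lower, but its variance is roughly four times Route A's: when it goes wrong, it goes badly wrong.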

5. "Smokers are 15–30× more likely to get lung cancer than non-smokers." Is this a relative or absolute risk? What does it mean in practice?

It's a relative risk, but the base rate is high enough that it translates into a large absolute risk too. Under 1% of never-smokers develop lung cancer in their lifetime. A 15–30× increase puts long-term heavy smokers at roughly 1 in 10 to 1 in 6 lifetime risk. That is an enormous absolute risk, and it is before counting heart disease, stroke, and the other cancers that smoking also dramatically increases. This is one of the clearest examples in medicine where the relative risk (15–30×) and the absolute risk (on the order of 15% lifetime) are both genuinely large and alarming.