https://ukhsa.blog.gov.uk/2014/07/24/mythbuster-low-risk-or-no-risk/

Mythbuster: Low risk or no risk?

Categories: Mythbuster, Protecting the country's health

This is the first in a new type of post we’ll be featuring on Public Health Matters. The Mythbuster will routinely take aim at one of the myths, misunderstandings and misconceptions that surround public health. In doing so, we’ll take a complex issue and break it down to make it easier to understand and, hopefully, put the myth to bed once and for all. First up: why don’t we (and can’t we) say something poses no risk?

As the nation’s top public health advisory body, it’s a pretty regular occurrence for Public Health England to release health advice on a range of topics. Often enough, the bottom line of one of these pieces of advice is that the issue at hand “poses a low risk to public health”. Sometimes it’s even a “very low risk”. What we never say, however, is that it poses no risk – and that’s where people start to get worried.

It’s completely understandable why people would want an assurance that something poses no risk, especially when we’re talking about a topic that sounds quite scary: man-made additives in the food we eat, the chance of catching a strange disease or the likelihood that something in our environment will cause us harm are all things that stir up emotions. When we enter the debate to say there’s “only” a very low risk that those things will cause harm it’s not surprising that some people will angrily demand that if it poses any risk it shouldn’t be allowed. The truth is, however, that it’s not possible for us to ever conclusively say there is “no risk”.

A problem of definition

On the one hand, there’s a simple problem of definition at work here. To demonstrate, consider an activity you might think of as “totally risk free” – walking up the stairs, perhaps? Yet hundreds of people in the UK are killed in falls on stairs every year, with many more injured. Of course that’s a tiny number in terms of the UK’s population, but the point is that everything involves a little bit of risk. Think of any activity and it’s possible to think of a way it could harm your health.

While many of us might be happy to split the difference and call most of those risks so vanishingly small that we’d count them as non-existent, science has a duty to accuracy. The nature of science and evidence also has other implications for why we can’t call anything “risk free”.

Absence of evidence…

Working out whether a given thing – a substance, circumstance or condition – causes a particular effect on health isn’t as simple as you might imagine. Finding two factors in the same place at the same time – what’s called a correlation – isn’t enough. Because our lives include so many things at any given time, it can be very hard to say whether one factor causes another, whether a third factor causes both, or whether it’s all just one big coincidence. Further complicating matters, a combination of factors may have an effect where any one of them alone would have been harmless. To unpick this mess, our scientists need evidence.

To establish causation we need to do more than show that two factors happen in the same place or even follow the same pattern (otherwise we might assume all kinds of strange things were true, like US technology expenditure causing an increasing rate of suicide by hanging). The kind of evidence needed depends on what we’re looking into, but it would typically include a possible cause consistently having the same effect in many people and in many places once other variables are taken out of the equation (you can find more on that kind of science, including things like controlled double-blind testing, here). We would also need a sound theory, based on other known science, for how something would cause that effect. If, after all that evidence is taken into account, there is no indication that the substance, circumstance or condition is causing the effect, it’s probably safe to assume that it doesn’t.

…isn’t evidence of absence

So that settles it, doesn’t it? There’s no evidence that A causes B, so it’s safe to say there’s no risk of A causing B, right?

Not quite: that’s why I said it’s only probably safe to assume that. All kinds of things could confound our search for evidence and prevent 100% certainty. Perhaps something causes a negative health effect in one person in every 100,000, but we only looked at 10,000 people. Perhaps the technology we have isn’t yet good enough to detect the relationship between cause and effect we would need to see. That’s why science constantly revisits the things we’re probably sure of with new technology, to see if we were wrong before.
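
To put rough numbers on that first possibility, here’s a quick illustrative sketch in Python (our own toy calculation using the hypothetical figures above, not a real study):

```python
# Illustrative sketch: a harm that affects 1 person in 100,000,
# investigated in a study of only 10,000 people (hypothetical figures
# from the paragraph above).

true_risk = 1 / 100_000   # assumed underlying risk per person
sample_size = 10_000      # number of people actually studied

# Probability that no one in the sample shows the effect, assuming
# each person is affected independently with probability true_risk
p_zero_cases = (1 - true_risk) ** sample_size
print(f"Chance the study sees zero cases: {p_zero_cases:.1%}")  # ~90.5%
```

In other words, a study of that size would most likely find “no evidence” of a perfectly real effect, simply because the sample was too small.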

Simply put, science can’t prove a negative. It can say something is improbable, doesn’t fit with what we’re already pretty sure we know about the world or that there’s no evidence for it, but it would be inaccurate and dishonest to say it’s impossible: we can never know that with absolute certainty. What you can be sure of, though, is that when we say something is “very low risk” we’re saying with as much surety as we can that it’s safe.

Featured image copyright Public Health England. Used under Crown Copyright.

5 comments

  1. Comment by edward frost posted on

    "Finding both cause and effect in the same place at the same time – what’s called a correlation – isn’t enough."

    Finding cause and effect is called causation, not correlation.

    Correlation between two variables does not necessarily imply that one causes the other. Cause and effect is causation, not correlation.

    Seemed a bit sloppy for a myth-buster article. Correlation does not guarantee that there is cause and effect, so finding cause and effect in the same place is not correlation but causation.

    To say that finding cause and effect is correlation is plainly wrong and creates more confusion.

    • Replies to edward frost

      Comment by Editor posted on

      Edward,

      You're quite right, that was a pretty bad typo! We've fixed it to read "finding two factors in the same place at the same time".

      ~Ed

  2. Comment by Bren posted on

    Hello Ed,

    Thanks for the blog, and I have read Edward's point on it too.

    I guess the only other point I would make is about the evidence: for me there are many forms of evidence, let alone methods to analyse and synthesise it.

    I think I got the main point you were making, and I believe it is about making a judgement on the evidence to hand, which is often not as strong as we would like it to be.

    Thanks and all good wishes,

    Bren.

    • Replies to Bren

      Comment by Editor posted on

      Thank you Bren, and you're quite right that there are many varieties of evidence and analysis. A real challenge with this blog was simplifying the issue enough to be easily understood by those without a scientific background while still conveying the key point: that the nature of evidence and science means it's very hard (if not impossible) to give absolute answers, because there may always be something new to discover. Additional comments such as yours are very valuable in expanding on the point we were making.

      ~Ed

  3. Comment by Julian Flowers posted on

    I'm not sure if this adds to the blog, but there is a simple method of assessing the risk of events which haven't yet occurred, called Hanley's formula or the "rule of three" (see Eypasch, E., Lefering, R., Kum, C. K., & Troidl, H. (1995). Probability of adverse events that have not yet occurred: a statistical reminder. BMJ, 311(7005), 619–620). Basically, we can estimate the maximum risk of an event that hasn't yet occurred as 3/n, where n is the number of people exposed to the potential risk. 3/n is the upper 95% confidence limit calculated by Hanley's formula for the probability of an event that hasn't yet occurred (provided n > 30). So, for example, if 100 people are exposed to a "risk" but none have had an event, then the maximum risk is 3/100 or 3%. If a million people are exposed with no events, the maximum risk will be 3 per million, and so on. Clearly, the larger the sample exposed without adverse events, the smaller the potential risk. This is why we need the denominator when comparing relative safety. A surgeon who has had no complications in 1,000 operations is safer than one who has had no complications in 100. If a billion people are exposed to a risk (e.g. mobile phones) and none has, say, a brain tumour attributable to their exposure, the risk is 3 per billion – vanishingly small.
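
    To make that arithmetic concrete, here is a short illustrative sketch in Python (our addition, not part of the original comment; the function name rule_of_three is ours, and the 3/n formula is the approximation cited above):

    ```python
    # Sketch of the "rule of three" described above: after n exposures with
    # zero adverse events, 3/n approximates the upper 95% confidence limit
    # on the true event probability (the approximation assumes n > 30).

    def rule_of_three(n: int) -> float:
        """Approximate maximum plausible risk after n event-free exposures."""
        if n <= 30:
            raise ValueError("the 3/n approximation assumes n > 30")
        return 3 / n

    for n in (100, 1_000_000, 1_000_000_000):
        print(f"{n:>13,} exposures, no events: maximum risk = {rule_of_three(n):.0e}")
    # 100 -> 3e-02 (3%); a million -> 3e-06; a billion -> 3e-09
    ```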

    Another useful measure of comparative risk is the micromort – a one in a million risk of an event. There are some nifty animations to explain this at http://understandinguncertainty.org/micromorts