Uncertainty assessment

We can never be completely certain about the future, either in science or in everyday life. Even when there is strong evidence that something will happen, there is almost always some uncertainty about the outcome. But by taking account of this uncertainty, we can often make better, more transparent decisions about things that may affect the outcome.

For example, meteorologists review satellite images to make predictions about the weather. They are rarely 100 per cent certain what will happen. So when they make a forecast, they usually indicate how likely it is. If they say there is a “strong chance” of rain, you will probably decide to take your umbrella when you go outside. If the chance of rain is “slight”, you are more likely to decide to leave your umbrella at home. If the forecaster uses percentages – a 90% or 10% chance of rain – for many of us the message becomes even clearer.

The same principles apply to food safety. For example, scientists may be asked to assess the safety of a new food or pesticide, or the risk posed by food-borne bacteria. When evidence or knowledge is incomplete, they try to explain how the uncertainty may affect their conclusions.

They carry out an “uncertainty assessment” to identify and describe the scientific uncertainties, and to explain the implications for decision-making. They may indicate whether there is more than one possible outcome and, if so, the relative likelihood of each.

As with the weather forecast, how certain you are (e.g. 10%, 50% or 90% certain) is important information for decision-making. Such information becomes crucial if the decisions have serious consequences for the health of people, animals and the environment.

Since 2013, EFSA’s Scientific Committee has been developing guidance on how to assess uncertainty in a structured and systematic way. The aim is to offer a toolbox of methodologies – both quantitative and qualitative – and related training for EFSA’s Scientific Panels and staff as well as for other organisations (e.g. researchers, national authorities) that carry out scientific work on EFSA’s behalf.

In early 2016, EFSA’s Scientific Panels started to trial the revised draft guidance on at least one of their scientific opinions or other scientific assessments. Feedback from the public consultation in 2015 helped EFSA’s Scientific Committee to revise and clarify key aspects of the previous draft.

2016 EFSA’s Scientific Panels start to trial the revised draft guidance on at least one of their scientific opinions or other scientific assessments.

2015 EFSA publicly consults on its draft Guidance on Uncertainty in Scientific Assessments. The document proposes a new standardised toolbox of methodologies for analysing, explaining and accounting for uncertainties in scientific assessments.

2015 Leading experts and practitioners of regulatory science from across Europe and the globe take part in a workshop organised by EFSA to gather feedback and insights on its ongoing efforts to harmonise and strengthen the cross-cutting methodologies that underpin its scientific assessments. There is a dedicated session on EFSA’s Draft Guidance on Uncertainty in Scientific Assessment.

2013 EFSA’s Scientific Committee requests a self-task mandate to develop Guidance on Uncertainty in Scientific Assessment as part of a major push to increase robustness, transparency and openness of scientific assessments.

2009 EFSA’s Scientific Committee publishes a scientific opinion on general principles to ensure the transparency of risk assessment, including the need to identify and characterise uncertainties.

2006 EFSA’s Scientific Committee publishes a scientific opinion related to uncertainties in dietary exposure assessment.

EFSA’s Scientific Committee develops harmonised risk assessment methodologies on scientific matters of a horizontal nature in the fields within EFSA's remit where EU-wide approaches are not already defined.

EFSA asked the Scientific Committee to develop guidance on how to characterise, document and explain uncertainties in risk assessment. This should cover uncertainties at the various steps of risk assessment, i.e. hazard identification and characterisation, exposure assessment and risk characterisation. As far as possible, the proposed framework should be harmonised and applicable to all relevant working areas of EFSA and its applicability demonstrated through case studies.

The Scientific Committee set up a working group to carry out the preparatory work for its draft Guidance on Uncertainty in Scientific Assessment.

1. Uncertainty? Don’t scientists know everything?

Science is the pursuit of knowledge. Scientists are constantly striving to fill in the gaps in human knowledge about how the world works. They often know a great deal about their specialist fields; they also know a lot about what is not known. Their confidence in their conclusions rests on the quality of the available scientific evidence, their experience and judgement in interpreting that evidence, and their understanding of the possible impact of what they do not know (i.e. the uncertainty).

2. Why is describing scientific uncertainty important?

Identifying and describing scientific uncertainties, and explaining their implications for assessment conclusions, are crucial parts of providing transparent scientific advice. When dealing with uncertainty, decision-makers need to know what the different outcomes might be and how likely each is. How scientists report uncertainties, and how public bodies such as EFSA communicate them to decision-makers, stakeholders and the wider public, can alter perceptions of the risks and benefits under assessment and influence related policy decisions. This can also directly or indirectly affect the choices made by individuals.

3. Who should take account of scientific uncertainties?

Risk assessors such as EFSA are responsible for describing uncertainty to decision-makers and other stakeholders when providing scientific advice. Decision-makers are responsible for weighing the impact of uncertainty on their decisions, i.e. deciding whether, and in what way, decision-making should take account of the uncertainty.

4. Can you give some examples of scientific uncertainties?

Scientists routinely strive to address a wide range of factors that can create uncertainty in their scientific assessments. EFSA’s Scientific Committee defines uncertainty as referring to “all types of limitations in the knowledge available to assessors at the time an assessment is conducted and within the time and resources available for the assessment”. Examples include:

  • Possible limitations in the quality and representativeness of data.
  • Comparing non-standardised data across countries or categories.
  • Choosing one predictive modelling technique over another.
  • Using default factors, such as the weight of an average adult (see the sketch below).
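
To make the last example concrete, here is a minimal sketch in Python of how a default factor enters a simple dietary exposure estimate. The function name, the figures and the 70 kg default body weight are illustrative assumptions for this sketch, not values taken from EFSA guidance.

```python
# Illustrative sketch: how a default factor (average adult body weight)
# enters a simplified dietary exposure estimate.
# All figures, including the 70 kg default, are assumptions for this example.

DEFAULT_ADULT_BODY_WEIGHT_KG = 70.0  # assumed default factor

def daily_exposure_mg_per_kg_bw(
    consumption_kg_per_day: float,    # amount of the food eaten per day
    concentration_mg_per_kg: float,   # level of the substance in the food
    body_weight_kg: float = DEFAULT_ADULT_BODY_WEIGHT_KG,
) -> float:
    """Exposure = consumption x concentration / body weight."""
    return consumption_kg_per_day * concentration_mg_per_kg / body_weight_kg

# 0.2 kg per day of a food containing 0.5 mg/kg of a substance:
print(f"{daily_exposure_mg_per_kg_bw(0.2, 0.5):.5f} mg/kg bw per day")
```

If the true body weights of consumers differ from the default, the estimate carries uncertainty that the assessment should identify and, where possible, quantify.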

5. Why is quantifying uncertainty preferable?

Qualifying uncertainty with terms such as “negligible”, “low” or “high” can give a sense of the degree of certainty of an assessment outcome, but such terms are interpreted differently by different people. Quantifying uncertainty, for example on a percentage scale, is more effective because it reduces the room for ambiguity. Quantitative methods are also generally more technically rigorous than qualitative ones. Quantifying uncertainty is therefore more robust and provides a clearer picture for decision-makers.

6. Can you give an example?

Probability is the natural quantitative measure for expressing and understanding the relative likelihood of outcomes. EFSA’s Scientific Committee provisionally endorsed a scale (developed by the Intergovernmental Panel on Climate Change) for quantifying the probability of uncertain outcomes.

Probability scale (IPCC, revised), giving each probability term and its subjective probability range:

  • Extremely likely: 99-100%
  • Very likely: 90-99%
  • Likely: 66-90%
  • As likely as not: 33-66%
  • Unlikely: 10-33%
  • Very unlikely: 1-10%
  • Extremely unlikely: 0-1%
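
To make the scale concrete, here is a minimal sketch in Python that maps a subjective probability to the corresponding verbal term from the scale above. The helper name and the choice to assign boundary values to the higher-certainty term are assumptions made for this illustration.

```python
# Map a subjective probability (0.0-1.0) to the verbal term of the
# revised IPCC probability scale shown above. Boundary values are
# assigned to the higher-certainty term, a choice made for illustration.

IPCC_SCALE = [
    (0.99, "Extremely likely"),
    (0.90, "Very likely"),
    (0.66, "Likely"),
    (0.33, "As likely as not"),
    (0.10, "Unlikely"),
    (0.01, "Very unlikely"),
    (0.00, "Extremely unlikely"),
]

def ipcc_term(probability: float) -> str:
    """Return the IPCC verbal term for a subjective probability."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    for lower_bound, term in IPCC_SCALE:
        if probability >= lower_bound:
            return term
    return "Extremely unlikely"  # unreachable for valid input; defensive

# An assessor who judges an adverse outcome 95% probable:
print(ipcc_term(0.95))  # -> Very likely
```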

If assessors consider a conclusion very likely (90-99% probable), decision-makers and the public will have a high degree of confidence in measures that are in line with that conclusion. If the outcome is “as likely as not” (33-66% probable), the decision-maker may be less persuaded, may give greater weight to other non-scientific factors (e.g. social or economic), and may be more inclined to take precautionary measures unless there is scope to reduce the uncertainty (e.g. through new research). If assessors consider a conclusion very unlikely (1-10% probable), decision-makers may give it little weight when choosing how to proceed.

7. How challenging is it to quantify scientific uncertainties?

Quantifying uncertainties raises several challenges, but it is not impossible. There are different quantitative methods for characterising uncertainty: EFSA’s revised draft guidance on uncertainty describes about 10 quantitative methods in detail. The choice of method may depend on factors such as the types of uncertainty identified and the expertise and time available for the assessment. Many data-related uncertainties, such as limited sample size and measurement error, can be quantified relatively easily using established statistical tools (see the sketch below). In other cases, expert judgement will be needed; although subjective, well-reasoned expert judgement can be a great strength of scientific assessments. EFSA published separate guidance on formal approaches to obtaining expert judgements in 2014, and is developing training for experts in making probability judgements. Whatever the method, it is important to describe clearly why and how each method was used.
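
As an illustration of the established statistical tools mentioned above, here is a minimal sketch in Python (standard library only) that uses a simple percentile bootstrap to express the sampling uncertainty around a mean as an interval. The measurement data are invented for the example, and the bootstrap is one standard technique among many, not the specific method prescribed by EFSA’s guidance.

```python
import random
from statistics import mean

# Invented example data: measured concentrations (mg/kg) from a
# deliberately small sample, so sampling uncertainty is non-trivial.
measurements = [0.42, 0.51, 0.38, 0.47, 0.55, 0.40, 0.49, 0.44]

def bootstrap_interval(data, n_resamples=10_000, level=0.95, seed=1):
    """Percentile-bootstrap interval for the mean of `data`."""
    rng = random.Random(seed)
    means = sorted(
        mean(rng.choices(data, k=len(data))) for _ in range(n_resamples)
    )
    lower = means[int((1 - level) / 2 * n_resamples)]
    upper = means[int((1 + level) / 2 * n_resamples) - 1]
    return lower, upper

low, high = bootstrap_interval(measurements)
print(f"mean = {mean(measurements):.3f} mg/kg, "
      f"95% bootstrap interval = [{low:.3f}, {high:.3f}]")
```

An interval like this can then feed into the kind of probability statement shown in the scale above, for example that the true mean is very likely to lie within the reported range.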

8. But surely it’s not possible to quantify all uncertainties?

No, it’s never possible to quantify ‘unknown unknowns’ – uncertainties that we are not yet aware of – and even some of the known unknowns may be too complex or difficult for experts to quantify. EFSA’s Scientific Panels are asked to quantify as many of the uncertainties affecting their assessments as possible, and to describe qualitatively those they can identify but cannot quantify.

9. Is EFSA proposing a one-size-fits-all approach?

No, EFSA’s proposed approach is flexible, offering a selection of tools that can be adapted to the circumstances of each assessment. In an urgent situation where advice may be needed within hours, the time devoted to assessing uncertainty would understandably be limited – even though uncertainty is often greatest in such situations and therefore crucial to address. More effort could be dedicated to assessing uncertainties during a longer-term comprehensive review of all available scientific knowledge. Likewise, different approaches would suit well-studied issues with few uncertainties and issues at the forefront of scientific knowledge, where evidence may be scarcer.

10. Who will use EFSA’s guidance on uncertainty?

The guidance is aimed primarily at the experts on EFSA’s Scientific Panels and their working groups, EFSA scientific staff and scientific organisations carrying out scientific work on EFSA’s behalf. It is also relevant for risk managers in the European Commission and EU Member States who take decisions on the basis of EFSA’s scientific advice. Once finalised, the guidance will apply to all areas of EFSA’s work and all types of scientific assessment, including risk assessment and all its constituent parts (hazard identification and characterisation, exposure assessment and risk characterisation).

Uncertainty assessment requires expert training both for assessors and for the decision-makers who use the assessments. EFSA is providing training to its scientists and working with EU risk managers as well as other European and international risk assessors to promote a harmonised understanding of uncertainty assessment.

11. Why did EFSA publicly consult on its draft guidance on uncertainty?

In June 2015, EFSA called on the international scientific community, European and national risk assessors, risk communicators and risk managers, as well as EFSA’s stakeholders, to provide feedback on its proposed systematic approach to uncertainty assessment. Input from other scientific advisory bodies, as well as from academic and applied experts in uncertainty analysis – particularly on the proposed methods contained in the toolbox – was needed to strengthen the draft before EFSA began to trial the approach across the full food safety panorama.