By Ellen Stewart and Katherine Smith
A range of techniques and methods exists to assemble and present research findings in a way that will be ‘useful’ to policymakers. In public health, three of the most popular are Health Impact Assessments, systematic reviews, and economic decision-making tools (including cost-benefit analysis and scenario modelling). Despite their broadly shared goals, these methodologies have developed distinct and often parallel ‘epistemic cultures’ (Knorr-Cetina) through mailing lists, training courses, journals and conferences devoted to each one.
In a recent article, we conceptualised all three as examples of ‘evidence tools’, arguing that, despite their differences, they all assemble, assess, and present evidence with the aim of influencing decision-making processes. Paradoxically, we found that, despite this explicit aim, very little attention had been paid to how policymakers actually experience these tools. Katherine’s interviews with public health policymakers suggest that, in policy practice, evidence tools are perceived as useful when they:
- save time, especially where others have carried out the work
- can be adapted to different contexts
- convey credibility to external audiences
- offer clear, quantified answers and/or predictions of likely policy outcomes
Scenario modelling, which is widely perceived to have been a critical factor in the introduction of minimum unit pricing for alcohol in Scotland, was described as particularly appealing because it predicted a very specific, quantified benefit (for example, lives potentially saved). This was described as ‘gold dust’ in the political process. However, most research users were frank in admitting that they had little understanding of how the modelling produced this figure. Far from being a drawback, we argue that, at least in this example of public health policy, the ‘black magic’ of modelling actually appeared to enhance its appeal (in contrast to other researchers, who have found that policymakers value transparency in their evidence tools).
The practical, technical advice often offered to researchers to make their findings more useful, or ‘impactful’, tends to present failures of evidence-based policy as a supply-side issue: research findings are not relevant enough, too wordy, or buried in obscure academic journals. In contrast, examining how policy actors describe using tailor-made ‘evidence tools’ highlights the complicated role evidence plays within the inevitably political and democratic process of policymaking.
Cover Picture: JESHOOTS.COM on Unsplash.