By Katherine Smith and Ellen Stewart
Academics working in the UK are being increasingly encouraged and incentivised to seek research impact beyond the academy, and the consequences of these changes have caused alarm for some. In a new article in the Journal of Social Policy, we outline a range of concerns that have been raised in publications to date across disciplines, and then present an interview-based case study of 52 academics working on health inequalities during the decade in which the UK’s current research impact architecture has evolved. We assess these concerns in the context of impact-related guidance from research funders and REF2014 panels. Our findings highlight a range of problems with the current approach to measuring, assessing and rewarding research impact.
In this blog, we briefly summarise each of the key concerns we identified, before making some tentative suggestions as to how existing knowledge about research and policy might be used to inform changes to the current research impact system to address each of these concerns. This is by no means a polished list – the intention is merely to start a conversation about how we might make better use of existing empirical and theoretical knowledge to inform attempts to monitor and reward research impact.
First, before we summarise the key concerns, it is worth noting that academics involved in health inequalities research (or at least the ones we interviewed) tend to be keen to influence policy. Reflecting this, most of the academics we spoke to were broadly supportive of the idea that research impact should be encouraged and rewarded (things may feel very different for academics working on less overtly policy-orientated issues). Despite this, most of the academics we interviewed had concerns about the current research impact reward system and here we summarise six of the most common.
Problem 1: The research impact agenda might encourage and reward what Ben Baumberg Geiger describes as ‘bad’ impact. This fear included the possibility that REF structures encourage researchers to seek impact for (potentially misleading) single studies, rather than seeking to promote wider bodies of research. The ESRC’s guidance states that “you can’t have impact without excellence”, but many interviewees felt that this was not only possible but, in many ways, actively incentivised by the current system.
How might existing science and policy knowledge help here? Science studies authors such as Knorr Cetina clearly demonstrate that academics work hard to develop grant applications that have the greatest chance of success and that, as a consequence, grant applications reflect academics’ perceptions of what funders are looking for. If major sources of academic funding (from the higher education councils, through REF, and UK research councils) are all stressing the importance of research impact, then we can expect that many academics will commit themselves to undertaking this kind of work, regardless of how appropriate they may feel it is for specific studies. It is therefore crucial both to think more carefully about what kinds of research impact we want to encourage and to make space for researchers to apply for funding for projects that are not impact orientated. Both REF and the UK research councils currently tend to reward impact for single studies or small groups of researchers within single institutions. If we want to encourage researchers to work to improve the use of collective bodies of research, stretching beyond their institution, then clear incentive and reward systems are needed to encourage this kind of outward-facing, synthesising work.
Problem 2: Some of our interviewees recounted instances where their work had been reinterpreted, and sometimes completely misinterpreted, by research users in troubling (sometimes ethically problematic) ways. Currently, neither REF nor UK research councils do much to acknowledge the fact that impact may not always be desirable.
How might existing science and policy knowledge help here? Researchers across disciplines, but particularly those involving humans and animals in their research, have spent a lot of time developing guidelines on conducting ethical research. But undertaking research ethically does not guarantee that the results of that research will be used ethically. Now that research impact is being so explicitly incentivised in the UK, we need to build on the foundations of research ethics guidance to develop an ethical dimension to reward systems for research impact (for both potential research funders and REF2020 case studies).
Problem 3: Even researchers supportive of the aims of the current research impact agenda expressed concerns about whether its current architecture captures impact in a meaningful way. Many described the disconnect between their knowledge of the complex ways in which evidence influences policy and the experience of ‘playing the game’ of depicting a much more straightforward impact: one interviewee wryly stated that one achievement of the impact agenda has been to make “people lie more convincingly on grant applications”. It is often the most far-reaching kinds of research impacts that are most difficult to demonstrably track (whereas it is the less ambitious forms of impact that more easily enable a clear audit trail). Or it may be, as Christina Boswell has demonstrated for immigration policies, that research is used by policymakers more for ‘symbolic’ than ‘instrumental’ reasons (i.e. rather than informing policymakers’ assessments of the optimal policy route, research is used to lend authority to decisions or to signal a ‘capacity to make sound decisions’).
How might existing science and policy knowledge help here? The point here is that both research on policymaking and science studies suggest these kinds of behaviours are rational and unlikely to change. Hence, it may be necessary to broaden the criteria for demonstrating research impact, especially where research has contributed to substantial social or policy changes over longer periods of time. Indeed, it’s possible to conceive of a system that would enable a broader, more encompassing set of criteria for demonstrating impact deemed particularly significant/large-scale, whereas narrower/lower-level impact would require much more specific supporting evidence (as we suggest in Figure 1 in our article).
Problem 4: Several researchers with ongoing policy connections suggested that policymakers who are already struggling with information overload may not welcome increasing numbers of academics making ever-more efforts to send research outputs their way or to involve them in research design.
How might existing science and policy knowledge help here? Looking at current guidance on research impact, the system currently seems to be predicated on the idea that improving the use of research in policy means increasing the flow of research into policy. Empirically informed theories of the policymaking process, from Lindblom’s concept of ‘muddling through’ to Kingdon’s ‘policy streams’ model, paint a picture in which decision-makers face a daily barrage of information, with advocates and lobbyists working to pull their focus in different directions. This suggests that academic incentive structures ought to focus more on improving the use of research in policy, which may actually mean reducing the flow of research outputs into policy while improving their quality or accessibility. As noted above, one way of doing this might involve doing more to incentivise academics to synthesise existing bodies of research for policy, advocacy and public audiences. It also suggests that a lot more could be done to incorporate the realities of policymaking processes into research impact guidance and tools.
Problem 5: Several interviewees, particularly those who were at an earlier career stage, suggested that impact reward systems may be unintentionally reifying traditional academic elites. It is, after all, a lot easier to achieve research impact if you are already a senior academic with a strong reputation in policy circles – it’s even easier to do this if you went to school with, or are otherwise personally connected to (e.g. as friends/neighbours/family) senior policy folk (see for example Ball; Ball & Exley). Further issues arise when we consider that the timing of key opportunities for ‘impact’ can be particularly difficult for academics with caring responsibilities (evening and weekend networking opportunities, for example). There are also basic workload issues that are likely to be more constraining for those with caring roles (several interviewees reflected that they did not feel the demands of achieving impact had been matched by commensurate reductions in other workloads). Since we know that women take on a disproportionate amount of caring work, then this is a gender issue.
How might existing science and policy knowledge help here? Research on policymaking processes in the UK suggests that older, white men continue to dominate these structures. Surveys of higher education suggest the situation is similarly imbalanced for chair-level academic posts. If our interviewees are correct, then the research impact agenda is reinforcing this – as Les Back argues, the impact agenda encourages ‘an arrogant, self-crediting, boastful and narrow’ form of academic work that positions ‘big research stars’ as ‘impact super heroes’. This suggests, at the very least, that REF impact case studies ought to be subject to some form of equality assessment (as research outputs are). It also underlines the importance of taking impact-related work into account in workload allocation models. It may also be worth considering additional support for achieving, and/or a lower threshold for demonstrating, research impact for earlier career academics and those with substantial caring roles.
Problem 6: Around a third of our interviewees discussed the challenges of maintaining critical, intellectual independence while trying to align one’s research ever more closely with policymakers’ concerns. The ‘fudge’ that several of our interviewees described resorting to involved phrasing policy recommendations in strategically vague ways, softening perceived criticism and (as one put it) “bend[ing] with the wind in order to get research cited”.
How might existing science and policy knowledge help here? Research on both academic and policy work has highlighted the value of critical and blue-skies academic work, and few seem to be actively suggesting that it is desirable to restrict this kind of work. Yet a research funding system that rewards demonstrable research impact clearly squeezes the opportunities for this kind of work when it is competing against proposals for empirical research offering research impact potential. Potential ways of redressing this might include protecting or increasing research calls and funding that are specifically for critical and blue-skies research, removing the time limits for impact case studies in REF2020, and rewarding efforts at knowledge exchange and public engagement with academic work that could not (for economic, political or practical reasons) be expected to have an impact in terms of achieving demonstrable policy or social change.
We expect that most of the concerns raised here will be familiar to colleagues who have come across contributions on this issue from Greenhalgh & Fahy, Pain et al., Back, or simply from discussions with colleagues in the offices, staff rooms and corridors of the UK’s universities. Our goal in writing this is to encourage colleagues, in the hiatus between REFs, to begin making clearer suggestions for more constructive approaches to encouraging and rewarding research impact. The current architecture is not a ‘done deal’: researchers who care about the relationships between research and policy, perhaps especially those whose research careers are based on studying the construction of research and policy, such as those connected to SKAPE, must try to improve how we recognise and reward research in the ‘real world’.
SMITH, KATHERINE E., and ELLEN STEWART. “We Need to Talk about Impact: Why Social Policy Academics Need to Engage with the UK’s Research Impact Agenda.” Journal of Social Policy 46, no. 1 (2017): 109–27. doi:10.1017/S0047279416000283.