What does good ‘use’ of evidence look like?

This is based on my contribution to a discussion held at the 2019 William T Grant URE meeting

 

Most of us who engage with the research world – as researchers, funders, practitioners, and/or audiences of different kinds – do so because we share a wish to change the world for the better. For many of us, this translates into ‘doing more, better research’. Yet to truly ensure that we improve social outcomes, we have to be sure that we’re doing the right research, and that it is being used in the right way. This means we need to understand what we mean by ‘use’, as our model of ‘use’ informs how we do our research, how we think about impact, and how we try to achieve it.

Those of us working in the meta-field of evidence production and use often draw on Carol Weiss’s typology of use. Her seven types (knowledge-driven (instrumental), problem-solving, political, tactical, interactive, enlightenment, and research as intellectual enterprise) are often used to inform how we develop and implement strategies to increase, improve, and measure use. Yet too often, we use this typology to diagnose which category of use we are observing, and lose sight of the fact that all use implies a user and a context. Weiss’s typology has attained an analytical status she probably never intended.

So, what would it mean to take the user and the context seriously? I’d like to offer three metaphors which might help us think about what ‘use’ means when we consider these aspects. These metaphors describe, for me, a (somewhat painful) journey towards grappling with insights from across sectors and disciplines.

Assembling a jigsaw puzzle

When we first enter the evidence use space, we often imagine that decision-makers (the ‘users’) are in the business of assembling a jigsaw puzzle. We recognise that many kinds of evidence go into a decision, and that when all the pieces are assembled, a picture is revealed which tells decision-makers what to do next. In this scenario, the role of researchers is to produce polished jigsaw pieces, even to shape them to a policymaker’s perspective; and the role of policymakers is to create a vision and populate it with useful evidence. This model of use (assembling, informing, selecting) is compatible with all of Carol’s types, and has directly informed the strategies researchers use to measure (impact!) and increase use. If the role of research is to provide content for, or support, policy and practice decisions, then our strategies are to amplify both the quality and the volume of our jigsaw pieces, to tailor our messages and disseminate them as widely as possible – and our impact can be quantified.

These strategies are rarely effective.

Cooking a stew


Once we have been involved in this type of activity for a while, we often recognise that, in fact, decision-making is not about assembling a picture which reveals an obvious answer. Rather, there are many people standing round the table, and many institutional rules and processes which influence decisions. We start to see the decision-making context more along the lines of cooking a stew together. We’re not always certain what the end product might be, but we can see that there is an end point we’re moving towards. Researchers and decision-makers might work together to choose a recipe, and to select and prepare the freshest, tastiest ingredients. They might even talk about what to plant in next year’s garden. Here, one can see that ‘use’ starts to look like ‘finding an appropriate balance’. No one wants an over-salted stew, after all. And the strategies we use when thinking of use in this way tend to focus on collaborative research practices – for example, coproduction and stakeholder engagement.

Yet, attractive though this cosy scene is, we have to admit that this is not, ultimately, how decision-making works. The ‘use’ is the ‘what next’, after the stew has been cooked. Decision-making is rarely about reaching a static endpoint. Rather than drawing on a bounded community of gardeners and a fixed recipe, the process is contingent on many factors and events; it is relational – dependent on the relationships between those involved and implicated – and complex, with all parts of the ecosystem interacting with each other.

Making sense of the beach pebbles

My current metaphor for use, then, is that of a group of people standing on a pebbly beach. They are looking down at the pebbles, which are regularly rearranged by unexpected and uncontrollable waves, and trying to make sense of the patterns they see, in conversation with those around them. In this metaphor, bits of research may appear as attractive and shiny pebbles, amongst many other kinds of knowledge which form temporary but meaningful patterns, seen through the lenses worn by those on the beach.

But as you raise your head, you can see a vast shoreline covered in pebbles. The people a mile down the beach have a very different perspective.


This metaphor allows us to start to see a very different set of questions, and potential strategies for improving use. We have to ask ourselves – who gets to stand on the beach? Who is allowed, or enabled, to participate in the process of providing pebbles, let alone interpreting them? How are people’s views shaped by those around them? Would more diversity in decision-making and research communities affect how people make sense of what they’re seeing, or even the questions they are asking?

We can also see that this metaphor helps us to understand the potential roles of research. It might help us conceptualise and focus on new patterns, but it might also shape narratives and perspectives, and might even create communities or provide lenses to look through. And the strategies we might employ would not be about trying to rake pebbles into attractive patterns, but rather about trying to help each other hold meaningful conversations about what we are, and are not, seeing.

Take-home messages

If we accept this metaphor, we acknowledge the diversity of roles and skills in this ecosystem. Some are good at polishing beautiful pebbles. Others at spotting and encouraging diversity of pebbles. Still others at helping build connections and facilitate conversation. We need all these, and we need to respect and support the skills and expertise of all.

Ultimately, we are acknowledging the need to engage with the political economy of knowledge, its production and use. We are saying that we need to think critically about how and why evidence is produced (and not produced), by whom, how, and for what purposes. We need to engage with critical sociological theories, and apply them to our understanding of what consequential ‘use’ is, before we can meaningfully try and change the world.


Transforming Use of Research Evidence

Reflections from the 2016 URE event in DC:

 

Trying to understand how evidence may have influenced a policy decision is like trying to pick out the noise of a recorder from a whole symphony orchestra. This memorable metaphor was introduced by Maureen Dobbins at the start of the Use of Research Evidence meeting supported by the William T. Grant Foundation in Washington D.C. last week, and wove its way throughout the discussions about evidence-informed decision-making (EIDM) over the next three days.

A broad and diverse set of conversations reflected the make-up of the participants and, in part, the field itself. Practitioners and decision-makers (from education, health, criminal justice and social policy) brought experiences of trying to find and use evidence. Knowledge translation and mobilisation experts discussed initiatives and strategies to increase uptake and improve use of evidence. And academics (from political science, sociology, science and technology studies, psychology, communications, and others) provided theoretical framings and methodological contributions to this debate. In other words, people trying to “do” EIDM, people advocating for EIDM, and people studying EIDM came together to learn from one another. Without attempting to untangle each thread from the next, what overall picture was woven by these conversations?

 

Firstly, we have a lot we can learn from one another. Many of us come from very different disciplinary and epistemological backgrounds, pulling behind us huge comets of discipline-specific theory and tools – we just happen to have mutual interests in how evidence and policy/practice relate to one another. Mapping this field shows us that improvement science offers practical models of change management for those interested in developing strategies to increase evidence use. Political science and policy studies provide us with models and theory to understand the processes of decision-making and deliberative democracy. Sociology teaches us how forms of knowledge are constructed and valued in different social arenas. The journal Evidence and Policy (amongst others) is a key forum to bring these strands together, and provide the field with a space to negotiate potentially unfamiliar theoretical or methodological terrain.

 

Secondly, and without descending into semantics, discussions focused on defining and understanding “use” of evidence. Cynthia Coburn outlined methodological developments in how to measure conceptual use of evidence, parsing out “good use of evidence” from “good decision-making”. This distinction allowed the conversation to focus on what “use” looks like – still using Carol Weiss’s seven “meanings of research utilisation”, but also considering how these models may be further broken down and re-imagined. For example, symbolic use of evidence could be understood as legitimising or substantiating, as outlined in Christina Boswell’s work. We still have more work to do in theorising the interactions around evidence use.

 

However, exciting methods are being widely applied to track, measure and identify evidence use, including social network analysis, contribution mapping, coding of archived records and briefs, and many more. A recent systematic review brings together insights on the Science of Using Science, where ideas from across social science identify new places for us to look for inspiration, including marketing, communications and computer studies.

 

Finally, we talked about what the big questions were for the field – which meant being honest about the assumptions we were bringing along. Do we really have a theory of evidence use – and is one possible? We think that interactions promote evidence-use; but do they? When? Of what kinds? And what changes as a result?  We believe that trusting relationships are important, but of what kinds (stakeholder engagement, advice, collaboration, co-production…)?

 

Understanding the roles knowledge plays in decision-making processes is less like hearing one instrument in an orchestral sound, and more like understanding how one change to one note for each instrument may influence the overall sound. A complex and difficult challenge – but an exciting one.

 

Disclaimer: This blog is a personal reflection on the meeting, rather than a complete report.


Policy networks – an idea whose time has come?

For Policy and Politics

April 2015

With a general election just around the corner, everyone is on high alert for scandals. No one (well, ok – everyone except the politicians) wants to see another Bullingdon Club revelation, or a phone-hacking story. While there are myriad ways for a politician to damage their credibility, it seems that old boys’ networks are pretty widely understood to be Bad News. Getting a job or any other benefit through a friend, a school-mate, a wife, or a man you met down the pub is – however usual – frowned on.

But human beings, like all primates, are social beings. This does not stop being the case just because people take on decision-making roles. Interpersonal connections are known to influence where policymakers find evidence, how they create agendas, and how they develop policies – in fact, as our systematic review showed, every part of the policy process.

In health policy, much of the work on this topic focuses on how to get more evidence into policy. However, as we recently argued, understanding the ‘street-level’, day-to-day business of policy actors – of all kinds – is far more likely to yield fruit than simply exhorting policymakers to use more evidence. The most interesting part of this day-to-day business is, for me, the networks between policy actors. Network analysis allows us to map and analyse these connections, and this is the basis of a new project at UCL STEaPP.

The concept of the ‘policy network’ has been around for some time – understood variously as a mode of governance, as a metaphor for the reality of governance, or as configurations of individuals and organisations engaged in a policy sector. Networks, as a concept or an empirical phenomenon, contrast with the hierarchical and market-based perspectives which have traditionally been used to analyse public policy. However, although many new studies have collected empirical data, they often draw on different types of nodes (individuals, organisations, countries) and ties (friends with, funds, collaborates with, cites… and so on). Drawing generalised lessons about policy networks is therefore challenging.
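To make this concrete, here is a minimal sketch (in Python, using the networkx library) of one way heterogeneous policy network data might be stored: every node and tie is tagged with its type, so that datasets built on different definitions can at least be sliced into comparable pieces. All the names, sectors and tie types below are invented for illustration – this is a sketch of the idea, not real data or the project’s actual approach.

```python
# Sketch: storing mixed policy network data with typed nodes and ties,
# so that networks built on different tie definitions can be compared
# like for like. All entities and ties here are invented.
import networkx as nx

G = nx.MultiDiGraph()  # a multigraph: several kinds of tie can link the same pair

# Nodes may be individuals, organisations, or countries - record which.
G.add_node("Dept of Health", kind="organisation")
G.add_node("J. Smith", kind="individual")
G.add_node("Think Tank A", kind="organisation")

# Each tie records its type: 'funds', 'collaborates_with', 'cites', etc.
G.add_edge("Dept of Health", "Think Tank A", tie="funds")
G.add_edge("J. Smith", "Think Tank A", tie="collaborates_with")
G.add_edge("Think Tank A", "Dept of Health", tie="cites")

def slice_by_tie(graph, tie_type):
    """Extract the sub-network containing only one kind of tie."""
    H = nx.DiGraph()
    H.add_nodes_from(graph.nodes(data=True))
    H.add_edges_from((u, v) for u, v, d in graph.edges(data=True)
                     if d["tie"] == tie_type)
    return H

funding_net = slice_by_tie(G, "funds")
print(list(funding_net.edges()))  # [('Dept of Health', 'Think Tank A')]
```

Tagging every tie with its type is what makes comparison possible: a ‘funds’ network and a ‘cites’ network over the same actors can have completely different structures, and collapsing them together would hide exactly the variation a comparative project needs to study.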

This summer, I am launching a new project aiming to collate policy network data, and conduct comparative analyses. I’m chairing three conference panels on policy networks – at the ICPP in Milan, Sunbelt 2015 in Brighton, and a section on political networks at the 2015 ECPR conference in Montreal. With collaborators and contributors, I’m aiming to develop our understanding of how networks affect policy processes. Because, as Jenny Lewis said:

“…actions and beliefs cannot tell the whole story. Policy certainly arises from interactions between actors in networks, but the structure of these networks matters a great deal since they shape who interacts with whom.” (Lewis, 2011, p. 1127)

 


Getting complex about reality, or getting real about complexity?

Blog written for the Health Foundation

How can we solve the persistent problem of increasing inequality that is facing public health?

The Health Foundation has been working with Dr Harry Rutter to develop a new model of evidence that will inform public health research, policy and practice. I was invited to attend a workshop to discuss the ways in which the systems we use to generate evidence can be improved for public health.

Much of the conversation explored the reality of trying to generate research evidence for an incredibly complex set of interventions and policies, which interact in multiple ways for different populations, at different levels.

Tackling childhood obesity, for example, requires intervening in homes, schools, communities and the workplace. It requires infrastructure and built environment improvements, taxation, and regulation of planning laws, as well as health promotion and literacy.

Coordinating these is an immensely difficult task, yet it’s a task we must tackle if we want to see improvement. We know that research funds and resources are increasingly targeted towards biomedical institutions and programmes, but there’s limited evidence that interventions which operate at the individual or group level actually help address complex social problems.

In fact, it looks increasingly as though current approaches often inadvertently cause harm. Public health policymakers are in the unenviable position of needing to act, but having a limited range of tools available to them. They know that policy instruments are likely to be interacting with one another but are unable to evaluate how, or for whom.

Complex systems are non-linear, emergent and adaptive. In other words, they’re extremely difficult to research using standard social research methods. Taking the example of childhood obesity above, we may wish to use qualitative, quantitative, experimental, longitudinal, and modelling and simulation methods, as well as participatory and deliberative methods. One researcher is unlikely to have the expertise to manage all these tasks. It is also challenging for one policymaker, or even a team of policymakers, to have oversight of all the research likely to be relevant to one policy outcome. Yet it is important that policies are based on the best available evidence.

How can this gap be bridged? One approach is to move away from the linear model of evidence informing policy and towards an idea of an evidence ecosystem. In this approach, the process of evidence provision becomes more conversational.

Learning how to identify and engage with stakeholders across government and research organisations is just the first step in mapping the complex system within which we are hoping to create change. Talking regularly with policymakers about upcoming challenges is likely to help researchers produce more policy-relevant evidence. Equally, understanding the upcoming policy agenda will help useful evidence syntheses to be produced, which can inform discussions about how different policy instruments may interact.

But co-producing evidence for policy, and co-producing policy options themselves, is intensely difficult and expensive. Tools and approaches to help us navigate this new territory are in their infancy, and the public health research community is only just starting to embrace the radically different way of doing research which co-production entails.

We need to learn how to take the problem of complexity seriously, in both developing new research methods and in learning how to communicate complex research to a policy audience. This could be achieved through embedding complexity training in public health education and doctorates, and by investing in coordinated research programmes and schools.

A revitalised, refocused public health research and practitioner community producing excellent, relevant research can only do so much. For complexity to be taken seriously, we also need greater coordination between government departments, and with local government. We need political leaders who are unafraid to experiment and change course, to discuss uncertainty and failure. Ultimately, we need politicians, partners, and the public on board.


If scientists want to influence policymaking, they need to understand it

Originally posted on the Guardian’s Science Policy blog

Turning scientific evidence into policy exposes a gulf between how scientists think and how policymakers work. Here’s what scientists need to know


Middle managers hold key public health role

From Policy@Manchester 

March 17, 2014

Ignore middle managers at your peril. They may be central to the development and implementation of policy, explains Dr Kathryn Oliver

Middle managers are more important than people often think – and that is especially true when it comes to influencing and implementing public health policy.

In fact, middle managers without a professional training in public health may be the most influential people in public health policy – holding a more significant role than directors of public health, academics, business leaders, or politicians.

That middle managers in the health service and local government hold the key public health policy roles will come as a surprise to many experts in the field.  To date, research into the best ways to influence public health policy has tended to focus on the skills and attitudes of researchers and policy makers.

Research has too often overlooked the role of those who are heavily involved in policy development and implementation – the middle managers. Yet it is middle managers who have successfully brought issues such as a minimum unit price for alcohol to the forefront of the policy agenda.

The impetus for minimum unit alcohol pricing can be seen as having been created by local NHS managers and council officers in Greater Manchester who provoked Westminster into a response by declaring that Manchester was unilaterally considering the policy.

And it was the Greater Manchester region’s managers – not health bosses – who were responsible for transforming stroke services from having the worst survival rates in England to amongst the best.

Directors of public health and academics do, nonetheless, hold important roles in making and developing public health policy, but it is important to recognise what these roles are.  Our research shows that public health professionals and academics are only indirectly connected to policy – via managers.

We conducted a study of 152 policy leaders across local government and the NHS in Greater Manchester, asking academics, public health professionals and policymakers to nominate the people they thought were most influential on public health policy in the region.

Only a small minority of the ten or so local directors of public health were named as having any influence at all.  We found that the majority of academics were only connected to other academics, while most public health directors were only connected to their own teams and to fellow professionals.  It was the middle managers who had the strongest connections across sectors.
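To illustrate the kind of analysis this involves – a hedged sketch only, in Python with the networkx library, not the study’s actual code or data – one can load ‘who is influential?’ nominations into a directed graph, tag each person with a sector, and ask who holds ties across sector boundaries. All the names, sectors and nominations below are invented.

```python
# Sketch: finding cross-sector brokers in an (invented) nomination network.
# An edge A -> B means A named B as influential on public health policy.
import networkx as nx

nominations = [
    ("academic_1", "academic_2"),   # academics mostly name other academics
    ("academic_2", "academic_1"),
    ("dph_1", "dph_2"),             # public health directors name their peers
    ("academic_1", "manager_1"),    # ...while the manager is named, and names,
    ("dph_1", "manager_1"),         # across sector boundaries
    ("manager_1", "councillor_1"),
    ("manager_1", "dph_1"),
]
sector = {
    "academic_1": "university", "academic_2": "university",
    "dph_1": "public_health", "dph_2": "public_health",
    "manager_1": "management", "councillor_1": "local_govt",
}
G = nx.DiGraph(nominations)

def cross_sector_ties(node):
    """Count ties (in either direction) to people in other sectors."""
    neighbours = set(G.predecessors(node)) | set(G.successors(node))
    return sum(sector[n] != sector[node] for n in neighbours)

# Betweenness centrality: how often a node sits on shortest paths between
# others, i.e. how strongly it brokers between otherwise separate groups.
brokerage = nx.betweenness_centrality(G)

for n in G.nodes:
    print(n, cross_sector_ties(n), round(brokerage[n], 3))
```

On toy data like this, the manager comes out highest on both measures, while academics and public health directors connect mainly within their own groups – the same pattern the study found at scale.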

It may be difficult to accept the conclusion that middle managers hold the key policy position, as it appears to challenge the role of professional experts, including directors of public health and academics.  But the conclusion actually suggests we should pay serious attention to how the middle managers operate.

We think that one reason this attention has not been given previously may be that the skills which are so essential to good management seem prosaic.  These include persuading people, running meetings, bringing people together and being able to seem friendly and credible to multiple audiences.

These skills and roles are not often taught, they are not part of professional training or regulation and as a result they are usually not measured.  They are not ‘headline grabbing’ – yet nevertheless they seem to be essential in getting things done.

We developed a framework to categorise the activities of these managers. This framework shows how they were active through the whole policy process – from conceiving ideas, to developing detailed proposals, finding evidence and champions, and masterminding debates. The influential managers were able to wield this influence because they were seen as credible, friendly and reliable by people across the NHS and local government.

This places the managers in a very powerful position – able to control how information is passed between different groups. They become the ‘go-to’ people for research and policy alike. They act as gatekeepers for important meetings and for groups of executives. They provide policy content and context, and recruit selected experts and executives to act as ‘champions’ for policies.

The reality is that if you have a good idea, you must first convince the middle managers. NHS and local government executives and council leaders trust the managers to ensure that only feasible policy options are brought before them.

These findings are very important in terms of planning and implementing NHS reforms  – and provide an implicit challenge to some of the reforms currently taking place.  Health Secretary Jeremy Hunt and his predecessor Andrew Lansley criticised the ‘pen-pushing culture’ in the NHS, promising to reduce bureaucracy. This research suggests that approach may need to be re-evaluated.

Our experience was that managers are always looking for evidence and for engagement from universities. The usual barriers were always there – shortage of time, apparent irrelevance and difficulty in accessing journal articles. However, from our observations, we suggest that universities and academics could do more to engage with the managers. I went to a lot of public meetings in which local health policy was developed and discussed. Often, I was the only academic there.

A good first step for any research project aiming to have local impact would be to identify the prominent managers – getting to know them, and letting them know about your time frames and likely outputs. Collaborative institutions such as the CLAHRCs may help, but only where there is a willingness to enter into the debate with an intention to learn on all sides.

Academics that want to influence the managers should be realistic, be truthful – and go to the meetings.


Government buries its own research – and that’s bad for democracy

From The Conversation

The UK government spends billions on research aiming to guide and inform its policies. Yet it turns out the government doesn’t know exactly what it has commissioned or published. Worse, there is evidence that government-funded research is sometimes deliberately buried or delayed. Transparent and open government, this is not.

Government funds research to help it understand what evidence there is about particular policy problems (such as the effect of immigration on employment) or solutions (such as alcohol pricing). But a new report from campaign group Sense about Science found that only four of the 24 government departments keep central records of what research they commission and publish.

The investigation, led by former Court of Appeal judge Sir Stephen Sedley, also identified several cases where research may have been delayed to avoid political embarrassment, or to prevent informed public debate. For example, research on depression and the recession was delayed due to a Number 10 speech on the state of the economy. A report into the horsemeat scandal was delayed as it suggested local authority cuts had directly reduced food safety – which was potentially very embarrassing for the government.

Government research can be highly valuable but is costly and time consuming. So you’d hope taxpayer money was being spent on the most important research priorities. But this report – to which I contributed – suggests that there is no overall strategy. Instead, government may be commissioning research reactively, for example in response to a minister’s latest ideas or a media furore.

If this is the case, it would indeed lead to a situation as described by Sense About Science: a system full of inefficiencies with no overall strategy, quality control, or monitoring. Researchers are placed under embargo, and not allowed to publish without political endorsement, or until the policy itself is announced.

However, there are alternative explanations. Maybe research is not always commissioned to inform government about a policy problem, or public policy solutions. Research may be commissioned to reassure MPs that the government is looking into an issue – in other words, as a way of putting problems on the back burner.

A report may be commissioned in good faith, but changes in the political environment suddenly make it irrelevant or unpopular. Or it may be that the government genuinely wants information, but the research findings prove so politically explosive that the government decides it is too risky to bring new information into the debate, as it may challenge the existing policy.


For example, when the government wanted to introduce a cap on non-EU immigrants, Theresa May, the home secretary, claimed in December 2012 that 23 British workers would become unemployed for every 100 migrants that entered the country. This statistic was immediately criticised for being cherry-picked from a report that found no overall significant effects on employment as a result of immigration.

Rather than admit to the complexity of the data, the Home Office delayed publication of its own analyses – which agreed there were no significant effects – for a further 15 months. Misuse of statistics by ministers is hardly new, but ignoring their own research in order to avoid difficult conversations is highly undemocratic.

Joined-up government?

Regardless of the reasons for the current situation, it’s clear we need to improve the way government organises and commissions its research. Without access to high-quality and reliable information to help them plan and implement effective policies, government officials can’t be confident that they are making the best decisions. Poor record keeping leads to a lack of transparency and poor access to this research, as well as making it more likely government will commission the same research twice.

Citizens also have a right to know what is being done with their taxes. If there is no way of telling what research is being done to inform policy, citizens are less able to judge how well informed policies are. Obscuring this allows politicians to paint their own pictures, even distorting the facts to suit their own agendas. It is also important for the public to be able to judge whether government research is well conducted, and if it is addressing issues of importance.

What would a good system look like?

In the report, Sedley called for a publicly searchable central database of all funded research. This would be a good start, but it would not really address the underlying problems of poor commissioning and research-use practices.

Imagine instead a system where the public, researchers and any other involved groups had a serious and transparent say in what research was needed, and what should be commissioned. This would involve reviewing the strengths and gaps in the evidence base to reduce duplication and ensure only needed research was commissioned.

New research could be registered in advance, which would allow us to monitor progress. Government could hold consultations about the findings of the research once published, opening the door for informed debate about the quality and relevance of the work.

Few would advocate for a technocratic system in which policies are decided purely on the basis of available research evidence and without considering the political or ethical implications. For one thing, the evidence available is often weak and fragmented, and can be just as affected by subjective values as any other way of determining policy. Politicians must also take into account the views of their constituents, and the political machinery itself.

There are often very good reasons for policymakers to make the decisions they do – even occasionally when they go against the evidence. But rather than burying research that is unwanted or appears to contradict policy, government should come out of the shadows and explain their reasoning. As Sedley’s report says, good political leaders should always be able to explain their “grounds of doubt or disagreement”. It takes bravery to have those honest conversations, but it’s a bravery citizens are entitled to – and pay for.


What’s the impact of the research impact agenda?

At the 2014 Circling the Square conference, the topic of the “Pathways to Impact” agenda was touched on several times. Many academics consider the ‘impact agenda’ a form of performance management – and it is certainly true that academics are exposed to more performance management than ever before. Until relatively recently, academic contracts didn’t even have annual leave allowances, let alone quarterly career objectives to meet. Many senior staff I know look back on this era with fondness, interpreting it as a time of mutual trust and respect between academics and their employers.

How times have changed. Now, in today’s “publish or perish” culture, academics are expected to write more, get more grants, engage more – and are judged against these outputs. This has made the scientific community very uneasy, as summed up in this quote (courtesy of David Colquhoun).

Clearly, there are legitimate fears about ‘gaming’ (publishing papers likely to have a high impact, rather than addressing substantive gaps), excessive publication (slicing results up to increase the number of papers), authorship manipulation and other corruptions of the scientific process. It seems likely, however, that the impact agenda is here to stay, and indeed will become more intensive, rather than less.

The logical endpoint of this process is a discipline-specific ‘best practice’ metric of productivity and impact: for instance, one could calculate the value of each paper from each grant in a pounds-per-paper format, or make a more impact-oriented assessment of how societal, physical or environmental outcomes have changed. Many academics would regard this as an Orwellian intrusion into their academic freedom and a gross statement of mistrust by an officious state or employer. And indeed, the evaluation of research impact has led to the creation of a new, and probably costly, bureaucracy.

But was this the intention of HEFCE when they made ‘impact’ a criterion within the last round of the REF? Aren’t there, in fact, good arguments for encouraging publicly-funded academics (often trained and supported by public money) to use their training and activities for the good of society? Engaging publics in scientific processes, and promoting better understanding of science, are surely part of any scientist-citizen’s role. The moral and ethical arguments around public engagement and impact, in addition to financial accountability, all support the use of an impact metric.

Undoubtedly both sides have valid points. The question is whether the processes around the REF can be used to support scientists, and improve the quality of science and scientific engagement – not just increase the volume. This will require thought and care, as issues like “scientist as advocate vs honest broker” arise.

Both these positions seem to rest on a rather simplistic idea of the evidence-into-policy process, and of what ‘impact’ may be. As many have pointed out, expecting every piece of research to yield economic impact (let alone papers) is unrealistic – and not reflective of how political or societal change happens. Scientific engagement and impact can be about ongoing work, and ongoing debates – not always about patents and inventions. I can understand why scientists would reject any such simplistic valuing of their activities. But what if “impact” were agreed to include “contributing to academic debate”, or “developing new research questions”?

At the conference I also – playing devil’s advocate – asked whether engagement of this type – using social media, writing blogs, media engagement – would be likely to help early career researchers, who are put under increasing pressure to publish papers. Sadly, the consensus was that it is the published work that counts. Is this a false dichotomy? Open access publishing, post-publication review and wider use of blogging and commentary have certainly taken hold of the academic imagination in ways likely to improve the quality of science, and the quality of debate around it.

The fundamental point is that we don’t understand well how scientific evidence contributes to societal, political, physical, or economic outcomes – and acknowledging that makes a straw man of the direct impact metrics so many rail against. To me, this argues that we need better understanding of these processes, and a greater and more nuanced range of methods to understand what valuable and high-quality science looks like.


Negative stereotypes about the policymaking process hinder productive action toward evidence-based policy.

From LSE Impact blogs

A dearth of clear, relevant and reliable research evidence continues to block the use of research, according to a study of 145 research papers on evidence use. According to the authors of the review – Kathryn Oliver, Simon Innvær, Theo Lorenc, Jenny Woodman, and James Thomas – difficulty finding and accessing this research is also a major problem.

Despite several decades of work on evidence-based policy, the goals of improving research uptake and promoting greater use of research within policymaking are still elusive. Academic interest in the area grew out of Evidence-Based Medicine, dating back to Archie Cochrane’s book Effectiveness and Efficiency: Random Reflections on Health Services, published in 1972. To describe the current body of literature in the area, we carried out a systematic review, updating an earlier study of the reasons why policymakers use (or don’t use) evidence. We found 145 papers from countries all around the world, covering a wide range of policy areas, from health to criminal justice, to transport, to environmental conservation. We have summarised our findings below in the hope of reducing the barriers to evidence-based policy.

Table 1. Source: A systematic review of barriers to and facilitators of the use of evidence by policymakers (2014)

What are the major barriers to policymakers using evidence?

Despite these very different contexts, the barriers to and facilitators of research use were remarkably similar. A dearth of clear, relevant and reliable research evidence, and difficulty finding and accessing it, were the main barriers to the use of research; poor dissemination and costs were also still a problem. Access to research, and the managerial and organisational resources to support finding and using it, were all important factors.

Institutions and formal organisations themselves have an important role to play. For example, decision-making bodies often have formal processes by which to consult on and take decisions. Participation of organisations such as NICE can help research to be part of a decision, but many study participants saw the influence of vested interest groups and lobbyists as an obstacle to the use of evidence. The production of guidelines by professional organisations was also noted in the review as a possible facilitator of evidence use by policy makers; however, in cases where the organisation was itself regarded as not being influential (such as the WHO), this was not the case. Whether true or not, these perceptions can damage an organisation’s ability to change practice or policy. On the other hand, institutions can make a positive difference by providing clear leadership to champion evidence-based policy.

Evidence comes from people we know, not journals

Unsurprisingly, good relationships and collaborations between researchers and policymakers help research use. This is a common finding – we already knew that policymakers often prefer to get information and advice from friends and colleagues, rather than papers and journals. One consequence is that policymakers may take advice from academics they already know through university or school. In the UK, that tends to mean Oxbridge, LSE, UCL and King’s.

In our everyday lives, we seek advice from friends and colleagues; we don’t trust people who have been wrong before, or who seem to be biased – or with whom we just don’t get along. Exactly the same applies to the process of policymaking and, in our experience, in the process of research collaboration. Negative stereotypes abound on both sides, and our review found that personal experiences, judgments, and values were important factors in whether evidence was used.

Policymakers and researchers of course have very different pressures, working environments, demands, needs, career paths and merit structures. Their definitions of ‘evidence’, ‘use’ and ‘reliability’ are also likely to differ. Hence, they have very different perceptions of the value of research for policy-making.

However, while these differing perceptions are well documented, we know surprisingly little about what policy-makers actually do with research. This gap helps to perpetuate academics’ commonly held view that policymakers do not use evidence—a negative stereotype that, as any politician will tell you, is hardly conducive to trust or collaboration. As a result, we still do not understand how to improve research impact on policy – or, indeed, why we should try.

Do definitions matter?

Policymakers tend to have a broader definition of evidence than that usually accepted by academics. Academic researchers, understandably, tend to think of ‘evidence’ as academic research findings, while policy-makers often use and value other types of evidence. For example, over a third of the studies mentioned the use of informal evidence such as local data or tacit knowledge.

Other factors reported to facilitate use of evidence include timely access to good quality and relevant research evidence. However, ‘making policy’ and ‘using evidence’ are both really difficult to do and to understand. These are not simple, linear processes.

Linear Stages of Policy Process from the Rational Framework. Adapted from Policy Analysis: A Political and Organizational Perspective by William Jenkins (1978, p. 17).

Figure 1: Examples of models of the policy process found in academia. Source: Wellcome Trust Educational Resources.

But nor are they repeatable, predictable cycles of events. The reality is likely to be far more complicated. The role of chance – for example, who was available that day, or who you happened to have a conversation with – is hard to overestimate.

It is worth asking why most policymakers are not prioritising overcoming these barriers themselves. It is possible – even likely – that politicians do not want their pet policies undermined by evidence. However, we believe the research on this question – as well as much of the debate among policy-makers and commentators about research impact and evidence-based policy – is based on three flawed assumptions: 1) that policymakers do not use evidence; 2) that policymakers should use more evidence; and 3) that if research evidence had more impact, policy would be better.

In fact, these assumptions tell us more about academics than about policy. Policy uses evidence all the time—just not necessarily research evidence.  Academics used to giving 45-minute seminars do not always understand that a hard-pressed policymaker would prefer a 20-second phone call. Nor do researchers understand the types of legitimate evidence, such as public opinion, political feasibility and knowledge of local contexts, that are essential to understanding how policies happen.

There are a great many finger-wagging papers telling policymakers to ‘upskill’, but few are urging academics and researchers to learn about the policy process. What are the incentives for policymakers to try and engage with science and evidence, when it takes too long, is often poor quality, and they are told they are not skilled enough to understand it?

There are important unanswered questions relating to the use of research in policy. We don’t know how evidence actually impacts policy, or how policy-makers ‘use’ evidence—to create options? Defeat opponents? Strengthen bargaining positions? What is the role of interpersonal interaction in policy? Finally, does increasing research impact actually make for better policy, whatever this means? Academics have any number of stories about poor policies that ignore research, but less in the way of rigorous evidence showing the beneficial impact of research.

Other countries have tried creating formal organisations and spaces for relationships to flourish. In the USA, for an academic to leave a university and work at an organisation such as Brookings is seen as a promotion, not a failure. Australia’s Sax Institute provides a forum for policymakers and academics to come together, similar to the ideas behind Cambridge’s Centre for Science and Policy.

It remains to be seen how influential these organisations will be, and whether they succeed in changing the stereotypes and behaviours of both policymakers and academics. And critically, whether relationships brokered in this way can transcend the traditional ‘expert advice’ model of research mobilisation to encompass evidence that aims to be an accurate view of the totality of current knowledge on a problem – like this systematic review. In the meantime, rigorous studies of the three common misconceptions about evidence-based policy will help us all understand how and whether we should get which evidence into policy. Solutions are better than excuses.

This blog was previously published by Research Fortnight and the Alliance for Useful Evidence.


Making the connection

About 2 years ago, I had one of those ‘Eureka’ moments that totally changed my life. Genuinely. It was right up there with finding out about Oyster cards, or washing machines, or something.

At the time, I was a PhD student in my first year, working on a fairly standard project about developing health indicators. As a project, it was fine – about the use of evidence by policy makers, one of my main interests – and I was getting lots of experience in survey design. But for years, I’d been kicking round ideas in my head about the importance of personal relations. Didn’t they really explain nearly all human behaviour? Weren’t peer effects important for the spread of obesity or smoking? Wasn’t social capital important for mental health?

I’d been living on my own in London for a year or two and had found myself pondering the role of human relationships more and more. Of course, I had friends and relations, but I also liked being known by the man in the newsagents at the end of the road, and saying ‘hi’ to the neighbours. Did they count, I wondered? Would these relationships be enough to protect me from isolation, or going ballistic on the tube?

Imagine my delight when, attending a Social Network Analysis seminar day, run by the Mitchell Centre at the University of Manchester, I discovered an entire body of research – methods, philosophy, approaches – which looked at connections between individuals using formal statistical methods. Finding out that other people had had similar ideas to me, and had developed dedicated research methods to investigating these ideas was probably one of the best research moments I’ve ever had.

Unlike traditional statistics, network analysis does not treat individuals (whether bridge players, policy makers, or swingers) as independent. Instead, any ties between actors are identified, described quantitatively and/or qualitatively, and mapped. The statistics used are based on graph theory, but you don’t have to understand it to admire the elegance and usefulness of network analysis. Depending on the relationship collected, people’s attitudes, behaviours, health outcomes and more can be predicted.
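As a flavour of what this looks like in practice, here is a tiny example in Python using the networkx library: a ‘who do you go to for advice?’ network, where in-degree (the number of people who name you) flags likely opinion leaders. The names and ties are made up purely for illustration.

```python
# Sketch: spotting opinion leaders in an (invented) advice-seeking network.
import networkx as nx

# An edge A -> B means A goes to B for advice.
advice = nx.DiGraph([
    ("Asha", "Bela"),
    ("Chris", "Bela"),
    ("Dev", "Bela"),
    ("Bela", "Ella"),
    ("Chris", "Dev"),
])

# In-degree counts how many people name each person as an advice source.
# Note the interdependence: each person's score is made of others' choices.
in_degree = dict(advice.in_degree())
leaders = sorted(in_degree, key=in_degree.get, reverse=True)
print(leaders[:2])  # ['Bela', 'Dev'] - the most-nominated advice sources
```

The same few lines scale to hundreds of actors; the interesting work is in deciding which relationship to collect in the first place.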

For me, this is really the missing element from a lot of public health research. It can be used to identify good targets for research, or opinion leaders in secondary schools, so more targeted messages can be produced and sent out. It allows us to understand, describe, and analyse the social context within which individuals live. And, of course, make beautiful pictures.

Example of Social Network Analysis diagram.

People have used network analysis to study all kinds of things – it’s very popular in the business world to identify ‘future leaders’ or ‘people who make things happen within my business’. Researchers have compared US senators’ voting patterns to cows who lick one another.

My PhD changed quite a lot after this seminar. I ended up using a combination of social network analysis and ethnography to study where public health policy makers found evidence, who the main sources of evidence were and how evidence was incorporated in the policy process. For years, academics in my field have been talking about the importance of interpersonal knowledge translation and how policy makers prefer to get their info from real people. Now I’ve been able to add my own tiny part of the story, come up with new research ideas on the basis of my findings, and learn a niche method (always useful).

My boyfriend still calls them snog webs though.

First published on http://fuseopenscienceblog.blogspot.co.uk
