Why Good Evidence Doesn't Automatically Become Good Policy
This framework was developed for FCDO’s Research Commissioning Centre as part of a broader research programme on evidence use in policymaking. The full framework narrative is available here.
The Puzzle
Development research has never been more rigorous. We have thousands of impact evaluations, systematic reviews synthesising what works, and evidence gap maps showing where the holes are. The infrastructure for producing credible evidence has improved dramatically over the past two decades.
And yet.
Policy decisions still routinely ignore the best available evidence. Interventions with strong track records go unfunded while those with weak or no evidence continue. Policymakers who genuinely want to use evidence can’t find what they need when they need it. Researchers who produce excellent work watch it disappear into academic journals, never touching a policy conversation.
The problem isn’t just that we need more evidence. It’s that the pathways from evidence to policy are poorly understood, full of bottlenecks, and rarely the subject of rigorous study themselves.
What We’re Trying to Understand
The Evidence-Informed Policy Making (EIPM) Framework is an attempt to map those pathways systematically. It’s not a prescriptive model—“do these five things and evidence will be used.” It’s a conceptual framework for understanding what influences whether evidence gets taken up into policy, and under what conditions.
The framework asks: what are the mechanisms through which evidence might influence policy? Where do they break down? What interventions have been tried to strengthen them, and what do we know about whether those interventions work?
These questions turn out to be surprisingly understudied. We have decades of normative argument about why policymakers should use evidence, but far less empirical knowledge about when and how they actually do.
A Living Framework
The EIPM Framework isn’t finished. It’s a guidance document, but one being tested and refined through an active research programme. We’re commissioning two complementary workstreams:
Studies of evidence use in practice. How do policymaking processes actually work in Sub-Saharan Africa and South Asia? What role does evidence play—or not play—in shaping the policy agenda, decisions, or implementation? These are empirical studies of real policy episodes, trying to understand the underlying political, institutional, and systemic factors that influence evidence uptake.
Evaluations of interventions to influence evidence use. When organisations try to increase evidence uptake—through knowledge translation platforms, policymaker training, embedded researchers, rapid evidence services—does it work? For whom? Under what circumstances? We have surprisingly few rigorous evaluations of evidence-use interventions.
Together, these workstreams should help us refine the framework based on empirical findings rather than assumptions.
What We Already Know
Even before the new research comes in, the framework synthesises existing knowledge about evidence-to-policy pathways. Some of what we know:
Supply-side improvements are necessary but not sufficient. Making evidence more accessible, timely, and readable helps. But the binding constraint is often not supply. Policymakers face political pressures, bureaucratic incentives, and information overload that have nothing to do with whether good evidence exists.
Relationships matter as much as products. The most effective knowledge brokers aren’t the ones who produce the best policy briefs. They’re the ones who have trusted relationships with decision-makers, understand the policy process, and can provide responsive support when windows of opportunity open.
Context is everything. An intervention that works in one institutional setting may fail completely in another. Evidence use is shaped by electoral cycles, bureaucratic culture, the power of technical agencies, civil society strength, media environments. Generic interventions tend to underperform context-specific ones.
Timing is critical. Evidence that arrives at the right moment in a policy cycle can be influential. The same evidence arriving six months earlier or later may be ignored entirely. This has implications for how research is commissioned and communicated.
The Bigger Question
Behind the framework is a question about how the development sector operates. We invest heavily in producing evidence. We invest relatively little in understanding whether and how that evidence changes anything.
If evidence-informed policymaking is the goal, then the pathways from research to policy deserve as much rigorous attention as the research itself. The EIPM Framework is a step toward taking those pathways seriously—not as a matter of communications strategy, but as an object of empirical study.
The framework is available as a resource for researchers designing studies on evidence use, for funders commissioning evidence-uptake interventions, and for anyone trying to think systematically about why good evidence doesn’t automatically become good policy.
The EIPM Framework was developed for the FCDO Research Commissioning Centre. Related funding opportunities are available for studies of evidence use in practice and evaluations of evidence-use interventions.