Can the link between research and practice be fixed?
Take a second to reflect: What ideas over the last century or two have effected a sea change for the better in the way we live our lives? Here are a couple that occurred to us: Germs cause disease. Cigarettes cause cancer. Somehow the research behind these ideas has become generally accepted knowledge, now so embedded in our worldview that it is as reflexive as “the sky is blue.”
What ideas in crime, justice and urban policy have had a similar impact? This is a harder question to answer, perhaps because the unit of analysis in these fields is less the iron rules of cells and chemistry than the messy knots of human behavior.
New ideas always face significant barriers to entry into the world of policy and practice. The relationship between academic research and the “real world” of government work is a vexed one. Social scientists often complain that their work isn’t taken seriously and that, despite their best efforts, government agencies continue to pursue interventions (take a bow, gun buybacks!) that have not shown strong evidence of effectiveness.
On the other side of the fence, government decision-makers may not regularly gripe about researchers, but that is only because they are so busy with the daily grind that they don’t have time to wade through abstruse papers larded with voluminous footnotes and obfuscating language (take a bow, “iatrogenic”!).
Each of us has dedicated a good chunk of our careers to trying to bridge the divide between criminal justice research and practice. This has been driven by a conviction that knowledge is a good thing and that decision-making about public safety should be guided by data rather than the politics of the moment or the whims of whoever happens to be in charge.
We are hardly alone in this. Starting in earnest during the Clinton administration, there has been a concerted effort by a range of important actors to encourage “evidence-based” criminal justice policy and programs — a phrase at once hilarious and poignant. In the 1990s, one of us excitedly told a prominent historian — a scholar in a field where evidence is the sine qua non — about this new turn in our profession. He rolled his eyes and said pitilessly: “And before, you based your world on what? Voodoo?”
But the phrase does have a meaning, if coded. The subtext, rarely spoken aloud, is an attempt to reduce the temperature of the public discourse about criminal justice, moving policymaking away from the realm of politics and into the realm of science as much as possible. In the years before evidence-based reform emerged as a concept, high-profile tragedies — cases of child abduction or random murders — had been used to make the case for more punitive lawmaking throughout the country. At the federal level, the infamous Willie Horton campaign advertisement in 1988 performed similar work.
The evidence-based policy movement, in criminal justice and other fields, sought to move away from such demagoguery. During the era of reduced crime that began in the 1990s, it proved reasonably successful. “Follow the data” became a rallying cry that appealed to both Democrats and Republicans. One sign of the movement’s success was the creation of CrimeSolutions.gov, a website administered by the U.S. Department of Justice that summarizes academic research in an effort to help policymakers and practitioners figure out which criminal justice programs and practices work and which do not.
Recent years, however, have seen the emergence of a palpable backlash to the evidence-based movement. Perhaps the most extreme expression of this backlash has been the argument by prison abolitionists and other radical activists that the evidence-based paradigm “strengthens the influence of neoliberalism, punitive impulses, and white supremacy over criminal system policy and procedure.” They point to the fact that the United States is still plagued by levels of violence, racial disparities and incarceration rates that dwarf peer nations. What use is social science evidence if it cannot prevent, or meaningfully ameliorate, these kinds of problems?
Earlier this year, Megan Stevenson, an economist at the University of Virginia Law School, published an essay in the Boston University Law Review raising further questions about evidence-based reform. In “Cause, Effect, and the Structure of the Social World,” Stevenson reviews a half-century of randomized controlled trials (RCTs, widely regarded as the “gold standard” of social science) and finds that “most reforms and interventions in the criminal legal space are shown to have little lasting impact when evaluated with gold-standard methods of causal inference.” For Stevenson, this is a reflection of the immutable social structures of the world that make change hard to engineer, at least when using the kinds of “limited-scope interventions” that lend themselves to randomized trials. Provocatively, Stevenson argues that it may be necessary to abandon narrow, evidence-based reform and instead “seek systemic reform, with all its uncertainties.”
Stevenson’s essay got us thinking. Is the notion that we can meaningfully address social problems a myth? Does it really make sense to rely on evidence to guide policy? And if so, what should this look like?
At the same time, our friends at Hypertext, the journal of the Niskanen Center — recently named the “most interesting think tank in American politics” by Time magazine — were wrestling with similar questions. So we decided to join forces. Together, we asked more than a dozen leading scholars, philanthropists, journalists and government policymakers to discuss the role of evidence in policymaking, using Stevenson’s article as a jumping-off point. The result of this exploration makes up the bulk of this issue of Vital City.
Several themes run through this issue:
Are we back to “nothing works”? Stevenson argues that most reforms “have little to no lasting effect when evaluated by RCTs and the occasional success usually fails to replicate when evaluated in other settings.” A number of our contributors contest this reading. For example, Anna Harvey points out that RCTs are not the only form of evidence that matters and highlights a number of “quasi-experimental” evaluations that have documented positive impacts for interventions such as community crime monitoring programs and reducing home foreclosures. In a similar vein, Alex Tabarrok argues that, if you look beyond RCTs, you can find strong “causal-inference evidence” to support the effectiveness of policing and incapacitation. While a number of our contributors agree with this more positive take on the evidence, others share Stevenson’s pessimistic analysis. The bottom line is that there is no broad consensus about how to interpret the literature.
The devil is in the details. There is a strong tendency to judge social programs on a pass-fail basis: Did this initiative work or not? But this kind of binary analysis (what Tracy Palandjian and Jake Segal might call “false simplicity”) obscures the nit and grit of social science research. Well-executed evaluations might sometimes point in contradictory directions, but they almost always contain a lot of useful information. Chloe Gibbs identifies the research on Head Start as an example of how it is often difficult to reduce complicated findings to simple slogans. Initially, the findings from Head Start suggested that program effects faded as the years went on, leading many to declare Head Start a failure. But as methods improved and the data were reanalyzed, a more nuanced picture emerged, including the reality that the program had different results for different populations. Given these kinds of experiences, Palandjian and Segal appeal for greater humility, suggesting that the call of “we know what works” be replaced with something more accurate: “It might work for some people.”
Compared to what? What is the real-world alternative to relying on evidence to guide policy decisions? Candice Jones offers the perspective of someone who has worked in federal and state government. “Subject matter experts existed, but their experience did not necessarily drive policy, especially when it conflicted with special interests,” she writes. John Arnold bemoans “uninformed policies” that risk “inflicting harm on the very communities that we seek to serve.” This argument is made most pointedly by Phil Cook and Jens Ludwig, who argue against what they call the “YOLO approach to policy,” which they believe is a dangerous way to move forward in a world full of unintended consequences. Instead of turning away from evidence-based policymaking, as Stevenson suggests, many of our contributors essentially argue that we have not gone far enough.
It’s the implementation, stupid! Researchers and policymakers may often share the same goal: to make the world a better place. But the different incentives that drive their behavior mean that the distance between the ivory tower and the arena of government action is often enormous. The accumulation of knowledge can be a lengthy process. Applying knowledge to see if it “works” in the real world can take even longer. In addition, the hierarchy of status in the academy tends to value the production of theory, not applied research. Meanwhile, policymakers, and elected officials in particular, need answers to social problems urgently. What “works” for them is often the appearance of action. Given the fierce push and pull of politics, the nuances of evidence often fall by the wayside. Thus we get the National Guard on every subway platform, whether or not there is any evidence for that deployment.
Mainly, we fail. As with many things in life, the cult of success often obscures hard questions. And the answers to those questions could in fact advance knowledge and thus well-being. Jeff Liebman points out that an unfortunate effect of the “evidence-based” label is that it hardens programs into inflexibility, out of fear that any adjustment might change the results. But inflexibility can be a real problem on the ground. Experimentation and adjustment — and, yes, even failure — are often essential to developing good practice. How to define success will also depend on the many different contexts in which a particular idea is tested, as Jennifer Doleac and others in this collection point out. Can even the most rigorous evidence be brought to scale when every aspect of how and where it is implemented — from the charisma of a leader to the weather of a jurisdiction — could potentially affect the results?
The joy of incrementalism. The world is complicated. People are complex. It is exceedingly difficult to accomplish goals, like reducing recidivism, that are contingent on multiple variables — motivation, opportunity and human nature, among many other things. Our contributors mostly seem to agree about this reality. From this baseline of agreement, a glass half-full/half-empty divide emerges. Some, like Stevenson, look at modest results and throw up their hands in frustration. Others, like Aaron Chalfin and Sherry Glied, choose to celebrate the value of incremental improvements. “Modest effects are not unimportant,” declares Glied. “The power of incrementalist policy is in the accumulation of increments.” Chalfin agrees: “It is possible to make the world better, slowly, one step at a time.”
We recognize the limitations of social science evidence. There is still a great deal that researchers don’t know how to measure. We may not even know how much we don’t know. The funding available to support high-quality research is still not what it needs to be.
There are less benign problems as well. In recent years, we have seen high-profile accounts of how professional pressures and perverse incentives have warped the behavior of researchers. There are many legitimate reasons for the erosion of public confidence in academic research.
Nevertheless, we believe that translating research into policy and practice is essential to the task of preventing and reducing crime. We hope that this issue, and indeed the work of Vital City in general, serves as a bridge between the worlds of research and policy. Our goal, as ever, is to find good ideas from academia (and other places as well) and to ensure that they are accessible to those who make and influence government decisions.
We hope you enjoy this special issue of Vital City. We thank David Dagan, Greg Newburn and Richard Hahn of Hypertext and the Niskanen Center for their partnership in putting it together. And we are grateful to Dan Wilhelm, Joel Wallman and the Harry Frank Guggenheim Foundation for their support.
As always, we are eager to hear your thoughts, comments and ideas.
Elizabeth Glazer
Founder and co-editor, Vital City
Greg Berman
Co-editor, Vital City