Randomised controlled trials (RCTs) are well known for the part they play in developing life-saving medical treatments. In the past year alone, RCTs have of course played a significant role in all our lives by testing the efficacy of the vaccines that have now been administered to millions of people across the world. What people may not know is that the RCT is also one of the most widely used methods for evaluating social policies. And right now, such trials are at the heart of an ethical debate.
There is no doubt that the use of RCTs to evaluate social policies has significantly improved lives. In 2019, Abhijit Banerjee, Esther Duflo and Michael Kremer were awarded the Nobel prize for economics for their “experimental approach to alleviating global poverty”. Their studies, and those of their co-authors, have generated invaluable evidence on topics as diverse as the impacts of free mosquito nets on health, the effects of microcredit on poverty and the merits of different educational interventions for low-income students. However, as more and more people take part in RCTs, the ethics of this research methodology is coming under increasing scrutiny.
The basic premise of RCTs is simple: the outcomes of a group receiving a “treatment” (the policy being tested) are compared against the outcomes of a group receiving no treatment (the control group). Crucially, it is entirely random whether an individual is assigned to the treatment or the control group. Random assignment enables researchers to produce the strongest possible evidence on the impacts of the policy. This evidence will later be used to improve policy design and so will benefit the future recipients of the policy. However, random assignment is not ideal from the point of view of the people who take part in the trial itself: some people will receive no treatment when a treatment could be beneficial to them, while others may receive a treatment that has no effect on them or is even harmful.
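For readers who like to see the mechanics, here is a minimal sketch of pure random assignment in Python. The function name and the participant labels are illustrative, not drawn from any particular trial.

```python
# A minimal sketch of random assignment: every participant has the same
# chance of landing in the treatment or the control group.
import random

def assign_randomly(participants, seed=42):
    """Assign each participant to 'treatment' or 'control' with equal probability."""
    rng = random.Random(seed)  # fixed seed so the assignment can be reproduced
    return {p: rng.choice(["treatment", "control"]) for p in participants}

groups = assign_randomly(["p1", "p2", "p3", "p4"])
```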
This tension between the welfare of study participants and the need to generate the best possible policy evidence creates an ethical dilemma. Is it right, for example, to withhold cash incentives from poor workers, or educational improvements from impoverished children, when such interventions are likely to benefit them? Or, when an NGO cannot help an entire population, is it ethically justifiable to distribute the scarce programme slots at random, rather than offering them to those likely to benefit the most? Striking the right balance between collecting data that could improve the lives of many and safeguarding the welfare of trial participants is not always easy.
These concerns have led some scholars to doubt that RCTs can ever be ethically justified, and opposition to their use remains considerable. But I think that, rather than giving up on RCTs altogether, we need to develop this powerful research method in ways that address the ethical controversies surrounding it.
My co-authors and I have developed an algorithm that enables researchers to balance the goal of generating the best possible evidence of a policy’s effects against the goal of improving the welfare of the people taking part in the study. Taking advantage of the fact that studies are often run on multiple cohorts of individuals, the algorithm learns over time which interventions work best for different types of people and then assigns participants to the treatment that is most likely to benefit them. Importantly, the algorithm also assigns a minimum share of the sample to each treatment at random, so as not to compromise the study’s ability to generate high-quality evidence.
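To make that concrete, below is a stylised Python sketch of an adaptive design of this kind. It is an illustration only: the simple “epsilon-greedy” rule it uses is a stand-in for our actual algorithm, which among other things learns which treatments suit different types of people, a feature omitted here for brevity. The names assign_cohort and min_random_share are likewise my own inventions for this sketch.

```python
# A stylised sketch of adaptive assignment with a guaranteed random share.
# The epsilon-greedy rule and all names here are illustrative stand-ins,
# not the algorithm from our study (which also conditions on participant
# characteristics, omitted here for brevity).
import random

def assign_cohort(participants, outcomes_so_far, treatments,
                  min_random_share=0.2, seed=0):
    """Assign a new cohort: a fixed share purely at random (preserving the
    study's evidential value), the rest to the treatment that has performed
    best in earlier cohorts."""
    rng = random.Random(seed)
    # Mean outcome per treatment so far, e.g. 1 = found work, 0 = did not.
    means = {t: sum(v) / len(v) if v else 0.0
             for t, v in outcomes_so_far.items()}
    best = max(treatments, key=lambda t: means.get(t, 0.0))
    assignment = {}
    for p in participants:
        if rng.random() < min_random_share:
            assignment[p] = rng.choice(treatments)  # randomised share
        else:
            assignment[p] = best                    # welfare-oriented share
    return assignment

# After one cohort, suppose cash transfers look strongest:
history = {"cash": [1, 0, 1], "advice": [0, 1, 0], "motivation": [0, 0, 0]}
cohort_two = assign_cohort(["p5", "p6", "p7", "p8"], history,
                           ["cash", "advice", "motivation"])
```

The min_random_share floor is what reconciles the two objectives: in expectation, every treatment continues to receive a slice of each cohort, so unbiased comparisons between treatments remain possible even as most participants flow towards the intervention that appears to serve them best.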
We tested the algorithm in a field experiment designed to help Syrian refugees in Jordan find work – with intriguing results. Our experiment offered three interventions: a small cash transfer; job search advice; and motivational support. In our setting, the cash transfer proved to be the most successful intervention for helping refugees into employment. Job search advice and motivational support had weaker impacts.
Surprisingly, we found that the timing of an intervention mattered as much as its type. One month after treatment, none of the interventions we tried had a detectable effect; the impact of the cash transfer only became visible two months after the money was disbursed.
We were able to show that, when the algorithm is set up to maximise employment two months after treatment, it can increase the share of refugees in work by 80 per cent – a very large effect. On the other hand, the algorithm has limited benefits when it is set up to maximise employment one month after treatment. These results showcase the potential of our new method, but also the importance of understanding the likely timing and nature of the changes one wants to bring about.
RCTs for policy evaluation are here to stay, but their ethical challenges needn’t be. Our study shows that it is possible to generate high-quality data while also improving the welfare of the people who take part in our studies. We can make research more ethical without compromising its quality – and finding more ways to reconcile ethics with academic rigour is an exciting agenda for future research.
Stefano Caria is associate professor of economics at the University of Warwick and an affiliate of the Abdul Latif Jameel Poverty Action Lab.