Institute for Data Science and Artificial Intelligence

Who benefits in policy trials?

Collaborators: Dr ZhiMin Xiao (Graduate School of Education) and Professor Mihaela van der Schaar (John Humphrey Plummer Professor of Machine Learning, Artificial Intelligence and Medicine, University of Cambridge & Turing Fellow).

IDSAI Research Fellow: Dmitry Kangin

Description: Large-scale randomised controlled trials (RCTs) are fast becoming “the gold standard” for testing the causal effects of policy interventions. RCTs are typically evaluated by the statistical, educational, and socioeconomic significance of the average treatment effect (ATE) on the study sample. Interventions without a statistically significant ATE are often discarded as not meaningful and, as a result, usually not implemented more widely, partly because the difference between statistical and substantive hypotheses goes unrecognised. However, while some interventions may have no effect on average, they might still have a meaningful effect on a relevant subgroup of individuals for whom the treatment is beneficial. In some cases, an intervention could thus still provide a net benefit if rolled out. Understanding and identifying for whom a treatment works is therefore critical for policy-makers and society at large.
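The point that a near-zero average effect can mask real subgroup effects can be illustrated with a small simulation. This is a hypothetical sketch, not data or code from the project: two equal subgroups receive opposite treatment effects (+1 and -1), so the ATE is roughly zero while the subgroup-level effect is large.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical set-up: a binary covariate splits the sample into two equal
# subgroups with opposite treatment effects, which cancel out on average.
subgroup = rng.integers(0, 2, n)           # 0 or 1
treated = rng.integers(0, 2, n)            # randomised assignment
effect = np.where(subgroup == 1, 1.0, -1.0)
outcome = 0.5 + treated * effect + rng.normal(0.0, 1.0, n)

# Average treatment effect over the whole sample: close to zero.
ate = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Conditional effect within subgroup 1: close to +1.
cate_1 = (outcome[(treated == 1) & (subgroup == 1)].mean()
          - outcome[(treated == 0) & (subgroup == 1)].mean())

print(f"ATE  ~ {ate:.2f}")
print(f"CATE ~ {cate_1:.2f}")
```

A trial judged only on the ATE would discard this intervention, even though half the sample benefits substantially.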

Recognising the strengths of RCTs as a research and evaluation tool, this project proposed an individualised approach to impact estimation. Thanks to recent developments in machine learning and the increase in both the quantity and quality of research data, it makes greater sense than before to estimate and compare treatment effects at individual and group levels, ultimately generating deeper insights into the causal effects of tested interventions by uncovering what worked, for whom, and by how much. As with any research method, the individualised approach has its limitations. This project therefore set out to validate it using both simulations and empirical studies, helping practitioners understand the influence-function-based approach to evaluating machine learning algorithms for individualised treatment effects (ITEs) without access to counterfactual outcomes.
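The simulation-based validation idea can be sketched with a simple T-learner, one of the standard ITE estimators: fit a separate outcome model for each arm and take the difference of predictions. This is an illustrative sketch under assumed linear data, not the project's influence-function method; in a simulation the true ITEs are known, so the estimator can be scored directly.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical simulation where the true ITE is known by construction,
# so the estimator can be checked without counterfactual outcomes.
x = rng.normal(size=n)                      # a single covariate
t = rng.integers(0, 2, n)                   # randomised assignment
true_ite = 1.0 + 2.0 * x                    # effect varies across individuals
y = x + t * true_ite + rng.normal(0.0, 0.5, n)

# T-learner: one linear outcome model per arm, ITE = difference of predictions.
m1 = np.polyfit(x[t == 1], y[t == 1], 1)    # treated-arm regression
m0 = np.polyfit(x[t == 0], y[t == 0], 1)    # control-arm regression
ite_hat = np.polyval(m1, x) - np.polyval(m0, x)

rmse = np.sqrt(np.mean((ite_hat - true_ite) ** 2))
print(f"RMSE of estimated ITEs: {rmse:.3f}")
```

On real trial data the true ITEs are unobservable, which is exactly why the influence-function-based evaluation studied in this project is needed.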