Interpretable Causal Inference for Advancing Healthcare and Public Health

You are all invited to an exciting _online_ seminar at Biostats on Thursday, October 17 @ 15:30: 

Harsh Parikh, Dept. of Biostatistics, Johns Hopkins University, USA

"Interpretable Causal Inference for Advancing Healthcare and Public Health”

NOTE: This will be a Zoom seminar: https://ucph-ku.zoom.us/j/8274562019?pwd=ZEJuZkZUY05WNE02YzNWUGhONWoyUT09

Causal inference methods have become critical across diverse domains, such as healthcare, public health, and the social sciences, enabling the dissection of complex systems and guiding pivotal decisions. The recent integration of machine learning and advanced statistical techniques has enhanced the power of causal inference, but many of these methods rely on complex, black-box models. While effective, these models can obscure the underlying mechanisms of their estimates, raising concerns about credibility, especially in contexts where lives or significant resources are at stake. To mitigate these risks, ensuring interpretability in causal estimation is paramount.

In this presentation, I introduce our interpretable approach to addressing a fundamental challenge in randomized controlled trials (RCTs): generalizing results to target populations when certain subgroups are underrepresented. RCTs are essential for understanding causal effects, but effect heterogeneity and underrepresentation often limit their generalizability. Our work proposes a novel framework to identify and characterize these underrepresented subgroups, refining target populations for improved inference. Specifically, we present the Rashomon Set of Optimal Trees (ROOT), an optimization-based method that minimizes the variance of treatment effect estimates to ensure more precise and interpretable results. We apply ROOT to the Starting Treatment with Agonist Replacement Therapies (START) trial, extending inferences to the broader population represented by the Treatment Episode Data Set: Admissions (TEDS-A). This method enhances decision-making accuracy and improves communication of treatment effects, offering a systematic approach for future trials and applications in healthcare. Our findings demonstrate that improving the interpretability of causal inference can advance precision and applicability in real-world scenarios.