Priming bias versus post-treatment bias in experimental designs

(2024)

(with Jacob R. Brown, Sophie Hill, Kosuke Imai, and Teppei Yamamoto)

Conditioning on variables affected by treatment can induce post-treatment bias when estimating causal effects. Although this suggests that researchers should measure potential moderators before administering the treatment in an experiment, doing so may also bias causal effect estimation if the covariate measurement primes respondents to react differently to the treatment. This paper formally analyzes this trade-off between post-treatment and priming biases in three experimental designs that vary when moderators are measured: pre-treatment, post-treatment, or a randomized choice between the two. We derive nonparametric bounds for interactions between the treatment and the moderator in each design and show how substantive assumptions can narrow these bounds. These bounds allow researchers to assess the sensitivity of their empirical findings to either source of bias. We extend the basic framework in two ways. First, we apply it to post-treatment attention checks and bound how much inattentive respondents can attenuate estimated treatment effects. Second, we develop a parametric Bayesian approach that incorporates pre-treatment covariates, sharpening our inferences and quantifying estimation uncertainty. We apply these methods to a survey experiment on electoral messaging. We conclude with practical recommendations for scholars designing experiments.
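The paper's design-specific bounds are derived in the full text; purely as an illustration of the general idea behind nonparametric bounding, the sketch below computes conservative Horowitz–Manski-style worst-case bounds for a treatment-moderator interaction when the treated arm's moderator measurement cannot be trusted (e.g., due to post-treatment bias). All function names and inputs here are hypothetical, and this is not the paper's estimator.

```python
# Illustrative sketch only (hypothetical names, not the paper's method):
# worst-case bounds for the interaction
#   [E[Y(1)|M=1] - E[Y(0)|M=1]] - [E[Y(1)|M=0] - E[Y(0)|M=0]]
# assuming the outcome Y is bounded in [y_min, y_max], the control arm
# identifies p = P(M=1) and the conditional control means, and only the
# marginal treated mean ey1 = E[Y(1)] is usable because the treated arm's
# moderator may itself be affected by treatment.

def interaction_bounds(ey1, p, ey0_m1, ey0_m0, y_min=0.0, y_max=1.0):
    # Decompose ey1 = p*a1 + (1-p)*a0, where a1 = E[Y(1)|M=1] and
    # a0 = E[Y(1)|M=0] are unidentified. Letting the "other" conditional
    # mean range over [y_min, y_max] yields worst-case intervals for each.
    a1_lo = max(y_min, (ey1 - (1 - p) * y_max) / p)
    a1_hi = min(y_max, (ey1 - (1 - p) * y_min) / p)
    a0_lo = max(y_min, (ey1 - p * y_max) / (1 - p))
    a0_hi = min(y_max, (ey1 - p * y_min) / (1 - p))
    # Combining the two intervals independently ignores the linkage
    # between a1 and a0, so the result is conservative but valid.
    lo = (a1_lo - ey0_m1) - (a0_hi - ey0_m0)
    hi = (a1_hi - ey0_m1) - (a0_lo - ey0_m0)
    return lo, hi
```

With uninformative inputs (e.g., `ey1 = 0.5`, `p = 0.5`) the interval is very wide, which is exactly why the substantive assumptions discussed in the paper are needed to narrow such bounds to something informative.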