A Graphical Method for Comparing Response-Adaptive Randomization Procedures
Response-adaptive randomization procedures pursue two goals: estimating the treatment effect and assigning patients a higher probability of receiving the superior treatment. These objectives compete, and no procedure in the literature is "perfect" with respect to both. For clinical trials of two treatments, we discuss metrics for comparing response-adaptive randomization procedures that can be represented graphically to compare designs. These metrics are functions of the simulated distribution of the jointly sufficient statistics for estimating functions of the unknown parameters. We explore the binary-response and normal cases and compare numerous procedures from the literature, distinguishing between metrics of efficiency and metrics of ethical cost. When these metrics are graphed against each other, we can gauge how well competing designs attain the competing objectives. We find that, contrary to asymptotic results, tuning parameters that affect the variability of a procedure have little impact in the finite-sample case. We also find that procedures that target an optimal allocation based on ethical and efficiency considerations generally provide a better compromise design than procedures that do not.
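To make the comparison concrete, the following is a minimal Monte Carlo sketch, not the paper's implementation, of one classical response-adaptive procedure for binary responses, the randomized play-the-winner (RPW) rule. It simulates the sufficient statistics (per-arm sample sizes and success counts), then summarizes each run by an ethical-type metric (mean proportion of patients allocated to the superior arm) and an efficiency-type metric (simulated variance of the estimated success-probability difference); pairs of such summaries are what one would plot against each other across designs. All parameter values (`p1`, `p2`, `n`, `reps`) are illustrative assumptions.

```python
import random

def simulate_rpw(p1, p2, n, reps, seed=0):
    """Monte Carlo sketch of the RPW(1,1) rule for two binary treatments.

    Returns (ethical, efficiency):
      ethical    -- mean proportion of patients allocated to treatment 1
                    (the superior arm when p1 > p2),
      efficiency -- simulated variance of the estimate of p1 - p2
                    (smaller is better).
    Parameter values are illustrative, not taken from the paper.
    """
    rng = random.Random(seed)
    alloc_props = []   # per-replicate share of patients on treatment 1
    effect_ests = []   # per-replicate estimate of p1 - p2
    for _ in range(reps):
        urn = [1, 1]       # urn starts with one ball per treatment
        n_arm = [0, 0]     # patients assigned per arm
        s_arm = [0, 0]     # successes per arm
        for _ in range(n):
            # Draw an arm with probability proportional to urn composition.
            arm = 0 if rng.random() < urn[0] / (urn[0] + urn[1]) else 1
            success = rng.random() < (p1 if arm == 0 else p2)
            n_arm[arm] += 1
            s_arm[arm] += int(success)
            # Success adds a ball of the drawn type; failure adds the other.
            urn[arm if success else 1 - arm] += 1
        alloc_props.append(n_arm[0] / n)
        if n_arm[0] > 0 and n_arm[1] > 0:
            effect_ests.append(s_arm[0] / n_arm[0] - s_arm[1] / n_arm[1])
    ethical = sum(alloc_props) / len(alloc_props)
    m = sum(effect_ests) / len(effect_ests)
    efficiency = sum((e - m) ** 2 for e in effect_ests) / len(effect_ests)
    return ethical, efficiency

# One (ethical, efficiency) point for this design; repeating this for
# other procedures gives the points to graph against each other.
ethical, var_est = simulate_rpw(p1=0.7, p2=0.4, n=50, reps=2000)
```

A complete-randomization comparator would fix the draw probability at 1/2 instead of using the urn; plotting its (ethical, efficiency) pair alongside the RPW pair illustrates the efficiency-versus-ethics trade-off the metrics are designed to display.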