Adaptive trial design


The US Food and Drug Administration (FDA) defines an adaptive design as a “clinical trial design that allows for prospectively planned modifications to one or more aspects of the design based on accumulating data from subjects in the trial.” Put more simply, it is a trial that changes in response to the information being collected.

Adaptive changes can be as simple as reassessing the sample size as the study progresses to work out whether more (or fewer) people need to be enrolled, but can also include more complex adaptations such as treatment arm selection, dose selection, patient population enrichment, and seamless phase transitions. These elements allow researchers to adapt the trial based on interim data without starting over from scratch.

  1. Pre-planned
    Except for issues of patient safety, it is best that any change to the study procedures be planned well in advance – i.e. the exact changes to be made, and the circumstances under which those changes are enacted, are defined in detail before the study starts. This avoids changing the study in ways that might introduce bias or increase the risk of ‘statistically significant’ results being found simply by chance (i.e. type I error).
  2. Interim analysis
    Any analysis of the data before all participants have completed the study can be considered an ‘interim’ analysis. As an example, a study might pause recruitment once half the target participants are recruited and analyse the data to re-estimate the necessary sample size. However, as any analysis during a study can affect how the rest of the study is conducted, it is essential that the assumptions underlying any interim analysis and the potential implications of the results are properly understood.
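As a toy illustration of the sample size re-estimation described above (the standard deviations and target difference below are invented for the example), an interim, blinded estimate of outcome variability can be plugged into the usual two-sample normal-approximation formula to revise the recruitment target:

```python
def n_per_group(sigma, delta, z_alpha=1.96, z_beta=0.84):
    """Normal-approximation sample size per group for a continuous outcome:
    n = 2 * (z_alpha + z_beta)**2 * sigma**2 / delta**2,
    with z_alpha = 1.96 (two-sided alpha = 0.05) and z_beta = 0.84 (80% power).
    Round up to whole participants in practice."""
    return 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2

# Planning assumption: SD of 10 units, clinically relevant difference of 4 units.
planned = n_per_group(sigma=10, delta=4)

# Interim (blinded) data suggest the outcome is more variable than assumed,
# so the target sample size is re-estimated with the observed SD.
revised = n_per_group(sigma=12.5, delta=4)

print(round(planned), round(revised))  # the revised target is larger
```

Because this analysis uses only pooled variability, not the treatment allocation, it is an example of the non-comparative adaptations described below.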

The FDA divides adaptive designs into two categories, defined by whether or not the interim analysis compares study outcomes between the intervention and control arms:

  1. Non-comparative adaptive designs
    Any analysis not involving the treatment allocation cannot provide an estimate of the effect of the treatment compared to the control and is considered non-comparative. An example is an analysis of the number of study events, or of variability in study outcomes, in all or part of the study population, allowing the researcher to recalculate the sample size estimate. This is typically done to allow changes in the number of participants recruited into the study.
  2. Comparative adaptive designs
    Any analysis that takes account of the treatment group allocation is considered comparative. Testing for differences between the treatment and control groups before the study ends is like ‘checking’ the results of the study early. Each interim analysis therefore carries a risk of a false-positive finding and so directly affects the overall risk of type I error in the study. Type I error risk is described statistically with p-values and confidence intervals (by convention, often set to a 1 in 20 risk [p = 0.05]), and comparative interim analyses may require adjustments to the p-value considered ‘significant’ at the end of the study.
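The inflation of type I error from repeated comparative looks can be seen in a small Monte Carlo sketch (the number of looks, participants, and simulated trials here are arbitrary choices for illustration). Every simulated trial has no true treatment effect, yet testing at p < 0.05 at four looks declares ‘significance’ well above 5% of the time:

```python
import random
from statistics import NormalDist

def p_value(xs, ys):
    """Two-sample z-test p-value, assuming a known SD of 1."""
    k = len(xs)
    z = (sum(xs) / k - sum(ys) / k) / (2 / k) ** 0.5
    return 2 * (1 - NormalDist().cdf(abs(z)))

random.seed(1)
n_final, trials = 100, 2000
false_positives = 0
for _ in range(trials):
    # Both arms drawn from the same distribution: the null hypothesis is true.
    xs = [random.gauss(0, 1) for _ in range(n_final)]
    ys = [random.gauss(0, 1) for _ in range(n_final)]
    # 'Check' the trial at 25%, 50%, 75% and 100% of recruitment.
    if any(p_value(xs[:k], ys[:k]) < 0.05 for k in (25, 50, 75, 100)):
        false_positives += 1

rate = false_positives / trials
print(rate)  # noticeably above the nominal 0.05
```

Group-sequential methods (for example, Pocock or O’Brien–Fleming stopping boundaries) restore the overall type I error rate by spending a smaller alpha at each look.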

Study efficiency – adaptive designs can provide a greater chance of finding a treatment effect or require fewer participants or a shorter trial duration. Adaptive allocation of more resources to promising treatment arms or modifying trial parameters based on emerging trends or outcomes can make studies cheaper and more feasible.

Flexibility – flexibility in key trial design elements such as sample size, treatment arms, patient population, and endpoints. This flexibility enables researchers to make adjustments based on interim results without compromising the integrity of the trial.

Ethical advantages – Adaptive designs can maximise the number of people receiving effective treatment and/or minimise exposure to inferior treatments. For instance, if an interim analysis shows that a treatment is highly unlikely to be effective then the study can be halted (known as stopping for futility) thereby avoiding the exposure of additional participants to an ineffective treatment. Conversely, a treatment might be found to be clearly effective, resulting in the study being halted to ensure that the treatment can be made available to patients earlier than anticipated and avoiding the ethical problem of randomising participants in the control arm to treatment known to be inferior (this is known as stopping for efficacy).
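A similar sketch (again with invented numbers and a deliberately crude stopping rule) shows the resource side of stopping for futility: when the treatment truly has no effect, halting trials whose interim results show no hint of benefit substantially reduces the expected number of participants exposed:

```python
import random
from statistics import NormalDist

def p_value(xs, ys):
    """Two-sample z-test p-value, assuming a known SD of 1."""
    k = len(xs)
    z = (sum(xs) / k - sum(ys) / k) / (2 / k) ** 0.5
    return 2 * (1 - NormalDist().cdf(abs(z)))

random.seed(2)
n_final, n_interim, trials = 200, 100, 1000
participants_used = 0
for _ in range(trials):
    # Simulate an ineffective treatment: both arms share the same distribution.
    xs = [random.gauss(0, 1) for _ in range(n_final)]
    ys = [random.gauss(0, 1) for _ in range(n_final)]
    # Crude futility rule: stop at the halfway point if the interim p > 0.5.
    if p_value(xs[:n_interim], ys[:n_interim]) > 0.5:
        participants_used += 2 * n_interim   # stopped for futility
    else:
        participants_used += 2 * n_final     # ran to completion

print(participants_used / trials)  # well below the fixed-design total of 400
```

Real futility rules are chosen (and simulated) far more carefully than this, but the principle is the same: fewer participants are randomised to a treatment that is not working.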

Acceptability to stakeholders – The potential efficiency and/or ethical gains may make adaptive trials more appealing to participants, clinicians, and research funders.

Adaptive designs are not the solution to all research problems. The choice to use an adaptive design depends very much on the specific research question.

The statistical complexity is, in many cases, substantial. Simulations of possible outcomes are often necessary to properly inform design. This lowers the risk of problems developing, such as conducting an interim analysis when there is insufficient data to make any reliable judgements. Links are provided below that introduce the reader to some relevant concepts, including Bayesian approaches to adaptation.

The efficiency gains are not always guaranteed and may be marginal. Practical considerations, such as the additional time and cost to perform interim analyses and implement their findings, may offset the gains from reduced sample sizes.

Reduced sample sizes may also provide less scope for examining secondary research hypotheses and intervention safety.


Bhatt DL, Mehta C. Adaptive Designs for Clinical Trials. N Engl J Med. 2016 Jul 7;375(1):65-74. doi: 10.1056/NEJMra1510061.

Thorlund K, Haggstrom J, Park JJ, Mills EJ. Key design considerations for adaptive clinical trials: a primer for clinicians. BMJ. 2018;360:k698. doi:10.1136/bmj.k698.

Wason JMS, et al. Practical guidance for planning resources required to support publicly-funded adaptive clinical trials. BMC Med. 2022 Aug 10;20(1):254. doi: 10.1186/s12916-022-02445-7.

Adaptive Designs for Clinical Trials of Drugs and Biologics: Guidance for Industry. FDA, 2019.

Statistics focused resources

Wason et al. A Bayesian adaptive design for biomarker trials with linked treatments. Br J Cancer. 2015 Sep 1;113(5):699-705.

Simon N, Simon R. Adaptive enrichment designs for clinical trials. Biostatistics. 2013 Sep;14(4):613-25.

Muehlemann et al. A Tutorial on Modern Bayesian Methods in Clinical Trials. Ther Innov Regul Sci. 2023 May;57(3):402-416.

Ryan et al. Do we need to adjust for interim analyses in a Bayesian adaptive trial design? BMC Med Res Methodol. 2020;20:150.

Ashby, D. Bayesian statistics in medicine: a 25 year review. Stat Med. 2006 Nov 15;25(21):3589-631.


What are adaptive clinical trials? MRC Biostatistics Unit (Cambridge, UK).

What Clinicians Should Know About Adaptive Clinical Trials. Berry Consultants.

Adaptive Design Clinical Trials. Australian Clinical Trials Alliance (Webinar presented by Scott Berry, Berry Consultants).

Adaptive Trial Designs – Introduction for Non-Statisticians. Cytel.

Bayesian Way. NEJM Evidence

The Use of Historical Information in Clinical Trials. Berry Consultants

Examples of adaptive trials in nephrology

  • CALCIPHYX study [using sample size re-estimation]
  • Papachristofi O, et al. Interim decision making in seamless trial designs: An application in an adaptive dose-finding study in a rare kidney disease. Pharm Stat. 2024 Jan-Feb;23(1):20-30. doi: 10.1002/pst.2335.
  • Pritchett Y, et al. The use of group sequential, information-based sample size re-estimation in the design of the PRIMO study of chronic kidney disease. Clin Trials. 2011 Apr;8(2):165-74. doi: 10.1177/1740774511399128.
  • Schrier RW, et al. Blood pressure in early autosomal dominant polycystic kidney disease. N Engl J Med. 2014 Dec 11;371(24):2255-66. doi: 10.1056/NEJMoa1402685.