Randomized clinical trial study design
Randomization: any of a number of mechanisms used to assign participants to different groups, with the expectation that these groups will not differ in any significant way other than treatment and outcome.

Research (alternative) hypothesis: the relationship between the independent and dependent variables that researchers expect to demonstrate by conducting a study.

Sensitivity: the relationship between having a symptom of an outcome and having the outcome itself; equivalently, the percent chance of not getting a false negative (see formulas).

Specificity: the relationship between not having a symptom of an outcome and not having the outcome itself; equivalently, the percent chance of not getting a false positive (see formulas).

Type I error: rejecting a null hypothesis when it is in fact true. This is also known as an error of commission.

Type II error: failing to reject a null hypothesis when it is in fact false. This is also known as an error of omission.

Note that volunteer bias in the study population is not a strength: participants who volunteer eagerly may differ systematically from the wider population, which threatens rather than strengthens the generalizability of a study.
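The sensitivity and specificity definitions above reduce to simple ratios over a 2x2 confusion matrix. A minimal sketch, using made-up counts rather than data from any real study:

```python
# Sensitivity and specificity from a 2x2 confusion matrix.
# The counts below are illustrative only.
tp, fn = 90, 10   # outcome present: true positives, false negatives
tn, fp = 80, 20   # outcome absent: true negatives, false positives

sensitivity = tp / (tp + fn)  # chance of NOT getting a false negative
specificity = tn / (tn + fp)  # chance of NOT getting a false positive

print(sensitivity)  # 0.9
print(specificity)  # 0.8
```

Sensitivity is computed only over participants who truly have the outcome, and specificity only over those who do not, which is why the two denominators differ.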

Most randomised controlled trials focus on outcomes, not on the processes involved in implementing an intervention. Using an example from school based health promotion, this paper argues that including a process evaluation would improve the science of many randomised controlled trials.

Because of their multifaceted nature and dependence on social context, complex interventions pose methodological challenges and require adaptations to the standard design of such trials. This paper draws on experience from a cluster randomised trial of peer led sex education. Conventional RCTs evaluate the effects of interventions on prespecified health outcomes. A process evaluation may additionally examine the views of participants on the intervention; study how the intervention is implemented; distinguish between components of the intervention; investigate contextual factors that affect the intervention; monitor dose to assess the reach of the intervention; and study the way effects vary in subgroups.

The RIPPLE (randomised intervention of pupil peer led sex education) study is a cluster RCT designed to investigate whether peer delivered sex education is more effective than teacher delivered sessions at decreasing risky sexual behaviour. It involves 27 English secondary schools and long term follow-up. In the schools randomised to the experimental arm, pupils in older year groups, given brief training by an external team, delivered the programme to two successive cohorts of younger pupils.

Control schools continued with their usual teacher led sessions. The trial was informed by a systematic review and a pilot study in four schools. Several methods were used to collect process data (box 1), including questionnaire surveys, focus groups, interviews, researcher observations, and structured field notes. Some methods, such as questionnaire surveys, were also used to collect outcome data. Other methods, such as focus groups and interviews with school staff and peer educators, were specific to the process evaluation.

The outcome results by age 16 showed that the peer led approach improved some knowledge outcomes, increased satisfaction with sex education, and, in girls, reduced intercourse and increased confidence about the use of condoms. Girls in the peer led arm reported lower confidence about refusing unwanted sexual activity (borderline significance).

The incidence of intercourse before age 16 in boys, unprotected first sex, quality measures of sexual experiences or relationships (such as regretted first intercourse), confidence about discussing sex or contraception, and some knowledge outcomes for both girls and boys were not affected.

We analysed process data in two stages. Before analysing outcome data (box 2), we analysed the process data to answer questions arising from the aims outlined above. The hypotheses generated were then tested in statistical analyses that integrated process and outcome data to address three questions:

What was the relation between trial outcomes and variation in the extent and quality of the implementation of the intervention? We used several strategies to combine the different types of data. These included on-treatment analyses, in which results for students who actually received the peer led intervention were compared with results from the standard intention to treat approach, where allocation to, rather than receipt of, the intervention forms the basis of the analysis.
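The contrast between an on-treatment analysis and the intention to treat approach can be sketched with a few lines of code. The participant records and outcome values below are invented for illustration and are not data from the trial:

```python
# Hypothetical records: (assigned_arm, actually_received_peer_led, outcome).
# Intention to treat groups by assignment; on-treatment groups by receipt.
participants = [
    ("peer", True, 1), ("peer", True, 0), ("peer", False, 0),
    ("control", False, 1), ("control", False, 0), ("control", True, 1),
]

def mean_outcome(rows):
    # Average the outcome (last element) over the selected records.
    return sum(outcome for *_, outcome in rows) / len(rows)

# Intention to treat: analyse by allocated arm, regardless of receipt.
itt_peer = mean_outcome([p for p in participants if p[0] == "peer"])

# On treatment: analyse by what was actually received.
ot_peer = mean_outcome([p for p in participants if p[1]])

print(itt_peer, ot_peer)
```

Because the two selections differ only for participants whose receipt did not match their allocation (non-compliers and crossovers), any divergence between the two estimates flags exactly the implementation variation that process data can help explain.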

We then carried out regression analyses and, where appropriate, tests for interactions, to examine the relation between key dimensions of sex education, subgroups of schools and students most and least likely to benefit from the peer led programme, and study outcomes (further details are available elsewhere). More consistent implementation of the peer led programme might have had a greater impact on several knowledge outcomes and reduced the proportion of boys having sex by age 16, but it would not have changed other behavioural outcomes.

There were key interactions between the extent to which sex education is participative and skills based and who provided it: when sex education was participative and skills based, the peer led intervention was more effective, but when these methods were not used, teacher led education was more effective.
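An interaction of this kind means the provider effect reverses depending on teaching style. The mean scores below are invented purely to illustrate the arithmetic, not taken from the trial:

```python
# Hypothetical mean outcome scores by provider and teaching style.
scores = {
    ("peer", "participative"): 0.70, ("teacher", "participative"): 0.55,
    ("peer", "didactic"): 0.50, ("teacher", "didactic"): 0.62,
}

# Provider effect (peer minus teacher) within each teaching style:
effect_participative = scores[("peer", "participative")] - scores[("teacher", "participative")]
effect_didactic = scores[("peer", "didactic")] - scores[("teacher", "didactic")]

# The interaction is the difference between these two effects; opposite
# signs in the two strata are what "more effective when participative,
# less effective otherwise" looks like numerically.
interaction = effect_participative - effect_didactic
print(effect_participative, effect_didactic, interaction)
```

In a regression framework this quantity corresponds to the coefficient of a provider-by-style interaction term, which is what the tests for interactions mentioned above estimate.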

Process evaluation in the RIPPLE trial is an example of the trend to move beyond the rhetoric of quantitative versus qualitative methods. By integrating process and outcome data, we maximised our ability to interpret results according to empirical evidence. Importantly, an on-treatment approach to outcomes using process data made little difference to the results. Using process data to identify key dimensions of sex education and examining these in relation to the trial arm revealed the circumstances in which peer led sex education was most effective, as did analysis of risk for both individual schools and students.

The conclusion that the peer led approach is not a straightforward answer to the problem of school disengagement and social exclusion is an important policy message. Other recent trials of complex interventions in which integral process evaluations helped explain the outcome findings include a trial of peer education for homosexual men that had no apparent impact on HIV risk behaviour; the process evaluation established that the intervention was largely unacceptable to the peer educators recruited to deliver it.

Box 2: Methodological issues of integrating process evaluation within trials. Steps should be taken to minimise the possibility of bias and error in interpreting the findings from statistical approaches, such as on-treatment analyses and regression and subgroup analyses.

The additional costs, such as those of collecting and analysing qualitative data, would probably be balanced by greater explanatory power and understanding of the generalisability of the intervention. The move towards combining process and outcome evaluation builds on other, related methodological developments. For example, more attention is being paid to the need for pilot and consultation work in the development phase of RCTs (23), the importance of a more theory based approach to evaluation (24), and the modification of intervention effects by context in community intervention trials.

Some of the methods we suggest—such as allowing hypotheses about the effectiveness of interventions to be derived from process data collected during a trial, and drawing conclusions with reservations from on-treatment and subgroup analyses—go against the conventions of clinical RCTs.

A detailed process evaluation should be integral to the design of many randomised controlled trials.

Tips: choose a significant endpoint that can be simply and practically verified. RCTs typically assess a single intervention or treatment in a limited and controlled setting.

This is because of the strengths and restrictions associated with the nature of this kind of study. For these reasons, RCTs have certain limitations in exploring composite interventions in complex populations.

They also have limitations in investigating rare outcomes or delayed effects. Thus, it is imperative to choose a clear hypothesis, verifiable by a limited number of strong and clinically significant endpoints. The general objective of RCTs is to obtain results that are easily and concretely applicable for clinicians and that can be easily implemented in ordinary clinical settings.

The aim is to translate scientific knowledge into medical decision-making strategies. For example, Wolfe et al. evaluated thymectomy in myasthenia gravis: the robust underlying hypothesis, the clear endpoints, and the relevance of the intervention highlight how this RCT has strong validity and helped change clinical practice. On the other hand, Kleinberg et al.

Tips: find an equilibrium between very strict and selective criteria (a standardized patient group) and more heterogeneous conditions (external validity of the results). Always take account of possible under-recruitment and loss to follow-up. A precise statistical preparation of the RCT must take into account the selection criteria and the power needed to obtain valuable results. The patient selection criteria (both inclusion and exclusion) must be chosen to avoid possible confounding factors and to exclude patients in whom the intervention is useless or dangerous.

Moreover, the selection criteria should not be too strict; the risk is conducting the RCT in an overly selected population and obtaining results that are not generalizable to actual clinical practice. A sufficient sample size is fundamental to detect a reliable statistical difference between the study groups. The sample size needed to reach adequate power is inversely proportional to the square of the intervention effect. Consequently, considering that the effect of the studied intervention is frequently relatively small, the number of patients needed is relatively large.
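The inverse-square relation can be made concrete with the standard approximation for comparing two means, n per arm = 2(z_alpha + z_beta)^2 * sigma^2 / delta^2. A sketch, with the z-values hardcoded for a two-sided alpha of 0.05 and 80% power (the function name and default parameters are illustrative):

```python
import math

def n_per_group(delta, sigma, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per arm for comparing two means.

    delta: clinically relevant difference; sigma: outcome SD.
    The default z-values correspond to two-sided alpha = 0.05
    and 80% power.
    """
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Halving the expected effect roughly quadruples the required sample:
print(n_per_group(delta=0.5, sigma=1.0))   # 63
print(n_per_group(delta=0.25, sigma=1.0))  # 251
```

This is why trials chasing modest treatment effects need large enrolments, and why an optimistic effect estimate at the design stage so often produces an underpowered study.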

Nevertheless, an insufficient sample size is a frequent problem in many published RCTs. For example, Portier et al.

The results indicated an effect of adjuvant chemotherapy, with improved progression-free survival but no statistically significant effect on overall survival. However, a trend towards improved overall survival was observed, and significance was probably not reached because of lacking statistical power due to a small sample size.

Indeed, diverse RCTs on adjuvant chemotherapy regimens after liver metastasectomy are usually underpowered to reach significant conclusions. Conversely, Pompili et al. The proper number of recruited patients permitted them to demonstrate a significant improvement from digital chest drain utilization. Another recurrent problem in many RCTs is a low recruitment rate, due to recruitment difficulties, inadequate selection criteria, or patient unwillingness. Moreover, some patients leave the intervention group because of patient or physician choices or treatment complications.

Finally, other patients will be lost to follow-up, and defining the outcome for these patients will not be possible. Therefore, it is mandatory to anticipate that the RCT will be completed in only a relatively small percentage of the potentially eligible population. For example, the ENG trial on adjuvant systemic treatment after liver resection for colorectal metastasis closed prematurely due to poor recruitment.

The final analysis was carried out on the patients enrolled, and no effect on overall survival was observed. Tips: choose and report the randomization method correctly; balance the study groups using stratification techniques; at a minimum, blind the outcome evaluation; always apply intention-to-treat analysis. A key aspect of RCTs is the method of randomization: with random allocation, the study groups will differ only in the type of treatment assigned, avoiding selection bias. One of the most important aspects of randomization is the impossibility of determining a priori the allocation of each patient.

Consequently, it is mandatory to report all aspects of the randomization process, including the randomization method. For example, Pompili et al. Despite randomization, especially in small RCTs, the risk of imbalance in important prognostic factors among groups can remain relevant. Stratification allows groups to be balanced over the predictors of interest and increases the power of the analysis. Blinding is used to prevent possible bias derived from knowledge of a patient's group allocation.
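One common randomization method is permuted-block allocation, which keeps the arms balanced as recruitment proceeds; in a stratified design, one such sequence is generated per stratum. A sketch, where the function name, block size, and seed are illustrative assumptions:

```python
import random

def permuted_block_allocation(n_patients, block_size=4, seed=42):
    """Allocate patients to arms A/B in random permuted blocks.

    Each complete block contains equal numbers of A and B, so the
    running imbalance between arms never exceeds block_size / 2.
    """
    rng = random.Random(seed)  # seeded only to make this sketch reproducible
    allocation = []
    while len(allocation) < n_patients:
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)  # random order within the block
        allocation.extend(block)
    return allocation[:n_patients]

arms = permuted_block_allocation(20)
print(arms.count("A"), arms.count("B"))  # 10 10
```

In a real trial the sequence would be generated and concealed centrally, since a predictable allocation list reintroduces the very selection bias randomization is meant to remove.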

In these cases, at least the personnel dedicated to evaluating the response to treatment should not have information regarding group allocation. This avoids subjectivity in outcome assessment and ensures the reliability and objectivity of the results (21). For example, in the recent trial evaluating the effect of surgical thymectomy on clinical outcomes in non-thymomatous myasthenia gravis patients, double blinding was fairly difficult.

Thus, to preserve rater blinding at least for the outcome, patients were evaluated 4 months after the surgical procedure by a neurologist who was unaware of the trial-group assignments. Intention-to-treat (ITT) analysis is a solid method to avoid analytical bias. Patients who did not undergo the planned intervention are not excluded from the RCT, which prevents the possible bias of patient withdrawal or crossover.


