Study design for IR projects

Similar to other types of research, study designs used in IR can be interventional or observational. In an interventional research design, the researcher manipulates a situation or exposure and then measures the outcome of that manipulation. In an observational study design, the researcher observes and analyses situations of interest without intervening. Non-intervention studies can be exploratory, descriptive or comparative (analytical) studies, while intervention studies can be experimental, quasi-experimental, before-and-after studies or randomized controlled trials. Each of these study designs is briefly explained below.

Descriptive

Descriptive studies are used when you want to describe the implementation of health-related interventions and any problems or barriers within that context. Depending on your familiarity with the subject of the study, different study designs can be used to answer your research questions. If the subject is new and no prior knowledge exists, you can conduct an exploratory study using qualitative methods. The results from this qualitative study can then be used to develop subsequent research, using quantitative methods, to measure the extent to which these problems occur. A descriptive study can also begin with quantitative methods (e.g. a survey) to quantify the barriers to the intervention, followed by qualitative methods to describe the context in which the implementation problems exist. More details about methods are provided later in this module.

Most surveys used within descriptive studies use a cross-sectional design, which is relatively simple and inexpensive and is useful for investigating contexts with many variables to take into consideration. Data from repeated cross-sectional surveys provide useful indicators of trends, provided that they are based on representative, independent, random samples and use standardized definitions. Each survey should have a clear purpose. A valid survey needs a well-designed questionnaire, an appropriate sample of sufficient size, a scientific sampling method and a good response rate.
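
To make "an appropriate sample of sufficient size" concrete, the sketch below applies the standard formula for estimating a single proportion, n = Z²·p(1−p)/d², and inflates the result for anticipated non-response. The prevalence, precision, confidence level and response rate used here are illustrative assumptions only, not recommended values.

```python
import math

def survey_sample_size(expected_proportion, margin_of_error,
                       confidence_z=1.96, expected_response_rate=1.0):
    """Sample size to estimate a single proportion in a cross-sectional survey.

    Uses n = Z^2 * p * (1 - p) / d^2, then inflates the result to
    compensate for anticipated non-response.
    """
    n = (confidence_z ** 2) * expected_proportion * (1 - expected_proportion) \
        / (margin_of_error ** 2)
    return math.ceil(n / expected_response_rate)

# Illustrative values only: 30% expected coverage of an intervention,
# +/- 5 percentage points precision, 95% confidence, 80% response rate.
print(survey_sample_size(0.30, 0.05, expected_response_rate=0.80))  # -> 404
```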

Analytical

Analytical studies investigate causal relationships between the independent and dependent variables under study. Traditionally, cohort or case-control designs are used in non-intervention studies to establish likely causal relationships; of the two, the cohort design is more commonly used in IR.

In a cohort study, the researcher recruits a group of people – who are free from the disease of interest, for example – and classifies them into sub-groups according to exposure status. The sub-groups are then followed up to observe the subsequent development of specific outcomes, such as particular health conditions. A cohort design can be used to measure typical IR-related outcomes over time (e.g. acceptability, adoption, appropriateness, feasibility, fidelity of interventions, implementation costs and cost-effectiveness, determinants of coverage, and sustainability/maintenance). This design produces high-quality, individual-level data, enabling researchers to examine whether better implementation outcomes are associated with exposures at the individual level, including the timing and direction of any effects.1 A cohort design can also be used to assess the uptake and retention of patients in specific services, particularly for chronic illnesses, such as the continuation of antiretroviral (ARV) therapy among people living with HIV or treatment adherence among people with multidrug-resistant TB (MDR-TB).
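
As a purely hypothetical illustration of how individual-level cohort data can be used to track retention in care over time (for example, continuation of ARV therapy), the sketch below computes the proportion of an enrolled cohort still documented as in care at successive follow-up visits. The record structure, field names and follow-up points are assumptions made for the example.

```python
# Hypothetical cohort records: each person lists the follow-up months at which
# they were documented as still in care.
cohort = [
    {"id": "P01", "visits_in_care": [3, 6, 12]},
    {"id": "P02", "visits_in_care": [3, 6]},
    {"id": "P03", "visits_in_care": [3]},
    {"id": "P04", "visits_in_care": [3, 6, 12]},
    {"id": "P05", "visits_in_care": []},          # lost before first follow-up
]

def retention_at(cohort, month):
    """Proportion of the enrolled cohort documented as in care at a given month."""
    retained = sum(1 for person in cohort if month in person["visits_in_care"])
    return retained / len(cohort)

for month in (3, 6, 12):
    print(f"Retention at month {month}: {retention_at(cohort, month):.0%}")
# -> 80%, 60%, 40%
```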

Analytical studies can also use a cross-sectional design. However, such designs cannot establish causal relationships between independent and dependent variables, because both variables are measured at the same time.

Experimental research is the type of research best able to establish cause and effect. The randomized controlled trial (RCT), in particular, is known for establishing causal relationships because it controls for confounders and ensures that the only difference between the study arms is the intervention in question. In an experimental study, the researcher is interested in the effect of an independent variable (also known as the experimental or treatment variable) on one or more dependent variables (also known as the criterion or outcome variables). In effect, the researcher changes the independent variable and measures the dependent variable(s). There are usually two groups of subjects in experimental research: the experimental group, which receives an intervention (e.g. is taught by a new teaching method or receives a new drug), and the control group, which receives no intervention (e.g. continues to be taught by the old method or receives a placebo). Sometimes a comparison group is used in addition to, or instead of, a control group; the comparison group receives a different treatment from the experimental group. Control and/or comparison groups are critical in experimental research because they allow the researcher to determine whether the intervention had an effect, or whether one intervention was more effective than another.

The following are different types of experimental studies.

Randomized controlled trial (RCT)

This is the ‘gold standard’ for efficacy studies in clinical trials. IR, on the other hand, focuses more on the generalizability of results to different settings than on the efficacy of a given intervention, so the RCT is not a commonly used design in IR. In an RCT, subjects are randomly assigned to the treatment and control groups to ensure that the groups are homogeneous before the intervention is applied and that the intervention is the only difference between them. Randomization is used to ensure internal validity.
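
For illustration, the sketch below shows one common way random assignment can be implemented in practice, using randomly permuted blocks so that the two arms remain balanced as participants are enrolled. The block size, arm labels and participant identifiers are assumptions for the example; real trials typically generate and conceal the allocation sequence centrally.

```python
import random

def block_randomize(participant_ids, arms=("intervention", "control"),
                    block_size=4, seed=2024):
    """Assign participants to arms using randomly permuted blocks.

    Each block contains an equal number of slots per arm, so arm sizes
    never differ by more than half a block during enrolment.
    """
    assert block_size % len(arms) == 0
    rng = random.Random(seed)  # fixed seed only so the example is reproducible
    assignments = {}
    block = []
    for pid in participant_ids:
        if not block:
            block = list(arms) * (block_size // len(arms))
            rng.shuffle(block)
        assignments[pid] = block.pop()
    return assignments

print(block_randomize([f"P{i:02d}" for i in range(1, 9)]))
```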

Quasi-experimental

This study design is similar to the RCT but lacks the key characteristic of random assignment. It is frequently used when it is not logistically feasible or ethical to conduct an RCT. Assignment to the treatment group uses criteria other than randomization, e.g. matching individuals or groups on sociodemographic factors. The quasi-experimental design is well suited to IR because it allows real-life factors – such as cost, feasibility and political concerns – to be integral to the study.
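
As a sketch of assignment using criteria other than randomization, the example below pairs each intervention facility with the available comparison facility that is closest on a small set of structural characteristics. The facilities, matching variables and distance measure are all hypothetical; real matching procedures may instead use propensity scores or other established methods.

```python
# Hypothetical facility characteristics used for matching (not randomization):
# catchment population in thousands and monthly patient volume.
intervention_sites = {"Clinic A": (25, 400), "Clinic B": (60, 900)}
candidate_controls = {"Clinic C": (22, 380), "Clinic D": (65, 850), "Clinic E": (10, 120)}

def distance(a, b):
    """Simple Euclidean distance over the matching variables."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

matches = {}
available = dict(candidate_controls)
for site, profile in intervention_sites.items():
    best = min(available, key=lambda c: distance(profile, available[c]))
    matches[site] = best
    del available[best]   # use each comparison site only once

print(matches)  # -> {'Clinic A': 'Clinic C', 'Clinic B': 'Clinic D'}
```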

Pragmatic trials

Pragmatic trials evaluate the effects of health service interventions under the human, financial and logistic constraints of typical, real-world situations. The aim of this study design is to measure effectiveness rather than efficacy.2, 3 In contrast to an efficacy study, in which participants are recruited from a homogeneous sub-population (e.g. by gender, age or ethnicity) and randomly assigned to the study arms, a pragmatic trial allows a higher degree of variation among study participants: they are selected from a real clinical or population setting so as to be representative of the population. To improve the validity of pragmatic trials, randomization is often conducted at the facility level (cluster randomization) rather than at the individual level.

As the effectiveness of a treatment is influenced by the extent to which an intervention is acceptable to patients, a pragmatic trial not only measures the treatment outcome but also evaluates measures designed to increase effectiveness. For example, while patients in both the control and intervention arms receive identical treatment, the intervention group also receives additional measures to increase treatment acceptance or adherence (e.g. counselling, home visits or mobile phone reminders).

Stepped-wedge cluster randomized trial

This is a variant of the cluster randomized trial in which clusters are randomly allocated to the time point at which they receive the intervention. In this design, every cluster spends time in both the control and intervention conditions (Figure 2). The clusters can be geographical areas, clinics or other types of facility.4 The advantage of this design is that each cluster serves as its own control. The design also addresses ethical concerns that arise when randomizing patients to an intervention believed to be inferior, or withdrawing an intervention believed to be superior, would be considered unethical.
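
To make the crossover pattern concrete, the following sketch builds a simple stepped-wedge schedule: clusters are randomly ordered and one cluster crosses from control to intervention at each step, so every cluster contributes both control and intervention periods. The cluster names, number of steps and one-cluster-per-step pattern are illustrative assumptions.

```python
import random

def stepped_wedge_schedule(clusters, seed=42):
    """Build a simple stepped-wedge schedule: after a baseline period, one
    randomly chosen cluster crosses from control (C) to intervention (I)
    at each step and then stays on the intervention."""
    rng = random.Random(seed)        # fixed seed only so the example is reproducible
    order = clusters[:]
    rng.shuffle(order)               # random allocation of crossover times
    n_periods = len(clusters) + 1    # baseline period + one step per cluster
    return {cluster: ["I" if period >= step else "C" for period in range(n_periods)]
            for step, cluster in enumerate(order, start=1)}

schedule = stepped_wedge_schedule(["Clinic A", "Clinic B", "Clinic C", "Clinic D"])
for cluster, exposure in schedule.items():
    print(cluster, exposure)
# Each row shows one cluster's exposure across the study periods: the cluster
# allocated to the first step reads ['C', 'I', 'I', 'I', 'I'], the last reads
# ['C', 'C', 'C', 'C', 'I'], so every cluster contributes both control and
# intervention periods.
```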

Adaptive design trial

This variant of experimental design anticipates intentional changes to the trial plan: data collected during the trial are used to make decisions about the trial while it is ongoing. The objective of an adaptive design is to preserve the validity of the study while retaining the flexibility to identify the optimal treatment. Researchers can modify both statistical and trial procedures. Adaptations of statistical procedures can include the sample size, randomization, study design, data monitoring and the analysis plan; adaptations of trial procedures can include the eligibility criteria, enrolment design, study dose, treatment duration (including early stopping), follow-up design, study endpoints or laboratory/diagnostic procedures.5, 6 Figure 3 describes the adaptive sequencing of trials: changes in subsequent trials are conditional on the outcome of the previous trial and/or certain parameter values.

From an ethical standpoint, adaptive designs are advantageous because they enable the researcher to detect differences in outcomes early and allow the intervention to be modified while the trial is ongoing. However, this flexibility can limit the precision with which the treatment effect is measured for each group.
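
As a deliberately simplified sketch of using accumulating data to make decisions during the trial, the example below performs interim looks and stops enrolment early once the observed difference in success proportions crosses a prespecified margin. Real adaptive designs rely on formal group-sequential or Bayesian stopping rules; the margin and interim data here are illustrative assumptions only.

```python
def interim_decision(successes_a, n_a, successes_b, n_b, stop_margin=0.15):
    """Crude interim rule: stop early if the observed difference in success
    proportions exceeds a prespecified margin, otherwise continue enrolling.
    (A real trial would use a formal group-sequential boundary instead.)"""
    diff = successes_a / n_a - successes_b / n_b
    return "stop early" if abs(diff) >= stop_margin else "continue"

# Illustrative interim looks as data accumulate during the trial.
looks = [
    (12, 30, 10, 30),   # look 1: 40% vs 33% -> continue
    (30, 60, 19, 60),   # look 2: 50% vs 32% -> stop early
]
for i, (sa, na, sb, nb) in enumerate(looks, start=1):
    print(f"Interim look {i}: {interim_decision(sa, na, sb, nb)}")
```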

Table 2 provides a summary of the study designs reviewed in this module, according to the stage of the intervention under study. The table has been adapted from Bowen et al. (2009)3 to reflect IR questions.


References