Depending on the research questions you want to answer and the type of data you have collected (i.e. quantitative or qualitative data), different types of analysis can be performed. Before we begin to analyse the data, we need to remember the different audiences to be reached with the results and recommendations of the IR project. What are their needs for information, and what is the best way to reach them?
To ensure that the analysis is undertaken in a systematic manner, an analysis plan should first be created. The analysis plan should contain a description of the research question and the various steps that will be followed in the research process. It is best practice to develop your data analysis plan at the start of your project, in order to capture the hypotheses you have about your research question. You may amend the data analysis plan as your research progresses.
Designing data analysis in an IR project is based on the premise that IR aims to: (i) understand the implementation processes for a given intervention, focusing on mechanisms that support or constrain those processes; and (ii) communicate that understanding of the implementation process to multiple stakeholders, who may consequently contribute to the integration of findings into current and/or future research, policy and/or programming.
Most IR proposals use mixed methods, in which quantitative and qualitative techniques are combined. Under many circumstances, mixed-methods approaches can provide a better understanding of the problem than either approach can achieve alone. However, few of the stakeholders in the IR project team are likely to have specialized knowledge of both quantitative and qualitative research methods. It is therefore essential that the analysis and, most importantly, the presentation of findings be carefully considered, to avoid potential misinterpretations that could lead to inappropriate conclusions and/or responses. Emphasis should be placed on simplicity and interpretability, because stakeholders need both to understand the information provided and to interpret it correctly.21 Data analysis should take place alongside the data collection process; this continual analysis facilitates regular sharing and discussion of findings.
An important preliminary consideration when designing your data analysis plan is to clearly define the primary objectives of the analysis by identifying the specific issues to be addressed. It is important to remember that data from IR is, by its nature, intended not simply to describe an intervention but also to improve it.
For example, IR research may focus on:
Before any statistical analysis is undertaken, some factors need to be taken into account in order to select the most appropriate statistical analysis approach. These are described briefly below.
Measurement scale is a way to define and categorize variables. There are four different measurement scales (nominal, ordinal, continuous and ratio). Each measurement scale has different properties, which determine the statistical analyses that can be applied. Table 15 summarizes the properties of the different measurement scales, described in detail below.
The nominal scale can only differentiate between categories; we cannot say that one category is higher or better than another. An example of a nominal scale is gender. If we code Male as 1 and Female as 2, or vice versa (i.e. when we enter the variable into the computer), it does not mean that one gender is better than the other: the numbers 1 and 2 only represent categories of data.
Ordinal scales represent an ordered series of relationships or rank order. However, we cannot quantify the difference between the categories. We can only say that one category is better or higher than the other categories. An example of an ordinal scale is the level of a health facility (e.g. primary, secondary, tertiary).
Continuous scales represent a rank order with equal units of quantity or measurement. However, on this scale, zero simply represents an additional point of measurement, not the lowest value. An example of such a scale is temperature in Celsius or Fahrenheit, where zero (0) is one point on the scale with numbers above and below it.
The ratio scale is similar to the continuous scale in that it represents a rank order with equal units of quantity or measurement. However, the ratio scale has an absolute zero, at which zero is the lowest value. An example of a ratio scale is body mass index (BMI), for which the lowest value (theoretically) is zero.
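To make these distinctions concrete, the sketch below (in Python with pandas; the variables and codes are invented for illustration, not taken from the Toolkit) shows how each scale type might be declared so that inappropriate numeric operations are avoided:

```python
# Illustrative sketch of the four measurement scales; data are invented.
import pandas as pd

df = pd.DataFrame({
    "sex_code":       [1, 2, 2, 1],                  # nominal: 1 = Male, 2 = Female (labels only)
    "facility_level": ["primary", "tertiary",
                       "secondary", "primary"],       # ordinal: ranked, gaps not quantifiable
    "temperature_c":  [36.5, 38.2, 37.0, 39.1],       # continuous: zero is just another point
    "bmi":            [21.4, 30.2, 18.9, 25.6],       # ratio: absolute zero exists
})

# Declare the nominal variable as an unordered category so that numeric
# operations (e.g. a mean of the codes) are not performed by mistake.
df["sex_code"] = df["sex_code"].astype("category")

# Declare the ordinal variable with an explicit rank order.
df["facility_level"] = pd.Categorical(
    df["facility_level"],
    categories=["primary", "secondary", "tertiary"],
    ordered=True,
)

print(df.dtypes)
```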
Continuous and ratio data are referred to as parametric, as these types of data have certain parameters with regard to the distribution of the population as a whole (an assumption of normal distribution, with the mean as a measure of central tendency and the variance as a measure of dispersion). Parametric also means that the data can be added, subtracted, multiplied and divided. The statistical analyses for these types of data are referred to as parametric tests.
On the other hand, nominal and ordinal scales are referred to as non-parametric. Non-parametric data lack the parameters that parametric data have. Furthermore, they lack quantifiable values, and as such non-parametric data cannot be added, subtracted, multiplied or divided. Nominal and ordinal data are analysed using non-parametric tests.
A parametric test is considered to be more robust than a non-parametric test. Furthermore, there are more statistical options available for analysing parametric data. However, most parametric tests assume that the data is normally distributed.
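For example, a minimal sketch (Python with scipy; the sample is simulated) of checking the normality assumption before choosing between a parametric and a non-parametric test:

```python
# Check normality before selecting a statistical test; data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=120, scale=15, size=40)  # e.g. systolic blood pressure

stat, p = stats.shapiro(sample)  # Shapiro-Wilk test of normality
if p > 0.05:
    print(f"No evidence against normality (p = {p:.3f}); a parametric test may be used.")
else:
    print(f"Data depart from normality (p = {p:.3f}); consider a non-parametric test.")
```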
The way we formulate research questions also determines what kind of statistical techniques need to be used for analysis. Examples of IR questions include:
Quantitative research generates large volumes of data that require organizing and summarizing. These operations facilitate a better understanding of how the data vary or relate to each other. The data reveals distributions of the values of study variables within a study population. For example:
Analysis of the type of data described above essentially involves the use of techniques to summarize these distributions and to estimate the extent to which they relate to other variables.
The use of frequency distributions for this purpose has several advantages:
The different data presentation formats help to reach different target audiences. Tables are a useful presentation format when you want to communicate within the scientific community. Graphical data presentations help to communicate with a wider, less scientific, audience in the community or policy makers. You can read further about data presentation and how to present data to different audiences in the Advocacy and Communication module of this Toolkit.
A key decision in constructing a frequency distribution relates to the choice of intervals along the measuring scale. For example:
There are two conflicting objectives when determining the number of intervals:
Note: Distributions based on unequal intervals should be used with caution, as they can be easily misinterpreted, especially when distributions are presented graphically.
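As an illustration of interval choice, a minimal sketch (Python with pandas; the ages are invented) of building a frequency distribution over equal-width intervals:

```python
# Build a frequency distribution with equal intervals; ages are invented.
import pandas as pd

ages = pd.Series([23, 31, 35, 42, 47, 52, 38, 29, 61, 44, 33, 57])

# pd.cut assigns each value to an interval of equal width.
bins = [20, 30, 40, 50, 60, 70]
freq = pd.cut(ages, bins=bins, right=False).value_counts().sort_index()
print(freq)

# Relative frequencies (proportions) are often easier to compare across groups.
print((freq / freq.sum()).round(2))
```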
Careful examination of the frequency distribution of a variable is a crucial step and can be an extremely powerful and robust form of analysis. There can be a tendency to move too quickly to the calculation of simpler summary statistics (e.g. mean, variance) that are intended (but often fail) to capture the essential features of a distribution.
Summary statistics usually focus on deriving measures that indicate the central location of a distribution (e.g. how sick, poor or educated a study population is, on average) or the extent of variation within a population. However, the reasons for selecting a particular summary statistic should relate to the purpose for which it is intended.
The central tendency measures the central location of a data distribution. The mean, or average, is the most commonly used parameter because the mean is simple to calculate and manipulate. For example, it is straightforward to combine the mean of sub-populations to calculate the overall population mean. However, the mean is often inappropriately used. It can also be misinterpreted as the typical value in a population.
The median, defined as the middle value, is relatively easy to explain. The magnitudes of other values are irrelevant: for example, if the largest value in a given range increases or the smallest value decreases, the median remains unchanged. When a data set is not skewed (i.e. when data is distributed ‘normally’), the mean and the median are the same (Figure 5). It is therefore preferable to use the median as a measure of central tendency when the data set is skewed, as its value is independent of the shape of the data distribution.
In a skewed distribution, the mean is difficult to interpret (Figure 6).
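A small worked illustration (Python; the income figures are invented) of how one extreme value pulls the mean but not the median:

```python
# Contrast mean and median on a skewed variable; figures are invented.
import numpy as np

income = np.array([200, 220, 250, 260, 280, 300, 320, 5000])  # one extreme value

print(f"mean   = {income.mean():.0f}")      # pulled upward by the outlier
print(f"median = {np.median(income):.0f}")  # unaffected by the extreme value
```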
Measure of dispersion denotes how much variability occurs in a given population, as follows:
Variances, standard deviations and coefficients of variation are widely used in statistical analysis. As with the mean, this is not because they are always the best measures of variability (they can be easily interpreted for normally distributed variables but not for other distributions), but mainly because they can be readily calculated and manipulated.
For example, given the variances of two population sub-groups it is easy to combine them to calculate the overall population variance. However, while they may have technical advantages, these measures have serious limitations in terms of policy application.
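A minimal sketch (plain Python; the sub-group summaries are invented) of this combination:

```python
# Combine two sub-groups given size n, mean m and (population) variance v.
def combine(n1, m1, v1, n2, m2, v2):
    n = n1 + n2
    m = (n1 * m1 + n2 * m2) / n
    # E[X^2] within each group is v + m^2; the overall variance is the
    # weighted mean of E[X^2] minus the square of the overall mean.
    ex2 = (n1 * (v1 + m1**2) + n2 * (v2 + m2**2)) / n
    return m, ex2 - m**2

mean, var = combine(n1=120, m1=34.0, v1=25.0, n2=80, m2=40.0, v2=36.0)
print(f"overall mean = {mean:.2f}, overall variance = {var:.2f}")
```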
More readily interpreted measures include quartiles and percentiles. Quartiles: divide data into four quarters (Q1 to Q4), with 25% of available data in each:
Q3–Q1 is the inter-quartile range, comprising the middle 50% of a population. A given percentile divides the data into two parts: the stated percentage of values lies below it, and the remainder lies above it.
Other common percentiles:
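A minimal sketch (Python with numpy; the sample values are invented) of computing quartiles, the inter-quartile range and other common percentiles:

```python
# Compute quartiles and percentiles; sample values are invented.
import numpy as np

data = np.array([3, 7, 8, 5, 12, 14, 21, 13, 18, 9, 6, 11])

q1, median, q3 = np.percentile(data, [25, 50, 75])
print(f"Q1 = {q1}, median = {median}, Q3 = {q3}")
print(f"inter-quartile range = {q3 - q1}")  # middle 50% of the data
print(f"10th and 90th percentiles = {np.percentile(data, [10, 90])}")
```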
Other descriptive statistics
The outcomes of an intervention may vary substantially between different sub-groups of the target population. Sub-group analysis can be complex if the sub-groups are not pre-defined. Investigating a relationship within a sub-group simply because it appears interesting could bias the findings.
Data mining (i.e. exploring data sets to discover apparent relationships) is useful in formulating new hypotheses but requires great caution in IR. The context within which this sub-analysis is undertaken should be considered carefully, because relationships between inputs and outcomes may be mediated by contextual variables. For example, we might assume that it would be useful to undertake an analysis of chronic illness by age group and sex, as shown in Table 16. For meaningful interpretation of the results, the type of chronic illness and the background of the patients experiencing them will be important variables to consider.
Although measures of risk are widely used in health research, they are not always well understood. For example, risk and odds are often used interchangeably; however, they do not mean the same thing:
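A small worked illustration (Python; the counts are invented) of the difference:

```python
# Risk versus odds, using invented counts:
# 20 of 100 patients experience an adverse event.
events, total = 20, 100
non_events = total - events

risk = events / total        # probability of the event: 20/100 = 0.20
odds = events / non_events   # events per non-event:     20/80  = 0.25

print(f"risk = {risk:.2f}, odds = {odds:.2f}")
# Risk and odds are numerically similar only when the event is rare.
```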
A statistical test is performed so that we can make inferences concerning some unknown aspects of a statistical population from the sample that we have collected in a study. There are different types of statistical tests that we can use depending on the research questions, the type of measurement scale and assumptions about the data distribution. Simple univariate and bivariate analyses should be done before a more sophisticated analysis, such as multivariate analysis, is undertaken.
Association is a relationship between two variables which are statistically dependent. The two variables are equivalent; there are no independent and dependent variables. Correlation can be considered as one type of association where the relationship between variables is linear. There are several statistical tests to assess the correlation between variables (Table 17).
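For example, a minimal sketch (Python with scipy; the paired measurements are invented) of two commonly used correlation tests, one parametric and one non-parametric:

```python
# Correlation tests; paired measurements are invented.
from scipy import stats

x = [2, 4, 5, 7, 9, 11, 13, 15]       # e.g. outreach visits per month
y = [10, 14, 15, 19, 24, 25, 30, 33]  # e.g. referrals recorded

r, p = stats.pearsonr(x, y)       # parametric: assumes a linear relationship
rho, p_s = stats.spearmanr(x, y)  # non-parametric: based on rank order

print(f"Pearson r = {r:.2f} (p = {p:.3f})")
print(f"Spearman rho = {rho:.2f} (p = {p_s:.3f})")
```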
Group comparison analysis is used to explore the statistically significant difference of study outcomes between groups. The groups can be categorized by exposures under study. When there is a significant difference between groups we assume that the difference is due to the exposures (Table 18).
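For example, a minimal sketch (Python with scipy; the group values are invented) of comparing an outcome between two exposure groups with a parametric test and its non-parametric alternative:

```python
# Compare an outcome between two exposure groups; values are invented.
from scipy import stats

intervention = [34, 38, 41, 36, 39, 42, 37, 40]
control      = [30, 33, 31, 35, 29, 34, 32, 31]

t, p_t = stats.ttest_ind(intervention, control)     # parametric (normal data)
u, p_u = stats.mannwhitneyu(intervention, control)  # non-parametric alternative

print(f"t-test:        t = {t:.2f}, p = {p_t:.4f}")
print(f"Mann-Whitney:  U = {u:.1f}, p = {p_u:.4f}")
```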
Regression analysis is used to predict a study outcome from a number of independent variables. If the outcome variable is on a continuous or ratio scale and the data have a normal distribution, we can use linear regression. If the outcome variable is dichotomous, i.e. the variable has only two possible values, such as “yes” or “no”, we can use logistic regression.
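For example, a minimal sketch (Python with statsmodels; the data frame and variable names are invented) of fitting a linear regression for a continuous outcome and a logistic regression for a dichotomous outcome:

```python
# Linear and logistic regression; data and variable names are invented.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "bmi":      [21, 24, 28, 31, 26, 35, 22, 30],
    "age":      [25, 34, 45, 52, 38, 60, 29, 48],
    "attended": [1, 1, 0, 1, 0, 0, 1, 0],  # dichotomous outcome (yes/no)
})

# Continuous outcome -> linear regression.
linear = smf.ols("bmi ~ age", data=df).fit()
print(linear.params)

# Dichotomous outcome -> logistic regression.
logistic = smf.logit("attended ~ age", data=df).fit(disp=False)
print(logistic.params)
```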
There are many traditions of qualitative research, and it has been argued that there cannot and should not be a uniform approach to qualitative analysis methods (Bradley et al. 2007).22 Similarly, there are few ‘agreed-on’ canons for qualitative data analysis, in the sense of shared ground rules for drawing conclusions and verifying sturdiness.23 Many qualitative studies adopt an iterative strategy: collect some data, construct initial concepts and hypotheses, test them against new data, then revise the concepts and hypotheses. This approach implies that data collection and analysis are embedded in a single process and are undertaken by the same individuals. However, with the increasing use of qualitative methods in health research, objectives are often pre-defined prior to the start of data collection, rather than being developed as insights emerge from the data collected.
Researchers can also use several computer-assisted qualitative data analysis (QDA) software packages to help them manage their data. The term “QDA software” is slightly misleading, because the software does not actually analyse the data but organizes them to make it easier to find and identify themes. Software can also be relatively expensive (up to around US$ 900 per single user). For these reasons, some researchers prefer analysing data manually. However, as the software improves, researchers are finding QDA software increasingly useful in helping to analyse data and save time. Some of the more common QDA software packages include:
Researchers should feel free to use whatever analysis method (with or without software) they are comfortable with. Whatever approach is used, all qualitative analyses involve making sense of large amounts of data, identifying significant patterns and communicating the essence of what the data reveal.
Qualitative data analysis consists of data management, data reduction and coding of data. In short, the goal is to identify patterns (themes) in the data and the links that exist between them. As mentioned, there is no set formula for analysing qualitative data, but there are three core requirements of qualitative analysis to adhere to:
The following steps describe these three core components in more detail:
The research team must ensure scientific rigour in the analysis of qualitative data. For example, will your study provide participants with a copy of their interview transcripts to give them an opportunity to verify and clarify their points of view? Will you use software to help manage your data and increase rigour? Will you conduct member checks, i.e. have more than one researcher analyse sections of the data to compare and verify results (inter-rater reliability)? Will you triangulate the data to increase rigour? Will you report disconfirming evidence?
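Where inter-rater reliability is assessed quantitatively, Cohen’s kappa is one common measure; a minimal sketch (Python with scikit-learn; the two raters’ theme codes are invented):

```python
# Quantify agreement between two coders; the codes are invented.
from sklearn.metrics import cohen_kappa_score

rater_a = ["access", "cost", "access", "trust", "cost", "access", "trust", "cost"]
rater_b = ["access", "cost", "trust", "trust", "cost", "access", "trust", "access"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level
```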
In quantitative studies, reliability means repeatability and independence of findings from the specific researchers generating those findings. In qualitative research, reliability implies that given the data collected, the results are dependable and consistent.10 The strength of qualitative research lies in validity (closeness to truth). Good qualitative research, using a selection of data collection methods, should touch the core of what is going on rather than just skimming the surface. When analysing your qualitative data, look for internal validity, where an in-depth understanding will allow you to counter alternative explanations for your findings.
The basic process for the analysis of text derived from qualitative interviews or discussions is relatively straightforward and includes:
One relatively simple approach is based on the identification of key topics, referred to as ‘domains,’ and the relationships between them.
There are four stages in domain/theme analysis:
After listing the domains, it is useful to start arranging the actual segments of text into the primary domains. This process groups actual phrases together and allows the sub-categories to emerge directly from the interviewees’ own words.
This stage involves identifying relationships between the domains or topics to build up an overall picture. Within the collection of actual quotations from respondents, the researcher should identify statements that relate one topic to another. For example, in the study described above, researchers were able to establish associations between the domains that linked women’s previous experiences, risk perceptions and socioeconomic situation and their evaluations of health services (Figure 9).
Following an initial analysis to gain an overall understanding of the main features of the data, many analysts apply a systematic coding procedure. The researchers determine the most appropriate way to conduct a systematic analysis, uncovering and documenting links between topics, themes and sub-themes.23 These codes are then assigned to specific occurrences of words or phrases, highlighting patterns within the text while preserving their context, as in Table 19.
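Systematic coding itself relies on researcher judgement, but a deliberately simple sketch (Python; the codebook and quotations are invented) can illustrate the mechanics of assigning codes to text segments:

```python
# Toy illustration of assigning codes to text segments by keyword matching;
# real QDA involves researcher judgement, and this codebook is invented.
codebook = {
    "cost":   ["fee", "afford", "expensive", "money"],
    "access": ["distance", "transport", "far", "travel"],
}

segments = [
    "The clinic is too far and transport is expensive.",
    "We cannot afford the registration fee.",
]

for segment in segments:
    text = segment.lower()
    codes = [code for code, words in codebook.items()
             if any(w in text for w in words)]
    print(codes, "->", segment)
```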
In a mixed methods IR project, demonstrating how scientific rigour will be ensured throughout your study is critical. It is important to examine the validity (i.e. being able to draw meaningful inferences from a population) and reliability (i.e. stability of instrument scores over time) of the quantitative data.
To ensure qualitative validation, the researcher will use a number of strategies. First, opportunity will be provided for the participants to review the findings and then provide feedback as to whether the findings are an accurate reflection of their experience. Second, triangulation of the data will be used from various sources (transcripts and individual interviews) and from multiple participants. Finally, any ‘disconfirming’ evidence will be reported. This is to ensure that accounts provided by the participants are trustworthy.
Before beginning the analysis, consider how the mixed method study was designed. Refer to Table 7 on mixed methods approaches to review the order in which data was collected. This will guide the process indicating which data (qualitative or quantitative) should be analysed first.
One of the important aspects of mixed methods analysis is the ability, in the presentation of the data, to have the different methodologies ‘speak’ to each other. For example, if the quantitative survey results show that 45% of mothers do not attend antenatal services, adding a direct quotation from a mother collected in an FGD will add a real-life, tangible element to this result.
When working through the analysis of the data collected in the IR project, it is important to remember who will receive the results of the research, as this will determine how the findings are presented. For example, if the results are disseminated at community meetings, it is important to use simple infographics, quotations or stories; in contrast, a workshop-style meeting with high-level policy-makers will require more detailed information and numerical explanations. This is dealt with in more detail in the Communications and advocacy module of this Toolkit.