Data analysis

Introduction

Depending on the research questions you want to answer and the type of data you have collected (i.e. quantitative or qualitative data), different types of analysis can be performed. Before we begin to analyse the data, we need to remember the different audiences to be reached with the results and recommendations of the IR project. What are their needs for information, and what is the best way to reach them?

To ensure that the analysis is undertaken in a systematic manner, an analysis plan should first be created. The analysis plan should contain a description of the research question and the various steps that will be followed in the research process. It is best practice to develop your data analysis plan at the start of your project, in order to capture the hypotheses you have about your research question. You may amend the data analysis plan as your research progresses.

Designing data analysis in an IR project is based on the premise that IR aims to: (i) understand the implementation processes for a given intervention, focusing on mechanisms that support or constrain those processes; and (ii) communicate that understanding of the implementation process to multiple stakeholders, who may consequently contribute to the integration of findings into current and/or future research, policy and/or programming.

Most IR proposals use mixed methods, in which quantitative and qualitative techniques are combined. Under many circumstances, mixed-methods approaches can provide a better understanding of the problem than either approach can achieve alone. However, few of the stakeholders in the IR project team are likely to have specialized knowledge of both quantitative and qualitative research methods. It is therefore essential that the analysis, and most importantly the presentation of findings, be carefully considered to avoid potential misinterpretations that could lead to inappropriate conclusions and/or responses. Emphasis should be placed on simplicity and interpretability, because stakeholders need both to understand the information provided and to interpret it correctly.21 Data analysis should take place alongside the data collection process. This continual data analysis process facilitates regular sharing and discussion of findings.

Designing analysis by purpose

An important preliminary consideration when designing your data analysis plan is to clearly define the primary objectives of the analysis by identifying the specific issues to be addressed. It is important to remember that data from IR is, by its nature, intended not only to describe an intervention but also to improve it.

For example, IR research may focus on:

  • Effectiveness: Aims to modify implementation procedures in order to improve the generation of benefits.
  • Efficiency: Attempts to assess the implications of possible modifications to the implementation process in order to increase the benefits in relation to resources.
  • Equity: Focuses on distributional issues, i.e. how benefits and resource costs are distributed.
  • Sustainability: Focuses on identifying essential inputs, potential constraints on their availability and other possible barriers to medium and long-term sustainability.

Before any statistical analysis is undertaken, some factors need to be taken into account in order to select the most appropriate statistical analysis approach. These are described briefly below.

Measurement scale and different statistical techniques

Measurement scale is a way to define and categorize variables. There are four different measurement scales (nominal, ordinal, continuous and ratio). Each measurement scale has different properties, which call for different statistical analyses. Table 15 summarizes the properties of the different measurement scales, described in detail below.

The nominal scale can only differentiate the category. We cannot say that one category is higher or better than the other category. An example of a nominal scale is gender. If we code Male as 1 and Female as 2 or vice versa (i.e. when we enter the variable into the computer), it does not mean that one gender is better than the other. The numbers 1 and 2 only represent categories of data.

Ordinal scales represent an ordered series of relationships or rank order. However, we cannot quantify the difference between the categories. We can only say that one category is better or higher than the other categories. An example of an ordinal scale is the level of a health facility (e.g. primary, secondary, tertiary).

Continuous scales represent a rank order with equal units of quantity or measurement. However, in this scale, zero simply represents an additional point of measurement, not the lowest value. An example of such a scale is a temperature scale in Celsius or Fahrenheit, in which zero (0) is one point on the scale, with numbers above and below it.

The ratio scale is similar to the continuous scale in that it represents a rank order with equal units of quantity or measurement. However, the ratio scale has an absolute zero, in which zero is the lowest value. An example of a ratio scale is body mass index (BMI), in which the lowest value (theoretically) is zero.

Continuous and ratio data are referred to as parametric, as these types of data have certain parameters with regard to the distribution of the population as a whole (an assumption of normal distribution, with the mean as a measure of central tendency and the variance as a measure of dispersion). Parametric also means that the data can be added, subtracted, multiplied and divided. The statistical analyses for these types of data are referred to as parametric tests.

On the other hand, nominal and ordinal scales are referred to as non-parametric. Non-parametric data lacks the parameters that parametric data has. Furthermore, it lacks quantifiable values and, as such, non-parametric data cannot be added, subtracted, multiplied or divided. Nominal and ordinal data are analysed using non-parametric tests.

A parametric test is considered to be more robust than a non-parametric test. Furthermore, there are more statistical options available for analysing parametric data. However, most parametric tests assume that the data is normally distributed.
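
As a minimal illustration (not part of the Toolkit itself), the Python sketch below uses hypothetical temperature readings and the scipy.stats library to show how the choice between a parametric and a non-parametric test might be made in practice; the data, group labels and the 0.05 cut-off are all assumptions made for the example.

```python
# Illustrative sketch only: hypothetical data, not from any real IR study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=37.0, scale=0.5, size=40)  # e.g. body temperature, intervention group
group_b = rng.normal(loc=37.3, scale=0.5, size=40)  # e.g. body temperature, comparison group

# Check the normality assumption before choosing a test.
normal_a = stats.shapiro(group_a).pvalue > 0.05
normal_b = stats.shapiro(group_b).pvalue > 0.05

if normal_a and normal_b:
    # Parametric option: independent-samples t-test.
    stat, p = stats.ttest_ind(group_a, group_b)
    test_used = "t-test (parametric)"
else:
    # Non-parametric option: Mann-Whitney U test.
    stat, p = stats.mannwhitneyu(group_a, group_b)
    test_used = "Mann-Whitney U (non-parametric)"

print(test_used, round(stat, 3), round(p, 4))
```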

Research questions

The way we formulate research questions also determines what kind of statistical techniques need to be used for analysis. Examples of IR questions include:

  • Describing patterns/distributions of study variables in terms of “What, Who, Where, and When”.
  • Comparing differences between groups.
  • Exploring possible associations/correlation between independent variables (exposures) and dependent variables (study outcomes).
  • Exploring a possible causal relationship between independent variables (exposures) and dependent variables (study outcomes).

Quantitative research generates large volumes of data that require organizing and summarizing. These operations facilitate a better understanding of how the data vary or relate to each other. The data reveals distributions of the values of study variables within a study population. For example:

  • The number of children under five years in various households in a given population.
  • Daily outpatient attendance in a health facility.
  • The birth weights of children born in a particular health facility over a period of time.
  • Educational levels of mothers of children born in a particular health facility.

Analysis of the type of data described above essentially involves the use of techniques to summarize these distributions and to estimate the extent to which they relate to other variables.

The use of frequency distributions for this purpose has several advantages:

  • Useful for all types of variables.
  • Easy to explain and interpret for audiences without specialist knowledge.
  • Can be presented graphically and in different formats to aid interpretation (e.g. tables, bar charts, pie charts, graphs, etc.).

The different data presentation formats help to reach different target audiences. Tables are a useful presentation format when you want to communicate within the scientific community. Graphical data presentations help to communicate with a wider, less scientific, audience in the community or policy makers. You can read further about data presentation and how to present data to different audiences in the Advocacy and Communication module of this Toolkit.

Defining intervals for frequency distributions

A key decision in constructing a frequency distribution relates to the choice of intervals along the measuring scale. For example:

  • Ordinal: Level of health facility (e.g. primary, secondary, tertiary).
  • Continuous: Body temperature (e.g. below normal, normal, above normal).
  • Ratio: Body mass index (BMI) (e.g. <25, 25–29, 30+).

There are two conflicting objectives when determining the number of intervals:

  • Limiting the loss of information through the use of a relatively large number of intervals.
  • Providing a simple, interpretable and useful summary through the use of a relatively small number of intervals.

Note: Distributions based on unequal intervals should be used with caution, as they can be easily misinterpreted, especially when distributions are presented graphically.
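
The sketch below (hypothetical BMI values, assuming the pandas library is available) illustrates how a chosen set of intervals produces a frequency distribution.

```python
# Illustrative sketch: hypothetical BMI values grouped into the intervals given above.
import pandas as pd

bmi = pd.Series([21.4, 23.0, 24.8, 25.1, 27.6, 29.9, 30.2, 31.5, 26.3, 22.7])

# Intervals as in the ratio-scale example: <25, 25-29, 30+
groups = pd.cut(bmi, bins=[0, 25, 30, float("inf")],
                labels=["<25", "25-29", "30+"], right=False)

frequency_table = groups.value_counts(sort=False)
print(frequency_table)
```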

Summary statistics and frequency distribution

Careful examination of the frequency distribution of a variable is a crucial step and can be an extremely powerful and robust form of analysis. There can be a tendency to move too quickly to the calculation of simpler summary statistics (e.g. mean, variance) that are intended (but often fail) to capture the essential features of a distribution.

Summary statistics usually focus on deriving measures that indicate the overall location (central tendency) of a distribution (e.g. how sick, poor or educated a study population is, on average) or the extent of variation within a population. However, the reasons for selecting a particular summary statistic should relate to the purpose for which it is intended.

Measure of central tendency

The central tendency measures the central location of a data distribution. The mean, or average, is the most commonly used parameter because the mean is simple to calculate and manipulate. For example, it is straightforward to combine the mean of sub-populations to calculate the overall population mean. However, the mean is often inappropriately used. It can also be misinterpreted as the typical value in a population.

The median, defined as the middle value, is relatively easy to explain. The magnitudes of other values are irrelevant: for example, if the largest value in a given range increases or the smallest value decreases, the median remains unchanged. When a data set is not skewed (or when data is distributed ‘normally’), the mean and the median are the same (Figure 5). It is therefore preferable to use the median as the measure of central tendency when the data set is skewed, as its value is independent of the shape of the data distribution.

In a skewed distribution, the mean is difficult to interpret (Figure 6).
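
A brief sketch (hypothetical, right-skewed spending values generated with numpy) illustrating how the mean and median diverge in a skewed distribution:

```python
# Illustrative sketch: the mean and median of a skewed, hypothetical distribution.
import numpy as np

rng = np.random.default_rng(1)
# Right-skewed data, e.g. out-of-pocket health spending in a household survey.
spending = rng.exponential(scale=50, size=1000)

print("mean:  ", round(np.mean(spending), 1))    # pulled upwards by a few large values
print("median:", round(np.median(spending), 1))  # closer to the 'typical' household
```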

Measure of dispersion

Measure of dispersion denotes how much variability occurs in a given population, as follows:

  • Low variability: Measures of location can be seen as reasonably representative of the overall population; there is limited loss of information through aggregation.
  • High variability: Measures of location are less useful; there is a substantial risk of losing information by aggregation unless the nature of the distribution is well understood.

Choice of measures

Variances, standard deviations and coefficients of variation are widely used in statistical analysis. As with the mean, this is not because they are always the best measures of variability (they can be easily interpreted for normally distributed variables but not for other distributions), but mainly because they can be readily calculated and manipulated.

For example, given the variances of two population sub-groups it is easy to combine them to calculate the overall population variance. However, while they may have technical advantages, these measures have serious limitations in terms of policy application.
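
For illustration, the sketch below (hypothetical sub-group summaries; population variances, i.e. numpy's default ddof=0) shows how two sub-group variances combine into the overall population variance.

```python
# Illustrative sketch: combining two sub-group (population) variances.
import numpy as np

def combined_variance(n1, mean1, var1, n2, mean2, var2):
    """Overall population variance from two sub-group summaries (ddof=0)."""
    n = n1 + n2
    overall_mean = (n1 * mean1 + n2 * mean2) / n
    within = n1 * var1 + n2 * var2
    between = n1 * (mean1 - overall_mean) ** 2 + n2 * (mean2 - overall_mean) ** 2
    return (within + between) / n

# Check against pooling the raw data directly: the two printed values are identical.
rng = np.random.default_rng(2)
a = rng.normal(10, 2, 300)
b = rng.normal(14, 3, 200)
from_summaries = combined_variance(len(a), a.mean(), a.var(), len(b), b.mean(), b.var())
print(round(from_summaries, 4), round(np.concatenate([a, b]).var(), 4))
```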

Alternative measures

More readily interpreted measures include quartiles and percentiles. Quartiles (Q1, Q2, Q3) divide the data into four quarters, with 25% of the available data in each:

  • Q2 is the median.
  • Q1 is the median of the data points below the median.
  • Q3 is the median of the data points above the median.

Q3–Q1 is the inter-quartile range, comprising the middle 50% of a population. Percentiles divide the data into two parts:

  • p percent have values less than the percentile.
  • (100 – p) percent have greater values.
  • 50th percentile = median; 25th percentile = first quartile.

Other common percentiles (see the sketch after this list):

  • 20th (which defines the first quintile group).
  • 10th (which defines the first decile group).
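
A short sketch (hypothetical skewed values, using numpy.percentile) of the quartiles, percentiles and inter-quartile range described above:

```python
# Illustrative sketch: quartiles, percentiles and the inter-quartile range on hypothetical data.
import numpy as np

rng = np.random.default_rng(3)
values = rng.exponential(scale=50, size=1000)   # hypothetical skewed data

q1, q2, q3 = np.percentile(values, [25, 50, 75])
p10, p20 = np.percentile(values, [10, 20])      # first decile and first quintile cut-points

print("Q1, median (Q2), Q3:", round(q1, 1), round(q2, 1), round(q3, 1))
print("Inter-quartile range (Q3 - Q1):", round(q3 - q1, 1))
print("10th and 20th percentiles:", round(p10, 1), round(p20, 1))
```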

Other descriptive statistics

Sub-group analysis

The outcomes of an intervention may vary substantially between different sub-groups of the target population. Sub-group analysis can be complex if the sub-groups are not pre-defined. Investigating a relationship within a sub-group simply because it appears interesting could bias the findings.

Data mining (i.e. exploring data sets to discover apparent relationships) is useful in formulating new hypotheses but requires great caution in IR. The context within which this sub-analysis is undertaken should be considered carefully, because relationships between inputs and outcomes may be mediated by contextual variables. For example, we might assume that it would be useful to undertake an analysis of chronic illness by age group and sex, as shown in Table 16. For meaningful interpretation of the results, the type of chronic illness and the background of the patients experiencing them will be important variables to consider.
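
As an illustration only, the sketch below (hypothetical survey records; pandas assumed to be available) shows how a pre-defined sub-group breakdown such as the one in Table 16 might be tabulated.

```python
# Illustrative sketch: prevalence of chronic illness by age group and sex (hypothetical data).
import pandas as pd

survey = pd.DataFrame({
    "sex":         ["F", "M", "F", "M", "F", "M", "F", "M"],
    "age_group":   ["15-49", "15-49", "50+", "50+", "15-49", "50+", "50+", "15-49"],
    "chronic_ill": [0, 0, 1, 1, 0, 1, 0, 0],
})

# Proportion reporting a chronic illness within each pre-defined sub-group.
prevalence = survey.groupby(["age_group", "sex"])["chronic_ill"].mean()
print(prevalence.unstack())
```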

Measures of risk

Although measures of risk are widely used in health research, they are not always well understood. For example, risk and odds are often used interchangeably; however, they do not mean the same thing (see the sketch after this list):

  • Risk (P): number of people experiencing an event / population exposed to the event.
  • Relative risk: RR = PA / PB, the risk in group A compared to the risk in group B.
  • Odds: number of people experiencing an event versus the number of people not experiencing the same event = P / (1 - P).
  • Odds ratio: OR = [PA / (1 - PA)] / [PB / (1 - PB)].
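
These definitions translate directly into simple arithmetic. The sketch below uses hypothetical counts (30 events among 200 people exposed in group A, 15 among 200 in group B); none of the numbers come from a real study.

```python
# Illustrative sketch: risk, relative risk, odds and odds ratio from hypothetical counts.
events_a, total_a = 30, 200   # group A: people experiencing the event / people exposed
events_b, total_b = 15, 200   # group B

risk_a = events_a / total_a                    # P_A
risk_b = events_b / total_b                    # P_B
relative_risk = risk_a / risk_b                # RR = P_A / P_B

odds_a = risk_a / (1 - risk_a)                 # P_A / (1 - P_A)
odds_b = risk_b / (1 - risk_b)
odds_ratio = odds_a / odds_b                   # OR

print(f"RR = {relative_risk:.2f}, OR = {odds_ratio:.2f}")
```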

A statistical test is performed so that we can make inferences about unknown aspects of a statistical population from the sample that we have collected in a study. There are different types of statistical tests that we can use, depending on the research question, the type of measurement scale and the assumptions about the data distribution. Simple univariate and bivariate analyses should be done before a more sophisticated analysis, such as multivariate analysis, is undertaken.

Finding association/correlation

Association is a relationship between two variables which are statistically dependent. The two variables are equivalent; there are no independent and dependent variables. Correlation can be considered as one type of association where the relationship between variables is linear. There are several statistical tests to assess the correlation between variables (Table 17).
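
For example, a minimal sketch (hypothetical facility data; scipy.stats assumed to be available) computing both a Pearson (linear) and a Spearman (rank) correlation:

```python
# Illustrative sketch: Pearson and Spearman correlation on hypothetical facility data.
from scipy import stats

clinic_staff  = [2, 3, 4, 5, 6, 8, 10, 12]             # e.g. nurses per facility
patients_seen = [40, 55, 65, 80, 95, 120, 150, 170]    # e.g. daily outpatient attendance

r, p_r = stats.pearsonr(clinic_staff, patients_seen)
rho, p_rho = stats.spearmanr(clinic_staff, patients_seen)

print(f"Pearson r = {r:.2f} (p = {p_r:.3f}); Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
```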

Finding causality: group comparison

Group comparison analysis is used to explore the statistically significant difference of study outcomes between groups. The groups can be categorized by exposures under study. When there is a significant difference between groups we assume that the difference is due to the exposures (Table 18).
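
A minimal sketch (hypothetical 2×2 counts; scipy assumed to be available) of such a group comparison, here using a chi-square test for a categorical outcome:

```python
# Illustrative sketch: comparing an outcome between two exposure groups (hypothetical counts).
from scipy.stats import chi2_contingency

#                 outcome yes, outcome no
table = [[45, 155],   # exposed group
         [25, 175]]   # unexposed group

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```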

Finding causality: prediction

Regression analysis is the type of analysis used to predict a study outcome from a number of independent variables. If the outcome variable is on a continuous or ratio scale and has a normal distribution, we can use linear regression. If the outcome variable is dichotomous, i.e. the variable has only two possible values such as “yes” or “no”, we can use logistic regression.
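
A minimal sketch of a logistic regression for a dichotomous outcome, assuming the statsmodels library is available; the variables (distance to facility, maternal education, antenatal care attendance) and all values are hypothetical.

```python
# Illustrative sketch: logistic regression for a dichotomous outcome (hypothetical data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 200
distance_km  = rng.uniform(0, 20, n)      # independent variable (exposure)
maternal_edu = rng.integers(0, 2, n)      # 1 = secondary education or higher
# Hypothetical outcome: attended antenatal care (1 = yes, 0 = no)
logit = 1.5 - 0.15 * distance_km + 0.8 * maternal_edu
attended = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([distance_km, maternal_edu]))
model = sm.Logit(attended, X).fit(disp=False)
print(np.exp(model.params))   # odds ratios for the constant, distance and education
```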

Qualitative data analysis

There are many traditions of qualitative research, and it has been argued that there cannot and should not be a uniform approach to qualitative analysis methods (Bradley et al., 2007).22 Similarly, there are few ‘agreed-on’ canons for qualitative data analysis, in the sense of shared ground rules for drawing conclusions and verifying sturdiness.23 Many qualitative studies adopt an iterative strategy: collect some data, construct initial concepts and hypotheses, test them against new data, and revise the concepts and hypotheses. This approach implies that data collection and analysis are embedded in a single process and undertaken by the same individuals. However, with the increasing use of qualitative research in health research, objectives are often pre-defined prior to the start of data collection, rather than being developed as information emerges from the data collected.

Researchers can also use computer-assisted qualitative data analysis (QDA) software to help them manage their data. The term “QDA software” is slightly misleading because the software does not actually analyse the data, but organizes them to make it easier to find and identify themes. Software can also be relatively expensive (up to around US$900 per single user). For these reasons, some researchers prefer analysing data manually. However, as the software improves, researchers are finding QDA software increasingly useful in helping analyse data and saving time. Commonly used QDA packages include NVivo, ATLAS.ti and MAXQDA.

Researchers should feel free to use whatever analysis method (with or without software) they are comfortable with. Whatever approach is used, all qualitative analyses involve making sense of large amounts of data, identifying significant patterns and communicating the essence of what the data reveal.

Qualitative data analysis consists of data management, data reduction and coding of data. In short, the goal is to identify patterns (themes) in the data and the links that exist between them. As mentioned, there is no set formula for analysing qualitative data, but there are three core requirements of qualitative analysis to adhere to:

  • Detailed description of techniques and methods used to select samples and generate data.
  • Carefully specified analysis, paying attention to issues of validity and reliability.
  • Triangulation with other data collection methods.

The following steps describe these three core components in more detail:

  • Detailed description of techniques and methods used to select samples and generate data
    • If conducting interviews or focus group discussions, all sessions are recorded (preferably with a recording device or, where this is not accepted by the participants, with handwritten notes).
    • All recordings have to be transcribed verbatim (i.e. typed out in full, word-for-word).
    • If observation has been done, document the times, locations and important events (e.g. interruptions).
    • All background information about the participants should be appended to each transcript.
  • Carefully specified analysis, paying attention to issues of validity and reliability
    • In the initial step of the analysis, the researcher will read/re-read the first set of data and write notes, comments and observations in the margin, with regard to interesting data that is relevant to answering the research question(s).
    • While reading the data, the researchers should begin developing a preliminary list of emergent categories into which they will group the notes and comments. These categories are guided by the purpose of the study, the researchers’ knowledge and orientation, and the meanings made explicit by the participants.24 A list of these categories is compiled and attached to the data.
    • The next set of data collected is then carefully read and, with the previously constructed list of categories in mind, notes, comments and observations are once again recorded in the margin. This second data set is grouped into categories and a list of the categories is compiled. The two lists are then compared and merged to create a master list of categories. This list reflects the recurring regularities or patterns in the study.
    • These categories are then given names. Category names may emerge from the researcher, from the participants or from the literature. According to Merriam (1998),24 these categories should be: exhaustive; mutually exclusive; sensitive to what is in the data; conceptually congruent; and, in effect, the answers to the research questions. Category names or codes in data analysis can also be derived from the questions asked in the data collection tools based on the objectives of the study.
    • Once the researchers are satisfied with the categories, the data is assigned to these categories. Taking a clean copy of the data, the researcher organizes the data into meaning units and assigns them to the relevant categories, writing the category code in the margin.
    • The researchers then create separate files for each category and cut and paste the meaning units into the relevant category, creating a file containing all the relevant data. Care should be taken to avoid context stripping by carefully cross-referencing all units and coding them with the participants’ pseudonym, the date of data collection, and the page number.25
    • The researchers then try to link the categories in a meaningful way. Diagrams can be used to facilitate this process. For example, in a study to determine causes of malaria, a number of prevention themes emerged (Figure 7).
  • Triangulation with other data collection methods
    • Review your results against those collected using other data collection methods to determine the validity or truthfulness of your findings.
    • Review whether routine data sources confirm your findings.

Rigour in qualitative research

The research team must ensure scientific rigour in qualitative methods analysis. For example, will your study provide participants with a copy of their interview transcripts to give them an opportunity to verify and clarify their points of view? Will you use software to help manage your data and increase rigour? Will you conduct member checks, and will more than one researcher analyse sections of the data to compare and verify results (inter-rater reliability)? Will you triangulate the data to increase rigour? Will you report disconfirming evidence?

Validity and reliability in analysing qualitative research

In quantitative studies, reliability means repeatability and independence of findings from the specific researchers generating those findings. In qualitative research, reliability implies that given the data collected, the results are dependable and consistent.10 The strength of qualitative research lies in validity (closeness to truth). Good qualitative research, using a selection of data collection methods, should touch the core of what is going on rather than just skimming the surface. When analysing your qualitative data, look for internal validity, where an in-depth understanding will allow you to counter alternative explanations for your findings.

Analysis of textual material

The basic process for the analysis of text derived from qualitative interviews or discussions is relatively straightforward and includes:

  • Identification of similar phrases, themes and relationships between themes.
  • Identification of similarities and differences between population sub-groups (e.g. men/women, rural/urban, young/old, richer/poorer, etc.).
  • Initial attempts to generalize by identifying consistent patterns across or within sub-groups.
  • Critical review and revision of generalizations, paying particular attention to contradictory evidence and outliers.

Domain/theme analysis

One relatively simple approach is based on the identification of key topics, referred to as ‘domains,’ and the relationships between them.

There are four stages in domain/theme analysis:

  • Identify main issues raised by the interviewees – the domains /themes.
  • Group more detailed topics within each of these domains to construct a taxonomy of sub-categories.
  • Specify what was actually said and the components within each sub-category.
  • Explore the interrelationships between the various domains.
  • Domain/theme identification
    • Index texts, identifying topics line-by-line.
    • Collate these topics across all interviews to identify a preliminary list.
    • Some will recur more frequently than others and some of the latter can be classified as sub-topics.
    • Systematically combine related topics to develop a list of just a few fairly broad domains.

    After listing the domains, it is useful to start arranging the actual segments of text into the primary domains. This process groups actual phrases together and allows the sub-categories to emerge directly from the interviewees’ own words.

  • Relationships between domains/themes

    This stage involves identifying relationships between the domains or topics to build up an overall picture. Within the collection of actual quotations from respondents, the researcher should identify statements that relate one topic to another. For example, in the study described above, researchers were able to establish associations between the domains that linked women’s previous experiences, risk perceptions and socioeconomic situation and their evaluations of health services (Figure 9).

Coding schemes

Following an initial analysis to gain an overall understanding of the main features of the data, many analysts apply a systematic coding procedure. The researchers determine the most appropriate way to conduct a systematic analysis, uncovering and documenting links between topics, themes and sub-themes.23 These codes are then assigned to specific occurrences of words or phrases, highlighting patterns within the text while preserving their context, as in Table 19.
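
The sketch below is a deliberately simplified illustration of the bookkeeping involved: the codes and transcript lines are hypothetical, and simple keyword matching stands in for the researcher's judgement (which no software can replace).

```python
# Deliberately simplified sketch: attaching codes to transcript segments while keeping context.
# Keyword matching only illustrates the bookkeeping; real coding requires researcher judgement.
coding_scheme = {
    "cost_barrier":     ["afford", "money", "price"],
    "distance_barrier": ["far", "transport", "walk"],
}

transcript = [
    ("Participant 03", 12, "We could not afford the drugs last month."),
    ("Participant 03", 14, "The clinic is too far and there is no transport."),
    ("Participant 07", 5,  "The nurse explained everything clearly."),
]

coded_segments = []
for participant, line_no, text in transcript:
    for code, keywords in coding_scheme.items():
        if any(word in text.lower() for word in keywords):
            # Cross-reference each coded segment with its source to avoid context stripping.
            coded_segments.append({"code": code, "participant": participant,
                                   "line": line_no, "text": text})

for segment in coded_segments:
    print(segment["code"], "-", segment["participant"],
          f"(line {segment['line']}):", segment["text"])
```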

Mixed methods data analysis

In a mixed methods IR project, demonstrating how scientific rigour will be ensured throughout your study is critical. It is important to examine the validity (i.e. being able to draw meaningful inferences from a population) and reliability (i.e. stability of instrument scores over time) of the quantitative data.

To ensure qualitative validation, the researcher will use a number of strategies. First, opportunity will be provided for the participants to review the findings and then provide feedback as to whether the findings are an accurate reflection of their experience. Second, triangulation of the data will be used from various sources (transcripts and individual interviews) and from multiple participants. Finally, any ‘disconfirming’ evidence will be reported. This is to ensure that accounts provided by the participants are trustworthy.

Before beginning the analysis, consider how the mixed method study was designed. Refer to Table 7 on mixed methods approaches to review the order in which data was collected. This will guide the process indicating which data (qualitative or quantitative) should be analysed first.

One important aspect of mixed methods analysis is the ability, when presenting the data, to have the different methodologies ‘speak’ to each other. For example, if the quantitative survey results show that 45% of mothers do not attend antenatal services, adding a direct quotation from a mother collected in a focus group discussion (FGD) will add a real-life and tangible element to this result.

Data presentation for your audience

When working through the analysis of the data collected in the IR project, it is important to remember who will receive the results of the research. This will determine how the research findings are presented. For example, if the results are disseminated in community meetings, it is important to use simple infographics and quotations or stories; in contrast, during a workshop-style meeting with high-level policy-makers, more detailed information and numerical explanations will be required. This is dealt with in more detail in the Communications and advocacy module of this Toolkit.


References