Participating in Health Research Studies

  • What is Health Research?
  • Is Health Research Safe?
  • Is Health Research Right for Me?

Types of Health Research

Behavioral Studies

These are studies that examine how and why people act in different ways.

Clinical Trials

These are studies of a drug, surgery, or medical device in healthy volunteers or people who have a specific disease. See below for more information.

Community-Based Participatory Research (CBPR)

This is research that engages community partners as equal participants in the research.

Genetic Studies

These are studies to find the role of genes in different diseases.

Observational Studies

These are studies in which a group of people is observed for many years.

Physiological Studies

These are studies to better understand how the human body functions.

Prevention Studies

These are studies that test ways to prevent specific conditions or diseases.

Public Health Research

This type of research can be one or a combination of the types of research mentioned above. Public health research tries to improve the health and well-being of people from a population-level perspective.

More Information about Clinical Trials

Clinical trials are often done in a "randomized" way. These are sometimes called RCTs, for "randomized clinical trials." In an RCT, some people are chosen at random to receive a treatment or intervention, such as a new drug, while the rest of the participants are given a "placebo," such as a sugar pill. In other cases, when two interventions are being compared, one group receives one of the interventions and the other group receives the other.

Some clinical trials are also "blinded." This means that neither the volunteers nor the doctors know who is taking the new medicine and who is taking the placebo. Only at the end of the study is this revealed.

Since people are assigned at random (similar to a coin toss) in an RCT, the people who receive the treatment should be, on average, no different from those who do not. For instance, the treatment and placebo groups should contain roughly equal proportions of males. This helps reduce bias due to factors such as gender in a study.
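Random assignment is easy to demonstrate in code. The sketch below is a hypothetical illustration with invented data, not part of any actual trial: it assigns 1,000 simulated participants to treatment or placebo by a virtual coin toss, then checks that a characteristic such as sex ends up roughly balanced between the two groups.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Invented participants, each with a randomly chosen sex
participants = [{"sex": random.choice(["male", "female"])} for _ in range(1000)]

# The "coin toss": each person is assigned to treatment or placebo at random
for p in participants:
    p["group"] = random.choice(["treatment", "placebo"])

def count(group, sex):
    return sum(1 for p in participants if p["group"] == group and p["sex"] == sex)

# With random assignment, both groups should contain similar numbers of males
print("males in treatment group:", count("treatment", "male"))
print("males in placebo group:", count("placebo", "male"))
```

Because random assignment ignores every participant characteristic, the same rough balance holds for any trait, measured or not, which is what makes randomization such a strong guard against bias.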

New drugs are first developed in research labs, and then tested in animals. Only then are clinical studies done in humans. Clinical trials of new drugs are done in different phases:

  • Phase I studies test a new drug for the first time in a small group of people (about 20-80) to see if it is safe, to find the right dose, and to identify side effects.
  • Phase II studies are done in more people (about 100-300) to see how well the new drug treats a disease.
  • Phase III studies are done in large groups of people (about 1,000 to 3,000) to confirm that the new drug works well, to monitor side effects, and to compare it with other drugs.
  • Phase IV studies are done after the treatment is approved by the U.S. Food and Drug Administration (FDA), to track its safety and effectiveness in the wider population.
  • Last Updated: May 27, 2020 3:05 PM
  • URL: https://guides.library.harvard.edu/healthresearch
         


10 Shattuck St, Boston MA 02115 | (617) 432-2136

Copyright © 2020 President and Fellows of Harvard College. All rights reserved.


Research Methods | Definitions, Types, Examples

Research methods are specific procedures for collecting and analyzing data. Developing your research methods is an integral part of your research design . When planning your methods, there are two key decisions you will make.

First, decide how you will collect data . Your methods depend on what type of data you need to answer your research question :

  • Qualitative vs. quantitative : Will your data take the form of words or numbers?
  • Primary vs. secondary : Will you collect original data yourself, or will you use data that has already been collected by someone else?
  • Descriptive vs. experimental : Will you take measurements of something as it is, or will you perform an experiment?

Second, decide how you will analyze the data .

  • For quantitative data, you can use statistical analysis methods to test relationships between variables.
  • For qualitative data, you can use methods such as thematic analysis to interpret patterns and meanings in the data.

Table of contents

  • Methods for collecting data
  • Examples of data collection methods
  • Methods for analyzing data
  • Examples of data analysis methods
  • Other interesting articles
  • Frequently asked questions about research methods

Data is the information that you collect for the purposes of answering your research question . The type of data you need depends on the aims of your research.

Qualitative vs. quantitative data

Your choice of qualitative or quantitative data collection depends on the type of knowledge you want to develop.

For questions about ideas, experiences and meanings, or to study something that can’t be described numerically, collect qualitative data .

If you want to develop a more mechanistic understanding of a topic, or your research involves hypothesis testing , collect quantitative data .


You can also take a mixed methods approach , where you use both qualitative and quantitative research methods.

Primary vs. secondary research

Primary research is any original data that you collect yourself for the purposes of answering your research question (e.g. through surveys , observations and experiments ). Secondary research is data that has already been collected by other researchers (e.g. in a government census or previous scientific studies).

If you are exploring a novel research question, you’ll probably need to collect primary data . But if you want to synthesize existing knowledge, analyze historical trends, or identify patterns on a large scale, secondary data might be a better choice.


Descriptive vs. experimental data

In descriptive research , you collect data about your study subject without intervening. The validity of your research will depend on your sampling method .

In experimental research , you systematically intervene in a process and measure the outcome. The validity of your research will depend on your experimental design .

To conduct an experiment, you need to be able to vary your independent variable , precisely measure your dependent variable, and control for confounding variables . If it’s practically and ethically possible, this method is the best choice for answering questions about cause and effect.



Research methods for collecting data
Research method | Primary or secondary? | Qualitative or quantitative? | When to use
Experiment | Primary | Quantitative | To test cause-and-effect relationships.
Survey | Primary | Quantitative | To understand general characteristics of a population.
Interview/focus group | Primary | Qualitative | To gain more in-depth understanding of a topic.
Observation | Primary | Either | To understand how something occurs in its natural setting.
Literature review | Secondary | Either | To situate your research in an existing body of work, or to evaluate trends within a research topic.
Case study | Either | Either | To gain an in-depth understanding of a specific group or context, or when you don’t have the resources for a large study.

Your data analysis methods will depend on the type of data you collect and how you prepare it for analysis.

Data can often be analyzed both quantitatively and qualitatively. For example, survey responses could be analyzed qualitatively by studying the meanings of responses or quantitatively by studying the frequencies of responses.
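As a small illustration of the quantitative side of this, the sketch below counts response frequencies using Python's standard library; the survey answers are invented for illustration. A qualitative analysis would instead read the open-ended comments for meaning.

```python
from collections import Counter

# Hypothetical survey responses (invented for illustration)
responses = ["agree", "agree", "neutral", "disagree", "agree", "neutral"]

freq = Counter(responses)      # quantitative: frequency of each answer
print(freq)
print(freq.most_common(1)[0])  # the modal (most common) response
```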

Qualitative analysis methods

Qualitative analysis is used to understand words, ideas, and experiences. You can use it to interpret data that was collected:

  • From open-ended surveys and interviews , literature reviews , case studies , ethnographies , and other sources that use text rather than numbers.
  • Using non-probability sampling methods .

Qualitative analysis tends to be quite flexible and relies on the researcher’s judgement, so you have to reflect carefully on your choices and assumptions and be careful to avoid research bias .

Quantitative analysis methods

Quantitative analysis uses numbers and statistics to understand frequencies, averages and correlations (in descriptive studies) or cause-and-effect relationships (in experiments).

You can use quantitative analysis to interpret data that was collected either:

  • During an experiment .
  • Using probability sampling methods .

Because the data is collected and analyzed in a statistically valid way, the results of quantitative analysis can be easily standardized and shared among researchers.
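For example, a descriptive study would typically begin by summarizing its numeric data. The sketch below applies Python's standard library to a set of invented exam scores (a hypothetical dataset, purely for illustration):

```python
import statistics

# Invented exam scores for seven respondents
scores = [62, 75, 75, 81, 90, 68, 74]

print(statistics.mean(scores))    # average score
print(statistics.median(scores))  # middle score
print(statistics.stdev(scores))   # spread around the mean
```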

Research methods for analyzing data
Research method | Qualitative or quantitative? | When to use
Statistical analysis | Quantitative | To analyze data collected in a statistically valid manner (e.g. from experiments, surveys, and observations).
Meta-analysis | Quantitative | To statistically analyze the results of a large collection of studies. Can only be applied to studies that collected data in a statistically valid manner.
Thematic analysis | Qualitative | To analyze data collected from interviews, focus groups, or textual sources. To understand general themes in the data and how they are communicated.
Content analysis | Either | To analyze large volumes of textual or visual data collected from surveys, literature reviews, or other sources. Can be quantitative (i.e. frequencies of words) or qualitative (i.e. meanings of words).


If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Chi square test of independence
  • Statistical power
  • Descriptive statistics
  • Degrees of freedom
  • Pearson correlation
  • Null hypothesis
  • Double-blind study
  • Case-control study
  • Research ethics
  • Data collection
  • Hypothesis testing
  • Structured interviews

Research bias

  • Hawthorne effect
  • Unconscious bias
  • Recall bias
  • Halo effect
  • Self-serving bias
  • Information bias

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.
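The idea can be sketched in a few lines of code. Below, a simulated population of 10,000 values stands in for the student body (all numbers invented for illustration); the mean of a random sample of 100 lands close to the population mean, which is what makes inference from samples possible.

```python
import random
import statistics

random.seed(0)  # fixed seed so the illustration is reproducible

# A simulated population of 10,000 values (e.g. scores on some opinion scale)
population = [random.gauss(50, 10) for _ in range(10_000)]

# Draw a simple random sample of 100
sample = random.sample(population, 100)

print(round(statistics.mean(population), 1))  # population mean, about 50
print(round(statistics.mean(sample), 1))      # sample mean, close to it
```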

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.


How To Choose Your Research Methodology

Qualitative vs Quantitative vs Mixed Methods

By: Derek Jansen (MBA). Expert Reviewed By: Dr Eunice Rautenbach | June 2021

Without a doubt, one of the most common questions we receive at Grad Coach is “ How do I choose the right methodology for my research? ”. It’s easy to see why – with so many options on the research design table, it’s easy to get intimidated, especially with all the complex lingo!

In this post, we’ll explain the three overarching types of research – qualitative, quantitative and mixed methods – and how you can go about choosing the best methodological approach for your research.

Overview: Choosing Your Methodology

  • Understanding the options: qualitative, quantitative and mixed methods-based research
  • Choosing a research methodology: nature of the research, research area norms, and practicalities


1. Understanding the options

Before we jump into the question of how to choose a research methodology, it’s useful to take a step back to understand the three overarching types of research – qualitative , quantitative and mixed methods -based research. Each of these options takes a different methodological approach.

Qualitative research utilises data that is not numbers-based. In other words, qualitative research focuses on words , descriptions , concepts or ideas – while quantitative research makes use of numbers and statistics. Qualitative research investigates the “softer side” of things to explore and describe, while quantitative research focuses on the “hard numbers”, to measure differences between variables and the relationships between them.

Importantly, qualitative research methods are typically used to explore and gain a deeper understanding of the complexity of a situation – to draw a rich picture . In contrast to this, quantitative methods are usually used to confirm or test hypotheses . In other words, they have distinctly different purposes. The table below highlights a few of the key differences between qualitative and quantitative research – you can learn more about the differences here.

Qualitative research:

  • Uses an inductive approach
  • Is used to build theories
  • Takes a subjective approach
  • Adopts an open and flexible approach
  • The researcher is close to the respondents
  • Interviews and focus groups are oftentimes used to collect word-based data
  • Generally draws on small sample sizes
  • Uses qualitative data analysis techniques (e.g. content analysis , thematic analysis , etc.)

Quantitative research:

  • Uses a deductive approach
  • Is used to test theories
  • Takes an objective approach
  • Adopts a closed, highly planned approach
  • The researcher is disconnected from respondents
  • Surveys or laboratory equipment are often used to collect number-based data
  • Generally requires large sample sizes
  • Uses statistical analysis techniques to make sense of the data

Mixed methods -based research, as you’d expect, attempts to bring these two types of research together, drawing on both qualitative and quantitative data. Quite often, mixed methods-based studies will use qualitative research to explore a situation and develop a potential model of understanding (this is called a conceptual framework), and then go on to use quantitative methods to test that model empirically.

In other words, while qualitative and quantitative methods (and the philosophies that underpin them) are completely different, they are not at odds with each other. It’s not a competition of qualitative vs quantitative. On the contrary, they can be used together to develop a high-quality piece of research. Of course, this is easier said than done, so we usually recommend that first-time researchers stick to a single approach , unless the nature of their study truly warrants a mixed-methods approach.

The key takeaway here, and the reason we started by looking at the three options, is that it’s important to understand that each methodological approach has a different purpose – for example, to explore and understand situations (qualitative), to test and measure (quantitative) or to do both. They’re not simply alternative tools for the same job. 

Right – now that we’ve got that out of the way, let’s look at how you can go about choosing the right methodology for your research.


2. How to choose a research methodology

To choose the right research methodology for your dissertation or thesis, you need to consider three important factors . Based on these three factors, you can decide on your overarching approach – qualitative, quantitative or mixed methods. Once you’ve made that decision, you can flesh out the finer details of your methodology, such as the sampling , data collection methods and analysis techniques (we discuss these separately in other posts ).

The three factors you need to consider are:

  • The nature of your research aims, objectives and research questions
  • The methodological approaches taken in the existing literature
  • Practicalities and constraints

Let’s take a look at each of these.

Factor #1: The nature of your research

As I mentioned earlier, each type of research (and therefore, research methodology), whether qualitative, quantitative or mixed, has a different purpose and helps solve a different type of question. So, it’s logical that the key deciding factor in terms of which research methodology you adopt is the nature of your research aims, objectives and research questions .

But, what types of research exist?

Broadly speaking, research can fall into one of three categories:

  • Exploratory – getting a better understanding of an issue and potentially developing a theory regarding it
  • Confirmatory – confirming a potential theory or hypothesis by testing it empirically
  • A mix of both – building a potential theory or hypothesis and then testing it

As a rule of thumb, exploratory research tends to adopt a qualitative approach , whereas confirmatory research tends to use quantitative methods . This isn’t set in stone, but it’s a very useful heuristic. Naturally then, research that combines a mix of both, or is seeking to develop a theory from the ground up and then test that theory, would utilize a mixed-methods approach.


Let’s look at an example in action.

If your research aims were to understand the perspectives of war veterans regarding certain political matters, you’d likely adopt a qualitative methodology, making use of interviews to collect data and one or more qualitative data analysis methods to make sense of the data.

If, on the other hand, your research aims involved testing a set of hypotheses regarding the link between political leaning and income levels, you’d likely adopt a quantitative methodology, using numbers-based data from a survey to measure the links between variables and/or constructs .

So, the first (and most important) thing you need to consider when deciding which methodological approach to use for your research project is the nature of your research aims , objectives and research questions. Specifically, you need to assess whether your research leans in an exploratory or confirmatory direction or involves a mix of both.

The importance of achieving solid alignment between these three factors and your methodology can’t be overstated. If they’re misaligned, you’re going to be forcing a square peg into a round hole. In other words, you’ll be using the wrong tool for the job, and your research will become a disjointed mess.

If your research is a mix of both exploratory and confirmatory, but you have a tight word count limit, you may need to consider trimming down the scope a little and focusing on one or the other. One methodology executed well has a far better chance of earning marks than a poorly executed mixed methods approach. So, don’t try to be a hero, unless there is a very strong underpinning logic.


Factor #2: The disciplinary norms

Choosing the right methodology for your research also involves looking at the approaches used by other researchers in the field, and studies with similar research aims and objectives to yours. Oftentimes, within a discipline, there is a common methodological approach (or set of approaches) used in studies. While this doesn’t mean you should follow the herd “just because”, you should at least consider these approaches and evaluate their merit within your context.

A major benefit of reviewing the research methodologies used by similar studies in your field is that you can often piggyback on the data collection techniques that other (more experienced) researchers have developed. For example, if you’re undertaking a quantitative study, you can often find tried and tested survey scales with high Cronbach’s alphas. These are usually included in the appendices of journal articles, so you don’t even have to contact the original authors. By using these, you’ll save a lot of time and ensure that your study stands on the proverbial “shoulders of giants” by using high-quality measurement instruments .
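For the curious, Cronbach's alpha itself is straightforward to compute: it compares the variance of the total scale score with the sum of the individual item variances. The sketch below uses an invented 3-item scale scored by five respondents (hypothetical numbers, purely for illustration):

```python
import statistics

# Invented responses: five respondents x three scale items
scores = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 5],
]

k = len(scores[0])  # number of items in the scale
item_vars = [statistics.variance(item) for item in zip(*scores)]
total_var = statistics.variance([sum(row) for row in scores])

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))  # values above roughly 0.7 are usually considered acceptable
```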

Of course, when reviewing existing literature, keep point #1 front of mind. In other words, your methodology needs to align with your research aims, objectives and questions. Don’t fall into the trap of adopting the methodological “norm” of other studies just because it’s popular. Only adopt that which is relevant to your research.

Factor #3: Practicalities

When choosing a research methodology, there will always be a tension between doing what’s theoretically best (i.e., the most scientifically rigorous research design ) and doing what’s practical , given your constraints . This is the nature of doing research and there are always trade-offs, as with anything else.

But what constraints, you ask?

When you’re evaluating your methodological options, you need to consider the following constraints:

  • Data access
  • Time
  • Money
  • Equipment and software
  • Your knowledge and skills

Let’s look at each of these.

Constraint #1: Data access

The first practical constraint you need to consider is your access to data . If you’re going to be undertaking primary research , you need to think critically about the sample of respondents you realistically have access to. For example, if you plan to use in-person interviews , you need to ask yourself how many people you’ll need to interview, whether they’ll be agreeable to being interviewed, where they’re located, and so on.

If you’re wanting to undertake a quantitative approach using surveys to collect data, you’ll need to consider how many responses you’ll require to achieve statistically significant results. For many statistical tests, a sample of a few hundred respondents is typically needed to develop convincing conclusions.
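A common back-of-the-envelope calculation here is the sample size needed to estimate a proportion within a given margin of error. The sketch below implements the standard formula n = z^2 * p * (1 - p) / e^2 at 95% confidence (z of about 1.96), using the worst-case assumption p = 0.5:

```python
import math

def sample_size(margin_of_error, z=1.96, p=0.5):
    """Minimum sample size to estimate a proportion within the given margin."""
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

print(sample_size(0.05))  # a ±5% margin needs 385 respondents
print(sample_size(0.03))  # a ±3% margin needs 1068 respondents
```

This is why survey studies routinely target a few hundred respondents: tightening the margin of error from ±5% to ±3% nearly triples the required sample.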

So, think carefully about what data you’ll need access to, how much data you’ll need and how you’ll collect it. The last thing you want is to spend a huge amount of time on your research only to find that you can’t get access to the required data.

Constraint #2: Time

The next constraint is time. If you’re undertaking research as part of a PhD, you may have a fairly open-ended time limit, but this is unlikely to be the case for undergrad and Masters-level projects. So, pay attention to your timeline, as the data collection and analysis components of different methodologies have a major impact on time requirements . Also, keep in mind that these stages of the research often take a lot longer than originally anticipated.

Another practical implication of time limits is that it will directly impact which time horizon you can use – i.e. longitudinal vs cross-sectional . For example, if you’ve got a 6-month limit for your entire research project, it’s quite unlikely that you’ll be able to adopt a longitudinal time horizon. 

Constraint #3: Money

As with so many things, money is another important constraint you’ll need to consider when deciding on your research methodology. While some research designs will cost near zero to execute, others may require a substantial budget .

Some of the costs that may arise include:

  • Software costs – e.g. survey hosting services, analysis software, etc.
  • Promotion costs – e.g. advertising a survey to attract respondents
  • Incentive costs – e.g. providing a prize or cash payment incentive to attract respondents
  • Equipment rental costs – e.g. recording equipment, lab equipment, etc.
  • Travel costs
  • Food & beverages

These are just a handful of costs that can creep into your research budget. Like most projects, the actual costs tend to be higher than the estimates, so be sure to err on the conservative side and expect the unexpected. It’s critically important that you’re honest with yourself about these costs, or you could end up getting stuck midway through your project because you’ve run out of money.


Constraint #4: Equipment & software

Another practical consideration is the hardware and/or software you’ll need in order to undertake your research. Of course, this variable will depend on the type of data you’re collecting and analysing. For example, you may need lab equipment to analyse substances, or you may need specific analysis software to analyse statistical data. So, be sure to think about what hardware and/or software you’ll need for each potential methodological approach, and whether you have access to these.

Constraint #5: Your knowledge and skillset

The final practical constraint is a big one. Naturally, the research process involves a lot of learning and development along the way, so you will accrue knowledge and skills as you progress. However, when considering your methodological options, you should still consider your current position on the ladder.

Some of the questions you should ask yourself are:

  • Am I more of a “numbers person” or a “words person”?
  • How much do I know about the analysis methods I’ll potentially use (e.g. statistical analysis)?
  • How much do I know about the software and/or hardware that I’ll potentially use?
  • How excited am I to learn new research skills and gain new knowledge?
  • How much time do I have to learn the things I need to learn?

Answering these questions honestly will provide you with another set of criteria against which you can evaluate the research methodology options you’ve shortlisted.

So, as you can see, there is a wide range of practicalities and constraints that you need to take into account when you’re deciding on a research methodology. These practicalities create a tension between the “ideal” methodology and the methodology that you can realistically pull off. This is perfectly normal, and it’s your job to find the option that presents the best set of trade-offs.

Recap: Choosing a methodology

In this post, we’ve discussed how to go about choosing a research methodology. The three major deciding factors we looked at were:

  • The nature of the research – i.e. whether it’s exploratory, confirmatory or a combination of the two
  • Research area norms
  • Practical constraints – e.g. time, money, equipment and software, and your knowledge and skillset

If you have any questions, feel free to leave a comment below. If you’d like a helping hand with your research methodology, check out our 1-on-1 research coaching service, or book a free consultation with a friendly Grad Coach.


Psst... there’s more!

This post was based on one of our popular Research Bootcamps. If you're working on a research project, you'll definitely want to check this out.

Dr. Zara

Very useful and informative especially for beginners

Goudi

Nice article! I’m a beginner in the field of cybersecurity research. I am a telecom and network engineer, and I am also aiming for a PhD scholarship.

Margaret Mutandwa

I find the article very informative, especially for my dissertation; it has been helpful and an eye-opener.

Anna N Namwandi

Hi, I am Anna,

I am a PhD candidate in the area of cyber security – maybe we can link up

Tut Gatluak Doar

The examples you show really help direct me and others to know and practise research design and preparation.

Tshepo Ngcobo

I found the post very informative and practical.

Baraka Mfilinge

I struggle so much with designs of the research for sure!

Joyce

I’m in the process of constructing my research design, and I want to know whether the data analysis I plan to present in my thesis defense proposal could change after I’ve already gathered the data.

Janine Grace Baldesco

Thank you so much this site is such a life saver. How I wish 1-1 coaching is available in our country but sadly it’s not.


  • Open access
  • Published: 07 September 2020

A tutorial on methodological studies: the what, when, how and why

  • Lawrence Mbuagbaw (ORCID: orcid.org/0000-0001-5855-5461) 1, 2, 3,
  • Daeria O. Lawson 1,
  • Livia Puljak 4,
  • David B. Allison 5 &
  • Lehana Thabane 1, 2, 6, 7, 8

BMC Medical Research Methodology, volume 20, Article number: 226 (2020)


Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.


The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 , 2 , 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 , 7 , 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig.  1 .

Fig. 1 Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 , 13 , 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p -values in baseline tables in randomized trials published in high-impact journals [ 26 ]; Chen et al. describe adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [ 27 ]; and Hopewell et al. describe the effect of editors’ implementation of CONSORT guidelines on reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been a cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
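To make the sampling options above concrete, here is a minimal Python sketch of stratified random sampling from a sampling frame of research reports (the record fields and stratum labels are hypothetical; a fixed seed keeps the draw reproducible):

```python
import random

def stratified_sample(frame, stratum_of, n_per_stratum, seed=0):
    """Draw an equal-sized random sample from each stratum of a sampling frame.

    frame: list of records (e.g. research reports); stratum_of: function
    mapping a record to its stratum label (e.g. "Cochrane" vs "non-Cochrane").
    """
    rng = random.Random(seed)  # fixed seed makes the draw reproducible
    strata = {}
    for record in frame:
        strata.setdefault(stratum_of(record), []).append(record)
    sample = []
    for label, records in sorted(strata.items()):
        if len(records) < n_per_stratum:
            raise ValueError(f"stratum {label!r} has only {len(records)} records")
        sample.extend(rng.sample(records, n_per_stratum))
    return sample

# Hypothetical sampling frame: reports tagged by review type
frame = [{"id": i, "type": "Cochrane" if i % 5 == 0 else "non-Cochrane"}
         for i in range(100)]
sample = stratified_sample(frame, lambda r: r["type"], n_per_stratum=10)
```

For a simple random sample instead, `rng.sample(frame, n)` on the whole frame suffices; the stratified version is what creates equal groups when one stratum is much smaller than the other.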

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time-stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.
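As one illustration of a replicable search strategy, the sketch below builds a date-bounded query URL for PubMed’s public E-utilities `esearch` endpoint (the search term mirrors the keyword search described earlier; the journal name is a hypothetical example, and in practice you would also record the date the search was run):

```python
from urllib.parse import urlencode

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(term, mindate, maxdate, retmax=100000):
    """Build a replicable, date-bounded PubMed E-utilities search URL."""
    params = {
        "db": "pubmed",
        "term": term,
        "datetype": "pdat",  # restrict by publication date
        "mindate": mindate,
        "maxdate": maxdate,
        "retmax": retmax,
    }
    return f"{BASE}?{urlencode(params)}"

# Title/abstract keyword search, limited to one (hypothetical) journal
term = ('("methodological review"[Title/Abstract] OR '
        '"meta-epidemiological study"[Title/Abstract]) AND '
        '"PLoS One"[Journal]')
url = esearch_url(term, mindate="2010", maxdate="2019")
```

Storing the exact URL (or the parameter dictionary) alongside the run date is a lightweight way of time-stamping the strategy so others can reproduce the sampling frame.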

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and help to avoid duplication of effort [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (as of 21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include delays associated with manuscript handling and peer review, as well as costs, as few journals publish study protocols, and those journals mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in scholarly journals could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).

Q: How to appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These biases include selection bias, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

  • Comparing two groups
  • Determining a proportion, mean or another quantifier
  • Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
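As a sketch of the confidence-interval approach (a generic calculation for estimating a proportion, not necessarily the exact method used by El Dib et al.), the number of articles needed to estimate a reporting proportion with a given precision can be computed as follows:

```python
import math

def sample_size_for_proportion(p, margin, confidence_z=1.96):
    """Articles needed to estimate a proportion (e.g. the share of trials
    reporting an item) to within +/- margin at ~95% confidence.

    Uses n = z^2 * p * (1 - p) / margin^2, rounded up.
    """
    return math.ceil(confidence_z**2 * p * (1 - p) / margin**2)

# Worst-case planning value p = 0.5, precision of +/- 5 percentage points
n = sample_size_for_proportion(0.5, 0.05)  # -> 385 articles
```

Using p = 0.5 is the conservative default when the true proportion is unknown; a wider acceptable margin (e.g. ±10 points) shrinks the requirement considerably.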

Q: What should I call my study?

A: Other terms which have been used to describe/label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review” – as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section: “What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimation equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p -values, unduly narrow confidence intervals, and biased estimates [ 45 ].
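The impact of ignoring clustering can be illustrated with a deliberately simple sketch (the compliance scores are hypothetical; analysing one summary value per journal is a crude remedy, and regression approaches such as GEE, as in the Kosa et al. example, are the more principled option):

```python
import statistics

# Hypothetical per-article reporting-compliance scores, nested within journals
journals = {
    "Journal A": [0.9, 0.8, 0.9, 0.8, 0.9],
    "Journal B": [0.3, 0.2, 0.3, 0.2, 0.2],
    "Journal C": [0.6, 0.5, 0.6, 0.5, 0.5],
}

articles = [score for scores in journals.values() for score in scores]

# Naive SE: pretends all 15 articles are independent observations
naive_se = statistics.stdev(articles) / len(articles) ** 0.5

# Cluster-level SE: analyse one summary value per journal instead
journal_means = [statistics.mean(s) for s in journals.values()]
cluster_se = statistics.stdev(journal_means) / len(journal_means) ** 0.5

# Articles within a journal are alike, so the naive SE understates uncertainty
```

Here `naive_se` comes out well below `cluster_se`, which is exactly the mechanism behind the unduly narrow confidence intervals and incorrect p-values mentioned above.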

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and therefore should be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, this area will likely see rapid advances in machine learning and natural language processing technologies to support researchers with screening and data extraction [ 47 , 48 ]. However, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but whose intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.

Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].

Source of funding and conflicts of interest: Some studies have found that funded studies are better reported [ 56 , 57 ], while others have found no difference [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrant assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry-funded studies were better [ 60 ]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ].

Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 , 66 , 67 ].

Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].

Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].

Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].

Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, the JIF may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research including the Cumulative Index to Nursing & Allied Health Literature (CINAHL) have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. However, in the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection and confounding bias. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing only high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p-values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
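As a rough illustration of why accounting for a confounder matters, the sketch below compares completeness of reporting by funding source, first crudely and then within strata of journal endorsement (stratification being a simple form of adjustment). All data, variable names and values here are invented for illustration; in practice, adjustment is usually done with regression models, as in the Zhang et al. example:

```python
from collections import defaultdict

# Hypothetical data-extraction rows:
# (funding_source, journal_endorses_guideline, reporting_complete)
records = [
    ("industry", True, True), ("industry", True, True), ("industry", True, False),
    ("industry", False, False), ("public", True, True), ("public", False, False),
    ("public", False, True), ("public", False, False), ("industry", True, True),
    ("public", False, False),
]

def completeness(rows):
    """Proportion of reports with complete reporting."""
    return sum(r[2] for r in rows) / len(rows)

# Crude comparison: ignores journal endorsement, a potential confounder
for funding in ("industry", "public"):
    rows = [r for r in records if r[0] == funding]
    print(f"crude {funding}: {completeness(rows):.2f}")

# Stratified comparison: each contrast is made within one endorsement stratum
strata = defaultdict(list)
for r in records:
    strata[r[1]].append(r)
for endorses, rows in sorted(strata.items()):
    for funding in ("industry", "public"):
        sub = [r for r in rows if r[0] == funding]
        if sub:
            print(f"endorses={endorses} {funding}: {completeness(sub):.2f}")
```

The crude comparison mixes journals that do and do not endorse the guideline; the stratified view shows whether the funding–reporting association holds once that factor is held fixed.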

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to be applicable to trials in other fields. However, investigators must ensure that their sample truly represents the target sample either by a) conducting a comprehensive and exhaustive search, or b) using an appropriate and justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine ( n  = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM ( n  = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM ( n  = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias, which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

In order to inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

What is the aim?

Methodological studies that investigate bias

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is in the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Ritchie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].

Methodological studies that investigate quality (or completeness) of reporting

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croitoru et al. report on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].
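Adherence to a reporting checklist is typically scored item by item during data extraction and then summarized. A minimal sketch of that bookkeeping is shown below; the item names are invented placeholders, not actual PRISMA or CONSORT wording:

```python
# Hypothetical checklist extraction for one review (True = item reported)
checklist = {
    "title_identifies_sr": True,
    "structured_abstract": True,
    "protocol_registered": False,
    "full_search_strategy": False,
    "risk_of_bias_assessed": True,
    "funding_reported": True,
}

# Per-report adherence: reported items over total items
reported = sum(checklist.values())
total = len(checklist)
print(f"Adherence: {reported}/{total} items ({100 * reported / total:.0f}%)")

# Items not reported; aggregated over many reviews, these point to
# the reporting gaps a methodological study would highlight
missing = [item for item, ok in checklist.items() if not ok]
print("Not reported:", ", ".join(missing))
```

Repeating this over every included report yields the per-item and per-report adherence percentages that studies like the Croitoru et al. and Khan et al. examples tabulate.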

Methodological studies that investigate the consistency of reporting

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

Methodological studies that investigate factors associated with reporting

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies that investigate methods

Methodological studies may also be used to describe or compare methods, and the factors associated with those methods. For example, Mueller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Methodological studies that summarize other methodological studies

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Methodological studies that investigate nomenclature and terminology

Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

Other types of methodological studies

In addition to the previously mentioned types of methodological studies, there may exist other types not captured here.

What is the design?

Methodological studies that are descriptive

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].
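The descriptive summaries mentioned above (counts with percentages, means with standard deviations, medians with interquartile ranges) can be computed with Python's standard library. A minimal sketch, using invented extraction data:

```python
import statistics

# Hypothetical extraction results: number of reporting items met per review
items_met = [12, 15, 9, 22, 17, 14, 20, 11]
# Hypothetical binary variable: did the review report a search date?
reported_search_date = [True, True, False, True, False, True, True, True]

# Count (percent) for the binary variable
n = len(items_met)
count = sum(reported_search_date)
print(f"Reported search date: {count}/{n} ({100 * count / n:.0f}%)")

# Mean (standard deviation) for the continuous variable
print(f"Items met: mean {statistics.mean(items_met):.1f} "
      f"(SD {statistics.stdev(items_met):.1f})")

# Median (interquartile range); quantiles(n=4) returns the three quartile cuts
q1, med, q3 = statistics.quantiles(items_met, n=4)
print(f"Items met: median {med:.1f} (IQR {q1:.1f}-{q3:.1f})")
```

Median and IQR are usually preferred when the distribution of the extracted variable is skewed, which is common for counts of reporting items.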

Methodological studies that are analytical

Some methodological studies are analytical, wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease.” [ 89 ] In the case of methodological studies, all of these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
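A test of the null hypothesis that two proportions are equal, as in the Tricco et al. example, can be done with a pooled two-proportion z-test. The sketch below uses hypothetical counts, not the actual study data:

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for H0: p1 == p2, using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: positive conclusions in non-Cochrane vs Cochrane reviews
z, p = two_proportion_z(80, 100, 55, 100)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value would lead to rejecting the null hypothesis that the two groups of reviews report positive results at the same rate; established statistical libraries offer the same test with continuity corrections and confidence intervals.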

What is the sampling strategy?

Methodological studies that include the target population

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n  = 103) [ 30 ].

Methodological studies that include a sample of the target population

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, in journals with a certain ranking, or on a particular topic. Systematic sampling can also be used when random sampling may be challenging to implement.
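The random and systematic sampling strategies above can be sketched as follows. The report identifiers are hypothetical, and a fixed seed is used so the random sample can be reproduced and reported in the methods:

```python
import random

def random_sample(report_ids, n, seed=42):
    """Simple random sample of n reports; fixed seed makes it reproducible."""
    rng = random.Random(seed)
    return sorted(rng.sample(report_ids, n))

def systematic_sample(report_ids, n):
    """Every k-th report from a sorted sampling frame, where k = N // n."""
    k = len(report_ids) // n
    return report_ids[::k][:n]

# Hypothetical sampling frame of 500 eligible reports
reports = [f"PMID-{i:04d}" for i in range(1, 501)]
print(random_sample(reports, 5))
print(systematic_sample(reports, 5))
```

Reporting the seed (or the sampling interval and starting point) lets readers reconstruct exactly which reports were selected, which supports the transparency emphasized earlier.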

What is the unit of analysis?

Methodological studies with a research report as the unit of analysis

Many methodological studies use a research report (e.g. full manuscript of study, abstract portion of the study) as the unit of analysis, and inferences can be made at the study-level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries etc.

Methodological studies with a design, analysis or reporting item as the unit of analysis

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].

This framework is outlined in Fig. 2.

Fig. 2. A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Availability of data and materials

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials

EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe

GRADE: Grading of Recommendations, Assessment, Development and Evaluations

PICOT: Participants, Intervention, Comparison, Outcome, Timeframe

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

SWAR: Studies Within a Review

SWAT: Studies Within a Trial

Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.

Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, Krumholz HM, Ghersi D, van der Worp HB. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.

Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.

Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.

Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62(10):1013–20.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358:j4008.

Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Studies. 2020;6(1):13.

Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020.

Abbade LPF, Wang M, Sriganesh K, Jin Y, Mbuagbaw L, Thabane L. The framing of research questions using the PICOT format in randomized controlled trials of venous ulcer disease is suboptimal: a systematic survey. Wound Repair Regen. 2017;25(5):892–900.

Gohari F, Baradaran HR, Tabatabaee M, Anijidani S, Mohammadpour Touserkani F, Atlasi R, Razmgir M. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review. J Diabetes Metab Disord. 2015;15(1):36.

Wang M, Jin Y, Hu ZJ, Thabane A, Dennis B, Gajic-Veljanoski O, Paul J, Thabane L. The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: a systematic survey of the literature. Contemp Clin Trials Commun. 2017;8:1–10.

Shanthanna H, Kaushal A, Mbuagbaw L, Couban R, Busse J, Thabane L. A cross-sectional study of the reporting quality of pilot or feasibility trials in high-impact anesthesia journals. Can J Anaesth. 2018;65(11):1180–95.

Kosa SD, Mbuagbaw L, Borg Debono V, Bhandari M, Dennis BB, Ene G, Leenus A, Shi D, Thabane M, Valvasori S, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemporary Clinical Trials. 2018;65:144–50.

Zhang Y, Florez ID, Colunga Lozano LE, Aloweni FAB, Kennedy SA, Li A, Craigie S, Zhang S, Agarwal A, Lopes LC, et al. A systematic survey on reporting and methods for handling missing participant data for continuous outcomes in randomized controlled trials. J Clin Epidemiol. 2017;88:57–66.

Hernández AV, Boersma E, Murray GD, Habbema JD, Steyerberg EW. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J. 2006;151(2):257–64.

Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, Fruci V, Dennis B, Bawor M, Thabane L. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc. 2013;6:169–88.

Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.

Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary-of-findings tables with a new format. J Clin Epidemiol. 2016;74:7–18.

The Northern Ireland Hub for Trials Methodology Research: SWAT/SWAR Information [ https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/ ]. Accessed 31 Aug 2020.

Chick S, Sánchez P, Ferrin D, Morrice D. How to conduct a successful simulation study. In: Proceedings of the 2003 winter simulation conference: 2003; 2003. p. 66–70.

Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.

Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mount Sinai J Med New York. 1996;63(3–4):216–24.

Areia M, Soares M, Dinis-Ribeiro M. Quality reporting of endoscopic diagnostic studies in gastrointestinal journals: where do we stand on the use of the STARD and CONSORT statements? Endoscopy. 2010;42(2):138–47.

Knol M, Groenwold R, Grobbee D. P-values in baseline tables of randomised controlled trials are inappropriate but still common in high impact journals. Eur J Prev Cardiol. 2012;19(2):231–2.

Chen M, Cui J, Zhang AL, Sze DM, Xue CC, May BH. Adherence to CONSORT items in randomized controlled trials of integrative medicine for colorectal Cancer published in Chinese journals. J Altern Complement Med. 2018;24(2):115–24.

Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors' implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ. 2012;344:e4178.

The Cochrane Methodology Register Issue 2 2009 [ https://cmr.cochrane.org/help.htm ]. Accessed 31 Aug 2020.

Mbuagbaw L, Kredo T, Welch V, Mursleen S, Ross S, Zani B, Motaze NV, Quinlan L. Critical EPICOT items were absent in Cochrane human immunodeficiency virus systematic reviews: a bibliometric analysis. J Clin Epidemiol. 2016;74:66–72.

Barton S, Peckitt C, Sclafani F, Cunningham D, Chau I. The influence of industry sponsorship on the reporting of subgroup analyses within phase III randomised controlled trials in gastrointestinal oncology. Eur J Cancer. 2015;51(18):2732–9.

Setia MS. Methodology series module 5: sampling strategies. Indian J Dermatol. 2016;61(5):505–9.

Wilson B, Burnett P, Moher D, Altman DG, Al-Shahi Salman R. Completeness of reporting of randomised controlled trials including people with transient ischaemic attack or stroke: a systematic review. Eur Stroke J. 2018;3(4):337–46.

Kahale LA, Diab B, Brignardello-Petersen R, Agarwal A, Mustafa RA, Kwong J, Neumann I, Li L, Lopes LC, Briel M, et al. Systematic reviews do not adequately report or address missing outcome data in their analyses: a methodological survey. J Clin Epidemiol. 2018;99:14–23.

De Angelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJPM, et al. Is this clinical trial fully registered?: a statement from the International Committee of Medical Journal Editors*. Ann Intern Med. 2005;143(2):146–8.

Ohtake PJ, Childs JD. Why publish study protocols? Phys Ther. 2014;94(9):1208–9.

Rombey T, Allers K, Mathes T, Hoffmann F, Pieper D. A descriptive analysis of the characteristics and the peer review process of systematic review protocols published in an open peer review journal from 2012 to 2017. BMC Med Res Methodol. 2019;19(1):57.

Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet. 2002;359(9302):248–52.

Porta M (ed.): A dictionary of epidemiology, 5th edn. Oxford: Oxford University Press, Inc.; 2008.

El Dib R, Tikkinen KAO, Akl EA, Gomaa HA, Mustafa RA, Agarwal A, Carpenter CR, Zhang Y, Jorge EC, Almeida R, et al. Systematic survey of randomized trials evaluating the impact of alternative diagnostic strategies on patient-important outcomes. J Clin Epidemiol. 2017;84:61–9.

Helzer JE, Robins LN, Taibleson M, Woodruff RA Jr, Reich T, Wish ED. Reliability of psychiatric diagnosis. I. a methodological review. Arch Gen Psychiatry. 1977;34(2):129–33.

Chung ST, Chacko SK, Sunehag AL, Haymond MW. Measurements of gluconeogenesis and Glycogenolysis: a methodological review. Diabetes. 2015;64(12):3996–4010.

Sterne JA, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Stat Med. 2002;21(11):1513–24.

Moen EL, Fricano-Kugler CJ, Luikart BW, O’Malley AJ. Analyzing clustered data: why and how to account for multiple observations nested within a study participant? PLoS One. 2016;11(1):e0146721.

Zyzanski SJ, Flocke SA, Dickinson LM. On the nature and analysis of clustered data. Ann Fam Med. 2004;2(3):199–200.

Mathes T, Klassen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol. 2017;17(1):152.

Bui DDA, Del Fiol G, Hurdle JF, Jonnalagadda S. Extractive text summarization system to aid data extraction from full text in systematic review development. J Biomed Inform. 2016;64:265–72.

Bui DD, Del Fiol G, Jonnalagadda S. PDF text classification to leverage information extraction from publication reports. J Biomed Inform. 2016;61:141–8.

Maticic K, Krnic Martinic M, Puljak L. Assessment of reporting quality of abstracts of systematic reviews with meta-analysis using PRISMA-A and discordance in assessments between raters without prior experience. BMC Med Res Methodol. 2019;19(1):32.

Speich B. Blinding in surgical randomized clinical trials in 2015. Ann Surg. 2017;266(1):21–2.

Abraha I, Cozzolino F, Orso M, Marchesi M, Germani A, Lombardo G, Eusebi P, De Florio R, Luchetta ML, Iorio A, et al. A systematic review found that deviations from intention-to-treat are common in randomized trials and systematic reviews. J Clin Epidemiol. 2017;84:37–46.

Zhong Y, Zhou W, Jiang H, Fan T, Diao X, Yang H, Min J, Wang G, Fu J, Mao B. Quality of reporting of two-group parallel randomized controlled clinical trials of multi-herb formulae: A survey of reports indexed in the Science Citation Index Expanded. Eur J Integrative Med. 2011;3(4):e309–16.

Farrokhyar F, Chu R, Whitlock R, Thabane L. A systematic review of the quality of publications reporting coronary artery bypass grafting trials. Can J Surg. 2007;50(4):266–77.

Oltean H, Gagnier JJ. Use of clustering analysis in randomized controlled trials in orthopaedic surgery. BMC Med Res Methodol. 2015;15:17.

Fleming PS, Koletsi D, Pandis N. Blinded by PRISMA: are systematic reviewers focusing on PRISMA and ignoring other guidelines? PLoS One. 2014;9(5):e96407.

Balasubramanian SP, Wiener M, Alshameeri Z, Tiruvoipati R, Elbourne D, Reed MW. Standards of reporting of randomized controlled trials in general surgery: can we do better? Ann Surg. 2006;244(5):663–7.

de Vries TW, van Roon EN. Low quality of reporting adverse drug reactions in paediatric randomised controlled trials. Arch Dis Child. 2010;95(12):1023–6.

Borg Debono V, Zhang S, Ye C, Paul J, Arya A, Hurlburt L, Murthy Y, Thabane L. The quality of reporting of RCTs used within a postoperative pain management meta-analysis, using the CONSORT statement. BMC Anesthesiol. 2012;12:13.

Kaiser KA, Cofield SS, Fontaine KR, Glasser SP, Thabane L, Chu R, Ambrale S, Dwary AD, Kumar A, Nayyar G, et al. Is funding source related to study reporting quality in obesity or nutrition randomized control trials in top-tier medical journals? Int J Obes. 2012;36(7):977–81.

Thomas O, Thabane L, Douketis J, Chu R, Westfall AO, Allison DB. Industry funding and the reporting quality of large long-term weight loss trials. Int J Obes. 2008;32(10):1531–6.

Khan NR, Saad H, Oravec CS, Rossi N, Nguyen V, Venable GT, Lillard JC, Patel P, Taylor DR, Vaughn BN, et al. A review of industry funding in randomized controlled trials published in the neurosurgical literature-the elephant in the room. Neurosurgery. 2018;83(5):890–7.

Hansen C, Lundh A, Rasmussen K, Hrobjartsson A. Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality. Cochrane Database Syst Rev. 2019;8:Mr000047.

Kiehna EN, Starke RM, Pouratian N, Dumont AS. Standards for reporting randomized controlled trials in neurosurgery. J Neurosurg. 2011;114(2):280–5.

Liu LQ, Morris PJ, Pengel LH. Compliance to the CONSORT statement of randomized controlled trials in solid organ transplantation: a 3-year overview. Transpl Int. 2013;26(3):300–6.

Bala MM, Akl EA, Sun X, Bassler D, Mertz D, Mejza F, Vandvik PO, Malaga G, Johnston BC, Dahm P, et al. Randomized trials published in higher vs. lower impact journals differ in design, conduct, and analysis. J Clin Epidemiol. 2013;66(3):286–95.

Lee SY, Teoh PJ, Camm CF, Agha RA. Compliance of randomized controlled trials in trauma surgery with the CONSORT statement. J Trauma Acute Care Surg. 2013;75(4):562–72.

Ziogas DC, Zintzaras E. Analysis of the quality of reporting of randomized controlled trials in acute and chronic myeloid leukemia, and myelodysplastic syndromes as governed by the CONSORT statement. Ann Epidemiol. 2009;19(7):494–500.

Alvarez F, Meyer N, Gourraud PA, Paul C. CONSORT adoption and quality of reporting of randomized controlled trials: a systematic analysis in two dermatology journals. Br J Dermatol. 2009;161(5):1159–65.

Mbuagbaw L, Thabane M, Vanniyasingam T, Borg Debono V, Kosa S, Zhang S, Ye C, Parpia S, Dennis BB, Thabane L. Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: a systematic review. Contemporary Clin trials. 2014;38(2):245–50.

Thabane L, Chu R, Cuddy K, Douketis J. What is the quality of reporting in weight loss intervention studies? A systematic review of randomized controlled trials. Int J Obes. 2007;31(10):1554–9.

Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evidence Based Med. 2017;22(4):139.

METRIC - MEthodological sTudy ReportIng Checklist: guidelines for reporting methodological studies in health research [ http://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-other-study-designs/#METRIC ]. Accessed 31 Aug 2020.

Jager KJ, Zoccali C, MacLeod A, Dekker FW. Confounding: what it is and how to deal with it. Kidney Int. 2008;73(3):256–60.

Parker SG, Halligan S, Erotocritou M, Wood CPJ, Boulton RW, Plumb AAO, Windsor ACJ, Mallett S. A systematic methodological review of non-randomised interventional studies of elective ventral hernia repair: clear definitions and a standardised minimum dataset are needed. Hernia. 2019.

Bouwmeester W, Zuithoff NPA, Mallett S, Geerlings MI, Vergouwe Y, Steyerberg EW, Altman DG, Moons KGM. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12.

Schiller P, Burchardi N, Niestroj M, Kieser M. Quality of reporting of clinical non-inferiority and equivalence randomised trials--update and extension. Trials. 2012;13:214.

Riado Minguez D, Kowalski M, Vallve Odena M, Longin Pontzen D, Jelicic Kadic A, Jeric M, Dosenovic S, Jakus D, Vrdoljak M, Poklepovic Pericic T, et al. Methodological and reporting quality of systematic reviews published in the highest ranking journals in the field of pain. Anesth Analg. 2017;125(4):1348–54.

Thabut G, Estellat C, Boutron I, Samama CM, Ravaud P. Methodological issues in trials assessing primary prophylaxis of venous thrombo-embolism. Eur Heart J. 2005;27(2):227–36.

Puljak L, Riva N, Parmelli E, González-Lorenzo M, Moja L, Pieper D. Data extraction methods: an analysis of internal reporting discrepancies in single manuscripts and practical advice. J Clin Epidemiol. 2020;117:158–64.

Ritchie A, Seubert L, Clifford R, Perry D, Bond C. Do randomised controlled trials relevant to pharmacy meet best practice standards for quality conduct and reporting? A systematic review. Int J Pharm Pract. 2019.

Babic A, Vuka I, Saric F, Proloscic I, Slapnicar E, Cavar J, Pericic TP, Pieper D, Puljak L. Overall bias methods and their use in sensitivity analysis of Cochrane reviews were not consistent. J Clin Epidemiol. 2019.

Tan A, Porcher R, Crequit P, Ravaud P, Dechartres A. Differences in treatment effect size between overall survival and progression-free survival in immunotherapy trials: a Meta-epidemiologic study of trials with results posted at ClinicalTrials.gov. J Clin Oncol. 2017;35(15):1686–94.

Croitoru D, Huang Y, Kurdina A, Chan AW, Drucker AM. Quality of reporting in systematic reviews published in dermatology journals. Br J Dermatol. 2020;182(6):1469–76.

Khan MS, Ochani RK, Shaikh A, Vaduganathan M, Khan SU, Fatima K, Yamani N, Mandrola J, Doukky R, Krasuski RA. Assessing the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals. Eur Heart J Qual Care Clin Outcomes. 2019.

Rosmarakis ES, Soteriades ES, Vergidis PI, Kasiakou SK, Falagas ME. From conference abstract to full paper: differences between data presented in conferences and journals. FASEB J. 2005;19(7):673–80.

Mueller M, D’Addario M, Egger M, Cevallos M, Dekkers O, Mugglin C, Scott P. Methods to systematically review and meta-analyse observational studies: a systematic scoping review of recommendations. BMC Med Res Methodol. 2018;18(1):44.

Li G, Abbade LPF, Nwosu I, Jin Y, Leenus A, Maaz M, Wang M, Bhatt M, Zielinski L, Sanger N, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol. 2017;17(1):181.

Krnic Martinic M, Pieper D, Glatt A, Puljak L. Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks. BMC Med Res Methodol. 2019;19(1):203.

Analytical study [ https://medical-dictionary.thefreedictionary.com/analytical+study ]. Accessed 31 Aug 2020.

Tricco AC, Tetzlaff J, Pham B, Brehaut J, Moher D. Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: cross-sectional study. J Clin Epidemiol. 2009;62(4):380–6 e381.

Schalken N, Rietbergen C. The reporting quality of systematic reviews and Meta-analyses in industrial and organizational psychology: a systematic review. Front Psychol. 2017;8:1395.

Ranker LR, Petersen JM, Fox MP. Awareness of and potential for dependent error in the observational epidemiologic literature: A review. Ann Epidemiol. 2019;36:15–9 e12.

Paquette M, Alotaibi AM, Nieuwlaat R, Santesso N, Mbuagbaw L. A meta-epidemiological study of subgroup analyses in cochrane systematic reviews of atrial fibrillation. Syst Rev. 2019;8(1):241.


Acknowledgements

This work did not receive any dedicated funding.

Author information

Authors and Affiliations

Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada

Lawrence Mbuagbaw, Daeria O. Lawson & Lehana Thabane

Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario, L8N 4A6, Canada

Lawrence Mbuagbaw & Lehana Thabane

Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Lawrence Mbuagbaw

Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000, Zagreb, Croatia

Livia Puljak

Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN, 47405, USA

David B. Allison

Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON, Canada

Lehana Thabane

Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON, Canada

Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON, Canada


Contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Corresponding author

Correspondence to Lawrence Mbuagbaw.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Mbuagbaw, L., Lawson, D.O., Puljak, L. et al. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol 20, 226 (2020). https://doi.org/10.1186/s12874-020-01107-7


Received : 27 May 2020

Accepted : 27 August 2020

Published : 07 September 2020

DOI : https://doi.org/10.1186/s12874-020-01107-7


  • Methodological study
  • Meta-epidemiology
  • Research methods
  • Research-on-research

BMC Medical Research Methodology

ISSN: 1471-2288


Healthcare Research and Evidence Based Practice: Types of Research


Qualitative Research

Qualitative research is used to explore and understand people's beliefs, experiences, attitudes, behaviour and interactions. It generates descriptive, non-numerical data.  Qualitative research methods include:

  • Documents - the study of documentary accounts of events, such as minutes of meetings
  • Passive observation - the systematic watching and recording of behaviour  
  • Participant observation – here, the researcher also occupies a role or part in the setting, in addition to observing
  • In-depth interview - a face-to-face conversation to explore issues or topics in detail
  • Focus group - method of group interview which explicitly includes and uses the group interaction to generate data.

Quantitative Research

Quantitative research is used to generate numerical data or data that can be converted into numbers. Study types that are used in the health and medical field include:  

  • Case report or case series - a report on one or more individual patients. There is no "control group", so this study type is considered to have low statistical validity
  • Case control study - compares patients with a particular outcome (cases) with control patients without the outcome. Useful in aetiology (causation) research, but prone to bias when inferring causation
  • Cohort study – identifies and follows two groups (cohorts) of patients, one having received the intervention being studied and one having not. Useful in both aetiology and prognosis research. Because the groups are not randomised, they may differ in ways other than in the variable under study
  • Randomised Controlled Trial (RCT) - a clinical trial in which participants are randomly allocated to a test treatment and a control. This is considered the “gold standard” in testing the efficacy of an intervention. RCTs include methodologies - randomisation and blinding - that reduce the potential for bias and provide good evidence for cause and effect.
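The random allocation at the heart of an RCT can be sketched in a few lines of Python. This is a toy illustration with invented participant IDs and a fixed seed for reproducibility; real trials use audited allocation systems, often with stratification or block randomisation.

```python
import random

def randomise(participants, seed=42):
    """Randomly allocate participants to two equal arms (treatment, control)."""
    rng = random.Random(seed)  # fixed seed so this sketch is reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

participants = ["P%02d" % i for i in range(1, 21)]  # 20 invented participant IDs
treatment, control = randomise(participants)
print(len(treatment), len(control))  # 10 10
```

Because allocation is random, on average the two arms should be balanced on both measured and unmeasured characteristics, which is what supports the causal interpretation of the comparison.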

Mixed Methods

Please note that a research study does not have to be exclusively quantitative or qualitative. Many studies will use a combination of both types of research.

In the Dictionary of Statistics & Methodology, mixed-method research is defined as:

"Inquiry that combines two or more methods. This particular term usually refers to mixing that crosses the quantitative-qualitative boundary. However, that boundary is not necessarily the most difficult one to cross. For example, mixing surveys and experiments (both quantitative methods) may require more effort for many researchers than combining surveys and focus groups (the first quantitative and the second qualitative)."

Mixed method research. (1999). In Vogt, P. W. (Ed.). Dictionary of statistics & methodology (2nd ed.).

  • Last Updated: Feb 14, 2024 9:15 AM
  • Guide URL: https://libguides.wilmu.edu/HSC343


A practical guide for health researchers


Related Links

Health research methodology: a guide for training in research methods. 2nd ed.



Research methodology

Research projects are diverse and so you will need to consider and identify the methodology that is appropriate for your particular project. Key to this activity is being clear about the purpose of the research and formulating your research question. 

Examples of methodologies include:

  • Case series/case note review
  • Cohort observation
  • Controlled trial without randomisation
  • Epidemiology
  • Qualitative research
  • Observational research
  • Randomised controlled trial
  • Questionnaires

Sometimes a mix of methodologies may be used. In England, you may be able to obtain advice on research methodology, statistics and protocol design from the NIHR Research Design Service.


U.S. Food and Drug Administration

What Are the Different Types of Clinical Research?

Different types of clinical research are used depending on what the researchers are studying. Below are descriptions of some different kinds of clinical research.

Treatment Research generally involves an intervention such as medication, psychotherapy, new devices, or new approaches to surgery or radiation therapy. 

Prevention Research looks for better ways to prevent disorders from developing or returning. Different kinds of prevention research may study medicines, vitamins, vaccines, minerals, or lifestyle changes. 

Diagnostic Research refers to the practice of looking for better ways to identify a particular disorder or condition. 

Screening Research aims to find the best ways to detect certain disorders or health conditions. 

Quality of Life Research explores ways to improve comfort and the quality of life for individuals with a chronic illness. 

Genetic studies aim to improve the prediction of disorders by identifying and understanding how genes and illnesses may be related. Research in this area may explore ways in which a person’s genes make him or her more or less likely to develop a disorder. This may lead to development of tailor-made treatments based on a patient’s genetic make-up. 

Epidemiological studies seek to identify the patterns, causes, and control of disorders in groups of people. 

An important note: some clinical research is “outpatient,” meaning that participants do not stay overnight at the hospital. Some is “inpatient,” meaning that participants will need to stay for at least one night in the hospital or research center. Be sure to ask the researchers what their study requires. 

Phases of clinical trials: when clinical research is used to evaluate medications and devices

Clinical trials are a kind of clinical research designed to evaluate and test new interventions such as psychotherapy or medications. Clinical trials are often conducted in four phases. The trials at each phase have a different purpose and help scientists answer different questions.

Phase I trials

Researchers test an experimental drug or treatment in a small group of people for the first time. The researchers evaluate the treatment’s safety, determine a safe dosage range, and identify side effects.

Phase II trials

The experimental drug or treatment is given to a larger group of people to see if it is effective and to further evaluate its safety.

Phase III trials

The experimental study drug or treatment is given to large groups of people. Researchers confirm its effectiveness, monitor side effects, compare it to commonly used treatments, and collect information that will allow the experimental drug or treatment to be used safely.

Phase IV trials

Post-marketing studies, which are conducted after a treatment is approved for use by the FDA, provide additional information including the treatment or drug’s risks, benefits, and best use.

Examples of other kinds of clinical research

Many people believe that all clinical research involves testing of new medications or devices. This is not true, however. Some studies do not involve testing medications and a person’s regular medications may not need to be changed. Healthy volunteers are also needed so that researchers can compare their results to results of people with the illness being studied. Some examples of other kinds of research include the following:

A long-term study that involves psychological tests or brain scans

A genetic study that involves blood tests but no changes in medication

A study of family history that involves talking to family members to learn about people’s medical needs and history.


What type of research is it?

To be able to ask the right questions about a paper, you need to find out what type of research it is. Read the abstract of the paper to find out what the researchers were trying to find out, and which methods they used.

There are four main types of health research. Research that:

  • Evaluates the impact of a treatment or other intervention. Researchers are often interested in finding out about the health effects of something, for example a new drug, health intervention or a change in the environment. To do this, researchers usually carry out trials.
  • Explores risk factors. Sometimes researchers want to find out what causes differences or changes in health. This could be something we do (e.g. smoking) or something about our environment (e.g. air pollution). To do this, researchers conduct observational studies using survey data or routinely collected data. Observational studies can be called cross-sectional studies, cohort studies, longitudinal studies, or case-control studies.
  • Explores how common something is. Researchers use methods like surveys and analyses of routine data to find out how common health behaviours (e.g. smoking or drinking) or health conditions (e.g. depression or high blood pressure) are. Types of survey include cross-sectional surveys, censuses, and longitudinal surveys. Sometimes researchers carry out a new survey, and other times they use data from a previous survey.
  • Seeks to understand people's views, experiences, attitudes and beliefs. Some studies report on what people think, or what their beliefs about an issue are. For example, a study might look at what people think about a treatment they have been given, what they think caused their illness or whether they were satisfied with a service they received. To do this, researchers usually use qualitative methods, including interviews, focus groups, and ethnography.

Often, research does not fit into neat categories. If you find the questions in your chosen category are not relevant to the paper you are reviewing, you can use the back button to return here and choose a different category.

Some research uses a combination of different research methods. This is called mixed-methods research. Reviewing mixed-methods research can be difficult because different methods have different quality concerns. If the paper you wish to review uses mixed methods, you could decide which method is the dominant method used in the research, and select the appropriate option below. For more information, see our introduction to mixed methods research.

Try to choose the category that best fits the research you are reviewing. If the research you are reviewing does not seem to fall under any of these categories, select Other types of research.



Quantitative Research – Methods, Types and Analysis


What is Quantitative Research


Quantitative research is a type of research that collects and analyzes numerical data to test hypotheses and answer research questions. This research typically involves a large sample size and uses statistical analysis to make inferences about a population based on the data collected. It often involves the use of surveys, experiments, or other structured data collection methods to gather quantitative data.

Quantitative Research Methods


Quantitative Research Methods are as follows:

Descriptive Research Design

Descriptive research design is used to describe the characteristics of a population or phenomenon being studied. This research method is used to answer the questions of what, where, when, and how. Descriptive research designs use a variety of methods such as observation, case studies, and surveys to collect data. The data is then analyzed using statistical tools to identify patterns and relationships.
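As a minimal sketch of the kind of summary a descriptive design produces, the standard-library `statistics` module can compute basic descriptive statistics for a small, invented survey sample:

```python
import statistics

ages = [25, 30, 30, 35, 40]  # invented respondent ages from a hypothetical survey

print(statistics.mean(ages))    # arithmetic mean (32)
print(statistics.median(ages))  # middle value (30)
print(statistics.mode(ages))    # most frequent value (30)
```

In a real descriptive study these summaries would be reported alongside frequencies and measures of spread for each variable of interest.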

Correlational Research Design

Correlational research design is used to investigate the relationship between two or more variables. Researchers use correlational research to determine whether a relationship exists between variables and to what extent they are related. This research method involves collecting data from a sample and analyzing it using statistical tools such as correlation coefficients.
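The correlation coefficient mentioned above (Pearson's r) can be computed from first principles. The data here are invented for illustration; a negative r close to -1 indicates a strong inverse relationship.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

hours_exercise = [1, 2, 3, 4, 5]      # invented data
resting_hr = [80, 76, 74, 70, 68]     # invented data
r = pearson_r(hours_exercise, resting_hr)
print(round(r, 3))  # -0.993: a strong negative correlation
```

Remember that correlation alone does not establish causation; that is why correlational designs are distinguished from experimental ones.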

Quasi-experimental Research Design

Quasi-experimental research design is used to investigate cause-and-effect relationships between variables. This research method is similar to experimental research design, but it lacks random assignment to conditions and therefore full control over the independent variable. Researchers use quasi-experimental research designs when it is not feasible or ethical to randomly assign participants to groups.

Experimental Research Design

Experimental research design is used to investigate cause-and-effect relationships between variables. This research method involves manipulating the independent variable and observing the effects on the dependent variable. Researchers use experimental research designs to test hypotheses and establish cause-and-effect relationships.

Survey Research

Survey research involves collecting data from a sample of individuals using a standardized questionnaire. This research method is used to gather information on attitudes, beliefs, and behaviors of individuals. Researchers use survey research to collect data quickly and efficiently from a large sample size. Survey research can be conducted through various methods such as online, phone, mail, or in-person interviews.

Quantitative Research Analysis Methods

Here are some commonly used quantitative research analysis methods:

Statistical Analysis

Statistical analysis is the most common quantitative research analysis method. It involves using statistical tools and techniques to analyze the numerical data collected during the research process. Statistical analysis can be used to identify patterns, trends, and relationships between variables, and to test hypotheses and theories.

Regression Analysis

Regression analysis is a statistical technique used to analyze the relationship between one dependent variable and one or more independent variables. Researchers use regression analysis to identify and quantify the impact of independent variables on the dependent variable.
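A simple (one-predictor) linear regression can be fitted with the ordinary least-squares formulas; the dose/effect data below are invented purely to illustrate the calculation.

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # covariance term
    sxx = sum((a - mx) ** 2 for a in x)                   # variance term
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

dose = [0, 1, 2, 3]    # invented independent variable
effect = [1, 3, 5, 7]  # invented dependent variable
slope, intercept = fit_line(dose, effect)
print(slope, intercept)  # 2.0 1.0
```

The slope quantifies the estimated change in the dependent variable per unit change in the independent variable, which is the "impact" regression analysis is used to measure.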

Factor Analysis

Factor analysis is a statistical technique used to identify underlying factors that explain the correlations among a set of variables. Researchers use factor analysis to reduce a large number of variables to a smaller set of factors that capture the most important information.

Structural Equation Modeling

Structural equation modeling is a statistical technique used to test complex relationships between variables. It involves specifying a model that includes both observed and unobserved variables, and then using statistical methods to test the fit of the model to the data.

Time Series Analysis

Time series analysis is a statistical technique used to analyze data that is collected over time. It involves identifying patterns and trends in the data, as well as any seasonal or cyclical variations.
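One elementary time-series technique is a simple moving average, which smooths short-term fluctuation so an underlying trend is easier to see. The monthly counts below are invented:

```python
def moving_average(series, window=3):
    """Simple moving average over a sliding window."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

monthly_cases = [9, 12, 15, 12, 15, 18]  # invented monthly counts
print(moving_average(monthly_cases))  # [12.0, 13.0, 14.0, 15.0]
```

The smoothed series rises steadily, revealing the upward trend hidden in the noisier raw counts.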

Multilevel Modeling

Multilevel modeling is a statistical technique used to analyze data that is nested within multiple levels. For example, researchers might use multilevel modeling to analyze data that is collected from individuals who are nested within groups, such as students nested within schools.

Applications of Quantitative Research

Quantitative research has many applications across a wide range of fields. Here are some common examples:

  • Market Research: Quantitative research is used extensively in market research to understand consumer behavior, preferences, and trends. Researchers use surveys, experiments, and other quantitative methods to collect data that can inform marketing strategies, product development, and pricing decisions.
  • Health Research: Quantitative research is used in health research to study the effectiveness of medical treatments, identify risk factors for diseases, and track health outcomes over time. Researchers use statistical methods to analyze data from clinical trials, surveys, and other sources to inform medical practice and policy.
  • Social Science Research: Quantitative research is used in social science research to study human behavior, attitudes, and social structures. Researchers use surveys, experiments, and other quantitative methods to collect data that can inform social policies, educational programs, and community interventions.
  • Education Research: Quantitative research is used in education research to study the effectiveness of teaching methods, assess student learning outcomes, and identify factors that influence student success. Researchers use experimental and quasi-experimental designs, as well as surveys and other quantitative methods, to collect and analyze data.
  • Environmental Research: Quantitative research is used in environmental research to study the impact of human activities on the environment, assess the effectiveness of conservation strategies, and identify ways to reduce environmental risks. Researchers use statistical methods to analyze data from field studies, experiments, and other sources.

Characteristics of Quantitative Research

Here are some key characteristics of quantitative research:

  • Numerical data: Quantitative research involves collecting numerical data through standardized methods such as surveys, experiments, and observational studies. This data is analyzed using statistical methods to identify patterns and relationships.
  • Large sample size: Quantitative research often involves collecting data from a large sample of individuals or groups in order to increase the reliability and generalizability of the findings.
  • Objective approach: Quantitative research aims to be objective and impartial in its approach, focusing on the collection and analysis of data rather than personal beliefs, opinions, or experiences.
  • Control over variables: Quantitative research often involves manipulating variables to test hypotheses and establish cause-and-effect relationships. Researchers aim to control for extraneous variables that may impact the results.
  • Replicable: Quantitative research aims to be replicable, meaning that other researchers should be able to conduct similar studies and obtain similar results using the same methods.
  • Statistical analysis: Quantitative research involves using statistical tools and techniques to analyze the numerical data collected during the research process. Statistical analysis allows researchers to identify patterns, trends, and relationships between variables, and to test hypotheses and theories.
  • Generalizability: Quantitative research aims to produce findings that can be generalized to larger populations beyond the specific sample studied. This is achieved through the use of random sampling methods and statistical inference.
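The last point, generalizability via random sampling, can be illustrated with an invented population whose true mean is known; a simple random sample yields a sample mean close to it (seed fixed so the sketch is reproducible):

```python
import random
import statistics

random.seed(0)  # fixed seed so this sketch is reproducible
population = list(range(100))           # invented population; true mean is 49.5
sample = random.sample(population, 50)  # simple random sample, without replacement
sample_mean = statistics.mean(sample)
# With random sampling, the sample mean is an unbiased estimate of the
# population mean, which is what licenses generalizing beyond the sample.
print(sample_mean)
```

Larger samples shrink the typical gap between the sample mean and the population mean, which is why quantitative studies favor large sample sizes.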

Examples of Quantitative Research

Here are some examples of quantitative research in different fields:

  • Market Research: A company conducts a survey of 1000 consumers to determine their brand awareness and preferences. The data is analyzed using statistical methods to identify trends and patterns that can inform marketing strategies.
  • Health Research: A researcher conducts a randomized controlled trial to test the effectiveness of a new drug for treating a particular medical condition. The study involves collecting data from a large sample of patients and analyzing the results using statistical methods.
  • Social Science Research: A sociologist conducts a survey of 500 people to study attitudes toward immigration in a particular country. The data is analyzed using statistical methods to identify factors that influence these attitudes.
  • Education Research: A researcher conducts an experiment to compare the effectiveness of two different teaching methods for improving student learning outcomes. The study involves randomly assigning students to different groups and collecting data on their performance on standardized tests.
  • Environmental Research: A team of researchers conducts a study to investigate the impact of climate change on the distribution and abundance of a particular species of plant or animal. The study involves collecting data on environmental factors and population sizes over time and analyzing the results using statistical methods.
  • Psychology: A researcher conducts a survey of 500 college students to investigate the relationship between social media use and mental health. The data is analyzed using statistical methods to identify correlations and potential causal relationships.
  • Political Science: A team of researchers conducts a study to investigate voter behavior during an election. They use survey methods to collect data on voting patterns, demographics, and political attitudes, and analyze the results using statistical methods.

How to Conduct Quantitative Research

Here is a general overview of how to conduct quantitative research:

  • Develop a research question: The first step in conducting quantitative research is to develop a clear and specific research question. This question should be based on a gap in existing knowledge, and should be answerable using quantitative methods.
  • Develop a research design: Once you have a research question, you will need to develop a research design. This involves deciding on the appropriate methods to collect data, such as surveys, experiments, or observational studies. You will also need to determine the appropriate sample size, data collection instruments, and data analysis techniques.
  • Collect data: The next step is to collect data. This may involve administering surveys or questionnaires, conducting experiments, or gathering data from existing sources. It is important to use standardized methods to ensure that the data is reliable and valid.
  • Analyze data: Once the data has been collected, it is time to analyze it. This involves using statistical methods to identify patterns, trends, and relationships between variables. Common statistical techniques include correlation analysis, regression analysis, and hypothesis testing.
  • Interpret results: After analyzing the data, you will need to interpret the results. This involves identifying the key findings, determining their significance, and drawing conclusions based on the data.
  • Communicate findings: Finally, you will need to communicate your findings. This may involve writing a research report, presenting at a conference, or publishing in a peer-reviewed journal. It is important to clearly communicate the research question, methods, results, and conclusions to ensure that others can understand and replicate your research.
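The "analyze data" step above can be sketched end to end with a toy hypothesis test. This example uses invented outcome scores for two groups and a permutation test, one simple way to test for a difference in group means (the data, group names, and choice of test are illustrative, not prescribed by any particular study):

```python
import random

def permutation_test(group_a, group_b, n_perm=5000, seed=1):
    """Two-sided permutation test for a difference in group means."""
    rng = random.Random(seed)  # fixed seed so this sketch is reproducible
    def mean(g):
        return sum(g) / len(g)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = group_a + group_b
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # re-assign group labels at random
        if abs(mean(pooled[:n_a]) - mean(pooled[n_a:])) >= observed:
            extreme += 1
    return extreme / n_perm  # fraction of shuffles at least as extreme

treated = [7, 9, 8, 10, 9]  # invented outcome scores
control = [5, 6, 5, 7, 6]   # invented outcome scores
p = permutation_test(treated, control)
print(round(p, 3))  # a small p-value: the observed difference is rare under the null
```

Interpreting the result (a small p-value suggests the group difference is unlikely to be due to chance alone) and communicating it clearly correspond to the final two steps in the list above.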

When to use Quantitative Research

Here are some situations when quantitative research can be appropriate:

  • To test a hypothesis: Quantitative research is often used to test a hypothesis or a theory. It involves collecting numerical data and using statistical analysis to determine if the data supports or refutes the hypothesis.
  • To generalize findings: If you want to generalize the findings of your study to a larger population, quantitative research can be useful. This is because it allows you to collect numerical data from a representative sample of the population and use statistical analysis to make inferences about the population as a whole.
  • To measure relationships between variables: If you want to measure the relationship between two or more variables, such as the relationship between age and income, or between education level and job satisfaction, quantitative research can be useful. It allows you to collect numerical data on both variables and use statistical analysis to determine the strength and direction of the relationship.
  • To identify patterns or trends: Quantitative research can be useful for identifying patterns or trends in data. For example, you can use quantitative research to identify trends in consumer behavior or to identify patterns in stock market data.
  • To quantify attitudes or opinions: If you want to measure attitudes or opinions on a particular topic, quantitative research can be useful. It allows you to collect numerical data using surveys or questionnaires and analyze the data using statistical methods to determine the prevalence of certain attitudes or opinions.
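To make the hypothesis-testing idea above concrete, here is a minimal one-sample test sketch in Python. The satisfaction scores and benchmark are invented for illustration, and a z approximation is used only to keep the example self-contained (for a sample this small, a t-test would be the standard choice):

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

# Hypothetical questionnaire data: job-satisfaction scores on a 1-10 scale
scores = [6, 7, 5, 8, 7, 6, 7, 8, 6, 7]
benchmark = 6.0  # null hypothesis: the population mean equals this benchmark

# Test statistic: how many standard errors the sample mean lies from the benchmark
z = (mean(scores) - benchmark) / (stdev(scores) / sqrt(len(scores)))

# Two-sided p-value from the standard normal distribution
p = 2 * (1 - NormalDist().cdf(abs(z)))

print(round(z, 2))  # 2.33
print(p < 0.05)     # True: the data would reject the null at the 5% level
```

The logic mirrors the bullet above: numerical data are collected, a test statistic is computed, and the p-value tells us whether the data support or refute the hypothesis.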

Purpose of Quantitative Research

The purpose of quantitative research is to systematically investigate and measure the relationships between variables or phenomena using numerical data and statistical analysis. The main objectives of quantitative research include:

  • Description: To provide a detailed and accurate description of a particular phenomenon or population.
  • Explanation: To explain the reasons for the occurrence of a particular phenomenon, such as identifying the factors that influence a behavior or attitude.
  • Prediction: To predict future trends or behaviors based on past patterns and relationships between variables.
  • Control: To identify the best strategies for controlling or influencing a particular outcome or behavior.

Quantitative research is used in many different fields, including social sciences, business, engineering, and health sciences. It can be used to investigate a wide range of phenomena, from human behavior and attitudes to physical and biological processes. The purpose of quantitative research is to provide reliable and valid data that can be used to inform decision-making and improve understanding of the world around us.

Advantages of Quantitative Research

There are several advantages of quantitative research, including:

  • Objectivity: Quantitative research is based on objective data and statistical analysis, which reduces the potential for bias or subjectivity in the research process.
  • Reproducibility: Because quantitative research involves standardized methods and measurements, it is more likely to be reproducible and reliable.
  • Generalizability: Quantitative research allows for generalizations to be made about a population based on a representative sample, which can inform decision-making and policy development.
  • Precision: Quantitative research allows for precise measurement and analysis of data, which can provide a more accurate understanding of phenomena and relationships between variables.
  • Efficiency: Quantitative research can be conducted relatively quickly and efficiently, especially when compared to qualitative research, which may involve lengthy data collection and analysis.
  • Large sample sizes: Quantitative research can accommodate large sample sizes, which can increase the representativeness and generalizability of the results.

Limitations of Quantitative Research

There are several limitations of quantitative research, including:

  • Limited understanding of context: Quantitative research typically focuses on numerical data and statistical analysis, which may not provide a comprehensive understanding of the context or underlying factors that influence a phenomenon.
  • Simplification of complex phenomena: Quantitative research often involves simplifying complex phenomena into measurable variables, which may not capture the full complexity of the phenomenon being studied.
  • Potential for researcher bias: Although quantitative research aims to be objective, there is still the potential for researcher bias in areas such as sampling, data collection, and data analysis.
  • Limited ability to explore new ideas: Quantitative research is often based on pre-determined research questions and hypotheses, which may limit the ability to explore new ideas or unexpected findings.
  • Limited ability to capture subjective experiences: Quantitative research is typically focused on objective data and may not capture the subjective experiences of individuals or groups being studied.
  • Ethical concerns: Quantitative research may raise ethical concerns, such as invasion of privacy or the potential for harm to participants.

About the author

Muhammad Hassan

Researcher, Academic Writer, Web developer


American Psychological Association

Title Page Setup

A title page is required for all APA Style papers. There are both student and professional versions of the title page. Students should use the student version of the title page unless their instructor or institution has requested they use the professional version. APA provides a student title page guide (PDF, 199KB) to assist students in creating their title pages.

Student title page

The student title page includes the paper title, author names (the byline), author affiliation, course number and name for which the paper is being submitted, instructor name, assignment due date, and page number, as shown in this example.

diagram of a student page

Title page setup is covered in the seventh edition APA Style manuals in the Publication Manual, Section 2.3, and the Concise Guide, Section 1.6.

Student papers do not include a running head unless requested by the instructor or institution.

Follow the guidelines described next to format each element of the student title page.

Paper title

Place the title three to four lines down from the top of the title page. Center it and type it in bold font. Capitalize major words of the title. Place the main title and any subtitle on separate double-spaced lines if desired. There is no maximum length for titles; however, keep titles focused and include key terms.

Author names

Place one double-spaced blank line between the paper title and the author names. Center author names on their own line. If there are two authors, use the word “and” between authors; if there are three or more authors, place a comma between author names and use the word “and” before the final author name.

Cecily J. Sinclair and Adam Gonzaga

Author affiliation

For a student paper, the affiliation is the institution where the student attends school. Include both the name of any department and the name of the college, university, or other institution, separated by a comma. Center the affiliation on the next double-spaced line after the author name(s).

Department of Psychology, University of Georgia

Course number and name

Provide the course number as shown on instructional materials, followed by a colon and the course name. Center the course number and name on the next double-spaced line after the author affiliation.

PSY 201: Introduction to Psychology

Instructor name

Provide the name of the instructor for the course using the format shown on instructional materials. Center the instructor name on the next double-spaced line after the course number and name.

Dr. Rowan J. Estes

Assignment due date

Provide the due date for the assignment. Center the due date on the next double-spaced line after the instructor name. Use the date format commonly used in your country.

October 18, 2020
18 October 2020

Page number

Use the page number 1 on the title page. Use the automatic page-numbering function of your word processing program to insert page numbers in the top right corner of the page header.

1

Professional title page

The professional title page includes the paper title, author names (the byline), author affiliation(s), author note, running head, and page number, as shown in the following example.

diagram of a professional title page

Follow the guidelines described next to format each element of the professional title page.

Paper title

Place the title three to four lines down from the top of the title page. Center it and type it in bold font. Capitalize major words of the title. Place the main title and any subtitle on separate double-spaced lines if desired. There is no maximum length for titles; however, keep titles focused and include key terms.

Author names

Place one double-spaced blank line between the paper title and the author names. Center author names on their own line. If there are two authors, use the word “and” between authors; if there are three or more authors, place a comma between author names and use the word “and” before the final author name.

Francesca Humboldt

When different authors have different affiliations, use superscript numerals after author names to connect the names to the appropriate affiliation(s). If all authors have the same affiliation, superscript numerals are not used (see Section 2.3 of the Publication Manual for more on how to set up bylines and affiliations).

Tracy Reuter, Arielle Borovsky, and Casey Lew-Williams

Author affiliation

For a professional paper, the affiliation is the institution at which the research was conducted. Include both the name of any department and the name of the college, university, or other institution, separated by a comma. Center the affiliation on the next double-spaced line after the author names; when there are multiple affiliations, center each affiliation on its own line.

Department of Nursing, Morrigan University

When different authors have different affiliations, use superscript numerals before affiliations to connect the affiliations to the appropriate author(s). Do not use superscript numerals if all authors share the same affiliations (see Section 2.3 of the Publication Manual for more).

Department of Psychology, Princeton University
Department of Speech, Language, and Hearing Sciences, Purdue University

Author note

Place the author note in the bottom half of the title page. Center and bold the label “Author Note.” Align the paragraphs of the author note to the left. For further information on the contents of the author note, see Section 2.7 of the Publication Manual.

Running head

The running head appears in all-capital letters in the page header of all pages, including the title page. Align the running head to the left margin. Do not use the label “Running head:” before the running head.

PREDICTION ERRORS SUPPORT CHILDREN’S WORD LEARNING

Page number

Use the page number 1 on the title page. Use the automatic page-numbering function of your word processing program to insert page numbers in the top right corner of the page header.

1

What Are the Different Types of Bullying?

Bullying can come in many different forms

When you think of bullying, the physically and verbally aggressive behavior that school children endure from their peers might be what immediately springs to mind. However, it's important to recognize that bullying can come in many different forms. Just because it doesn't involve physical or verbal aggression doesn't mean that it doesn't count as bullying. In fact, there are actually six different kinds of bullying: physical, verbal, relational, cyber, sexual, and prejudicial.

These types of bullying often overlap, and bullies frequently use more than one form to abuse a victim. Moreover, bullying isn't limited to kids and teenagers; adults can be guilty of bullying, too.

At a Glance

Bullying is a common problem among school-age kids, but it can affect anyone of any age. This intentional aggressive behavior is often about intimidation and control, and it can range from overt acts of violence to more subtle forms of emotional intimidation. Being able to recognize the different forms that bullying can take, including physical, verbal, relational, cyber, sexual, and prejudicial, is important. It can take a heavy toll on victims, so spotting the signs and taking action is crucial.

What Exactly Is Bullying?

Bullying is defined as any intentional, repeated aggressive behavior directed by a perpetrator against a target in the same age group.

One of the most noteworthy components of bullying is an imbalance of power between the bully and the victim.

Sometimes, the power imbalance is obvious when, for example, a bigger, stronger kid bullies a weaker, smaller kid or when a group of people bullies a single individual. However, sometimes the power imbalance is more difficult to discern because it involves less obvious factors, such as differences in popularity, intelligence, or ability, or knowledge of the information the victim finds embarrassing.

Bullying falls into six categories, some of which are more obvious than others. They include:

  • Physical bullying
  • Verbal bullying
  • Relational bullying
  • Cyberbullying
  • Sexual bullying
  • Prejudicial bullying

Physical Bullying

Physical bullying is the most obvious type of bullying and what many people think of when they imagine this kind of aggression .

Physical bullying involves any assault on a person's body, including hitting, kicking, tripping, or pushing. It can also extend to inappropriate hand gestures or stealing or breaking a victim's belongings.

Physical bullying is perpetrated by an individual or group of individuals who are bigger or stronger than the individual being targeted.

If a physical altercation happens between two people of similar size and strength, it's not considered physical bullying.

Studies have shown that boys are more likely to be involved in physical bullying than girls. For example, research has found that boys are more likely to be both the perpetrators and victims of physical bullying.

Some research suggests that such differences stem from gender differences in socialization. Boys are socialized to use direct aggression, whereas girls are socialized to express aggression indirectly.

Verbal Bullying

Verbal bullying involves using spoken or written words to insult or intimidate a victim. It includes name-calling, teasing, and even threats.

One study found that verbal bullying was the most common form of bullying. Boys experienced this type of bullying at a slightly higher rate than girls, and most were bullied by their own friends.

Verbal bullying isn't always easy to recognize because it often takes place when authority figures aren't around. Moreover, a bully can pass it off as good-natured ribbing between friends. As a result, it can be difficult for the victim to prove. Therefore, this form of bullying can become a long-term source of stress and anxiety.

Relational Bullying

Relational bullying, which is also referred to as relational aggression or social bullying, involves actions intended to harm a victim's reputation or relationships. It can include embarrassing the victim in public, spreading rumors, purposely leaving them out of social situations, or ostracizing them from a group.

Unlike more overt types of bullying, it is especially sly and insidious because it involves social manipulation.

Relational bullying is often associated with so-called "mean girls." However, while research has shown girls are more often the victims of relational bullying than boys, boys can also be perpetrators of this type of bullying.

On the other hand, studies suggest that girls who engage in relational bullying have worse adjustment problems , including issues maintaining fulfilling and positive relationships.

Relational bullying can lead to isolation , loneliness , depression, and social anxiety. Unfortunately, research indicates that teachers, school counselors, and other educational staff tend to feel relational bullying is less serious and have less empathy for victims of relational bullying than victims of physical and verbal bullying.

This may be because the severity of relational bullying is more challenging to detect. Physical and verbal bullying results in disciplinary action toward the perpetrator around 50% of the time, whereas this response only happens 10% of the time with relational bullying.

Cyberbullying

Cyberbullying is bullying that happens via electronic devices like computers, smartphones, and tablets. It can take place over text messages, social media, apps, or online forums and involves posting or sending harmful content, including messages and photos, and sharing personal information that causes humiliation.

Research by the Cyberbullying Research Center shows that 15% of 9- to 12-year-olds and 37% of 13- to 17-year-olds have experienced cyberbullying at some point in their lives.

In-person bullying is still more prevalent than cyberbullying, but cyberbullying is a growing problem. Not only are perpetrators of cyberbullying less likely to be caught, but the online nature of cyberbullying can also be especially damaging to victims.

People have their devices on them all day, every day, so if they're being cyberbullied, they never get a break, even in their homes.

Similarly, targets of cyberbullying may be constantly reminded of the online bullying they've endured because, even if they block the cyberbully, others may see and share the evidence.

Sexual Bullying

Sexual bullying is online or in-person bullying that involves sexual comments or actions, including sexual jokes and name-calling, crude gestures, spreading sexual rumors, sending sexual photos or videos, and touching or grabbing someone without permission.

Sexual bullying and harassment are remarkably widespread. A 2019 study found that 81% of women and 43% of men experienced sexual harassment or assault at some point in their lifetime.

Meanwhile, sexting (sending or receiving sexually explicit messages or images between electronic devices) is becoming increasingly common.

Research shows that among kids between the ages of 11 and 17, 15% of them sent sexts and 27% received sexts; the prevalence of the behavior increases as adolescents age.

When sexts are sent without consent, such as when private nude photos or videos of an individual are widely shared among a peer group, it can lead to sexual bullying and even sexual assault .

Prejudicial Bullying

Prejudicial bullying involves online or in-person bullying based on the target's race, ethnicity, religion, or sexual orientation . It is based on stereotypes and is often a result of the belief that some people deserve to be treated with less respect than others.

Though prejudicial bullying has been studied less than other types of bullying, research indicates that ethnic and sexual minorities are more likely to be bullied than their peers.

However, ethnic minorities who attend more ethnically diverse schools experience less bullying than those in schools that are more ethnically homogeneous.

How Common Is Bullying?

Bullying is widespread and can negatively impact both bullying victims and the bullies themselves. A 2019 survey by the Centers for Disease Control and Prevention (CDC) found that 19.5% of ninth through twelfth graders were bullied on school property in the 12 months prior to completing the questionnaire.

Moreover, a study by the World Health Organization (WHO) conducted in 2013 and 2014 in 42 countries in Europe and North America found that, on average, 14% of 11-year-old boys and 11% of 11-year-old girls were bullied at least twice in the previous two to three months.

Mental Health Effects of Bullying

People who are bullied can experience a plethora of short- and long-term problems , including depression and anxiety, social withdrawal , substance abuse, difficulties at school or work such as underachieving and poor attendance, and even suicide .

In addition, children who are targets of bullying may become victims or perpetrators of violence later in life. Meanwhile, those who bully others are more likely to get into fights and vandalize property, abuse drugs and alcohol, have criminal convictions in adulthood , and abuse their romantic partners and children .

Even people who simply observe bullying can experience issues, including mental health difficulties and increased substance use.

Bullying can have lasting mental health effects, which is why it's so important to recognize it and address it as soon as possible. While physical and verbal bullying are the most recognizable forms, other types are also common and often occur together. Relational, cyber, sexual, and prejudicial bullying are other types of bullying that are sometimes less readily apparent (but just as damaging).

If you are having suicidal thoughts, contact the National Suicide Prevention Lifeline at 988 for support and assistance from a trained counselor. If you or a loved one are in immediate danger, call 911.

For more mental health resources, see our National Helpline Database .

Krešić Ćorić M, Kaštelan A. Bullying through the Internet - Cyberbullying. Psychiatr Danub. 2020;32(Suppl 2):269-272.

Arseneault L. Annual research review: The persistent and pervasive impact of being bullied in childhood and adolescence: implications for policy and practice. Journal of Child Psychology and Psychiatry. 2018;59(4):405-421. doi:10.1111/jcpp.12841

StopBullying.gov. What is bullying?

Armitage R. Bullying in children: impact on child health. BMJ Paediatr Open. 2021;5(1):e000939. doi:10.1136/bmjpo-2020-000939

Elmahdy M, Maashi NA, Hakami SO, et al. Prevalence of bullying and its association with health-related quality of life among adolescents in Jazan: A cross-sectional study. Cureus. 2022;14(8):e28522. doi:10.7759/cureus.28522

Orpinas P, McNicholas C, Nahapetyan L. Gender differences in trajectories of relational aggression perpetration and victimization from middle to high school. Aggress Behav. 2015;41(5):401-412. doi:10.1002/ab.21563

Centifanti LCM, Fanti KA, Thomson ND, Demetriou V, Anastassiou-Hadjicharalambous X. Types of relational aggression in girls are differentiated by callous-unemotional traits, peers and parental overcontrol. Behavioral Sciences. 2015;5(4):518-536. doi:10.3390/bs5040518

Cook EE, Nickerson AB, Werth JM, Allen KP. Service providers’ perceptions of and responses to bullying of individuals with disabilities. J Intellect Disabil. 2017;21(4):277-296. doi:10.1177/1744629516650127

Kumar VL, Goldstein MA. Cyberbullying and adolescents. Curr Pediatr Rep. 2020;8(3):86-92. doi:10.1007/s40124-020-00217-6

Cyberbullying Research Center. Tween Cyberbullying in 2020.

Cyberbullying Research Center. 2019 Cyberbullying Data.

Graber D. Raising Humans in a Digital World: Helping Kids Build a Healthy Relationship with Technology. HarperCollins Leadership; 2019.

Stop Street Harassment. National studies.

Madigan S, Ly A, Rash CL, Van Ouytsel J, Temple JR. Prevalence of multiple forms of sexting behavior among youth: A systematic review and meta-analysis. JAMA Pediatr. 2018;172(4):327-335. doi:10.1001/jamapediatrics.2017.5314

Menesini E, Salmivalli C. Bullying in schools: The state of knowledge and effective interventions. Psychology, Health & Medicine. 2017;22(sup1):240-253. doi:10.1080/13548506.2017.1279740

Centers for Disease Control and Prevention. YRBSS | Youth Risk Behavior Surveillance System | Data | Adolescent and School Health. Cdc.gov. 2019.

World Health Organization. Health Behaviour in School-Aged Children (HBSC).

StopBullying.gov. Effects of Bullying.

By Cynthia Vinney, PhD Cynthia Vinney, PhD is an expert in media psychology and a published scholar whose work has been published in peer-reviewed psychology journals.


Research Methodology: Overview of Qualitative Research

Qualitative research methods are a robust tool for chaplaincy research questions. Similar to much of chaplaincy clinical care, qualitative research generally works with written texts, often transcriptions of individual interviews or focus group conversations, and seeks to understand the meaning of experience in a study sample. This article describes three common methodologies: ethnography, grounded theory, and phenomenology. Issues to consider relating to the study sample, design, and analysis are discussed. Ways to enhance the validity of the data, as well as reliability and ethical issues in qualitative research, are described. Qualitative research is an accessible way for chaplains to contribute new knowledge about the sacred dimension of people's lived experience.

INTRODUCTION

Qualitative research is “the systematic collection, organization, and interpretation of textual material derived from talk or conversation. It is used in the exploration of meanings of social phenomena as experienced by individuals themselves, in their natural context” (Malterud, 2001, p. 483). It can be the most accessible means of entry for chaplains into the world of research because, like clinical conversations, it focuses on eliciting people's stories. The stories can actually be expressed in almost any medium: conversations (interviews or focus groups), written texts (journals, prayers, or letters), or visual forms (drawings, photographs). Qualitative research may involve presenting data collected from a single person, as in a case study (Risk, 2013), or from a group of people, as in one of my studies of parents of children with cystic fibrosis (CF) (Grossoehme et al., 2013). Whole books are devoted to qualitative research methodology and, indeed, to the individual methods themselves. This article is intended to present, in rather broad brushstrokes, some of the “methods of choice” and to suggest some issues to consider before embarking on a qualitative research project. Helpful texts are cited to provide resources for more complete information.

Although virtually anything may be data, spoken mediums are the most common forms of collecting data in health research, so the focus of this article will mainly be on interviews and, to a lesser extent, focus groups. Interviews explore experiences of individuals and, through a series of questions and answers, the meaning individuals give to their experiences (Tong, Sainsbury, & Craig, 2007). They may be “structured” interviews, in which an interview guide is used with pre-determined questions from which no deviation is permitted by the interviewer, or semi-structured interviews, in which an interview guide is used with pre-determined questions and potential follow-up questions. The latter allows the interviewer to pursue topics that arise during the interview that seem relevant (Cohen & Crabtree, 2006). Writing good questions is harder than it appears! In my first unit of CPE, the supervisor returned verbatims, especially our early efforts, with “DCFQ” written in the margin, for “direct, closed, factual question.” We quickly learned to avoid DCFQs in our clinical conversations because they did not create the space for reflection on illness and the sacred the way open-ended questions did. To some extent, writing good open-ended questions that elicit stories can come more readily to chaplains, due perhaps to our training, than to investigators from other disciplines. This is not to say writing an interview guide is easy or an aspect of research that can be taken lightly, as the quality of the data you collect, and hence the quality of your study, depends on the quality of your interview questions.

Data may also be collected using focus groups. Focus groups are normally built around a specific topic. They almost always follow a semi-structured format and include open discussion of responses among the participants, who may number from four to twelve ( Tong et al., 2007 ). They provide an excellent means to gather data on the entire range of responses to a topic, on the social interactions between participants, or to clarify a process. Once the data are collected, the analytic approach is typically similar to that for interview data.

Qualitative investigators are not disinterested outsiders who merely observe without interacting with participants; they affect, and are affected by, their data. The investigator's emotions as they read participants' narratives are data to be included in the study. Simply asking “research” questions can itself be a chaplaincy intervention: what we ask affects the other person and can lead them to reflect and change ( Grossoehme, 2011 ). It is important to articulate our biases and understand how they influence us when we collect and analyze data. Qualitative research, especially the data coding, is often done by a small group of researchers; this minimizes the bias of any individual investigator. Inevitably, two or more people will code passages differently at times, and it is important to establish at the outset how such discrepancies will be handled.

Ensuring Rigor, Validity, and Reliability

Some people think qualitative research is not very robust or significant. This attitude is due, in part, to the poor quality of some early efforts. Increasingly, however, qualitative studies have improved in rigor, and reviewers of qualitative manuscripts expect investigators to have addressed problematic issues from the start of the project. Two important areas are validity and reliability. Validity refers to whether or not the final product (usually referred to as a “model”) truly portrays what it claims to portray. If you think of a scale on which you weigh yourself, you want a valid reading so that you know your correct weight. Reliability refers to the extent to which the results are repeatable; if someone else repeated this study, would they obtain the same result? To continue the scale analogy, a reliable scale gives the same weight every time I step on it. A scale can be reliable without being valid: the scale could reliably read 72 pounds every time I step on it, but that value is hardly correct, so the measure is not valid.

Swinton and Mowat (2006) discussed ensuring the “trustworthiness” of the data. Narrative data which are “rich” in their use of metaphor and description, and which express deeper levels of meaning and nuance compared to everyday language, are likely to yield a trustworthy final model, because the investigators have done a credible job of completely describing and understanding the topic under study. Validity is also enhanced by some methodologies, such as grounded theory, which use participants' own words to name categories and themes, instead of labels given by the investigator. The practice of “member checking” also enhances validity. Once the analyses are complete and a final model has been developed, the findings are shown to all or some of the participants (the members), who are invited to check the findings and give feedback. Do they see themselves in the words or conceptual model that is presented? Do the findings offer participants a new insight, or do they nod agreement without really engaging the findings?

Reliability

One means of demonstrating reliability is to document the research decisions made along the way, as they were made, perhaps in a research diary ( Swinton & Mowat, 2006 ). Qualitative methodologies accept that the investigator is part of what is being studied and will influence it, and that this does not devalue a study but, in fact, enhances it. Simply deciding which questions to ask or not ask, and of whom to ask them (and of whom not to), reflects decisions that should be consciously made and documented. Another researcher should be able to understand what was done and why from reading the research diary.

ETHNOGRAPHIC RESEARCH

Elisa Sobo (2009) defines ethnography as the presentation of “… a given group's conceptual world, seen and experienced from the inside” (p. 297). Ethnography answers the question, “What is it like to be this person?” One example of this kind of study comes from the work of Fore and colleagues ( Fore, Goldenhar, Margolis, & Seid, 2013 ). In order to design tools that would enable clinicians and persons with pediatric inflammatory bowel disease (IBD) to work together more efficiently, an ethnographic study was undertaken to learn what it was like for a family when a child had IBD. After 36 interviews, the study team was able to create three parent-child dyad personas: archetypes of parents and children with IBD based directly on the data they gathered. These personas were used by the design team to think about how different types of parents and children adapted to the disease and about what tools should be developed to help them. An ethnographic study is the method of choice when the goal is to understand a culture and to present, or explain, its spoken and unspoken nature to people who are not part of the culture, as in the IBD example above. Before “outsiders” could think about the needs of people with IBD, it was necessary to learn what it is like to live with this disease.

Determining the sample in ethnographic studies typically means using what is called a purposive sample ( Newfield, Sells, Smith, Newfield, & Newfield, 1996 ). Purposive samples are based on criteria that the investigator establishes at the outset, which describe participant characteristics. In the aforementioned IBD example, the criteria were: (1) being a person with IBD who was between 12 and 22 years old, or the parent of such a child; (2) receiving IBD care, or having a child who received IBD care, at one of a particular group of treatment centers; (3) being a pediatric gastrointestinal nurse at one of the centers; or (4) being a physician/researcher at one of five treatment centers. Having a sample that is representative of the larger population, always the goal in quantitative research, is not the point in ethnographic studies. Here, the goal is to recruit participants who have the experience to respond to the questions. Out of participants' intimate knowledge of their culture, the investigator can build a theory, or conceptual model, which could later be tested for generalizability in an entire population.

Ethnographic study designs typically involve a combination of data collection methods. Whenever possible, observing the participants in the midst of whatever experience is the study's focus is desirable. In the process of an ethnographic project on CF, for instance, two students spent a twelve-hour period at the home of a family with a child who had CF, taking notes about what they saw and heard. Interviews with participants are frequently employed to learn more about the experience of interest. An example of this is the work of Sobo and colleagues, who interviewed parents of pediatric patients in a clinic to ask about the barriers they experienced obtaining health care for their child ( Seid, Sobo, Gelhard, & Varni, 2004 ). Diaries and journals detailing people's lived experience may also be used, alone, or in combination with other methods.

Analysis of ethnographic data is variable, depending on the study's goal. One common analytic approach is to begin analysis after the first few interviews have been completed, reading them to get a sense of their content. The next step is to name the seemingly important words or phrases. At this point, one might begin to see how the names relate to each other; this is the beginning of theory development. This process continues until all the data are collected. At that point, the data are sorted by the names, with data from multiple participants clustered under each topic name ( Boyle, 1994 ). Similar names may be grouped together, or placed under a larger label name (i.e., category). In a sense, what happens is that each participant's voice is broken into individual fragments, and everyone's fragments that have the same name are put together. From individual voices speaking on multiple topics, there is now one topic with multiple voices speaking to it.
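The regrouping step described above, in which fragments from many voices are gathered under shared topic names, can be sketched in a few lines of Python. The participants, code names, and excerpts below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical coded fragments: (participant, code name, excerpt).
fragments = [
    ("P01", "waiting", "We spent hours in the clinic waiting room."),
    ("P02", "waiting", "The hardest part was the waiting."),
    ("P01", "routine", "Treatments are just part of our morning now."),
    ("P03", "waiting", "You learn to bring things to do."),
]

# Regroup: from one voice speaking on many topics
# to one topic with many voices speaking to it.
by_code = defaultdict(list)
for participant, code, excerpt in fragments:
    by_code[code].append((participant, excerpt))

for code, voices in by_code.items():
    print(code, "->", [p for p, _ in voices])
```

The grouping itself is mechanical; the analytic work lies in choosing and refining the names under which the fragments are clustered.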

GROUNDED THEORY RESEARCH

Grounded theory is “grounded” in its data; this inductive approach collects data while simultaneously analyzing them and using the emerging theory to inform further data collection ( Rafuls & Moon, 1996 ). This cycle continues until the categories are said to be “saturated,” which typically means the point at which no new information is being learned ( Morse, 1995 ). This methodology is generally credited to Glaser and Strauss, who wanted to create a means of developing theoretical models from empirical data ( Charmaz, 2005 ). Perhaps more than in any other qualitative methodology, the person of the investigator is key. The extent to which the investigator notices subtle nuances in the data and responds to them with new questions for future participants, or revises an emerging theory, is the extent to which a grounded theory study truly presents a theory capturing the fullness of the data from which it was built. It is also the extent to which the theory is capable of being used to guide future research or alter clinical practice. Grounded theory is the method of choice when there is no existing hypothesis to test. For instance, there were no published data on how parents use faith to cope after their child's diagnosis with CF. Using grounded theory allowed us to develop a theory, or conceptual model, of how parents used faith to cope ( Grossoehme, Ragsdale, Wooldridge, Cotton, & Seid, 2010 ). An excellent discussion of this method is provided by Charmaz (2006) .

The nature of the research question should dictate the sample description, which should be defined before beginning the data collection. In some cases, the incidence of the phenomenon may set limits on the sample. For example, a study of religious coping by adults who were diagnosed with CF after age 18 years concerned a low-incidence phenomenon: the research question immediately limited the number of eligible adults in a four-state area to approximately 25 ( Grossoehme et al., 2012 ). Knowing that between 12 and 20 participants might be required in order to have sufficient data to convince ourselves that our categories were indeed saturated, it would not have made sense to limit our sample in other ways: for example, by selecting representative individuals spread across the number of years since diagnosis. In some studies, the goal is to learn what makes a particular subset of a larger sample special; these subsets are known as “positive deviants” ( Bradley et al., 2009 ).

Once the sample is defined and data collection begins, the analytic process begins shortly thereafter. As will be described in the following paragraphs, interviews and other forms of spoken communication are nearly always transcribed, typically verbatim. Unlike most other qualitative methods, grounded theory uses an iterative design. Once the third or fourth interview has been completed and transcribed, and before proceeding with further interviews, it is time to begin analyzing the transcripts. There are two aspects to this. The first is to code the data that you have. Grounded theory prefers to use the participants' own words as the code, rather than having the investigator name it. For example, we coded part of the following transcript excerpt:

  • INTERVIEWER: OK. Have your beliefs or perhaps relationship with God changed at all because of what you've gone through the last nine and 10 months with N.?
  • INTERVIEWEE: Yeah, I mean, I feel that I'm stronger than I was before actually.
  • INTERVIEWER: Hmm-hmm. How so? Can you put that into words? I know some of these could be hard to talk about but …
  • INTERVIEWEE: I don't know, I feel like I'm putting his life more in God's hands than I ever was before.

We labeled, or coded, these data as “I'm putting his life more in God's hands,” whereas in a different methodology we might have simply named it “Trusting God.” The focus is on the action in the narrative. Although it can be difficult, you as a researcher must try very hard to set your own ideas aside. Remember that you are doing this because there is no pre-existing theory about what you are studying, so you should not be guided by a theory you have in your mind. You must let the data speak for themselves.

The second point is to reflect on the codes and what they are already telling you. What questions are eliciting the narrative data you want? Which ones are not? Questions that are not leading you to the data you want probably need to be changed. Interesting, novel ideas may emerge from the data, as may topics you did not anticipate, and so did not follow up on, but now want to know more about. What are the data not telling you that you are seeking? All of this information flows back into revising the semi-structured interview guide ( Charmaz, 2006 ). In my experience, this iterative revision raised mild concern with an IRB reviewer who had not encountered the methodology before. The concern was overcome by showing that this is an accepted method with a voluminous literature behind it, and that the types of item revisions were not expected to significantly alter the study's effect on the participants. From this point onward you collect data, code it, and analyze it simultaneously. As you code a new transcript and come across a statement similar to others, you can begin to put them together. If you are using qualitative analysis software such as NVivo ( “NVivo qualitative data analysis software,” 2012 ), you can make these new codes “children” of a “parent” node (the first statement you encountered on this topic). The next step is called “focused coding,” and in this phase you combine what seem to you to be the most significant codes ( Charmaz, 2006 ). These may also be the most frequently occurring codes, but not necessarily; this is not a quantitative approach in which having large amounts of data is important. You combine codes at this stage in such a way that your new, larger categories begin to give shape to aspects of the theory you think is going to emerge. As you collect and code more data, and revise your categories, your idea of the theory will change.
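The “parent”/“children” arrangement of codes described above, analogous to NVivo nodes, can be sketched as a small tree structure in Python. The code names and excerpts here are hypothetical, and real software offers far more than this sketch:

```python
# A minimal sketch of a "parent"/"child" code hierarchy.

class Node:
    def __init__(self, name):
        self.name = name
        self.children = []   # child nodes holding similar statements
        self.excerpts = []   # coded text attached to this node

    def add_child(self, name):
        child = Node(name)
        self.children.append(child)
        return child

    def all_excerpts(self):
        """Gather this node's excerpts plus those of all descendants,
        which is the raw material that focused coding works with."""
        out = list(self.excerpts)
        for child in self.children:
            out.extend(child.all_excerpts())
        return out

# The first statement encountered on a topic becomes the parent node...
trusting = Node("putting his life in God's hands")
trusting.excerpts.append("I'm putting his life more in God's hands")

# ...and later, similar statements become children of that node.
similar = trusting.add_child("God is in control")
similar.excerpts.append("I just have to trust that God is in control")

print(len(trusting.all_excerpts()))  # both excerpts gathered together
```

Combining codes during focused coding then amounts to merging or re-parenting nodes so that larger categories take shape.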

Axial coding follows, as you look at your emerging themes or categories, and begin to associate coded data that explains that category. Axial coding refers to coding the words or quotations that are around the category's “axis,” or core. For example, in a study of parental faith and coping in the first year after their child's diagnosis with CF ( Grossoehme et al., 2010 ), one of the categories which emerged was, “Our beliefs have changed.” There were five axial codes which explain aspects of this category. The axial codes were, “Unchanged,” “We've learned how fragile life is,” “Our faith has been strengthened,” “We've gotten away from our parents' viewpoints,” and “I'm better in tune with who I am.” Each of these axial codes had multiple explanatory phrases or sentences under them; together they explain the breadth and dimensions of the category, “Our beliefs have changed.”
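The category-and-axial-code structure from this example can be represented as a simple mapping, with the category as the axis and each axial code as one dimension explaining it. This is a minimal sketch; a full analysis would attach the coded excerpts under each axial code:

```python
# The category and its axial codes from the CF study described above.
category = "Our beliefs have changed"
axial_codes = [
    "Unchanged",
    "We've learned how fragile life is",
    "Our faith has been strengthened",
    "We've gotten away from our parents' viewpoints",
    "I'm better in tune with who I am",
]

# In a full analysis, each axial code would itself map to the
# explanatory phrases coded under it: {code: [excerpt, ...], ...}.
model = {category: axial_codes}
print(len(model[category]))  # five axial codes around one axis
```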

The next step is theoretical coding, in which the categories generated during focused coding are synthesized into a theory. Some grounded theorists, notably one of the two most associated with the method (Glaser), do not use axial coding but proceed directly to this step as the means of creating coherence out of the data ( Charmaz, 2006 ). As your emerging theory crystallizes, you may pause to see if it has similarities with other theoretical constructs you encountered in your literature search. Does your emerging theory remind you of anything? It would be appropriate to engage in member-checking at this point. In this phase, you show your theoretical model and its supporting categories to participants and ask for their feedback. Does your model make sense to them? Does it help them see this aspect of their experience differently ( Charmaz, 2005 )? Use their feedback to revise your theory and put it in its final form. At this point, you have generated new knowledge: a theory no one has put forth previously, and one that is ready to be tested.

PHENOMENOLOGY RESEARCH

Perhaps the most chaplain-friendly qualitative research approach is phenomenology, because it is all about the search for meaning. Its roots are in the philosophical work of Husserl, Heidegger, and Ricoeur ( Boss, Dahl, & Kaplan, 1996 ; Swinton & Mowat, 2006 ). This approach is based on several assumptions: (1) meaning and knowing are social constructions, always incomplete and developing; (2) the investigator is a part of the experience being studied and the investigator's values play a role in the investigation; (3) bias is inherent in all research and should be articulated at the beginning; (4) participants and investigators share knowledge and are partners; (5) common forms of expression (e.g., words or art) are important; and (6) meanings may not be shared by everyone (Boss et al.). John Swinton and Harriet Mowat (2006) described the process of carrying out a phenomenological study of depression and spirituality in adults, and reading their book is an excellent way to gain a sense of the whole process. Phenomenology may be the method of choice when you want to study what an experience means to a particular group of people. It may not be the best choice when you want to be able to generalize your findings. An accurate presentation of the experience under study is more important in this approach than the ability to claim that the findings apply across situations or people (Boss et al.). A study of experiences of the devil among predominantly Hispanic horse track workers is unlikely to be generalizable to experiences of the devil among persons of Scandinavian descent living in Minnesota. Care must be taken not to overstate the findings from a study and extend the conclusions beyond what the data support.

The emphasis on accurately portraying the phenomenon means that large numbers of participants are not required. In fact, relatively small sample sizes suffice compared to most quantitative clinical studies. The goal is to gather descriptions of participants' lived experience which are rich in detail and imagery, as well as reflection on its theological or psychological meaning. The likelihood of achieving this goal can be enhanced by using a purposeful sample. That is, decide in the beginning approximately how large and how diverse your sample needs to be. For example, CF can be caused by over 1,000 different genetic mutations; some cause more pulmonary symptoms while others cause more gastrointestinal problems. Some people with CF have diabetes and others do not; some have a functioning pancreas and others need to take replacement enzymes before eating or drinking anything other than water. Some adolescents with CF may have lung function that is over 100% of what is expected for healthy adolescents of their age and gender, whereas others, with severe pulmonary disease, may have lung function that is just 30% of what is expected. A study of what it is like for an adolescent to live with a life-shortening genetic disease using this approach might benefit from purposive sampling. For example, lung disease severity in CF is broadly described as mild, moderate, or severe. A purposeful sample might call for 18 participants, divided across three age groups (11–13 years; 14–16 years; and 17–19 years) and three levels of disease severity (mild, moderate, and severe). In each of the resulting nine groups there would be one male and one female. In actual practice, one might want more than 18 to allow for attrition, but this breakdown gives the basic idea of defining a purposive sample.
One could reasonably expect that having the experience of both genders across the spectrum of disease severity and the developmental range of adolescence would permit an accurate, multi-dimensional understanding to emerge of what living with this life-shortening disease means to adolescents. In fact, such an accurate description is more likely to emerge with this purposeful sample of 18 adolescents than with a convenience sample of the first 18 adolescents who might agree to participate in the study during their outpatient clinic appointment. Defining the sample to be studied requires some forethought about what is likely to be needed to gain the fullest understanding of the topic.
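The sampling grid described above can be enumerated directly; this short Python sketch simply confirms the arithmetic of three age bands, three severity levels, and two genders yielding 18 cells:

```python
from itertools import product

# The purposive sampling grid from the example above:
# 3 age bands x 3 disease-severity levels x 2 genders = 18 cells,
# with one participant recruited per cell.
ages = ["11-13", "14-16", "17-19"]
severity = ["mild", "moderate", "severe"]
gender = ["male", "female"]

cells = list(product(ages, severity, gender))
print(len(cells))  # 18
```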

Any research design may be used. The design will be dictated by what data are required to understand the phenomenon and its meaning. Interviews are by far the most common means of gathering data, although one might also use written texts, such as prayers written in open prayer books in hospital chapels ( ap Sion, 2013 ; Grossoehme, 1996 ), or drawings ( Pendleton, Cavalli, Pargament, & Nasr, 2002 ), or photographs/videos ( Olausson, Ekebergh, & Lindahl, 2012 ). Wherever the word “text” appears, it should be read with the understanding that any form of data is implied.

The theoretical underpinnings of phenomenology, which are beyond the scope of this article, suggest to some users that “a method” is unnecessary or, indeed, contrary to phenomenology. However, one phenomenological researcher did articulate a method ( Giorgi, 1985 ), which consists of the following steps. First, the research team immerses themselves in the data. They do this by reading and re-reading the transcribed interviews and listening to the recorded interviews so that they can hear the tone and timbre of the voices. The goal at this stage is to get a sense of the whole. Second, the texts are coded: the words, phrases, or sentences that stand out as describing the experience or phenomenon under study, or which express its meaning for the participant outright, are extracted or highlighted. Each coded bit of data is sometimes referred to as a “meaning unit.” Third, similar meaning units are placed into categories. Fourth, for each meaning unit, the meaning of the participants' own words is spelled out. For chaplains, this may mean articulating what the experience means in theological language; other disciplines might transform the participants' words into psychological, sociological, or anthropological language. Here the investigators infer the meaning behind the participants' words and articulate it. Finally, the transformed statements of meaning are combined into a few thematic statements that describe the experience ( Bassett, 2004 ; Boss et al., 1996 ). After this, it would be appropriate to do member-checking and revise the final model based on participants' responses and feedback.

PRACTICAL CONCERNS

Just as questionnaires or blood samples contain data, in qualitative research it is the recording of people's words, whether in audio, video, or paper format, that holds the data. Interviews, whether in-person or by telephone, should be recorded using audio, video, or both. It is important to have a device with suitable audio quality and fresh batteries. Experience has shown me the benefit of using two audio recorders, so that you do not lose data if one of them fails. Several small recorders are available with USB connections that allow the audio file to be uploaded to a computer easily. To protect participants' privacy, all data should be anonymized by removing any information that could identify individuals. The Standard Operating Procedure in my research group is to replace all participants' names with an “ N .” During the transcription process, all other individuals are identified by their role in square brackets, e.g., “[parent].” Depending on the study's goal and the analytic method you have selected, you may want to include symbols for pauses before participants respond, for non-fluencies (e.g., “ummm …,” “well … uh …”), or for non-verbal gestures (if you are video recording). Decide before beginning whether it is important to capture these as data or not. There are conventional symbols, inserted into the transcription, that capture these data for you. After the initial transcription, the transcript needs to be verified by comparing the written copy against the original recording. Verification should be done by someone other than the transcriptionist. There are several tasks at this stage. Depending on the quality of your recording, the clarity of participants' speech, and other factors, some words or phrases may have been unintelligible to the transcriptionist, and this is the time to address them.
In my research group our Standard Operating Procedure is to highlight unintelligible text during the transcription phase, and a “verifier” attempts three times to clarify the words on the original recording before leaving them marked “unintelligible” in the transcript. No transcriptionist is perfect and if they are unfamiliar with the topic, they may transcribe the recording inaccurately. I recently verified a transcript where a commercial medical transcriptionist changed the participant's gender from “he” to “she” when the word prior to the pronoun ended with an “s.” If this pattern had not been caught during the verification process, it would have been very difficult during the coding to know whether the pronoun referred to the participant or to their daughter.
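As a toy illustration of the de-identification convention described above (a participant's name becomes “N” and other individuals become their bracketed role), the substitution can be sketched in Python. The name, role, and sentence below are invented; in practice, de-identification is performed and verified by a person, not a script:

```python
import re

# Hypothetical patterns: a participant's name and a named clinician.
replacements = {
    r"\bMichael\b": "N",            # invented participant name -> "N"
    r"\bDr\. Jones\b": "[physician]",  # invented clinician -> role
}

line = "Michael told Dr. Jones he was feeling stronger."
for pattern, repl in replacements.items():
    line = re.sub(pattern, repl, line)

print(line)  # "N told [physician] he was feeling stronger."
```

Even with such tooling, every transcript still needs human verification, for exactly the reason the pronoun example above illustrates.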

ETHICAL ISSUES IN QUALITATIVE RESEARCH

Study design.

The issue of power and the possibility of subtle coercion is the concern here. There is an inherent power differential between a research participant and the investigator, which is exacerbated when the investigator is a chaplain. Despite our attempts to be non-threatening, the very words “chaplain” or “clergy” connote power. For this reason, the chaplain-investigator should not approach potential participants regarding a study. Potential participants may be informed of their eligibility to participate by their physician or a chaplain, but the recruitment and informed consent process should be handled by someone else, perhaps a clinical research coordinator. However, as the chaplain-investigator, you will need to teach them how to talk with potential participants about your study and answer their questions. Choose a data collection method that is best suited to the level of sensitivity of your research topic. Focus groups can provide data with multiple perspectives, but they are a poor choice when there may be pressure to provide socially acceptable responses, or when disclosures may be stigmatizing. In such cases, it is better to collect data using individual semi-structured interviews.

Develop a plan for assessing participants' discomfort, anxiety, or even more severe reactions during the study. For instance, what will you do when someone discloses current thoughts of self-harm, or experiences a flashback to a prior traumatic event triggered during an interview? How will you handle this if you are collecting data in person? By telephone? You will need to be specific about who must be informed and who will make decisions about responding to the risk.

Privacy and Confidentiality

In addition to maintaining the privacy and confidentiality of your actual data and other study documents, consider how you will protect participants' privacy when you write the study up for publication. Make sure that people cannot be identified from the quotations you include when you publish the data. The smaller the population you are working with, the more diligently you need to work on this. If the transcriptionist is not an employee of your institution, covered by the same privacy and confidentiality policies, it is up to you to ensure that this external transcriptionist takes steps to protect and maintain the privacy of participants' data.

Qualitative research is an accessible way for chaplains to contribute new knowledge regarding the sacred dimension of people's lived experience. Chaplains are already sensitive to and familiar with many aspects of qualitative research methodologies. Studies need to be designed to be valid and meaningful, and are best done collaboratively. They provide an excellent opportunity to develop working relationships with physicians, medical anthropologists, nurses, psychologists, and sociologists, all of whom have rich traditions of qualitative research. This article can only provide an overview of some of the issues related to qualitative research and some of its methods. The texts cited, as well as others, provide additional information needed before designing and carrying out a qualitative study. Qualitative research is a tool that chaplains can use to develop new knowledge and contribute to professional chaplaincy's ability to facilitate the healing of brokenness and disease.


  • ap Sion T. Coping through prayer: An empirical study in implicit religion concerning prayers for children in hospital. Mental Health, Religion & Culture. 2013; 16 (9):936–952. doi: 10.1080/13674676.2012.756186. [ Google Scholar ]
  • Bassett C. Phenomenology. In: Bassett C, editor. Qualitative research in health care. Whurr Publishers, Ltd; London, UK: 2004. pp. 154–177. [ Google Scholar ]
  • Boss P, Dahl C, Kaplan L. The use of phenomenology for family therapy research. In: Sprenkle DH, Moon SM, editors. Research methods in family therapy. Guilford Press; New York, NY: 1996. pp. 83–106. [ Google Scholar ]
  • Boyle J. Styles of ethnography. In: Morse JM, editor. Critical issues in qualitative research methods. Sage Publications; Thousand Oaks, CA: 1994. pp. 159–185. [ Google Scholar ]
  • Bradley EH, Curry LA, Ramanadhan S, Rowe L, Nembhard IM, Krumholz HM. Research in action: Using positive deviance to improve quality of health care. Implementation Science. 2009; 4 (25) doi: 10.1186/1748-5908-4-25. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Charmaz K. Grounded theory in the 21st century. In: Denzin NK, Lincoln YS, editors. The SAGE handbook of qualitative research. Sage Publications; Thousand Oaks, CA: 2005. pp. 507–535. [ Google Scholar ]
  • Charmaz K. Constructing grounded theory. Sage Publications; Thousand Oaks, CA: 2006. [ Google Scholar ]
  • Cohen D, Crabtree B. Qualitative research guidelines project. 2006 Retrieved March 11, 2014, from http://www.qualres.org/HomeSemi-3629.html .
  • Fore D, Goldenhar LM, Margolis PA, Seid M. Using goal-directed design to create a novel system for improving chronic illness. JMIR Research Protocols. 2013; 2 (2):343. doi: 10.2196/resprot.2749. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Giorgi A. Phenomenology and psychological research. Duquesne University Press; Pittsburgh, PA: 1985. [ Google Scholar ]
  • Grossoehme DH. Prayer reveals belief: Images of God from hospital prayers. Journal of Pastoral Care. 1996; 50 (1):33–39. [ PubMed ] [ Google Scholar ]
  • Grossoehme DH. Research as a chaplaincy intervention. Journal of Health Care Chaplaincy. 2011; 17 (3–4):97–99. doi: 10.1080/08854726.2011.616165. [ PubMed ] [ Google Scholar ]
  • Grossoehme DH, Cotton S, Ragsdale J, Quittner AL, McPhail G, Seid M. "I honestly believe God keeps me healthy so I can take care of my child": Parental use of faith related to treatment adherence. Journal of Health Care Chaplaincy. 2013; 19 (2):66–78. doi: 10.1080/08854726.2013.779540. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Grossoehme DH, Ragsdale JR, Cotton S, Meyers MA, Clancy JP, Seid M, Joseph PM. Using spirituality after an adult CF diagnosis: Cognitive reframing and adherence motivation. Journal of Health Care Chaplaincy. 2012; 18 (3–4):110–120. doi: 10.1080/08854726.2012.720544. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Grossoehme DH, Ragsdale J, Wooldridge JL, Cotton S, Seid M. We can handle this: Parents' use of religion in the first year following their child's diagnosis with cystic fibrosis. Journal of Health Care Chaplaincy. 2010; 16 (3–4):95–108. doi: 10.1080/08854726.2010.480833. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Malterud K. Qualitative research: Standards, challenges, and guidelines. The Lancet. 2001; 358 (9280):483–488. [ PubMed ] [ Google Scholar ]
  • Morse JM. The significance of saturation. Qualitative Health Research. 1995; 5 :147–149. [ Google Scholar ]
  • Newfield N, Sells SP, Smith TE, Newfield S, Newfield F. Ethnographic research methods. In: Sprenkle DH, Moon SM, editors. Research methods in family therapy. The Guilford Press; New York, NY: 1996. pp. 25–63. [ Google Scholar ]
  • NVivo qualitative data analysis software . QSR International Pty Ltd. 2012. [ Google Scholar ]
  • Olausson S, Ekebergh M, Lindahl B. The ICU patient room: Views and meanings as experienced by the next of kin: A phenomenological hermeneutical study. Intensive and Critical Care Nursing. 2012; 28 (3):176–184. [ PubMed ] [ Google Scholar ]
  • Pendleton SM, Cavalli KS, Pargament KI, Nasr SZ. Religious/spiritual coping in childhood cystic fibrosis: A qualitative study. Pediatrics. 2002; 109 (1):E8. [ PubMed ] [ Google Scholar ]
  • Rafuls SE, Moon SM. Grounded theory methodology in family therapy research. In: Sprenkle DH, Moon SM, editors. Research methods in family therapy. The Guilford Press; New York, NY: 1996. [ Google Scholar ]
  • Risk JL. Building a new life: A chaplain's theory based case study of chronic illness. Journal of Health Care Chaplaincy. 2013; 19 :81–98. [ PubMed ] [ Google Scholar ]
  • Seid M, Sobo EJ, Gelhard LR, Varni JW. Parents' reports of barriers to care for children with special health care needs: Development and validation of the barriers to care questionnaire. Ambulatory Pediatrics. 2004; 4 (4):323–331. doi: 10.1367/A03-198R.1. [ PubMed ] [ Google Scholar ]
  • Sobo EJ. Culture and meaning in health services research. Left Coast Press, Inc; Walnut Creek, CA: 2009. [ Google Scholar ]
  • Swinton J, Mowat H. Practical theology and qualitative research. SCM Press; London, UK: 2006. [ Google Scholar ]
  • Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care. 2007; 19 (6):349–357. [ PubMed ] [ Google Scholar ]
COMMENTS

  1. A tutorial on methodological studies: the what, when, how and why

    Methodological studies, which evaluate the design, analysis, or reporting of other research reports, play an important role in health research. They help highlight issues in the conduct of research, with the aim of improving health research methodology and ultimately reducing research waste. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries).

  2. Types of studies and research design

    Medical research is classified into primary and secondary research. Clinical/experimental studies are performed in primary research, whereas secondary research consolidates available studies as reviews, systematic reviews, and meta-analyses. The three main areas of primary research are basic medical research, clinical research, and epidemiological research.

  3. Health Research Methodology (PDF)

    Health research methodology: a guide for training in research methods. This is a revised version of an earlier manual on health research methodology and covers the basic concepts and principles of scientific research methods, with particular attention to research in the health field.

  4. Methodology for clinical research

    Primary research can be further classified into three types: basic or laboratory studies (also known as preclinical studies), clinical research, and epidemiological research. Both clinical and epidemiological research involve observational and experimental methods.

  5. Research Methods

    Research methods are specific procedures for collecting and analyzing data, and developing them is an integral part of research design. When planning your methods, the first key decision is how you will collect data; your methods depend on what type of data you need to answer your research question.

  6. How To Choose The Right Research Methodology

    Before asking how to choose a research methodology, it is useful to step back and understand the three overarching types of research: qualitative, quantitative, and mixed methods. Each of these options takes a different methodological approach.

  7. Types of Research

    A research study does not have to be exclusively quantitative or qualitative; many studies use a combination of both. The Dictionary of Statistics and Methodology defines mixed-method research as "inquiry that combines two or more methods."

  8. A practical guide for health researchers

    This comprehensive guide to health research serves a wide spectrum of people: students who wish to learn the basic principles of health research and how to conduct it, field researchers, and those involved in teaching and training in health research methodology. It seeks to develop practical skills, starting with defining the research question.

  9. Methodology: What It Is and Why It Is So Important (PDF)

    Further components of methodology could be added; for example, the historical roots of science, and science and social policy, are legitimate topics. Yet for developing an appreciation of methodology and the skills involved in actually conducting research, the five key facets covered will suffice.

  10. Health research methodology: a guide for training in research methods

    World Health Organization, Regional Office for the Western Pacific (2001). Health research methodology: a guide for training in research methods, 2nd ed.

  11. What Are Different Research Approaches? A Comprehensive Review

    Mixed methods are used across research fields, including social and health research. Because they combine the advantages of both qualitative and quantitative methods, they are useful when one approach alone is not adequate for a study. In today's interdisciplinary research atmosphere, teams of researchers bring different methodological strengths.

  12. Healthcare Research Methods (PDF)

    Of particular value is the perspective qualitative research can offer on the experiences of individuals and groups with health problems and with the healthcare system. Qualitative research has an advantage in this arena.

  13. Research methodology

    Research projects are diverse, so you will need to identify the methodology appropriate for your particular project. Key to this activity is being clear about the purpose of the research and formulating your research question. Examples of methodologies include case series/case note review and cohort observation.

  14. Qualitative Methods in Health Care Research

    The greatest strength of the qualitative research approach lies in the richness and depth of the healthcare exploration and description it provides. In health research, these methods are considered the most humanistic and person-centered way of discovering and uncovering the thoughts and actions of human beings.

  15. Research methods

    Health Science Research by Jennifer Peat (ISBN 9780761974024, 2002). For research to be effective, it is essential that every aspect of the study is well planned. Health Science Research was written to help researchers from all disciplines conduct their studies with this kind of integrity.

  16. What Are the Different Types of Clinical Research?

    Descriptions of some different kinds of clinical research. Treatment research generally involves an intervention such as medication, psychotherapy, new devices, or new approaches to care.

  17. Understanding Health Research: What type of research is it?

    Researchers use methods such as surveys and analyses of routine data to find out how common health behaviours (e.g. smoking or drinking) or health conditions (e.g. depression or high blood pressure) are. Types of survey include cross-sectional surveys, censuses, and longitudinal surveys. Sometimes researchers carry out a new survey, and sometimes they reuse existing data.

  18. Quantitative Research

    Quantitative research is used in health research to study the effectiveness of medical treatments, identify risk factors for diseases, and track health outcomes over time. Researchers use statistical methods to analyze data from clinical trials, surveys, and other sources to inform medical practice and policy.

  19. Types of Study in Medical Research

    In principle, medical research is classified into primary and secondary research. While secondary research summarizes available studies in the form of reviews and meta-analyses, the actual studies are performed in primary research. Three main areas are distinguished: basic medical research, clinical research, and epidemiological research.

  20. In brief: What types of studies are there?

    There are various types of scientific studies, such as experiments and comparative analyses, observational studies, surveys, and interviews. The choice of study type depends mainly on the research question being asked. When making decisions, patients and doctors need reliable answers, which depend on the medical condition and the patient's personal situation.

  21. Research Methodology: Overview of Qualitative Research

    Like much of chaplaincy clinical care, qualitative research generally works with written texts, often transcriptions of individual interviews or focus-group conversations, and seeks to understand the meaning of experience in a study sample. This article describes three common methodologies: ethnography, grounded theory, and phenomenology.