University of Arizona Libraries
How do I find quantitative research articles?

Quantitative research focuses on gathering numerical data.

To locate quantitative research articles, use a subject-specific database or a general library database like Academic Search Ultimate or Google Scholar.

Finding this type of research takes a bit of investigation. Try this method.

Begin by entering your keywords and conducting a search. Example: gardening AND mental health AND students

Since quantitative research is based on the collection and analysis of data (like numbers or statistics), you will need to look at article titles and abstracts for clues. If a title or abstract contains terms like these, it's probably a quantitative research article.

  • Data Analysis
  • Longitudinal Studies
  • Statistical Analysis
  • Statistical Studies
  • Statistical Surveys

You could also experiment with using one of those terms in your search query. Example: gardening AND mental health AND data analysis
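If you search a database through a script or want to keep your search strings consistent, the same boolean logic can be assembled programmatically. The sketch below is only an illustration: the keyword groups and the build_query helper are hypothetical, and each database (PubMed, EBSCO, Google Scholar, etc.) has its own exact query syntax.

```python
# Illustrative sketch only: assembling a boolean search string from keyword
# groups. The keyword groups and the build_query helper are hypothetical;
# real databases each have their own query syntax.

def quote(term: str) -> str:
    """Wrap multi-word phrases in quotes, as many databases expect."""
    return f'"{term}"' if " " in term else term

def build_query(*groups: list[str]) -> str:
    """OR together synonyms within a group, then AND the groups together."""
    clauses = []
    for group in groups:
        terms = [quote(t) for t in group]
        clauses.append(terms[0] if len(terms) == 1 else "(" + " OR ".join(terms) + ")")
    return " AND ".join(clauses)

topic = ["gardening", "horticulture"]                 # main topic and a synonym
population = ["students"]                             # population of interest
outcome = ["mental health", "well-being"]             # outcome of interest
method_terms = ["data analysis", "statistical analysis", "longitudinal studies"]

print(build_query(topic, population, outcome, method_terms))
# (gardening OR horticulture) AND students AND ("mental health" OR well-being)
#   AND ("data analysis" OR "statistical analysis" OR "longitudinal studies")
```

Grouping synonyms with OR inside parentheses and joining the groups with AND keeps the query readable and easy to adjust.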

See this guide from the University of Texas: Quantitative and Qualitative Research.

Download Key Elements of a Research Proposal: Quantitative Design (PDF) from Winston-Salem State University.

All Quantitative research articles (from Education in Chemistry)

  • Harness self-regulation to nurture independent study skills (29 October 2020). Follow these tips to engage students with learning processes.
  • Why declining science scores are no reason to panic (5 February 2020). PISA provides an interesting background to teaching, but is it only for policymakers?
  • Dunning-Kruger: the gap between prediction and performance (19 March 2018). Improve expectations to improve learning.
  • Encouraging inquiry-based approaches (28 September 2016). Manage the load for students.
  • Transforming education research (14 September 2016). New project to investigate the opportunities and challenges for teachers and researchers.
  • The value of modelling molecules (10 August 2016). Challenge of visual-spatial representations.
  • Why don't teachers use education research in teaching? (9 August 2016). Paul MacLellan digs into the problem with research from Durham, a secondary school teacher and a journal editor.
  • What influences future science study? (27 July 2016). Study beyond GCSE linked to confidence and perceptions.
  • It’s good to talk (8 June 2016). Facilitating peer group learning.
  • The community of chemistry education research (3 March 2016). Michael Seery talks about being part of the chemistry education research community in the UK and Ireland.
  • Tools of chemistry education research (9 November 2015). Methods and strategies.
  • Understanding education (6 November 2015). Raising awareness of teaching and learning opportunities all around us.
  • Organic confusion. Rote memorising v deep understanding.
  • Variety in Chemistry Education 2015 (24 August 2015). Michael Seery reports from the conference for chemistry teaching and learning in higher education.
  • The case against inquiry-based learning (26 May 2015). Michael Seery takes a critical look at inquiry-based learning.
  • Rationalising reasoning (11 May 2015). Is contextualisation the best solution?
  • Analysing analogies. Teacher CPD could support analogical thinking.
  • Flipped chemistry revisited (5 March 2015). Successful organic chemistry teaching.
  • International Conference on Education in Chemistry, 2014 (20 January 2015). Simon Lancaster reports on his visit to ICEC-2014 in Mumbai.
  • Moles and titrations (6 January 2015). Dorothy Warren describes some of the difficulties with teaching this topic and shows how you can help your students to master aspects of quantitative chemistry.



What Is Quantitative Research? | Definition, Uses & Methods

Published on June 12, 2020 by Pritha Bhandari. Revised on June 22, 2023.

Quantitative research is the process of collecting and analyzing numerical data. It can be used to find patterns and averages, make predictions, test causal relationships, and generalize results to wider populations.

Quantitative research is the opposite of qualitative research, which involves collecting and analyzing non-numerical data (e.g., text, video, or audio).

Quantitative research is widely used in the natural and social sciences, including biology, chemistry, psychology, economics, sociology, and marketing. It can answer research questions such as:

  • What is the demographic makeup of Singapore in 2020?
  • How has the average temperature changed globally over the last century?
  • Does environmental pollution affect the prevalence of honey bees?
  • Does working from home increase productivity for people with long commutes?


You can use quantitative research methods for descriptive, correlational or experimental research.

  • In descriptive research , you simply seek an overall summary of your study variables.
  • In correlational research , you investigate relationships between your study variables.
  • In experimental research , you systematically examine whether there is a cause-and-effect relationship between variables.

Correlational and experimental research can both be used to formally test hypotheses, or predictions, using statistics. The results may be generalized to broader populations based on the sampling method used.
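As a minimal illustration of the correlational case, the sketch below computes a Pearson correlation coefficient and its p-value for two invented variables (weekly gardening hours and a well-being score). The data and variable names are assumptions made up for this example, not results from any real study.

```python
# Minimal correlational-analysis sketch with invented data (requires numpy and scipy).
import numpy as np
from scipy import stats

# Hypothetical sample: weekly gardening hours and a 0-100 well-being score
gardening_hours = np.array([0, 1, 2, 2, 3, 4, 5, 6, 7, 8])
wellbeing_score = np.array([52, 55, 58, 54, 61, 63, 60, 68, 70, 74])

r, p_value = stats.pearsonr(gardening_hours, wellbeing_score)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
# A small p-value suggests the association is unlikely to be chance alone,
# but a correlation on its own does not establish a causal relationship.
```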

To collect quantitative data, you will often need to use operational definitions that translate abstract concepts (e.g., mood) into observable and quantifiable measures (e.g., self-ratings of feelings and energy levels).

Quantitative research methods

  • Experiment: Control or manipulate an independent variable to measure its effect on a dependent variable. Example: To test whether an intervention can reduce procrastination in college students, you give equal-sized groups either a procrastination intervention or a comparable task. You compare self-ratings of procrastination behaviors between the groups after the intervention.
  • Survey: Ask questions of a group of people in person, over the phone, or online. Example: You distribute questionnaires with rating scales to first-year international college students to investigate their experiences of culture shock.
  • (Systematic) observation: Identify a behavior or occurrence of interest and monitor it in its natural setting. Example: To study college classroom participation, you sit in on classes to observe them, counting and recording the prevalence of active and passive behaviors by students from different backgrounds.
  • Secondary research: Collect data that has been gathered for other purposes, e.g., national surveys or historical records. Example: To assess whether attitudes towards climate change have changed since the 1980s, you collect relevant questionnaire data from existing national surveys.

Note that quantitative research is at risk for certain research biases , including information bias , omitted variable bias , sampling bias , or selection bias . Be sure that you’re aware of potential biases as you collect and analyze your data to prevent them from impacting your work too much.

Quantitative data analysis

Once data is collected, you may need to process it before it can be analyzed. For example, survey and test data may need to be transformed from words to numbers. Then, you can use statistical analysis to answer your research questions .

Descriptive statistics will give you a summary of your data and include measures of averages and variability. You can also use graphs, scatter plots and frequency tables to visualize your data and check for any trends or outliers.

Using inferential statistics, you can make predictions or generalizations based on your data. You can test your hypothesis or use your sample data to estimate the population parameter.

For example, in the procrastination intervention study above, you would first use descriptive statistics to summarize the data: find the mean (average) and the mode (most frequent rating) of procrastination for the two groups, and plot the data to check for any outliers.
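A minimal sketch of that workflow, using invented ratings for the two groups, might look like this (the numbers, group sizes, and the choice of an independent-samples t-test are assumptions for illustration):

```python
# Sketch of descriptive and inferential analysis for a two-group design.
# The post-test procrastination ratings (1-7 scale) below are invented.
import numpy as np
from scipy import stats
from statistics import mode

intervention = np.array([3, 4, 2, 5, 3, 4, 3, 2, 4, 3])
control = np.array([5, 6, 4, 5, 6, 5, 4, 6, 5, 5])

# Descriptive statistics: central tendency and variability per group
for name, group in [("intervention", intervention), ("control", control)]:
    print(f"{name}: mean={group.mean():.2f}, "
          f"mode={mode(group.tolist())}, sd={group.std(ddof=1):.2f}")

# Inferential statistics: independent-samples t-test comparing group means
t_stat, p_value = stats.ttest_ind(intervention, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```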

You can also assess the reliability and validity of your data collection methods to indicate how consistently and accurately your methods actually measured what you wanted them to.

Quantitative research is often used to standardize data collection and generalize findings. Strengths of this approach include:

  • Replication

Repeating the study is possible because of standardized data collection protocols and tangible definitions of abstract concepts.

  • Direct comparisons of results

The study can be reproduced in other cultural settings, times or with different groups of participants. Results can be compared statistically.

  • Large samples

Data from large samples can be processed and analyzed using reliable and consistent procedures through quantitative data analysis.

  • Hypothesis testing

Using formalized and established hypothesis testing procedures means that you have to carefully consider and report your research variables, predictions, data collection and testing methods before coming to a conclusion.

Despite the benefits of quantitative research, it is sometimes inadequate in explaining complex research topics. Its limitations include:

  • Superficiality

Using precise and restrictive operational definitions may inadequately represent complex concepts. For example, the concept of mood may be represented with just a number in quantitative research, but explained with elaboration in qualitative research.

  • Narrow focus

Predetermined variables and measurement procedures can mean that you ignore other relevant observations.

  • Structural bias

Despite standardized procedures, structural biases can still affect quantitative research. Missing data, imprecise measurements, or inappropriate sampling methods are biases that can lead to the wrong conclusions.

  • Lack of context

Quantitative research often uses unnatural settings like laboratories or fails to consider historical and cultural contexts that may affect data collection and results.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Chi square goodness of fit test
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Inclusion and exclusion criteria

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Frequently asked questions about quantitative research

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

In mixed methods research, you use both qualitative and quantitative data collection and analysis methods to answer your research question.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data, it's important to consider how you will operationalize the variables that you want to measure.
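As a small, purely illustrative sketch, an operational definition can be written down as a structured mapping from the abstract concept to its indicators and how each one is measured. The OperationalDefinition class and all example values below are hypothetical, not a standard schema.

```python
# Illustrative only: recording an operational definition as a data structure.
# The OperationalDefinition class and all example values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class OperationalDefinition:
    concept: str                                               # abstract construct
    indicators: dict[str, str] = field(default_factory=dict)   # measure -> how it is recorded

social_anxiety = OperationalDefinition(
    concept="social anxiety",
    indicators={
        "self-rating score": "validated questionnaire with a 1-5 scale",
        "behavioral avoidance": "count of avoided crowded situations per week",
        "physical symptoms": "heart-rate increase (bpm) during a social task",
    },
)

for measure, how in social_anxiety.indicators.items():
    print(f"{social_anxiety.concept}: {measure} -> {how}")
```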

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.
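Reliability can be estimated in several ways. As one hedged illustration, the sketch below computes Cronbach's alpha, a common internal-consistency estimate, for a small invented set of questionnaire items, using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score).

```python
# Illustrative Cronbach's alpha calculation with numpy.
# The 5-respondent x 4-item rating matrix is invented for this example.
import numpy as np

# rows = respondents, columns = questionnaire items (1-5 ratings)
items = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])

k = items.shape[1]                               # number of items
item_variances = items.var(axis=0, ddof=1)       # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed score

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")   # ~0.94 here; closer to 1 = more consistent
```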

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.

Cite this Scribbr article


Bhandari, P. (2023, June 22). What Is Quantitative Research? | Definition, Uses & Methods. Scribbr. Retrieved August 27, 2024, from https://www.scribbr.com/methodology/quantitative-research/


Ashland University

Archer Library

Nursing Resources: Finding Quantitative Research Articles


What is Quantitative Research?

"Quantitative research is a systematic process used to gather and statistically analyze information that has been measured by an instrument. Instruments are used to convert information into numbers. It studies only quantifiable concepts (concepts that can be measured and turned into numbers)." It examines phenomena through the numerical representation of observations and statistical analysis.

Langford, R. (2000). Navigating the Maze of Nursing Research. Elsevier.

Tips for Finding Quantitative Articles with a Keyword Search

If you want to limit your search to quantitative studies, first try "quantitative" as a keyword, then try using one of the following terms/phrases in your search (example: lactation AND statistics):

  • Correlational design*
  • Effect size
  • Empirical research
  • Experiment*
  • Quasi-experiment*
  • Reliability

  • URL: https://libguides.ashland.edu/nursing


J Korean Med Sci. 2022 Apr 25; 37(16).

A Practical Guide to Writing Quantitative and Qualitative Research Questions and Hypotheses in Scholarly Articles

Edward barroga.

1 Department of General Education, Graduate School of Nursing Science, St. Luke’s International University, Tokyo, Japan.

Glafera Janet Matanguihan

2 Department of Biological Sciences, Messiah University, Mechanicsburg, PA, USA.

The development of research questions and the subsequent hypotheses are prerequisites to defining the main research purpose and specific objectives of a study. Consequently, these objectives determine the study design and research outcome. The development of research questions is a process based on knowledge of current trends, cutting-edge studies, and technological advances in the research field. Excellent research questions are focused and require a comprehensive literature search and in-depth understanding of the problem being investigated. Initially, research questions may be written as descriptive questions which could be developed into inferential questions. These questions must be specific and concise to provide a clear foundation for developing hypotheses. Hypotheses are more formal predictions about the research outcomes. These specify the possible results that may or may not be expected regarding the relationship between groups. Thus, research questions and hypotheses clarify the main purpose and specific objectives of the study, which in turn dictate the design of the study, its direction, and outcome. Studies developed from good research questions and hypotheses will have trustworthy outcomes with wide-ranging social and health implications.

INTRODUCTION

Scientific research is usually initiated by posing evidenced-based research questions which are then explicitly restated as hypotheses. 1 , 2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results. 3 , 4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the inception of novel studies and the ethical testing of ideas. 5 , 6

It is crucial to have knowledge of both quantitative and qualitative research 2 as both types of research involve writing research questions and hypotheses. 7 However, these crucial elements of research are sometimes overlooked; if not overlooked, then framed without the forethought and meticulous attention it needs. Planning and careful consideration are needed when developing quantitative or qualitative research, particularly when conceptualizing research questions and hypotheses. 4

There is a continuing need to support researchers in the creation of innovative research questions and hypotheses, as well as for journal articles that carefully review these elements. 1 When research questions and hypotheses are not carefully thought of, unethical studies and poor outcomes usually ensue. Carefully formulated research questions and hypotheses define well-founded objectives, which in turn determine the appropriate design, course, and outcome of the study. This article then aims to discuss in detail the various aspects of crafting research questions and hypotheses, with the goal of guiding researchers as they develop their own. Examples from the authors and peer-reviewed scientific articles in the healthcare field are provided to illustrate key points.

DEFINITIONS AND RELATIONSHIP OF RESEARCH QUESTIONS AND HYPOTHESES

A research question is what a study aims to answer after data analysis and interpretation. The answer is written in length in the discussion section of the paper. Thus, the research question gives a preview of the different parts and variables of the study meant to address the problem posed in the research question. 1 An excellent research question clarifies the research writing while facilitating understanding of the research topic, objective, scope, and limitations of the study. 5

On the other hand, a research hypothesis is an educated statement of an expected outcome. This statement is based on background research and current knowledge. 8 , 9 The research hypothesis makes a specific prediction about a new phenomenon 10 or a formal statement on the expected relationship between an independent variable and a dependent variable. 3 , 11 It provides a tentative answer to the research question to be tested or explored. 4

Hypotheses employ reasoning to predict a theory-based outcome. 10 These can also be developed from theories by focusing on components of theories that have not yet been observed. 10 The validity of hypotheses is often based on the testability of the prediction made in a reproducible experiment. 8

Conversely, hypotheses can also be rephrased as research questions. Several hypotheses based on existing theories and knowledge may be needed to answer a research question. Developing ethical research questions and hypotheses creates a research design that has logical relationships among variables. These relationships serve as a solid foundation for the conduct of the study. 4 , 11 Haphazardly constructed research questions can result in poorly formulated hypotheses and improper study designs, leading to unreliable results. Thus, the formulations of relevant research questions and verifiable hypotheses are crucial when beginning research. 12

CHARACTERISTICS OF GOOD RESEARCH QUESTIONS AND HYPOTHESES

Excellent research questions are specific and focused. These integrate collective data and observations to confirm or refute the subsequent hypotheses. Well-constructed hypotheses are based on previous reports and verify the research context. These are realistic, in-depth, sufficiently complex, and reproducible. More importantly, these hypotheses can be addressed and tested. 13

There are several characteristics of well-developed hypotheses. Good hypotheses are 1) empirically testable 7 , 10 , 11 , 13 ; 2) backed by preliminary evidence 9 ; 3) testable by ethical research 7 , 9 ; 4) based on original ideas 9 ; 5) have evidenced-based logical reasoning 10 ; and 6) can be predicted. 11 Good hypotheses can infer ethical and positive implications, indicating the presence of a relationship or effect relevant to the research theme. 7 , 11 These are initially developed from a general theory and branch into specific hypotheses by deductive reasoning. In the absence of a theory to base the hypotheses, inductive reasoning based on specific observations or findings form more general hypotheses. 10

TYPES OF RESEARCH QUESTIONS AND HYPOTHESES

Research questions and hypotheses are developed according to the type of research, which can be broadly classified into quantitative and qualitative research. We provide a summary of the types of research questions and hypotheses under quantitative and qualitative research categories in Table 1 .

Table 1. Types of research questions and hypotheses

Quantitative research questions
  • Descriptive research questions
  • Comparative research questions
  • Relationship research questions

Quantitative research hypotheses
  • Simple hypothesis
  • Complex hypothesis
  • Directional hypothesis
  • Non-directional hypothesis
  • Associative hypothesis
  • Causal hypothesis
  • Null hypothesis
  • Alternative hypothesis
  • Working hypothesis
  • Statistical hypothesis
  • Logical hypothesis
  • Hypothesis-testing

Qualitative research questions
  • Contextual research questions
  • Descriptive research questions
  • Evaluation research questions
  • Explanatory research questions
  • Exploratory research questions
  • Generative research questions
  • Ideological research questions
  • Ethnographic research questions
  • Phenomenological research questions
  • Grounded theory questions
  • Qualitative case study questions

Qualitative research hypotheses
  • Hypothesis-generating

Research questions in quantitative research

In quantitative research, research questions inquire about the relationships among variables being investigated and are usually framed at the start of the study. These are precise and typically linked to the subject population, dependent and independent variables, and research design. 1 Research questions may also attempt to describe the behavior of a population in relation to one or more variables, or describe the characteristics of variables to be measured ( descriptive research questions ). 1 , 5 , 14 These questions may also aim to discover differences between groups within the context of an outcome variable ( comparative research questions ), 1 , 5 , 14 or elucidate trends and interactions among variables ( relationship research questions ). 1 , 5 We provide examples of descriptive, comparative, and relationship research questions in quantitative research in Table 2 .

Table 2. Descriptive, comparative, and relationship research questions in quantitative research

Descriptive research question
- Measures responses of subjects to variables
- Presents variables to measure, analyze, or assess
Example: What is the proportion of resident doctors in the hospital who have mastered ultrasonography (response of subjects to a variable) as a diagnostic technique in their clinical training?

Comparative research question
- Clarifies difference between one group with outcome variable and another group without outcome variable
Example: Is there a difference in the reduction of lung metastasis in osteosarcoma patients who received the vitamin D adjunctive therapy (group with outcome variable) compared with osteosarcoma patients who did not receive the vitamin D adjunctive therapy (group without outcome variable)?
- Compares the effects of variables
Example: How does the vitamin D analogue 22-Oxacalcitriol (variable 1) mimic the antiproliferative activity of 1,25-Dihydroxyvitamin D (variable 2) in osteosarcoma cells?

Relationship research question
- Defines trends, association, relationships, or interactions between dependent variable and independent variable
Example: Is there a relationship between the number of medical student suicides (dependent variable) and the level of medical student stress (independent variable) in Japan during the first wave of the COVID-19 pandemic?

Hypotheses in quantitative research

In quantitative research, hypotheses predict the expected relationships among variables. 15 Relationships among variables that can be predicted include 1) between a single dependent variable and a single independent variable ( simple hypothesis ) or 2) between two or more independent and dependent variables ( complex hypothesis ). 4 , 11 Hypotheses may also specify the expected direction to be followed and imply an intellectual commitment to a particular outcome ( directional hypothesis ) 4 . On the other hand, hypotheses may not predict the exact direction and are used in the absence of a theory, or when findings contradict previous studies ( non-directional hypothesis ). 4 In addition, hypotheses can 1) define interdependency between variables ( associative hypothesis ), 4 2) propose an effect on the dependent variable from manipulation of the independent variable ( causal hypothesis ), 4 3) state a negative relationship between two variables ( null hypothesis ), 4 , 11 , 15 4) replace the working hypothesis if rejected ( alternative hypothesis ), 15 explain the relationship of phenomena to possibly generate a theory ( working hypothesis ), 11 5) involve quantifiable variables that can be tested statistically ( statistical hypothesis ), 11 6) or express a relationship whose interlinks can be verified logically ( logical hypothesis ). 11 We provide examples of simple, complex, directional, non-directional, associative, causal, null, alternative, working, statistical, and logical hypotheses in quantitative research, as well as the definition of quantitative hypothesis-testing research in Table 3 .

Table 3. Types of hypotheses in quantitative research

Simple hypothesis
- Predicts relationship between single dependent variable and single independent variable
Example: If the dose of the new medication (single independent variable) is high, blood pressure (single dependent variable) is lowered.

Complex hypothesis
- Foretells relationship between two or more independent and dependent variables
Example: The higher the use of anticancer drugs, radiation therapy, and adjunctive agents (3 independent variables), the higher would be the survival rate (1 dependent variable).

Directional hypothesis
- Identifies study direction based on theory towards particular outcome to clarify relationship between variables
Example: Privately funded research projects will have a larger international scope (study direction) than publicly funded research projects.

Non-directional hypothesis
- Nature of relationship between two variables or exact study direction is not identified
- Does not involve a theory
Example: Women and men are different in terms of helpfulness. (Exact study direction is not identified.)

Associative hypothesis
- Describes variable interdependency
- Change in one variable causes change in another variable
Example: A larger number of people vaccinated against COVID-19 in the region (change in independent variable) will reduce the region’s incidence of COVID-19 infection (change in dependent variable).

Causal hypothesis
- An effect on dependent variable is predicted from manipulation of independent variable
Example: A change into a high-fiber diet (independent variable) will reduce the blood sugar level (dependent variable) of the patient.

Null hypothesis
- A negative statement indicating no relationship or difference between 2 variables
Example: There is no significant difference in the severity of pulmonary metastases between the new drug (variable 1) and the current drug (variable 2).

Alternative hypothesis
- Following a null hypothesis, an alternative hypothesis predicts a relationship between 2 study variables
Example: The new drug (variable 1) is better on average in reducing the level of pain from pulmonary metastasis than the current drug (variable 2).

Working hypothesis
- A hypothesis that is initially accepted for further research to produce a feasible theory
Example: Dairy cows fed with concentrates of different formulations will produce different amounts of milk.

Statistical hypothesis
- Assumption about the value of population parameter or relationship among several population characteristics
- Validity tested by a statistical experiment or analysis
Example: The mean recovery rate from COVID-19 infection (value of population parameter) is not significantly different between population 1 and population 2.
Example: There is a positive correlation between the level of stress at the workplace and the number of suicides (population characteristics) among working people in Japan.

Logical hypothesis
- Offers or proposes an explanation with limited or no extensive evidence
Example: If healthcare workers provide more educational programs about contraception methods, the number of adolescent pregnancies will be less.

Hypothesis-testing (Quantitative hypothesis-testing research)
- Quantitative research uses deductive reasoning.
- This involves the formation of a hypothesis, collection of data in the investigation of the problem, analysis and use of the data from the investigation, and drawing of conclusions to validate or nullify the hypotheses.

Research questions in qualitative research

Unlike research questions in quantitative research, research questions in qualitative research are usually continuously reviewed and reformulated. The central question and associated subquestions are stated more than the hypotheses. 15 The central question broadly explores a complex set of factors surrounding the central phenomenon, aiming to present the varied perspectives of participants. 15

There are varied goals for which qualitative research questions are developed. These questions can function in several ways, such as to 1) identify and describe existing conditions ( contextual research question s); 2) describe a phenomenon ( descriptive research questions ); 3) assess the effectiveness of existing methods, protocols, theories, or procedures ( evaluation research questions ); 4) examine a phenomenon or analyze the reasons or relationships between subjects or phenomena ( explanatory research questions ); or 5) focus on unknown aspects of a particular topic ( exploratory research questions ). 5 In addition, some qualitative research questions provide new ideas for the development of theories and actions ( generative research questions ) or advance specific ideologies of a position ( ideological research questions ). 1 Other qualitative research questions may build on a body of existing literature and become working guidelines ( ethnographic research questions ). Research questions may also be broadly stated without specific reference to the existing literature or a typology of questions ( phenomenological research questions ), may be directed towards generating a theory of some process ( grounded theory questions ), or may address a description of the case and the emerging themes ( qualitative case study questions ). 15 We provide examples of contextual, descriptive, evaluation, explanatory, exploratory, generative, ideological, ethnographic, phenomenological, grounded theory, and qualitative case study research questions in qualitative research in Table 4 , and the definition of qualitative hypothesis-generating research in Table 5 .

Table 4. Types of research questions in qualitative research

Contextual research question
- Asks the nature of what already exists
- Individuals or groups function to further clarify and understand the natural context of real-world problems
Example: What are the experiences of nurses working night shifts in healthcare during the COVID-19 pandemic? (natural context of real-world problems)

Descriptive research question
- Aims to describe a phenomenon
Example: What are the different forms of disrespect and abuse (phenomenon) experienced by Tanzanian women when giving birth in healthcare facilities?

Evaluation research question
- Examines the effectiveness of existing practice or accepted frameworks
Example: How effective are decision aids (effectiveness of existing practice) in helping decide whether to give birth at home or in a healthcare facility?

Explanatory research question
- Clarifies a previously studied phenomenon and explains why it occurs
Example: Why is there an increase in teenage pregnancy (phenomenon) in Tanzania?

Exploratory research question
- Explores areas that have not been fully investigated to have a deeper understanding of the research problem
Example: What factors affect the mental health of medical students (areas that have not yet been fully investigated) during the COVID-19 pandemic?

Generative research question
- Develops an in-depth understanding of people’s behavior by asking ‘how would’ or ‘what if’ to identify problems and find solutions
Example: How would the extensive research experience of the behavior of new staff impact the success of the novel drug initiative?

Ideological research question
- Aims to advance specific ideas or ideologies of a position
Example: Are Japanese nurses who volunteer in remote African hospitals able to promote humanized care of patients (specific ideas or ideologies) in the areas of safe patient environment, respect of patient privacy, and provision of accurate information related to health and care?

Ethnographic research question
- Clarifies peoples’ nature, activities, their interactions, and the outcomes of their actions in specific settings
Example: What are the demographic characteristics, rehabilitative treatments, community interactions, and disease outcomes (nature, activities, their interactions, and the outcomes) of people in China who are suffering from pneumoconiosis?

Phenomenological research question
- Asks about the phenomena that have impacted an individual
Example: What are the lived experiences of parents who have been living with and caring for children with a diagnosis of autism? (phenomena that have impacted an individual)

Grounded theory question
- Focuses on social processes asking about what happens and how people interact, or uncovering social relationships and behaviors of groups
Example: What are the problems that pregnant adolescents face in terms of social and cultural norms (social processes), and how can these be addressed?

Qualitative case study question
- Assesses a phenomenon using different sources of data to answer “why” and “how” questions
- Considers how the phenomenon is influenced by its contextual situation
Example: How does quitting work and assuming the role of a full-time mother (phenomenon assessed) change the lives of women in Japan?

Table 5. Definition of qualitative hypothesis-generating research

Hypothesis-generating (Qualitative hypothesis-generating research)
- Qualitative research uses inductive reasoning.
- This involves data collection from study participants or the literature regarding a phenomenon of interest, using the collected data to develop a formal hypothesis, and using the formal hypothesis as a framework for testing the hypothesis.
- Qualitative exploratory studies explore areas deeper, clarifying subjective experience and allowing formulation of a formal hypothesis potentially testable in a future quantitative approach.

Qualitative studies usually pose at least one central research question and several subquestions starting with How or What . These research questions use exploratory verbs such as explore or describe . These also focus on one central phenomenon of interest, and may mention the participants and research site. 15

Hypotheses in qualitative research

Hypotheses in qualitative research are stated in the form of a clear statement concerning the problem to be investigated. Unlike in quantitative research where hypotheses are usually developed to be tested, qualitative research can lead to both hypothesis-testing and hypothesis-generating outcomes. 2 When studies require both quantitative and qualitative research questions, this suggests an integrative process between both research methods wherein a single mixed-methods research question can be developed. 1

FRAMEWORKS FOR DEVELOPING RESEARCH QUESTIONS AND HYPOTHESES

Research questions followed by hypotheses should be developed before the start of the study. 1 , 12 , 14 It is crucial to develop feasible research questions on a topic that is interesting to both the researcher and the scientific community. This can be achieved by a meticulous review of previous and current studies to establish a novel topic. Specific areas are subsequently focused on to generate ethical research questions. The relevance of the research questions is evaluated in terms of clarity of the resulting data, specificity of the methodology, objectivity of the outcome, depth of the research, and impact of the study. 1 , 5 These aspects constitute the FINER criteria (i.e., Feasible, Interesting, Novel, Ethical, and Relevant). 1 Clarity and effectiveness are achieved if research questions meet the FINER criteria. In addition to the FINER criteria, Ratan et al. described focus, complexity, novelty, feasibility, and measurability for evaluating the effectiveness of research questions. 14

The PICOT and PEO frameworks are also used when developing research questions. 1 The following elements are addressed in these frameworks, PICOT: P-population/patients/problem, I-intervention or indicator being studied, C-comparison group, O-outcome of interest, and T-timeframe of the study; PEO: P-population being studied, E-exposure to preexisting conditions, and O-outcome of interest. 1 Research questions are also considered good if these meet the “FINERMAPS” framework: Feasible, Interesting, Novel, Ethical, Relevant, Manageable, Appropriate, Potential value/publishable, and Systematic. 14
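To make the PICOT elements concrete, the short sketch below stores them in a small data structure and assembles a draft question. The PICOTQuestion class, its draft wording, and the example values are hypothetical illustrations, not part of the framework itself.

```python
# Illustrative sketch: representing a PICOT-framed research question.
# The PICOTQuestion class, its draft wording, and the values are hypothetical.
from dataclasses import dataclass

@dataclass
class PICOTQuestion:
    population: str    # P: population / patients / problem
    intervention: str  # I: intervention or indicator being studied
    comparison: str    # C: comparison group
    outcome: str       # O: outcome of interest
    timeframe: str     # T: timeframe of the study

    def draft(self) -> str:
        return (f"In {self.population}, does {self.intervention}, compared with "
                f"{self.comparison}, affect {self.outcome} over {self.timeframe}?")

question = PICOTQuestion(
    population="adults with type 2 diabetes",
    intervention="a nurse-led telehealth program",
    comparison="usual clinic follow-up",
    outcome="HbA1c levels",
    timeframe="6 months",
)
print(question.draft())
```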

As we indicated earlier, research questions and hypotheses that are not carefully formulated result in unethical studies or poor outcomes. To illustrate this, we provide some examples of ambiguous research question and hypotheses that result in unclear and weak research objectives in quantitative research ( Table 6 ) 16 and qualitative research ( Table 7 ) 17 , and how to transform these ambiguous research question(s) and hypothesis(es) into clear and good statements.

Table 6. Examples of unclear and clear research questions, hypotheses, and research objectives in quantitative research

Research question
Unclear and weak statement (Statement 1) a: Which is more effective between smoke moxibustion and smokeless moxibustion?
Clear and good statement (Statement 2) b: “Moreover, regarding smoke moxibustion versus smokeless moxibustion, it remains unclear which is more effective, safe, and acceptable to pregnant women, and whether there is any difference in the amount of heat generated.”
Points to avoid: 1) Vague and unfocused questions; 2) Closed questions simply answerable by yes or no; 3) Questions requiring a simple choice

Hypothesis
Unclear and weak statement (Statement 1) a: The smoke moxibustion group will have higher cephalic presentation.
Clear and good statement (Statement 2) b: “Hypothesis 1. The smoke moxibustion stick group (SM group) and smokeless moxibustion stick group (SLM group) will have higher rates of cephalic presentation after treatment than the control group. Hypothesis 2. The SM group and SLM group will have higher rates of cephalic presentation at birth than the control group. Hypothesis 3. There will be no significant differences in the well-being of the mother and child among the three groups in terms of the following outcomes: premature birth, premature rupture of membranes (PROM) at < 37 weeks, Apgar score < 7 at 5 min, umbilical cord blood pH < 7.1, admission to neonatal intensive care unit (NICU), and intrauterine fetal death.”
Points to avoid: 1) Unverifiable hypotheses; 2) Incompletely stated groups of comparison; 3) Insufficiently described variables or outcomes

Research objective
Unclear and weak statement (Statement 1) a: To determine which is more effective between smoke moxibustion and smokeless moxibustion.
Clear and good statement (Statement 2) b: “The specific aims of this pilot study were (a) to compare the effects of smoke moxibustion and smokeless moxibustion treatments with the control group as a possible supplement to ECV for converting breech presentation to cephalic presentation and increasing adherence to the newly obtained cephalic position, and (b) to assess the effects of these treatments on the well-being of the mother and child.”
Points to avoid: 1) Poor understanding of the research question and hypotheses; 2) Insufficient description of population, variables, or study outcomes

a These statements were composed for comparison and illustrative purposes only.

b These statements are direct quotes from Higashihara and Horiuchi. 16

Table 7. Examples of unclear and clear research questions, hypotheses, and research objectives in qualitative research

Research question
Unclear and weak statement (Statement 1): Does disrespect and abuse (D&A) occur in childbirth in Tanzania?
Clear and good statement (Statement 2): How does disrespect and abuse (D&A) occur and what are the types of physical and psychological abuses observed in midwives’ actual care during facility-based childbirth in urban Tanzania?
Points to avoid: 1) Ambiguous or oversimplistic questions; 2) Questions unverifiable by data collection and analysis

Hypothesis
Unclear and weak statement (Statement 1): Disrespect and abuse (D&A) occur in childbirth in Tanzania.
Clear and good statement (Statement 2): Hypothesis 1: Several types of physical and psychological abuse by midwives in actual care occur during facility-based childbirth in urban Tanzania. Hypothesis 2: Weak nursing and midwifery management contribute to the D&A of women during facility-based childbirth in urban Tanzania.
Points to avoid: 1) Statements simply expressing facts; 2) Insufficiently described concepts or variables

Research objective
Unclear and weak statement (Statement 1): To describe disrespect and abuse (D&A) in childbirth in Tanzania.
Clear and good statement (Statement 2): “This study aimed to describe from actual observations the respectful and disrespectful care received by women from midwives during their labor period in two hospitals in urban Tanzania.” a
Points to avoid: 1) Statements unrelated to the research question and hypotheses; 2) Unattainable or unexplorable objectives

a This statement is a direct quote from Shimoda et al. 17

The other statements were composed for comparison and illustrative purposes only.

CONSTRUCTING RESEARCH QUESTIONS AND HYPOTHESES

To construct effective research questions and hypotheses, it is very important to 1) clarify the background and 2) identify the research problem at the outset of the research, within a specific timeframe. 9 Then, 3) review or conduct preliminary research to collect all available knowledge about the possible research questions by studying theories and previous studies. 18 Afterwards, 4) construct research questions to investigate the research problem. Identify variables to be accessed from the research questions 4 and make operational definitions of constructs from the research problem and questions. Thereafter, 5) construct specific deductive or inductive predictions in the form of hypotheses. 4 Finally, 6) state the study aims . This general flow for constructing effective research questions and hypotheses prior to conducting research is shown in Fig. 1 .

Fig. 1. General flow for constructing effective research questions and hypotheses prior to conducting research.

Research questions are used more frequently in qualitative research than objectives or hypotheses. 3 These questions seek to discover, understand, explore or describe experiences by asking “What” or “How.” The questions are open-ended to elicit a description rather than to relate variables or compare groups. The questions are continually reviewed, reformulated, and changed during the qualitative study. 3 Research questions are also used more frequently in survey projects than hypotheses in experiments in quantitative research to compare variables and their relationships.

Hypotheses are constructed based on the variables identified and as an if-then statement, following the template, ‘If a specific action is taken, then a certain outcome is expected.’ At this stage, some ideas regarding expectations from the research to be conducted must be drawn. 18 Then, the variables to be manipulated (independent) and influenced (dependent) are defined. 4 Thereafter, the hypothesis is stated and refined, and reproducible data tailored to the hypothesis are identified, collected, and analyzed. 4 The hypotheses must be testable and specific, 18 and should describe the variables and their relationships, the specific group being studied, and the predicted research outcome. 18 Hypotheses construction involves a testable proposition to be deduced from theory, and independent and dependent variables to be separated and measured separately. 3 Therefore, good hypotheses must be based on good research questions constructed at the start of a study or trial. 12

In summary, research questions are constructed after establishing the background of the study. Hypotheses are then developed based on the research questions. Thus, it is crucial to have excellent research questions to generate superior hypotheses. In turn, these would determine the research objectives and the design of the study, and ultimately, the outcome of the research. 12 Algorithms for building research questions and hypotheses are shown in Fig. 2 for quantitative research and in Fig. 3 for qualitative research.

Fig. 2. Algorithm for building research questions and hypotheses in quantitative research.

EXAMPLES OF RESEARCH QUESTIONS FROM PUBLISHED ARTICLES

  • EXAMPLE 1. Descriptive research question (quantitative research)
  • - Presents research variables to be assessed (distinct phenotypes and subphenotypes)
  • “BACKGROUND: Since COVID-19 was identified, its clinical and biological heterogeneity has been recognized. Identifying COVID-19 phenotypes might help guide basic, clinical, and translational research efforts.
  • RESEARCH QUESTION: Does the clinical spectrum of patients with COVID-19 contain distinct phenotypes and subphenotypes? ” 19
  • EXAMPLE 2. Relationship research question (quantitative research)
  • - Shows interactions between dependent variable (static postural control) and independent variable (peripheral visual field loss)
  • “Background: Integration of visual, vestibular, and proprioceptive sensations contributes to postural control. People with peripheral visual field loss have serious postural instability. However, the directional specificity of postural stability and sensory reweighting caused by gradual peripheral visual field loss remain unclear.
  • Research question: What are the effects of peripheral visual field loss on static postural control ?” 20
  • EXAMPLE 3. Comparative research question (quantitative research)
  • - Clarifies the difference among groups with an outcome variable (patients enrolled in COMPERA with moderate PH or severe PH in COPD) and another group without the outcome variable (patients with idiopathic pulmonary arterial hypertension (IPAH))
  • “BACKGROUND: Pulmonary hypertension (PH) in COPD is a poorly investigated clinical condition.
  • RESEARCH QUESTION: Which factors determine the outcome of PH in COPD?
  • STUDY DESIGN AND METHODS: We analyzed the characteristics and outcome of patients enrolled in the Comparative, Prospective Registry of Newly Initiated Therapies for Pulmonary Hypertension (COMPERA) with moderate or severe PH in COPD as defined during the 6th PH World Symposium who received medical therapy for PH and compared them with patients with idiopathic pulmonary arterial hypertension (IPAH) .” 21
  • EXAMPLE 4. Exploratory research question (qualitative research)
  • - Explores areas that have not been fully investigated (perspectives of families and children who receive care in clinic-based child obesity treatment) to have a deeper understanding of the research problem
  • “Problem: Interventions for children with obesity lead to only modest improvements in BMI and long-term outcomes, and data are limited on the perspectives of families of children with obesity in clinic-based treatment. This scoping review seeks to answer the question: What is known about the perspectives of families and children who receive care in clinic-based child obesity treatment? This review aims to explore the scope of perspectives reported by families of children with obesity who have received individualized outpatient clinic-based obesity treatment.” 22
  • EXAMPLE 5. Relationship research question (quantitative research)
  • - Defines interactions between dependent variable (use of ankle strategies) and independent variable (changes in muscle tone)
  • “Background: To maintain an upright standing posture against external disturbances, the human body mainly employs two types of postural control strategies: “ankle strategy” and “hip strategy.” While it has been reported that the magnitude of the disturbance alters the use of postural control strategies, it has not been elucidated how the level of muscle tone, one of the crucial parameters of bodily function, determines the use of each strategy. We have previously confirmed using forward dynamics simulations of human musculoskeletal models that an increased muscle tone promotes the use of ankle strategies. The objective of the present study was to experimentally evaluate a hypothesis: an increased muscle tone promotes the use of ankle strategies. Research question: Do changes in the muscle tone affect the use of ankle strategies ?” 23

EXAMPLES OF HYPOTHESES IN PUBLISHED ARTICLES

  • EXAMPLE 1. Working hypothesis (quantitative research)
  • - A hypothesis that is initially accepted for further research to produce a feasible theory
  • “As fever may have benefit in shortening the duration of viral illness, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response when taken during the early stages of COVID-19 illness .” 24
  • “In conclusion, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response . The difference in perceived safety of these agents in COVID-19 illness could be related to the more potent efficacy to reduce fever with ibuprofen compared to acetaminophen. Compelling data on the benefit of fever warrant further research and review to determine when to treat or withhold ibuprofen for early stage fever for COVID-19 and other related viral illnesses .” 24
  • EXAMPLE 2. Exploratory hypothesis (qualitative research)
  • - Explores particular areas deeper to clarify subjective experience and develop a formal hypothesis potentially testable in a future quantitative approach
  • “We hypothesized that when thinking about a past experience of help-seeking, a self distancing prompt would cause increased help-seeking intentions and more favorable help-seeking outcome expectations .” 25
  • “Conclusion
  • Although a priori hypotheses were not supported, further research is warranted as results indicate the potential for using self-distancing approaches to increasing help-seeking among some people with depressive symptomatology.” 25
  • EXAMPLE 3. Hypothesis-generating research to establish a framework for hypothesis testing (qualitative research)
  • “We hypothesize that compassionate care is beneficial for patients (better outcomes), healthcare systems and payers (lower costs), and healthcare providers (lower burnout). ” 26
  • Compassionomics is the branch of knowledge and scientific study of the effects of compassionate healthcare. Our main hypotheses are that compassionate healthcare is beneficial for (1) patients, by improving clinical outcomes, (2) healthcare systems and payers, by supporting financial sustainability, and (3) HCPs, by lowering burnout and promoting resilience and well-being. The purpose of this paper is to establish a scientific framework for testing the hypotheses above . If these hypotheses are confirmed through rigorous research, compassionomics will belong in the science of evidence-based medicine, with major implications for all healthcare domains.” 26
  • EXAMPLE 4. Statistical hypothesis (quantitative research)
  • - An assumption is made about the relationship among several population characteristics (gender differences in sociodemographic and clinical characteristics of adults with ADHD). Validity is tested by statistical experiment or analysis (chi-square test, Student's t-test, and logistic regression analysis)
  • “Our research investigated gender differences in sociodemographic and clinical characteristics of adults with ADHD in a Japanese clinical sample. Due to unique Japanese cultural ideals and expectations of women's behavior that are in opposition to ADHD symptoms, we hypothesized that women with ADHD experience more difficulties and present more dysfunctions than men. We tested the following hypotheses: first, women with ADHD have more comorbidities than men with ADHD; second, women with ADHD experience more social hardships than men, such as having less full-time employment and being more likely to be divorced.” 27
  • “Statistical Analysis
  • (text omitted) Between-gender comparisons were made using the chi-squared test for categorical variables and Student's t-test for continuous variables…(text omitted). A logistic regression analysis was performed for employment status, marital status, and comorbidity to evaluate the independent effects of gender on these dependent variables.” 27

EXAMPLES OF HYPOTHESIS AS WRITTEN IN PUBLISHED ARTICLES IN RELATION TO OTHER PARTS

  • EXAMPLE 1. Background, hypotheses, and aims are provided
  • “Pregnant women need skilled care during pregnancy and childbirth, but that skilled care is often delayed in some countries…(text omitted). The focused antenatal care (FANC) model of WHO recommends that nurses provide information or counseling to all pregnant women…(text omitted). Job aids are visual support materials that provide the right kind of information using graphics and words in a simple and yet effective manner. When nurses are not highly trained or have many work details to attend to, these job aids can serve as a content reminder for the nurses and can be used for educating their patients (Jennings, Yebadokpo, Affo, & Agbogbe, 2010) (text omitted). Importantly, additional evidence is needed to confirm how job aids can further improve the quality of ANC counseling by health workers in maternal care…(text omitted)” 28
  • “This has led us to hypothesize that the quality of ANC counseling would be better if supported by job aids. Consequently, a better quality of ANC counseling is expected to produce higher levels of awareness concerning the danger signs of pregnancy and a more favorable impression of the caring behavior of nurses.” 28
  • “This study aimed to examine the differences in the responses of pregnant women to a job aid-supported intervention during ANC visit in terms of 1) their understanding of the danger signs of pregnancy and 2) their impression of the caring behaviors of nurses to pregnant women in rural Tanzania.” 28
  • EXAMPLE 2. Background, hypotheses, and aims are provided
  • “We conducted a two-arm randomized controlled trial (RCT) to evaluate and compare changes in salivary cortisol and oxytocin levels of first-time pregnant women between experimental and control groups. The women in the experimental group touched and held an infant for 30 min (experimental intervention protocol), whereas those in the control group watched a DVD movie of an infant (control intervention protocol). The primary outcome was salivary cortisol level and the secondary outcome was salivary oxytocin level.” 29
  • “We hypothesize that at 30 min after touching and holding an infant, the salivary cortisol level will significantly decrease and the salivary oxytocin level will increase in the experimental group compared with the control group.” 29
  • EXAMPLE 3. Background, aim, and hypothesis are provided
  • “In countries where the maternal mortality ratio remains high, antenatal education to increase Birth Preparedness and Complication Readiness (BPCR) is considered one of the top priorities [1]. BPCR includes birth plans during the antenatal period, such as the birthplace, birth attendant, transportation, health facility for complications, expenses, and birth materials, as well as family coordination to achieve such birth plans. In Tanzania, although increasing, only about half of all pregnant women attend an antenatal clinic more than four times [4]. Moreover, the information provided during antenatal care (ANC) is insufficient. In the resource-poor settings, antenatal group education is a potential approach because of the limited time for individual counseling at antenatal clinics.” 30
  • “This study aimed to evaluate an antenatal group education program among pregnant women and their families with respect to birth-preparedness and maternal and infant outcomes in rural villages of Tanzania.” 30
  • “The study hypothesis was if Tanzanian pregnant women and their families received a family-oriented antenatal group education, they would (1) have a higher level of BPCR, (2) attend antenatal clinic four or more times, (3) give birth in a health facility, (4) have less complications of women at birth, and (5) have less complications and deaths of infants than those who did not receive the education.” 30

Research questions and hypotheses are crucial components of any type of research, whether quantitative or qualitative, and should be developed at the very beginning of the study. Excellent research questions lead to superior hypotheses, which, like a compass, set the direction of the research and can often determine the successful conduct of the study. Many research studies have floundered because the development of research questions and subsequent hypotheses was not given the thought and meticulous attention it needed. Developing research questions and hypotheses is an iterative process based on extensive knowledge of the literature and an insightful grasp of the knowledge gap. Focused, concise, and specific research questions provide a strong foundation for constructing hypotheses, which serve as formal predictions about the research outcomes. Research questions and hypotheses should therefore be carefully thought out and constructed when planning research; doing so avoids unethical studies and poor outcomes by defining well-founded objectives that determine the design, course, and outcome of the study.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Barroga E, Matanguihan GJ.
  • Methodology: Barroga E, Matanguihan GJ.
  • Writing - original draft: Barroga E, Matanguihan GJ.
  • Writing - review & editing: Barroga E, Matanguihan GJ.

Walden University

How do I find a quantitative article?


Answered By: Jon Allinder | Last Updated: Jan 29, 2023

You can find quantitative articles by searching in the Library databases using methodology terms as keywords. To find a quantitative study, possible keywords include the type of study, data analysis type, or terminology used to describe the results.

Example quantitative keywords include terms such as quantitative, statistic* and correlation* (used in the sample searches below).

The following search uses our multi-database search tool  to find examples of quantitative research studies. However, you can search in any article or dissertation database for quantitative studies.

  • On the Library homepage, type your general term in the main search box, and hit the search button: quantitative
  • Sign in with your Walden credentials if prompted.
  • Type more methodology terms in the first search box. Use as many alternative terms as are relevant to your search. Use the remaining search box(es) to narrow your search to a specific topic of interest.
  • Click the  Search  button.

Here is an example search setup:

First search box:

Quantitative OR Statistic* OR Correlation*

Second search box:

Post-Traumatic Stress Disorder OR PTSD


Database Search Tips

Connecting the alternative terms with OR tells the database to search for any of these terms.

Using the asterisk (*) truncates the search.  The database will search for the part of the word you typed before the asterisk, along with any possible endings of the word. Using statistic* tells the database to search for statistics, statistical, etc.

  • Some methodologies are rarely used for certain research topics. You may need to broaden your search topic to find a study that uses your methodology.
  • Many articles and dissertations will include methodology terms in the abstract or title. To make sure that you have an example of your methodology, be sure to look at the  methodology section  in the full text. This will provide detailed information about the methodology used.

To find more results, or if you are searching for a very specific type of study design, you can try a different search setup.

  • Type your terms into the first search box. 
  • To the right of that, change the  Select a Field (optional)  drop-down menu to  TX All Text.
  •  Type your other keyword term into the second search box

For example:

First search box:

Pretest AND Posttest

Second search box:

post-traumatic stress disorder OR PTSD

Search Tip : Connecting terms with  AND  tells the database to search for both of these words.


More Information:

Does the Library have information about different quantitative research methods?

How do I find an article that uses a specific methodology?

How do I find a qualitative article?

How do I find a mixed-method article?

How do I find original research studies that include empirical data?

Learn more about methodologies by searching encyclopedias and SAGE Research Methods Online.    

Do you have other methodology search questions? Ask a Librarian!


Related Topics

  • Library Skills
  • Methodology


La Salle University

Connelly Library


Qualitative and Quantitative Research

Locating Articles in PubMed


Remember to use PubMed from Connelly to take advantage of Connelly Library links to journals, interlibrary loan (ILL), and more.

When searching for qualitative studies in PubMed, you can use the controlled MeSH terms. Use the Advanced Search, change the field to MeSH Terms, and enter the phrase qualitative research.


Finding quantitative studies is a bit different. Run your search, then apply limits by clicking the Customize link under Article Types. There are many different types of quantitative studies, and you can choose as many or as few as you want. After you choose the types you want, click Show. The chosen types then appear in the Article Type filter, where you can click them to limit your results to those study types.
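For reference, both the MeSH approach above and these study-type limits can also be typed directly into the PubMed query box using field tags. The field tags below are standard PubMed syntax; the topic term (nursing) is only a placeholder for your own keywords:

"qualitative research"[MeSH Terms] AND nursing[Title/Abstract]

randomized controlled trial[Publication Type] AND nursing[Title/Abstract]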


When you click Show, the Article Types appear on the left-hand side. Click the ones you want in order to limit your results to the correct type of article.


The Chicago School Library

Quantitative Research Methods

About this guide

 The purpose of this guide is to provide a starting point for learning about quantitative research. In this guide, you'll find:

  • Resources on diverse types of quantitative research
  • An overview of resources for data, methods and analysis
  • Popular quantitative software options
  • Information on how to find quantitative studies

What is quantitative research?

Research involving the collection of data in numerical form for quantitative analysis. The numerical data can be durations, scores, counts of incidents, ratings, or scales. Quantitative data can be collected in either controlled or naturalistic environments, in laboratories or field studies, from special populations or from samples of the general population. The defining factor is that numbers result from the process, whether the initial data collection produced numerical values, or whether non-numerical values were subsequently converted to numbers as part of the analysis process, as in content analysis.

Citation: Garwood, J. (2006). Quantitative research. In V. Jupp (Ed.), The SAGE dictionary of social research methods. (pp. 251-252). London, England: SAGE Publications. doi:10.4135/9780857020116


Quantitative research methodologies

Correlational

Researchers compare two sets of numbers to try to identify a relationship (if any) between two things.

Descriptive

Researchers will attempt to quantify a variety of factors at play as they study a particular type of phenomenon or action. For example, researchers might use a descriptive methodology to understand the effects of climate change on the life cycle of a plant or animal.

Experimental

To understand the effects of a variable, researchers will design an experiment where they can control as many factors as possible. This can involve creating control and experimental groups. The experimental group will be exposed to the variable to study its effects. The control group provides data about what happens when the variable is absent. For example, in a study about online teaching, the control group might receive traditional face-to-face instruction while the experimental group would receive their instruction virtually.

Quasi-Experimental/Quasi-Comparative

Researchers will attempt to determine what (if any) effect a variable can have. These studies may have multiple independent variables (causes) and multiple dependent variables (effects), but this can complicate researchers' efforts to find out if A can cause B or if X, Y, and Z are also playing a role.

Surveys

Surveys can be considered a quantitative methodology if the researchers require their respondents to choose from pre-determined responses.

  • Last Updated: Aug 20, 2024 5:29 PM
  • URL: https://library.thechicagoschool.edu/quantitative

Quantitative research: Understanding the approaches and key elements


Quantitative research has many benefits and challenges but understanding how to properly conduct it can lead to a successful marketing research project.

Choosing the right quantitative approach

Editor’s note: Allison Von Borstel is the associate director of creative analytics at The Sound. This is an edited version of an article that originally appeared under the title “ Understanding Quantitative Research Approaches .”

What is quantitative research?

The systematic approaches that ground quantitative research involve hundreds or thousands of data points for one research project. The wonder of quantitative research is that each data point, or row in a spreadsheet, is a person and has a human story to tell. 

Quantitative research aggregates voices and distills them into numbers that uncover trends, illuminate relationships and correlations, and inform decision-making with solid evidence and clarity.

The benefits of quantitative approaches

Why choose a quantitative   approach? Because you want a very clear story grounded in statistical rigor as a guide to making smart, data-backed decisions. 

Quantitative approaches shine because they:

Involve a lot of people

Large sample sizes (think hundreds or thousands) enable researchers to generalize findings because the sample is representative of the total population.  

Are grounded in statistical rigor

They allow for precise measurement and analysis of data, providing statistically significant results that bolster confidence in the research.

Reduce bias

Structured data collection and analysis methods enhance the reliability of findings. 

Boost efficiency

Quantitative methods often follow a qualitative phase, allowing researchers to validate findings by reporting the perspective of hundreds of people in a fraction of the time. 

Widen the analysis’ scope

The copious data collected in just a 20-minute (max) survey positions researchers to evaluate a broad spectrum of variables within the data. This thorough comprehension is instrumental when dealing with complex questions that require in-depth analysis. 

Quantitative approaches have hurdles, which include:

Limited flexibility

Once a survey is fielded, or data is gathered, there’s no opportunity to ask a live follow-up question. While it is possible to follow up with the same people for two surveys, the likelihood of sufficient responses is small.

Battling bots

One of the biggest concerns in data quality is making sure data represents people and not bots. 

Missing body language cues

Numbers, words and even images lack the cues that a researcher could pick up on during an interview. Unlike in a qualitative focus group, where one might deduce that a person is uncertain of an answer, in quantitative research, a static response is what the researcher works with.

Understanding quantitative research methods 

Quantitative research starts from the same place as qualitative research – grounded in business objectives, with a specific group of people to study.

Once the research has kicked off, the business objective has been thoroughly explored and the approach selected, the work follows a general outline:

Consider what data is needed

Think about what type of information needs to be gathered, with an approach in mind. While most quantitative research involves numbers, words and images also count.

  • Numbers: Yes, the stereotypical rows of numbers in spreadsheets: rows that capture people’s opinions and attitudes and are coded to numbers for comparative analytics (a toy coding sketch follows this list). Numerical analysis is used for everything from descriptive statistics to regression/predictive analysis. 
  • Words:  Text analysis employs a machine learning model to identify sentiment, emotion and meaning of text. Often used for sentiment analysis or content classification, it can be applied to single-word responses, elaborate open-ends, reviews or even social media posts.
  • Images: Image analysis extracts meaningful information from images. A computer vision model that takes images as inputs and outputs numerical information (e.g., having a sample upload their favorite bag of chips and yielding the top three brands).
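To make the “coded to numbers” step concrete, here is a minimal sketch in Python using pandas; the item, answer labels and coding scheme are invented for illustration:

import pandas as pd

# Invented responses to a 5-point Likert item
responses = pd.DataFrame({
    "respondent": [1, 2, 3, 4],
    "satisfaction": ["Very dissatisfied", "Neutral", "Satisfied", "Very satisfied"],
})

# Map the answer labels to numeric codes for comparative analytics
likert_codes = {
    "Very dissatisfied": 1, "Dissatisfied": 2, "Neutral": 3,
    "Satisfied": 4, "Very satisfied": 5,
}
responses["satisfaction_code"] = responses["satisfaction"].map(likert_codes)

print(responses["satisfaction_code"].mean())  # average rating across respondents

Once every answer is a number, the full toolkit of descriptive and inferential statistics can be applied to it.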

Design a survey

Create a survey to capture the data needed to address the objective. During this process, different pathways can be written to get a dynamic data set (capturing opinions that derive from various lived experiences). Survey logic is also written to provide a smooth user experience for respondents.

Prepare the data

The quality of quantitative research rests heavily on the quality of data. After data is collected (typically by fielding a survey or collecting already-existing data, more on that in a bit), it’s time to clean the data. 

Begin the analysis process

Now that you have a robust database (including numbers, words or images), it’s time to listen to the story that the data tells. Depending on the research approach used, advanced analytics come into play to tease out insights and nuances for the business objective. 

Tell the story

Strip the quantitative jargon and convey the insights from the research. Just because it’s quantitative research does not mean the results have to be told in a monotone drone with a monochrome chart. Answer business objectives dynamically, knowing that research is grounded in statistically sound information. 

The two options: Primary vs. secondary research

The two methods that encompass quantitative approaches are primary (collecting data oneself) and secondary (relying on already existing data).

Primary research is primarily used

Most research involves primary data collection – where the researcher collects data directly. The main approach in primary research is survey data collection.  

The types of survey questions

Survey questions span various measurement scales (nominal, ordinal, interval and ratio) and use a mix of question types (single- and multi-choice, scales, matrix or open-ends).

Analysis methods

Custom surveys yield great data for a variety of methods in market analysis. Here are a couple favorites: 

  • Crosstabulation: Used to uncover insights that might not be obvious at first glance. This analysis organizes data into categories, revealing trends or patterns between variables (a minimal example follows this list).
  • Sentiment analysis: Used to sift through text to gauge emotions, opinions and attitudes. This method helps understand perception, fine-tune strategies and effectively respond to feedback.
  • Market sizing: Used to map out the dimensions of a market. By calculating the total potential demand for a product or service in a specific market, this method reveals the scope of opportunities needed to make informed decisions about investment and growth strategies.
  • Conjoint analysis: Used to uncover what people value most in products or services. It breaks down features into bits and pieces and asks people to choose their ideal combo. By analyzing these preferences, this analysis reveals the hidden recipe for customer satisfaction.
  • Job-To-Be-Done: Used to understand the underlying human motivations that drive people to act. People are multifaceted and experience a myriad of situations each day – meaning that a brand’s competition isn’t limited to in-category.
  • Segmentation: Used to identify specific cohorts within a greater population. It groups people with similar characteristics, behaviors or needs together. This method helps tailor products or services to specific groups, boosting satisfaction and sales.
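As a small illustration of crosstabulation, a table like the one described above can be produced in a couple of lines with pandas; the variables and data are invented:

import pandas as pd

# Invented survey data: age group and brand preference
df = pd.DataFrame({
    "age_group": ["18-34", "18-34", "35-54", "35-54", "55+", "55+"],
    "prefers_brand_a": ["Yes", "No", "Yes", "Yes", "No", "No"],
})

# Counts of brand preference within each age group
print(pd.crosstab(df["age_group"], df["prefers_brand_a"]))

# Row percentages make patterns between the variables easier to spot
print(pd.crosstab(df["age_group"], df["prefers_brand_a"], normalize="index"))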

Statistical rigor

Regardless of method, a quantitative approach then enables researchers to draw inferences and make predictions based upon the confidence in the data (looking at confidence intervals, margin of error, etc.).
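For example, the margin of error around a survey proportion can be approximated as follows; this is a sketch that assumes a simple random sample, and the figures are invented:

import math

n = 1000   # invented sample size
p = 0.42   # invented observed proportion (42% agreed with a statement)
z = 1.96   # z-score for a 95% confidence level

margin_of_error = z * math.sqrt(p * (1 - p) / n)
print(f"95% margin of error: +/-{margin_of_error:.3f}")  # roughly +/-0.031, about 3 percentage points

# Corresponding 95% confidence interval for the proportion
print(p - margin_of_error, p + margin_of_error)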

Let’s not forget secondary research

By accessing a wide range of existing information, this research can be a cost-effective way to gain insights or can supplement primary research findings. 

Here are popular options: 

Government sources

Government sources can be extremely in-depth, can range across multiple industries and markets and reflect millions of people. This type of data is often instrumental for longitudinal or cultural trends analysis. 

Educational institutions

Research universities conduct in-depth studies on a variety of topics, often aggregating government data, nonprofit data and primary data.  

Client data

This includes any research that was conducted for or by companies before the   present research project. Whether it’s data gathered from customer reviews or prior quantitative work, these secondary resources can help extend findings and detect trends by connecting past data to future data.

Quantitative research enhances research projects

Quantitative research approaches are so much more than “how much” or “how many”; they reveal the why behind people’s actions, emotions and behaviors. By using standardized collection methods, like surveys, quant instills confidence and rigor in findings.


How To Write The Results/Findings Chapter

For quantitative studies (dissertations & theses).

By: Derek Jansen (MBA) | Expert Reviewed By: Kerryn Warren (PhD) | July 2021

So, you’ve completed your quantitative data analysis and it’s time to report on your findings. But where do you start? In this post, we’ll walk you through the results chapter (also called the findings or analysis chapter), step by step, so that you can craft this section of your dissertation or thesis with confidence. If you’re looking for information regarding the results chapter for qualitative studies, you can find that here .

Overview: Quantitative Results Chapter

  • What exactly the results chapter is
  • What you need to include in your chapter
  • How to structure the chapter
  • Tips and tricks for writing a top-notch chapter
  • Free results chapter template

What exactly is the results chapter?

The results chapter (also referred to as the findings or analysis chapter) is one of the most important chapters of your dissertation or thesis because it shows the reader what you’ve found in terms of the quantitative data you’ve collected. It presents the data using a clear text narrative, supported by tables, graphs and charts. In doing so, it also highlights any potential issues (such as outliers or unusual findings) you’ve come across.

But how’s that different from the discussion chapter?

Well, in the results chapter, you only present your statistical findings. Only the numbers, so to speak – no more, no less. Contrasted to this, in the discussion chapter, you interpret your findings and link them to prior research (i.e. your literature review), as well as your research objectives and research questions. In other words, the results chapter presents and describes the data, while the discussion chapter interprets the data.

Let’s look at an example.

In your results chapter, you may have a plot that shows how respondents to a survey  responded: the numbers of respondents per category, for instance. You may also state whether this supports a hypothesis by using a p-value from a statistical test. But it is only in the discussion chapter where you will say why this is relevant or how it compares with the literature or the broader picture. So, in your results chapter, make sure that you don’t present anything other than the hard facts – this is not the place for subjectivity.

It’s worth mentioning that some universities prefer you to combine the results and discussion chapters. Even so, it is good practice to separate the results and discussion elements within the chapter, as this ensures your findings are fully described. Typically, though, the results and discussion chapters are split up in quantitative studies. If you’re unsure, chat with your research supervisor or chair to find out what their preference is.


What should you include in the results chapter?

Following your analysis, it’s likely you’ll have far more data than are necessary to include in your chapter. In all likelihood, you’ll have a mountain of SPSS or R output data, and it’s your job to decide what’s most relevant. You’ll need to cut through the noise and focus on the data that matters.

This doesn’t mean that those analyses were a waste of time – on the contrary, those analyses ensure that you have a good understanding of your dataset and how to interpret it. However, that doesn’t mean your reader or examiner needs to see the 165 histograms you created! Relevance is key.

How do I decide what’s relevant?

At this point, it can be difficult to strike a balance between what is and isn’t important. But the most important thing is to ensure your results reflect and align with the purpose of your study .  So, you need to revisit your research aims, objectives and research questions and use these as a litmus test for relevance. Make sure that you refer back to these constantly when writing up your chapter so that you stay on track.

There must be alignment between your research aims, objectives and questions

As a general guide, your results chapter will typically include the following:

  • Some demographic data about your sample
  • Reliability tests (if you used measurement scales)
  • Descriptive statistics
  • Inferential statistics (if your research objectives and questions require these)
  • Hypothesis tests (again, if your research objectives and questions require these)

We’ll discuss each of these points in more detail in the next section.

Importantly, your results chapter needs to lay the foundation for your discussion chapter . This means that, in your results chapter, you need to include all the data that you will use as the basis for your interpretation in the discussion chapter.

For example, if you plan to highlight the strong relationship between Variable X and Variable Y in your discussion chapter, you need to present the respective analysis in your results chapter – perhaps a correlation or regression analysis.


How do I write the results chapter?

There are multiple steps involved in writing up the results chapter for your quantitative research. The exact number of steps applicable to you will vary from study to study and will depend on the nature of the research aims, objectives and research questions . However, we’ll outline the generic steps below.

Step 1 – Revisit your research questions

The first step in writing your results chapter is to revisit your research objectives and research questions . These will be (or at least, should be!) the driving force behind your results and discussion chapters, so you need to review them and then ask yourself which statistical analyses and tests (from your mountain of data) would specifically help you address these . For each research objective and research question, list the specific piece (or pieces) of analysis that address it.

At this stage, it’s also useful to think about the key points that you want to raise in your discussion chapter and note these down so that you have a clear reminder of which data points and analyses you want to highlight in the results chapter. Again, list your points and then list the specific piece of analysis that addresses each point. 

Next, you should draw up a rough outline of how you plan to structure your chapter . Which analyses and statistical tests will you present and in what order? We’ll discuss the “standard structure” in more detail later, but it’s worth mentioning now that it’s always useful to draw up a rough outline before you start writing (this advice applies to any chapter).

Step 2 – Craft an overview introduction

As with all chapters in your dissertation or thesis, you should start your quantitative results chapter by providing a brief overview of what you’ll do in the chapter and why . For example, you’d explain that you will start by presenting demographic data to understand the representativeness of the sample, before moving onto X, Y and Z.

This section shouldn’t be lengthy – a paragraph or two maximum. Also, it’s a good idea to weave the research questions into this section so that there’s a golden thread that runs through the document.

Your chapter must have a golden thread

Step 3 – Present the sample demographic data

The first set of data that you’ll present is an overview of the sample demographics – in other words, the demographics of your respondents.

For example:

  • What age range are they?
  • How is gender distributed?
  • How is ethnicity distributed?
  • What areas do the participants live in?

The purpose of this is to assess how representative the sample is of the broader population. This is important for the sake of the generalisability of the results. If your sample is not representative of the population, you will not be able to generalise your findings. This is not necessarily the end of the world, but it is a limitation you’ll need to acknowledge.

Of course, to make this representativeness assessment, you’ll need to have a clear view of the demographics of the population. So, make sure that you design your survey to capture the correct demographic information that you will compare your sample to.

But what if I’m not interested in generalisability?

Well, even if your purpose is not necessarily to extrapolate your findings to the broader population, understanding your sample will allow you to interpret your findings appropriately, considering who responded. In other words, it will help you contextualise your findings . For example, if 80% of your sample was aged over 65, this may be a significant contextual factor to consider when interpreting the data. Therefore, it’s important to understand and present the demographic data.
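A quick way to produce this kind of sample breakdown is shown below (a sketch assuming pandas; the variables and respondent data are invented):

import pandas as pd

# Invented respondent demographics
sample = pd.DataFrame({
    "age_band": ["18-34", "35-54", "55+", "55+", "35-54", "55+"],
    "gender": ["F", "M", "F", "F", "M", "F"],
})

# Percentage of respondents in each category, to compare against the population profile
print(sample["age_band"].value_counts(normalize=True) * 100)
print(sample["gender"].value_counts(normalize=True) * 100)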

 Step 4 – Review composite measures and the data “shape”.

Before you undertake any statistical analysis, you’ll need to do some checks to ensure that your data are suitable for the analysis methods and techniques you plan to use. If you try to analyse data that doesn’t meet the assumptions of a specific statistical technique, your results will be largely meaningless. Therefore, you may need to show that the methods and techniques you’ll use are “allowed”.

Most commonly, there are two areas you need to pay attention to:

#1: Composite measures

The first is when you have multiple scale-based measures that combine to capture one construct – this is called a composite measure. For example, you may have four Likert scale-based measures that (should) all measure the same thing, but in different ways. In other words, in a survey, these four scales should all receive similar ratings. This is called “internal consistency”.

Internal consistency is not guaranteed though (especially if you developed the measures yourself), so you need to assess the reliability of each composite measure using a test. Typically, Cronbach’s Alpha is a common test used to assess internal consistency – i.e., to show that the items you’re combining are more or less saying the same thing. A high alpha score means that your measure is internally consistent. A low alpha score means you may need to consider scrapping one or more of the measures.
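To make this concrete, Cronbach’s Alpha can be computed directly from the item scores; the sketch below assumes numpy, the ratings are invented, and most statistics packages (SPSS, R, etc.) will report this value for you:

import numpy as np

# Invented ratings: rows = respondents, columns = the four Likert items of one composite measure
items = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 1, 2, 2],
    [4, 4, 3, 4],
])

k = items.shape[1]                              # number of items
item_variances = items.var(axis=0, ddof=1)      # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale

cronbach_alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(round(cronbach_alpha, 2))  # values around 0.7 or higher are usually treated as acceptable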

#2: Data shape

The second matter that you should address early on in your results chapter is data shape. In other words, you need to assess whether the data in your set are symmetrical (i.e. normally distributed) or not, as this will directly impact what type of analyses you can use. For many common inferential tests such as T-tests or ANOVAs (we’ll discuss these a bit later), your data needs to be normally distributed. If it’s not, you’ll need to adjust your strategy and use alternative tests.

To assess the shape of the data, you’ll usually assess a variety of descriptive statistics (such as the mean, median and skewness), which is what we’ll look at next.

Descriptive statistics

Step 5 – Present the descriptive statistics

Now that you’ve laid the foundation by discussing the representativeness of your sample, as well as the reliability of your measures and the shape of your data, you can get started with the actual statistical analysis. The first step is to present the descriptive statistics for your variables.

For scaled data, this usually includes statistics such as:

  • The mean – this is simply the mathematical average of a range of numbers.
  • The median – this is the midpoint in a range of numbers when the numbers are arranged in order.
  • The mode – this is the most commonly repeated number in the data set.
  • Standard deviation – this metric indicates how dispersed a range of numbers is. In other words, how close all the numbers are to the mean (the average).
  • Skewness – this indicates how symmetrical a range of numbers is. In other words, do they tend to cluster into a smooth bell curve shape in the middle of the graph (this is called a normal or parametric distribution), or do they lean to the left or right (this is called a non-normal or non-parametric distribution).
  • Kurtosis – this metric indicates whether the data are heavily or lightly-tailed, relative to the normal distribution. In other words, how peaked or flat the distribution is.

A large table that indicates all the above for multiple variables can be a very effective way to present your data economically. You can also use colour coding to help make the data more easily digestible.
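To illustrate, these statistics for a scaled variable can be produced in a few lines (a sketch assuming pandas; the scores are invented):

import pandas as pd

scores = pd.Series([3, 4, 4, 5, 2, 4, 3, 5, 4, 1])  # invented scale scores

print("Mean:", scores.mean())
print("Median:", scores.median())
print("Mode:", scores.mode().tolist())
print("Standard deviation:", scores.std())  # sample standard deviation by default
print("Skewness:", scores.skew())
print("Kurtosis:", scores.kurtosis())       # excess kurtosis relative to the normal distribution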

For categorical data, where you show the percentage of people who chose or fit into a category, for instance, you can either just plain describe the percentages or numbers of people who responded to something or use graphs and charts (such as bar graphs and pie charts) to present your data in this section of the chapter.

When using figures, make sure that you label them simply and clearly , so that your reader can easily understand them. There’s nothing more frustrating than a graph that’s missing axis labels! Keep in mind that although you’ll be presenting charts and graphs, your text content needs to present a clear narrative that can stand on its own. In other words, don’t rely purely on your figures and tables to convey your key points: highlight the crucial trends and values in the text. Figures and tables should complement the writing, not carry it .

Depending on your research aims, objectives and research questions, you may stop your analysis at this point (i.e. descriptive statistics). However, if your study requires inferential statistics, then it’s time to deep dive into those .

Dive into the inferential statistics

Step 6 – Present the inferential statistics

Inferential statistics are used to make generalisations about a population, whereas descriptive statistics focus purely on the sample. Inferential statistical techniques, broadly speaking, can be broken down into two groups.

First, there are those that compare measurements between groups, such as t-tests (which measure differences between two groups) and ANOVAs (which measure differences between multiple groups). Second, there are techniques that assess the relationships between variables, such as correlation analysis and regression analysis. Within each of these, some tests can be used for normally distributed (parametric) data and some tests are designed specifically for use on non-parametric data.

There are a seemingly endless number of tests that you can use to crunch your data, so it’s easy to run down a rabbit hole and end up with piles of test data. Ultimately, the most important thing is to make sure that you adopt the tests and techniques that allow you to achieve your research objectives and answer your research questions .
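As a small illustration of the second group of techniques, a correlation and a simple regression can be run as follows (a sketch assuming scipy; the variable names and data are invented):

from scipy import stats

# Invented paired measurements for two variables
study_hours = [2, 4, 5, 7, 8, 10]
exam_scores = [55, 60, 66, 70, 78, 85]

# Pearson correlation: strength and direction of the linear relationship
r, p_value = stats.pearsonr(study_hours, exam_scores)
print(f"r = {r:.2f}, p = {p_value:.4f}")

# Simple linear regression: predict exam score from study hours
result = stats.linregress(study_hours, exam_scores)
print(f"score = {result.slope:.2f} * hours + {result.intercept:.2f}")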

In this section of the results chapter, you should try to make use of figures and visual components as effectively as possible. For example, if you present a correlation table, use colour coding to highlight the significance of the correlation values, or scatterplots to visually demonstrate what the trend is. The easier you make it for your reader to digest your findings, the more effectively you’ll be able to make your arguments in the next chapter.

make it easy for your reader to understand your quantitative results

Step 7 – Test your hypotheses

If your study requires it, the next stage is hypothesis testing. A hypothesis is a statement, often indicating a difference between groups or relationship between variables, that can be supported or rejected by a statistical test. However, not all studies will involve hypotheses (again, it depends on the research objectives), so don’t feel like you “must” present and test hypotheses just because you’re undertaking quantitative research.

The basic process for hypothesis testing is as follows:

  • Specify your null hypothesis (for example, “The chemical psilocybin has no effect on time perception”).
  • Specify your alternative hypothesis (e.g., “The chemical psilocybin has an effect on time perception”).
  • Set your significance level (this is usually 0.05)
  • Calculate your statistics and find your p-value (e.g., p=0.01)
  • Draw your conclusions (e.g., “The chemical psilocybin does have an effect on time perception”)

Finally, if the aim of your study is to develop and test a conceptual framework , this is the time to present it, following the testing of your hypotheses. While you don’t need to develop or discuss these findings further in the results chapter, indicating whether the tests (and their p-values) support or reject the hypotheses is crucial.
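To make the process concrete, here is a minimal two-group comparison that follows those steps (a sketch assuming scipy; the groups and measurements are invented):

from scipy import stats

# Invented measurements for a control group and an experimental group
control = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3]
treatment = [13.0, 12.8, 13.4, 12.9, 13.1, 12.7]

alpha = 0.05  # significance level
t_stat, p_value = stats.ttest_ind(control, treatment)

if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject the null hypothesis")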

Step 8 – Provide a chapter summary

To wrap up your results chapter and transition to the discussion chapter, you should provide a brief summary of the key findings . “Brief” is the keyword here – much like the chapter introduction, this shouldn’t be lengthy – a paragraph or two maximum. Highlight the findings most relevant to your research objectives and research questions, and wrap it up.

Some final thoughts, tips and tricks

Now that you’ve got the essentials down, here are a few tips and tricks to make your quantitative results chapter shine:

  • When writing your results chapter, report your findings in the past tense . You’re talking about what you’ve found in your data, not what you are currently looking for or trying to find.
  • Structure your results chapter systematically and sequentially . If you had two experiments where findings from the one generated inputs into the other, report on them in order.
  • Make your own tables and graphs rather than copying and pasting them from statistical analysis programmes like SPSS. Check out the DataIsBeautiful reddit for some inspiration.
  • Once you’re done writing, review your work to make sure that you have provided enough information to answer your research questions , but also that you didn’t include superfluous information.

If you’ve got any questions about writing up the quantitative results chapter, please leave a comment below. If you’d like 1-on-1 assistance with your quantitative analysis and discussion, check out our hands-on coaching service , or book a free consultation with a friendly coach.


Everett Library, Queens University of Charlotte

Articles from Ovid Database

ABNF Articles

  • Journal of Midwifery and Women's Health
  • Link to Nursing Research (journal) This journal includes nursing research and has some quantitative studies.
  • AJN, American Journal of Nursing
  • Increasing Access to Diabetes Education in Rural Alabama Through Telehealth
  • Evaluating the Impact of Smartphones on Nursing Workflow: Lessons Learned
  • Validity of the Montreal Cognitive Assessment Screener in Adolescents and Young Adults With and Without Congenital Heart Disease
  • Pharmacogenetics of Ketamine-Induced Emergence Phenomena
  • Pressure Pain Phenotypes in Women Before Breast Cancer Treatment
  • Efficacy of a Breastfeeding Pain Self-Management Intervention: A Pilot Randomized Controlled Trial
  • Stress and Health in Nursing Students: The Nurse Engagement and Wellness Study
  • The Efficacy and Safety of an RN-Driven Ketamine Protocol for Adjunctive Analgesia During Burn Wound Care
  • Potassium Channel Candidate Genes Predict the Development of Secondary Lymphedema Following Breast Cancer Surgery
  • Social Support Is Inversely Associated With Sleep Disturbance, Inflammation, and Pain Severity in Chronic Low Back Pain.
  • Effect of a Nurse-Led Community Health Worker Intervention on Latent Tuberculosis Medication Completion Among Homeless Adults
  • Poor Sleep Predicts Increased Pain Perception Among Adults With Mild Cognitive Impairment
  • Feasibility, Acceptability, and Preliminary Effects of “Mindful Moms”: A Mindful Physical Activity Intervention for Pregnant Women with Depression
  • Associations Among Nitric Oxide and Enkephalinases With Fibromyalgia Symptoms
  • Prescribed Walking for Glycemic Control and Symptom Management in Patients Without Diabetes Undergoing Chemotherapy
  • Dysmenorrhea Symptom-Based Phenotypes: A Replication and Extension Study
  • Influence of Oxidative Stress-Related Genes on Susceptibility to Fibromyalgia
  • ABNF journal link
  • Cardiovascular Diseases in African-American Women: An Assessment of Awareness
  • Last Updated: Feb 27, 2024 4:29 PM
  • URL: https://library.queens.edu/nursing


How to appraise quantitative research

Volume 21, Issue 4

This article has a correction. Please see:

  • Correction: How to appraise quantitative research - April 01, 2019


  • Xabi Cathala 1,
  • Calvin Moorley 2
  • 1 Institute of Vocational Learning, School of Health and Social Care, London South Bank University, London, UK
  • 2 Nursing Research and Diversity in Care, School of Health and Social Care, London South Bank University, London, UK
  • Correspondence to Mr Xabi Cathala, Institute of Vocational Learning, School of Health and Social Care, London South Bank University, London, UK; cathalax{at}lsbu.ac.uk and Dr Calvin Moorley, Nursing Research and Diversity in Care, School of Health and Social Care, London South Bank University, London SE1 0AA, UK; Moorleyc{at}lsbu.ac.uk

https://doi.org/10.1136/eb-2018-102996


Introduction

Some nurses feel that they lack the necessary skills to read a research paper and to then decide if they should implement the findings into their practice. This is particularly the case when considering the results of quantitative research, which often contains the results of statistical testing. However, nurses have a professional responsibility to critique research to improve their practice, care and patient safety. 1  This article provides a step by step guide on how to critically appraise a quantitative paper.

Title, keywords and the authors

The authors’ names may not mean much, but knowing the following will be helpful:

Their position, for example, academic, researcher or healthcare practitioner.

Their qualification, both professional, for example, a nurse or physiotherapist and academic (eg, degree, masters, doctorate).

This can indicate how the research has been conducted and the authors’ competence on the subject. Basically, do you want to read a paper on quantum physics written by a plumber?

The abstract is a resume of the article and should contain:

Introduction.

Research question/hypothesis.

Methods including sample design, tests used and the statistical analysis (of course! Remember we love numbers).

Main findings.

Conclusion.

The subheadings in the abstract will vary depending on the journal. An abstract should not usually be more than 300 words but this varies depending on specific journal requirements. If the above information is contained in the abstract, it can give you an idea about whether the study is relevant to your area of practice. However, before deciding if the results of a research paper are relevant to your practice, it is important to review the overall quality of the article. This can only be done by reading and critically appraising the entire article.

The introduction

Example: the effect of paracetamol on levels of pain.

My hypothesis is that A has an effect on B, for example, paracetamol has an effect on levels of pain.

My null hypothesis is that A has no effect on B, for example, paracetamol has no effect on pain.

My study will test the null hypothesis and if the null hypothesis is validated then the hypothesis is false (A has no effect on B). This means paracetamol has no effect on the level of pain. If the null hypothesis is rejected then the hypothesis is true (A has an effect on B). This means that paracetamol has an effect on the level of pain.

Background/literature review

The literature review should include reference to recent and relevant research in the area. It should summarise what is already known about the topic and why the research study is needed and state what the study will contribute to new knowledge. 5 The literature review should be up to date, usually 5–8 years, but it will depend on the topic and sometimes it is acceptable to include older (seminal) studies.

Methodology

In quantitative studies, the data analysis varies between studies depending on the type of design used. For example, descriptive, correlative or experimental studies all vary. A descriptive study will describe the pattern of a topic related to one or more variable. 6 A correlational study examines the link (correlation) between two variables 7  and focuses on how a variable will react to a change of another variable. In experimental studies, the researchers manipulate variables looking at outcomes 8  and the sample is commonly assigned into different groups (known as randomisation) to determine the effect (causal) of a condition (independent variable) on a certain outcome. This is a common method used in clinical trials.

There should be sufficient detail provided in the methods section for you to replicate the study (should you want to). To enable you to do this, the following sections are normally included:

Overview and rationale for the methodology.

Participants or sample.

Data collection tools.

Methods of data analysis.

Ethical issues.

Data collection should be clearly explained and the article should discuss how this process was undertaken. Data collection should be systematic, objective, precise, repeatable, valid and reliable. Any tool (eg, a questionnaire) used for data collection should have been piloted (or pretested and/or adjusted) to ensure the quality, validity and reliability of the tool. 9 The participants (the sample) and any randomisation technique used should be identified. The sample size is central in quantitative research, as the findings should be able to be generalised for the wider population. 10 The data analysis can be done manually or more complex analyses performed using computer software sometimes with advice of a statistician. From this analysis, results like mode, mean, median, p value, CI and so on are always presented in a numerical format.

The author(s) should present the results clearly. These may be presented in graphs, charts or tables alongside some text. You should perform your own critique of the data analysis process; just because a paper has been published, it does not mean it is perfect. Your findings may be different from the author’s. Through critical analysis the reader may find an error in the study process that authors have not seen or highlighted. These errors can change the study result or change a study you thought was strong to weak. To help you critique a quantitative research paper, some guidance on understanding statistical terminology is provided in  table 1 .

Table 1: Some basic guidance for understanding statistics

Quantitative studies examine the relationship between variables, and the p value expresses this objectively. 11 If the p value is less than 0.05, the null hypothesis is rejected, the hypothesis is accepted and the study reports a statistically significant difference. If the p value is 0.05 or more, the null hypothesis is retained, the hypothesis is rejected and the study reports no significant difference. As a general rule, a p value of less than 0.05 means the hypothesis is supported, and a p value of 0.05 or more means it is not.

The CI is reported at a confidence level written as a number between 0 and 1 or as a percentage, indicating how much confidence the reader can have in the result. 12 The confidence level is obtained by subtracting the chosen significance level (usually 0.05) from 1: 1−0.05=0.95, or 95%. A result whose 95% CI excludes the null value (for example, a difference of zero) is statistically significant at the 0.05 level, whereas a 95% CI that includes the null value indicates a non-significant result. Together, p values and CIs highlight the confidence and robustness of a result.
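To make the p value and CI concrete, here is a minimal Python sketch (using SciPy and NumPy, with invented pain scores for a paracetamol group and a placebo group) that runs an independent-samples t-test and computes a 95% confidence interval for the difference in means:

    import numpy as np
    from scipy import stats

    # Invented 0-10 pain scores for two groups, for illustration only.
    paracetamol = np.array([3, 4, 2, 5, 3, 4, 3, 2, 4, 3])
    placebo = np.array([6, 5, 7, 6, 5, 6, 7, 5, 6, 6])

    t_stat, p_value = stats.ttest_ind(paracetamol, placebo)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> reject the null hypothesis

    # 95% confidence interval for the difference in means (pooled-variance approximation).
    diff = paracetamol.mean() - placebo.mean()
    n1, n2 = len(paracetamol), len(placebo)
    pooled_var = ((n1 - 1) * paracetamol.var(ddof=1) + (n2 - 1) * placebo.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(pooled_var * (1 / n1 + 1 / n2))
    t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)  # 0.975 because 1 - 0.05/2
    print(f"95% CI for the difference: {diff - t_crit * se:.2f} to {diff + t_crit * se:.2f}")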

Discussion, recommendations and conclusion

The final section of the paper is where the authors discuss their results and link them to other literature in the area (some of which may have been included in the literature review at the start of the paper). This reminds the reader of what is already known, what the study has found and what new information it adds. The discussion should demonstrate how the authors interpreted their results and how they contribute to new knowledge in the area. Implications for practice and future research should also be highlighted in this section of the paper.

A few other areas you may find helpful are:

Limitations of the study.

Conflicts of interest.

Table 2 provides a useful tool to help you apply the learning in this paper to the critiquing of quantitative research papers.

Table 2: Quantitative paper appraisal checklist

1. Nursing and Midwifery Council, 2015. The code: standard of conduct, performance and ethics for nurses and midwives. https://www.nmc.org.uk/globalassets/sitedocuments/nmc-publications/nmc-code.pdf (accessed 21 Aug 2018).

Competing interests None declared.

Patient consent Not required.

Provenance and peer review Commissioned; internally peer reviewed.

Correction notice This article has been updated since its original publication to update p values from 0.5 to 0.05 throughout.



Unit 6: Qual vs Quant.

27 Quantitative Methods in Communication Research


In communication research, both quantitative and qualitative methods are essential for understanding different aspects of communication processes and effects. Here's how quantitative methods can be applied:

  • Collecting data on communication patterns, relationship satisfaction, or conflict resolution strategies among different groups.
  • Collecting numerical data on audience demographics, media consumption habits, or attitudes towards specific communication messages.
  • Testing hypotheses about the effects of specific communication behaviors (e.g., eye contact, tone of voice) on relationship outcomes.
  • Testing the effects of different communication strategies or messages on audience behavior or perception.
  • Quantifying the frequency and types of communication behaviors in recorded interactions (e.g., supportive vs. critical comments)
  • Quantifying the frequency of certain themes, words, or images in media content to identify patterns or trends.
  • Statistical Analysis: Using statistical tools to analyze data from surveys or experiments, such as correlation or regression analysis, to explore relationships between variables.

Communication Research in Real Life Copyright © 2023 by Kate Magsamen-Conrad. All Rights Reserved.



Finding Qualitative & Quantitative Research Articles


About MEDLINE Complete

MEDLINE Complete  provides authoritative medical information on general health & medicine, pharmacology, neurology, molecular biology, genetics and genomics, histology, microbiology and many other subject domains.  MEDLINE Complete  uses MeSH (Medical Subject Headings) indexing with tree, tree hierarchy, subheadings and explosion capabilities to search citations from over 5,600 current biomedical journals. 

With coverage dating back to 1809 and full-text back to 1865,  MEDLINE Complete  is the definitive research tool for medical literature.  MEDLINE Complete  is an unfiltered database that contains over 5,000 full-text journals related to the biomedical and health fields. While the database itself is unfiltered, you can still use it to find filtered, evidence-based practice resources, including systematic reviews.

What's the Difference Between PubMed & Medline? 

PubMed vs. MEDLINE

  • Both databases search a similar body of medical literature (mostly medical journals) compiled by the National Library of Medicine (NLM). PubMed includes "future MEDLINE" articles that have not yet been fully indexed (assigned detailed MeSH/Medical Subject Headings), as well as a small amount of additional content (from life sciences journals and selected medical books).
  • All MEDLINE content comes from medical journals, and all content includes detailed indexing according to carefully controlled and applied Medical Subject Headings (MeSH), ensuring that all research on a specific topic is accessible under standardized terms.

MEDLINE: Qualitative Studies

Tips for locating qualitative research in MEDLINE

The term 'qualitative research' is indexed as  "Qualitative Research"  or  "Nursing Methodology Research"  in Medline. 

  • Qualitative Research  [research that derives data from observation, interviews, or verbal interactions and focuses on the meanings and interpretations of the participants. Year introduced: 2003]
  • Interviews as Topic  [conversations with an individual or individuals held in order to obtain information about their background and other personal biographical data, their attitudes and opinions, etc. It includes school admission or job interviews. Year introduced: 2008 (1980)]
  • Focus Groups  [a method of data collection and a qualitative research tool in which a small group of individuals are brought together and allowed to interact in a discussion of their opinions about topics, issues, or questions. Year introduced: 1993]
  • Nursing Methodology Research  [research carried out by nurses concerning techniques and methods to implement projects and to document information, including methods of interviewing patients, collecting data, and forming inferences. The concept includes exploration of methodological issues such as human subjectivity and human experience. Year introduced: 1991(1989)]
  • Anecdotes as Topic  [brief accounts or narratives of an incident or event. Year introduced: 2008(1963)]
  • Narration  [the act, process, or an instance of narrating, i.e., telling a story. In the context of MEDICINE or ETHICS, narration includes relating the particular and the personal in the life story of an individual. Year introduced: 2003]
  • Video Recording  [the storing or preserving of video signals for television to be played back later via a transmitter or receiver. Recordings may be made on magnetic tape or discs (VIDEODISC RECORDING). Year introduced: 1984]
  • Tape Recording  [recording of information on magnetic or punched paper tape. Year introduced: 1967(1964)]
  • Personal Narratives as Topic  [works about accounts of individual experience in relation to a particular field or of participation in related activities. Year introduced: 2013]

NOTE: Indexing in MEDLINE is inconsistent. For example, grounded theory articles are not always indexed under qualitative research, so you may also need to run a text-word search for additional terms such as "grounded theory", "action research", ethnograph*, etc.

Additional MeSH terms that may be applicable to your topic include:  Attitude of Health Personnel ;  Attitude to Death ;  Attitude to Health ; or  Health Knowledge, Attitudes, Practice.

  • Interview  [work consisting of a conversation with an individual regarding his or her background and other personal and professional details, opinions on specific subjects posed by the interviewer, etc. Year introduced: 2008(1993)]
  • Diaries  [works consisting of records, usually private, of writers' experiences, observations, feelings, attitudes, etc. They may also be works marked in calendar order in which to note appointments and the like. (From Random House Unabridged Dictionary, 2d ed) Year introduced: 2008(1997)]
  • Anecdotes  [works consisting of brief accounts or narratives of incidents or events. Year introduced: 2008(1999)]
  • Personal Narratives  [works consisting of accounts of individual experience in relation to a particular field or of participation in related activities. Year introduced: 2013]
  • Use Text Words  to find articles missed by MeSH terms
  • Select  Topic-Specific Queries  from the PubMed home page and then  Health Services Research Queries .
  • This page provides a filter for specialized PubMed searches on healthcare quality and costs.
  • Enter your search topic and select Qualitative Research under Category  
  • Use a filter for:  Clinical Queries--qualitative (in Medline via Ovid)
  • PubMed : [copy and paste the following filter into PubMed and  combine  your subject terms with this search filter using AND:]

(((“semi-structured”[TIAB] OR semistructured[TIAB] OR unstructured[TIAB] OR informal[TIAB] OR “in-depth”[TIAB] OR indepth[TIAB] OR “face-to-face”[TIAB] OR structured[TIAB] OR guide[TIAB] OR guides[TIAB]) AND (interview*[TIAB] OR discussion*[TIAB] OR questionnaire*[TIAB])) OR (“focus group”[TIAB] OR “focus groups”[TIAB] OR qualitative[TIAB] OR ethnograph*[TIAB] OR fieldwork[TIAB] OR “field work”[TIAB] OR “key informant”[TIAB])) OR “interviews as topic”[Mesh] OR “focus groups”[Mesh] OR narration[Mesh] OR qualitative research[Mesh] OR "personal narratives as topic"[Mesh]

Quantitative Research

Locating quantitative research in MEDLINE Complete

There is no easy way to specify quantitative studies in  Medline  and  CINAHL .

Check the box in the Publication Type menu for Randomized Controlled Trials (RCTs). RCTs are often quantitative, or at least have a quantitative aspect. Another trick, in both Medline and CINAHL, is to select Charts from the Image types at the bottom of the Search screen, although this is imperfect: some qualitative studies also use charts and graphs (and some studies are mixed methods), and some quantitative studies won't have charts or graphs listed as images.

The Cochrane Database of Systematic Reviews is included in Medline. You can also use Medline to find unfiltered resources, including randomized controlled trials, case studies, and other primary research studies.

On this page you will learn how to limit your results in MEDLINE to:

  • systematic reviews
  • randomized controlled trials
  • cohort studies
  • case studies
  • other document types.

1. Set Up Your Search: 

  • Once you are in the database, use the search boxes to enter your keywords. For example, in the  first search box , enter:

Neonatal OR NICU

Note:  You can use  OR  to link together your synonyms, or related words, in a search box, allowing the database to search more broadly.

  • In the  second search box  enter:

Handwashing OR "Hand Washing" OR "Hand Rubs" OR "Hand Disinfection"

Note:  Putting quotation marks around phrases tells the database to search for these words as a phrase and not as individual words.

"Infection Control" OR "Cross Infection"


Systematic reviews

You can find a number of systematic reviews in MEDLINE. Once you have set up your search, here is how you can limit your results to only systematic reviews:

  • Scroll down the page below the search boxes until you find the  Subject Subset  box.


  • Click on the  Search  button to run your search.

Randomized controlled trials

Randomized controlled trials are the studies commonly used to support systematic reviews and are a high level of evidence.

Once you have set up your search, here is how you can limit your results to only randomized controlled trials:

  • Scroll down the page below the search boxes until you see the  Publication Type  box.


  • Then click on the  Search  button to run your search.

Cohort studies

Cohort studies are a type of longitudinal, or observational, study that analyzes risk factors by following groups that share a common characteristic or experience over time. Because these studies have a long-term component, they provide better-quality evidence than shorter studies. There are also fewer of them, and they are harder to find.

Here is an example of a search for a cohort study in MEDLINE:

"Infection Control"

"Cohort Studies"


Case studies

A case study, or case report, is a research method involving a detailed investigation of a single individual or a single organized group. Case studies may be prospective (in which criteria are established and cases fitting the criteria are included as they become available) or retrospective (in which criteria are established and cases are selected from historical records for inclusion in the study).

Once you have set up your search, here is how you can limit your results to only case studies:

  • Scroll down the page below the search boxes until you find the  Publication Type  box.


Other filters in MEDLINE

MEDLINE with Full Text offers a number of additional filters or limiters that can help you find specific types of studies.

Scroll down the page below the search boxes to locate these filters or limiters. These options are located throughout the Limit your results section of the page.

Clinical Queries

This filter, in ADVANCED search, can be used to find articles that are clinically sound. The nine options include:

  • Clinical Prediction
  • Qualitative
  • Causation (Etiology)

To get the most results, select all three sub-divisions:  High Sensitivity ,  High Specificity , and  Best Balance .

Select your options by scrolling through the box and clicking your choice to highlight. Hold down the  Ctrl key  to select multiple options.


Publication Types

This limiter box allows you to select specific article types. We've already shown how to use this limiter for randomized controlled trials and case reports; other useful publication types for evidence-based practice include Clinical Trial, Comparative Study, Meta Analysis, Practice Guideline, and Validation Studies.

Select an option by finding it in the list and clicking on it (it will then be highlighted).


Medline EBM Reviews limiter (available in Basic search menu) 

This limiter limits your search to only the following evidence-based medicine resources:

  • The Cochrane Database of Systematic Reviews
  • ACP Journal Club
  • Clinical Evidence
  • Evidence-based Mental Health
  • Evidence-based Nursing
  • Evidence report/Technology assessment

Click in the check box below  EBM Reviews  to select this option.


Note:  With the EBM Reviews limiter you will need to evaluate your results to determine what type of evidence each article contains.


Except where otherwise noted, this work is licensed under CC BY-SA 4.0 and CC BY-NC 4.0 Licenses .

Quantitative Data Analysis: Everything You Need to Know

11 min read


Does the thought of quantitative data analysis bring back the horrors of math classes? We get it.

But conducting quantitative data analysis doesn’t have to be hard with the right tools. Want to learn how to turn raw numbers into actionable insights on how to improve your product?

In this article, we explore what quantitative data analysis is, the difference between quantitative and qualitative data analysis, and statistical methods you can apply to your data. We also walk you through the steps you can follow to analyze quantitative information, and how Userpilot can help you streamline the product analytics process. Let’s get started.

  • Quantitative data analysis is the process of using statistical methods to define, summarize, and contextualize numerical data.
  • Quantitative analysis is different from a qualitative one. The first deals with numerical data and focuses on answering “what,” “when,” and “where.” However, a qualitative analysis relies on text, graphics, or videos and explores “why” and “how” events occur.
  • Pros of quantitative data analysis include objectivity, reliability, ease of comparison, and scalability.
  • Cons of quantitative metrics include the data’s limited context and inflexibility, and the need for large sample sizes to get statistical significance.
  • The methods for analyzing quantitative data are descriptive and inferential statistics.
  • Choosing the right analysis method depends on the type of data collected and the specific research questions or hypotheses.
  • These are the steps to conduct quantitative data analysis: 1. Defining goals and KPIs . 2. Collecting and cleaning data. 3. Visualizing the data. 4. Identifying patterns . 5. Sharing insights. 6. Acting on findings to improve decision-making.
  • With Userpilot , you can auto-capture in-app user interactions and build analytics dashboards . This tool also lets you conduct A/B and multivariate tests, and funnel and cohort analyses .
  • Gather and visualize all your product analytics in one place with Userpilot. Get a demo .


What is quantitative data analysis?

Quantitative data analysis is about applying statistical analysis methods to define, summarize, and contextualize numerical data. In short, it’s about turning raw numbers and data into actionable insights.

The analysis will vary depending on the research questions and the collected data (more on this below).

Quantitative vs qualitative data analysis

The main difference between these forms of analysis lies in the collected data. Quantitative data is numerical or easily quantifiable. For example, the answers to a customer satisfaction score (CSAT) survey are quantitative since you can count the number of people who answered “very satisfied”.

Qualitative feedback , on the other hand, analyzes information that requires interpretation. For instance, evaluating graphics, videos, text-based answers, or impressions.

Another difference between quantitative and qualitative analysis is the questions each seeks to answer. For instance, quantitative data analysis primarily answers what happened, when it happened, and where it happened. However, qualitative data analysis answers why and how an event occurred.

Quantitative data analysis also looks into identifying patterns , drivers, and metrics for different groups. However, qualitative analysis digs deeper into the sample dataset to understand underlying motivations and thinking processes.

Pros of quantitative data analysis

Quantitative or data-driven analysis has advantages such as:

  • Objectivity and reliability. Since quantitative analysis is based on numerical data, this reduces biases and allows for more objective conclusions. Also, by relying on statistics, this method ensures the results are consistent and can be replicated by others, making the findings more reliable.
  • Easy comparison. Quantitative data is easily comparable because you can identify trends , patterns, correlations, and differences within the same group and KPIs over time. But also, you can compare metrics in different scales by normalizing the data, e.g., bringing ratios and percentages into the same scale for comparison.
  • Scalability. Quantitative analysis can handle large volumes of data efficiently, making it suitable for studies involving large populations or datasets. This makes this data analysis method scalable. Plus, researchers can use quantitative analysis to generalize their findings to broader populations.

Cons of quantitative data analysis

These are common disadvantages of data-driven analytics :

  • Limited context. Since quantitative data looks at the numbers, it often strips away the data from the context, which can show the underlying reasons behind certain trends. This limitation can lead to a superficial understanding of complex issues, as you often miss the nuances and user motivations behind the data points.
  • Inflexibility. When conducting quantitative research, you don’t have room to improvise based on the findings. You need to have predefined hypotheses, follow scientific methods, and select data collection instruments. This makes the process less adaptable to new or unexpected findings.
  • Large sample sizes necessary. You need to use large sample sizes to achieve statistical significance and reliable results when doing quantitative analysis. Depending on the type of study you’re conducting, gathering such extensive data can be resource-intensive, time-consuming, and costly.

Quantitative data analysis methods

There are two statistical methods for reviewing quantitative data and user analytics . However, before exploring these in-depth, let’s refresh these key concepts:

  • Population. This is the entire group of individuals or entities that are relevant to the research.
  • Sample. The sample is a subset of the population that is actually selected for the research since it is often impractical or impossible to study the entire population.
  • Statistical significance. The chances that the results gathered after your analysis are realistic and not due to random chance.

Here are methods for analyzing quantitative data:

Descriptive statistics

Descriptive statistics, as the name implies, describe your data and help you understand your sample in more depth. It doesn’t make inferences about the entire population but only focuses on the details of your specific sample.

Descriptive statistics usually include measures like the mean, median, percentage, frequency, skewness, and mode.
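For example, a minimal Python sketch of descriptive statistics, using the standard-library statistics module on an invented list of weekly session counts:

    import statistics

    # Invented sample: weekly session counts for ten users.
    sessions = [3, 5, 2, 8, 5, 7, 4, 5, 6, 3]

    print("mean:", statistics.mean(sessions))
    print("median:", statistics.median(sessions))
    print("mode:", statistics.mode(sessions))
    print("standard deviation:", round(statistics.stdev(sessions), 2))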

Inferential statistics

Inferential statistics aim to make predictions and test hypotheses about the real-world population based on your sample data.

Here, you can use methods such as a T-test, ANOVA, regression analysis, and correlation analysis.

Let’s take a look at this example. Through descriptive statistics, you identify that users under the age of 25 are more likely to skip your onboarding. You’ll need to apply inferential statistics to determine if the result is statistically significant and applicable to your entire ’25 or younger’ population.
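A hedged sketch of that inferential step, assuming you have exported skip/complete counts per age group, might use a chi-square test of independence in Python (the counts below are invented):

    from scipy.stats import chi2_contingency

    # Invented counts: rows are age groups, columns are [skipped onboarding, completed onboarding].
    observed = [
        [120, 380],  # users aged 25 or younger
        [80, 420],   # users over 25
    ]

    chi2, p_value, dof, expected = chi2_contingency(observed)
    print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
    # p < 0.05 would suggest the difference in skip rates is unlikely to be due to chance alone.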

How to choose the right method for your quantitative data analysis

The type of data that you collect and the research questions that you want to answer will impact which quantitative data analysis method you choose. Here’s how to choose the right method:

Determine your data type

Before choosing the quantitative data analysis method, you need to identify which group your data belongs to:

  • Nominal —categories with no specific order, e.g., gender, age, or preferred device.
  • Ordinal —categories with a specific order, but the intervals between them aren’t equal, e.g., customer satisfaction ratings .
  • Interval —categories with an order and equal intervals, but no true zero point, e.g., temperature (where zero doesn’t mean “no temperature”).
  • Ratio —categories with a specific order, equal intervals, and a true zero point, e.g., number of sessions per user .

Applying any statistical method to all data types can lead to meaningless results. Instead, identify which statistical analysis method supports your collected data types.
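For instance, a short pandas sketch (with made-up user records) shows how you might encode these measurement levels so that the ordinal column keeps its order:

    import pandas as pd

    # Made-up user records illustrating the four measurement levels.
    df = pd.DataFrame({
        "preferred_device": ["mobile", "desktop", "mobile", "tablet"],   # nominal
        "csat_rating": ["low", "high", "medium", "high"],                # ordinal
        "signup_temperature_c": [21.5, 18.0, 25.3, 19.8],                # interval
        "sessions_per_week": [3, 7, 0, 5],                               # ratio
    })

    # Tell pandas the order of the ordinal categories.
    df["csat_rating"] = pd.Categorical(
        df["csat_rating"], categories=["low", "medium", "high"], ordered=True
    )

    print(df.dtypes)
    print(df["csat_rating"].min(), "to", df["csat_rating"].max())  # ordering is respected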

Consider your research questions

The specific research questions you want to answer, and your hypothesis (if you have one) impact the analysis method you choose. This is because they define the type of data you’ll collect and the relationships you’re investigating.

For instance, if you want to understand sample specifics, descriptive statistics—such as tracking NPS —will work. However, if you want to determine if other variables affect the NPS, you’ll need to conduct an inferential analysis.

The overarching questions vary in both of the previous examples. For calculating the NPS, your internal research question might be, “Where do we stand in customer loyalty ?” However, if you’re doing inferential analysis, you may ask, “How do various factors, such as demographics, affect NPS?”

6 steps to do quantitative data analysis and extract meaningful insights

Here’s how to conduct quantitative analysis and extract customer insights :

1. Set goals for your analysis

Before diving into data collection, you need to define clear goals for your analysis as these will guide the process. This is because your objectives determine what to look for and where to find data. These goals should also come with key performance indicators (KPIs) to determine how you’ll measure success.

For example, imagine your goal is to increase user engagement. So, relevant KPIs include product engagement score , feature usage rate , user retention rate, or other relevant product engagement metrics .

2. Collect quantitative data

Once you’ve defined your goals, you need to gather the data you’ll analyze. Quantitative data can come from multiple sources, including user surveys such as NPS, CSAT, and CES, website and application analytics , transaction records, and studies or whitepapers.

Remember: This data should help you reach your goals. So, if you want to increase user engagement , you may need to gather data from a mix of sources.

For instance, product analytics tools can provide insights into how users interact with your tool, click on buttons, or change text. Surveys, on the other hand, can capture user satisfaction levels . Collecting a broad range of data makes your analysis more robust and comprehensive.

Raw event auto-tracking in Userpilot

3. Clean and visualize your data

Raw data is often messy and contains duplicates, outliers, or missing values that can skew your analysis. Before making any calculations, clean the data by removing these anomalies or outliers to ensure accurate results.

Once cleaned, turn it into visual data by using different types of charts , graphs, or heatmaps . Visualizations and data analytics charts make it easier to spot trends, patterns, and anomalies. If you’re using Userpilot, you can choose your preferred visualizations and organize your dashboard to your liking.
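As an illustration, here is a small pandas sketch (with an invented event export) that removes duplicates and missing values, then filters extreme outliers with the interquartile-range rule of thumb:

    import pandas as pd

    # Invented raw export containing a duplicate row, a missing value and an extreme outlier.
    raw = pd.DataFrame({
        "user_id": [1, 2, 2, 3, 4, 5],
        "feature_clicks": [12, 7, 7, None, 480, 9],
    })

    clean = (
        raw.drop_duplicates()                   # remove duplicate rows
           .dropna(subset=["feature_clicks"])   # drop rows with missing values
    )

    # Filter outliers using the 1.5 x IQR rule of thumb.
    q1, q3 = clean["feature_clicks"].quantile([0.25, 0.75])
    iqr = q3 - q1
    clean = clean[clean["feature_clicks"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]

    print(clean)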

4. Identify patterns and trends

When looking at your dashboards, identify recurring themes, unusual spikes, or consistent declines that might indicate data analytics trends or potential issues.

Picture this: You notice a consistent increase in feature usage whenever you run seasonal marketing campaigns . So, you segment the data based on different promotional strategies. There, you discover that users exposed to email marketing campaigns have a 30% higher engagement rate than those reached through social media ads.

In this example, the pattern suggests that email promotions are more effective in driving feature usage.
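A quick way to check a pattern like this outside any particular tool is a pandas group-by on exported user data (the channel labels and numbers below are invented):

    import pandas as pd

    # Invented export: which campaign channel reached each user and whether they used the feature.
    df = pd.DataFrame({
        "channel": ["email", "email", "email", "email", "social", "social", "social", "social"],
        "used_feature": [1, 1, 0, 1, 0, 1, 0, 0],
    })

    engagement = df.groupby("channel")["used_feature"].mean().mul(100).round(1)
    print(engagement)  # engagement rate (%) per promotional channel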

If you’re a Userpilot user, you can conduct a trend analysis by tracking how your users perform certain events.

Trend analysis report in Userpilot

5. Share valuable insights with key stakeholders

Once you’ve discovered meaningful insights, you have to communicate them to your organization’s key stakeholders. Do this by turning your data into a shareable analysis report , one-pager, presentation, or email with clear and actionable next steps.

Your goal at this stage is for others to view and understand the data easily so they can use the insights to make data-led decisions.

Following the previous example, let’s say you’ve found that email campaigns significantly boost feature usage. Your email to other stakeholders should strongly recommend increasing the frequency of these campaigns and adding the supporting data points.

Take a look at how easy it is to share custom dashboards you built in Userpilot with others via email:

6. Act on the insights

Data analysis is only valuable if it leads to actionable steps that improve your product or service. So, make sure to act upon insights by assigning tasks to the right persons.

For example, after analyzing user onboarding data, you may find that users who completed the onboarding checklist were 3x more likely to become paying customers ( like Sked Social did! ).

Now that you have actual data on the checklist’s impact on conversions, you can work on improving it, such as simplifying its steps, adding interactive features, and launching an A/B test to experiment with different versions.

How can Userpilot help with analyzing quantitative data

As you’ve seen throughout this article, using a product analytics tool can simplify your data analysis and help you get insights faster. Here are different ways in which Userpilot can help:

Automatically capture quantitative data

Thanks to Userpilot’s new auto-capture feature, you can automatically track every time your users click, write a text, or fill out a form in your app—no engineers or manual tagging required!

Our customer analytics platform lets you use this data to build segments, trigger personalized in-app events and experiences, or launch surveys.

If you don’t want to auto-capture raw data, you can turn this functionality off in your settings, as seen below:

Auto-capture raw data settings in Userpilot

Monitor key metrics with customizable dashboards for real-time insights

Userpilot comes with template analytics dashboards , such as new user activation dashboards or customer engagement dashboards . However, you can create custom dashboards and reports to keep track of metrics that are relevant to your business in real time.

For instance, you could build a customer retention analytics dashboard and include all metrics that you find relevant, such as customer stickiness , NPS, or last accessed date.

Analyze experiment data with A/B and multivariate tests

Userpilot lets you conduct A/B and multivariate tests , either by following a controlled or a head-to-head approach. You can track the results on a dashboard.

For example, let’s say you want to test a variation of your onboarding flow to determine which leads to higher user activation .

You can go to Userpilot’s Flows tab and click on Experiments. There, you’ll be able to select the type of test you want to run, for instance, a controlled A/B test , build a new flow, test it, and get the results.
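If you also want to double-check an experiment's result outside the dashboard, a two-proportion z-test on the exported counts is one option. The sketch below uses statsmodels and invented activation counts; it is not part of Userpilot's API:

    from statsmodels.stats.proportion import proportions_ztest

    # Invented results: activations out of 1,000 exposed users per onboarding-flow variant.
    activations = [280, 230]   # [variant B (new flow), variant A (original flow)]
    exposed = [1000, 1000]

    z_stat, p_value = proportions_ztest(activations, exposed)
    print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
    # p < 0.05 would suggest the difference in activation rates is statistically significant.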

Creating new experiments for A/B and multivariate testing in Userpilot

Use quantitative funnel analysis to increase conversion rates

With Userpilot, you can track your customers’ journey as they complete actions and move through the funnel. Funnel analytics give you insights into your conversion rates and conversion times between two events, helping you identify areas for improvement.

Imagine you want to analyze your free-to-paid conversions and the differences between devices. Just by looking at the graphic, you can draw some insights:

  • There’s a significant drop-off between steps one and two, and two and three, indicating potential user friction .
  • Users on desktops convert at higher rates than those on mobile or unspecified devices.
  • Your average freemium conversion time is almost three days.
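For reference, computing step-to-step conversion rates from raw funnel counts is straightforward; the sketch below uses invented step names and counts, not actual Userpilot output:

    # Invented funnel counts: users reaching each step.
    funnel = {
        "signed_up": 5000,
        "activated_key_feature": 2100,
        "started_trial": 900,
        "converted_to_paid": 310,
    }

    steps = list(funnel)
    for prev, curr in zip(steps, steps[1:]):
        rate = funnel[curr] / funnel[prev] * 100
        print(f"{prev} -> {curr}: {rate:.1f}% conversion")

    print(f"overall conversion: {funnel[steps[-1]] / funnel[steps[0]] * 100:.1f}%")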

Funnel analysis view in Userpilot

Leverage cohort analysis to optimize retention

Another Userpilot functionality that can help you analyze quantitative data is cohort analysis . This powerful tool lets you group users based on shared characteristics or experiences, allowing you to analyze their behavior over time and identify trends, patterns, and the long-term impact of changes on user behavior.

For example, let’s say you recently released a feature and want to measure its impact on user retention. Via a cohort analysis, you can group users who started using your product after the update and compare their retention rates to previous cohorts.

You can do this in Userpilot by creating segments and then tracking user segments ‘ retention rates over time.
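Outside of any particular tool, the same idea can be sketched with pandas: group users into signup cohorts and compute the share still active N months later. The event data below is invented for illustration:

    import pandas as pd

    # Invented activity log: one row per (user, active month), with each user's signup month.
    events = pd.DataFrame({
        "user_id":        [1, 1, 1, 2, 2, 3, 3, 3, 4, 4],
        "signup_month":   ["2024-01"] * 5 + ["2024-02"] * 5,
        "activity_month": ["2024-01", "2024-02", "2024-03", "2024-01", "2024-02",
                           "2024-02", "2024-03", "2024-04", "2024-02", "2024-03"],
    })

    def to_month_ordinal(col):
        # Convert "YYYY-MM" strings to a monotonically increasing month number.
        return pd.to_datetime(col).dt.to_period("M").apply(lambda p: p.ordinal)

    events["month_offset"] = (
        to_month_ordinal(events["activity_month"]) - to_month_ordinal(events["signup_month"])
    )

    cohort_size = events.groupby("signup_month")["user_id"].nunique()
    active = events.groupby(["signup_month", "month_offset"])["user_id"].nunique()
    retention = active.unstack(fill_value=0).div(cohort_size, axis=0).round(2)
    print(retention)  # rows: cohorts, columns: months since signup, values: retention rate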

Retention analysis example in Userpilot

Check how many users adopted a feature with a retention table

In Userpilot, you can use retention tables to stay on top of feature adoption . This means you can track how many users continue to use a feature over time and which features are most valuable to your users. The video below shows how to choose the features or events you want to analyze in Userpilot.

As you’ve seen, to conduct quantitative analysis, you first need to identify your business and research goals. Then, collect, clean, and visualize the data to spot trends and patterns. Lastly, analyze the data, share it with stakeholders, and act upon insights to build better products and drive customer satisfaction.

To stay on top of your KPIs, you need a product analytics tool. With Userpilot, you can automate data capture, analyze product analytics, and view results in shareable dashboards. Want to try it for yourself? Get a demo .


Quantitative and Qualitative Research


What is Quantitative Research?


Quantitative methodology is the dominant research framework in the social sciences. It refers to a set of strategies, techniques and assumptions used to study psychological, social and economic processes through the exploration of numeric patterns . Quantitative research gathers a range of numeric data. Some of the numeric data is intrinsically quantitative (e.g. personal income), while in other cases the numeric structure is  imposed (e.g. ‘On a scale from 1 to 10, how depressed did you feel last week?’). The collection of quantitative information allows researchers to conduct simple to extremely sophisticated statistical analyses that aggregate the data (e.g. averages, percentages), show relationships among the data (e.g. ‘Students with lower grade point averages tend to score lower on a depression scale’) or compare across aggregated data (e.g. the USA has a higher gross domestic product than Spain). Quantitative research includes methodologies such as questionnaires, structured observations or experiments and stands in contrast to qualitative research. Qualitative research involves the collection and analysis of narratives and/or open-ended observations through methodologies such as interviews, focus groups or ethnographies.

Coghlan, D., & Brydon-Miller, M. (2014). The SAGE encyclopedia of action research (Vols. 1–2). London: SAGE Publications Ltd. doi: 10.4135/9781446294406

What is the purpose of quantitative research?

The purpose of quantitative research is to generate knowledge and create understanding about the social world. Quantitative research is used by social scientists, including communication researchers, to observe phenomena or occurrences affecting individuals. Social scientists are concerned with the study of people. Quantitative research is a way to learn about a particular group of people, known as a sample population. Using scientific inquiry, quantitative research relies on data that are observed or measured to examine questions about the sample population.

Allen, M. (2017). The SAGE encyclopedia of communication research methods (Vols. 1–4). Thousand Oaks, CA: SAGE Publications, Inc. doi: 10.4135/9781483381411

How do I know if the study is a quantitative design?  What type of quantitative study is it?

Quantitative Research Designs: Descriptive non-experimental, Quasi-experimental or Experimental?

Studies do not always explicitly state what kind of research design is being used.  You will need to know how to decipher which design type is used.  The following video will help you determine the quantitative design type.


Biological Literature: How do I find a Quantitative article?


Finding a Quantitative Article

You can find quantitative articles by searching with methodology terms as keywords. To find a quantitative study, possible keywords include the type of study, data analysis type, or terminology used to describe the results.

Example quantitative keywords

adapted from:  Walden University. Q. How do I find a quantitative article? http://academicanswers.waldenu.edu/faq/72857

Sample Search using OneSearch Advanced Search Options

You can use our library's databases to search for these kinds of research studies:  

  • nutrition AND qualitative
  • nutrition AND quantitative
  • dieting AND survey  
  • marijuana AND controlled trial
  • In ProQuest's PsycARTICLES, for example, there is a "Methodology" box (scroll down a little to see it).  Qualitative and quantitative are both options there, among many others.  
  • Limiting your search to "scholarly" or "peer-reviewed" journals will also help.

adapted from: Richard G. Trefry Library. Q. How can I find a qualitative or quantitative research article?  apus.libanswers.com/faq/2257

Search Tips

Connecting the alternative terms with OR tells the database to search for any of these terms.

Connecting the alternative terms with AND tells the database to search for ALL those terms.

Using the asterisk (*) truncates the search.  The database will search for the part of the word you typed before the asterisk, along with any possible endings of the word. Using statistic* tells the database to search for statistics, statistical, etc.

Some methodologies are rarely used for certain research topics. You may need to broaden your search topic to find a study that uses your methodology.

Many articles and dissertations will include methodology terms in the abstract or title. To make sure that you have an example of your methodology, be sure to look at the  methodology section  in the full text. This will provide detailed information about the methodology used.


  • Open access
  • Published: 22 July 2024

Neural general circulation models for weather and climate

  • Dmitrii Kochkov   ORCID: orcid.org/0000-0003-3846-4911 1   na1 ,
  • Janni Yuval   ORCID: orcid.org/0000-0001-7519-0118 1   na1 ,
  • Ian Langmore 1   na1 ,
  • Peter Norgaard 1   na1 ,
  • Jamie Smith 1   na1 ,
  • Griffin Mooers 1 ,
  • Milan Klöwer 2 ,
  • James Lottes 1 ,
  • Stephan Rasp 1 ,
  • Peter Düben   ORCID: orcid.org/0000-0002-4610-3326 3 ,
  • Sam Hatfield 3 ,
  • Peter Battaglia 4 ,
  • Alvaro Sanchez-Gonzalez 4 ,
  • Matthew Willson   ORCID: orcid.org/0000-0002-8730-1927 4 ,
  • Michael P. Brenner 1 , 5 &
  • Stephan Hoyer   ORCID: orcid.org/0000-0002-5207-0380 1   na1  

Nature (2024)


  • Atmospheric dynamics
  • Climate and Earth system modelling
  • Computational science

General circulation models (GCMs) are the foundation of weather and climate prediction 1 , 2 . GCMs are physics-based simulators that combine a numerical solver for large-scale dynamics with tuned representations for small-scale processes such as cloud formation. Recently, machine-learning models trained on reanalysis data have achieved comparable or better skill than GCMs for deterministic weather forecasting 3 , 4 . However, these models have not demonstrated improved ensemble forecasts, or shown sufficient stability for long-term weather and climate simulations. Here we present a GCM that combines a differentiable solver for atmospheric dynamics with machine-learning components and show that it can generate forecasts of deterministic weather, ensemble weather and climate on par with the best machine-learning and physics-based methods. NeuralGCM is competitive with machine-learning models for one- to ten-day forecasts, and with the European Centre for Medium-Range Weather Forecasts ensemble prediction for one- to fifteen-day forecasts. With prescribed sea surface temperature, NeuralGCM can accurately track climate metrics for multiple decades, and climate forecasts with 140-kilometre resolution show emergent phenomena such as realistic frequency and trajectories of tropical cyclones. For both weather and climate, our approach offers orders of magnitude computational savings over conventional GCMs, although our model does not extrapolate to substantially different future climates. Our results show that end-to-end deep learning is compatible with tasks performed by conventional GCMs and can enhance the large-scale physical simulations that are essential for understanding and predicting the Earth system.


Solving the equations for Earth’s atmosphere with general circulation models (GCMs) is the basis of weather and climate prediction 1 , 2 . Over the past 70 years, GCMs have been steadily improved with better numerical methods and more detailed physical models, while exploiting faster computers to run at higher resolution. Inside GCMs, the unresolved physical processes such as clouds, radiation and precipitation are represented by semi-empirical parameterizations. Tuning GCMs to match historical data remains a manual process 5 , and GCMs retain many persistent errors and biases 6 , 7 , 8 . The difficulty of reducing uncertainty in long-term climate projections 9 and estimating distributions of extreme weather events 10 presents major challenges for climate mitigation and adaptation 11 .

Recent advances in machine learning have presented an alternative for weather forecasting 3 , 4 , 12 , 13 . These models rely solely on machine-learning techniques, using roughly 40 years of historical data from the European Center for Medium-Range Weather Forecasts (ECMWF) reanalysis v5 (ERA5) 14 for model training and forecast initialization. Machine-learning methods have been remarkably successful, demonstrating state-of-the-art deterministic forecasts for 1- to 10-day weather prediction at a fraction of the computational cost of traditional models 3 , 4 . Machine-learning atmospheric models also require considerably less code, for example GraphCast 3 has 5,417 lines versus 376,578 lines for the National Oceanic and Atmospheric Administration’s FV3 atmospheric model 15 (see Supplementary Information section  A for details).

Nevertheless, machine-learning approaches have noteworthy limitations compared with GCMs. Existing machine-learning models have focused on deterministic prediction, and surpass deterministic numerical weather prediction in terms of the aggregate metrics for which they are trained 3 , 4 . However, they do not produce calibrated uncertainty estimates 4 , which is essential for useful weather forecasts 1 . Deterministic machine-learning models using a mean-squared-error loss are rewarded for averaging over uncertainty, producing unrealistically blurry predictions when optimized for multi-day forecasts 3 , 13 . Unlike physical models, machine-learning models misrepresent derived (diagnostic) variables such as geostrophic wind 16 . Furthermore, although there has been some success in using machine-learning approaches on longer timescales 17 , 18 , these models have not demonstrated the ability to outperform existing GCMs.

Hybrid models that combine GCMs with machine learning are appealing because they build on the interpretability, extensibility and successful track record of traditional atmospheric models 19 , 20 . In the hybrid model approach, a machine-learning component replaces or corrects the traditional physical parameterizations of a GCM. Until now, the machine-learning component in such models has been trained ‘offline’, by learning parameterizations independently of their interaction with dynamics. These components are then inserted into an existing GCM. The lack of coupling between machine-learning components and the governing equations during training potentially causes serious problems, such as instability and climate drift 21 . So far, hybrid models have mostly been limited to idealized scenarios such as aquaplanets 22 , 23 . Under realistic conditions, machine-learning corrections have reduced some biases of very coarse GCMs 24 , 25 , 26 , but performance remains considerably worse than state-of-the-art models.

Here we present NeuralGCM, a fully differentiable hybrid GCM of Earth’s atmosphere. NeuralGCM is trained on forecasting up to 5-day weather trajectories sampled from ERA5. Differentiability enables end-to-end ‘online training’ 27 , with machine-learning components optimized in the context of interactions with the governing equations for large-scale dynamics, which we find enables accurate and stable forecasts. NeuralGCM produces physically consistent forecasts with accuracy comparable to best-in-class models across a range of timescales, from 1- to 15-day weather to decadal climate prediction.

Neural GCMs

A schematic of NeuralGCM is shown in Fig. 1 . The two key components of NeuralGCM are a differentiable dynamical core for solving the discretized governing dynamical equations and a learned physics module that parameterizes physical processes with a neural network, described in full detail in Methods , Supplementary Information sections  B and C , and Supplementary Table 1 . The dynamical core simulates large-scale fluid motion and thermodynamics under the influence of gravity and the Coriolis force. The learned physics module (Supplementary Fig. 1 ) predicts the effect of unresolved processes, such as cloud formation, radiative transport, precipitation and subgrid-scale dynamics, on the simulated fields using a neural network.

Fig. 1: a, Overall model structure, showing how forcings F_t, noise z_t (for stochastic models) and inputs y_t are encoded into the model state x_t. The model state is fed into the dynamical core, and alongside forcings and noise into the learned physics module. This produces tendencies (rates of change) used by an implicit–explicit ordinary differential equation (ODE) solver to advance the state in time. The new model state x_{t+1} can then be fed back into another time step, or decoded into model predictions. b, The learned physics module, which feeds data for individual columns of the atmosphere into a neural network used to produce physics tendencies in that vertical column.

The differentiable dynamical core in NeuralGCM allows an end-to-end training approach, whereby we advance the model multiple time steps before employing stochastic gradient descent to minimize discrepancies between model predictions and reanalysis (Supplementary Information section  G.2 ). We gradually increase the rollout length from 6 hours to 5 days (Supplementary Information section  G and Supplementary Table 5 ), which we found to be critical because our models are not accurate for multi-day prediction or stable for long rollouts early in training (Supplementary Information section  H.6.2 and Supplementary Fig. 23 ). The extended back-propagation through hundreds of simulation steps enables our neural networks to take into account interactions between the learned physics and the dynamical core. We train deterministic and stochastic NeuralGCM models, each of which uses a distinct training protocol, described in full detail in Methods and Supplementary Table 4 .
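The paper describes the training loop only at a high level. Purely as a conceptual sketch (not the authors' code), the JAX snippet below illustrates the general idea of end-to-end "online" training through a differentiable rollout, with trivial stand-ins for the dynamical core, the learned physics and the reanalysis targets; gradients flow through every simulation step:

    import jax
    import jax.numpy as jnp

    def dynamical_core(state):
        # Stand-in "solver": a trivial linear update, not a real atmospheric core.
        return state + 0.01 * jnp.roll(state, 1)

    def learned_physics(params, state):
        # Stand-in "learned physics": a tiny one-hidden-layer network.
        hidden = jnp.tanh(state @ params["w_in"])
        return hidden @ params["w_out"]

    def step(params, state):
        # Hybrid step: resolved dynamics plus a learned correction.
        return dynamical_core(state) + learned_physics(params, state)

    def rollout_loss(params, state, targets):
        # Unroll several steps and compare each predicted state with "reanalysis" targets.
        loss = 0.0
        for target in targets:
            state = step(params, state)
            loss = loss + jnp.mean((state - target) ** 2)
        return loss / targets.shape[0]

    grad_fn = jax.jit(jax.grad(rollout_loss))

    key = jax.random.PRNGKey(0)
    k1, k2, k3, k4 = jax.random.split(key, 4)
    dim, rollout_steps = 8, 4
    params = {
        "w_in": 0.1 * jax.random.normal(k1, (dim, dim)),
        "w_out": 0.1 * jax.random.normal(k2, (dim, dim)),
    }
    state0 = jax.random.normal(k3, (dim,))
    targets = jax.random.normal(k4, (rollout_steps, dim))

    learning_rate = 1e-2
    for _ in range(200):  # gradient descent through the full rollout
        grads = grad_fn(params, state0, targets)
        params = jax.tree_util.tree_map(lambda p, g: p - learning_rate * g, params, grads)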

We train a range of NeuralGCM models at horizontal resolutions with grid spacing of 2.8°, 1.4° and 0.7° (Supplementary Fig. 7 ). We evaluate the performance of NeuralGCM at a range of timescales appropriate for weather forecasting and climate simulation. For weather, we compare against the best-in-class conventional physics-based weather models, ECMWF’s high-resolution model (ECMWF-HRES) and ensemble prediction system (ECMWF-ENS), and two of the recent machine-learning-based approaches, GraphCast 3 and Pangu 4 . For climate, we compare against a global cloud-resolving model and Atmospheric Model Intercomparison Project (AMIP) runs.

Medium-range weather forecasting

Our evaluation set-up focuses on quantifying accuracy and physical consistency, following WeatherBench2 12 . We regrid all forecasts to a 1.5° grid using conservative regridding, and average over all 732 forecasts made at noon and midnight UTC in the year 2020, which was held-out from training data for all machine-learning models. NeuralGCM, GraphCast and Pangu compare with ERA5 as the ground truth, whereas ECMWF-ENS and ECMWF-HRES compare with the ECMWF operational analysis (that is, HRES at 0-hour lead time), to avoid penalizing the operational forecasts for different biases than ERA5.

Model accuracy

We use ECMWF’s ensemble (ENS) model as a reference baseline as it achieves the best performance across the majority of lead times 12 . We assess accuracy using (1) root-mean-squared error (RMSE), (2) root-mean-squared bias (RMSB), (3) continuous ranked probability score (CRPS) and (4) spread-skill ratio, with the results shown in Fig. 2 . We provide more in-depth evaluations including scorecards, metrics for additional variables and levels and maps in Extended Data Figs. 1 and 2 , Supplementary Information section  H and Supplementary Figs. 9 – 22 .

Fig. 2: a,c, RMSE (a) and RMSB (c) for ECMWF-ENS, ECMWF-HRES, NeuralGCM-0.7°, NeuralGCM-ENS, GraphCast 3 and Pangu 4 on headline WeatherBench2 variables, as a percentage of the error of ECMWF-ENS. Deterministic and stochastic models are shown in solid and dashed lines respectively. e,g, CRPS relative to ECMWF-ENS (e) and spread-skill ratio for the ENS and NeuralGCM-ENS models (g). b,d,f,h, Spatial distributions of RMSE (b), bias (d), CRPS (f) and spread-skill ratio (h) for NeuralGCM-ENS and ECMWF-ENS models for 10-day forecasts of specific humidity at 700 hPa. Spatial plots of RMSE and CRPS show skill relative to a probabilistic climatology 12 with an ensemble member for each of the years 1990–2019. The grey areas indicate regions where climatological surface pressure on average is below 700 hPa.

Deterministic models that produce a single weather forecast for given initial conditions can be compared effectively using RMSE skill at short lead times. For the first 1–3 days, depending on the atmospheric variable, RMSE is minimized by forecasts that accurately track the evolution of weather patterns. At this timescale we find that NeuralGCM-0.7° and GraphCast achieve the best results, with slight variations across different variables (Fig. 2a). At longer lead times, RMSE rapidly increases owing to chaotic divergence of nearby weather trajectories, making RMSE less informative for deterministic models. RMSB calculates persistent errors over time, which provides an indication of how models would perform at much longer lead times. Here NeuralGCM models also compare favourably against previous approaches (Fig. 2c), with notably less bias for specific humidity in the tropics (Fig. 2d).

Ensembles are essential for capturing the intrinsic uncertainty of weather forecasts, especially at longer lead times. Beyond about 7 days, the ensemble means of ECMWF-ENS and NeuralGCM-ENS forecasts have considerably lower RMSE than the deterministic models, indicating that these models better capture the average of possible weather. A better metric for ensemble models is CRPS, which is a proper scoring rule that is sensitive to full marginal probability distributions 28 . Our stochastic model (NeuralGCM-ENS) running at 1.4° resolution has lower error compared with ECMWF-ENS across almost all variables, lead times and vertical levels for ensemble-mean RMSE, RMSB and CRPS (Fig. 2a,c,e and Supplementary Information section H), with similar spatial patterns of skill (Fig. 2b,f). Like ECMWF-ENS, NeuralGCM-ENS has a spread-skill ratio of approximately one (Fig. 2g), which is a necessary condition for calibrated forecasts 29 .

An important characteristic of forecasts is their resemblance to realistic weather patterns. Figure 3 shows a case study that illustrates the performance of NeuralGCM on three types of important weather phenomenon: tropical cyclones, atmospheric rivers and the Intertropical Convergence Zone. Figure 3a shows that all the machine-learning models make significantly blurrier forecasts than the source data ERA5 and the physics-based ECMWF-HRES forecast, but NeuralGCM-0.7° outperforms the pure machine-learning models, despite its coarser resolution (0.7° versus 0.25° for GraphCast and Pangu). Blurry forecasts correspond to physically inconsistent atmospheric conditions and misrepresent extreme weather. Similar trends hold for other derived variables of meteorological interest (Supplementary Information section H.2). Ensemble-mean predictions, from both NeuralGCM and ECMWF, are closer to ERA5 in an average sense, and thus are inherently smooth at long lead times. In contrast, as shown in Fig. 3 and in Supplementary Information section H.3, individual realizations from the ECMWF and NeuralGCM ensembles remain sharp, even at long lead times. Like ECMWF-ENS, NeuralGCM-ENS produces a statistically representative range of future weather scenarios for each weather phenomenon, despite its eight-times-coarser resolution.

Figure 3

All forecasts are initialized at 2020-08-22T12z, chosen to highlight Hurricane Laura, the most damaging Atlantic hurricane of 2020. a , Specific humidity at 700 hPa for 1-day, 5-day and 10-day forecasts over North America and the Northeast Pacific Ocean from ERA5 14 , ECMWF-HRES, NeuralGCM-0.7°, ECMWF-ENS (mean), NeuralGCM-ENS (mean), GraphCast 3 and Pangu 4 . b , Forecasts from individual ensemble members from ECMWF-ENS and NeuralGCM-ENS over regions of interest, including predicted tracks of Hurricane Laura from each of the 50 ensemble members (Supplementary Information section  I.2 ). The track from ERA5 is plotted in black.

We can quantify the blurriness of different forecast models via their power spectra. Supplementary Figs. 17 and 18 show that the power spectra of NeuralGCM-0.7° are consistently closer to ERA5 than those of the other machine-learning forecast methods, but still blurrier than ECMWF’s physical forecasts. The spectra of NeuralGCM forecasts are also roughly constant over the forecast period, in stark contrast to GraphCast, whose spectra worsen with lead time. The spectrum of NeuralGCM becomes more accurate with increased resolution (Supplementary Fig. 22 ), which suggests the potential for further improvements of NeuralGCM models trained at higher resolutions.
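The paper measures blurriness with spherical-harmonic total-wavenumber spectra; as a simpler, hedged stand-in, the snippet below computes a zonal (FFT) power spectrum averaged over latitude, which captures the same qualitative behaviour for a single gridded field.

```python
import numpy as np

def zonal_power_spectrum(field, lat):
    """Power per zonal wavenumber for a (lat, lon) field, averaged over latitude
    with cos(lat) weights. A simple proxy for spherical-harmonic spectra."""
    coeffs = np.fft.rfft(field, axis=-1) / field.shape[-1]
    power = np.abs(coeffs) ** 2
    return np.average(power, axis=0, weights=np.cos(np.deg2rad(lat)))
```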

Water budget

In NeuralGCM, advection is handled by the dynamical core, while the machine-learning parameterization models local processes within vertical columns of the atmosphere. Thus, unlike pure machine-learning methods, local sources and sinks can be isolated from tendencies due to horizontal transport and other resolved dynamics (Supplementary Fig. 3 ). This makes our results more interpretable and facilitates the diagnosis of the water budget. Specifically, we diagnose precipitation minus evaporation (Supplementary Information section  H.5 ) rather than directly predicting these fields as in machine-learning-based approaches 3 . For short weather forecasts, the mean of precipitation minus evaporation has a realistic spatial distribution that is very close to ERA5 data (Extended Data Fig. 4c–e ). The precipitation-minus-evaporation rate distribution of NeuralGCM-0.7° closely matches the ERA5 distribution in the extratropics (Extended Data Fig. 4b ), although it underestimates extreme events in the tropics (Extended Data Fig. 4a ). Note that the current version of NeuralGCM directly predicts tendencies for an atmospheric column, and thus cannot distinguish between precipitation and evaporation.
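A hedged sketch of this column water-budget diagnosis: integrating the learned-physics moisture tendency over the column gives the net source of column water, and its negative is interpreted as precipitation minus evaporation. Variable names and shapes are illustrative; the exact procedure is described in Supplementary Information section H.5.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def precip_minus_evap(physics_moisture_tendency, layer_thickness):
    """Diagnose P - E from the column-integrated learned-physics moisture tendency.

    physics_moisture_tendency: tendency of total specific water (kg/kg/s),
        shape (level, lat, lon); layer_thickness: layer pressure thickness in Pa.
    Returns P - E in kg m^-2 s^-1 (equivalent to mm of liquid water per second).
    """
    column_source = (physics_moisture_tendency * layer_thickness).sum(axis=0) / G
    return -column_source  # a net column moisture sink implies precipitation exceeds evaporation
```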

Geostrophic wind balance

We examined the extent to which NeuralGCM, GraphCast and ECMWF-HRES capture the geostrophic wind balance, the near-equilibrium between the dominant forces that drive large-scale dynamics in the mid-latitudes 30 . A recent study 16 highlighted that Pangu misrepresents the vertical structure of the geostrophic and ageostrophic winds and noted a deterioration at longer lead times. Similarly, we observe that GraphCast shows an error that worsens with lead time. In contrast, NeuralGCM more accurately depicts the vertical structure of the geostrophic and ageostrophic winds, as well as their ratio, compared with GraphCast across various rollouts, when compared against ERA5 data (Extended Data Fig. 3 ). However, ECMWF-HRES still shows a slightly closer alignment to ERA5 data than NeuralGCM does. Within NeuralGCM, the representation of the geostrophic wind’s vertical structure only slightly degrades in the initial few days, showing no noticeable changes thereafter, particularly beyond day 5.
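For readers who want to reproduce this kind of diagnostic, the sketch below computes the geostrophic wind from geopotential on a single pressure level using the standard mid-latitude relations u_g = -(1/f) dPhi/dy and v_g = (1/f) dPhi/dx; it is a textbook formula rather than the paper's evaluation code, and values near the equator (where f is close to zero) should be masked.

```python
import numpy as np

OMEGA = 7.292e-5   # Earth's rotation rate, s^-1
RADIUS = 6.371e6   # Earth's radius, m

def geostrophic_wind(geopotential, lat, lon):
    """Geostrophic wind (u_g, v_g) from geopotential Phi (m^2 s^-2) on one
    pressure level; `lat` and `lon` are 1D coordinates in degrees."""
    lat_r, lon_r = np.deg2rad(lat), np.deg2rad(lon)
    f = 2 * OMEGA * np.sin(lat_r)[:, None]                    # Coriolis parameter
    dphi_dy = np.gradient(geopotential, RADIUS * lat_r, axis=0)
    dphi_dx = np.gradient(geopotential, lon_r, axis=1) / (RADIUS * np.cos(lat_r)[:, None])
    return -dphi_dy / f, dphi_dx / f                          # mask near the equator, where f ~ 0
```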

Generalizing to unseen data

Physically consistent weather models should still perform well for weather conditions on which they were not trained. We expect that NeuralGCM may generalize better than machine-learning-only atmospheric models, because NeuralGCM employs neural networks that act locally in space, on individual vertical columns of the atmosphere. To explore this hypothesis, we compare versions of NeuralGCM-0.7° and GraphCast trained on data up to 2017, evaluated on 5 years of weather forecasts beyond the training period (2018–2022), in Supplementary Fig. 36 . Unlike GraphCast, NeuralGCM does not show a clear trend of increasing error when initialized further into the future from the training data. To extend this test beyond 5 years, we trained a NeuralGCM-2.8° model using only data before 2000, and tested its skill for over 21 unseen years (Supplementary Fig. 35 ).

Climate simulations

Although our deterministic NeuralGCM models are trained to predict weather up to 3 days ahead, they are generally capable of simulating the atmosphere far beyond medium-range weather timescales. For extended climate simulations, we prescribe historical sea surface temperature (SST) and sea-ice concentration. These simulations feature many emergent phenomena of the atmosphere on timescales from months to decades.

For climate simulations with NeuralGCM, we use 2.8° and 1.4° deterministic models, which are relatively inexpensive to train (Supplementary Information section  G.7 ) and allow us to explore a larger parameter space to find stable models. Previous studies found that running extended simulations with hybrid models is challenging due to numerical instabilities and climate drift 21 . To quantify stability in our selected models, we run multiple initial conditions and report how many of them finish without instability.

Seasonal cycle and emergent phenomena

To assess the capability of NeuralGCM to simulate various aspects of the seasonal cycle, we run 2-year simulations with NeuralGCM-1.4° for 37 different initial conditions, spaced every 10 days during the year 2019. Of these 37 initial conditions, 35 successfully complete the full 2 years without instability; for case studies of instability, see Supplementary Information section  H.7 and Supplementary Figs. 26 and 27 . We compare results from NeuralGCM-1.4° for 2020 with ERA5 data and with outputs from the X-SHiELD global cloud-resolving model, which is coupled to an ocean model nudged towards reanalysis 31 . This X-SHiELD run has been used as a target for training machine-learning climate models 24 . For comparison, we evaluate models after regridding predictions to 1.4° resolution. This comparison slightly favours NeuralGCM because NeuralGCM was tuned to match ERA5, but the discrepancy between ERA5 and the actual atmosphere is small relative to model error.

Figure 4a shows the temporal variation of the global mean temperature for 2020, as captured by 35 simulations from NeuralGCM, in comparison with the ERA5 reanalysis and standard climatology benchmarks. The seasonality and variability of the global mean temperature from NeuralGCM are quantitatively similar to those observed in ERA5. The ensemble-mean temperature RMSE for NeuralGCM is 0.16 K when benchmarked against ERA5, a significant improvement over the climatology’s RMSE of 0.45 K. We find that NeuralGCM accurately simulates the seasonal cycle, as evidenced by metrics such as the annual cycle of global precipitable water (Supplementary Fig. 30a ) and global total kinetic energy (Supplementary Fig. 30b ). Furthermore, the model captures essential atmospheric dynamics, including the Hadley circulation and the zonal-mean zonal wind (Supplementary Fig. 28 ), the spatial patterns of eddy kinetic energy in different seasons (Supplementary Fig. 31 ), and the distinctive seasonal behaviours of monsoon circulation (Supplementary Fig. 29 ; additional details are provided in Supplementary Information section  I.1 ).
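These global-mean comparisons rest on area-weighted averaging over the sphere; a minimal sketch of the underlying computation, with illustrative array shapes and not the paper's evaluation code, is:

```python
import numpy as np

def global_mean(field, lat):
    """Area-weighted (cos latitude) global mean of a (lat, lon) field."""
    w = np.broadcast_to(np.cos(np.deg2rad(lat))[:, None], field.shape)
    return np.average(field, weights=w)

def ensemble_mean_rmse(simulated, reference):
    """RMSE of the ensemble-mean global-mean temperature time series.
    `simulated` has shape (member, time); `reference` has shape (time,)."""
    return float(np.sqrt(((simulated.mean(axis=0) - reference) ** 2).mean()))
```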

Figure 4

a , Global mean temperature for ERA5 14 (orange), 1990–2019 climatology (black) and NeuralGCM-1.4° (blue) for 2020 using 35 simulations initialized every 10 days during 2019 (thick line, ensemble mean; thin lines, different initial conditions). b , Yearly global mean temperature for ERA5 (orange), mean over 22 CMIP6 AMIP experiments 34 (violet; model details are in Supplementary Information section  I.3 ) and NeuralGCM-2.8° for 22 AMIP-like simulations with prescribed SST initialized every 10 days during 1980 (thick line, ensemble mean; thin lines, different initial conditions). c , The RMSB of the 850-hPa temperature averaged between 1981 and 2014 for 22 NeuralGCM-2.8° AMIP runs (labelled NGCM), 22 CMIP6 AMIP experiments (labelled AMIP) and debiased 22 CMIP6 AMIP experiments (labelled AMIP*; bias was removed by removing the 850-hPa global temperature bias). In the box plots, the red line represents the median. The box delineates the first to third quartiles; the whiskers extend to 1.5 times the interquartile range (Q1 − 1.5IQR and Q3 + 1.5IQR), and outliers are shown as individual dots. d , Vertical profiles of tropical (20° S–20° N) temperature trends for 1981–2014. Orange, ERA5; black dots, Radiosonde Observation Correction using Reanalyses (RAOBCORE) 41 ; blue dots, mean trends for NeuralGCM; purple dots, mean trends from CMIP6 AMIP runs (grey and black whiskers, 25th and 75th percentiles for NeuralGCM and CMIP6 AMIP runs, respectively). e – g , Tropical cyclone tracks for ERA5 ( e ), NeuralGCM-1.4° ( f ) and X-SHiELD 31 ( g ). h – k , Mean precipitable water for ERA5 ( h ) and the precipitable water bias in NeuralGCM-1.4° ( i ), initialized 90 days before mid-January 2020 similarly to X-SHiELD, X-SHiELD ( j ) and climatology ( k ; averaged between 1990 and 2019). In d – i , quantities are calculated between mid-January 2020 and mid-January 2021 and all models were regridded to a 256 × 128 Gaussian grid before computation and tracking.

Next, we compare the annual biases of a single NeuralGCM realization with a single realization of X-SHiELD (the only one available), both initiated in mid-October 2019. We consider 19 January 2020 to 17 January 2021, the time frame for which X-SHiELD data are available. Global cloud-resolving models, such as X-SHiELD, are considered state of the art, especially for simulating the hydrological cycle, owing to their resolution being capable of resolving deep convection 32 . The annual bias in precipitable water for NeuralGCM (RMSE of 1.09 mm) is substantially smaller than the biases of both X-SHiELD (RMSE of 1.74 mm) and climatology (RMSE of 1.36 mm; Fig. 4i–k ). Moreover, NeuralGCM shows a lower temperature bias in the upper and lower troposphere than X-SHiELD (Extended Data Fig. 6 ). We also indirectly compare precipitation bias in X-SHiELD with precipitation-minus-evaporation bias in NeuralGCM-1.4°, which shows slightly larger bias and grid-scale artefacts for NeuralGCM (Extended Data Fig. 5 ).

Finally, to assess the capability of NeuralGCM to generate tropical cyclones in an annual model integration, we use the tropical cyclone tracker TempestExtremes 33 , as described in Supplementary Information section  I.2 , Supplementary Fig. 34 and Supplementary Table 6 . Figure 4e–g shows that NeuralGCM, even at a coarse resolution of 1.4°, produces realistic trajectories and counts of tropical cyclones (83 versus 86 in ERA5 for the corresponding period), whereas X-SHiELD, when regridded to 1.4° resolution, substantially underestimates the tropical cyclone count (40). Additional statistical analyses of tropical cyclones can be found in Extended Data Figs. 7 and 8 .

Decadal simulations

To assess the capability of NeuralGCM to simulate historical temperature trends, we conduct AMIP-like simulations over a duration of 40 years with NeuralGCM-2.8°. Out of 37 different runs with initial conditions spaced every 10 days during the year 1980, 22 simulations were stable for the entire 40-year period, and our analysis focuses on these results. We compare with 22 simulations run with prescribed SST from the Coupled Model Intercomparison Project Phase 6 (CMIP6) 34 , listed in Supplementary Information section  I.3 .

We find that all 40-year simulations of NeuralGCM, as well as the mean of the 22 AMIP runs, accurately capture the global warming trends observed in ERA5 data (Fig. 4b ). There is a strong correlation in the year-to-year temperature trends with ERA5 data, suggesting that NeuralGCM effectively captures the impact of SST forcing on climate. When comparing spatial biases averaged over 1981–2014, we find that all 22 NeuralGCM-2.8° runs have smaller bias than the CMIP6 AMIP runs, and this result remains even when removing the global temperature bias in CMIP6 AMIP runs (Fig. 4c and Supplementary Figs. 32 and 33 ).

Next, we investigated the vertical structure of tropical warming trends, which climate models tend to overestimate in the upper troposphere 35 . As shown in Fig. 4d , the trends, calculated by linear regression, of NeuralGCM are closer to ERA5 than those of AMIP runs. In particular, the bias in the upper troposphere is reduced. However, NeuralGCM does show a wider spread in its predictions than the AMIP runs, even at levels near the surface where temperatures are typically more constrained by prescribed SST.
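The trends discussed here are ordinary least-squares fits of annual-mean temperature against time at each pressure level; a minimal sketch with illustrative shapes, not the paper's analysis code:

```python
import numpy as np

def trend_per_decade(temperature, years):
    """Least-squares linear trend (K per decade) at each pressure level.

    temperature: annual-mean, tropical-mean temperature, shape (year, level).
    years: 1D array of calendar years, e.g. np.arange(1981, 2015).
    """
    slope_per_year = np.polyfit(years, temperature, deg=1)[0]
    return 10.0 * slope_per_year
```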

Lastly, we evaluated NeuralGCM’s capability to generalize to unseen warmer climates by conducting AMIP simulations with increased SST (Supplementary Information section  I.4.2 ). We find that NeuralGCM shows some of the robust features of climate warming response to modest SST increases (+1 K and +2 K); however, for more substantial SST increases (+4 K), NeuralGCM’s response diverges from expectations (Supplementary Fig. 37 ). In addition, AMIP simulations with increased SST show climate drift, underscoring NeuralGCM’s limitations in this context (Supplementary Fig. 38 ).

NeuralGCM is a differentiable hybrid atmospheric model that combines the strengths of traditional GCMs with machine learning for weather forecasting and climate simulation. To our knowledge, NeuralGCM is the first machine-learning-based model to make accurate ensemble weather forecasts, with better CRPS than state-of-the-art physics-based models. It is also, to our knowledge, the first hybrid model that achieves comparable spatial bias to global cloud-resolving models, can simulate realistic tropical cyclone tracks and can run AMIP-like simulations with realistic historical temperature trends. Overall, NeuralGCM demonstrates that incorporating machine learning is a viable alternative to building increasingly detailed physical models 32 for improving GCMs.

Compared with traditional GCMs with similar skill, NeuralGCM is computationally efficient and low complexity. NeuralGCM runs at 8- to 40-times-coarser horizontal resolution than ECMWF’s Integrated Forecasting System and global cloud-resolving models, which enables 3 to 5 orders of magnitude savings in computational resources. For example, NeuralGCM-1.4° simulates 70,000 simulation days in 24 hours using a single tensor-processing-unit versus 19 simulated days on 13,824 central-processing-unit cores with X-SHiELD (Extended Data Table 1 ). This can be leveraged for previously impractical tasks such as large ensemble forecasting. NeuralGCM’s dynamical core uses global spectral methods 36 , and learned physics is parameterized with fully connected neural networks acting on single vertical columns. Substantial headroom exists to pursue higher accuracy using advanced numerical methods and machine-learning architectures.

Our results provide strong evidence for the disputed hypothesis 37 , 38 , 39 that learning to predict short-term weather is an effective way to tune parameterizations for climate. NeuralGCM models trained on 72-hour forecasts are capable of realistic multi-year simulation. When provided with historical SSTs, they capture essential atmospheric dynamics such as seasonal circulation, monsoons and tropical cyclones. However, we will probably need alternative training strategies 38 , 39 to learn important processes for climate with subtle impacts on weather timescales, such as a cloud feedback.

The NeuralGCM approach is compatible with incorporating either more physics or more machine learning, as required for operational weather forecasts and climate simulations. For weather forecasting, we expect that end-to-end learning 40 with observational data will allow for better and more relevant predictions, including key variables such as precipitation. Such models could include neural networks acting as corrections to traditional data assimilation and model diagnostics. For climate projection, NeuralGCM will need to be reformulated to enable coupling with other Earth-system components (for example, ocean and land), and integrating data on the atmospheric chemical composition (for example, greenhouse gases and aerosols). There are also research challenges common to current machine-learning-based climate models 19 , including the capability to simulate unprecedented climates (that is, generalization), adhering to physical constraints, and resolving numerical instabilities and climate drift. NeuralGCM’s flexibility to incorporate physics-based models (for example, radiation) offers a promising avenue to address these challenges.

Models based on physical laws and empirical relationships are ubiquitous in science. We believe the differentiable hybrid modelling approach of NeuralGCM has the potential to transform simulation for a wide range of applications, such as materials discovery, protein folding and multiphysics engineering design.

Differentiable atmospheric model

NeuralGCM combines a differentiable numerical solver for atmospheric dynamics with flexible neural-network parameterizations. Simulation in time is carried out in a coordinate system suitable for solving the dynamical equations of the atmosphere, which describe large-scale fluid motion and thermodynamics under the influence of gravity and the Coriolis force.

Our differentiable dynamical core is implemented in JAX, a library for high-performance code in Python that supports automatic differentiation 42 . The dynamical core solves the hydrostatic primitive equations with moisture, using a horizontal pseudo-spectral discretization and vertical sigma coordinates 36 , 43 . We evolve seven prognostic variables: vorticity and divergence of horizontal wind, temperature, surface pressure, and three water species (specific humidity, and specific ice and liquid cloud water content).

Our learned physics module uses the single-column approach of GCMs 2 , whereby information from only a single atmospheric column is used to predict the impact of unresolved processes occurring within that column. These effects are predicted using a fully connected neural network with residual connections, with weights shared across all atmospheric columns (Supplementary Information section  C.4 ).

The inputs to the neural network include the prognostic variables in the atmospheric column, total incident solar radiation, sea-ice concentration and SST (Supplementary Information section  C.1 ). We also provide horizontal gradients of the prognostic variables, which we found improves performance 44 . All inputs are standardized to have zero mean and unit variance using statistics precomputed during model initialization. The outputs are the prognostic variable tendencies scaled by the fixed unconditional standard deviation of the target field (Supplementary Information section  C.5 ).
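A minimal, self-contained sketch of a shared-weight column network with residual connections, in the spirit of the learned physics module; the layer widths, feature counts and activation function are illustrative assumptions, not the architecture detailed in Supplementary Information section C.4.

```python
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    """Initialize weights for a fully connected network; `sizes` are layer widths."""
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (n_in, n_out)) / jnp.sqrt(n_in),
                       jnp.zeros(n_out)))
    return params

def column_net(params, column_features):
    """Shared-weight column network with residual connections between hidden layers."""
    x = column_features
    for w, b in params[:-1]:
        h = jax.nn.gelu(x @ w + b)
        x = x + h if h.shape == x.shape else h     # residual when widths match
    w, b = params[-1]
    return x @ w + b                               # scaled tendencies for this column

key = jax.random.PRNGKey(0)
params = init_mlp(key, [256, 256, 256, 190])       # widths are illustrative only
columns = jax.random.normal(key, (8192, 256))      # (n_columns, n_input_features)
tendencies = jax.vmap(lambda col: column_net(params, col))(columns)  # same weights for every column
```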

To interface between ERA5 14 data stored in pressure coordinates and the sigma coordinate system of our dynamical core, we introduce encoder and decoder components (Supplementary Information section  D ). These components perform linear interpolation between pressure levels and sigma coordinate levels. We additionally introduce learned corrections to both encoder and decoder steps (Supplementary Figs. 4–6 ), using the same column-based neural network architecture as the learned physics module. Importantly, the encoder enables us to eliminate the gravity waves from initialization shock 45 , which otherwise contaminate forecasts.
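The pressure-to-sigma step of the encoder (before its learned correction) can be sketched as per-column linear interpolation; this is an illustration under stated assumptions, not the NeuralGCM implementation.

```python
import numpy as np

def pressure_to_sigma(values, pressure_levels, sigma_levels, surface_pressure):
    """Linearly interpolate one column from fixed pressure levels to sigma levels.

    pressure_levels (Pa) must be increasing; sigma = p / p_surface.
    The learned encoder corrections of NeuralGCM are omitted here.
    """
    target_pressure = sigma_levels * surface_pressure
    return np.interp(target_pressure, pressure_levels, values)
```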

Figure 1a shows the sequence of steps that NeuralGCM takes to make a forecast. First, it encodes ERA5 data at t  =  t 0 on pressure levels to initial conditions on sigma coordinates. To perform a time step, the dynamical core and learned physics (Fig. 1b ) then compute tendencies, which are integrated in time using an implicit–explicit ordinary differential equation solver 46 (Supplementary Information section  E and Supplementary Table 2 ). This is repeated to advance the model from t  =  t 0 to t  =  t final . Finally, the decoder converts predictions back to pressure levels.

The time-step size of the ODE solver (Supplementary Table 3 ) is limited by the Courant–Friedrichs–Lewy condition on dynamics, and can be small relative to the timescale of atmospheric change. Evaluating learned physics is approximately 1.5 times as expensive as a time step of the dynamical core. Accordingly, following the typical practice for GCMs, we hold learned physics tendencies constant for multiple ODE time steps to reduce computational expense, typically corresponding to 30 minutes of simulation time.
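The practice of holding learned-physics tendencies fixed across several dynamical-core steps can be sketched as follows; `dynamics_step` and `physics_tendency` are hypothetical stand-ins for the real components, and the toy usage only demonstrates the call pattern.

```python
def integrate(state, n_steps, physics_every, dynamics_step, physics_tendency):
    """Advance the model `n_steps` dynamical-core steps, re-evaluating the
    (more expensive) learned physics only every `physics_every` steps."""
    for step in range(n_steps):
        if step % physics_every == 0:
            tendency = physics_tendency(state)    # neural-network call, held fixed afterwards
        state = dynamics_step(state, tendency)    # cheap dynamical-core step
    return state

# Toy usage with a scalar "state" and trivial stand-in components:
final_state = integrate(
    state=1.0,
    n_steps=48,
    physics_every=6,
    dynamics_step=lambda s, t: s + 0.01 * t,
    physics_tendency=lambda s: -0.05 * s,
)
```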

Deterministic and stochastic models

We train deterministic NeuralGCM models using a combination of three loss functions (Supplementary Information section  G.4 ) to encourage accuracy and sharpness while penalizing bias. During the main training phase, all losses are defined in a spherical harmonics basis. We use a standard mean squared error loss for promoting accuracy, modified to progressively filter out contributions from higher total wavenumbers at longer lead times (Supplementary Fig. 8 ). This filtering approach tackles the ‘double penalty problem’ 47 as it prevents the model from being penalized for predicting high-wavenumber features in incorrect locations at later times, especially beyond the predictability horizon. A second loss term encourages the spectrum to match the training data using squared loss on the total wavenumber spectrum of prognostic variables. These first two losses are evaluated on both sigma and pressure levels. Finally, a third loss term discourages bias by adding mean squared error on the batch-averaged mean amplitude of each spherical harmonic coefficient. For analysis of the impact that various loss functions have, refer to Supplementary Information section  H.6.1 , and Supplementary Figs. 23 and 24 . The combined action of the three training losses allows the resulting models trained on 3-day rollouts to remain stable during years-to-decades-long climate simulations. Before final evaluations, we perform additional fine-tuning of just the decoder component on short rollouts of 24 hours (Supplementary Information section  G.5 ).
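A heavily simplified sketch of how the three deterministic losses could be combined, treating spectral coefficients as generic arrays and omitting the sigma/pressure-level split and the relative weighting of the terms (both assumptions here; details are in Supplementary Information section G.4).

```python
import numpy as np

def combined_loss(pred_coeffs, true_coeffs, filter_weights):
    """Simplified combination of the three deterministic training losses.

    pred_coeffs, true_coeffs: spectral coefficients, shape (batch, coefficient).
    filter_weights: values in [0, 1] that progressively down-weight high total
        wavenumbers at longer lead times.
    """
    # 1. Filtered squared error: accuracy without the double penalty at high wavenumbers.
    accuracy = np.mean(filter_weights * (pred_coeffs - true_coeffs) ** 2)
    # 2. Spectrum loss: match the mean amplitude spectrum of the training data.
    spectrum = np.mean((np.abs(pred_coeffs).mean(axis=0) - np.abs(true_coeffs).mean(axis=0)) ** 2)
    # 3. Bias loss: penalize batch-averaged errors in each coefficient.
    bias = np.mean((pred_coeffs - true_coeffs).mean(axis=0) ** 2)
    return accuracy + spectrum + bias
```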

Stochastic NeuralGCM models incorporate inherent randomness in the form of additional random fields passed as inputs to neural network components. Our stochastic loss is based on the CRPS 28 , 48 , 49 . CRPS consists of mean absolute error that encourages accuracy, balanced by a similar term that encourages ensemble spread. For each variable we use a sum of CRPS in grid space and CRPS in the spherical harmonic basis below a maximum cut-off wavenumber (Supplementary Information section  G.6 ). We compute CRPS on rollout lengths from 6 hours to 5 days. As illustrated in Fig. 1 , we inject noise into the learned encoder and the learned physics module by sampling from Gaussian random fields with learned spatial and temporal correlation (Supplementary Information section  C.2 and Supplementary Fig. 2 ). For training, we generate two ensemble members per forecast, which suffices for an unbiased estimate of CRPS.
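With two ensemble members per forecast, the unbiased CRPS estimator reduces to a particularly simple form, sketched below; it mirrors the skill-minus-spread structure described above but is not the training code itself.

```python
import jax.numpy as jnp

def crps_two_members(x1, x2, y):
    """Unbiased CRPS estimate from a two-member ensemble against target y:
    a skill term rewarding accuracy minus a spread term rewarding dispersion."""
    skill = 0.5 * (jnp.abs(x1 - y) + jnp.abs(x2 - y))
    spread = 0.5 * jnp.abs(x1 - x2)
    return jnp.mean(skill - spread)
```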

Data availability

For training and evaluating the NeuralGCM models, we used the publicly available ERA5 dataset 14 , originally downloaded from https://cds.climate.copernicus.eu/ and available via Google Cloud Storage in Zarr format at gs://gcp-public-data-arco-era5/ar/full_37-1h-0p25deg-chunk-1.zarr-v3. To compare NeuralGCM with operational and data-driven weather models, we used forecast datasets distributed as part of WeatherBench2 12 at https://weatherbench2.readthedocs.io/en/latest/data-guide.html , to which we have added NeuralGCM forecasts for 2020. To compare NeuralGCM with atmospheric models in climate settings, we used CMIP6 data available at https://catalog.pangeo.io/browse/master/climate/ , as well as X-SHiELD 24 outputs available on Google Cloud storage in a ‘requester pays’ bucket at gs://ai2cm-public-requester-pays/C3072-to-C384-res-diagnostics. The Radiosonde Observation Correction using Reanalyses (RAOBCORE) V1.9 that was used as reference tropical temperature trends was downloaded from https://webdata.wolke.img.univie.ac.at/haimberger/v1.9/ . Base maps use freely available data from https://www.naturalearthdata.com/downloads/ .
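As a convenience, the ARCO-ERA5 store listed above can be opened lazily with xarray; the variable and coordinate names in the commented selection are assumptions and should be checked against the store's contents.

```python
import xarray as xr

# Lazily open the ARCO-ERA5 store listed above (requires the gcsfs and zarr packages).
era5 = xr.open_zarr(
    "gs://gcp-public-data-arco-era5/ar/full_37-1h-0p25deg-chunk-1.zarr-v3",
    storage_options={"token": "anon"},
)
print(list(era5.data_vars))  # inspect variable names before selecting
# Example selection (names are assumptions; adjust to the store's actual variables):
# q700 = era5["specific_humidity"].sel(level=700, time="2020-08-22T12:00")
```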

Code availability

The NeuralGCM code base is separated into two open source projects: Dinosaur and NeuralGCM, both publicly available on GitHub at https://github.com/google-research/dinosaur (ref. 50 ) and https://github.com/google-research/neuralgcm (ref. 51 ). The Dinosaur package implements a differentiable dynamical core used by NeuralGCM, whereas the NeuralGCM package provides machine-learning models and checkpoints of trained models. Evaluation code for NeuralGCM weather forecasts is included in WeatherBench2 12 , available at https://github.com/google-research/weatherbench2 (ref. 52 ).

Bauer, P., Thorpe, A. & Brunet, G. The quiet revolution of numerical weather prediction. Nature 525 , 47–55 (2015).

Balaji, V. et al. Are general circulation models obsolete? Proc. Natl Acad. Sci. USA 119 , e2202075119 (2022).

Lam, R. et al. Learning skillful medium-range global weather forecasting. Science 382 , 1416–1421 (2023).

Bi, K. et al. Accurate medium-range global weather forecasting with 3D neural networks. Nature 619 , 533–538 (2023).

Hourdin, F. et al. The art and science of climate model tuning. Bull. Am. Meteorol. Soc. 98 , 589–602 (2017).

Bony, S. & Dufresne, J.-L. Marine boundary layer clouds at the heart of tropical cloud feedback uncertainties in climate models. Geophys. Res. Lett. 32 , L20806 (2005).

Webb, M. J., Lambert, F. H. & Gregory, J. M. Origins of differences in climate sensitivity, forcing and feedback in climate models. Clim. Dyn. 40 , 677–707 (2013).

Sherwood, S. C., Bony, S. & Dufresne, J.-L. Spread in model climate sensitivity traced to atmospheric convective mixing. Nature 505 , 37–42 (2014).

Palmer, T. & Stevens, B. The scientific challenge of understanding and estimating climate change. Proc. Natl Acad. Sci. USA 116 , 24390–24395 (2019).

Fischer, E. M., Beyerle, U. & Knutti, R. Robust spatially aggregated projections of climate extremes. Nat. Clim. Change 3 , 1033–1038 (2013).

Field, C. B. Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation: Special Report of the Intergovernmental Panel on Climate Change (Cambridge Univ. Press, 2012).

Rasp, S. et al. WeatherBench 2: A benchmark for the next generation of data-driven global weather models. J. Adv. Model. Earth Syst. 16 , e2023MS004019 (2024).

Keisler, R. Forecasting global weather with graph neural networks. Preprint at https://arxiv.org/abs/2202.07575 (2022).

Hersbach, H. et al. The ERA5 global reanalysis. Q. J. R. Meteorol. Soc. 146 , 1999–2049 (2020).

Zhou, L. et al. Toward convective-scale prediction within the next generation global prediction system. Bull. Am. Meteorol. Soc. 100 , 1225–1243 (2019).

Bonavita, M. On some limitations of current machine learning weather prediction models. Geophys. Res. Lett. 51 , e2023GL107377 (2024).

Weyn, J. A., Durran, D. R. & Caruana, R. Improving data-driven global weather prediction using deep convolutional neural networks on a cubed sphere. J. Adv. Model. Earth Syst. 12 , e2020MS002109 (2020).

Watt-Meyer, O. et al. ACE: a fast, skillful learned global atmospheric model for climate prediction. Preprint at https://arxiv.org/abs/2310.02074 (2023).

Bretherton, C. S. Old dog, new trick: reservoir computing advances machine learning for climate modeling. Geophys. Res. Lett. 50 , e2023GL104174 (2023).

Reichstein, M. et al. Deep learning and process understanding for data-driven Earth system science. Nature 566 , 195–204 (2019).

Brenowitz, N. D. & Bretherton, C. S. Spatially extended tests of a neural network parametrization trained by coarse-graining. J. Adv. Model. Earth Syst. 11 , 2728–2744 (2019).

Rasp, S., Pritchard, M. S. & Gentine, P. Deep learning to represent subgrid processes in climate models. Proc. Natl Acad. Sci. USA 115 , 9684–9689 (2018).

Yuval, J. & O’Gorman, P. A. Stable machine-learning parameterization of subgrid processes for climate modeling at a range of resolutions. Nat. Commun. 11 , 3295 (2020).

Kwa, A. et al. Machine-learned climate model corrections from a global storm-resolving model: performance across the annual cycle. J. Adv. Model. Earth Syst. 15 , e2022MS003400 (2023).

Arcomano, T., Szunyogh, I., Wikner, A., Hunt, B. R. & Ott, E. A hybrid atmospheric model incorporating machine learning can capture dynamical processes not captured by its physics-based component. Geophys. Res. Lett. 50 , e2022GL102649 (2023).

Han, Y., Zhang, G. J. & Wang, Y. An ensemble of neural networks for moist physics processes, its generalizability and stable integration. J. Adv. Model. Earth Syst. 15 , e2022MS003508 (2023).

Gelbrecht, M., White, A., Bathiany, S. & Boers, N. Differentiable programming for Earth system modeling. Geosci. Model Dev. 16 , 3123–3135 (2023).

Gneiting, T. & Raftery, A. E. Strictly proper scoring rules, prediction, and estimation. J. Am. Stat. Assoc. 102 , 359–378 (2007).

Fortin, V., Abaza, M., Anctil, F. & Turcotte, R. Why should ensemble spread match the RMSE of the ensemble mean? J. Hydrometeorol. 15 , 1708–1713 (2014).

Holton, J. R. An introduction to Dynamic Meteorology 5th edn (Elsevier, 2004).

Cheng, K.-Y. et al. Impact of warmer sea surface temperature on the global pattern of intense convection: insights from a global storm resolving model. Geophys. Res. Lett. 49 , e2022GL099796 (2022).

Stevens, B. et al. DYAMOND: the dynamics of the atmospheric general circulation modeled on non-hydrostatic domains. Prog. Earth Planet. Sci. 6 , 61 (2019).

Ullrich, P. A. et al. TempestExtremes v2.1: a community framework for feature detection, tracking, and analysis in large datasets. Geosc. Model Dev. 14 , 5023–5048 (2021).

Eyring, V. et al. Overview of the Coupled Model Intercomparison Project Phase 6 (CMIP6) experimental design and organization. Geosci. Model Dev. 9 , 1937–1958 (2016).

Mitchell, D. M., Lo, Y. E., Seviour, W. J., Haimberger, L. & Polvani, L. M. The vertical profile of recent tropical temperature trends: persistent model biases in the context of internal variability. Environ. Res. Lett. 15 , 1040b4 (2020).

Bourke, W. A multi-level spectral model. I. Formulation and hemispheric integrations. Mon. Weather Rev. 102 , 687–701 (1974).

Ruiz, J. J., Pulido, M. & Miyoshi, T. Estimating model parameters with ensemble-based data assimilation: a review. J. Meteorol. Soc. Jpn Ser. II 91 , 79–99 (2013).

Schneider, T., Lan, S., Stuart, A. & Teixeira, J. Earth system modeling 2.0: a blueprint for models that learn from observations and targeted high-resolution simulations. Geophys. Res. Lett. 44 , 12–396 (2017).

Schneider, T., Leung, L. R. & Wills, R. C. J. Opinion: Optimizing climate models with process knowledge, resolution, and artificial intelligence. Atmos. Chem. Phys. 24 , 7041–7062 (2024).

Sutskever, I., Vinyals, O. & Le, Q. V. Sequence to sequence learning with neural networks. Adv. Neural Inf. Process. Syst. 27 , 3104–3112 (2014).

Haimberger, L., Tavolato, C. & Sperka, S. Toward elimination of the warm bias in historic radiosonde temperature records—some new results from a comprehensive intercomparison of upper-air data. J. Clim. 21 , 4587–4606 (2008).

Bradbury, J. et al. JAX: composable transformations of Python+NumPy programs. GitHub http://github.com/google/jax (2018).

Durran, D. R. Numerical Methods for Fluid Dynamics: With Applications to Geophysics Vol. 32, 2nd edn (Springer, 2010).

Wang, P., Yuval, J. & O’Gorman, P. A. Non-local parameterization of atmospheric subgrid processes with neural networks. J. Adv. Model. Earth Syst. 14 , e2022MS002984 (2022).

Daley, R. Normal mode initialization. Rev. Geophys. 19 , 450–468 (1981).

Whitaker, J. S. & Kar, S. K. Implicit–explicit Runge–Kutta methods for fast–slow wave problems. Mon. Weather Rev. 141 , 3426–3434 (2013).

Gilleland, E., Ahijevych, D., Brown, B. G., Casati, B. & Ebert, E. E. Intercomparison of spatial forecast verification methods. Weather Forecast. 24 , 1416–1430 (2009).

Rasp, S. & Lerch, S. Neural networks for postprocessing ensemble weather forecasts. Month. Weather Rev. 146 , 3885–3900 (2018).

Pacchiardi, L., Adewoyin, R., Dueben, P. & Dutta, R. Probabilistic forecasting with generative networks via scoring rule minimization. J. Mach. Learn. Res. 25 , 1–64 (2024).

Smith, J. A., Kochkov, D., Norgaard, P., Yuval, J. & Hoyer, S. google-research/dinosaur: 1.0.0. Zenodo https://doi.org/10.5281/zenodo.11376145 (2024).

Kochkov, D. et al. google-research/neuralgcm: 1.0.0. Zenodo https://doi.org/10.5281/zenodo.11376143 (2024).

Rasp, S. et al. google-research/weatherbench2: v0.2.0. Zenodo https://doi.org/10.5281/zenodo.11376271 (2023).

Acknowledgements

We thank A. Kwa, A. Merose and K. Shah for assistance with data acquisition and handling; L. Zepeda-Núñez for feedback on the paper; and J. Anderson, C. Van Arsdale, R. Chemke, G. Dresdner, J. Gilmer, J. Hickey, N. Lutsko, G. Nearing, A. Paszke, J. Platt, S. Ponda, M. Pritchard, D. Rothenberg, F. Sha, T. Schneider and O. Voicu for discussions.

Author information

These authors contributed equally: Dmitrii Kochkov, Janni Yuval, Ian Langmore, Peter Norgaard, Jamie Smith, Stephan Hoyer

Authors and Affiliations

Google Research, Mountain View, CA, USA

Dmitrii Kochkov, Janni Yuval, Ian Langmore, Peter Norgaard, Jamie Smith, Griffin Mooers, James Lottes, Stephan Rasp, Michael P. Brenner & Stephan Hoyer

Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA

Milan Klöwer

European Centre for Medium-Range Weather Forecasts, Reading, UK

Peter Düben & Sam Hatfield

Google DeepMind, London, UK

Peter Battaglia, Alvaro Sanchez-Gonzalez & Matthew Willson

School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA

Michael P. Brenner

Contributions

D.K., J.Y., I.L., P.N., J.S. and S. Hoyer contributed equally to this work. D.K., J.Y., I.L., P.N., J.S., G.M., J.L. and S. Hoyer wrote the code. D.K., J.Y., I.L., P.N., G.M. and S. Hoyer trained models and analysed the data. M.P.B. and S. Hoyer managed and oversaw the research project. M.K., S.R., P.D., S. Hatfield, P.B. and M.P.B. contributed technical advice and ideas. M.W. ran experiments with GraphCast for comparison with NeuralGCM. A.S.-G. assisted with data preparation. D.K., J.Y., I.L., P.N. and S. Hoyer wrote the paper. All authors gave feedback and contributed to editing the paper.

Corresponding authors

Correspondence to Dmitrii Kochkov , Janni Yuval or Stephan Hoyer .

Ethics declarations

Competing interests.

D.K., J.Y., I.L., P.N., J.S., J.L., S.R., P.B., A.S.-G., M.W., M.P.B. and S. Hoyer are employees of Google. S. Hoyer, D.K., I.L., J.Y., G.M., P.N., J.S. and M.B. have filed international patent application PCT/US2023/035420 in the name of Google LLC, currently pending, relating to neural general circulation models.

Peer review

Peer review information.

Nature thanks Karthik Kashinath and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data figures and tables

Extended Data Fig. 1 Maps of bias for NeuralGCM-ENS and ECMWF-ENS forecasts.

Bias is averaged over all forecasts initialized in 2020.

Extended Data Fig. 2 Maps of spread-skill ratio for NeuralGCM-ENS and ECMWF-ENS forecasts.

Spread-skill ratio is averaged over all forecasts initialized in 2020.

Extended Data Fig. 3 Geostrophic balance in NeuralGCM, GraphCast 3 and ECMWF-HRES.

Vertical profiles of the extratropical intensity (averaged between latitude 30°–70° in both hemispheres) and over all forecasts initialized in 2020 of (a,d,g) geostrophic wind, (b,e,h) ageostrophic wind and (c,f,i) the ratio of the intensity of ageostrophic wind over geostrophic wind for ERA5 (black continuous line in all panels), (a,b,c) NeuralGCM-0.7°, (d,e,f) GraphCast and (g,h,i) ECMWF-HRES at lead times of 1 day, 5 days and 10 days.

Extended Data Fig. 4 Precipitation minus evaporation calculated from the third day of weather forecasts.

(a) Tropical (latitudes −20° to 20°) precipitation minus evaporation (P minus E) rate distribution, (b) Extratropical (latitudes 30° to 70° in both hemispheres) P minus E, (c) mean P minus E for 2020 ERA5 14 and (d) NeuralGCM-0.7° (calculated from the third day of forecasts and averaged over all forecasts initialized in 2020), (e) the bias between NeuralGCM-0.7° and ERA5, (f-g) Snapshot of daily precipitation minus evaporation for 2020-01-04 for (f) NeuralGCM-0.7° (forecast initialized on 2020-01-02) and (g) ERA5.

Extended Data Fig. 5 Indirect comparison between precipitation bias in X-SHiELD and precipitation minus evaporation bias in NeuralGCM-1.4°.

Mean precipitation calculated between 2020-01-19 and 2021-01-17 for (a) ERA5 14 (c) X-SHiELD 31 and the biases in (e) X-SHiELD and (g) climatology (ERA5 data averaged over 1990-2019). Mean precipitation minus evaporation calculated between 2020-01-19 and 2021-01-17 for (b) ERA5 (d) NeuralGCM-1.4° (initialized in October 18th 2019) and the biases in (f) NeuralGCM-1.4° and (h) climatology (data averaged over 1990–2019).

Extended Data Fig. 6 Yearly temperature bias for NeuralGCM and X-SHiELD 31 .

Mean temperature between 2020-01-19 and 2021-01-17 for (a) ERA5 at 200hPa and (b) 850hPa. (c,d) the bias in the temperature for NeuralGCM-1.4°, (e,f) the bias in X-SHiELD and (g,h) the bias in climatology (calculated from 1990–2019). NeuralGCM-1.4° was initialized on the 18th of October 2019 (similar to X-SHiELD).

Extended Data Fig. 7 Tropical Cyclone densities and annual regional counts.

(a) Tropical Cyclone (TC) density from ERA5 14 data spanning 1987–2020. (b) TC density from NeuralGCM-1.4° for 2020, generated using 34 different initial conditions all initialized in 2019. (c) Box plot depicting the annual number of TCs across different regions, based on ERA5 data (1987–2020), NeuralGCM-1.4° for 2020 (34 initial conditions), and orange markers show ERA5 for 2020. In the box plots, the red line represents the median; the box delineates the first to third quartiles; the whiskers extend to 1.5 times the interquartile range (Q1 − 1.5IQR and Q3 + 1.5IQR), and outliers are shown as individual dots. Each year is defined from January 19th to January 17th of the following year, aligning with data availability from X-SHiELD. For NeuralGCM simulations, the 3 initial conditions starting in January 2019 exclude data for January 17th, 2021, as these runs spanned only two years.

Extended Data Fig. 8 Tropical Cyclone maximum wind distribution in NeuralGCM vs. ERA5 14 .

Number of Tropical Cyclones (TCs) as a function of maximum wind speed at 850hPa across different regions, based on ERA5 data (1987–2020; in orange), and NeuralGCM-1.4° for 2020 (34 initial conditions; in blue). Each year is defined from January 19th to January 17th of the following year, aligning with data availability from X-SHiELD. For NeuralGCM simulations, the 3 initial conditions starting in January 2019 exclude data for January 17th, 2021, as these runs spanned only two years.

Supplementary information

Supplementary Information (38 figures, 6 tables): (A) Lines of code in atmospheric models; (B) Dynamical core of NeuralGCM; (C) Learned physics of NeuralGCM; (D) Encoder and decoder of NeuralGCM; (E) Time integration; (F) Evaluation metrics; (G) Training; (H) Additional weather evaluations; (I) Additional climate evaluations.

Peer Review File

Rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Kochkov, D., Yuval, J., Langmore, I. et al. Neural general circulation models for weather and climate. Nature (2024). https://doi.org/10.1038/s41586-024-07744-y

Received : 13 November 2023

Accepted : 15 June 2024

Published : 22 July 2024

DOI : https://doi.org/10.1038/s41586-024-07744-y


Qualitative & Quantitative Data

Understanding Qualitative and Quantitative Data

  • 7 minute read
  • August 22, 2024

Written by: Smith Alex

Smith Alex is a committed data enthusiast and an aspiring leader in the domain of data analytics, with a foundation in engineering and practical experience in the field of data science.

Summary: This article delves into qualitative and quantitative data, defining each type and highlighting their key differences. It discusses when to use each data type, the benefits of integrating both, and the challenges researchers face. Understanding these concepts is crucial for effective research design and achieving comprehensive insights.

Introduction

In the realm of research and Data Analysis , two fundamental types of data play pivotal roles: qualitative and quantitative data. Understanding the distinctions between these two categories is essential for researchers, analysts, and decision-makers alike, as each type serves different purposes and is suited to various contexts.

This article will explore the definitions, characteristics, uses, and challenges associated with both qualitative and quantitative data, providing a comprehensive overview for anyone looking to enhance their understanding of data collection and analysis.

Defining Qualitative Data

Qualitative data is non-numerical in nature and is primarily concerned with understanding the qualities, characteristics, and attributes of a subject.

This type of data is descriptive and often involves collecting information through methods such as interviews, focus groups, observations, and open-ended survey questions. The goal of qualitative data is to gain insights into the underlying motivations, opinions, and experiences of individuals or groups.

Characteristics of Qualitative Data

  • Descriptive : Qualitative data provides rich, detailed descriptions of phenomena, allowing researchers to capture the complexity of human experiences.
  • Subjective : The interpretation of qualitative data can vary based on the researcher’s perspective, making it inherently subjective.
  • Contextual : This type of data is often context-dependent, meaning that the insights gained can be influenced by the environment or situation in which the data was collected.
  • Exploratory : Qualitative data is typically used in exploratory research to generate hypotheses or to understand phenomena that are not well understood.

Examples of Qualitative Data

  • Interview transcripts that capture participants’ thoughts and feelings.
  • Observational notes from field studies.
  • Responses to open-ended questions in surveys.
  • Personal narratives or case studies that illustrate individual experiences.

Defining Quantitative Data

Quantitative data, in contrast, is numerical and can be measured or counted. This type of data is often used to quantify variables and analyse relationships between them. Quantitative research typically employs statistical methods to test hypotheses, identify patterns, and make predictions based on numerical data.

Characteristics of Quantitative Data

  • Objective : Quantitative data is generally considered more objective than qualitative data, as it relies on measurable values that can be statistically analysed.
  • Structured : This type of data is often collected using structured methods such as surveys with closed-ended questions, experiments, or observational checklists.
  • Generalizable : Because quantitative data is based on numerical values, findings can often be generalised to larger populations if the sample is representative.
  • Statistical Analysis : Quantitative data lends itself to various statistical analyses , allowing researchers to draw conclusions based on numerical evidence.

Examples of Quantitative Data

  • Age, height, and weight measurements.
  • Survey results with numerical ratings (e.g., satisfaction scores).
  • Test scores or academic performance metrics.
  • Financial data such as income, expenses, and profit margins.

Key Differences Between Qualitative and Quantitative Data

Understanding the differences between qualitative and quantitative data is crucial for selecting the appropriate research methods and analysis techniques. In short, qualitative data is descriptive, subjective, contextual and exploratory, gathered through methods such as interviews, focus groups and open-ended questions, whereas quantitative data is numerical, objective, structured and generalizable, gathered through closed-ended surveys, experiments and measurements and analysed statistically.

When to Use Qualitative Data

Qualitative data is particularly useful in situations where the research aims to explore complex phenomena, understand human behaviour, or generate new theories. Here are some scenarios where qualitative data is the preferred choice:

Exploratory Research

When investigating a new area of study where little is known, qualitative methods can help uncover insights and generate hypotheses.

Understanding Context

Qualitative data is valuable for capturing the context surrounding a particular phenomenon, providing depth to the analysis.

Gaining Insights into Attitudes and Behaviours

When the goal is to understand why individuals think or behave in a certain way, qualitative methods such as interviews can provide rich, nuanced insights.

Developing Theories

Qualitative research can help in the development of theories by exploring relationships and patterns that quantitative methods may overlook.

When to Use Quantitative Data

Quantitative data is best suited for research that requires measurement, comparison, and statistical analysis. Here are some situations where quantitative data is the preferred choice:

Testing Hypotheses

When researchers have specific hypotheses to test , quantitative methods allow for rigorous statistical analysis to confirm or reject these hypotheses.

Measuring Variables

Quantitative data is ideal for measuring variables and establishing relationships between them, making it useful for experiments and surveys.

Generalising Findings

When the goal is to generalise findings to a larger population, quantitative research provides the necessary data to support such conclusions.

Identifying Patterns and Trends

Quantitative analysis can reveal patterns and trends in data that can inform decision-making and policy development.

Integrating Qualitative and Quantitative Data

While qualitative and quantitative data are distinct, they can be effectively integrated to provide a more comprehensive understanding of a research question. This mixed-methods approach combines the strengths of both types of data, allowing researchers to triangulate findings and gain deeper insights.

Benefits of Integration

Integrating qualitative and quantitative data enhances research by combining numerical analysis with rich, descriptive insights. This mixed-methods approach allows for a comprehensive understanding of complex phenomena, validating findings and providing a more nuanced perspective on research questions.

  • Enhanced Validity: By using both qualitative and quantitative data, researchers can validate their findings through multiple sources of evidence.
  • Rich Insights : Qualitative data can provide context and depth to quantitative findings, helping to explain the “why” behind numerical trends.
  • Comprehensive Understanding: Integrating both types of data allows for a more holistic understanding of complex phenomena, leading to more informed conclusions and recommendations.

Examples of Integration

  • Surveys with Open-Ended Questions: Combining closed-ended questions (quantitative) with open-ended questions (qualitative) in surveys can provide both measurable data and rich descriptive insights.
  • Case Studies with Statistical Analysis: Researchers can conduct case studies (qualitative) while also collecting quantitative data to support their findings, offering a more robust analysis.
  • Focus Groups with Follow-Up Surveys: After conducting focus groups (qualitative), researchers can administer surveys (quantitative) to a larger population to validate the insights gained.

Challenges and Considerations

While qualitative and quantitative data offer distinct advantages, researchers must also be aware of the challenges and considerations associated with each type:

Challenges of Qualitative Data

The challenges of qualitative data are multifaceted and can significantly impact the research process. Here are some of the primary challenges faced by researchers when working with qualitative data:

Subjectivity and Bias

One of the most significant challenges in qualitative research is the inherent subjectivity involved in data collection and analysis. Researchers’ personal beliefs, assumptions, and experiences can influence their interpretation of data.

Data Overload

Qualitative research often generates large volumes of data, which can be overwhelming. This data overload can make it challenging to identify key themes and insights. Researchers may struggle to manage and analyse vast amounts of qualitative data, leading to potential insights being overlooked.

Lack of Structure

Qualitative data is often unstructured, making it difficult to analyse systematically. The absence of a predefined format can lead to challenges in drawing meaningful conclusions from the data.

Time-Consuming Nature

Qualitative analysis can be extremely time-consuming, especially when dealing with extensive data sets. The process of collecting, transcribing, and analysing qualitative data often requires significant time and resources, which can be a barrier for researchers.

Challenges of Quantitative Data

While quantitative data provides objective, measurable evidence, it also faces challenges in capturing the full complexity of human experiences, maintaining data accuracy, and avoiding misinterpretation of statistical results. Integrating qualitative data can help overcome some of these limitations.

Limits in Capturing Complexity

Quantitative data, by its nature, can oversimplify complex phenomena and miss important nuances that qualitative data can capture. The focus on numerical measurements may not fully reflect the depth and richness of human experiences and behaviours.

Chances for Misinterpretation

Numbers can be twisted or misinterpreted if not analysed properly. Researchers must be cautious in interpreting statistical results, as correlation does not imply causation. Poor knowledge of statistical analysis can negatively impact the analysis and interpretation of quantitative data.

Influence of Measurement Errors

Due to the numerical nature of quantitative data, even small measurement errors can skew the entire dataset. Inaccuracies in data collection methods can lead to drawing incorrect conclusions from the analysis.

Lack of Context

Quantitative experiments often do not take place in natural settings. The data may lack the context and nuance that qualitative data can provide to fully explain the phenomena being studied.

Sample Size Limitations

Small sample sizes in quantitative studies can reduce the reliability of the data. Large sample sizes are needed for more accurate statistical analysis. This also affects the ability to generalise findings to wider populations.

Confirmation Bias

Researchers may miss observing important phenomena due to their focus on testing pre-determined hypotheses rather than generating new theories. The confirmation bias inherent in hypothesis testing can limit the discovery of unexpected insights.

In conclusion, understanding the distinctions between qualitative and quantitative data is essential for effective research and Data Analysis . Each type of data serves unique purposes and is suited to different contexts, making it crucial for researchers to select the appropriate methods based on their research objectives.

By integrating both qualitative and quantitative data, researchers can gain a more comprehensive understanding of complex phenomena, leading to richer insights and more informed decision-making.

As the landscape of research continues to evolve, the ability to effectively utilise and integrate both types of data will remain a valuable skill for researchers and analysts alike.

Frequently Asked Questions

What Is the Primary Difference Between Qualitative and Quantitative Data?

The primary difference is that qualitative data is descriptive and non-numerical, focusing on understanding qualities and experiences, while quantitative data is numerical and measurable, focusing on quantifying variables and testing hypotheses.

When Should I Use Qualitative Data in My Research?

Qualitative data is best used when exploring new topics, understanding complex behaviours, or generating hypotheses, particularly when context and depth are important.

Can Qualitative and Quantitative Data Be Used Together?

Yes, integrating qualitative and quantitative data can provide a more comprehensive understanding of a research question, allowing researchers to validate findings and gain richer insights.


SMU Simmons School of Education & Human Development

Qualitative vs. quantitative data analysis: How do they differ?

Learning analytics have become the cornerstone for personalizing student experiences and enhancing learning outcomes. In this data-informed approach to education there are two distinct methodologies: qualitative and quantitative analytics. These methods, which are typical to data analytics in general, are crucial to the interpretation of learning behaviors and outcomes. This blog will explore the nuances that distinguish qualitative and quantitative research, while uncovering their shared roles in learning analytics, program design and instruction.

What is qualitative data?

Qualitative data is descriptive and includes information that is non-numerical. Qualitative research is used to gather in-depth insights that can't be easily measured on a scale, such as opinions, anecdotes and emotions. In learning analytics, qualitative data could include in-depth interviews, text responses to a prompt, or a video of a class period. 1

What is quantitative data?

Quantitative data is information that has a numerical value. Quantitative research is conducted to gather measurable data used in statistical analysis. Researchers can use quantitative studies to identify patterns and trends. In learning analytics, quantitative data could include test scores, student demographics, or the amount of time spent in a lesson. 2

Key difference between qualitative and quantitative data

It's important to understand the differences between qualitative and quantitative data, both to determine the appropriate research methods for a study and to gain insights that you can be confident in sharing.

Data Types and Nature

Examples of qualitative data types in learning analytics:

  • Observational data of human behavior from classroom settings such as student engagement, teacher-student interactions, and classroom dynamics
  • Textual data from open-ended survey responses, reflective journals, and written assignments
  • Feedback and discussions from focus groups or interviews
  • Content analysis from various media

Examples of quantitative data types:

  • Standardized test, assessment, and quiz scores
  • Grades and grade point averages
  • Attendance records
  • Time spent on learning tasks
  • Data gathered from learning management systems (LMS), including login frequency, online participation, and completion rates of assignments

Methods of Collection

Qualitative and quantitative research methods for data collection can occasionally seem similar, so it's important to note the differences to make sure you're creating a consistent data set and will be able to reliably draw conclusions from your data.

Qualitative research methods

Because of the nature of qualitative data (complex, detailed information), the research methods used to collect it are more involved. Qualitative researchers might do the following to collect data:

  • Conduct interviews to learn about subjective experiences
  • Host focus groups to gather feedback and personal accounts
  • Observe in-person or use audio or video recordings to record nuances of human behavior in a natural setting
  • Distribute surveys with open-ended questions

Quantitative research methods

Quantitative data collection methods are more diverse and more likely to be automated because of the objective nature of the data. A quantitative researcher could employ methods such as:

  • Surveys with close-ended questions that gather numerical data like birthdates or preferences
  • Observational research that records measurable information, like the number of students in a classroom
  • Automated numerical data collection, like backend system logs of button clicks and page views (a small sketch of this follows the list)
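
To illustrate the last point, here is a minimal sketch of turning raw, automatically logged events into quantitative data. The event records and field names are invented for the example, not taken from any particular LMS:

```python
from collections import Counter

# Hypothetical raw event log exported from an LMS backend.
events = [
    {"student_id": "s01", "event": "page_view"},
    {"student_id": "s01", "event": "button_click"},
    {"student_id": "s02", "event": "page_view"},
    {"student_id": "s01", "event": "page_view"},
    {"student_id": "s02", "event": "button_click"},
]

# Aggregate the raw events into numerical data: page views per student.
page_views_per_student = Counter(
    e["student_id"] for e in events if e["event"] == "page_view"
)
print(page_views_per_student)  # Counter({'s01': 2, 's02': 1})
```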

Analysis techniques

Qualitative and quantitative data can both be very informative. However, research studies require critical thinking for productive analysis.

Qualitative data analysis methods

Analyzing qualitative data takes a number of steps. When you first get all your data in one place, review it and take notes on trends you think you're seeing, along with your initial reactions. Next, organize the qualitative data you've collected by assigning it categories. Your central research question will guide your categorization, whether it's by date, location, type of collection method (interview vs. focus group, etc.), the specific question asked, or something else. Then you'll code your data. Whereas categorizing is focused on how the data was collected, coding is the process of identifying and labeling themes within the data to get closer to answering your research questions. Finally comes data interpretation: review the information you've gathered, including your coding labels, to see which results occur frequently and what other conclusions you can draw. 3
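
As a toy illustration of the coding and interpretation steps, the sketch below tallies how often invented theme labels appear across a handful of made-up responses. In practice a human reader assigns the codes; the keyword matching here only mimics the bookkeeping:

```python
# Toy sketch of coding qualitative data: label each response with themes
# (codes), then count how often each theme occurs. Responses and keywords
# are invented for illustration.
responses = [
    "The group discussions helped me understand the material.",
    "I felt anxious about the timed quizzes.",
    "Working with peers made the class more engaging.",
    "The quizzes were stressful but the feedback was useful.",
]

codes = {
    "peer_learning": ["group", "peers"],
    "assessment_anxiety": ["anxious", "stressful"],
    "feedback": ["feedback"],
}

theme_counts = {theme: 0 for theme in codes}
for response in responses:
    text = response.lower()
    for theme, keywords in codes.items():
        if any(keyword in text for keyword in keywords):
            theme_counts[theme] += 1

print(theme_counts)
# {'peer_learning': 2, 'assessment_anxiety': 2, 'feedback': 1}
```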

Quantitative analysis techniques

The process to analyze quantitative data can be time-consuming due to the large volume of data it's possible to collect. When approaching a quantitative data set, start by focusing on the purpose of your evaluation. Without making a conclusion, determine how you will use the information gained from the analysis; for example: the answers to this survey about study habits will help determine what type of exam review session will be most useful to a class. 4

Next, you need to decide who is analyzing the data and set parameters for analysis. For example, if two different researchers are evaluating survey responses that rank preferences on a scale from 1 to 5, they need to be operating with the same understanding of the rankings. You wouldn't want one researcher to classify the value of 3 to be a positive preference while the other considers it a negative preference. It's also ideal to have some type of data management system to store and organize your data, such as a spreadsheet or database. Within the database, or via an export to data analysis software, the collected data needs to be cleaned of things like responses left blank, duplicate answers from respondents, and questions that are no longer considered relevant. Finally, you can use statistical software to analyze data (or complete a manual analysis) to find patterns and summarize your findings. 4
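
Here is a minimal sketch of the cleaning and summarizing steps described above, assuming the pandas library and an invented survey export (the column names and values are illustrative only):

```python
import pandas as pd

# Hypothetical survey export: study-habit responses on a 1-5 scale.
raw = pd.DataFrame(
    {
        "respondent_id": [1, 2, 2, 3, 4, 5],
        "hours_studied": [5, 3, 3, None, 8, 2],
        "review_preference": [4, 5, 5, 3, None, 2],
    }
)

cleaned = (
    raw.drop_duplicates(subset="respondent_id")  # duplicate answers from respondents
       .dropna()                                 # responses left blank
)

# Summarize the cleaned numerical data to look for patterns.
print(cleaned[["hours_studied", "review_preference"]].describe())
```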

Qualitative and quantitative research tools

From the nuanced, thematic exploration enabled by tools like NVivo and ATLAS.ti, to the statistical precision of SPSS and R for quantitative analysis, each suite of data analysis tools offers tailored functionalities that cater to the distinct natures of different data types.

Qualitative research software:

NVivo: NVivo is qualitative data analysis software that can do everything from transcribing recordings to creating word clouds and evaluating uploads for different sentiments and themes. NVivo is just one tool from the company Lumivero, which offers whole suites of data processing software. 5

ATLAS.ti: Similar to NVivo, ATLAS.ti allows researchers to upload and import data from a variety of sources, tag and refine it using machine learning, and present it with visualizations ready to insert into reports. 6

SPSS: SPSS is a statistical analysis tool for quantitative research, appreciated for its user-friendly interface and comprehensive statistical tests, which makes it ideal for educators and researchers. With SPSS researchers can manage and analyze large quantitative data sets, use advanced statistical procedures and modeling techniques, predict customer behaviors, forecast market trends and more. 7

R: R is a versatile and dynamic open-source tool for quantitative analysis. With a vast repository of packages tailored to specific statistical methods, researchers can perform anything from basic descriptive statistics to complex predictive modeling. R is especially useful for its ability to handle large datasets, making it ideal for educational institutions that generate substantial amounts of data. The programming language offers flexibility in customizing analysis and creating publication-quality visualizations to effectively communicate results. 8

Applications in Educational Research

Both quantitative and qualitative data can be employed in learning analytics to drive informed decision-making and pedagogical enhancements. In the classroom, quantitative data like standardized test scores and online course analytics create a foundation for assessing and benchmarking student performance and engagement. Qualitative insights gathered from surveys, focus group discussions, and reflective student journals offer a more nuanced understanding of learners' experiences and the contextual factors influencing their education. Additionally, feedback and practical engagement metrics blend these data types, providing a holistic view that informs curriculum development, instructional strategies, and personalized learning pathways. Through these varied data sets and uses, educators can piece together a more complete narrative of student success and the impacts of educational interventions.

Master Data Analysis with an M.S. in Learning Sciences From SMU

Whether it is the detailed narratives unearthed through qualitative data or the informative patterns derived from quantitative analysis, both qualitative and quantitative data can provide crucial information for educators and researchers to better understand and improve learning. Dive deeper into the art and science of learning analytics with SMU's online Master of Science in the Learning Sciences program . At SMU, innovation and inquiry converge to empower the next generation of educators and researchers. Choose the Learning Analytics Specialization to learn how to harness the power of data science to illuminate learning trends, devise impactful strategies, and drive educational innovation. You could also find out how advanced technologies like augmented reality (AR), virtual reality (VR), and artificial intelligence (AI) can revolutionize education, and develop the insight to apply embodied cognition principles to enhance learning experiences in the Learning and Technology Design Specialization , or choose your own electives to build a specialization unique to your interests and career goals.

For more information on our curriculum and to become part of a community where data drives discovery, visit SMU's MSLS program website or schedule a call with our admissions outreach advisors for any queries or further discussion. Take the first step towards transforming education with data today.

  1. Retrieved on August 8, 2024, from nnlm.gov/guides/data-glossary/qualitative-data
  2. Retrieved on August 8, 2024, from nnlm.gov/guides/data-glossary/quantitative-data
  3. Retrieved on August 8, 2024, from cdc.gov/healthyyouth/evaluation/pdf/brief19.pdf
  4. Retrieved on August 8, 2024, from cdc.gov/healthyyouth/evaluation/pdf/brief20.pdf
  5. Retrieved on August 8, 2024, from lumivero.com/solutions/
  6. Retrieved on August 8, 2024, from atlasti.com/
  7. Retrieved on August 8, 2024, from ibm.com/products/spss-statistics
  8. Retrieved on August 8, 2024, from cran.r-project.org/doc/manuals/r-release/R-intro.html#Introduction-and-preliminaries


bioRxiv

Environmental DNA Transport at an Offshore Mesophotic Bank in the Northwestern Gulf of Mexico

Luke J McCartin, Annette F Govindarajan, Jill M McDermott, Santiago Herrera

Accurately constraining the transport of environmental DNA (eDNA) after it is shed from an animal is vital to appropriately geolocate species detections and biodiversity measurements from eDNA sequencing data. Modeling studies predict the horizontal transport of eDNA at concentrations detectable using quantitative PCR over scales of tens of kilometers, but more limited vertical transport. Field studies routinely find that eDNA metabarcoding data distinguishes biological communities at small spatial scales, over scales of tens of meters. Here, we leverage the unique bathymetry of an offshore, mesophotic bank and the benthic invertebrate community that it supports to determine the extent to which vertical and horizontal eDNA transport may affect the interpretation of species detections from eDNA metabarcoding data. We found that in a stratified water column, eDNA from benthic invertebrates was vertically constrained to depths close to the seafloor in the thermocline versus in the surface mixed layer above. However, when using primers that are taxonomically specific to corals, we found evidence for the horizontal transport of coral eDNA at distances at least ~1.5 km from where they can be reasonably expected to occur. On the contrary, there was minimal evidence for horizontal transport of benthic eDNA in data generated using primers that broadly targeted sequences from eukaryotes. These results highlight the importance of horizontal transport as well as considering methodological details, like the taxonomic specificity of PCR primers, when interpreting eDNA sequencing data.

Competing Interest Statement

The authors have declared no competing interest.



