
Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney. Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

For example, Boyle and colleagues conducted a systematic review that answered the question “What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?”

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs. meta-analysis
  • Systematic review vs. literature review
  • Systematic review vs. scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Other interesting articles
  • Frequently asked questions about systematic reviews

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias. The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative (qualitative), quantitative, or both.


Systematic reviews often quantitatively synthesize the evidence using a meta-analysis. A meta-analysis is a statistical analysis, not a type of review.

A meta-analysis is a technique to synthesize results from multiple studies. It’s a statistical analysis that combines the results of two or more studies, usually to estimate an effect size.
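To make the idea concrete, here is a minimal sketch of the arithmetic behind a fixed-effect meta-analysis, which pools effect sizes using inverse-variance weights. The effect sizes and standard errors are invented for illustration; real reviews use dedicated software and often random-effects models.

```python
# Fixed-effect (inverse-variance) pooling of effect sizes.
# The effect sizes and standard errors below are invented for illustration.

def pooled_effect(effects, std_errors):
    """Combine per-study effect sizes into one weighted summary estimate."""
    weights = [1 / se ** 2 for se in std_errors]  # precise studies weigh more
    total = sum(weights)
    estimate = sum(w * e for w, e in zip(weights, effects)) / total
    pooled_se = (1 / total) ** 0.5  # standard error of the summary estimate
    return estimate, pooled_se

# Three hypothetical studies (e.g., standardized mean differences):
est, se = pooled_effect([0.30, 0.10, 0.25], [0.15, 0.10, 0.20])
print(f"pooled effect = {est:.3f}, "
      f"95% CI = ({est - 1.96 * se:.3f}, {est + 1.96 * se:.3f})")
```

Because of the weighting, a large, precise study influences the summary estimate more than a small, noisy one.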

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.



A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention, such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question, usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software. For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

Systematic reviews have many pros.

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent, so they can be scrutinized by others.
  • They’re thorough: they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons.

  • They’re time-consuming.
  • They’re narrow in scope: they only answer the precise research question.

Below, the seven steps of conducting a systematic review are explained with an example from a published review of probiotics for treating eczema by Boyle and colleagues.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO:

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P?

Sometimes, you may want to include a fifth component, the type of study design. In this case, the acronym is PICOT.

  • Type of study design(s)

For example, Boyle and colleagues’ question about probiotics and eczema contained the following components:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo, or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized controlled trials, a type of study design

Boyle and colleagues’ full research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information: Provide the context of the research question, including why it’s important.
  • Research objective(s): Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee. This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov.

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus. Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant.
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD). In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.
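The database-search advice above — several synonyms per concept, joined with Boolean operators — can be sketched as a small helper that assembles a query string. The synonym lists below are invented examples, not the actual terms used in the example review.

```python
# Assemble a Boolean search string: synonyms within a concept are OR-ed,
# and the concepts are AND-ed together. Terms are illustrative only.

def build_query(*synonym_groups):
    clauses = ["(" + " OR ".join(f'"{term}"' for term in group) + ")"
               for group in synonym_groups]
    return " AND ".join(clauses)

query = build_query(
    ["eczema", "atopic dermatitis"],               # population concept
    ["probiotic", "probiotics", "lactobacillus"],  # intervention concept
)
print(query)
# → ("eczema" OR "atopic dermatitis") AND ("probiotic" OR "probiotics" OR "lactobacillus")
```

Real databases differ in their query syntax (field tags, truncation symbols), so a string like this usually needs per-database adaptation.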

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as EndNote.

For example, Boyle and colleagues searched the following sources:

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol. The third person’s job is to break any ties.

To increase inter-rater reliability, ensure that everyone thoroughly understands the selection criteria before you begin.
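Agreement between the two screeners can be quantified with Cohen’s kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch with invented include/exclude decisions:

```python
# Cohen's kappa for two raters; the decision lists are invented.

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    labels = set(rater_a) | set(rater_b)
    # Chance agreement: product of each rater's label proportions, summed
    expected = sum((rater_a.count(lab) / n) * (rater_b.count(lab) / n)
                   for lab in labels)
    return (observed - expected) / (1 - expected)

a = ["include", "exclude", "include", "exclude", "include", "exclude"]
b = ["include", "exclude", "exclude", "exclude", "include", "exclude"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # → kappa = 0.67
```

A kappa near 1 indicates strong agreement; values much lower suggest the selection criteria need to be clarified before screening continues.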

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts: Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram.
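The record-keeping that feeds a PRISMA flow diagram can be as simple as a log of every screening decision, tallied by stage and outcome. A sketch using hypothetical records:

```python
# Tally screening decisions into the counts a PRISMA flow diagram reports.
# The log entries below are hypothetical.

from collections import Counter

screening_log = [
    {"id": "study-001", "stage": "title/abstract", "decision": "exclude", "reason": "not an RCT"},
    {"id": "study-002", "stage": "title/abstract", "decision": "include", "reason": ""},
    {"id": "study-003", "stage": "full text", "decision": "exclude", "reason": "wrong population"},
    {"id": "study-004", "stage": "full text", "decision": "include", "reason": ""},
]

counts = Counter((r["stage"], r["decision"]) for r in screening_log)
for (stage, decision), n in sorted(counts.items()):
    print(f"{stage} / {decision}: {n}")
```

Keeping the exclusion reason with each record makes it straightforward to report why each full-text article was excluded, as PRISMA requires.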

In the example review, Boyle and colleagues retrieved the full texts of the studies that remained after title and abstract screening. Boyle and Tang then read the articles to decide whether any further studies should be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results. The exact information will depend on your research question, but it might include the year, study design, sample size, context, research findings, and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgment of the quality of the evidence, including risk of bias.

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group.
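One way to keep extraction consistent is to mirror the form in a small data structure, so every included study yields exactly the same fields. The field names here are illustrative, not taken from a published form:

```python
# A structured extraction form: each included study fills the same fields.
# Field names are illustrative, not from a published form template.

from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ExtractionRecord:
    study_id: str
    year: int
    design: str                   # e.g., "RCT"
    sample_size: int
    effect_size: Optional[float]  # None if not reported by the study
    risk_of_bias: str             # e.g., "low", "some concerns", "high"

record = ExtractionRecord(study_id="study-002", year=2005, design="RCT",
                          sample_size=120, effect_size=0.25, risk_of_bias="low")
print(asdict(record))
```

Because every record has the same shape, the two extractors’ results can be compared field by field to spot disagreements.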

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

Boyle and colleagues also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative (qualitative): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative: Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis, which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract: A summary of the review
  • Introduction: Including the rationale and objectives
  • Methods: Including the selection criteria, search method, data extraction method, and synthesis method
  • Results: Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion: Including interpretation of the results and limitations of the review
  • Conclusion: The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist.

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews, and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema.

Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and alternative hypotheses
  • Chi-square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

A literature review is a survey of credible sources on a topic, often used in dissertations, theses, and research papers. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Turney, S. (2023, November 20). Systematic Review | Definition, Example & Guide. Scribbr. Retrieved August 12, 2024, from https://www.scribbr.com/methodology/systematic-review/


There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarise yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.


Turney, S. (2024, July 17). Systematic Review | Definition, Examples & Guide. Scribbr. Retrieved 12 August 2024, from https://www.scribbr.co.uk/research-methods/systematic-reviews/


Introduction to Systematic Reviews

  • Reference work entry
  • First Online: 20 July 2022
  • pp 2159–2177

  • Tianjing Li 3 ,
  • Ian J. Saldanha 4 &
  • Karen A. Robinson 5  


A systematic review identifies and synthesizes all relevant studies that fit prespecified criteria to answer a research question. Systematic review methods can be used to answer many types of research questions. The type of question most relevant to trialists is the effects of treatments and is thus the focus of this chapter. We discuss the motivation for and importance of performing systematic reviews and their relevance to trialists. We introduce the key steps in completing a systematic review, including framing the question, searching for and selecting studies, collecting data, assessing risk of bias in included studies, conducting a qualitative synthesis and a quantitative synthesis (i.e., meta-analysis), grading the certainty of evidence, and writing the systematic review report. We also describe how to identify systematic reviews and how to assess their methodological rigor. We discuss the challenges and criticisms of systematic reviews, and how technology and innovations, combined with a closer partnership between trialists and systematic reviewers, can help identify effective and safe evidence-based practices more quickly.



Li, T., Saldanha, I.J., Robinson, K.A. (2022). Introduction to Systematic Reviews. In: Piantadosi, S., Meinert, C.L. (eds) Principles and Practice of Clinical Trials. Springer, Cham. https://doi.org/10.1007/978-3-319-52636-2_194



Review Typologies

There are many types of evidence synthesis projects, of which the systematic review is only one. The choice of review type depends entirely on the research question; not all research questions are well suited to a systematic review.

  • Review Typologies (from LITR-EX): This site explores different review methodologies, such as systematic, scoping, realist, narrative, state-of-the-art, meta-ethnography, critical, and integrative reviews. The LITR-EX site has a health professions education focus, but its advice is widely applicable.

Review the table below to compare review types and their associated methodologies. Librarians can also help your team determine which review type is appropriate for your project.

Reproduced from Grant, M. J. and Booth, A. (2009), A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26: 91-108.  doi:10.1111/j.1471-1842.2009.00848.x

Critical review
  • Description: Aims to demonstrate that the writer has extensively researched the literature and critically evaluated its quality. Goes beyond mere description to include a degree of analysis and conceptual innovation. Typically results in a hypothesis or model.
  • Search: Seeks to identify the most significant items in the field.
  • Appraisal: No formal quality assessment. Attempts to evaluate according to contribution.
  • Synthesis: Typically narrative, perhaps conceptual or chronological.
  • Analysis: Significant component: seeks to identify conceptual contribution to embody existing or derive new theory.

Literature review
  • Description: Generic term: published materials that provide an examination of recent or current literature. Can cover a wide range of subjects at various levels of completeness and comprehensiveness. May include research findings.
  • Search: May or may not include comprehensive searching.
  • Appraisal: May or may not include quality assessment.
  • Synthesis: Typically narrative.
  • Analysis: May be chronological, conceptual, thematic, etc.

Mapping review / systematic map
  • Description: Maps out and categorizes existing literature, from which to commission further reviews and/or primary research by identifying gaps in the research literature.
  • Search: Completeness of searching determined by time/scope constraints.
  • Appraisal: No formal quality assessment.
  • Synthesis: May be graphical and tabular.
  • Analysis: Characterizes quantity and quality of literature, perhaps by study design and other key features. May identify the need for primary or secondary research.

Meta-analysis
  • Description: Technique that statistically combines the results of quantitative studies to provide a more precise estimate of effect.
  • Search: Aims for exhaustive, comprehensive searching. May use a funnel plot to assess completeness.
  • Appraisal: Quality assessment may determine inclusion/exclusion and/or sensitivity analyses.
  • Synthesis: Graphical and tabular with narrative commentary.
  • Analysis: Numerical analysis of measures of effect, assuming absence of heterogeneity.

Mixed studies review / mixed methods review
  • Description: Refers to any combination of methods where one significant component is a literature review (usually systematic). Within a review context, it refers to a combination of review approaches, for example combining quantitative with qualitative research, or outcome with process studies.
  • Search: Requires either a very sensitive search to retrieve all studies, or separately conceived quantitative and qualitative strategies.
  • Appraisal: Requires either a generic appraisal instrument or separate appraisal processes with corresponding checklists.
  • Synthesis: Typically both components are presented as narrative and in tables. May also employ graphical means of integrating quantitative and qualitative studies.
  • Analysis: May characterize both literatures and look for correlations between characteristics, or use gap analysis to identify aspects present in one literature but missing in the other.

Overview
  • Description: Generic term: summary of the [medical] literature that attempts to survey the literature and describe its characteristics.
  • Search: May or may not include comprehensive searching (depends on whether it is a systematic overview).
  • Appraisal: May or may not include quality assessment (depends on whether it is a systematic overview).
  • Synthesis: Depends on whether systematic or not. Typically narrative, but may include tabular features.
  • Analysis: May be chronological, conceptual, thematic, etc.

Qualitative systematic review / qualitative evidence synthesis
  • Description: Method for integrating or comparing the findings from qualitative studies. Looks for ‘themes’ or ‘constructs’ that lie in or across individual qualitative studies.
  • Search: May employ selective or purposive sampling.
  • Appraisal: Quality assessment typically used to mediate messages, not for inclusion/exclusion.
  • Synthesis: Qualitative, narrative synthesis.
  • Analysis: Thematic analysis; may include conceptual models.

Rapid review
  • Description: Assessment of what is already known about a policy or practice issue, using systematic review methods to search and critically appraise existing research.
  • Search: Completeness of searching determined by time constraints.
  • Appraisal: Time-limited formal quality assessment.
  • Synthesis: Typically narrative and tabular.
  • Analysis: Quantities of literature and overall quality/direction of effect of literature.

Scoping review
  • Description: Preliminary assessment of the potential size and scope of the available research literature. Aims to identify the nature and extent of research evidence (usually including ongoing research).
  • Search: Completeness of searching determined by time/scope constraints. May include research in progress.
  • Appraisal: No formal quality assessment.
  • Synthesis: Typically tabular with some narrative commentary.
  • Analysis: Characterizes quantity and quality of literature, perhaps by study design and other key features. Attempts to specify a viable review.

State-of-the-art review
  • Description: Tends to address more current matters, in contrast to other combined retrospective and current approaches. May offer new perspectives.
  • Search: Aims for comprehensive searching of current literature.
  • Appraisal: No formal quality assessment.
  • Synthesis: Typically narrative; may have tabular accompaniment.
  • Analysis: Current state of knowledge and priorities for future investigation and research.

Systematic review
  • Description: Seeks to systematically search for, appraise, and synthesize research evidence, often adhering to guidelines on the conduct of a review.
  • Search: Aims for exhaustive, comprehensive searching.
  • Appraisal: Quality assessment may determine inclusion/exclusion.
  • Synthesis: Typically narrative with tabular accompaniment.
  • Analysis: What is known; recommendations for practice. What remains unknown; uncertainty around findings; recommendations for future research.

Systematic search and review
  • Description: Combines the strengths of a critical review with a comprehensive search process. Typically addresses broad questions to produce a ‘best evidence synthesis’.
  • Search: Aims for exhaustive, comprehensive searching.
  • Appraisal: May or may not include quality assessment.
  • Synthesis: Minimal narrative; tabular summary of studies.
  • Analysis: What is known; recommendations for practice. Limitations.

Systematized review
  • Description: Attempts to include elements of the systematic review process while stopping short of a full systematic review. Typically conducted as a postgraduate student assignment.
  • Search: May or may not include comprehensive searching.
  • Appraisal: May or may not include quality assessment.
  • Synthesis: Typically narrative with tabular accompaniment.
  • Analysis: What is known; uncertainty around findings; limitations of methodology.

Umbrella review
  • Description: Specifically refers to a review compiling evidence from multiple reviews into one accessible and usable document. Focuses on a broad condition or problem for which there are competing interventions, and highlights reviews that address these interventions and their results.
  • Search: Identification of component reviews, but no search for primary studies.
  • Appraisal: Quality assessment of studies within component reviews and/or of the reviews themselves.
  • Synthesis: Graphical and tabular with narrative commentary.
  • Analysis: What is known; recommendations for practice. What remains unknown; recommendations for future research.

  • Last Updated: Jun 18, 2024 9:41 AM
  • URL: https://guides.mclibrary.duke.edu/sysreview

Study Design 101: Systematic Review

A document often written by a panel that provides a comprehensive review of all relevant studies on a particular clinical or health-related topic/question. The systematic review is created after reviewing and combining all the information from both published and unpublished studies (focusing on clinical trials of similar treatments) and then summarizing the findings.

Advantages

  • Exhaustive review of the current literature and other sources (unpublished studies, ongoing research)
  • Less costly to review prior studies than to create a new study
  • Less time required than conducting a new study
  • Results can be generalized to the broader population more readily than those of a single study
  • More reliable and accurate than individual studies
  • Considered an evidence-based resource

Disadvantages

  • Very time-consuming
  • May not be easy to combine studies

Design pitfalls to look out for

Studies included in systematic reviews may be of varying study designs, but should collectively be studying the same outcome.

Is each study included in the review studying the same variables?

Some reviews group and analyze studies by variables, such as age and gender, that were not allocated to participants.

Do the analyses in the systematic review fit the variables being studied in the original studies?

Fictitious Example

Does the regular wearing of ultraviolet-blocking sunscreen prevent melanoma? An exhaustive literature search was conducted, resulting in 54 studies on sunscreen and melanoma. Each study was then evaluated to determine whether it focused specifically on ultraviolet-blocking sunscreen and melanoma prevention; 30 of the 54 studies were retained. The 30 retained studies were reviewed and showed a strong association between daily sunscreen use and a reduced incidence of melanoma.

Real-life Examples

Yang, J., Chen, J., Yang, M., Yu, S., Ying, L., Liu, G., ... Liang, F. (2018). Acupuncture for hypertension. The Cochrane Database of Systematic Reviews, 11 (11), CD008821. https://doi.org/10.1002/14651858.CD008821.pub2

This systematic review analyzed twenty-two randomized controlled trials to determine whether acupuncture is a safe and effective way to lower blood pressure in adults with primary hypertension. Due to the low quality of evidence in these studies and lack of blinding, it is not possible to link any short-term decrease in blood pressure to the use of acupuncture. Additional research is needed to determine if there is an effect due to acupuncture that lasts at least seven days.

Parker, H.W. and Vadiveloo, M.K. (2019). Diet quality of vegetarian diets compared with nonvegetarian diets: a systematic review. Nutrition Reviews , https://doi.org/10.1093/nutrit/nuy067

This systematic review was interested in comparing the diet quality of vegetarian and non-vegetarian diets. Twelve studies were included. Vegetarians more closely met recommendations for total fruit, whole grains, seafood and plant protein, and sodium intake. In nine of the twelve studies, vegetarians had higher overall diet quality compared to non-vegetarians. These findings may explain better health outcomes in vegetarians, but additional research is needed to remove any possible confounding variables.

Related Terms

Cochrane Database of Systematic Reviews

A highly-regarded database of systematic reviews prepared by The Cochrane Collaboration , an international group of individuals and institutions who review and analyze the published literature.

Exclusion Criteria

The set of conditions that characterize some individuals and result in their being excluded from the study (e.g., other health conditions, taking specific medications). Since systematic reviews seek to include all relevant studies, exclusion criteria are not generally used in this situation.

Inclusion Criteria

The set of conditions that studies must meet to be included in the review (or for individual studies - the set of conditions that participants must meet to be included in the study; often comprises age, gender, disease type and status, etc.).

Now test yourself!

1. Systematic Reviews are similar to Meta-Analyses, except they do not include a statistical analysis quantitatively combining all the studies.

a) True b) False

2. The panels writing Systematic Reviews may include which of the following publication types in their review?

a) Published studies b) Unpublished studies c) Cohort studies d) Randomized Controlled Trials e) All of the above


  • Last Updated: Sep 25, 2023 10:59 AM
  • URL: https://guides.himmelfarb.gwu.edu/studydesign101


1.2.2  What is a systematic review?

A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria in order to answer a specific research question. It uses explicit, systematic methods that are selected with a view to minimizing bias, thus providing more reliable findings from which conclusions can be drawn and decisions made (Antman 1992, Oxman 1993). The key characteristics of a systematic review are:

  • a clearly stated set of objectives with pre-defined eligibility criteria for studies;
  • an explicit, reproducible methodology;
  • a systematic search that attempts to identify all studies that would meet the eligibility criteria;
  • an assessment of the validity of the findings of the included studies, for example through the assessment of risk of bias; and
  • a systematic presentation, and synthesis, of the characteristics and findings of the included studies.

Many systematic reviews contain meta-analyses. Meta-analysis is the use of statistical methods to summarize the results of independent studies (Glass 1976). By combining information from all relevant studies, meta-analyses can provide more precise estimates of the effects of health care than those derived from the individual studies included within a review (see Chapter 9, Section 9.1.3 ). They also facilitate investigations of the consistency of evidence across studies, and the exploration of differences across studies.
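To make the pooling step concrete, here is a minimal sketch of fixed-effect inverse-variance pooling, one common way a meta-analysis combines per-study estimates. This is not taken from the Handbook, and the effect sizes and standard errors are invented for illustration.

```python
# Minimal sketch of fixed-effect inverse-variance pooling.
# The effect sizes and standard errors below are invented for illustration.

def pooled_estimate(effects, std_errors):
    """Weight each study by the inverse of its variance, then average."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Three hypothetical studies: log odds ratios and their standard errors.
# Larger studies (smaller standard errors) pull the pooled estimate harder.
effects = [-0.4, -0.2, -0.3]
std_errors = [0.20, 0.10, 0.15]

est, se = pooled_estimate(effects, std_errors)
print(round(est, 3), round(se, 3))  # -0.256 0.077
```

Note that the pooled standard error (0.077) is smaller than any single study's, which is what "more precise estimates" means in practice.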

Systematic Reviews

  • What is a Systematic Review?

A systematic review is an evidence synthesis that uses explicit, reproducible methods to perform a comprehensive literature search and critical appraisal of individual studies and that uses appropriate statistical techniques to combine these valid studies.

Key Characteristics of a Systematic Review:

Generally, systematic reviews must have:

  • a clearly stated set of objectives with pre-defined eligibility criteria for studies
  • an explicit, reproducible methodology
  • a systematic search that attempts to identify all studies that would meet the eligibility criteria
  • an assessment of the validity of the findings of the included studies, for example through the assessment of the risk of bias
  • a systematic presentation, and synthesis, of the characteristics and findings of the included studies.

A meta-analysis is a systematic review that uses quantitative methods to synthesize and summarize the pooled data from included studies.

Additional Information

  • How-to Books
  • Beyond Health Sciences


  • Cochrane Handbook For Systematic Reviews of Interventions Provides guidance to authors for the preparation of Cochrane Intervention reviews. Chapter 6 covers searching for reviews.
  • Systematic Reviews: CRD’s Guidance for Undertaking Reviews in Health Care From The University of York Centre for Reviews and Dissemination: Provides practical guidance for undertaking evidence synthesis based on a thorough understanding of systematic review methodology. It presents the core principles of systematic reviewing, and in complementary chapters, highlights issues that are specific to reviews of clinical tests, public health interventions, adverse effects, and economic evaluations.
  • Cornell, Systematic Reviews and Evidence Synthesis Beyond the Health Sciences Video series geared for librarians but very informative about searching outside medicine.

  • Last Updated: Aug 6, 2024 10:17 AM
  • URL: https://guides.library.ucdavis.edu/systematic-reviews


Research and Evidence-based Practice: Levels of Evidence and Study Designs


Evidence Pyramid

An evidence pyramid visually depicts the evidential strength of different research designs. The image below is one of several available renderings of an evidence pyramid. Studies with the highest internal validity, characterized by rigorous quantitative analysis, review, and stringent scientific methodology, are at the top of the pyramid. Observational research and expert opinion reside at the bottom of the pyramid.

[Image: evidence pyramid]

Which Research Designs for Which Questions?

Different types of research studies are better suited to answer different categories of clinical questions. You might not always find the highest level of evidence (i.e., systematic review or meta-analysis) to answer your question.  When this happens, work your way down the Evidence Pyramid to the next highest level of evidence.

Therapy : Which treatment does more good than harm?

RCT > Cohort Study  >  Case Control > Case Series

Diagnosis : Which diagnostic test should I use?

Prospective, blind comparison to a gold standard, i.e., a controlled trial that looks at patients with varying degrees of an illness and administers both diagnostic tests -- the test under investigation and the "gold standard" test -- to all of the patients in the study group.

Prognosis : What is the patient's likely clinical course over time?

Cohort Study > Case Control > Case Series

Etiology / Harm : What are the causes of this disease or condition?

RCT > Cohort Study > Case Control > Case Series

Prevention : How do we reduce the chance of disease by identifying and modifying risk factors?

RCT > Cohort Study > Case Control > Case Series

Cost : Is one intervention more cost-effective than another?

Economic Analysis

Quality of Life : What will be the patient's quality of life following an intervention?

Qualitative Study

Levels of Evidence

  Levels of Evidence
Level I Evidence from a systematic review or meta-analysis of all relevant RCTs (randomized controlled trials), or evidence-based clinical practice guidelines based on systematic reviews of RCTs, or three or more RCTs of good quality that have similar results.
Level II Evidence obtained from at least one well-designed RCT (e.g., a large multi-site RCT).
Level III Evidence obtained from well-designed controlled trials without randomization (i.e., quasi-experimental).
Level IV Evidence from well-designed case-control or cohort studies.
Level V Evidence from systematic reviews of descriptive and qualitative studies (meta-synthesis).
Level VI Evidence from a single descriptive or qualitative study.
Level VII Evidence from the opinion of authorities and/or reports of expert committees.

Types of Study Designs

Systematic Review:    A summary of the clinical literature. A systematic review is a critical assessment and evaluation of all research studies that address a particular clinical issue. The researchers use an organized method of locating, assembling, and evaluating a body of literature on a particular topic using a set of specific criteria. A systematic review typically includes a description of the findings of the collection of research studies. Cochrane Reviews are the gold standard!  (AHRQ Glossary of Terms)

Meta-Analysis :   A work consisting of studies using a quantitative method of combining the results of independent studies (usually drawn from the published literature) and synthesizing summaries and conclusions which may be used to evaluate therapeutic effectiveness, plan new studies, etc. It is often an overview of clinical trials. It is usually called a meta-analysis by the author or sponsoring body and should be differentiated from reviews of literature. (PubMed)

Evidence Guideline:   Systematically developed statement to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances (Institute of Medicine).  These have a rigorous development process.  An example is AHRQ Guidelines at guidelines.gov or Lippincott  Procedures .

Evidence Summary:   A summary of the evidence.

Randomized Controlled Trial:    A controlled clinical trial that randomly (by chance) assigns participants to two or more groups. There are various methods to randomize study participants to their groups. (AHRQ Glossary of Terms)

Controlled Clinical Trial:   A type of clinical trial comparing the effectiveness of one medication or treatment with the effectiveness of another medication or treatment. In many controlled trials, the other treatment is a placebo (inactive substance) and is considered the "control." (AHRQ Glossary of Terms)

Cohort Study:   A clinical research study in which people who presently have a certain condition or receive a particular treatment are followed over time and compared with another group of people who are not affected by the condition. (AHRQ Glossary of Terms)

Case Control Study :    The observational epidemiologic study of persons with the disease (or other outcome variable) of interest and a suitable control (comparison, reference) group of persons without the disease. The relationship of an attribute to the disease is examined by comparing the diseased and nondiseased with regard to how frequently the attribute is present or, if quantitative, the levels of the attribute, in each of the groups. (OCEBM Table of Evidence Glossary)

Case Series:   A group or series of case reports involving patients who were given similar treatment. Reports of case series usually contain detailed information about the individual patients. This includes demographic information (for example, age, gender, ethnic origin) and information on diagnosis, treatment, response to treatment, and follow-up after treatment. (OCEBM Table of Evidence Glossary)

Case Study :    An investigation of a single subject or a single unit, which could be a small number of individuals who seem to be representative of a larger group or very different from it. (Dictionary of Nursing Theory and Research, Fourth Edition)

Editorial:    Work consisting of a statement of the opinions, beliefs, and policy of the editor or publisher of a journal, usually on current matters of medical or scientific significance to the medical community or society at large. The editorials published by editors of journals representing the official organ of a society or organization are generally substantive. (PubMed)

Opinion:   A belief or conclusion held with confidence but not substantiated by positive knowledge or proof. (The Free Dictionary)

Animal Research:   A laboratory experiment using animals to study the development and progression of diseases. Animal studies also test how safe and effective new treatments are before they are tested in people.(NCI Dictionary of Cancer Terms)

In Vitro Research:   In the laboratory (outside the body). The opposite of in vivo (in the body). (NCI Dictionary of Cancer Terms)

  • Last Updated: May 30, 2023 4:20 PM
  • URL: https://marshfieldclinic.libguides.com/Research_and_EBP

Systematic Reviews: Levels of evidence and study design

Levels of evidence.

"Levels of Evidence" tables have been developed which outline and grade the best evidence. However, the review question will determine the choice of study design.

Secondary sources provide analysis, synthesis, interpretation, and evaluation of primary works. Secondary sources are not evidence but rather provide commentary on and discussion of evidence (e.g., a systematic review).

Primary sources contain the original data and analysis from research studies. No outside evaluation or interpretation is provided. An example of a primary literature source is a peer-reviewed research article. Other primary sources include preprints, theses, reports and conference proceedings.

Levels of evidence for primary sources fall into the following broad categories of study designs   (listed from highest to lowest):

  • Experimental : RCTs (Randomised Controlled Trials)
  • Quasi-experimental studies (Non-randomised control studies, Before-and-after study, Interrupted time series)
  • Observational studies (Cohort study, Case-control study, Case series) 

Based on information from Centre for Reviews and Dissemination. (2009). Systematic reviews: CRD's guidance for undertaking reviews in health care. Retrieved from http://www.york.ac.uk/inst/crd/index_guidance.htm

Hierarchy of Evidence Pyramid

"Levels of Evidence" are often represented in as a pyramid, with the highest level of evidence at the top:

[Image: hierarchy of evidence pyramid]

Types of Study Design

The following definitions are adapted from the Glossary in "Systematic reviews: CRD's Guidance for Undertaking Reviews in Health Care", Centre for Reviews and Dissemination, University of York:

  • Systematic Review The application of strategies that limit bias in the assembly, critical appraisal, and synthesis of all relevant studies on a specific topic and research question. 
  • Meta-analysis A systematic review which uses quantitative methods to summarise the results
  • Randomized control clinical trial (RCT) A group of patients is randomised into an experimental group and a control group. These groups are followed up for the variables/outcomes of interest.
  • Cohort study Involves the identification of two groups (cohorts) of patients, one which did receive the exposure of interest, and one which did not, and following these cohorts forward for the outcome of interest.
  • Case-control study Involves identifying patients who have the outcome of interest (cases) and control patients without the same outcome, and looking to see if they had the exposure of interest.
  • Critically appraised topic A short summary of an article from the literature, created to answer a specific clinical question.

  • Last Updated: May 15, 2024 11:15 AM
  • URL: https://ecu.au.libguides.com/systematic-reviews



Study designs: Part 7 - Systematic reviews

Affiliations.

  • 1 Department of Anaesthesiology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, Maharashtra, India.
  • 2 Director, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India.
  • PMID: 32670836
  • PMCID: PMC7342340
  • DOI: 10.4103/picr.PICR_84_20

In this series on research study designs, we have so far looked at different types of primary research designs which attempt to answer a specific question. In this segment, we discuss systematic review, which is a study design used to summarize the results of several primary research studies. Systematic reviews often also use meta-analysis, which is a statistical tool to mathematically collate the results of various research studies to obtain a pooled estimate of treatment effect; this will be discussed in the next article.

Keywords: Research design; review [publication type]; systematic review [publication type].

Copyright: © 2020 Perspectives in Clinical Research.


Conflict of interest statement

There are no conflicts of interest.



Evidence Synthesis and Systematic Reviews


Systematic Reviews


Definition : A systematic review is a summary of research results (evidence) that uses explicit and reproducible methods to systematically search, critically appraise, and synthesize the literature on a specific issue. It synthesizes the results of multiple primary studies related to each other by using strategies that reduce biases and errors.

When to use : If you want to identify, appraise, and synthesize all available research that is relevant to a particular question with reproducible search methods.

Limitations : It requires extensive time and a team.

Resources :

  • Systematic Reviews and Meta-analysis: Understanding the Best Evidence in Primary Healthcare
  • The 8 stages of a systematic review
  • Determining the scope of the review and the questions it will address
  • Reporting the review

Rapid Reviews

Definition : Rapid reviews are a form of evidence synthesis that may provide more timely information for decision making compared with standard systematic reviews.

When to use : When you want to evaluate new or emerging research topics using some systematic review methods at a faster pace

Limitations : It is not as rigorous or as thorough as a systematic review and therefore may be more likely to be biased

  • Cochrane guidance for rapid reviews
  • Steps for conducting a rapid review
  • Expediting systematic reviews: methods and implications of rapid reviews

Scoping Reviews

Definition : Scoping reviews are often used to categorize or group existing literature in a given field in terms of its nature, features, and volume.

When to use : To characterize a body of literature by time, location (e.g., country or context), source (e.g., peer-reviewed or grey literature), and origin (e.g., healthcare discipline or academic field); also used to clarify working definitions and conceptual boundaries of a topic or field, or to identify gaps in the existing literature.

Limitations : More citations to screen, and it takes as long as or longer than a systematic review. Larger teams may be required because of the larger volume of literature. The screening criteria and process differ from those of a systematic review.

  • PRISMA-ScR for scoping reviews
  • JBI Updated methodological guidance for the conduct of scoping reviews
  • JBI Manual: Scoping Reviews (2020)
  • Equator Network-Current Best Practices for the Conduct of Scoping Reviews
  • Last Updated: Jul 9, 2024 9:55 AM
  • URL: https://guides.temple.edu/systematicreviews



Literature Reviews: Types of Clinical Study Designs


Types of Study Designs

Meta-Analysis A way of combining data from many different research studies. A meta-analysis is a statistical process that combines the findings from individual studies.  Example :  Anxiety outcomes after physical activity interventions: meta-analysis findings .  Conn V.  Nurs Res . 2010 May-Jun;59(3):224-31.

Systematic Review A summary of the clinical literature. A systematic review is a critical assessment and evaluation of all research studies that address a particular clinical issue. The researchers use an organized method of locating, assembling, and evaluating a body of literature on a particular topic using a set of specific criteria. A systematic review typically includes a description of the findings of the collection of research studies. The systematic review may also include a quantitative pooling of data, called a meta-analysis.  Example :  Complementary and alternative medicine use among women with breast cancer: a systematic review.   Wanchai A, Armer JM, Stewart BR. Clin J Oncol Nurs . 2010 Aug;14(4):E45-55.

Randomized Controlled Trial A controlled clinical trial that randomly (by chance) assigns participants to two or more groups. There are various methods to randomize study participants to their groups.  Example :  Meditation or exercise for preventing acute respiratory infection: a randomized controlled trial .  Barrett B, et al.  Ann Fam Med . 2012 Jul-Aug;10(4):337-46.

Cohort Study (Prospective Observational Study) A clinical research study in which people who presently have a certain condition or receive a particular treatment are followed over time and compared with another group of people who are not affected by the condition.  Example : Smokeless tobacco cessation in South Asian communities: a multi-centre prospective cohort study . Croucher R, et al. Addiction. 2012 Dec;107 Suppl 2:45-52.

Case-control Study Case-control studies begin with the outcomes and do not follow people over time. Researchers choose people with a particular result (the cases) and interview the groups or check their records to ascertain what different experiences they had. They compare the odds of having an experience with the outcome to the odds of having an experience without the outcome.  Example :  Non-use of bicycle helmets and risk of fatal head injury: a proportional mortality, case-control study .  Persaud N, et al.  CMAJ . 2012 Nov 20;184(17):E921-3.
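The odds comparison described above reduces to simple arithmetic on a 2×2 table. A hypothetical sketch, with all counts invented for illustration:

```python
# Hypothetical sketch of case-control arithmetic: the odds ratio compares
# exposure odds among cases with exposure odds among controls.
# All counts are invented for illustration.

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    odds_in_cases = exposed_cases / unexposed_cases
    odds_in_controls = exposed_controls / unexposed_controls
    return odds_in_cases / odds_in_controls

# Say 30 of 100 cases had the exposure, versus 10 of 100 controls.
print(round(odds_ratio(30, 70, 10, 90), 2))  # 3.86
```

An odds ratio above 1 suggests the exposure is associated with the outcome; on its own it does not establish causation.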

Cross-sectional study The observation of a defined population at a single point in time or time interval. Exposure and outcome are determined simultaneously.  Example :  Fasting might not be necessary before lipid screening: a nationally representative cross-sectional study .  Steiner MJ, et al.  Pediatrics . 2011 Sep;128(3):463-70.

Case Reports and Series A report on a series of patients with an outcome of interest. No control group is involved.  Example :  Students mentoring students in a service-learning clinical supervision experience: an educational case report .  Lattanzi JB, et al.  Phys Ther . 2011 Oct;91(10):1513-24.

Ideas, Editorials, Opinions Put forth by experts in the field.  Example : Health and health care for the 21st century: for all the people . Koop CE.  Am J Public Health . 2006 Dec;96(12):2090-2.

Animal Research Studies Studies conducted using animal subjects.  Example : Intranasal leptin reduces appetite and induces weight loss in rats with diet-induced obesity (DIO) .  Schulz C, Paulus K, Jöhren O, Lehnert H.   Endocrinology . 2012 Jan;153(1):143-53.

Test-tube Lab Research "Test tube" experiments conducted in a controlled laboratory setting.

Adapted from Study Designs. In NICHSR Introduction to Health Services Research: a Self-Study Course.  http://www.nlm.nih.gov/nichsr/ihcm/06studies/studies03.html and Glossary of EBM Terms. http://www.cebm.utoronto.ca/glossary/index.htm#top  

Study Design Terminology

Bias - Any deviation of results or inferences from the truth, or processes leading to such deviation. Bias can result from several sources: one-sided or systematic variations in measurement from the true value (systematic error); flaws in study design; deviation of inferences, interpretations, or analyses based on flawed data or data collection; etc. There is no sense of prejudice or subjectivity implied in the assessment of bias under these conditions.

Case Control Studies - Studies which start with the identification of persons with a disease of interest and a control (comparison, referent) group without the disease. The relationship of an attribute to the disease is examined by comparing diseased and non-diseased persons with regard to the frequency or levels of the attribute in each group.

Causality - The relating of causes to the effects they produce. Causes are termed necessary when they must always precede an effect and sufficient when they initiate or produce an effect. Any of several factors may be associated with the potential disease causation or outcome, including predisposing factors, enabling factors, precipitating factors, reinforcing factors, and risk factors.

Control Groups - Groups that serve as a standard for comparison in experimental studies. They are similar in relevant characteristics to the experimental group but do not receive the experimental intervention.

Controlled Clinical Trials - Clinical trials involving one or more test treatments, at least one control treatment, specified outcome measures for evaluating the studied intervention, and a bias-free method for assigning patients to the test treatment. The treatment may be drugs, devices, or procedures studied for diagnostic, therapeutic, or prophylactic effectiveness. Control measures include placebos, active medicines, no-treatment, dosage forms and regimens, historical comparisons, etc. When randomization using mathematical techniques, such as the use of a random numbers table, is employed to assign patients to test or control treatments, the trials are characterized as Randomized Controlled Trials.

Cost-Benefit Analysis - A method of comparing the cost of a program with its expected benefits in dollars (or other currency). The benefit-to-cost ratio is a measure of total return expected per unit of money spent. This analysis generally excludes consideration of factors that are not measured ultimately in economic terms. Cost effectiveness compares alternative ways to achieve a specific set of results.

Cross-Over Studies - Studies comparing two or more treatments or interventions in which the subjects or patients, upon completion of the course of one treatment, are switched to another. In the case of two treatments, A and B, half the subjects are randomly allocated to receive these in the order A, B and half to receive them in the order B, A. A criticism of this design is that effects of the first treatment may carry over into the period when the second is given.

Cross-Sectional Studies - Studies in which the presence or absence of disease or other health-related variables are determined in each member of the study population or in a representative sample at one particular time. This contrasts with LONGITUDINAL STUDIES which are followed over a period of time.

Double-Blind Method - A method of studying a drug or procedure in which both the subjects and investigators are kept unaware of who is actually getting which specific treatment.

Empirical Research - The study, based on direct observation, use of statistical records, interviews, or experimental methods, of actual practices or the actual impact of practices or policies.

Evaluation Studies - Works consisting of studies determining the effectiveness or utility of processes, personnel, and equipment.

Genome-Wide Association Study - An analysis comparing the allele frequencies of all available (or a whole genome representative set of) polymorphic markers in unrelated patients with a specific symptom or disease condition, and those of healthy controls to identify markers associated with a specific disease or condition.

Intention to Treat Analysis - Strategy for the analysis of randomized controlled trials that compares patients in the groups to which they were originally randomly assigned, regardless of the treatment they actually received.

Logistic Models - Statistical models which describe the relationship between a qualitative dependent variable (that is, one which can take only certain discrete values, such as the presence or absence of a disease) and an independent variable. A common application is in epidemiology for estimating an individual's risk (probability of a disease) as a function of a given risk factor.

Longitudinal Studies - Studies in which variables relating to an individual or group of individuals are assessed over a period of time.

Lost to Follow-Up - Study subjects in cohort studies whose outcomes are unknown e.g., because they could not or did not wish to attend follow-up visits.

Matched-Pair Analysis - A type of analysis in which subjects in a study group and a comparison group are made comparable with respect to extraneous factors by individually pairing study subjects with the comparison group subjects (e.g., age-matched controls).

Meta-Analysis - Works consisting of studies using a quantitative method of combining the results of independent studies (usually drawn from the published literature) and synthesizing summaries and conclusions which may be used to evaluate therapeutic effectiveness, plan new studies, etc. It is often an overview of clinical trials. It is usually called a meta-analysis by the author or sponsoring body and should be differentiated from reviews of literature.

Numbers Needed To Treat - Number of patients who need to be treated in order to prevent one additional bad outcome. It is the inverse of Absolute Risk Reduction.
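
A worked example of NNT as the inverse of the absolute risk reduction, using hypothetical event rates:

```python
def number_needed_to_treat(control_event_rate, treated_event_rate):
    """NNT = 1 / absolute risk reduction (ARR)."""
    arr = control_event_rate - treated_event_rate
    return 1.0 / arr

# If 20% of control patients and 15% of treated patients have the bad
# outcome, ARR = 0.05, so 20 patients must be treated to prevent one.
print(round(number_needed_to_treat(0.20, 0.15), 1))  # 20.0
```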

Odds Ratio - The ratio of two odds. The exposure-odds ratio for case control data is the ratio of the odds in favor of exposure among cases to the odds in favor of exposure among noncases. The disease-odds ratio for a cohort or cross section is the ratio of the odds in favor of disease among the exposed to the odds in favor of disease among the unexposed. The prevalence-odds ratio refers to an odds ratio derived cross-sectionally from studies of prevalent cases.
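
For a case-control 2×2 table, the exposure-odds ratio defined above reduces to the cross-product (a·d)/(b·c). A sketch with invented counts:

```python
def odds_ratio(exposed_cases, unexposed_cases,
               exposed_controls, unexposed_controls):
    """Exposure-odds ratio: odds of exposure among cases vs. noncases."""
    case_odds = exposed_cases / unexposed_cases
    control_odds = exposed_controls / unexposed_controls
    return case_odds / control_odds

# 30 of 100 cases exposed vs. 10 of 100 controls exposed.
print(round(odds_ratio(30, 70, 10, 90), 2))  # 3.86
```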

Patient Selection - Criteria and standards used for the determination of the appropriateness of the inclusion of patients with specific conditions in proposed treatment plans and the criteria used for the inclusion of subjects in various clinical trials and other research protocols.

Predictive Value of Tests - In screening and diagnostic tests, the probability that a person with a positive test is a true positive (i.e., has the disease), is referred to as the predictive value of a positive test; whereas, the predictive value of a negative test is the probability that the person with a negative test does not have the disease. Predictive value is related to the sensitivity and specificity of the test.
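
The relation between predictive value, sensitivity, specificity, and prevalence can be made concrete with Bayes' rule. A sketch with hypothetical test characteristics, showing why the predictive value of a positive test drops when prevalence is low:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability of disease given a positive test (Bayes' rule)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# A 90%-sensitive, 95%-specific test where only 1% have the disease:
# most positives are false positives, so PPV is only about 15%.
print(round(positive_predictive_value(0.90, 0.95, 0.01), 3))  # 0.154
```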

Prospective Studies - Observation of a population for a sufficient number of persons over a sufficient number of years to generate incidence or mortality rates subsequent to the selection of the study group.

Qualitative Studies - Research that derives data from observation, interviews, or verbal interactions and focuses on the meanings and interpretations of the participants.

Quantitative Studies - Research that collects numerical data and analyzes it using statistical or other mathematical methods.

Random Allocation - A process involving chance used in therapeutic trials or other research endeavor for allocating experimental subjects, human or animal, between treatment and control groups, or among treatment groups. It may also apply to experiments on inanimate objects.
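
One common chance mechanism for the allocation described above (a sketch of one approach, not a prescribed method) is to shuffle the subject list and split it in half:

```python
import random

def randomly_allocate(subjects, seed=None):
    """Shuffle subjects, then split into treatment and control groups."""
    rng = random.Random(seed)  # seeded here only for reproducibility
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

treatment, control = randomly_allocate(range(1, 21), seed=42)
print(len(treatment), len(control))  # 10 10
```

In practice, trials use prespecified randomization schemes (e.g., random-numbers tables or blocked randomization) rather than an ad hoc shuffle, but the principle of chance-based assignment is the same.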

Randomized Controlled Trial - Clinical trials that involve at least one test treatment and one control treatment, concurrent enrollment and follow-up of the test- and control-treated groups, and in which the treatments to be administered are selected by a random process, such as the use of a random-numbers table.

Reproducibility of Results - The statistical reproducibility of measurements (often in a clinical context), including the testing of instrumentation or techniques to obtain reproducible results. The concept includes reproducibility of physiological measurements, which may be used to develop rules to assess probability or prognosis, or response to a stimulus; reproducibility of occurrence of a condition; and reproducibility of experimental results.

Retrospective Studies - Studies used to test etiologic hypotheses in which inferences about an exposure to putative causal factors are derived from data relating to characteristics of persons under study or to events or experiences in their past. The essential feature is that some of the persons under study have the disease or outcome of interest and their characteristics are compared with those of unaffected persons.

Sample Size - The number of units (persons, animals, patients, specified circumstances, etc.) in a population to be studied. The sample size should be large enough to give a high likelihood of detecting a true difference between two groups.

Sensitivity and Specificity - Binary classification measures used to assess test results. Sensitivity (or recall) is the proportion of people with the condition who test positive; specificity is the proportion of people without the condition who test negative.
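
In terms of raw counts (hypothetical numbers here), sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP):

```python
def sensitivity(true_positives, false_negatives):
    """Proportion of people with the condition correctly detected."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives, false_positives):
    """Proportion of people without the condition correctly ruled out."""
    return true_negatives / (true_negatives + false_positives)

# 80 of 100 diseased subjects flagged; 90 of 100 healthy subjects cleared.
print(sensitivity(80, 20), specificity(90, 10))  # 0.8 0.9
```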

Single-Blind Method - A method in which either the observer(s) or the subject(s) is kept ignorant of the group to which the subjects are assigned.

Time Factors - Elements of limited time intervals, contributing to particular results or situations.

Source:  NLM MeSH Database

  • Last Updated: Dec 29, 2023 11:41 AM
  • URL: https://research.library.gsu.edu/litrev

Scoping Review vs Systematic Review

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

  • Systematic reviews are designed to answer specific research questions with the goal of synthesizing evidence to inform clinical practice or policy decisions, such as determining the effectiveness of an intervention.
  • Scoping reviews are valuable tools for exploring broader research landscapes, clarifying concepts, and identifying research gaps.

How to Choose the Best Review for your Research Topic

The Cochrane Handbook states that the primary factor in deciding between a systematic review and a scoping review is the authors’ intention:

Do they aim to use the review’s results to answer a clinically meaningful question or to inform practice?

A systematic review is recommended if the objective is to evaluate the feasibility, appropriateness, meaningfulness, or effectiveness of a treatment or practice.

For example, “Is treatment A more effective than treatment B for condition C in population D?”

The goal is to produce a comprehensive, unbiased summary of the available evidence that can be directly applied to clinical decision-making.

Systematic reviews can address various aspects of healthcare beyond just effectiveness, including patient experiences and economic considerations.

They are often the foundation for developing evidence-based clinical practice guidelines.

Conversely, a scoping review is suitable when the focus is on identifying and discussing specific characteristics or concepts within the literature rather than generating direct clinical or policy recommendations.

Scoping reviews can be an excellent way for postgraduate students to gain a broad understanding of a field or to identify potential areas for more in-depth research.

If a research area has inconsistent terminology or definitions, a scoping review can map out how different concepts are used and potentially propose a unified understanding. This can help refine the focus and scope of a subsequent systematic review.

Key differences:

  • Systematic reviews aim to answer a specific question and typically involve a more rigorous, comprehensive search and analysis of the literature, including a detailed quality assessment of included studies.
  • Scoping reviews aim to map the key concepts and types of evidence available on a topic. While they follow a systematic approach, they typically do not include the same level of critical appraisal as systematic reviews.
  • Scoping reviews often have broader, more exploratory objectives than the focused question(s) in systematic reviews.
  • Scoping reviews map the available evidence, while systematic reviews synthesize and evaluate the evidence.
  • Scoping reviews typically use narrative synthesis, while systematic reviews may include meta-analysis.
  • Scoping reviews often identify research gaps, while systematic reviews focus on informing practice and policy.
  • Unlike scoping reviews, systematic reviews aim to be exhaustive within their defined scope, capturing all relevant evidence on a particular question.
  • Critical appraisal of individual studies is optional in scoping reviews but essential in systematic reviews.
  • Scoping reviews can be used as a preliminary step to a systematic review , helping to identify the types of evidence available, potential research questions, and relevant inclusion criteria.
  • Due to their rigorous methodology, systematic reviews are generally more time-consuming, often taking 12-24 months to complete, while scoping reviews can usually be completed more rapidly, typically within 2-6 months.

If the goal is to determine the effectiveness of an intervention :

Systematic reviews evaluate the effectiveness of a particular intervention for a specific condition, while scoping reviews map the research landscape by:

  • Examining the range of interventions for a health condition.
  • Identifying types of studies conducted.
  • Noting populations studied.
  • Summarizing outcomes measured.

Scoping reviews help identify areas needing further research, whereas systematic reviews aim to draw conclusions about intervention effectiveness.

The two review types can be compared side by side:

  • Purpose: scoping reviews are exploratory, providing a descriptive overview of the research landscape; systematic reviews aim to provide a rigorous and unbiased answer to a specific research question.
  • Question framework: scoping reviews use PCC (Population, Concept, Context); systematic reviews use PICO (Problem/Population, Intervention, Comparison, Outcome).
  • Example question: scoping review: "How do cultural beliefs and practices (Context) influence the ways in which parents of children with physical disabilities (Population) perceive and address (Concept) their children's physical disabilities?" Systematic review: "For women who have experienced domestic violence (Population), how effective are advocacy programs (Intervention) compared to other treatments (Comparison) in improving quality of life (Outcome)?"
  • Search: scoping reviews are designed to be inclusive rather than exhaustive, capturing a wide range of sources; systematic reviews are comprehensive and systematic, aiming to minimize bias and identify all relevant studies.
  • Protocol registration: scoping review protocols may be registered, e.g., on the Open Science Framework (OSF).
  • Risk of bias assessment: not usually included in scoping reviews; usually included in systematic reviews.
  • Data extraction: typically broader in scoping reviews, including study characteristics, concepts, interventions, methodologies, and key findings; more specific in systematic reviews, often focusing on study design, participants, interventions, outcomes, and risk of bias assessment.
  • Synthesis: scoping reviews are primarily descriptive, focusing on summarizing characteristics and identifying themes and trends; systematic reviews create a new understanding by synthesizing and interpreting the available evidence, which can include statistical meta-analysis to combine results from multiple studies.
  • Quality appraisal: typically not a primary focus of scoping reviews; in systematic reviews, rigorous assessment of study quality using standardized tools is essential to minimize bias in the findings.
  • Reporting standard: PRISMA-ScR for scoping reviews; PRISMA for systematic reviews.

Standardized Reporting Guidelines

The PRISMA ( Preferred Reporting Items for Systematic Reviews and Meta-Analyses ) checklist is tailored for reporting systematic reviews and meta-analyses.

It consists of 27 items covering aspects such as the rationale, objectives, eligibility criteria, search strategy, study selection process, data extraction methods, risk of bias assessment, data synthesis, and reporting of findings.

PRISMA helps researchers communicate their methods and findings more effectively, ultimately improving the reliability and usefulness of systematic reviews for informing healthcare decisions.

The PRISMA-ScR ( Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews ) checklist builds upon the PRISMA checklist but is specifically designed for reporting scoping reviews.

It includes additional items relevant to scoping reviews, such as charting methods, stakeholder consultation, and the presentation of a broader range of evidence sources beyond empirical studies.

References:

Arksey, H., & O’Malley, L. (2005). Scoping studies: towards a methodological framework. International Journal of Social Research Methodology, 8 (1), 19-32.

Centre for Reviews and Dissemination (CRD). (2001). Undertaking systematic reviews of research on effectiveness: CRD’s guidance for those carrying out or commissioning reviews. York: University of York.

Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors).  Cochrane Handbook for Systematic Reviews of Interventions  version 6.4 (updated August 2023). Cochrane, 2023. Available from www.training.cochrane.org/handbook.

Levac, D., Colquhoun, H., & O’Brien, K. K. (2010). Scoping studies: advancing the methodology. Implementation Science, 5 (1), 69.

Munn, Z., Pollock, D., Khalil, H., Alexander, L., McInerney, P., Godfrey, C. M., … & Tricco, A. C. (2022). What are scoping reviews? Providing a formal definition of scoping reviews as a type of evidence synthesis. JBI Evidence Synthesis, 20 (4), 950-952.

  • Boston University Libraries

Systematic Reviews in the Social Sciences

Types of reviews.

  • What is a Systematic Review?

Literature Review vs. Systematic Review

  • Examples and Core Resources
  • Finding Systematic Reviews
  • Conducting Systematic Reviews
  • Saving Search Results
  • Systematic Review Management Tools
  • Citing Your Sources

Reviews can have different structures and goals. The primary forms of reviews in our discipline are literature reviews and systematic reviews:

A  literature review  provides a reader with a critical overview of the sources relevant to a specific research subject, question, or idea. In writing a literature review, it is important to contextualize each resource, evaluate the content, and provide a critical analysis of the strengths, contributions, and issues.

A guide to writing literature reviews is available  here.

A  systematic review  uses a specific methodology to identify all relevant studies on a specific topic and then select appropriate studies based on very specific criteria for inclusion/exclusion. By having transparent frameworks, systematic reviews seek to be verifiable and reproducible. Systematic reviews in the discipline can often include statistical analysis techniques.

A guide to writing systematic reviews is available  here.  

A comprehensive list of all the types of reviews you might encounter as a social science researcher and their search strategies is available  here.

The following chart can guide you through deciding whether a literature review or a systematic review is right for you:

Difference Between Literature Review and Systematic Review

 

The chart compares literature reviews and systematic reviews along seven dimensions: definition, goals, question, number of authors, timeline, requirements, and value.

Adopted and reformatted for social science analysis purposes from: Kysh, Lynn (2013):  Difference between a systematic review and a literature review . Figshare. https://doi.org/10.6084/m9.figshare.766364.v1

  • Last Updated: Aug 8, 2024 11:34 AM
  • URL: https://library.bu.edu/systematic-reviews-social-sciences
  • Systematic review
  • Open access
  • Published: 07 August 2024

Models and frameworks for assessing the implementation of clinical practice guidelines: a systematic review

  • Nicole Freitas de Mello   ORCID: orcid.org/0000-0002-5228-6691 1 , 2 ,
  • Sarah Nascimento Silva   ORCID: orcid.org/0000-0002-1087-9819 3 ,
  • Dalila Fernandes Gomes   ORCID: orcid.org/0000-0002-2864-0806 1 , 2 ,
  • Juliana da Motta Girardi   ORCID: orcid.org/0000-0002-7547-7722 4 &
  • Jorge Otávio Maia Barreto   ORCID: orcid.org/0000-0002-7648-0472 2 , 4  

Implementation Science, volume 19, Article number: 59 (2024)

Background

The implementation of clinical practice guidelines (CPGs) is a cyclical process in which the evaluation stage can facilitate continuous improvement. Implementation science has utilized theoretical approaches, such as models and frameworks, to understand and address this process. This article aims to provide a comprehensive overview of the models and frameworks used to assess the implementation of CPGs.

Methods

A systematic review was conducted following the Cochrane methodology, with adaptations to the "selection process" due to the unique nature of this review. The findings were reported following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) reporting guidelines. Electronic databases were searched from their inception until May 15, 2023. A predetermined strategy and manual searches were conducted to identify relevant documents from health institutions worldwide. Eligible studies presented models and frameworks for assessing the implementation of CPGs. Information on the characteristics of the documents, the context in which the models were used (specific objectives, level of use, type of health service, target group), and the characteristics of each model or framework (name, domain evaluated, and model limitations) were extracted. The domains of the models were analyzed according to the key constructs: strategies, context, outcomes, fidelity, adaptation, sustainability, process, and intervention. A subgroup analysis was performed grouping models and frameworks according to their levels of use (clinical, organizational, and policy) and type of health service (community, ambulatorial, hospital, institutional). The JBI’s critical appraisal tools were utilized by two independent researchers to assess the trustworthiness, relevance, and results of the included studies.

Results

Database searches yielded 14,395 studies, of which 80 full texts were reviewed. Eight studies were included in the data analysis and four methodological guidelines were additionally included from the manual search. The risk of bias in the studies was considered non-critical for the results of this systematic review. A total of ten models/frameworks for assessing the implementation of CPGs were found. The level of use was mainly policy, the most common type of health service was institutional, and the major target group was professionals directly involved in clinical practice. The evaluated domains differed between the models and there were also differences in their conceptualization. All the models addressed the domain "Context", especially at the micro level (8/12), followed by the multilevel (7/12). The domains "Outcome" (9/12), "Intervention" (8/12), "Strategies" (7/12), and "Process" (5/12) were frequently addressed, while "Sustainability" was found only in one study, and "Fidelity/Adaptation" was not observed.

Conclusions

The use of models and frameworks for assessing the implementation of CPGs is still incipient. This systematic review may help stakeholders choose or adapt the most appropriate model or framework to assess CPGs implementation based on their specific health context.

Trial registration

PROSPERO (International Prospective Register of Systematic Reviews) registration number: CRD42022335884. Registered on June 7, 2022.

Peer Review reports

Contributions to the literature

Although the number of theoretical approaches has grown in recent years, there are still important gaps to be explored in the use of models and frameworks to assess the implementation of clinical practice guidelines (CPGs). This systematic review aims to contribute knowledge to overcome these gaps.

Despite the great advances in implementation science, evaluating the implementation of CPGs remains a challenge, and models and frameworks could support improvements in this field.

This study demonstrates that the available models and frameworks do not cover all characteristics and domains necessary for a complete evaluation of CPGs implementation.

The presented findings contribute to the field of implementation science, encouraging debate on choices and adaptations of models and frameworks for implementation research and evaluation.

Substantial investments have been made in clinical research and development in recent decades, increasing the medical knowledge base and the availability of health technologies [ 1 ]. The use of clinical practice guidelines (CPGs) has increased worldwide to guide best health practices and to maximize healthcare investments. A CPG can be defined as "any formal statements systematically developed to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances" [ 2 ] and has the potential to improve patient care by promoting interventions of proven benefit and discouraging ineffective interventions. Furthermore, they can promote efficiency in resource allocation and provide support for managers and health professionals in decision-making [ 3 , 4 ].

However, having a quality CPG does not guarantee that the expected health benefits will be obtained. In fact, putting these guidelines to use still presents a challenge for most health services across distinct levels of government. In addition to being developed with high methodological rigor, the recommendations need to be made available to their users (the diffusion and dissemination stages) and then used in clinical practice (implemented), which usually requires behavioral changes and appropriate resources and infrastructure. All these stages form an iterative and complex process called implementation, which is defined as the process of putting new practices within a setting into use [ 5 , 6 ].

Implementation is a cyclical process, and the evaluation is one of its key stages, which allows continuous improvement of CPGs development and implementation strategies. It consists of verifying whether clinical practice is being performed as recommended (process evaluation or formative evaluation) and whether the expected results and impact are being reached (summative evaluation) [ 7 , 8 , 9 ]. Although the importance of the implementation evaluation stage has been recognized, research on how these guidelines are implemented is scarce [ 10 ]. This paper focused on the process of assessing CPGs implementation.

To understand and improve this complex process, implementation science provides a systematic set of principles and methods to integrate research findings and other evidence-based practices into routine practice and improve the quality and effectiveness of health services and care [ 11 ]. The field of implementation science uses theoretical approaches that have varying degrees of specificity based on the current state of knowledge and are structured based on theories, models, and frameworks [ 5 , 12 , 13 ]. A "Model" is defined as "a simplified depiction of a more complex world with relatively precise assumptions about cause and effect", and a "framework" is defined as "a broad set of constructs that organize concepts and data descriptively without specifying causal relationships" [ 9 ]. Although these concepts are distinct, in this paper, their use will be interchangeable, as they are typically like checklists of factors relevant to various aspects of implementation.

There are a variety of theoretical approaches available in implementation science [ 5 , 14 ], which can make choosing the most appropriate challenging [ 5 ]. Some models and frameworks have been categorized as "evaluation models" by providing a structure for evaluating implementation endeavors [ 15 ], even though theoretical approaches from other categories can also be applied for evaluation purposes because they specify concepts and constructs that may be operationalized and measured [ 13 ]. Two frameworks that can specify implementation aspects that should be evaluated as part of intervention studies are RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance) [ 16 ] and PRECEDE-PROCEED (Predisposing, Reinforcing and Enabling Constructs in Educational Diagnosis and Evaluation-Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development) [ 17 ]. Although the number of theoretical approaches has grown in recent years, the use of models and frameworks to evaluate the implementation of guidelines still seems to be a challenge.

This article aims to provide a complete map of the models and frameworks applied to assess the implementation of CPGs. It also aims to inform debate and choices about models and frameworks for researching and evaluating CPG implementation processes, thus facilitating the continued development of the field of implementation science and contributing to healthcare policy and practice.

A systematic review was conducted following the Cochrane methodology [ 18 ], with adaptations to the "selection process" due to the unique nature of this review (details can be found in the respective section). The review protocol was registered in PROSPERO (registration number: CRD42022335884) on June 7, 2022. This report adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [ 19 ] and a completed checklist is provided in Additional File 1.

Eligibility criteria

The SDMO approach (Types of Studies, Types of Data, Types of Methods, Outcomes) [ 20 ] was utilized in this systematic review, outlined as follows:

Types of studies

All types of studies were considered for inclusion, as the assessment of CPG implementation can benefit from a diverse range of study designs, including randomized clinical trials/experimental studies, scale/tool development, systematic reviews, opinion pieces, qualitative studies, peer-reviewed articles, books, reports, and unpublished theses.

Studies were categorized based on their methodological designs, which guided the synthesis, risk of bias assessment, and presentation of results.

Study protocols and conference abstracts were excluded due to insufficient information for this review.

Types of data

Studies that evaluated the implementation of CPGs either independently or as part of a multifaceted intervention.

Guidelines for evaluating CPG implementation.

Inclusion of CPGs related to any context, clinical area, intervention, and patient characteristics.

No restrictions were placed on publication date or language.

Exclusion criteria

General guidelines were excluded, as this review focused on 'models for evaluating clinical practice guidelines implementation' rather than the guidelines themselves.

Studies that focused solely on implementation determinants as barriers and enablers were excluded, as this review aimed to explore comprehensive models/frameworks.

Studies evaluating programs and policies were excluded.

Studies that only assessed implementation strategies (isolated actions) rather than the implementation process itself were excluded.

Studies that focused solely on the impact or results of implementation (summative evaluation) were excluded.

Types of methods

Not applicable.

Outcomes

All potential models or frameworks for assessing the implementation of CPG (evaluation models/frameworks), as well as their characteristics: name; specific objectives; levels of use (clinical, organizational, and policy); health system (public, private, or both); type of health service (community, ambulatorial, hospital, institutional, homecare); domains or outcomes evaluated; type of recommendation evaluated; context; limitations of the model.

Model was defined as "a deliberated simplification of a phenomenon on a specific aspect" [ 21 ].

Framework was defined as "structure, overview outline, system, or plan consisting of various descriptive categories" [ 21 ].

Models or frameworks used solely for the CPG development, dissemination, or implementation phase.

Models/frameworks used solely for assessment processes other than implementation, such as for the development or dissemination phase.

Data sources and literature search

The systematic search was conducted on July 31, 2022 (and updated on May 15, 2023) in the following electronic databases: MEDLINE/PubMed, Centre for Reviews and Dissemination (CRD), the Cochrane Library, Cumulative Index to Nursing and Allied Health Literature (CINAHL), EMBASE, Epistemonikos, Global Health, Health Systems Evidence, PDQ-Evidence, PsycINFO, Rx for Change (Canadian Agency for Drugs and Technologies in Health, CADTH), Scopus, Web of Science and Virtual Health Library (VHL). The Google Scholar database was used for the manual selection of studies (first 10 pages).

Additionally, hand searches were performed on the lists of references included in the systematic reviews and citations of the included studies, as well as on the websites of institutions working on CPGs development and implementation: Guidelines International Networks (GIN), National Institute for Health and Care Excellence (NICE; United Kingdom), World Health Organization (WHO), Centers for Disease Control and Prevention (CDC; USA), Institute of Medicine (IOM; USA), Australian Department of Health and Aged Care (ADH), Healthcare Improvement Scotland (SIGN), National Health and Medical Research Council (NHMRC; Australia), Queensland Health, The Joanna Briggs Institute (JBI), Ministry of Health and Social Policy of Spain, Ministry of Health of Brazil and Capes Theses and Dissertations Catalog.

The search strategy combined terms related to "clinical practice guidelines" (practice guidelines, practice guidelines as topic, clinical protocols), "implementation", "assessment" (assessment, evaluation), and "models, framework". The free term "monitoring" was not used because it was regularly related to clinical monitoring and not to implementation monitoring. The search strategies adapted for the electronic databases are presented in an additional file (see Additional file 2).

Study selection process

The results of the literature search from scientific databases, excluding the CRD database, were imported into Mendeley Reference Management software to remove duplicates. They were then transferred to the Rayyan platform ( https://rayyan.qcri.org ) [ 22 ] for the screening process. Initially, studies related to the "assessment of implementation of the CPG" were selected. The titles were first screened independently by two pairs of reviewers (first selection: four reviewers, NM, JB, SS, and JG; update: a pair of reviewers, NM and DG). The title screening was broad, including all potentially relevant studies on CPG and the implementation process. Following that, the abstracts were independently screened by the same group of reviewers. The abstract screening was more focused, specifically selecting studies that addressed CPG and the evaluation of the implementation process. In the next step, full-text articles were reviewed independently by a pair of reviewers (NM, DG) to identify those that explicitly presented "models" or "frameworks" for assessing the implementation of the CPG. Disagreements regarding the eligibility of studies were resolved through discussion and consensus, and by a third reviewer (JB) when necessary. One reviewer (NM) conducted manual searches, and the inclusion of documents was discussed with the other reviewers.
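The duplicate-removal step can be pictured as keying each record on a normalized title plus year, which is roughly what reference-management software such as Mendeley automates. The record fields and sample data below are hypothetical:

```python
# Sketch: deduplicating search results by a normalized (title, year) key.
# Field names and sample records are hypothetical.
import re

def dedup_key(record):
    """Normalize the title (lowercase, alphanumerics only) plus year."""
    title = re.sub(r"[^a-z0-9]+", " ", record["title"].lower()).strip()
    return (title, record.get("year"))

def remove_duplicates(records):
    seen, unique = set(), []
    for rec in records:
        key = dedup_key(rec)
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Guideline Implementation: A Review", "year": 2020},
    {"title": "guideline implementation - a review", "year": 2020},  # duplicate
    {"title": "Another Study", "year": 2021},
]
print(len(remove_duplicates(records)))  # 2 unique records
```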

Risk of bias assessment of studies

The selected studies were independently classified and evaluated according to their methodological designs by two investigators (NM and JG). This review employed JBI's critical appraisal tools to assess the trustworthiness, relevance and results of the included studies [ 23 ]; these tools are presented in additional files (see Additional files 3 and 4). Disagreements were resolved by consensus or by consultation with the other reviewers. Methodological guidelines and noncomparative and before–after studies were not evaluated because JBI does not provide specific tools for assessing these types of documents. Although the studies were assessed for quality, none were excluded on this basis.

Data extraction

Data were independently extracted by two reviewers (NM, DG) using a Microsoft Excel spreadsheet. Discrepancies were discussed and resolved by consensus. The following information was extracted:

Document characteristics: author; year of publication; title; study design; evaluation instrument; country; guideline context;

Usage context of the models: specific objectives; level of use (clinical, organizational, and policy); type of health service (community, ambulatory, hospital, institutional); target group (guideline developers; clinicians; health professionals; health-policy decision-makers; health-care organizations; service managers);

Model and framework characteristics: name, domains evaluated, and model limitations.

The set of information to be extracted, specified in the systematic review protocol, was adjusted to improve the organization of the analysis.
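The extraction variables listed above can be pictured as one record per document. The following sketch uses our own shorthand field names, not the review's actual spreadsheet headers:

```python
# Sketch: the extraction spreadsheet modeled as a record type, mirroring
# the variable groups listed above. Field names are shorthand, not the
# review's actual column headers; the sample values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    # Document characteristics
    author: str
    year: int
    title: str
    study_design: str
    evaluation_instrument: str
    country: str
    guideline_context: str
    # Usage context of the model
    objectives: str = ""
    level_of_use: str = ""    # "clinical" | "organizational" | "policy"
    health_service: str = ""  # "community" | "ambulatory" | "hospital" | "institutional"
    target_groups: list = field(default_factory=list)
    # Model/framework characteristics
    model_name: str = ""
    domains_evaluated: list = field(default_factory=list)
    model_limitations: str = ""

rec = ExtractionRecord(
    author="Example et al.", year=2020, title="Example study",
    study_design="cross-sectional", evaluation_instrument="survey",
    country="Australia", guideline_context="general",
    level_of_use="organizational", health_service="hospital",
    model_name="CFIR",
)
print(rec.level_of_use, rec.model_name)
```

Structuring extraction this way makes the later subgroup analysis (by level of use and type of health service) a simple filter over records.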

The "level of use" refers to the scope of the model used. "Clinical" was considered when the evaluation focused on individual practices, "organizational" when practices were within a health service institution, and "policy" when the evaluation was more systemic and covered different health services or institutions.

The "type of health service" indicated the category of health service where the model/framework was used (or can be used) to assess the implementation of the CPG, related to the complexity of healthcare. "Community" is related to primary health care; "ambulatory" is related to secondary health care; "hospital" is related to tertiary health care; and "institutional" represents models/frameworks not specific to a particular type of health service.

The "target group" included stakeholders related to the use of the model/framework for evaluating the implementation of the CPG, such as clinicians, health professionals, guideline developers, health policy-makers, health organizations, and service managers.

The category "health system" (public, private, or both) mentioned in the systematic review protocol was not found in the literature obtained and was removed as an extraction variable. Similarly, the variables "type of recommendation evaluated" and "context" were grouped because the same information was included in the "guideline context" section of the study.

Some selected documents presented models or frameworks recognized in the scientific field, including some that were validated; however, some studies adapted the model to this context. Therefore, the domain analysis covered all model or framework domains evaluated by (or suggested for evaluation by) each document analyzed.

Data analysis and synthesis

The results were tabulated using narrative synthesis with an aggregative approach, without meta-analysis, aiming to summarize the documents descriptively for the organization, description, interpretation and explanation of the study findings [ 24 , 25 ].

The model/framework domains evaluated in each document were studied according to Nilsen et al.’s constructs: "strategies", "context", "outcomes", "fidelity", "adaptation" and "sustainability". For this study, "strategies" were described as structured and planned initiatives used to enhance the implementation of clinical practice [ 26 ].

The definition of "context" varies in the literature. Despite that, this review considered it as the set of circumstances or factors surrounding a particular implementation effort, such as organizational support, financial resources, social relations and support, leadership, and organizational culture [ 26 , 27 ]. The domain "context" was subdivided according to the level of health care into "micro" (individual perspective), "meso" (organizational perspective), "macro" (systemic perspective), and "multiple" (when there is an issue involving more than one level of health care).

The "outcomes" domain was related to the results of the implementation process (unlike clinical outcomes) and was stratified according to the following constructs: acceptability, appropriateness, feasibility, adoption, cost, and penetration. All these concepts align with the definitions of Proctor et al. (2011), although we decided to separate "fidelity" and "sustainability" as independent domains similar to Nilsen [ 26 , 28 ].

"Fidelity" and "adaptation" were considered the same domain, as they are complementary pieces of the same issue. In this study, implementation fidelity refers to how closely guidelines are followed as intended by their developers or designers. On the other hand, adaptation involves making changes to the content or delivery of a guideline to better fit the needs of a specific context. The "sustainability" domain was defined as evaluations about the continuation or permanence over time of the CPG implementation.

Additionally, the domain "process" was utilized to address issues related to the implementation process itself, rather than focusing solely on the outcomes of the implementation process, as done by Wang et al. [ 14 ]. Furthermore, the "intervention" domain was introduced to distinguish aspects related to the CPG characteristics that can impact its implementation, such as the complexity of the recommendation.

A subgroup analysis was performed with models and frameworks categorized based on their levels of use (clinical, organizational, and policy) and the type of health service (community, ambulatory, hospital, institutional) associated with the CPG. The goal was to assist stakeholders (politicians, clinicians, researchers, and others) in selecting the most suitable model for evaluating CPG implementation in their specific health context.

Search results

Database searches yielded 26,011 records, of which 107 full texts were reviewed. During the full-text review, 99 articles were excluded: 41 did not mention a model or framework for assessing the implementation of the CPG, 31 evaluated only implementation strategies (isolated actions) rather than the implementation process itself, and 27 were not related to implementation assessment. Therefore, eight studies were included in the data analysis; the updated search did not reveal additional relevant studies. The most common reason for exclusion was thus the absence of a model or framework for assessing CPG implementation. Additionally, four methodological guidelines were included from the manual search, for a total of 12 documents (Fig.  1 ).
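The study-flow arithmetic reported above can be cross-checked directly:

```python
# Sketch: verifying the PRISMA flow counts reported in the text.
full_texts_reviewed = 107
excluded = {
    "no model/framework mentioned (Reason B)": 41,
    "only isolated strategies evaluated (Reason A)": 31,
    "not related to implementation assessment (Reason C)": 27,
}
included_from_databases = full_texts_reviewed - sum(excluded.values())
# Four methodological guidelines were added from the manual search.
included_total = included_from_databases + 4
print(sum(excluded.values()), included_from_databases, included_total)  # 99 8 12
```

The total of 12 matches the "(n/12)" denominators used throughout the results.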

figure 1

PRISMA diagram. Acronyms: ADH—Australian Department of Health, CINAHL—Cumulative Index to Nursing and Allied Health Literature, CDC—Centers for Disease Control and Prevention, CRD—Centre for Reviews and Dissemination, GIN—Guidelines International Networks, HSE—Health Systems Evidence, IOM—Institute of Medicine, JBI—The Joanna Briggs Institute, MHB—Ministry of Health of Brazil, NICE—National Institute for Health and Care Excellence, NHMRC—National Health and Medical Research Council, MSPS—Ministerio de Sanidad y Política Social (Spain), SIGN—Scottish Intercollegiate Guidelines Network, VHL—Virtual Health Library, WHO—World Health Organization. Legend: Reason A—the study evaluated only implementation strategies (isolated actions) rather than the implementation process itself. Reason B—the study did not mention a model or framework for assessing the implementation of the intervention. Reason C—the study was not related to the implementation assessment. Adapted from Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 2021;372:n71. https://doi.org/10.1136/bmj.n71 .

According to the JBI critical appraisal tools, all assessed studies were of acceptable quality for inclusion in the systematic review.

The cross-sectional studies lacked clear information regarding "confounding factors" or "strategies to address confounding factors". This was understandable given the cross-sectional design, in which such details are not typically reported. The reviewers did not consider this missing information critical, so the studies were retained in the review. The results of this methodological quality assessment can be found in an additional file (see Additional file 5).

In the qualitative studies, there was some ambiguity regarding the questions: "Is there a statement locating the researcher culturally or theoretically?" and "Is the influence of the researcher on the research, and vice versa, addressed?". However, the reviewers decided to include the studies and deemed the methodological quality sufficient for the analysis in this article, based on the other information analyzed. The results of this methodological quality assessment can be found in an additional file (see Additional file 6).

Document characteristics (Table  1 )

The documents originated from several continents: Australia/Oceania (4/12) [ 31 , 33 , 36 , 37 ], North America (4/12) [ 30 , 32 , 38 , 39 ], Europe (2/12) [ 29 , 35 ] and Asia (2/12) [ 34 , 40 ]. The types of documents were classified as cross-sectional studies (4/12) [ 29 , 32 , 34 , 38 ], methodological guidelines (4/12) [ 33 , 35 , 36 , 37 ], mixed methods studies (3/12) [ 30 , 31 , 39 ] or noncomparative studies (1/12) [ 40 ]. In terms of the evaluation instrument, most documents used a survey/questionnaire (6/12) [ 29 , 30 , 31 , 32 , 34 , 38 ], three used qualitative instruments (interviews, group discussions) (3/12) [ 30 , 31 , 39 ], one used a checklist (1/12) [ 37 ], one used an audit (1/12) [ 33 ] and three did not define a specific measurement instrument (3/12) [ 35 , 36 , 40 ].

Considering the clinical areas covered, most studies evaluated the implementation of nonspecific (general) clinical areas [ 29 , 33 , 35 , 36 , 37 , 40 ]. However, some studies focused on specific clinical contexts, such as mental health [ 32 , 38 ], oncology [ 39 ], fall prevention [ 31 ], spinal cord injury [ 30 ], and sexually transmitted infections [ 34 ].

Usage context of the models (Table  1 )

Specific objectives

All the studies highlighted the purpose of guiding the process of evaluating the implementation of CPGs, even if they evaluated CPGs from generic or different clinical areas.

Levels of use

The most common level of use of the models/frameworks identified to assess the implementation of CPGs was policy (6/12) [ 33 , 35 , 36 , 37 , 39 , 40 ]. At this level, the model is used systematically to evaluate all the processes involved in CPG implementation and is primarily associated with methodological guidelines. This was followed by the organizational level of use (5/12) [ 30 , 31 , 32 , 38 , 39 ], where the model is used to evaluate the implementation of CPGs in a specific institution, considering its particular environment. Finally, the clinical level of use (2/12) [ 29 , 34 ] focuses on individual practice and the factors that can influence the implementation of CPGs by professionals.

Type of health service

Institutional services were predominant (5/12) [ 33 , 35 , 36 , 37 , 40 ] and included the methodological guidelines and a study of model development and validation. Hospitals were the second most common type of health service (4/12) [ 29 , 30 , 31 , 34 ], followed by ambulatory (2/12) [ 32 , 34 ] and community health services (1/12) [ 32 ]. Two studies did not specify which type of health service the assessment addressed [ 38 , 39 ].

Target group

The target group was most often professionals directly involved in clinical practice (6/12) [ 29 , 31 , 32 , 34 , 38 , 40 ], namely, health professionals and clinicians. Less frequently targeted stakeholders included guideline developers (2/12) [ 39 , 40 ], health policy decision makers (1/12) [ 39 ], and healthcare organizations (1/12) [ 39 ]. The target group was not defined in the methodological guidelines, although all the stakeholders mentioned could be related to these documents.

Model and framework characteristics

Models and frameworks for assessing the implementation of CPGs

The Consolidated Framework for Implementation Research (CFIR) [ 31 , 38 ] and the Promoting Action on Research Implementation in Health Systems (PARiHS) framework [ 29 , 30 ] were the most commonly employed frameworks within the selected documents. The other models mentioned were: Goal commitment and implementation of practice guidelines framework [ 32 ]; Guideline to identify key indicators [ 35 ]; Guideline implementation checklist [ 37 ]; Guideline implementation evaluation tool [ 40 ]; JBI Implementation Framework [ 33 ]; Reach, effectiveness, adoption, implementation and maintenance (RE-AIM) framework [ 34 ]; The Guideline Implementability Framework [ 39 ] and an unnamed model [ 36 ].

Domains evaluated

The number of domains evaluated (or suggested for evaluation) by the documents varied between three and five, with the majority focusing on three domains. All the models addressed the domain "context", with a particular emphasis on the micro level of the health care context (8/12) [ 29 , 31 , 34 , 35 , 36 , 37 , 38 , 39 ], followed by the multilevel (7/12) [ 29 , 31 , 32 , 33 , 38 , 39 , 40 ], meso level (4/12) [ 30 , 35 , 39 , 40 ] and macro level (2/12) [ 37 , 39 ]. The "Outcome" domain was evaluated in nine models. Within this domain, the most frequently evaluated subdomain was "adoption" (6/12) [ 29 , 32 , 34 , 35 , 36 , 37 ], followed by "acceptability" (4/12) [ 30 , 32 , 35 , 39 ], "appropriateness" (3/12) [ 32 , 34 , 36 ], "feasibility" (3/12) [ 29 , 32 , 36 ], "cost" (1/12) [ 35 ] and "penetration" (1/12) [ 34 ]. Regarding the other domains, "Intervention" (8/12) [ 29 , 31 , 34 , 35 , 36 , 38 , 39 , 40 ], "Strategies" (7/12) [ 29 , 30 , 33 , 35 , 36 , 37 , 40 ] and "Process" (5/12) [ 29 , 31 , 32 , 33 , 38 ] were frequently addressed in the models, while "Sustainability" (1/12) [ 34 ] was only found in one model, and "Fidelity/Adaptation" was not observed. The domains presented by the models and frameworks and evaluated in the documents are shown in Table  2 .

Limitations of the models

Only two documents mentioned limitations of the model or framework used. Both reported limitations of CFIR: it "is complex and cumbersome and requires tailoring of the key variables to the specific context", and "this framework should be supplemented with other important factors and local features to achieve a sound basis for the planning and realization of an ongoing project" [ 31 , 38 ]. Limitations in the use of the other models or frameworks were not reported.

Subgroup analysis

Following the subgroup analysis (Table  3 ), five different models/frameworks were utilized at the policy level by institutional health services. These included the Guideline Implementation Evaluation Tool [ 40 ], the NHMRC tool (model name not defined) [ 36 ], the JBI Implementation Framework + GRiP [ 33 ], Guideline to identify key indicators [ 35 ], and the Guideline implementation checklist [ 37 ]. Additionally, the "Guideline Implementability Framework" [ 39 ] was implemented at the policy level without restrictions based on the type of health service. Regarding the organizational level, the models used varied depending on the type of service. The "Goal commitment and implementation of practice guidelines framework" [ 32 ] was applied in community and ambulatory health services, while "PARiHS" [ 29 , 30 ] and "CFIR" [ 31 , 38 ] were utilized in hospitals. In contexts where the type of health service was not defined, "CFIR" [ 31 , 38 ] and "The Guideline Implementability Framework" [ 39 ] were employed. Lastly, at the clinical level, "RE-AIM" [ 34 ] was utilized in ambulatory and hospital services, and PARiHS [ 29 , 30 ] was specifically used in hospital services.

Key findings

This systematic review identified 10 models/frameworks used to assess the implementation of CPGs in various health system contexts. These documents shared similar objectives in utilizing models and frameworks for assessment. The primary level of use was policy, the most common type of health service was institutional, and the main target group of the documents was professionals directly involved in clinical practice. The models and frameworks presented varied analytical domains, with sometimes divergent concepts used in these domains. This study is innovative in its emphasis on the evaluation stage of CPG implementation and in summarizing aspects and domains aimed at the practical application of these models.

The small number of documents contrasts with studies that present an extensive range of models and frameworks available in implementation science. The findings suggest that the use of models and frameworks to evaluate the implementation of CPGs is still in its early stages. Among the selected documents, there was a predominance of cross-sectional studies and methodological guidelines, which strongly influenced how the implementation evaluation was conducted. This was primarily done through surveys/questionnaires, qualitative methods (interviews, group discussions), and non-specific measurement instruments. Regarding the subject areas evaluated, most studies focused on a general clinical area, while others explored different clinical areas. This suggests that the evaluation of CPG implementation has been carried out in various contexts.

The models were chosen independently of the categories proposed in the literature, with their usage categorized for purposes other than implementation evaluation, as is the case with CFIR and PARiHS. This practice was described by Nilsen et al., who suggested that models and frameworks from other categories can also be applied for evaluation purposes because they specify concepts and constructs that may be operationalized and measured [ 14 , 15 , 42 , 43 ].

The results highlight the increased use of models and frameworks in evaluation processes at the policy level and in institutional environments, followed by the organizational level in hospital settings. This finding contradicts a review that reported the policy level as an area that has not been as well studied [ 44 ]. The use of different models at the institutional level is also emphasized in the subgroup analysis. This may suggest that the greater the impact (social, financial/economic, and organizational) of implementing CPGs, the greater the interest and need to establish well-defined and robust processes. In this context, the evaluation stage stands out as crucial, and the investment of resources and efforts to structure this stage becomes even more advantageous [ 10 , 45 ]. Two studies (16.7%) evaluated the implementation of CPGs at the individual (clinical) level. These studies stand out for their potential to analyze variations in clinical practice in greater depth.

In contrast to the predominantly systemic level of use and type of health service indicated in the documents, the most frequently observed target group was professionals directly involved in clinical practice. This suggests an emphasis on evaluating individual behaviors. The same emphasis is observed in the analysis of the models, in which there is a predominance of evaluating the micro level of the health context and the "adoption" subdomain, in contrast with the underuse of domains such as "cost" and "process". Cassetti et al. observed the same phenomenon in their review, in which studies evaluating the implementation of CPGs mainly adopted a behavioral change approach to tackle those issues, without considering the influence of wider social determinants of health [ 10 ]. However, the literature widely reiterates that multiple factors impact the implementation of CPGs, and different actions are required to make them effective [ 6 , 46 , 47 ]. As a result, there is enormous potential for the development and adaptation of models and frameworks aimed at more systemic evaluation processes that consider institutional and organizational aspects.

In analyzing the model domains, most models focused on evaluating only some aspects of implementation (three domains). All models evaluated the "context", highlighting its significant influence on implementation [ 9 , 26 ]. Context is an essential effect modifier for providing research evidence to guide decisions on implementation strategies [ 48 ]. Contextualizing a guideline involves integrating research or other evidence into a specific circumstance [ 49 ]. The analysis of this domain was adjusted to include all possible contextual aspects, even if they were initially allocated to other domains. Some contextual aspects presented by the models vary in comprehensiveness, such as the assessment of the "timing and nature of stakeholder engagement" [ 39 ], which includes individual engagement by healthcare professionals and organizational involvement in CPG implementation. While the importance of context is universally recognized, its conceptualization and interpretation differ across studies and models. This divergence is also evident in other domains, consistent with existing literature [ 14 ]. Efforts to address this conceptual divergence in implementation science are ongoing, but further research and development are needed in this field [ 26 ].

The main subdomain evaluated was "adoption" within the outcome domain. This may be attributed to the ease of accessing information on the adoption of the CPG, whether through computerized system records, patient records, or self-reports from healthcare professionals or patients themselves. The "acceptability" subdomain pertains to the perception among implementation stakeholders that a particular CPG is agreeable, palatable or satisfactory. On the other hand, "appropriateness" encompasses the perceived fit, relevance or compatibility of the CPG for a specific practice setting, provider, or consumer, or its perceived fit to address a particular issue or problem [ 26 ]. Both subdomains are subjective and rely on stakeholders' interpretations and perceptions of the issue being analyzed, making them susceptible to reporting biases. Moreover, obtaining this information requires direct consultation with stakeholders, which can be challenging for some evaluation processes, particularly in institutional contexts.

The evaluation of the subdomains "feasibility" (the extent to which a CPG can be successfully used or carried out within a given agency or setting), "cost" (the cost impact of an implementation effort), and "penetration" (the extent to which an intervention or treatment is integrated within a service setting and its subsystems) [ 26 ] was rarely observed in the documents. This may be related to the greater complexity of obtaining information on these aspects, as they involve cross-cutting and multifactorial issues. In other words, it would be difficult to gather this information during evaluations with health practitioners as the target group. This highlights the need for evaluation processes of CPGs implementation involving multiple stakeholders, even if the evaluation is adjusted for each of these groups.

Although the models do not establish the "intervention" domain, we thought it pertinent in this study to delimit the issues that are intrinsic to CPGs, such as methodological quality or clarity in establishing recommendations. These issues were quite common in the models evaluated but were considered in other domains (e.g., in "context"). Studies have reported the importance of evaluating these issues intrinsic to CPGs [ 47 , 50 ] and their influence on the implementation process [ 51 ].

The models explicitly present the "strategies" domain, and its evaluation was usually included in the assessments. This is likely due to the expansion of scientific and practical studies in implementation science that involve theoretical approaches to the development and application of interventions to improve the implementation of evidence-based practices. However, these interventions themselves are not guaranteed to be effective, as reported in a previous review that showed unclear results indicating that the strategies had affected successful implementation [ 52 ]. Furthermore, model domains end up not covering all the complexity surrounding the strategies and their development and implementation process. For example, the ‘Guideline implementation evaluation tool’ evaluates whether guideline developers have designed and provided auxiliary tools to promote the implementation of guidelines [ 40 ], but this does not mean that these tools would work as expected.

The "process" domain was identified in the CFIR [ 31 , 38 ], JBI/GRiP [ 33 ], and PARiHS [ 29 ] frameworks. While it may be included in other domains of analysis, its distinct separation is crucial for defining operational issues when assessing the implementation process, such as determining if and how the use of the mentioned CPG was evaluated [ 3 ]. Despite its presence in multiple models, there is still limited detail in the evaluation guidelines, which makes it difficult to operationalize the concept. Further research is needed to better define the "process" domain and its connections and boundaries with other domains.

The domain of "sustainability" was only observed in the RE-AIM framework, which is categorized as an evaluation framework [ 34 ]. In its acronym, the letter M stands for "maintenance" and corresponds to the assessment of whether the user maintains use, typically longer than 6 months. The presence of this domain highlights the need for continuous evaluation of CPGs implementation in the short, medium, and long term. Although the RE-AIM framework includes this domain, it was not used in the questionnaire developed in the study. One probable reason is that the evaluation of CPGs implementation is still conducted on a one-off basis and not as a continuous improvement process. Considering that changes in clinical practices are inherent over time, evaluating and monitoring changes throughout the duration of the CPG could be an important strategy for ensuring its implementation. This is an emerging field that requires additional investment and research.

The "Fidelity/Adaptation" domain was not observed in the models. These emerging concepts involve the extent to which a CPG is being conducted exactly as planned or whether it is undergoing adjustments and adaptations. Whether or not there is fidelity or adaptation in the implementation of CPGs does not presuppose greater or lesser effectiveness; after all, some adaptations may be necessary to implement general CPGs in specific contexts. The absence of this domain in all the models and frameworks may suggest either that it is not considered relevant for evaluating implementation or that there is a lack of knowledge of these complex concepts, perhaps reflecting the difficulty of expressing them in specific evaluative questions. Further studies are warranted to determine the comprehensiveness of these concepts.

It is important to note the customization of the analytical domains: some domains presented in the models were not evaluated in the studies, while others were added as complements. This can be seen in Jeong et al. [ 34 ], where the "intervention" domain was added to the evaluation with the RE-AIM framework, reinforcing that theoretical approaches aim to guide the process rather than prescribe norms. Despite this, few limitations were reported for the models, suggesting that these studies applied the models to defined contexts without a deep critical analysis of their domains.

Limitations

This review has several limitations. First, only a few studies and methodological guidelines that explicitly present models and frameworks for assessing the implementation of CPGs were found, meaning that few alternative models could be analyzed and presented in this review. Second, this review adopted multiple analytical categories (e.g., level of use, type of health service, target group, and domains evaluated), whose terminology varied enormously across the selected studies and documents, especially for the "domains evaluated" category. This difficulty in harmonizing the taxonomy used in the area has already been reported [ 26 ] and has significant potential to cause confusion. For this reason, studies and initiatives are needed to align understandings of these concepts and, as far as possible, standardize them. Third, in some studies/documents, the information extracted was not clearly tied to an analytical category. This required an in-depth interpretative reading of the studies, which was conducted in pairs to avoid inappropriate interpretations.

Implications

This study contributes to the literature and clinical practice management by describing models and frameworks specifically used to assess the implementation of CPGs based on their level of use, type of health service, target group related to the CPG, and the evaluated domains. While there are existing reviews on the theories, frameworks, and models used in implementation science, this review addresses aspects not previously covered in the literature. This valuable information can assist stakeholders (such as politicians, clinicians, researchers, etc.) in selecting or adapting the most appropriate model to assess CPG implementation based on their health context. Furthermore, this study is expected to guide future research on developing or adapting models to assess the implementation of CPGs in various contexts.

The use of models and frameworks to evaluate CPG implementation remains a challenge. Studies should clearly state the level of model use, the type of health service evaluated, and the target group. The domains evaluated in these models may need adaptation to specific contexts. Nevertheless, utilizing models to assess CPG implementation is crucial, as they can guide a more thorough and systematic evaluation process, aiding in the continuous improvement of CPG implementation. The findings of this systematic review offer valuable insights for stakeholders in selecting or adjusting models and frameworks for CPG evaluation, supporting future theoretical advancements and research.

Availability of data and materials

Abbreviations

Australian Department of Health and Aged Care

Canadian Agency for Drugs and Technologies in Health

Centers for Disease Control and Prevention

Consolidated Framework for Implementation Research

Cumulative Index to Nursing and Allied Health Literature

Clinical practice guideline

Centre for Reviews and Dissemination

Guidelines International Networks

Getting Research into Practice

Health Systems Evidence

Institute of Medicine

The Joanna Briggs Institute

Ministry of Health of Brazil

Ministerio de Sanidad y Política Social

National Health and Medical Research Council

National Institute for Health and Care Excellence

Promoting action on research implementation in health systems framework

Predisposing, Reinforcing and Enabling Constructs in Educational Diagnosis and Evaluation-Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development

Preferred Reporting Items for Systematic Reviews and Meta-Analyses

International Prospective Register of Systematic Reviews

Reach, effectiveness, adoption, implementation, and maintenance framework

Healthcare Improvement Scotland

United States of America

Virtual Health Library

World Health Organization

Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. 2001. Available from: http://www.nap.edu/catalog/10027 . Cited 2022 Sep 29.

Field MJ, Lohr KN. Clinical Practice Guidelines: Directions for a New Program. Washington DC: National Academy Press. 1990. Available from: https://www.nap.edu/read/1626/chapter/8 Cited 2020 Sep 2.

Dawson A, Henriksen B, Cortvriend P. Guideline Implementation in Standardized Office Workflows and Exam Types. J Prim Care Community Health. 2019;10. Available from: https://pubmed.ncbi.nlm.nih.gov/30900500/ . Cited 2020 Jul 15.

Unverzagt S, Oemler M, Braun K, Klement A. Strategies for guideline implementation in primary care focusing on patients with cardiovascular disease: a systematic review. Fam Pract. 2014;31(3):247–66. Available from: https://academic.oup.com/fampra/article/31/3/247/608680 . Cited 2020 Nov 5.


Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10(1):1–13. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/s13012-015-0242-0 . Cited 2022 May 1.


Mangana F, Massaquoi LD, Moudachirou R, Harrison R, Kaluangila T, Mucinya G, et al. Impact of the implementation of new guidelines on the management of patients with HIV infection at an advanced HIV clinic in Kinshasa, Democratic Republic of Congo (DRC). BMC Infect Dis. 2020;20(1):N.PAG-N.PAG. Available from: https://search.ebscohost.com/login.aspx?direct=true&db=c8h&AN=146325052&amp .

Browman GP, Levine MN, Mohide EA, Hayward RSA, Pritchard KI, Gafni A, et al. The practice guidelines development cycle: a conceptual tool for practice guidelines development and implementation. J Clin Oncol. 1995;13(2):502–12. https://doi.org/10.1200/JCO.1995.13.2.502 .

Killeen SL, Donnellan N, O’Reilly SL, Hanson MA, Rosser ML, Medina VP, et al. Using FIGO Nutrition Checklist counselling in pregnancy: A review to support healthcare professionals. Int J Gynecol Obstet. 2023;160(S1):10–21. Available from: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85146194829&doi=10.1002%2Fijgo.14539&partnerID=40&md5=d0f14e1f6d77d53e719986e6f434498f .

Bauer MS, Damschroder L, Hagedorn H, Smith J, Kilbourne AM. An introduction to implementation science for the non-specialist. BMC Psychol. 2015;3(1):1–12. Available from: https://bmcpsychology.biomedcentral.com/articles/10.1186/s40359-015-0089-9 . Cited 2020 Nov 5.

Cassetti V, M VLR, Pola-Garcia M, AM G, J JPC, L APDT, et al. An integrative review of the implementation of public health guidelines. Prev Med reports. 2022;29:101867. Available from: http://www.epistemonikos.org/documents/7ad499d8f0eecb964fc1e2c86b11450cbe792a39 .

Eccles MP, Mittman BS. Welcome to implementation science. Implementation Science BioMed Central. 2006. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/1748-5908-1-1 .

Damschroder LJ. Clarity out of chaos: Use of theory in implementation research. Psychiatry Res. 2020;1(283):112461.

Handley MA, Gorukanti A, Cattamanchi A. Strategies for implementing implementation science: a methodological overview. Emerg Med J. 2016;33(9):660–4. Available from: https://pubmed.ncbi.nlm.nih.gov/26893401/ . Cited 2022 Mar 7.

Wang Y, Wong ELY, Nilsen P, Chung VC ho, Tian Y, Yeoh EK. A scoping review of implementation science theories, models, and frameworks — an appraisal of purpose, characteristics, usability, applicability, and testability. Implement Sci. 2023;18(1):1–15. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/s13012-023-01296-x . Cited 2024 Jan 22.

Moullin JC, Dickson KS, Stadnick NA, Albers B, Nilsen P, Broder-Fingert S, et al. Ten recommendations for using implementation frameworks in research and practice. Implement Sci Commun. 2020;1(1):1–12. Available from: https://implementationsciencecomms.biomedcentral.com/articles/10.1186/s43058-020-00023-7 . Cited 2022 May 20.

Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322. Available from: /pmc/articles/PMC1508772/?report=abstract. Cited 2022 May 22.


Asada Y, Lin S, Siegel L, Kong A. Facilitators and Barriers to Implementation and Sustainability of Nutrition and Physical Activity Interventions in Early Childcare Settings: a Systematic Review. Prev Sci. 2023;24(1):64–83. Available from: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85139519721&doi=10.1007%2Fs11121-022-01436-7&partnerID=40&md5=b3c395fdd2b8235182eee518542ebf2b .

Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al., editors. Cochrane Handbook for Systematic Reviews of Interventions. version 6. Cochrane; 2022. Available from: https://training.cochrane.org/handbook. Cited 2022 May 23.

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372. Available from: https://www.bmj.com/content/372/bmj.n71 . Cited 2021 Nov 18.

M C, AD O, E P, JP H, S G. Appendix A: Guide to the contents of a Cochrane Methodology protocol and review. Higgins JP, Green S, eds Cochrane Handb Syst Rev Interv. 2011;Version 5.

Kislov R, Pope C, Martin GP, Wilson PM. Harnessing the power of theorising in implementation science. Implement Sci. 2019;14(1):1–8. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/s13012-019-0957-4 . Cited 2024 Jan 22.

Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan-a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):1–10. Available from: https://systematicreviewsjournal.biomedcentral.com/articles/10.1186/s13643-016-0384-4 . Cited 2022 May 20.

JBI. JBI’s Tools Assess Trust, Relevance & Results of Published Papers: Enhancing Evidence Synthesis. Available from: https://jbi.global/critical-appraisal-tools . Cited 2023 Jun 13.

Drisko JW. Qualitative research synthesis: An appreciative and critical introduction. Qual Soc Work. 2020;19(4):736–53.

Pope C, Mays N, Popay J. Synthesising qualitative and quantitative health evidence: A guide to methods. 2007. Available from: https://books.google.com.br/books?hl=pt-PT&lr=&id=L3fbE6oio8kC&oi=fnd&pg=PR6&dq=synthesizing+qualitative+and+quantitative+health+evidence&ots=sfELNUoZGq&sig=bQt5wt7sPKkf7hwKUvxq2Ek-p2Q#v=onepage&q=synthesizing=qualitative=and=quantitative=health=evidence& . Cited 2022 May 22.

Nilsen P, Birken SA, editors. Handbook on implementation science. Edward Elgar Publishing. Available from: https://www.e-elgar.com/shop/gbp/handbook-on-implementation-science-9781788975988.html . Cited 2023 Apr 15.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):1–15. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/1748-5908-4-50 . Cited 2023 Jun 13.

Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76. Available from: https://pubmed.ncbi.nlm.nih.gov/20957426/ . Cited 2023 Jun 11.

Bahtsevani C, Willman A, Khalaf A, Östman M, Ostman M. Developing an instrument for evaluating implementation of clinical practice guidelines: a test-retest study. J Eval Clin Pract. 2008;14(5):839–46. Available from: https://search.ebscohost.com/login.aspx?direct=true&db=c8h&AN=105569473&amp . Cited 2023 Jan 18.

Balbale SN, Hill JN, Guihan M, Hogan TP, Cameron KA, Goldstein B, et al. Evaluating implementation of methicillin-resistant Staphylococcus aureus (MRSA) prevention guidelines in spinal cord injury centers using the PARIHS framework: a mixed methods study. Implement Sci. 2015;10(1):130. Available from: https://pubmed.ncbi.nlm.nih.gov/26353798/ . Cited 2023 Apr 3.


Breimaier HE, Heckemann B, Halfens RJGG, Lohrmann C. The Consolidated Framework for Implementation Research (CFIR): a useful theoretical framework for guiding and evaluating a guideline implementation process in a hospital-based nursing practice. BMC Nurs. 2015;14(1):43. Available from: https://search.ebscohost.com/login.aspx?direct=true&db=c8h&AN=109221169&amp . Cited 2023 Apr 3.

Chou AF, Vaughn TE, McCoy KD, Doebbeling BN. Implementation of evidence-based practices: Applying a goal commitment framework. Health Care Manage Rev. 2011;36(1):4–17. Available from: https://pubmed.ncbi.nlm.nih.gov/21157225/ . Cited 2023 Apr 30.

Porritt K, McArthur A, Lockwood C, Munn Z. JBI Manual for Evidence Implementation. JBI Handbook for Evidence Implementation. JBI; 2020. Available from: https://jbi-global-wiki.refined.site/space/JHEI . Cited 2023 Apr 3.

Jeong HJJ, Jo HSS, Oh MKK, Oh HWW. Applying the RE-AIM Framework to Evaluate the Dissemination and Implementation of Clinical Practice Guidelines for Sexually Transmitted Infections. J Korean Med Sci. 2015;30(7):847–52. Available from: https://pubmed.ncbi.nlm.nih.gov/26130944/ . Cited 2023 Apr 3.

GPC G de trabajo sobre implementación de. Implementación de Guías de Práctica Clínica en el Sistema Nacional de Salud. Manual Metodológico. 2009. Available from: https://portal.guiasalud.es/wp-content/uploads/2019/01/manual_implementacion.pdf . Cited 2023 Apr 3.

Commonwealth of Australia. A guide to the development, implementation and evaluation of clinical practice guidelines. National Health and Medical Research Council; 1998. Available from: https://www.health.qld.gov.au/__data/assets/pdf_file/0029/143696/nhmrc_clinprgde.pdf .

Queensland Health. Guideline implementation checklist: Translating evidence into best clinical practice. 2022.


Quittner AL, Abbott J, Hussain S, Ong T, Uluer A, Hempstead S, et al. Integration of mental health screening and treatment into cystic fibrosis clinics: Evaluation of initial implementation in 84 programs across the United States. Pediatr Pulmonol. 2020;55(11):2995–3004. Available from: https://www.embase.com/search/results?subaction=viewrecord&id=L2005630887&from=export . Cited 2023 Apr 3.

Urquhart R, Woodside H, Kendell C, Porter GA. Examining the implementation of clinical practice guidelines for the management of adult cancers: A mixed methods study. J Eval Clin Pract. 2019;25(4):656–63. Available from: https://search.ebscohost.com/login.aspx?direct=true&db=c8h&AN=137375535&amp . Cited 2023 Apr 3.

Yinghui J, Zhihui Z, Canran H, Flute Y, Yunyun W, Siyu Y, et al. Development and validation of an evaluation tool for guideline implementation. Chinese J Evidence-Based Med. 2022;22(1):111–9. Available from: https://www.embase.com/search/results?subaction=viewrecord&id=L2016924877&from=export .

Breimaier HE, Halfens RJG, Lohrmann C. Effectiveness of multifaceted and tailored strategies to implement a fall-prevention guideline into acute care nursing practice: a before-and-after, mixed-method study using a participatory action research approach. BMC Nurs. 2015;14(1):18. Available from: https://search.ebscohost.com/login.aspx?direct=true&db=c8h&AN=103220991&amp .

Lai J, Maher L, Li C, Zhou C, Alelayan H, Fu J, et al. Translation and cross-cultural adaptation of the National Health Service Sustainability Model to the Chinese healthcare context. BMC Nurs. 2023;22(1). Available from: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85153237164&doi=10.1186%2Fs12912-023-01293-x&partnerID=40&md5=0857c3163d25ce85e01363fc3a668654 .

Zhao J, Li X, Yan L, Yu Y, Hu J, Li SA, et al. The use of theories, frameworks, or models in knowledge translation studies in healthcare settings in China: a scoping review protocol. Syst Rev. 2021;10(1):13. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7792291 .

Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med. 2012;43(3):337–50. Available from: https://pubmed.ncbi.nlm.nih.gov/22898128/ . Cited 2023 Apr 4.

Phulkerd S, Lawrence M, Vandevijvere S, Sacks G, Worsley A, Tangcharoensathien V. A review of methods and tools to assess the implementation of government policies to create healthy food environments for preventing obesity and diet-related non-communicable diseases. Implement Sci. 2016;11(1):1–13. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/s13012-016-0379-5 . Cited 2022 May 1.

Buss PM, Pellegrini FA. A Saúde e seus Determinantes Sociais. PHYSIS Rev Saúde Coletiva. 2007;17(1):77–93.

Pereira VC, Silva SN, Carvalho VKSS, Zanghelini F, Barreto JOMM. Strategies for the implementation of clinical practice guidelines in public health: an overview of systematic reviews. Health Res Policy Syst. 2022;20(1):13. Available from: https://health-policy-systems.biomedcentral.com/articles/10.1186/s12961-022-00815-4 . Cited 2022 Feb 21.

Grimshaw J, Eccles M, Tetroe J. Implementing clinical guidelines: current evidence and future implications. J Contin Educ Health Prof. 2004;24 Suppl 1:S31-7. Available from: https://pubmed.ncbi.nlm.nih.gov/15712775/ . Cited 2021 Nov 9.

Lotfi T, Stevens A, Akl EA, Falavigna M, Kredo T, Mathew JL, et al. Getting trustworthy guidelines into the hands of decision-makers and supporting their consideration of contextual factors for implementation globally: recommendation mapping of COVID-19 guidelines. J Clin Epidemiol. 2021;135:182–6. Available from: https://pubmed.ncbi.nlm.nih.gov/33836255/ . Cited 2024 Jan 25.

Lenzer J. Why we can’t trust clinical guidelines. BMJ. 2013;346(7913). Available from: https://pubmed.ncbi.nlm.nih.gov/23771225/ . Cited 2024 Jan 25.

Molino C de GRC, Ribeiro E, Romano-Lieber NS, Stein AT, de Melo DO. Methodological quality and transparency of clinical practice guidelines for the pharmacological treatment of non-communicable diseases using the AGREE II instrument: A systematic review protocol. Syst Rev. 2017;6(1):1–6. Available from: https://systematicreviewsjournal.biomedcentral.com/articles/10.1186/s13643-017-0621-5 . Cited 2024 Jan 25.

Albers B, Mildon R, Lyon AR, Shlonsky A. Implementation frameworks in child, youth and family services – Results from a scoping review. Child Youth Serv Rev. 2017;1(81):101–16.


Acknowledgements

Not applicable

Funding

This study is supported by the Fundação de Apoio à Pesquisa do Distrito Federal (FAPDF). FAPDF Award Term (TOA) nº 44/2024—FAPDF/SUCTI/COOBE (SEI/GDF – Process 00193–00000404/2024–22). The content in this article is solely the responsibility of the authors and does not necessarily represent the official views of the FAPDF.

Author information

Authors and affiliations

Department of Management and Incorporation of Health Technologies, Ministry of Health of Brazil, Brasília, Federal District, 70058-900, Brazil

Nicole Freitas de Mello & Dalila Fernandes Gomes

Postgraduate Program in Public Health, FS, University of Brasília (UnB), Brasília, Federal District, 70910-900, Brazil

Nicole Freitas de Mello, Dalila Fernandes Gomes & Jorge Otávio Maia Barreto

René Rachou Institute, Oswaldo Cruz Foundation, Belo Horizonte, Minas Gerais, 30190-002, Brazil

Sarah Nascimento Silva

Oswaldo Cruz Foundation - Brasília, Brasília, Federal District, 70904-130, Brazil

Juliana da Motta Girardi & Jorge Otávio Maia Barreto


Contributions

NFM and JOMB conceived the idea and the protocol for this study. NFM conducted the literature search. NFM, SNS, JMG, and JOMB conducted the data collection, with advice and consensus gathering from JOMB. NFM and JMG assessed the quality of the studies. NFM and DFG conducted the data extraction. NFM performed the analysis and synthesis of the results, with advice and consensus gathering from JOMB. NFM drafted the manuscript. JOMB critically revised the first version of the manuscript. All authors revised and approved the submitted version.

Corresponding author

Correspondence to Nicole Freitas de Mello .

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

13012_2024_1389_MOESM1_ESM.docx

Additional file 1: PRISMA checklist. Description of data: Completed PRISMA checklist used for reporting the results of this systematic review.

Additional file 2: Literature search. Description of data: The search strategies adapted for the electronic databases.

13012_2024_1389_MOESM3_ESM.doc

Additional file 3: JBI’s critical appraisal tools for cross-sectional studies. Description of data: JBI’s critical appraisal tools to assess the trustworthiness, relevance, and results of the included studies. This is specific for cross-sectional studies.

13012_2024_1389_MOESM4_ESM.doc

Additional file 4: JBI’s critical appraisal tools for qualitative studies. Description of data: JBI’s critical appraisal tools to assess the trustworthiness, relevance, and results of the included studies. This is specific for qualitative studies.

13012_2024_1389_MOESM5_ESM.doc

Additional file 5: Methodological quality assessment results for cross-sectional studies. Description of data: Methodological quality assessment results for cross-sectional studies using JBI’s critical appraisal tools.

13012_2024_1389_MOESM6_ESM.doc

Additional file 6: Methodological quality assessment results for the qualitative studies. Description of data: Methodological quality assessment results for qualitative studies using JBI’s critical appraisal tools.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

Reprints and permissions

About this article

Cite this article

Freitas de Mello, N., Nascimento Silva, S., Gomes, D.F. et al. Models and frameworks for assessing the implementation of clinical practice guidelines: a systematic review. Implementation Sci 19 , 59 (2024). https://doi.org/10.1186/s13012-024-01389-1


Received : 06 February 2024

Accepted : 01 August 2024

Published : 07 August 2024

DOI : https://doi.org/10.1186/s13012-024-01389-1


  • Implementation
  • Practice guideline
  • Evidence-Based Practice
  • Implementation science

Implementation Science

ISSN: 1748-5908




  • Review Article
  • Open access
  • Published: 08 August 2024

Research progress and intellectual structure of design for digital equity (DDE): A bibliometric analysis based on CiteSpace

  • Baoyi Zhang, ORCID: orcid.org/0000-0003-4479-7587

Humanities and Social Sciences Communications, volume 11, Article number: 1019 (2024)


  • Cultural and media studies
  • Science, technology and society

Digital equity is imperative for realizing the Sustainable Development Goals, particularly SDG9 and SDG10. Recent empirical studies indicate that Design for Digital Equity (DDE) is an effective strategy for achieving digital equity. However, before this review, the overall academic landscape of DDE remained obscure, marked by substantial knowledge gaps. This review employs a rigorous bibliometric methodology to analyze 1705 DDE-related publications, aiming to delineate DDE’s research progress and intellectual structure and identify research opportunities. The retrieval strategy was formulated based on the PICo framework, with the process adhering to the PRISMA systematic review framework to ensure transparency and replicability of the research method. CiteSpace was utilized to visually present the analysis results, including co-occurrences of countries, institutions, authors, keywords, emerging trends, clustering, timeline analyses, and dual-map overlays of publications. The results reveal eight significant DDE clusters closely related to user-centered design, assistive technology, digital health, mobile devices, evidence-based practices, and independent living. A comprehensive intellectual structure of DDE was constructed based on the literature and research findings. The current research interest in DDE lies in evidence-based co-design practices, design issues in digital mental health, acceptance and humanization of digital technologies, digital design for visually impaired students, and intergenerational relationships. Future research opportunities are identified in DDE’s emotional, cultural, and fashion aspects; acceptance of multimodal, tangible, and natural interaction technologies; needs and preferences of marginalized groups in developing countries and among minority populations; and broader interdisciplinary research. This study unveils the multi-dimensional and inclusive nature of methodological, technological, and user issues in DDE research. These insights offer valuable guidance for policy-making, educational strategies, and the development of inclusive digital technologies, charting a clear direction for future research.


Introduction

Digital equity has emerged as a critical factor in achieving the Sustainable Development Goals (SDGs), especially SDG9 (Industry, Innovation, and Infrastructure) and SDG10 (Reduced Inequalities) (United Nations, 2021; UNSD, 2023), amidst the rapid evolution of digital technologies. In our increasingly digitalized society, these technologies amplify and transform existing social inequalities even while offering numerous benefits, leading to more significant disparities in access and utilization (Grybauskas et al., 2022). This situation highlights the critical need for strategies that promote equitable digital participation, ensuring alignment with the overarching objectives of the SDGs. Digital equity is a multi-faceted issue, involving the influence of cultural values on digital access (Yuen et al., 2017), the challenges and opportunities of technology in higher education (Willems et al., 2019), the vital role of government policies in shaping digital divides (King & Gonzales, 2023), and the impact on healthcare access and delivery (Lawson et al., 2023). Equally important are the socioeconomic factors that intersect with digital equity (Singh, 2017) and the pressing need for accessible digital technologies for disabled individuals (Park et al., 2019). These issues are observed globally, necessitating diverse and inclusive strategies.

Design thinking plays an essential role in addressing issues of social equality and accessibility (Persson et al., 2015; Dragicevic et al., 2023a); in other words, it serves as a crucial strategy for reducing social inequality. Indeed, design strategies focused on social equality, also known as Equity-Centered Design (Oliveri et al., 2020; Bazzano et al., 2023), are diverse, including universal design (Mace, 1985), barrier-free design (Cooper et al., 1991), inclusive design (John Clarkson & Coleman, 2015), and Design for All (Bendixen & Benktzon, 2015). Stanford d.school has further developed the Equity-Centered Design Framework based on its design thinking model (Stanford d.school, 2016) to foster empathy and self-awareness among designers in promoting equality. Equity-centered approaches are also a hot topic in academia, especially in areas like education (Firestone et al., 2023) and healthcare (Rodriguez et al., 2023). While these design approaches may have distinct features and positions owing to their developmental stages, national and cultural contexts, and the issues they address, Equity-Centered Design consistently plays a vital role in creating accessible environments and products that can be used by individuals with various abilities and backgrounds (Persson et al., 2015).

Equity-centered design initially encompassed various non-digital products, but with the rapid advancement of digitalization, it has become increasingly critical to ensure that digital technologies are accessible and equitable for all users. This can be referred to as Design for Digital Equity (DDE). However, the current landscape reveals a significant gap in comprehensive research focused on DDE. This gap highlights the need for more focused research and development in this area, where bibliometrics can play a significant role. Through systematic reviews and visualizations, bibliometric analysis can provide insights into the field’s intellectual structure, informing and guiding future research directions in digital equity and design.

Bibliometrics, a term first coined by Pritchard in 1969 (Broadus, 1987), has evolved into an indispensable quantitative tool for analyzing scholarly publications across many research fields. This method, rooted in the statistical analysis of written communication, has significantly enhanced our understanding of academic trends and patterns. Its application spans environmental studies (Wang et al., 2021), economics (Qin et al., 2021), big data (Ahmad et al., 2020), energy (Xiao et al., 2021), medical research (Ismail & Saqr, 2022), and technology acceptance (Wang et al., 2022). By distilling complex publication data into comprehensible trends and patterns, bibliometrics has become a key instrument in shaping our understanding of the academic landscape and guiding future research directions.

In bibliometrics, commonly used tools such as CiteSpace (Chen, 2006), VOSviewer (van Eck & Waltman, 2010), and HistCite (Garfield, 2009) are integral to co-citation analysis and data visualization. Among these, CiteSpace, developed by Professor Chen (Chen, 2006), is a Java-based tool pivotal in advancing co-citation analysis for data visualization and analysis. Renowned for its integration of burst detection, betweenness centrality, and heterogeneous network analysis, it is essential for identifying research frontiers and tracking trends across various domains. Chen demonstrates the versatility of CiteSpace in fields ranging from regenerative medicine to the scientific literature, showcasing its proficiency in extracting complex insights from data sets (Chen, 2006). Its structured methodology, encompassing time slicing, thresholding, and more, facilitates comprehensive analysis of co-citations and keywords; this not only strengthens the analytical capabilities of CiteSpace but also helps researchers comprehend trends within specific domains (Chen et al., 2012; Ping et al., 2017). CiteSpace is therefore a precious tool in academic research, particularly for disciplines that require in-depth analysis of evolving trends and patterns.

After acknowledging the significance of DDE in the rapidly evolving digital environment, it becomes imperative to explore the academic contours of this field to bridge knowledge gaps, a critical prerequisite for addressing social inequalities within digital technology development. We aim to scrutinize DDE’s research progress and intellectual structure, analyzing a broad spectrum of literature with the aid of bibliometric and CiteSpace methodologies. Accordingly, four research questions (RQs) have been identified to guide this investigation. The detailed research questions are as follows:

RQ1: What are the trends in publications in the DDE field from 1995 to 2023?

RQ2: Who are the main contributors, and what are the collaboration patterns in DDE research?

RQ3: What are the current research hotspots in DDE?

RQ4: What is the intellectual structure and future trajectory of DDE?

The remainder of this paper is structured as follows: The Methods section explains our bibliometric approach and data collection for DDE research. The Results section details our findings on publication trends and collaborative networks, addressing RQ1 and RQ2. The Discussion section delves into RQ3 and RQ4, exploring research hotspots and the intellectual structure of DDE. The Conclusion section summarizes our study’s key insights.

In this article, the systematic review of DDE follows the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines (PRISMA, 2023), an evidence-based reporting standard for systematic reviews (Moher et al., 2010). PRISMA was developed to improve the quality of systematic reviews and to enhance the clarity and transparency of research findings (Liberati et al., 2009). To this end, the research workflow in this study incorporates an online tool based on the PRISMA 2020 R package. This tool enables researchers to rapidly generate flowcharts that adhere to the latest updates of the PRISMA statement, ensuring transparency and reproducibility of the research process. The workflow comprises three major stages: Identification, Screening, and Inclusion, as illustrated in Fig. 1.

figure 1

PRISMA flowchart for the DDE systematic review.

Additionally, to obtain high-quality data sources, the Web of Science (referred to as WOS), provided by Clarivate Analytics, was chosen. WOS is typically considered the optimal data source for bibliometric research (van Leeuwen, 2006 ). The WOS Core Collection comprises over 21,000 peer-reviewed publications spanning 254 subject categories and 1.9 billion cited references, with the earliest records traceable back to 1900 (Clarivate, 2023 ). To thoroughly explore the research on DDE, this review utilized all databases within the WOS Core Collection as the source for data retrieval.

Search strategy

Developing a rational and effective search strategy is crucial for systematic reviews (Cooper et al., 2018 ), typically necessitating a structured framework to guide the process (Sayers, 2008 ). This approach ensures comprehensive and relevant literature coverage. To comprehensively and accurately assess the current state and development of “Design for Digital Equity,” this paper employs the PICo (participants, phenomena of interest, and context) model as its search strategy, a framework typically used for identifying research questions in systematic reviews (Stern et al., 2014 ). While the PICo framework is predominantly utilized within clinical settings for systematic reviews, its structured approach to formulating research questions and search strategies is equally applicable across many disciplines beyond the clinical environment. This adaptability makes it a suitable choice for exploring the multi-faceted aspects of digital equity in a non-clinical context (Nishikawa-Pacher, 2022 ).

This review, structured around the PICo framework, sets three key concepts (search term groups): Participants (P): any potential digital users; Phenomena of Interest (I): equity-centered design; Context (Co): digital equity. To explore the development and trends of DDE comprehensively, various forms of search terms are included in each PICo element. The determination of search terms is a two-stage process. In the first stage, core terms of critical concepts such as equity-centered design, digital equity, and Design for Digital Equity, along with their synonyms, spelling variants, and acronyms, are included in the list of candidate search terms. Wildcards (*) are used to expand the search range and ensure the inclusion of all variants and derivatives of critical terms, enhancing the thoroughness and depth of the search. However, studies have indicated the challenge of identifying semantically unrelated terms relevant to the research (Chen, 2018 ). To address this issue, the second stage of developing the search strategy involves reading domain-specific literature reviews retrieved with these core terms. This literature-based discovery (LBD) approach can identify hidden, previously unknown relationships, finding significant connections between different bodies of literature (Kastrin & Hristovski, 2021 ). The candidate term list is then reviewed, refined, or expanded by domain experts. Finally, a search string (Table 1 ) is constructed by linking all terms within each search term group with the Boolean operator OR (this term or that term) and linking the groups themselves with AND (this group of terms and that group of terms).
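As a concrete illustration of this assembly step, the short sketch below builds such a search string in Python, OR-joining terms within each PICo group and AND-joining the groups. The term groups shown are hypothetical placeholders, not the actual terms from Table 1.

```python
# Illustrative sketch: assembling a PICo-style Boolean search string.
# The term groups below are hypothetical stand-ins, not the actual
# search terms used in this review (see Table 1 for those).

pico_groups = {
    "participants": ["user*", "citizen*", "student*"],
    "interest": ['"equity-centered design"', '"inclusive design"', '"universal design"'],
    "context": ['"digital equity"', '"digital divide"', '"digital inclusion"'],
}

def build_search_string(groups):
    """OR-join the terms within each group, then AND-join the groups."""
    clauses = ["(" + " OR ".join(terms) + ")" for terms in groups.values()]
    return " AND ".join(clauses)

query = build_search_string(pico_groups)
print(query)
```

The wildcard `*` is left to the database engine (here, WOS) to expand; the script only concatenates the Boolean structure.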

Inclusion criteria

Following the PRISMA process (Fig. 1 ), literature in the identification phase was filtered using automated tools based on publication data attributes such as titles, subjects, and full texts, or on specific criteria such as publication names, publication time ranges, and types of publication sources. Given the necessity for a systematic and extensive exploration of DDE research, this review employed an advanced search using “ALL” instead of “Topic” or “Title” in the search string to ensure broader inclusion of results. No limitations were set on other attributes of the literature. The literature search was conducted on December 5, 2023, resulting in 1747 publications exported to Excel for further screening.

During the literature screening phase, the authors reviewed titles and abstracts, excluding 11 publications unrelated to DDE research. Three papers were inaccessible in the full-text acquisition phase. The remaining 1729 publications were then subjected to full-text review based on the following inclusion and exclusion criteria. Eventually, 1705 papers meeting the criteria were imported into CiteSpace for analysis.

Papers were included in this review if they met the following criteria:

They encompassed all three elements of PICo: stakeholders or target users of DDE, design relevance, and digitalization aspects.

They had transparent research methodologies, whether empirical or review studies employing qualitative, quantitative, or mixed methods.

They were written in English.

Papers were excluded if they:

Focused solely on digital technology, unrelated to design, human, and social factors.

Contained terms with identical acronyms but different meanings, e.g., in medicine “ICT” can stand for “inflammation of connective tissue” rather than “information and communication technology.”

Were unrelated to topics of social equality.

Were in languages other than English.

Data analysis

To address Research Question 1 (“What are the publication trends in the DDE field from 1995 to 2023?”), this study utilized CiteSpace to generate annual trend line graphs for descriptive analysis. This analysis revealed the annual development trends within the DDE research field and, by examining the citation frequency of literature across different years, identified vital research nodes and significant breakthroughs. The burst detection feature in CiteSpace was then used to identify key research papers and themes marking periods of significantly increased research activity. For Research Question 2 (“Who are the main contributors to DDE research, and what are their collaboration patterns?”), nodes for countries, institutions, cited authors, cited publications, and keywords were set up in CiteSpace for network analysis. The resulting network diagrams illustrate the collaborative relationships between researchers and institutions, where the size of a node indicates the number of publications by that entity, and the thickness and color of the links represent the strength and frequency of collaborations.

Additionally, critical scholars and publications that act as bridges within the DDE research network were identified through centrality analysis. The keyword analysis focused on co-occurrence, trend development, and clustering. Current research hotspots were revealed using the LSI algorithm in CiteSpace for cluster analysis, and timeline techniques demonstrated how these hotspots have evolved over time. A dual-map overlay analysis was used to reveal citation relationships between disciplines, showcasing the interdisciplinary nature of DDE research.

In the visual displays of CiteSpace, the visual attributes of nodes and links were designed to express the complex logical relationships within the data intuitively. The size of a node typically reflects the publication volume or citation frequency of an entity such as an author, institution, country, or keyword, with larger nodes indicating highly active or influential research focal points. Node color often represents the progress of research over time, with gradients from dark to light indicating the evolution from historical to current research. The outline of a node, whether solid or dashed, differentiates mainstream research areas from marginal or emerging fields. The thickness and color of the links reflect the strength of collaborations or the frequency of citations. These design elements enhance the information hierarchy of the diagrams and improve their usability and accuracy for exploring and analyzing the data, effectively supporting researchers in understanding the structure and dynamics of the academic field. The subsequent Results section describes each visual element in detail.
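To make the link-strength computation behind such co-occurrence networks concrete, the sketch below counts keyword occurrences across records and computes the cosine coefficient, cosine(i, j) = c_ij / sqrt(c_i · c_j), which is one of the link-weighting options CiteSpace offers (and the one selected for Fig. 3). The keyword records are invented examples, not data from this review.

```python
# Minimal sketch of a co-occurrence link-strength computation in the
# style CiteSpace applies: the cosine coefficient
#     cosine(i, j) = c_ij / sqrt(c_i * c_j)
# where c_i is how many records mention keyword i and c_ij how many
# mention both. The records below are hypothetical examples.

from collections import Counter
from itertools import combinations
from math import sqrt

records = [
    ["universal design", "assistive technology"],
    ["universal design", "digital divide"],
    ["universal design", "assistive technology", "digital divide"],
    ["digital health", "digital divide"],
]

occurrence = Counter()      # c_i: records mentioning keyword i
co_occurrence = Counter()   # c_ij: records mentioning both i and j
for keywords in records:
    unique = sorted(set(keywords))
    occurrence.update(unique)
    for a, b in combinations(unique, 2):
        co_occurrence[(a, b)] += 1

def cosine_strength(a, b):
    pair = tuple(sorted((a, b)))
    return co_occurrence[pair] / sqrt(occurrence[a] * occurrence[b])

print(cosine_strength("universal design", "assistive technology"))
```

Larger coefficients yield the thicker links in the network maps; node size would correspond to the raw `occurrence` counts.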

The first section of the Results primarily addresses RQ1: “What are the trends in publications in the DDE field from 1995 to 2023?” The subsequent sections collectively address RQ2: “Who are the main contributors, and what are the collaboration patterns in DDE research?”

Analysis of publication trends

Figure 2 , extracted from the WOS citation analysis report, delineates the progression of annual scholarly publications within the Design for Digital Equity field. This trend analysis resonates with de Solla Price’s model of scientific growth (Price 1963 ), beginning with a slow and steady phase before transitioning into a period of more rapid expansion. Notably, a pronounced spike in publications was observed following 2020, characterized by the global COVID-19 pandemic. This uptick indicates an acute scholarly response to the pandemic, likely propelled by the heightened need for digital equity solutions as the world adapted to unprecedented reliance on digital technologies for communication, work, and education amidst widespread lockdowns and social distancing measures. The graph presents a clear visualization of this scholarly reaction, with the peak in 2021 marking the zenith of research output, followed by a slight retraction, which may suggest a period of consolidation or a pivot towards new research frontiers in the post-pandemic era.

figure 2

Trends in Scholarly Publications on Design for Digital Equity (1997–2023).
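The growth pattern described above can be checked quantitatively by fitting Price's exponential model, N(t) = a·e^(b·t), to annual publication counts via a log-linear least-squares fit. The sketch below uses synthetic counts rather than the actual WOS data, so the fitted rate is purely illustrative.

```python
# Hedged sketch: fitting an exponential growth curve N(t) = a * e^(b*t)
# to annual publication counts with a log-linear least-squares fit,
# in the spirit of de Solla Price's model of scientific growth.
# The counts below are synthetic, not the WOS data used in the paper.

from math import log, exp

years = list(range(2010, 2020))
counts = [5, 6, 8, 11, 14, 19, 26, 35, 47, 63]   # hypothetical

t = [y - years[0] for y in years]
ln_n = [log(c) for c in counts]

n = len(t)
mean_t = sum(t) / n
mean_y = sum(ln_n) / n
# Ordinary least squares on (t, ln N): slope b, intercept ln a.
b = sum((ti - mean_t) * (yi - mean_y) for ti, yi in zip(t, ln_n)) \
    / sum((ti - mean_t) ** 2 for ti in t)
a = exp(mean_y - b * mean_t)

doubling_time = log(2) / b
print(f"growth rate b = {b:.3f}, doubling time = {doubling_time:.1f} years")
```

A short doubling time in the fitted phase would correspond to the "rapid expansion" stage of Price's model; the post-2020 spike discussed above would show up as residuals above the fitted curve.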

Visual analysis by countries or regions

Table 2 presents an overview of the diverse global contributions to research on “Design for Digital Equity,” including the number of publications, centrality, and initial year of engagement for each participating country. The United States stands preeminent with 366 publications, affirming its central role in the domain since the mid-1990s. Despite fewer publications, the United Kingdom boasts the highest centrality, signaling that its research has been notably influential within the academic network since the late 1990s. Since China entered the DDE research arena in 2011, its publication output has grown explosively, reflecting rapid ascension and integration into the field. Furthermore, the extensive volume of publications from Canada and the notable centrality of Spain underscore their substantial and influential research endeavors. The table also recognizes contributions from countries such as Germany, Italy, and Australia, each infusing unique strengths and perspectives into the evolution of DDE research.

Figure 3 , crafted within CiteSpace, delineates the collaborative contours of global research in Design for Digital Equity (DDE). Literature data are input with ‘country’ as the node type and annual segmentation for time slicing, employing the ‘Cosine’ algorithm to gauge the strength of links and the ‘g-index’ ( K  = 25) for selection criteria. The visualization employs a color gradient to denote the years of publication, with the proximity of nodes and the thickness of the interconnecting links articulating the intensity and frequency of collaborative efforts among nations. For instance, the close-knit ties between the United States, Germany, and France underscore a robust tripartite research collaboration within the DDE domain. The size of the nodes corresponds directly to the proportion of DDE publications contributed by each country. Larger nodes, such as those representing the USA and Germany, suggest more publications, indicating significant research activity and influence within the field. Purple nodes, such as those representing England and Canada, signal a strong centrality within the network, suggesting these countries contribute significantly and play a pivotal role in disseminating research findings throughout the network. The intertwining links of varying thickness reveal the nuanced interplay of collaboration: dense webs around European countries, for instance, underscore a rich tradition of continental cooperation, while transatlantic links point to ongoing exchanges between North American and European researchers. Moreover, the appearance of vibrant links extending toward Asian countries such as China and South Korea reflects the expanding scope of DDE research to encompass a truly global perspective, integrating diverse methodologies and insights as the research community tackles the universal challenges of digital equity.

figure 3

Collaborative networks between countries and regions in DDE research.
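For readers unfamiliar with the g-index used as the node-selection criterion above, the sketch below computes the plain g-index: the largest g such that the top g papers jointly accumulate at least g² citations. CiteSpace's selection criterion scales this threshold by the factor k (k = 25 in Fig. 3); the citation counts here are made up for illustration.

```python
# Sketch of the g-index underlying CiteSpace's g-index (k) selection
# criterion. Plain definition: the largest g such that the g most-cited
# papers together have at least g^2 citations. CiteSpace multiplies the
# threshold by a factor k; citation counts below are hypothetical.

def g_index(citations):
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cites, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

print(g_index([10, 8, 5, 4, 3, 1]))
```

With the sample counts, the top 5 papers hold 30 citations (30 ≥ 25) but the top 6 hold only 31 (< 36), so the g-index is 5; raising k admits proportionally more nodes into the network.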

Visual analysis by institutions

Table 3 presents a quantified synopsis of institutional research productivity and centrality within the Design for Digital Equity field. The University of Toronto emerges as the most prolific contributor, with 64 publications and a centrality score of 0.06, indicating a significant impact on the field since 2008. The University System of Georgia and the Georgia Institute of Technology, with 27 and 25 publications respectively, have each registered a centrality of 0.01 since 2006, denoting sustained scholarly activity over time. Oslo Metropolitan University, with 23 publications and a centrality of 0.02 since 2016, and the Consiglio Nazionale delle Ricerche, with 17 publications since 2009, highlight the diverse international engagement in DDE research. The table also notes the early contributions of the Pennsylvania Commonwealth System of Higher Education, with 17 publications since 2004, although its centrality remains at 0.01. Institutions such as Laval University, Monash University, and the Polytechnic University of Milan show emergent centrality in the field, with recent increases in scholarly output, as indicated by their publication counts of 13, 12, and 12 since 2019, 2020, and 2018, respectively. These data evidence a dynamic and growing research domain characterized by historical depth and contemporary expansion.

Figure 4 displays a network map highlighting the collaborative landscape among institutions in the field of DDE. The University of Toronto commands a central node of substantial size, indicating its leading volume of research output. The University of Alberta and CNR exhibit nodes colored to represent earlier work in the field, establishing their roles as foundational contributors. Inter-institutional links are observable, suggesting active research collaborations. Nodes such as the University of London and the Polytechnic University of Milan, while smaller, are nonetheless integral, denoting active engagement in DDE research. The color coding of nodes corresponds to publication years, with warmer colors indicating more recent research, providing a temporal dimension to the map. This network visualization serves as an empirical tool for assessing the scope and scale of institutional contributions and collaborations in DDE research.

figure 4

Network Map of Institutional Collaboration in DDE.

Analysis by publications

Table 4 delineates the pivotal academic publications contributing to the field, as evidenced by citation count, centrality, and publication year, offering a longitudinal perspective on influence and relevance. ‘Lecture Notes in Computer Science’ leads the discourse with 354 citations and the highest centrality of 0.10 since 2004, indicating its foundational and central role over nearly two decades. It is followed by the ‘Journal of Medical Internet Research,’ with 216 citations since 2013 and a centrality of 0.05, evidencing a robust impact within a shorter timeframe. The relationship between citation count and centrality reveals a pattern of influential cores within the field: publications with higher citation counts generally exhibit greater centrality, suggesting that they serve as reference points within the academic network and are instrumental in shaping the digital equity narrative. The thematic diversity of the publications, ranging from technology-focused titles such as ‘Computers in Human Behavior’ to health-oriented ones such as ‘Disability and Rehabilitation,’ reflects the interdisciplinary nature of research in digital equity, encompassing issues from technological access to health disparities. ‘CoDesign,’ despite its lower position with 101 citations since 2016 and a centrality of 0.01, represents the burgeoning interest in participatory design practices within the field. Its presence underscores the evolving recognition of collaborative design processes as essential to achieving digital equity, particularly in later years as user-centered design principles are increasingly deemed critical for inclusivity in digital environments.

Visual analysis by authors

Table 5 enumerates the most influential authors in the domain of DDE research, ranked by citation count and centrality within the academic network from the year of their first cited work. The table is led by Braun V., with a citation count of 103 and a centrality of 0.13 since 2015, indicating a strong influence on the recent scholarly conversation on DDE. Close behind, the World Health Organization (WHO), with 97 citations and a centrality of 0.10 since 2012, and Nielsen J., with an impressive centrality of 0.32 and 89 citations since 1999, denote long-standing and significant contributions to the field. The high centrality scores, particularly Nielsen’s, suggest these authors’ works are central nodes in the citation network, acting as crucial reference points for subsequent research. Further down the list, authors such as Davis F.D. and Venkatesh V. are notable for their scholarly impact, with citation counts of 74 and 59, respectively, and corresponding centrality measures that reflect their substantial roles in shaping DDE discourse. The table also recognizes authoritative entities such as the United Nations, reflecting the global and policy-oriented dimensions of digital equity research. The presence of ISO, with a citation count of 25 since 2015, underscores the importance of standardization in the digital equity landscape. The diversity of authors and entities, from individual researchers to global organizations, highlights the multi-faceted nature of research in DDE, encompassing technical, social, and policy-related studies.

Figure 5 illustrates the collaborative network between cited authors in the DDE study. The left side of the network map is characterized by authors with cooler-colored nodes, indicating earlier contributions to digital equity research. Among these, Wright Ronald stands out with a significantly large node and a purple outline, highlighting his seminal role and the exceptional citation burst in his work. Cool colors suggest these authors laid the groundwork for subsequent research, with their foundational ideas and theories continuing to be pivotal in the field. Transitioning across the network to the right, a gradual shift to warmer node colors is observed, representing more recent contributions to the field. Here, the nodes increase in size, notably for authors such as Braun V. and the WHO, indicating a high volume of publications and a more contemporary impact on the field. The links between these recent large nodes and the earlier contributors, such as Wright Ronald, illustrate a scholarly lineage and intellectual progression within the research community. The authors with purple outlines on the right side of the map indicate recent citation bursts, signifying that their work has quickly become influential in the academic discourse of digital equity research. These bursts are likely a response to the evolution of digital technologies and the emerging challenges of equality within the digital space.

figure 5

Collaborative networks of globally cited authors in DDE research.

Visual analysis by keywords

Co-occurring keywords reflect the research hotspots in the field of DDE. Table 6 presents the top 30 keywords with the highest frequency and centrality, while Fig. 6 shows their co-occurrence network. The visualization in Fig. 6 elucidates the developmental trajectory of pivotal terms in the digital equity research domain. The nodes corresponding to ‘universal design,’ ‘assistive technology,’ and ‘user-centered design’ have lighter centers within their larger structures, signifying an established presence and maturation over time within scholarly research. The robust, blue-hued link connecting ‘universal design’ and ‘assistive technology’ underscores the strong historical interrelation of these foundational concepts. Nodes encircled by purple outlines, such as ‘universal design,’ ‘inclusive design,’ and ‘participatory design,’ denote a high degree of centrality, marking them as critical junctions within the research network, cited widely across diverse studies and integral to the thematic constellation of the field. Of particular note are the nodes with red cores, such as ‘design for all,’ ‘digital health,’ ‘visual impairment,’ ‘mobile phone,’ and ‘digital divide.’ These signal emergent focal points of research, indicating recent surges in academic interest and citation frequency. Such bursts are emblematic of the field’s dynamic nature, pointing to evolving hotspots of scholarly investigation. For instance, the red core of ‘digital health’ suggests an intensifying dialogue around integrating digital technology in health-related contexts, a pertinent issue in modern discourse.

figure 6

Keyword co-occurrence networks in the DDE domain.

Building upon the red-core nodes in Fig. 6 that denote keyword bursts, Fig. 7, “Top 17 Keywords with the Strongest Citation Bursts in DDE,” offers a quantified analysis of these emergent trends. The figure tabulates the keywords that experienced the most significant surges in academic citations within the field of DDE from 1997 to 2023. Keywords such as ‘design for all’ and ‘universal design’ anchor the list, showcasing foundational bursts starting in 1997, with ‘design for all’ maintaining a high citation strength of 20.66 until 2015 and ‘universal design’ demonstrating enduring relevance through 2016. This signifies the long-standing and evolving discourse surrounding these concepts. In contrast, terms like ‘mobile phone,’ ‘digital health,’ and ‘participation’ represent the newest fronts in DDE research, with citation bursts emerging as late as 2020 and 2021, reflecting the rapid ascent of these topics in the recent scholarly landscape. The strength of these bursts, particularly the 7.07 for ‘mobile phone,’ suggests a burgeoning field of study responsive to technological advancements and societal shifts. The bar-graph component of the figure visually represents the duration of each burst, with red bars marking the start and end years; the length and position of these bars corroborate the temporal analysis, mapping the lifecycle of each keyword’s impact.

figure 7

Top 17 Keywords with the Strongest Citation Bursts in DDE.
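CiteSpace's burst detection implements Kleinberg's two-state burst model. As a deliberately simplified intuition pump rather than the actual algorithm, the sketch below flags years whose count jumps well above the running baseline mean; the yearly counts are hypothetical.

```python
# Rough illustrative sketch of burst detection. CiteSpace uses
# Kleinberg's two-state automaton; this simplified stand-in just flags
# years whose count is at least s times the mean of all prior years,
# which conveys the intuition of a "citation burst" without the full
# model. The yearly counts below are hypothetical.

def find_bursts(counts, s=2.0):
    """Return indices of years whose count >= s * mean of prior years."""
    bursts = []
    for i in range(1, len(counts)):
        baseline = sum(counts[:i]) / i
        if baseline > 0 and counts[i] >= s * baseline:
            bursts.append(i)
    return bursts

yearly = [2, 3, 2, 3, 2, 8, 10, 4, 3]   # e.g. mentions of one keyword per year
print(find_bursts(yearly))
```

Here the two consecutive flagged years would correspond to one burst interval (a red bar in Fig. 7); Kleinberg's model additionally assigns each burst a strength, which this toy version does not attempt.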

The authors have conducted a keyword clustering analysis on the data presented in Fig. 6 , aiming to discern the interrelationships between keywords and delineate structured knowledge domains within the field of DDE. Utilizing the Latent Semantic Indexing (LSI) algorithm to derive the labeling of clusters, they have effectively crystallized seven distinct clusters in DDE research, as depicted in Fig. 8 . The cluster represented in red, labeled ‘#0 universal design,’ signifies a group of closely related concepts that have been pivotal in discussions on making design accessible to all users. This cluster’s central placement within the figure suggests its foundational role in DDE. Adjacent to this, in a lighter shade of orange, is the ‘#1 user-centered design’ cluster, indicating a slightly different but related set of terms emphasizing the importance of designing with the end-user’s needs and experiences in mind. The ‘#2 assistive technology’ cluster, shown in yellow, groups terms around technologies designed to aid individuals with disabilities, signifying its specialized yet crucial role in promoting digital equity. Notably, the #3 digital health cluster in green and the #4 mobile phone cluster in turquoise highlight the intersection of digital technology with health and mobile communication, illustrating the field’s expansion into these dynamic research areas. The ‘#6 participatory design’ cluster in purple and ‘#7 independent living’ cluster in pink emphasize collaboration in design processes and the empowerment of individuals to live independently, respectively.

figure 8

Keyword clustering analysis map for DDE research.

In addition, the timeline function in CiteSpace was used to present the seven clusters in Fig. 8 and the core keywords they contain (the threshold for Label was set to 6) year by year, as shown in Fig. 9 . The timeline graph delves deeper into the clusters’ developmental stages and the interconnections between keywords. In the #0 universal design cluster, the term ‘universal design’ dates back to 1997, alongside ‘assistive technology,’ ‘user participation,’ and ‘PWDs,’ which together marked the inception phase of DDE research within this cluster, when the focus was on creating accessible environments and products for the broadest possible audience. With the advancement of digital technologies, terms like ‘artificial intelligence’ in 2015, ‘digital accessibility’ in 2018, and the more recent ‘students with disabilities’ have emerged as new topics within this cluster. Alongside #0 universal design, the #6 participatory design cluster has a similarly lengthy history, with terms like ‘computer access’ and ‘design process’ highlighting the significance of digital design within it. Moreover, many terms in this timeline network refer to specific populations, such as ‘PWDs,’ ‘children,’ ‘aging users,’ ‘adults,’ ‘students,’ ‘blind people,’ ‘stroke patients,’ ‘family caregivers,’ ‘persons with mild cognitive impairments,’ ‘active aging,’ and ‘students with disabilities,’ revealing the user groups to which DDE research needs to pay special attention; the recent focus on ‘mild cognitive impairments’ and ‘students with disabilities’ in particular reflects emerging issues.
The particularly dense links in the graph hint at correlations between keywords. For instance, ‘children’ and ‘affective computing’ within the #6 participatory design cluster are strongly related, as are the recent terms ‘education,’ ‘autism spectrum disorder,’ and ‘occupational therapy,’ revealing specific issues within the important topic of education in DDE research. Other densely linked nodes include ‘digital divide,’ ‘user acceptance,’ ‘social participation,’ ‘interventions,’ ‘social inclusion,’ and ‘design for independence,’ reflecting issues that have received scholarly attention in the social sciences. Finally, on the digital technology front, ‘smart home’ emerged in 2006, alongside ‘digital divide’ and ‘user interface’ in the same year. The emergence of ‘VR’ in 2014, ‘AR’ in 2016, and ‘wearable computing’ in 2017 likewise points to the digital-technology focal points worth attention in DDE research.

figure 9

Timeline plot of 8 clusters of DDE keywords.

Dual-map overlays analysis of publications clusters

The dual-map overlay functionality of CiteSpace was utilized to present a panoramic visualization of the knowledge base in DDE research (Fig. 10 ). This technique maps the citation dynamics between clusters of citing and cited publications, revealing the field’s interdisciplinary nature and patterns of scholarly communication. The left side of the figure depicts clusters of citing publications, showcasing the newer disciplinary domains within DDE research; the right side represents clusters of cited publications, reflecting the research foundations of DDE studies. Different colored dots within each cluster indicate the distribution of publications in that cluster. The arcs spanning the visualization illustrate the citation relationships between publications, with the thickness of each arc corresponding to citation volume. These citation trajectories from citing to cited clusters demonstrate the knowledge transfer and intellectual lineage of current DDE research within and across disciplinary boundaries.

figure 10

The left side represents citing publication clusters and the right side represents cited publication clusters.

Notably, the Z-score algorithm converged on the arcs with the strongest associations, yielding thicker arcs in green and blue. This indicates that the foundation of DDE research stems from two main disciplinary areas on the cited (right) side, namely ‘5. health, nursing, medicine’ and ‘7. psychology, education, social.’ These two areas are cited extensively by the citing clusters ‘2. medicine, medical, clinical,’ ‘10. economics, economic, political,’ and ‘6. psychology, education, health’ on the left side. In other words, the knowledge frontier of DDE research is concentrated in medicine and psychology, and its knowledge bases likewise lie in the domains of health and psychology, with a bidirectional cross-disciplinary citation relationship between the two areas. Additionally, the red arcs emanating from the ‘1. mathematics, systems, mathematical’ citing cluster reach multiple clusters on the right side, forming a divergent structure, which confirms that the mathematics-oriented frontier of DDE research rests on a broader range of disciplines. These distinct network structures macroscopically reveal the overall developmental pattern of DDE research.

Hotspots and emerging trends

To answer RQ3, based on the research findings, the literature was re-engaged to reveal the research hotspots and emerging trends of DDE. These hotspots and trends are primarily concentrated in the following areas:

Embracing co-design and practical implementation in inclusive and universal design research

Research in inclusive and universal design increasingly emphasizes co-design with stakeholders, reflected in significant growth in publications (see ‘CoDesign’ in Table 4 ). In the digital context, transitioning from theory to practice in equity-centered design calls for enhanced adaptability and feasibility of traditional design theories. This shift requires a pragmatic and progressive approach, aligning with recent research (Zhang et al., 2023 ). Furthermore, evidence-based practices in DDE (Cluster #6) are integral to this dimension, guiding the pragmatic application of design theories.

Focusing on digital mental health and urban-rural inequalities

In DDE, critical issues like the digital divide and mental health are central concerns. The focus on digital and mobile health, highlighted in Fig. 9 , shows a shift towards using technology to improve user engagement and address health challenges. As highlighted by Cosco et al. ( 2021 ), mental health has emerged as a crucial focus in DDE, underscoring the need for designs that support psychological well-being. Additionally, ageism (Mannheim et al., 2023 ) and stereotypes (Nicosia et al., 2022 ) influence technology design in DDE, pointing to societal challenges that must be addressed for more inclusive digital solutions. Patten et al.’s ( 2022 ) focus on smoking cessation in rural communities indicates a growing emphasis on reducing health disparities, ensuring that digital health advancements are inclusive and far-reaching. Together, these trends highlight the importance of a holistic approach that considers technological, societal, and health-related factors.

Integration of empathetic, contextualized, and non-visual digital technologies

In the realm of DDE, the technology dimension showcases a range of emerging trends and research hotspots characterized by advancements in immersive technologies, assistive devices, and interactive systems. Technologies like VR (Bortkiewicz et al., 2023 ) and AR (Creed et al., 2023 ) are revolutionizing user experiences, offering enhanced empathy and engagement while raising new challenges. The growth in mobile phone usage (Cluster #4) and the development of 3D-printed individualized assistive devices (IADs) (Lamontagne et al., 2023 ) reflect an increasing emphasis on personalization and catering to diverse user needs. Tangible interfaces (Aguiar et al., 2023 ) and haptic recognition systems (Lu et al., 2023 ) make digital interactions more intuitive. The integration of cognitive assistive technology (Roberts et al., 2023 ) and brain-computer interfaces (BCI) (Padfield et al., 2023 ) is opening new avenues for user interaction, particularly for those with cognitive or physical limitations. The exploration of Social Assistive Robots (SAR) (Kaplan et al., 2024 ) and the application of IoT (Almukadi, 2023 ) illustrate a move towards socially aware and interconnected digital ecosystems, while voice recognition technologies (Berner & Alves, 2023 ) are enhancing accessibility. Edge computing (Walczak et al., 2023 ) represents a shift towards decentralized and user-oriented solutions.

Designing for intergenerational relationships, students with disabilities, and the visually impaired

The growth curve of DDE publications closely tracks the concurrent trends of digitization and rapid global aging, as shown in Fig. 2. The concept of active aging, championed by the WHO (World Health Organization, 2002), exerts a substantial influence. This is evident across multiple indicators: the large number of DDE papers published in the journal GERONTOLOGIST (109 articles), the prominent "elderly people" node in the keyword co-occurrence network, and its notable burst strength in the keyword emergence analysis (strength = 3.62). Moreover, in 2011, China, the country with the largest elderly population globally, contributed 73 articles related to DDE (Table 2), further emphasizing the growing demand for future DDE research focusing on the elderly. Within DDE studies on the elderly, intergenerational relationships (Li & Cao, 2023) represent an emerging research area. Two further emerging trends center on students with disabilities and the visually impaired; the term "students with disabilities" in Fig. 9 illustrates the former. This is reflected in the focus on inclusive digital education (Lazou & Tsinakos, 2023) and the digital health needs of the visually impaired (Yeong et al., 2021), highlighting the expanding scope of user-centric DDE research.
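The keyword co-occurrence networks referenced above rest on a simple counting principle: two keywords are linked whenever they appear in the same publication, and the link weight is the number of such publications. The sketch below uses hypothetical keyword lists purely for illustration; CiteSpace's actual pipeline additionally applies burst detection and link normalization, which are not shown here.

```python
from collections import Counter
from itertools import combinations

# Toy keyword lists, one per publication (hypothetical data)
papers = [
    ["digital divide", "older adults", "telehealth"],
    ["digital divide", "accessibility", "older adults"],
    ["co-design", "accessibility", "older adults"],
]

# Count every unordered keyword pair within each paper;
# sorting makes the pair key order-independent
cooccurrence = Counter()
for kws in papers:
    for a, b in combinations(sorted(set(kws)), 2):
        cooccurrence[(a, b)] += 1

# The heaviest pairs become the strongest edges in the network
print(cooccurrence.most_common(3))
```

In a real analysis, nodes such as "elderly people" become prominent because they accumulate high total edge weight across thousands of publications.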

The intellectual structure of DDE

Previous studies have dissected DDE through various disciplinary lenses, often yielding isolated empirical findings. However, a comprehensive synthesis that contemplates the intricate interplay among DDE constructs has remained conspicuously absent. To fill this gap and answer RQ4, an intellectual structure encapsulating the entirety of DDE was developed, amalgamating user demographics, design strategies, interdisciplinary approaches, and overarching societal implications. This holistic structure, depicted in Fig. 11, elucidates the multi-faceted approach required to achieve digital equity, integrating diverse user needs with tailored design strategies and bridging technological innovation with interdisciplinary methodologies. Its core function is to guide the creation of inclusive digital environments that are accessible, engaging, and responsive to the varied demands of a broad user spectrum.

Figure 11: Design for Digital Equity (DDE) intellectual structure.

At the core of discussions surrounding digital equity lies the extensively examined issue of the digital divide, a well-documented challenge that scholars continue to explore (Gallegos-Rejas et al., 2023). This is illustrated by the concentric circles of the red core in the keyword co-occurrence analysis depicted in Fig. 6, reflecting the persistent gaps in digital access and literacy that disproportionately affect marginalized groups. This divide extends beyond mere connectivity to encompass the nuances of social engagement (Almukadi, 2023), where the ability to participate meaningfully in digital spaces becomes a marker of societal inclusion. As Bochicchio et al. (2023) and Jetha et al. (2023) note, employment is a domain where digital inequities manifest, creating barriers to employment inclusion. Similarly, feelings of loneliness, social isolation (Chung et al., 2023), and deficits in social skills (Estival et al., 2023) are exacerbated in the digital realm, where interactions often require different competencies. These social dimensions of DDE underscore the need for a more empathetic and user-informed approach to technology design, one that caters to the nuanced needs of diverse populations, including medication reminders and telehealth solutions (Gallegos-Rejas et al., 2023), while minimizing cognitive load (Gomez-Hernandez et al., 2023) and advancing digital health equity (Ha et al., 2023).

The critical element of the DDE intellectual structure is design strategy, as evidenced by clusters #0 (generic design) and #6 (participatory design), which contain the most prominent nodes in the keyword clustering in Part IV of this paper. Digital transformation through design thinking (Oliveira et al., 2024), user-centered design (Stawarz et al., 2023), and the co-design of 3D-printed assistive technologies (Aflatoony et al., 2023; Benz et al., 2023; Ghorayeb et al., 2023) reflect the trend towards personalized and participatory design processes. Empathy emerges as a recurrent theme, both in contextualizing user experiences (Bortkiewicz et al., 2023) and in visualizing personal narratives (Gui et al., 2023), reinforcing the need for emotional durability (Huang et al., 2023) and accessible design (Jonsson et al., 2023). These approaches are not merely theoretical but are grounded in the pragmatics of participatory design (Kinnula et al., 2023), the living-labs approach (Layton et al., 2023), and virtual collaborative design workshops (Peters et al., 2023), all of which facilitate the co-creation of solutions that resonate with the lived experiences of users.

One of the significant distinctions between DDE and traditional fairness-centered design lies in technical specifications. Supporting these strategies are fundamental theories and standards such as the Web Content Accessibility Guidelines (WCAG) (Jonsson et al., 2023), the Technology Acceptance Model (TAM) (Alvarez-Melgarejo et al., 2023), and socio-technical systems (STS) (Govers & van Amelsvoort, 2023), which provide the ethical and methodological framework for DDE initiatives. Additionally, digital ethnography (Joshi et al., 2023) and the Person-Environment-Tool (PET) model (Jarl & Lundqvist, 2020) offer valuable perspectives for analyzing and designing the intricate interplay of human, technological, and environmental interactions.
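Because WCAG is cited here as a foundational standard, one of its quantifiable criteria can illustrate how such guidelines become testable design constraints. The sketch below implements the WCAG 2.x relative-luminance and contrast-ratio formulas (the function names are ours); success criterion 1.4.3 requires a ratio of at least 4.5:1 for normal text.

```python
def _linearize(c: float) -> float:
    # sRGB channel (0-1) to linear light, per the WCAG 2.x definition
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    # Weighted sum of linearized R, G, B (WCAG coefficients)
    r, g, b = (_linearize(v / 255) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    # (L_lighter + 0.05) / (L_darker + 0.05)
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background yields the maximum ratio, 21:1
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

Automated checks of this kind are one way the "technical specifications" dimension of DDE can be operationalized in practice.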

Another noteworthy discovery highlighted by the preceding findings is the rich interdisciplinary approach within the field of DDE. This interdisciplinary nature, exemplified by the integration of diverse knowledge domains, is evident in the publication analysis of DDE (Table 4) and is visually demonstrated through the overlay of disciplinary citation networks (Fig. 10). Strategies such as gamification (Aguiar et al., 2023), music therapy (Chen & Norgaard, 2023), and multimodal communication strategies (Given et al., 2023) underline the synergistic potential of integrating diverse domains to foster more inclusive digital environments. Cognitive Behavioral Therapy (Kayser et al., 2023), multimedia advocacy (Watts et al., 2023), arts-based methods (Miller & Zelenko, 2022), storytelling (Ostrowski et al., 2021), and reminiscence therapy (Unbehaun et al., 2021) are not merely adjuncts but integral components that enhance the relevance and efficacy of DDE interventions.

Equally important, the relationship between the target users of DDE and digital technologies, encompassing attitudes, needs, challenges, risks, and capacity indicators, requires focused attention as a design consideration. Positive outlooks envision digital transformation as a new norm post-pandemic for individuals with disabilities (Aydemir-Döke et al., 2023), while others display varied sentiments (Bally et al., 2023) or even hostile attitudes, as seen in the challenges visually impaired users face with online shopping (Cohen et al., 2023). These attitudes interplay with 'Needs' that span essential areas, from services to recreation, highlighting the importance of 'Capacity Indicators' like digital literacy and digital thinking (Govers & van Amelsvoort, 2023) to bridge these gaps. The 'Challenges and Risks' associated with DDE, such as the adverse impacts of apps in medical contexts (Babbage et al., 2023) and ergonomic issues arising from immersive technologies (Creed et al., 2023), present barriers that must be mitigated to foster a conducive environment for digital engagement. Despite a generally positive attitude toward digital transformation, low usage rates (Dale et al., 2023), usability concerns (Davoody et al., 2023), cultural differences in thinking, and the need for a humanizing digital transformation (Dragicevic et al., 2023b) underscore the complexity of achieving digital equity. The widespread resistance to and abandonment of rehabilitative technologies (Mitchell et al., 2023) further emphasize the need for DDE strategies that are culturally sensitive and user-friendly.

Going deeper, the arrows signify dynamic interrelationships among various components within the DDE intellectual structure. “Needs” drive the design and application of “Digital Technologies,” which in turn inspire “Innovative” solutions and approaches. Feedback from these innovations influences “Attitudes,” which, along with “Needs,” can pose “Challenges and Risks,” thereby shaping the “Capacity Indicators” that gauge proficiency in navigating the digital landscape. This cyclical interplay ensures that the DDE framework is not static but an evolving guide responsive to the changing landscape of digital equity.

Future research directions

In identifying research gaps and future directions, innovative research opportunities were derived from the temporal attributes of the visual and intellectual maps:

Emotional, cultural, and aesthetic factors in human-centered design: Universal Design (UD) and Design for All (DFA) will remain central themes in DDE. However, affective computing and user preferences must be explored (Alves et al., 2020 ). Beyond functional needs, experiential demands such as aesthetics, self-expression, and creativity, often overlooked in accessibility guidelines, are gaining recognition (Recupero et al., 2021 ). The concept of inclusive fashion (Oliveira & Okimoto, 2022 ) underscores the need to address multi-faceted user requirements, including fashion needs, cultural sensitivity, and diversity.

Digital technology adoption and improving digital literacy: The adoption of multimodal and multisensory interactions is gaining increased attention, with a growing focus on voice, tangible, and natural interaction technologies, alongside research into technology acceptance, in line with the findings of Li et al. (2023). Exploring these interactive methods is crucial for enhancing user engagement and experience. However, there is a notable gap in research on the acceptance of many cutting-edge digital technologies. Additionally, investigating how design strategies can enhance digital literacy represents a valuable area of study.

Expanding the Geographic and Cultural Scope: The literature indicates that the situation of DDE in developing countries (Abbas et al., 2014; Nahar et al., 2015) warrants in-depth exploration. The current literature and the distribution of research institutions show a significant gap in DDE research in these regions, especially in rural areas (as seen in Tables 2 and 3 and Figs. 3 and 4). Most research is concentrated in developed countries, with insufficient focus on developing nations. Meanwhile, even within developed countries, research on DDE concerning minority groups (Pham et al., 2022) and Indigenous populations in high-income countries (Henson et al., 2023) is almost nonexistent. This reveals a critical research gap: even in economically advanced nations, the needs and preferences of marginalized groups are often overlooked. These groups may face unique challenges and needs that mainstream research has yet to explore or understand.

Multi-disciplinary Research in Digital Equity Design: While publication analysis (Table 4 ) and knowledge domain flow (Fig. 10 ) reveal the interdisciplinary nature of DDE, the current body of research predominantly focuses on computer science, medical and health sciences, sociology, and design. This review underscores the necessity of expanding research efforts across a broader spectrum of disciplines to address the diverse needs inherent in DDE adequately. For instance, the fusion of art, psychology, and computer technology could lead to research topics such as “Digital Equity Design Guidelines for Remote Art Therapy.” Similarly, the amalgamation of education, computer science, design, and management studies might explore subjects like “Integrating XR in Inclusive Educational Service Design: Technological Acceptance among Special Needs Students.” These potential research areas not only extend the scope of DDE but also emphasize the importance of a holistic and multi-faceted approach to developing inclusive and accessible digital solutions.

Practical implications

This study conducted an in-depth bibliometric and visualization analysis of the Design for Digital Equity (DDE) field, revealing key findings on publication trends, significant contributors and collaboration patterns, key clusters, research hotspots, and intellectual structure. These insights bear directly on policy-making, interdisciplinary collaboration, design optimization, and educational resource allocation. The analysis of publication trends provides policymakers with data to support digital inclusivity policies, particularly in education and health services, ensuring fair access to new technologies for all social groups, especially marginalized ones. The analysis of significant contributors and collaboration patterns highlights the role of interdisciplinary cooperation in developing innovative solutions, which is crucial for organizations and businesses designing products for specific groups, such as disabled and elderly people, and for promoting active aging policies. Identifying key clusters and research hotspots guides the focus of future technological developments, enhancing the social adaptability and market competitiveness of designs. The construction of the intellectual structure showcases the critical dimensions of user experience within DDE and the internal logic connecting its elements. This provides a foundation for deeper user involvement and more precise responses to needs in design research and practice, particularly in developing solutions and assessing their effectiveness, so that design outcomes truly reflect end-user expectations and actual use scenarios.

Limitations

Nevertheless, this systematic review is subject to certain limitations. First, sourcing data exclusively from WOS is a constraint, as specific functionalities like dual-map overlays are uniquely tailored to WOS bibliometric data. Future studies could expand the scope by exploring DDE research in databases such as Scopus and Google Scholar, as well as grey literature. Additionally, while a comprehensive search string for DDE was employed, the results were influenced by the timing of the search and by research institutions' subscription coverage of the database. Moreover, the possibility of relevant terms existing beyond the search string cannot be discounted. Second, despite adherence to the PRISMA guidelines for literature acquisition and screening, subjectivity may have influenced the authors during the inclusion and exclusion process, particularly while reviewing abstracts and full texts to select publications. Furthermore, the reliance solely on CiteSpace as the bibliometric tool introduces another limitation: the findings are contingent on the features and algorithms of the version used (6.2.r6 advanced). Future research could incorporate additional bibliometric tools or newer versions to provide a more comprehensive analysis.

Conclusion

This systematic review aimed to delineate the academic landscape of DDE by exploring its known and unknown aspects, including research progress, intellectual structure, research hotspots and trends, and future research recommendations. Prior to this review, these facets remained unclear. To address these questions, a structured retrieval strategy based on PICo and a PRISMA process yielded 1705 publications, which were analyzed using CiteSpace for publication trends, geographic distribution of research collaborations, core publications, keyword co-occurrence, emergence, clustering, timelines, and dual-map overlays of publication disciplines. These visual presentations underpin a proposed DDE intellectual structure, although the literature data are drawn solely from the WOS database. This framework could serve as a guide for future research addressing these crucial issues. The DDE intellectual structure integrates the research literature, particularly its eight thematic clusters. It not only displays the overall intellectual structure of DDE at a macro level but also reveals the intrinsic logic between its elements. Most notably, as pointed out at the beginning of this review, digital equity, as a critical factor in achieving the sustainable development goals, requires human-centered design thinking. An in-depth discussion of the research findings reveals that the development of DDE is characterized by a multi-dimensional approach encompassing a wide range of societal, technological, and user-specific issues. Furthermore, emerging trends indicate that the future trajectory of DDE will be more diverse and inclusive, targeting a broad spectrum of user needs and societal challenges. Finally, this review proposes four specific directions for future research, guiding researchers dedicated to related disciplines.

Data availability

The datasets generated or analyzed during the current study are available in the Dataverse repository: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/S5XXFB .

Abbas A, Hussain M, Iqbal M, Arshad S, Rasool S, Shafiq M, Ali W, Yaqub N (2014) Barriers and reforms for promoting ICTs in rural areas of Pakistan. In: Marcus A (ed), pp 391–399

Aflatoony L, Lee SJ, Sanford J (2023) Collective making: Co-designing 3D printed assistive technologies with occupational therapists, designers, and end-users. Assist Technol 35:153–162. https://doi.org/10.1080/10400435.2021.1983070

Aguiar LR, Rodríguez FJ, Aguilar JR, Plascencia VN, Mendoza LM, Valdez JR, Pech JR, am Leon, Ortiz LE (2023) Implementing gamification for blind and autistic people with tangible interfaces, extended reality, and universal design for learning: two case studies. Appl. Sci.-Basel 13. https://doi.org/10.3390/app13053159

Ahmad I, Ahmed G, Shah SAA, Ahmed E (2020) A decade of big data literature: analysis of trends in light of bibliometrics. J Supercomput 76:3555–3571. https://doi.org/10.1007/s11227-018-2714-x

Almukadi W (2023) Smart scarf: An IOT-based solution for emotion recognition. Eng Technol Appl Sci Res 13:10870–10874. https://doi.org/10.48084/etasr.5952

Alvarez-Melgarejo M, Pedraza-Avella AC, Torres-Barreto ML (2023) Acceptance assessment of the software MOTIVATIC WEB by university educators. Int J Learn Technol 18:344–363. https://doi.org/10.1504/IJLT.2023.134585

Alves T, Natálio J, Henriques-Calado J, Gama S (2020) Incorporating personality in user interface design: A review. Personality and Individual Differences 155. https://doi.org/10.1016/j.paid.2019.109709

Aydemir-Döke D, Owenz M, Spencer B (2023) Being a disabled woman in a global pandemic: A focus group study in the United States and policy recommendations. Disability & Society. https://doi.org/10.1080/09687599.2023.2225207

Babbage, Drown J, van Solkema M, Armstrong J, Levack W, Kayes N (2023) Inpatient trial of a tablet app for communicating brain injury rehabilitation goals. Disabil. Rehabil.-Assist. Technol. https://doi.org/10.1080/17483107.2023.2167009

Bally EL, Cheng DM, van Grieken A, Sanz MF, Zanutto O, Carroll A, Darley A, Roozenbeek B, Dippel DW, Raat H (2023) Patients’ Perspectives Regarding Digital Health Technology to Support Self-management and Improve Integrated Stroke Care: Qualitative Interview Study. J Med Internet Res 25. https://doi.org/10.2196/42556

Bazzano AN, Noel L-A, Patel T, Dominique CC, Haywood C, Moore S, Mantsios A, Davis PA (2023) Improving the engagement of underrepresented people in health research through equity-centered design thinking: qualitative study and process evaluation for the development of the grounding health research in design toolkit. JMIR Form Res 7:e43101. https://doi.org/10.2196/43101

Bendixen K, Benktzon M (2015) Design for all in Scandinavia – A strong concept. Appl Erg 46:248–257. https://doi.org/10.1016/j.apergo.2013.03.004

Benz C, Scott-Jeffs W, Revitt J, Brabon C, Fermanis C, Hawkes M, Keane C, Dyke R, Cooper S, Locantro M, Welsh M, Norman R, Hendrie D, Robinson S (2023) Co-designing a telepractice journey map with disability customers and clinicians: Partnering with users to understand challenges from their perspective. Health Expect. https://doi.org/10.1111/hex.13919

Berner K, Alves (2023) A scoping review of the literature using speech recognition technologies by individuals with disabilities in multiple contexts. Disabil Rehabil -Assist Technol 18:1139–1145. https://doi.org/10.1080/17483107.2021.1986583

Bochicchio V, Lopez A, Hase A, Albrecht J, Costa B, Deville A, Hensbergen R, Sirfouq J, Mezzalira S (2023) The psychological empowerment of adaptive competencies in individuals with Intellectual Disability: Literature-based rationale and guidelines for best training practices. Life Span Disabil 26:129–157. https://doi.org/10.57643/lsadj.2023.26.1_06

Bortkiewicz A, Józwiak Z, Laska-Lesniewicz A (2023) Ageing and its consequences - the use of virtual reality (vr) as a tool to visualize the problems of elderly. Med Pr 74:159–170. https://doi.org/10.13075/mp.5893.01406

Broadus RN (1987) Toward a definition of "bibliometrics". Scientometrics 12:373–379. https://doi.org/10.1007/BF02016680

Chen C (2006) CiteSpace II: Detecting and visualizing emerging trends and transient patterns in scientific literature. J Am Soc Inf Sci 57:359–377. https://doi.org/10.1002/asi.20317

Chen C (2018) Eugene Garfield’s scholarly impact: a scientometric review. Scientometrics 114:489–516. https://doi.org/10.1007/s11192-017-2594-5

Chen C, Hu Z, Liu S, Tseng H (2012) Emerging trends in regenerative medicine: a scientometric analysis in CiteSpace. Expert Opin Biol Ther 12:593–608. https://doi.org/10.1517/14712598.2012.674507

Chen YA, Norgaard M (2023) Important findings of a technology-assisted in-home music-based intervention for individuals with stroke: a small feasibility study. Disabil. Rehabil.-Assist. Technol. https://doi.org/10.1080/17483107.2023.2274397

Chung JE, Gendron T, Winship J, Wood RE, Mansion N, Parsons P, Demiris G (2023) Smart Speaker and ICT Use in Relationship With Social Connectedness During the Pandemic: Loneliness and Social Isolation Found in Older Adults in Low-Income Housing. Gerontologist. https://doi.org/10.1093/geront/gnad145

Clarivate (2023) Web of Science Core Collection - Clarivate. https://clarivate.com/products/scientific-and-academic-research/research-discovery-and-workflow-solutions/webofscience-platform/web-of-science-core-collection/ . Accessed December 14, 2023

Cohen AH, Fresneda JE, Anderson RE (2023) How inaccessible retail websites affect blind and low vision consumers: their perceptions and responses. J Serv Theory Pract 33:329–351. https://doi.org/10.1108/JSTP-08-2021-0167

Cooper C, Booth A, Varley-Campbell J, Britten N, Garside R (2018) Defining the process to literature searching in systematic reviews: a literature review of guidance and supporting studies. BMC Med Res Methodol 18:85. https://doi.org/10.1186/s12874-018-0545-3

Cooper BA, Cohen U, Hasselkus BR (1991) Barrier-free design: a review and critique of the occupational therapy perspective. Am J Occup Ther 45:344–350. https://doi.org/10.5014/ajot.45.4.344

Cosco TD, Fortuna K, Wister A, Riadi I, Wagner K, Sixsmith A (2021) COVID-19, Social Isolation, and Mental Health Among Older Adults: A Digital Catch-22. J Med Internet Res 23. https://doi.org/10.2196/21864

Creed C, Al-Kalbani M, Theil A, Sarcar S, Williams I (2023) Inclusive AR/VR: accessibility barriers for immersive technologies. Univ Access Inf Soc. https://doi.org/10.1007/s10209-023-00969-0

Dale J, Nanton V, Day T, Apenteng P, Bernstein CJ, Smith GG, Strong P, Procter R (2023) Uptake and use of care companion, a web-based information resource for supporting informal carers of older people: mixed methods study. JMIR Aging 6. https://doi.org/10.2196/41185

Davoody N, Eghdam A, Koch S, Hägglund M (2023) Evaluation of an electronic care and rehabilitation planning tool with stroke survivors with Aphasia: Usability study. JMIR Human Factors 10. https://doi.org/10.2196/43861

Dragicevic N, Vladova G, Ullrich A (2023a) Design thinking capabilities in the digital world: A bibliometric analysis of emerging trends. Front Educ 7. https://doi.org/10.3389/feduc.2022.1012478

Dragicevic N, Hernaus T, Lee RW (2023b) Service innovation in Hong Kong organizations: Enablers and challenges to design thinking practices. Creat Innov Manag 32:198–214. https://doi.org/10.1111/caim.12555

Estival S, Demulier V, Renaud J, Martin JC (2023) Training work-related social skills in adults with Autism Spectrum Disorder using a tablet-based intervention. Human-Comput Interact. https://doi.org/10.1080/07370024.2023.2242344

Firestone AR, Cruz RA, Massey D (2023) Developing an equity-centered practice: teacher study groups in the preservice context. J Teach Educ 74:343–358. https://doi.org/10.1177/00224871231180536

Gallegos-Rejas VM, Thomas EE, Kelly JT, Smith AC (2023) A multi-stakeholder approach is needed to reduce the digital divide and encourage equitable access to telehealth. J Telemed Telecare 29:73–78. https://doi.org/10.1177/1357633X221107995

Garfield E (2009) From the science of science to Scientometrics visualizing the history of science with HistCite software. J Informetr 3:173–179. https://doi.org/10.1016/j.joi.2009.03.009

Ghorayeb A, Comber R, Gooberman-Hill R (2023) Development of a smart home interface with older adults: multi-method co-design study. JMIR Aging 6. https://doi.org/10.2196/44439

Given F, Allan M, Mccarthy S, Hemsley B (2023) Digital health autonomy for people with communication or swallowing disability and the sustainable development goal 10 of reducing inequalities and goal 3 of good health and well-being. Int J Speech-Lang Pathol 25:72–76. https://doi.org/10.1080/17549507.2022.2092212

Gomez-Hernandez M, Ferre X, Moral C, Villalba-Mora E (2023) Design guidelines of mobile apps for older adults: systematic review and thematic analysis. JMIR Mhealth and Uhealth 11. https://doi.org/10.2196/43186

Govers M, van Amelsvoort P (2023) A theoretical essay on socio-technical systems design thinking in the era of digital transformation. Gio-Gr -Interakt -Organ -Z Fuer Angew Org Psychol 54:27–40. https://doi.org/10.1007/s11612-023-00675-8

Grybauskas A, Stefanini A, Ghobakhloo M (2022) Social sustainability in the age of digitalization: A systematic literature Review on the social implications of industry 4.0. Technol Soc 70:101997. https://doi.org/10.1016/j.techsoc.2022.101997

Gui F, Yang JY, Wu QL, Liu Y, Zhou J, An N (2023) Enhancing caregiver empowerment through the story mosaic system: human-centered design approach for visualizing older adult life stories. JMIR Aging 6. https://doi.org/10.2196/50037

Ha S, Ho SH, Bae YH, Lee M, Kim JH, Lee J (2023) Digital health equity and tailored health care service for people with disability: user-centered design and usability study. J Med Internet Res 25. https://doi.org/10.2196/50029

Henson C, Chapman F, Cert G, Shepherd G, Carlson B, Rambaldini B, Gwynne K (2023) How older indigenous women living in high-income countries use digital health technology: systematic review. J Med Internet Res 25. https://doi.org/10.2196/41984

Huang XY, Kettley S, Lycouris S, Yao Y (2023) Autobiographical design for emotional durability through digital transformable fashion and textiles. Sustainability 15. https://doi.org/10.3390/su15054451

Ismail II, Saqr M (2022) A quantitative synthesis of eight decades of global multiple sclerosis research using bibliometrics. Front Neurol 13:845539. https://doi.org/10.3389/fneur.2022.845539

Jarl G, Lundqvist LO (2020) An alternative perspective on assistive technology: The person-environment-tool (PET) model. Assist Technol 32:47–53. https://doi.org/10.1080/10400435.2018.1467514

Jetha A, Bonaccio S, Shamaee A, Banks CG, Bültmann U, Smith PM, Tompa E, Tucker LB, Norman C, Gignac MA (2023) Divided in a digital economy: Understanding disability employment inequities stemming from the application of advanced workplace technologies. SSM-Qual Res Health 3. https://doi.org/10.1016/j.ssmqr.2023.100293

John Clarkson P, Coleman R (2015) History of inclusive design in the UK. Appl Erg 46(Pt B):235–247. https://doi.org/10.1016/j.apergo.2013.03.002

Jonsson M, Johansson S, Hussain D, Gulliksen J, Gustavsson C (2023) Development and evaluation of ehealth services regarding accessibility: scoping literature review. J Med Internet Res 25. https://doi.org/10.2196/45118

Joshi D, Panagiotou A, Bisht M, Udalagama U, Schindler A (2023) Digital Ethnography? Our experiences in the use of sensemaker for understanding gendered climate vulnerabilities amongst marginalized Agrarian communities. Sustainability 15. https://doi.org/10.3390/su15097196

Kaplan A, Barkan-Slater S, Zlotnik Y, Levy-Tzedek S (2024) Robotic technology for Parkinson’s disease: Needs, attitudes, and concerns of individuals with Parkinson’s disease and their family members. A focus group study. Int J Human-Comput Stud 181. https://doi.org/10.1016/j.ijhcs.2023.103148

Kastrin A, Hristovski D (2021) Scientometric analysis and knowledge mapping of literature-based discovery (1986–2020). Scientometrics 126:1415–1451. https://doi.org/10.1007/s11192-020-03811-z

Kayser J, Wang X, Wu ZK, Dimoji A, Xiang XL (2023) Layperson-facilitated internet-delivered cognitive behavioral therapy for homebound older adults with depression: protocol for a randomized controlled trial. JMIR Res Protocols 12. https://doi.org/10.2196/44210

King J, Gonzales AL (2023) The influence of digital divide frames on legislative passage and partisan sponsorship: A content analysis of digital equity legislation in the US from 1990 to 2020. Telecommun Policy 47:102573. https://doi.org/10.1016/j.telpol.2023.102573

Kinnula M, Iivari N, Kuure L, Molin-Juustila T (2023) Educational Participatory Design in the Crossroads of Histories and Practices - Aiming for digital transformation in language pedagogy. Comput Support Coop Work- J Collab Comput Work Pract. https://doi.org/10.1007/s10606-023-09473-8

Lamontagne ME, Pellichero A, Tostain V, Routhier F, Flamand V, Campeau-Lecours A, Gherardini F, Thébaud M, Coignard P, Allègre W (2023) The REHAB-LAB model for individualized assistive device co-creation and production. Assist Technol. https://doi.org/10.1080/10400435.2023.2229880

Lawson McLean A, Lawson McLean AC (2023) Exploring the digital divide: Implications for teleoncology implementation. Patient Educ Couns 115:107939. https://doi.org/10.1016/j.pec.2023.107939

Layton N, Harper K, Martinez K, Berrick N, Naseri C (2023) Co-creating an assistive technology peer-support community: learnings from AT Chat. Disabil Rehabil -Assist Technol 18:603–609. https://doi.org/10.1080/17483107.2021.1897694

Lazou C, Tsinakos A (2023) Critical Immersive-triggered literacy as a key component for inclusive digital education. Educ Sci 13. https://doi.org/10.3390/educsci13070696

Li G, Li D, Tang T (2023) Bibliometric review of design for digital inclusion. Sustainability 15. https://doi.org/10.3390/su151410962

Li C, Cao M (2023) Designing for intergenerational communication among older adults: A systematic inquiry in old residential communities of China’s Yangtze River Delta. Systems 11. https://doi.org/10.3390/systems11110528

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, Clarke M, Devereaux PJ, Kleijnen J, Moher D (2009) The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med 6:e1000100. https://doi.org/10.1371/journal.pmed.1000100

Lu JY, Liu Y, Lv TX, Meng L (2023) An emotional-aware mobile terminal accessibility-assisted recommendation system for the elderly based on haptic recognition. International J Hum–Comput Interact. https://doi.org/10.1080/10447318.2023.2266793

Mace R (1985) Universal design: barrier-free environments for everyone. Design West 33:147–152

Google Scholar  

Mannheim I, Wouters EJ, Köttl H, van Boekel LC, Brankaert R, van Zaalen Y (2023) Ageism in the discourse and practice of designing digital technology for older persons: a scoping review. Gerontologist 63:1188–1200. https://doi.org/10.1093/geront/gnac144

Miller E, Zelenko O (2022) The Caregiving Journey: Arts-based methods as tools for participatory co-design of health technologies. Social Sci-Basel 11. https://doi.org/10.3390/socsci11090396

Mitchell J, Shirota C, Clanchy K (2023) Factors that influence the adoption of rehabilitation technologies: a multi-disciplinary qualitative exploration. J NeuroEng Rehabil 20. https://doi.org/10.1186/s12984-023-01194-9

Moher D, Liberati A, Tetzlaff J, Altman DG (2010) Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Int J Surg 8:336–341. https://doi.org/10.1016/j.ijsu.2010.02.007

Nahar L, Jaafar A, Ahamed E, Kaish A (2015) Design of a Braille learning application for visually impaired students in Bangladesh. Assist Technol 27:172–182. https://doi.org/10.1080/10400435.2015.1011758

Nicosia J, Aschenbrenner AJ, Adams SL, Tahan M, Stout SH, Wilks H, Balls-Berry JE, Morris JC, Hassenstab J (2022) Bridging the technological divide: stigmas and challenges with technology in digital brain health studies of older adults. Front Digit Health 4. https://doi.org/10.3389/fdgth.2022.880055

Nishikawa-Pacher A (2022) Research questions with PICO: A universal mnemonic. Publications 10:21. https://doi.org/10.3390/publications10030021

Oliveira M, Zancul E, Salerno MS (2024) Capability building for digital transformation through design thinking. Technol Forecast Soc Change 198. https://doi.org/10.1016/j.techfore.2023.122947

de Oliveira RD, Okimoto M (2022) Fashion-related assistive technologies for visually impaired people: a systematic review. Dobras:183–205

Oliveri ME, Nastal J, Slomp D (2020) Reflections on Equity‐Centered Design. ETS Research Report Series 2020:1–11. https://doi.org/10.1002/ets2.12307

Ostrowski AK, Harrington CN, Breazeal C, Park HW (2021). Personal narratives in technology design: the value of sharing older adults’ stories in the design of social robots. Front Robot AI 8. https://doi.org/10.3389/frobt.2021.716581

Padfield N, Anastasi AA, Camilleri T, Fabri S, Bugeja M, Camilleri K (2023). BCI-controlled wheelchairs: end-users’ perceptions, needs, and expectations, an interview-based study. Disabil. Rehabil.-Assist. Technol. https://doi.org/10.1080/17483107.2023.2211602

Park K, So H-J, Cha H (2019) Digital equity and accessible MOOCs: Accessibility evaluations of mobile MOOCs for learners with visual impairments. AJET 35:48–63. https://doi.org/10.14742/ajet.5521

Patten C, Brockman T, Kelpin S, Sinicrope P, Boehmer K, St Sauver J, Lampman M, Sharma P, Reinicke N, Huang M, McCoy R, Allen S, Pritchett J, Esterov D, Kamath C, Decker P, Petersen C, Cheville A (2022) Interventions for Increasing Digital Equity and Access (IDEA) among rural patients who smoke: Study protocol for a pragmatic randomized pilot trial. Contemp Clin Trials 119. https://doi.org/10.1016/j.cct.2022.106838

Persson H, Åhman H, Yngling AA, Gulliksen J (2015) Universal design, inclusive design, accessible design, design for all: different concepts—one goal? On the concept of accessibility—historical, methodological and philosophical aspects. Univ Access Inf Soc 14:505–526. https://doi.org/10.1007/s10209-014-0358-z

Peters D, Sadek M, Ahmadpour N (2023) Collaborative workshops at scale: a method for non-facilitated virtual collaborative design workshops. Int J Hum–Comput Interact. https://doi.org/10.1080/10447318.2023.2247589

Pham Q, El-Dassouki N, Lohani R, Jebanesan A, Young K (2022) The future of virtual care for older ethnic adults beyond the COVID-19 pandemic. J Med Internet Res 24. https://doi.org/10.2196/29876

Ping Q, He J, Chen C (2017) How many ways to use CiteSpace? A study of user interactive events over 14 months. Assoc Info Sci Tech 68:1234–1256. https://doi.org/10.1002/asi.23770

Price DDS (1963) Science since Babylon. Philos Sci 30:93–94

PRISMA (2023) Transparent reporting of systematic reviews and meta-analyses. http://www.prisma-statement.org/ . Accessed December 14, 2023

Qin Y, Wang X, Xu Z, Škare M (2021) The impact of poverty cycles on economic research: evidence from econometric analysis. Econ Res -Ekonomska Istraživanja 34:152–171. https://doi.org/10.1080/1331677X.2020.1780144

Recupero A, Marti P, Guercio S (2021) Enabling inner creativity to surface: the design of an inclusive handweaving loom to promote self-reliance, autonomy, and well-being. Behav Inf Technol 40:497–505. https://doi.org/10.1080/0144929X.2021.1909654

Roberts E, Fan GL, Chen XW (2023) In-lab development of a mobile interface for cognitive assistive technology to support instrumental activities of daily living in dementia homecare. J Aging Environ 37:127–141. https://doi.org/10.1080/26892618.2021.2001710

Rodriguez NM, Burleson G, Linnes JC, Sienko KH (2023) Thinking beyond the device: an overview of human- and equity-centered approaches for health technology design. Annu Rev Biomed Eng 25:257–280. https://doi.org/10.1146/annurev-bioeng-081922-024834

Article   CAS   PubMed   PubMed Central   Google Scholar  

Sayers A (2008) Tips and tricks in performing a systematic review–Chapter 4. Br J Gen Pr 58:136. https://doi.org/10.3399/bjgp08X277168

Singh S (2017) Bridging the gender digital divide in developing countries. J Child Media 11:245–247. https://doi.org/10.1080/17482798.2017.1305604

Stanford d.school (2016) Equity-Centered Design Framework. https://dschool.stanford.edu/resources/equity-centered-design-framework . Accessed December 14, 2023

Stawarz K, Liang IJ, Alexander L, Carlin A, Wijekoon A, Western MJ (2023) Exploring the potential of technology to promote exercise snacking for older adults who are prefrail in the home setting: user-centered design study. JMIR Aging 6. https://doi.org/10.2196/41810

Stern C, Jordan Z, McArthur A (2014) Developing the review question and inclusion criteria. Am J Nurs 114:53–56. https://doi.org/10.1097/01.NAJ.0000445689.67800.86

Unbehaun D, Taugerbeck S, Aal K, Vaziri DD, Lehmann J, Tolmie P, Wieching R, Wulf V (2021) Notes of memories: Fostering social interaction, activity and reminiscence through an interactive music exergame developed for people with dementia and their caregivers. Hum-Comput Interact 36:439–472. https://doi.org/10.1080/07370024.2020.1746910

United Nations (2021) International Day of Older Persons: Digital equity for all ages. ITU/UN tech agency

UNSD UNS (2023) — SDG Indicators. https://unstats.un.org/sdgs/report/2022/ . Accessed December 13, 2023

Van Eck NJ, Waltman L (2010) Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics 84:523–538. https://doi.org/10.1007/s11192-009-0146-3

van Leeuwen T (2006) The application of bibliometric analyses in the evaluation of social science research. Who benefits from it, and why it is still feasible. Scientometrics 66:133–154. https://doi.org/10.1007/s11192-006-0010-7

Walczak R, Koszewski K, Olszewski R, Ejsmont K, Kálmán A (2023) Acceptance of IoT Edge-computing-based sensors in smart cities for universal design purposes. Energies 16. https://doi.org/10.3390/en16031024

Wang Z, Zhou Z, Xu W, Yang D, Xu Y, Yang L, Ren J, Li Y, Huang Y (2021) Research status and development trends in the field of marine environment corrosion: a new perspective. Environ Sci Pollut Res Int 28:54403–54428. https://doi.org/10.1007/s11356-021-15974-0

Wang J, Li X, Wang P, Liu Q, Deng Z, Wang J (2022) Research trend of the unified theory of acceptance and use of technology theory: a bibliometric analysis. Sustainability 14:10. https://doi.org/10.3390/su14010010

Watts P, Kwiatkowska G, Minnion A (2023) Using multimedia technology to enhance self-advocacy of people with intellectual disabilities: Introducing a theoretical framework for ‘Multimedia Advocacy. J Appl Res Intellect Disabil 36:739–749. https://doi.org/10.1111/jar.13107

Willems J, Farley H, Campbell C (2019) The increasing significance of digital equity in higher education. AJET 35:1–8. https://doi.org/10.14742/ajet.5996

World Health Organization (2002) Active aging: A policy framework. WHO, Geneva, Switzerland

Xiao Y, Wu H, Wang G, Mei H (2021) Mapping the Worldwide Trends on Energy Poverty Research: A Bibliometric Analysis (1999–2019). Int J Environ Res Public Health 18. https://doi.org/10.3390/ijerph18041764

Yeong JL, Thomas P, Buller J, Moosajee M (2021) A newly developed web-based resource on genetic eye disorders for users with visual impairment (Gene Vis): Usability Study. J Med Internet Res 23. https://doi.org/10.2196/19151

Yuen AHK, Park JH, Chen L, Cheng M (2017) Digital equity in cultural context: exploring the influence of Confucian heritage culture on Hong Kong families. Educ Tech Res Dev 65:481–501. https://doi.org/10.1007/s11423-017-9515-4

Zhang BY, Ma MY, Wang ZS (2023) Promoting active aging through assistive product design innovation: a preference-based integrated design framework. Front Public Health 11. https://doi.org/10.3389/fpubh.2023.1203830

Download references

Acknowledgements

This research was funded by the Humanities and Social Sciences Youth Foundation, Ministry of Education of the People’s Republic of China (21YJC760101).

Author information

Authors and Affiliations

Xiamen University of Technology, Xiamen, China

Baoyi Zhang


Contributions

The author was responsible for all aspects of the work, including the conception, research, analysis, manuscript drafting, critical revision, and final approval of the version to be published. The author ensures the accuracy and integrity of the entire study.

Corresponding author

Correspondence to Baoyi Zhang .

Ethics declarations

Competing interests

The author declares no competing interests.

Ethical approval

Ethical approval was not required as the study did not involve human participants.

Informed consent

Informed consent was not required as this study did not involve human participants.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article.

Zhang, B. Research progress and intellectual structure of design for digital equity (DDE): A bibliometric analysis based on CiteSpace. Humanit Soc Sci Commun 11, 1019 (2024). https://doi.org/10.1057/s41599-024-03552-x


Received : 27 December 2023

Accepted : 01 August 2024

Published : 08 August 2024

DOI : https://doi.org/10.1057/s41599-024-03552-x


Open Access

Peer-reviewed

Registered Report Protocol

Registered Report Protocols describe a study’s rationale and methods for which the planned work was peer-reviewed prior to data collection.


Knowledge about research and facilitation of co-creation with children. Protocol for the article “scoping review of research about co-creation with children”

Authors:

  • Bjarnhild Samland (corresponding author; e-mail: [email protected]). Roles: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Validation, Visualization, Writing – original draft, Writing – review & editing
  • Tone Larsen. Roles: Conceptualization, Formal analysis, Investigation, Validation, Writing – review & editing
  • Lillian Pedersen. Roles: Formal analysis, Supervision, Writing – original draft, Writing – review & editing

Affiliation: Western Norway University of Applied Sciences, Bergen, Norway


  • Published: August 9, 2024
  • https://doi.org/10.1371/journal.pone.0307766


Children and young people’s participation, as stipulated in the Convention on the Rights of the Child, applies to both matters that directly and indirectly affect children. Participation is in some countries recognized as a fundamental right, and children’s engagement is seen as a valuable resource. Assisted by a conceptual understanding of co-creation, children may be enabled to engage and participate in a variety of contexts. Knowledge about research on, and facilitation of, co-creation involving children is the theme of the scoping review presented by this protocol. The protocol outlines a scoping review that will use a systematic approach to synthesize knowledge of research about co-creation with children. By systematically scoping the existing research about co-creation with children, the review will survey the available literature (evidence), identify key concepts, and uncover gaps in knowledge. The overall objective of this scoping review is to gain knowledge of research conducted about all types of co-creation with children, and to identify the gaps that future research should address. This scoping review acknowledges the existence of multiple definitions of co-creation, which vary depending on context. The review will also recognize several other associated concepts, such as co-production, co-design, co-research, and co-innovation, since they are used interchangeably with or align with the understanding of co-creation being reviewed. The methodological framework outlined by the Joanna Briggs Institute (JBI) for scoping reviews will be used as a guide for this review. The PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation will be used during the process.
The databases ERIC (Education Resources Information Centre), Teacher Reference Center, Idunn, Oria, Libris, Kungliga biblioteket, ScienceDirect, ProQuest, Scopus, Academic Search Elite, Web of Science, and Google Scholar will be searched for academic books and articles in May 2024. Grey literature will also be searched for relevant academic references. There are no limitations on date of publication. Language will be limited to English, Norwegian, Swedish, and Danish. Following the selection of studies, data will be extracted and analysed. Ethical approval is not required, because only secondary data is collected. Dissemination will include peer-reviewed publications and presentations at conferences in public innovation, education, and children’s participation contexts.

Citation: Samland B, Larsen T, Pedersen L (2024) Knowledge about research and facilitation of co-creation with children. Protocol for the article “scoping review of research about co-creation with children”. PLoS ONE 19(8): e0307766. https://doi.org/10.1371/journal.pone.0307766

Editor: Sherief Ghozy, Mayo Clinic Minnesota, UNITED STATES OF AMERICA

Received: November 15, 2023; Accepted: July 8, 2024; Published: August 9, 2024

Copyright: © 2024 Samland et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data from this study will be made available together with this manuscript upon study completion.

Funding: This study was funded by the Norwegian Research Council, Western Norway University of Applied Sciences, and Sogndal Municipality.

Competing interests: The authors have declared that no competing interests exist.

Introduction

The aim of this scoping review is to investigate research including facilitation of co-creation with children, with a focus on methodologies and methods. Through this review, we will illuminate approaches that have been employed in researching co-creation with children.

The perspective on children has undergone a transformation in recent decades. Instead of viewing children merely as becomings, we now recognize them as beings with their own rights and agency. This shift in view has been solidified through the UN Convention on the Rights of the Child, with particular emphasis on Article 12 [1, 2]. This article explicitly establishes children’s entitlement to express their views on matters affecting them directly, indirectly, and within the broader societal framework [1, 2]. However, a gap between the stated article and its realisation within policy and practice contexts has been identified by Robinson [3]. Therefore, it seems crucial to gain insight into how children’s voices and perspectives can be included. In this context, co-creation is an approach that may include children and promote their perspectives and voices.

The concept of co-creation emerged in the private sector, where service providers’ collaboration with consumers to create value together, known as value co-creation, was deemed beneficial [4, 5]. Co-creation is defined in various ways depending on the discipline (e.g., marketing, service management, public management) and the context in which it is applied. However, researchers argue that in the public sector, co-creation involves collaboration between public, private, and/or civil actors to create public value by sharing knowledge and resources [5–9]. Ramaswamy & Ozcan [4 p.14] define co-creation as follows:

“Co-creation is joint creation and evolution of value with stakeholding individuals, intensified and enacted through platforms of engagements, virtualized and emerging from ecosystems of capabilities, and actualized and embodied in domains of experiences, expanding wealth–welfare–wellbeing.”

The elements of co-creation emphasized in this definition, value creation, stakeholders’ perspectives, and arenas for co-creation, suggest that co-creation is a multifaceted phenomenon that can be described using various terms and definitions, such as social innovation, co-design, co-production, and co-creation itself [10]. In the literature on the public sector, the terms "co-creation" and "co-production" are often used interchangeably, representing collaborative efforts between civil society and public servants to initiate, plan, design, and implement public services [11]. However, the term co-creation is more commonly used in the context of initiating or designing services, while co-production means involving residents in the service implementation stage [5, 9]. Co-production can be understood as a process in which public organisations hold dominance and focus on linear production. In co-creation, the relationship between stakeholders appears more interactive, equal, and dynamic. Here, value is created in the interaction that takes place within the context of the service user’s wider life experience [5]. The focus of the scoping review is to investigate research projects that include co-creation with children, but related concepts will be taken into consideration if the collaborative interaction (a) includes children and (b) aims to create public and private value. Another precondition is that different stakeholders, both providers and users of a service or product, engage in a collaborative process with children, aiming to promote innovation and improvement. This innovative dimension may lead to terms and concepts related to co-creation, such as co-innovation [6, 10]. In our search we thus included the concepts of co-creation, co-innovation, co-production, co-design, and co-research.

In addition to examining the facilitation of co-creation with children, we will explore how research on co-creation has been conducted. An area of special interest will be to investigate how children’s perspectives on the co-creation process are illuminated. Recently, there has been an expansion in the field of participatory research involving children, driven by a growing emphasis on conducting research with children rather than just on them [12]. There is wide diversity in the ways participatory research with children is conducted, and there is increased attention to ethical dimensions such as power, language, and roles [12–14]. This review aims to explore the research, identify and analyse knowledge gaps, and provide guidance for future research on co-creation that supports further implementation of the Convention on the Rights of the Child in research and society at large. The review question we will explore is: What knowledge exists about the facilitation of co-creation with children, and how has the research been conducted?

The decision to conduct a scoping review was based on a careful assessment of available research methods. While a systematic literature review was initially considered, a scoping review was deemed more appropriate due to the need for a comprehensive overview of a broad topic [15]. There is no universally accepted definition of scoping reviews [15]. Munn et al. [16 p.950] reached a formal consensus with the JBI Scoping Reviews Methodology Group in 2020 on the following definition of scoping reviews:

Scoping reviews are a type of evidence synthesis that aims to systematically identify and map the breadth of evidence available on a particular topic, field, concept, or issue, often irrespective of source (i.e., primary research, reviews, non-empirical evidence) within or across particular contexts. Scoping reviews can clarify key concepts/definitions in the literature and identify key characteristics or factors related to a concept, including those related to methodological research.

The key elements of a scoping review [16] correspond to what is required to address the objective of this review. The objective is to assess the extent of the academic literature that describes co-creation processes involving children, examine how research is conducted, and identify key characteristics or factors related to research about co-creation with children. This scoping review aims to elucidate the existing knowledge regarding children’s participation in research on co-creation. It may also contribute to knowledge development concerning the facilitation of co-creation involving children. It may identify and illuminate exemplary practices in various contexts, such as education, pedagogy, product and service development, and cultural and social development. It serves to offer a comprehensive overview of the diverse contexts in which children engage in co-creation and might show how research methods affect co-creation processes. Additionally, the review serves to enhance theoretical and methodological frameworks within the realm of co-creation involving both children and adults.

A preliminary search of ERIC and Scopus was conducted, and no current or underway systematic reviews or scoping reviews on the topic were identified. However, Williams et al. [17] conducted a scoping review to explore co-creation involving children. It is worth noting that while our research shares some common themes, their scope and context are limited to the enhancement of health-promoting physical environments within publicly accessible spaces.

The proposed scoping review, to identify and explore the concept of co-creation with children, will be conducted in accordance with the JBI methodology for scoping reviews [18], and the study design presented in this protocol has been checked against the "Checklist of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA-P) statement adapted for a scoping review protocol" of Peters et al. [19 p. 956].

The execution of the evidence synthesis will be conducted in accordance with the "Preferred Reporting Items for Systematic Reviews and Meta-analyses extension for scoping reviews (PRISMA-ScR) Checklist" [20]. The use of the PRISMA-ScR checklist will serve as a quality assessment for the study and a guideline for presenting the results of the search and the study inclusion process in the final scoping review.

Types of sources

The scoping review will include existing academic literature on research and facilitation of co-creation involving children and young people aged 0–18 who have been part of a co-creation process, without any contextual limitations. Our focus will be on literature that describes co-creation containing partnerships and innovative elements in which children actively participate. Co-creation without children as participants will be excluded. We will focus exclusively on academic articles, as research methods are crucial to this review. Grey literature and individual case reports will not be included in the synthesis. The scoping review’s focus is to examine how children’s perspectives are integrated into research and the potential connections between research on, and facilitation of, co-creation in which they participate. Thus, we will read the abstracts of grey literature that occur in our search and scan them for references to relevant academic literature.

To ensure a comprehensive analysis, we will consider studies employing various research designs, both qualitative and quantitative approaches, including action research. By incorporating diverse research designs, we aim to capture a broad range of perspectives and approaches in the literature pertaining to co-creation with children.

Search strategy

The reviewers have collaborated with an academic librarian, who is an expert in developing and performing searches for systematic reviews and meta-analyses, to prepare the search strategy.

This ensures transparency and auditability of both the search methodology and its outcomes. The search strategy is not limited by study design or year of dissemination and has been peer reviewed by another information specialist using the Peer Review of Electronic Search Strategies (PRESS) checklist [ 21 ].

The search is limited to title, abstract, and keywords. We have undertaken an initial limited search of ERIC (Education Resources Information Centre) and Scopus to develop our search strategy. We identified several articles on the topic. The text words contained in the titles and abstracts of relevant articles, and the index terms used to describe the articles, have been used to develop a full search strategy. Literature in English, Norwegian, Swedish, and Danish will be included. The databases ERIC, Idunn, Oria, Libris, Kungliga biblioteket, ScienceDirect, ProQuest, Scopus, Academic Search Elite, Web of Science, and Google Scholar will be searched. The search strategy, including all identified keywords and index terms, will be adapted for each included database. The literature search will be supplemented by manually scanning the reference lists of included studies to identify further publications linked to them. Furthermore, academic experts in co-creation with children will be contacted to obtain information on relevant literature.

We have translated the search words into the Scandinavian languages, which we are able to understand. This increases the chance of identifying potentially overlooked scientific articles, especially in text searches. However, the databases used in our search are international and use English index words (thesauri). This means that our search strategy can retrieve articles across all languages, provided they are indexed in these databases. To ensure that the search results are as relevant as possible, we use the proximity indicator “WO”. Table 1 below shows the search concepts that will be used.

Table 1. https://doi.org/10.1371/journal.pone.0307766.t001
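As a concrete illustration of a block-based search strategy of this kind, the sketch below joins synonyms within a concept block with OR and joins the blocks with AND. The function name, the example terms, and the quoting rules are illustrative assumptions, not the protocol's actual search string, and real databases differ in exact syntax (truncation and proximity operators vary).

```python
# Illustrative sketch only: terms and syntax rules are assumptions,
# not the protocol's actual search strategy.

def build_query(concept_blocks):
    """Join synonyms within a block with OR, and blocks with AND."""
    groups = []
    for terms in concept_blocks:
        # Quote multi-word phrases so a database treats them as one unit.
        joined = " OR ".join(f'"{t}"' if " " in t else t for t in terms)
        groups.append(f"({joined})")
    return " AND ".join(groups)

# Hypothetical concept blocks loosely based on the review topic.
blocks = [
    ["co-creation", "co-production", "co-design", "co-research", "co-innovation"],
    ["child*", "young people", "pupil*"],
]
print(build_query(blocks))
# (co-creation OR co-production OR co-design OR co-research OR co-innovation) AND (child* OR "young people" OR pupil*)
```

In practice the same concept blocks would be re-expressed per database, since index terms and operator syntax are database-specific.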

Study/source of evidence selection

Following the search, all identified citations will be collated and uploaded into EndNote 20, and duplicates will be removed. After conducting a pilot, titles and abstracts will be screened by the authors against the inclusion criteria for the review.
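The duplicate-removal step can be approximated in code. The sketch below is a minimal, hypothetical version that collapses records whose normalized titles match; EndNote performs this matching in practice, and the record fields here are assumptions for illustration.

```python
import re

def dedupe(records):
    """Keep the first record per normalized title (case and punctuation ignored)."""
    seen, unique = set(), []
    for rec in records:
        key = re.sub(r"\W+", "", rec["title"].lower())
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical records: an exact duplicate and a formatting variant.
records = [
    {"title": "Co-creation with children"},
    {"title": "Co-creation with children"},    # exact duplicate
    {"title": "Co-Creation, with Children!"},  # formatting variant
]
print(len(dedupe(records)))  # 1
```

Real deduplication would also compare DOIs, authors, and years, since distinct studies can share a title.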

The full text of selected citations will be assessed in detail against the inclusion criteria.

Reasons for exclusion of sources of evidence at full text that do not meet the inclusion criteria will be recorded and reported in the scoping review. Table 2 below summarises the inclusion and exclusion criteria of this review.

Table 2. https://doi.org/10.1371/journal.pone.0307766.t002

Any disagreements that arise between the authors at each stage of the selection process will be resolved through discussion or, if necessary, with an additional reviewer. To ensure uniformity in the evaluation of articles and to promote a shared understanding of the selection of sources among us, we will initiate a pilot phase. In the pilot, all three authors will assess the same ten articles according to a predetermined template for analysis and synthesis, to foster cohesion in our approach and align our assessments. We will adjust our subsequent evidence selection based on the insights gained from reflecting on the pilot’s outcome. To detail the study selection process, a flow diagram will be used (Fig 1).
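A flow diagram of this kind reports how many records survive each stage. A minimal sketch of the arithmetic behind such a diagram follows, with placeholder numbers rather than study results:

```python
def flow_counts(identified, duplicates, title_abstract_excluded, full_text_excluded):
    """Return the record counts at each PRISMA-style stage."""
    screened = identified - duplicates
    full_text = screened - title_abstract_excluded
    included = full_text - full_text_excluded
    return {"identified": identified, "screened": screened,
            "full_text": full_text, "included": included}

# Placeholder numbers for illustration only.
print(flow_counts(identified=1200, duplicates=300,
                  title_abstract_excluded=700, full_text_excluded=150))
# {'identified': 1200, 'screened': 900, 'full_text': 200, 'included': 50}
```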


https://doi.org/10.1371/journal.pone.0307766.g001

Data extraction

The articles will be handled using EndNote and Rayyan, an online tool for conducting reviews [ 22 ]. Data will be extracted from the papers included in the scoping review. The initial stages of the selection process, encompassing the following steps, will be carried out by the primary author (Samland, B.):

  • Removal of duplicates
  • Exclusion of editorials, reports, opinion papers, and conference papers
  • Elimination of studies that do not involve, or relate to, co-creation with children.

In instances where a study’s inclusion status cannot be determined from the abstract alone, it will be retained for further consideration. The second and third authors will each independently review a random 20% of the records. Upon completion, the first author will read all included articles to verify their adherence to the eligibility criteria, and the second and third authors will each read a randomly selected 20% of the studies. Subsequently, all three authors will read all included articles and extract data from each included study. The selection criteria outlined in this protocol are designed to identify the information needed to describe the scope of the field, foster theoretical development, and reveal knowledge gaps, in order to offer recommendations that guide future research.
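Drawing the independent-review subsets described above can be sketched as a seeded random sample, so the same 20% can be reproduced later. The record IDs and seed here are hypothetical; the protocol does not specify how the random selection is made.

```python
# Sketch of drawing a reproducible random 20% sample of screened records for
# the second and third authors' independent checks (IDs and seed are made up).
import random

def sample_20_percent(record_ids, seed):
    rng = random.Random(seed)                 # fixed seed -> reproducible draw
    k = max(1, round(0.2 * len(record_ids)))  # at least one record
    return sorted(rng.sample(record_ids, k))  # sample without replacement

record_ids = list(range(1, 101))              # e.g. 100 screened records
subset = sample_20_percent(record_ids, seed=42)
print(len(subset))  # 20
```

Using different seeds for the second and third authors would give each of them an independently drawn subset.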

The data extracted will include specific details about:

  • Year of publication
  • Country of origin
  • Aims/purpose of co-creation
  • Concept: process of co-creation
  • Participants (children, pupils’ representatives, co-researchers, service users, designers, teachers, leaders, politicians, volunteers, services, organisations)
  • Methodology/methods: facilitation and research methods
  • Outcome (for children, service users, services, organisations), including how the outcome was measured
  • Key findings that relate to the scoping review question(s)
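The fields above can be captured in a structured extraction template. The sketch below expresses it as a Python dataclass; the field names mirror the bullet list, but the structure itself and all example values are our assumption, not part of the protocol.

```python
# Sketch of the data extraction template as a dataclass; field names follow
# the bullet list above, while the structure and values are illustrative.
from dataclasses import dataclass, field, asdict

@dataclass
class ExtractionRecord:
    year: int
    country: str
    aims: str
    concept: str
    participants: list = field(default_factory=list)
    methods: str = ""        # facilitation and research methods
    outcome: str = ""        # incl. how the outcome was measured
    key_findings: str = ""

example = ExtractionRecord(
    year=2020, country="Norway",                 # hypothetical values
    aims="Explore co-creation in schools",
    concept="Co-creation as shared design",
    participants=["children", "teachers"],
)
print(asdict(example))
```

A fixed template like this makes it straightforward to compare what each author extracted during the pilot described below.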

The authors must establish a shared understanding of the data extraction process, so we will conduct a pilot, as with the selection of sources. During the pilot phase, all authors will individually extract data from the same three articles. We will then meet to discuss our findings, reflect on our results, adjust our shared understanding, and plan the remaining extraction. Throughout the data extraction process, the initial draft of the data extraction tool will be revised and adjusted as required, and all modifications will be documented in the scoping review. Any disagreements arising between the authors will be resolved through discussion or, if necessary, with the input of an additional reviewer. Where appropriate, authors of papers will be contacted to request missing or additional data.

Presentation of results

We will use Excel to create an overview of the patterns detected in the articles and the extracted data. This enables us to synthesize and summarize the existing literature on co-creation with children clearly. Our findings will be presented using a combination of textual descriptions, tables, and graphical representations to ensure a useful presentation [ 20 , 23 ]. All data underlying the findings will be fully available without restriction, as part of this manuscript, at the time of publication.
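The kind of tabular overview described above can be sketched with standard-library tools, here counting studies per publication year and emitting a small CSV table. The records are hypothetical, and the real overview will be built in Excel rather than generated this way.

```python
# Illustrative sketch of tabulating extracted records (counts per publication
# year) before export; records are hypothetical, and the real overview is
# built in Excel.
import csv, io
from collections import Counter

extracted = [
    {"year": 2018}, {"year": 2020}, {"year": 2020}, {"year": 2021},
]
counts = Counter(rec["year"] for rec in extracted)

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["year", "n_studies"])
for year in sorted(counts):
    writer.writerow([year, counts[year]])
print(buffer.getvalue())
```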

Limitations

We are aware of several limitations in conducting this scoping review. Linguistic limitations exist: although we have the advantage of mastering the Scandinavian languages and English, language barriers will inevitably limit our access to all relevant literature. However, where an English abstract provides sufficient information, we may include such studies with the help of today’s technology. Our preliminary test searches suggest that language will not necessarily pose a significant challenge, thanks to our broad search strategy. This broad strategy, however, leads to the next limitation: it may reduce precision relative to the review’s purpose. Our test searches indicated this, but they also showed the necessity of a comprehensive approach to cover related concepts. Finally, the scoping review will not advocate for specific methods for exploring or implementing co-creation with children; future work may address this. We hope this scoping review may serve as a basis for systematic reviews and additional research into children’s participation in co-creation and related areas.

This protocol serves as a guideline for investigating the contributions of research on children’s participation in co-creation processes and how children’s perspectives are highlighted. By synthesizing knowledge about research on, and facilitation of, co-creation with children, we hope to influence children’s involvement in future research as well as the facilitation of co-creation across various sectors of society. As authors, we commit to maintaining ongoing dialogue throughout the process and making necessary adjustments along the way. We will conduct continuous quality assessments, with the goal of publishing the review in a suitable peer-reviewed journal. This will make the knowledge available so that it can influence, and hopefully strengthen, children’s participation in different parts of society.

Supporting information

S1 Checklist. Checklist of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA-P) statement adapted for a scoping review protocol.

https://doi.org/10.1371/journal.pone.0307766.s001

Acknowledgments

This protocol for a scoping review is related to the first author’s PhD research project, “Co-creation in the public sector. Children as co-researchers”.

  • 1. UN Committee on the Rights of the Child. (2009). General Comment no. 12.
  • 2. Unicef. (1989). Convention on the Rights of the Child.
  • 4. Ramaswamy V., & Ozcan K. (2014). The co-creation paradigm. Stanford University Press.
  • 5. Brandsen T., & Honingh M. (2018). Definitions of co-production and co-creation. In Co-production and co-creation (pp. 9–17). Routledge.
  • 11. Agger A., & Tortzen A. (2015). Forskningsreview om samskabelse. Roskilde: Roskilde Universitet and University College Lillebælt.
