Research methodology vs. research methods
The research methodology, or research design, is the overall strategy and rationale behind how you carry out your research, whereas research methods are the specific tools and processes you use to gather and analyze the data you need to test your hypothesis.
To further understand research methodology, let’s explore some examples:
a. Qualitative research methodology example: A study exploring the impact of author branding on author popularity might utilize in-depth interviews to gather personal experiences and perspectives.
b. Quantitative research methodology example: A research project investigating the effects of a book promotion technique on book sales could employ a statistical analysis of profit margins and sales before and after the implementation of the method.
c. Mixed-Methods research methodology example: A study examining the relationship between social media use and academic performance might combine both qualitative and quantitative approaches. It could include surveys to quantitatively assess the frequency of social media usage and its correlation with grades, alongside focus groups or interviews to qualitatively explore students’ perceptions and experiences regarding how social media affects their study habits and academic engagement.
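The quantitative strand of a mixed-methods study like example (c) often comes down to a simple correlation. The sketch below, using entirely invented survey numbers, shows the kind of calculation such a study’s methods section might describe (standard-library Python, Pearson correlation computed by hand):

```python
from statistics import mean

# Hypothetical data: daily social media hours vs. course grade (%)
hours  = [1.0, 2.5, 4.0, 0.5, 3.0, 5.0]
grades = [85, 78, 70, 90, 74, 65]

# Pearson correlation coefficient: covariance over the product
# of the two standard deviations (here via sums of squares)
mx, my = mean(hours), mean(grades)
cov  = sum((x - mx) * (y - my) for x, y in zip(hours, grades))
varx = sum((x - mx) ** 2 for x in hours)
vary = sum((y - my) ** 2 for y in grades)
r = cov / (varx * vary) ** 0.5
print(f"r = {r:.2f}")  # strongly negative for this invented sample
```

A value of r near −1 would suggest heavier social media use goes with lower grades in this (made-up) sample; the qualitative strand would then explore why.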
These examples highlight the meaning of methodology in research and how it guides the research process, from data collection to analysis, ensuring the study’s objectives are met efficiently.
When it comes to writing your study, the methodology in research papers or a dissertation plays a pivotal role. A well-crafted methodology section of a research paper or thesis not only enhances the credibility of your research but also provides a roadmap for others to replicate or build upon your work.
Wondering how to write the research methodology section? Follow these steps to create a strong methods chapter:
At the start of a research paper, you will have provided the background of your research and stated your hypothesis or research problem. In this section, you elaborate on your research strategy.
Begin by restating your research question and proceed to explain what type of research you opted for to test it. Depending on your research, here are some questions you can consider:
a. Did you use qualitative or quantitative data to test the hypothesis?
b. Did you perform an experiment in which you collected data, or is your dissertation descriptive/theoretical, without data collection?
c. Did you collect primary data yourself, or did you analyze secondary or existing data as part of your study?
These questions will help you establish the rationale for your study on a broader level, which you will follow by elaborating on the specific methods you used to collect and understand your data.
Now that you have told your reader what type of research you’ve undertaken for the dissertation, it’s time to dig into specifics. State what specific methods you used and explain the conditions and variables involved. Explain what the theoretical framework behind the method was, what samples you used for testing it, and what tools and materials you used to collect the data.
Once you have explained the data collection process, explain how you analyzed and studied the data. Here, your focus is simply to explain the methods of analysis rather than the results of the study.
Here are some questions you can answer at this stage:
a. What tools or software did you use to analyze your results?
b. What parameters or variables did you consider while understanding and studying the data you’ve collected?
c. Was your analysis based on a theoretical framework?
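To make this concrete, here is a minimal, hypothetical sketch (standard-library Python, invented numbers) of the kind of analysis such a section might document: a paired t-statistic comparing book sales before and after a promotion, as in the quantitative example earlier. A real methods section would name the tool, the test, and the variables, much as the comments here do.

```python
import math
from statistics import mean, stdev

# Invented monthly sales figures for the same six titles,
# measured before and after a promotion (paired observations)
sales_before = [120, 95, 130, 110, 105, 98]
sales_after  = [150, 140, 160, 155, 149, 151]

# Paired t-statistic: mean of the differences over its standard error
diffs = [a - b for a, b in zip(sales_after, sales_before)]
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))
print(f"t = {t_stat:.2f} on {len(diffs) - 1} degrees of freedom")
```

In practice you would report the software used (e.g. a statistics package), the test chosen, and the significance threshold, not just the arithmetic.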
Your mode of analysis will change depending on whether you used a quantitative or qualitative research methodology in your study. If you’re working within the hard or physical sciences, you are likely to use a quantitative research methodology (relying on numbers and hard data). If you’re doing a qualitative study in the social sciences or humanities, your analysis may rely on understanding language and the socio-political contexts around your topic. This is why it’s important to establish what kind of study you’re undertaking at the outset.
Now that you have gone through your research process in detail, you’ll also have to make a case for it. Justify your choice of methodology and methods, explaining why it is the best choice for your research question. This is especially important if you have chosen an unconventional approach, or have chosen to study an existing research problem from a different perspective. Compare it with other methodologies, especially those attempted by previous researchers, and discuss what contribution your methodology makes.
No matter how thorough a methodology is, it doesn’t come without hurdles. These are a natural part of scientific research and important to document, so that your peers and future researchers are aware of them. Writing about this aspect of your research process also tells your evaluator that you actively worked to overcome the pitfalls that came your way and refined the research process.
1. Remember who you are writing for. Keeping sight of the reader/evaluator will help you know what to elaborate on and what information they are already likely to have. You’re condensing months’ worth of research into just a few pages, so omit basic definitions and information about general phenomena readers already know.
2. Do not give an overly elaborate explanation of every single condition in your study.
3. Skip details and findings irrelevant to the results.
4. Cite references that back your claim and choice of methodology.
5. Consistently emphasize the relationship between your research question and the methodology you adopted to study it.
To sum it up, what is methodology in research? It’s the blueprint of your research, essential for ensuring that your study is systematic, rigorous, and credible. Whether your focus is on qualitative research methodology, quantitative research methodology, or a combination of both, understanding and clearly defining your methodology is key to the success of your research.
Once you write the research methodology and complete the entire research paper, the next step is to edit your paper. As experts in research paper editing and proofreading services, we’d love to help you perfect your paper!
Here are some other articles that you might find useful:
- What does research methodology mean?
- What types of research methodologies are there?
- What is qualitative research methodology?
- How to determine sample size in research methodology
- What is action research methodology?
Published by Nicolas on March 21, 2024; revised on March 12, 2024
Research methodology is a crucial aspect of any investigative process, serving as the blueprint for the entire research journey. If you are stuck on the methodology section of your research paper, this blog will guide you through what a research methodology is, its types, and how to apply one successfully.
Research methodology can be defined as the systematic framework that guides researchers in designing, conducting, and analyzing their investigations. It encompasses a structured set of processes, techniques, and tools employed to gather and interpret data, ensuring the reliability and validity of the research findings.
Research methodology is not confined to a singular approach; rather, it encapsulates a diverse range of methods tailored to the specific requirements of the research objectives.
Here is why research methodology is important in academic and professional settings.
Research methodology forms the backbone of rigorous inquiry. It provides a structured approach that aids researchers in formulating precise thesis statements, selecting appropriate methodologies, and executing systematic investigations. This, in turn, enhances the quality and credibility of the research outcomes.
In both academic and professional contexts, the ability to reproduce research outcomes is paramount. A well-defined research methodology establishes clear procedures, making it possible for others to replicate the study. This not only validates the findings but also contributes to the cumulative nature of knowledge.
In professional settings, decisions often hinge on reliable data and insights. Research methodology equips professionals with the tools to gather pertinent information, analyze it rigorously, and derive meaningful conclusions.
This informed decision-making is instrumental in achieving organizational goals and staying ahead in competitive environments.
For academic researchers, adherence to robust research methodology is a hallmark of excellence. Institutions value research that adheres to high standards of methodology, fostering a culture of academic rigour and intellectual integrity. Furthermore, it prepares students with critical skills applicable beyond academia.
Research methodology instills a problem-solving mindset by encouraging researchers to approach challenges systematically. It equips individuals with the skills to dissect complex issues, formulate hypotheses, and devise effective strategies for investigation.
In the pursuit of knowledge and discovery, understanding the fundamentals of research methodology is paramount.
Research, in its essence, is a systematic and organized process of inquiry aimed at expanding our understanding of a particular subject or phenomenon. It involves the exploration of existing knowledge, the formulation of hypotheses, and the collection and analysis of data to draw meaningful conclusions.
Research is a dynamic and iterative process that contributes to the continuous evolution of knowledge in various disciplines.
Research takes on various forms, each tailored to the nature of the inquiry. Broadly, research can be categorized into two main types: quantitative and qualitative.
To conduct effective research, one must go through the different components of research methodology. These components form the scaffolding that supports the entire research process, ensuring its coherence and validity.
Research design serves as the blueprint for the entire research project. It outlines the overall structure and strategy for conducting the study. The three primary types of research design are exploratory, descriptive, and explanatory.
Choosing the right data collection methods is crucial for obtaining reliable and relevant information. Common methods include questionnaires and surveys, interviews, observation, and the use of existing data.
Once data is collected, analysis becomes imperative to derive meaningful conclusions. Different methodologies exist for quantitative and qualitative data.
Selecting an appropriate research method is a critical decision in the research process. It determines the approach, tools, and techniques that will be used to answer the research questions.
Quantitative research involves the collection and analysis of numerical data, providing a structured and objective approach to understanding and explaining phenomena.
Experimental research involves manipulating variables to observe the effect on another variable under controlled conditions. It aims to establish cause-and-effect relationships.
Applications: Commonly used in scientific studies and psychology to test hypotheses and identify causal relationships.
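At the heart of such a design is random assignment of participants to conditions. The following toy sketch (hypothetical participant IDs, fixed seed for reproducibility) shows that step in miniature:

```python
import random

# Randomly assign ten hypothetical participants to two equal groups
random.seed(7)  # fixed seed so the assignment can be reproduced
participants = [f"P{i:02d}" for i in range(1, 11)]
random.shuffle(participants)
control, treatment = participants[:5], participants[5:]
print("control:  ", control)
print("treatment:", treatment)
```

Randomization like this is what licenses the causal claim: any systematic difference between groups after the manipulation can be attributed to the manipulated variable rather than to pre-existing differences.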
Survey research gathers information from a sample of individuals through standardized questionnaires or interviews. It aims to collect data on opinions, attitudes, and behaviours.
Applications: Widely employed in social sciences, marketing, and public opinion research to understand trends and preferences.
Descriptive research seeks to portray an accurate profile of a situation or phenomenon. It focuses on answering the ‘what,’ ‘who,’ ‘where,’ and ‘when’ questions.
Applications: Useful in situations where researchers want to understand and describe a phenomenon without altering it, common in social sciences and education.
Qualitative research emphasizes exploring and understanding the depth and complexity of phenomena through non-numerical data.
A case study is an in-depth exploration of a particular person, group, event, or situation. It involves detailed, context-rich analysis.
Applications: Common in social sciences, psychology, and business to investigate complex and specific instances.
Ethnography involves immersing the researcher in the culture or community being studied to gain a deep understanding of their behaviours, beliefs, and practices.
Applications: Widely used in anthropology, sociology, and cultural studies to explore and document cultural practices.
Grounded theory aims to develop theories grounded in the data itself. It involves systematic data collection and analysis to construct theories from the ground up.
Applications: Commonly applied in sociology, nursing, and management studies to generate theories from empirical data.
Research design is the structural framework that outlines the systematic process and plan for conducting a study. It serves as the blueprint, guiding researchers on how to collect, analyze, and interpret data.
Exploratory Design
Exploratory research design is employed when a researcher aims to explore a relatively unknown subject or gain insights into a complex phenomenon.
Applications: Valuable in the early stages of investigation, especially when the researcher seeks a deeper understanding of a subject before formalizing research questions.
Descriptive research design focuses on portraying an accurate profile of a situation, group, or phenomenon.
Applications: Widely used in social sciences, marketing, and educational research to provide detailed and objective descriptions.
Explanatory research design aims to identify the causes and effects of a phenomenon, explaining the ‘why’ and ‘how’ behind observed relationships.
Applications: Commonly employed in scientific studies and social sciences to delve into the underlying reasons behind observed patterns.
Cross-Sectional Design
Cross-sectional designs collect data from participants at a single point in time.
Applications: Suitable for studying characteristics or behaviours that are stable or not expected to change rapidly.
Longitudinal designs involve the collection of data from the same participants over an extended period.
Applications: Ideal for studying developmental processes, trends, or the impact of interventions over time.
Experimental Design
Experimental designs involve manipulating variables under controlled conditions to observe the effect on another variable.
Applications: Commonly used in scientific studies, psychology, and medical research to establish causal relationships.
Non-experimental designs observe and describe phenomena without manipulating variables.
Applications: Suitable for studying complex phenomena in real-world settings where manipulation may not be ethical or feasible.
Effective data collection is fundamental to the success of any research endeavour.
For questionnaires and surveys, key considerations include objective design, a structured format, pilot testing, and the sampling strategy. For interviews, they include establishing rapport, asking open-ended questions, active listening, and attention to ethical considerations.
1. Participant Observation
Participant observation requires immersive participation in the setting being studied, supported by detailed field notes and constant ethical awareness. Whatever the mode of observation, researchers should also strive for objective observation, data reliability, and a contextual understanding of what they record.
1. Using Existing Data
Working with archival or secondary data involves identifying relevant archives, verifying the data, and ensuring its ethical use. Typical pitfalls include incomplete or inaccurate archives, temporal bias, and access limitations.
Conducting research is a complex and dynamic process, often accompanied by a myriad of challenges. Addressing these challenges is crucial to ensure the reliability and validity of research findings.
Common challenges include sampling bias, measurement error, timeline pressures, and selection bias.
Conducting successful research relies not only on the application of sound methodologies but also on strategic planning and effective collaboration. Here are some tips to enhance the success of your research methodology:
Well-defined research objectives guide the entire research process. Clearly articulate the purpose of your study, outlining specific research questions or hypotheses.
A thorough literature review provides a foundation for understanding existing knowledge and identifying gaps. Invest time in reviewing relevant literature to inform your research design and methodology.
A detailed plan serves as a roadmap, ensuring all aspects of the research are systematically addressed. Develop a detailed research plan outlining timelines, milestones, and tasks.
Ethical practices are fundamental to maintaining the integrity of research. Address ethical considerations early, obtain necessary approvals, and ensure participant rights are safeguarded.
Research methodologies evolve, and staying updated is essential for employing the most effective techniques. Engage in continuous learning by attending workshops, conferences, and reading recent publications.
Unforeseen challenges may arise during research, necessitating adaptability in methods. Be flexible and willing to modify your approach when needed, ensuring the integrity of the study.
Research is often an iterative process, and refining methods based on ongoing findings enhances the study’s robustness. Regularly review and refine your research design and methods as the study progresses.
What is research methodology?
Research methodology is the systematic process of planning, executing, and evaluating scientific investigation. It encompasses the techniques, tools, and procedures used to collect, analyze, and interpret data, ensuring the reliability and validity of research findings.
Research methodologies include qualitative and quantitative approaches. Qualitative methods involve in-depth exploration of non-numerical data, while quantitative methods use statistical analysis to examine numerical data. Mixed methods combine both approaches for a comprehensive understanding of research questions.
To write a research methodology, clearly outline the study’s design, data collection, and analysis procedures. Specify research tools, participants, and sampling methods. Justify choices and discuss limitations. Ensure clarity, coherence, and alignment with research objectives for a robust methodology section.
In the methodology section of a research paper, describe the study’s design, data collection, and analysis methods. Detail procedures, tools, participants, and sampling. Justify choices, address ethical considerations, and explain how the methodology aligns with research objectives, ensuring clarity and rigour.
Mixed research methodology combines both qualitative and quantitative research approaches within a single study. This approach aims to enhance the details and depth of research findings by providing a more comprehensive understanding of the research problem or question.
When you’re working on your first piece of academic research, there are many different things to focus on, and it can be overwhelming to stay on top of everything. This is especially true of budding or inexperienced researchers.
If you’ve never put together a research proposal before or find yourself in a position where you need to explain your research methodology decisions, there are a few things you need to be aware of.
Once you understand the ins and outs, handling academic research in the future will be less intimidating. We break down the basics below:
A research methodology encompasses the way in which you intend to carry out your research. This includes how you plan to tackle things like collection methods, statistical analysis, participant observations, and more.
You can think of your research methodology as a formula. One part is how you plan to put your research into practice, and another is why you feel this is the best way to approach it. Your research methodology is ultimately a systematic plan to resolve your research problem.
In short, you are explaining how you will take your idea and turn it into a study, which in turn will produce valid and reliable results that are in accordance with the aims and objectives of your research. This is true whether your paper plans to make use of qualitative methods or quantitative methods.
The purpose of a research methodology is to explain the reasoning behind your approach to your research - you'll need to support your collection methods, methods of analysis, and other key points of your work.
Think of it like writing a plan or an outline for what you intend to do.
When carrying out research, it can be easy to go off-track or depart from your standard methodology.
Tip: Having a methodology keeps you accountable and on track with your original aims and objectives, and gives you a suitable and sound plan to keep your project manageable, smooth, and effective.
With all that said, how do you write out your standard approach to a research methodology?
As a general plan, your methodology should include the following information:
In any dissertation, thesis, or academic journal, you will always find a chapter dedicated to explaining the research methodology of the person who carried out the study, also referred to as the methodology section of the work.
A good research methodology will explain what you are going to do and why, while a poor methodology will lead to a messy or disorganized approach.
You should also be able to justify in this section your reasoning for why you intend to carry out your research in a particular way, especially if it might be a particularly unique method.
Having a sound methodology in place can also help you with the following:
A research instrument is a tool you will use to help you collect, measure and analyze the data you use as part of your research.
The choice of research instrument will usually be yours to make as the researcher and will be whichever best suits your methodology.
There are many different research instruments you can use in collecting data for your research.
Generally, they can be grouped as follows:
These are the most common ways of carrying out research, but it is really dependent on your needs as a researcher and what approach you think is best to take.
It is also possible to combine a number of research instruments if this is necessary and appropriate in answering your research problem.
There are three different types of methodologies, and they are distinguished by whether they focus on words, numbers, or both.
| Data type | What is it? | Methodology |
|---|---|---|
| Quantitative | Focuses on measuring and testing numerical data. | Surveys, tests, existing databases. |
| Qualitative | Collects and analyzes non-numerical data such as words and other textual material. | Observations, interviews, focus groups. |
| Mixed-method | Combines both of the above approaches, yielding data that is both exact and exploratory, which can produce some incredibly interesting results. | Any appropriate combination of quantitative and qualitative methods. |
➡️ Want to learn more about the differences between qualitative and quantitative research, and how to use both methods? Check out our guide for that!
If you've done your due diligence, you'll have an idea of which methodology approach is best suited to your research.
It’s likely that you will have carried out considerable reading and homework before you reach this point and you may have taken inspiration from other similar studies that have yielded good results.
Still, it is important to consider different options before setting your research in stone. Exploring different options available will help you to explain why the choice you ultimately make is preferable to other methods.
If proving your research problem requires you to gather large volumes of numerical data to test hypotheses, a quantitative research method is likely to provide you with the most usable results.
If instead you’re looking to try and learn more about people, and their perception of events, your methodology is more exploratory in nature and would therefore probably be better served using a qualitative research methodology.
It helps to always bring things back to the question: what do I want to achieve with my research?
Once you have conducted your research, you need to analyze it. Here are some helpful guides for qualitative data analysis:
➡️ How to do a content analysis
➡️ How to do a thematic analysis
➡️ How to do a rhetorical analysis
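To give a flavour of what frequency-based content analysis involves, here is a toy sketch (invented interview excerpts and an invented coding scheme) that tallies how often each code word appears in the material:

```python
import re
from collections import Counter

# Invented interview excerpts and a made-up coding scheme
excerpts = [
    "I feel stressed before every deadline",
    "Deadlines make the workload feel heavier",
    "Stress builds up when deadlines overlap",
]
codes = ["stress", "deadline", "workload"]

# Tokenize all excerpts, then count tokens beginning with each code word
# (a crude stand-in for proper stemming)
tokens = re.findall(r"[a-z]+", " ".join(excerpts).lower())
counts = Counter(tokens)
code_freq = {c: sum(n for tok, n in counts.items() if tok.startswith(c))
             for c in codes}
print(code_freq)
```

Real content analysis adds a codebook, inter-rater checks, and interpretation; the counting step above is only the mechanical core.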
Research methodology refers to the techniques used to find and analyze information for a study, ensuring that the results are valid, reliable and that they address the research objective.
Data can typically be organized into four different categories or methods: observational, experimental, simulation, and derived.
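A toy illustration of those four categories (all values invented): observational data is recorded as found, experimental data is measured under a manipulated variable, simulation data is generated by a model, and derived data is transformed from data that already exists.

```python
import random

observational = [21.0, 23.0, 22.0, 25.0]        # field temperatures, recorded as-is
experimental = {"control": 50, "treated": 63}   # outcomes under a manipulated variable

random.seed(1)
simulation = [random.gauss(22.0, 1.0) for _ in range(5)]  # model-generated values

derived = [c * 1.8 + 32 for c in observational]  # Celsius converted to Fahrenheit
print(derived)
```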
Writing a methodology section is a process of introducing your methods and instruments, discussing your analysis, providing more background information, addressing your research limitations, and more.
Your research methodology section will need a clear research question and proposed research approach. You’ll need to add a background, introduce your research question, write your methodology, and add the works you cited during your data collection phase.
The research methodology section of your study will indicate how valid your findings are and how well-informed your paper is. It also assists future researchers planning to use the same methodology, who want to cite your study or replicate it.
Writing a research paper is both an art and a skill, and knowing how to write the methods section of a research paper is the first crucial step in mastering scientific writing. If, like the majority of early career researchers, you believe that the methods section is the simplest to write and needs little careful consideration or thought, this article will help you understand why it is not [1].
We have all probably asked our supervisors, coworkers, or search engines “how to write the methods section of a research paper” at some point in our scientific careers, so you are not alone if that’s how you ended up here. Even for seasoned researchers, selecting what to include in the methods section from a wealth of experimental information can occasionally be a source of distress and perplexity.
Additionally, journal specifications may in some cases make it a requirement rather than a choice to provide a selective yet descriptive account of the experimental procedure. Hence, knowing these nuances of how to write the methods section of a research paper is critical to its success. The methods section is not supposed to be the detail-heavy, dull section that some researchers tend to write; rather, it should be the central component of the study that justifies the validity and reliability of the research.
Are you still unsure how the methods section of a research paper forms the basis of every investigation? Consider the last article you read, but ignore the methods section and concentrate on the other parts of the paper. Now ask yourself whether you could repeat the study and be sure of the credibility of the findings, despite knowing the literature review and even having the data in front of you. You have the answer!
Having established the importance of the methods section, the next question is how to write a methods section that unifies the overall study. The purpose of the methods section, traditionally called Materials and Methods, is to describe how the authors went about answering the research question at hand. Here, the objective is to tell a coherent story that gives a detailed account of how the study was conducted: the rationale behind specific experimental procedures, the experimental setup, the variables involved, the research protocol employed, the tools used to take measurements, the calculations and measurements themselves, and the analysis of the collected data [2].
In this article, we will take a deep dive into this topic and provide a detailed overview of how to write the methods section of a research paper. For the sake of clarity, we have separated the subject into various sections with corresponding subheadings.
The methods section is a fundamental part of any paper, since it typically discusses the ‘what’, ‘how’, ‘which’, and ‘why’ of the study, which is necessary to arrive at the final conclusions. In a research article, the introduction, which sets the foundation for comprehending the background and results, is usually followed by the methods section, which in turn precedes the results and discussion sections. The methods section must explicitly state what was done, how it was done, which equipment, tools, and techniques were utilized, how the measurements and calculations were taken, and why specific research protocols, software, and analytical methods were employed.
The primary goal of the methods section is to provide pertinent details about the experimental approach so that the reader may put the results in perspective and, if necessary, replicate the findings [3]. This section offers readers the chance to evaluate the reliability and validity of any study. It also serves as the study’s blueprint, assisting researchers who might be unsure about any other portion in establishing the study’s context and validity. The methods section plays a crucial role in determining the fate of the article: an incomplete or unreliable methods section can frequently result in early rejection and may lead to numerous rounds of modifications during the publication process. Reviewers often use the methods section to assess the reliability and validity of the research protocol and the data analysis employed to address the research topic. In other words, the purpose of the methods section is to demonstrate the research acumen and subject-matter expertise of the author(s) in their field.
Similar to the research paper as a whole, the methods section follows a defined structure; this may be dictated by the guidelines of a specific journal or can be presented in a chronological or thematic manner based on the study type. When writing the methods section, authors should keep in mind that they are telling a story about how the research was conducted. They should report only relevant information, to avoid confusing the reader, and include details that help connect the various aspects of the entire research activity. It is generally advisable to present experiments in the order in which they were conducted. This facilitates the logical flow of the research and allows readers to follow the progression of the study design.
It is also essential to clearly state the rationale behind each experiment and how the findings of earlier experiments informed the design or interpretation of later experiments. This allows the readers to understand the overall purpose of the study design and the significance of each experiment within that context. However, depending on the particular research question and method, it may make sense to present information in a different order; therefore, authors must select the best structure and strategy for their individual studies.
In cases where there is a lot of information, divide the section into subheadings to cover the pertinent details. If the journal guidelines impose restrictions on the word limit, additional important information can be supplied in the supplementary files. A simple rule of thumb for structuring the methods section is to begin by explaining the methodological approach (what was done), describing the data collection methods (how it was done), providing the analysis method (how the data was analyzed), and explaining the rationale for choosing the methodological strategy. This is described in detail in the upcoming sections.
Contrary to widespread assumption, the methods section of a research paper should be written once the study is complete, to prevent missing any key parameter; make sure all relevant experiments are done before you start writing the methods section. The next step is to consult any applicable academic style manuals or journal-specific guidelines to ensure that the methods section is formatted correctly. The methods section of a research paper typically comprises materials and methods; while writing this section, authors usually arrange the information under each category.
The materials category describes the samples, materials, treatments, and instruments, while experimental design, sample preparation, data collection, and data analysis fall under the methods category. Depending on the nature of the study, authors should include additional subsections within the methods section, such as ethical considerations like the Declaration of Helsinki (for studies involving human subjects), demographic information about the participants, and any other crucial information that can affect the outcome of the study. Simply put, the methods section has two major components: content and format. Here is an easy checklist to consider if you are struggling with how to write the methods section of a research paper.
Now that you know how to write the methods section of a research paper, let’s address another challenge researchers face: what to include in the methods section. It is not always obvious how much information is too much. In the next section, we examine this issue and explore potential solutions.
The technical nature of the methods section can make it difficult to present information clearly and concisely while staying within the study context. Many young researchers veer off topic and get bogged down in minor details, making the text harder to read and impairing its overall flow. The best way to write the methods section is to start with the crucial components of the experiments. If you have trouble deciding which elements are essential, ask whether omitting an element would make it harder to comprehend the context or replicate the results; if not, it can be left out. This top-down approach helps ensure that all relevant information is incorporated and that vital information is not lost in technicalities. Next, remember to add details that are significant for assessing the validity and reliability of the study. Here is a simple checklist for you to follow (bonus tip: you can also make a checklist for your own study to avoid missing any critical information while writing the methods section).
To address how to write the methods section of a research paper, authors should pay careful attention not only to what to include but also to what not to include. Here is a list of don’ts for the methods section:
We hope that by this point, you understand how crucial it is to write a thoughtful and precise methods section, as well as the ins and outs of how to write one. To restate, the entire purpose of the methods section is to enable others to reproduce the results or verify the research. We sincerely hope that this post has cleared up any confusion and given you a fresh perspective on the methods section.
As a parting gift, we’re leaving you with a handy checklist that will help you write the methods section of a research paper. Feel free to download this checklist and use it, or share it with anyone you think may benefit from it.
References
Research methodology is the backbone of any successful study, providing a structured approach to collecting and analysing data. It encompasses a broad spectrum of methods, each with specific processes and applications, tailored to answer distinct research questions.
This article will explore various types of research methodologies, delve into their processes, and illustrate with examples how they are applied in real-world research.
Understanding these methodologies is essential for any researcher aiming to conduct thorough and impactful studies.
Research methodology contains various strategies and approaches to conduct scientific research, each tailored to specific types of questions and data.
Think of research methodology as the master plan for your study. It guides you on why and how to gather and analyse data, ensuring your approach aligns perfectly with your research question.
This methodology includes deciding between qualitative research, which explores topics in depth through interviews or focus groups, or quantitative research, which quantifies data through surveys and statistical analysis.
There is even an option to mix both, an approach called mixed methods.
If you’re analysing the lived experiences of individuals in a specific setting, qualitative methodologies allow you to capture the nuances of human emotions and behaviours through detailed narratives.
Quantitative methodologies would enable you to measure and compare these experiences in a more structured, numerical format.
Choosing a robust methodology not only provides the rationale for the methods you choose but also highlights the research limitations and ethical considerations, keeping your study transparent and grounded.
It’s a thoughtful composition that gives research its direction and purpose, much like how an architect’s plan is essential before the actual construction begins.
Qualitative research dives deep into the social context of a topic. It collects words and textual data rather than numerical data.
Qualitative research methodologies can be broken down into several approaches:
Ethnography: Deeply rooted in the traditions of anthropology, ethnography requires you to immerse yourself in the community or social setting you’re studying.
Case Study Research: Here, you explore the complexity of a single case in detail. This could be an institution, a group, or an individual. You might draw on interviews, documents, and reports to build a comprehensive picture of the subject.
Grounded Theory: Here, you try to generate theories from the data itself rather than testing existing hypotheses. You might start with a research question but allow your theories to develop as you gather more data.
Narrative Research: You explore the stories people tell about their lives and personal experiences in their own words. Through techniques like in-depth interviews or life story collections, you analyse the narrative to understand the individual’s experiences.
Discourse Analysis: You analyse written or spoken words to understand the social norms and power structures that underlie the language used. This method can reveal a lot about the social context and the dynamics of power in communication.
These methods help to uncover patterns in how people think and interact. For example, in exploring consumer attitudes toward a new product, you would likely conduct focus groups or participant observations to gather qualitative data.
This method helps you understand the motivations and feelings behind consumer choices.
Quantitative research relies on numerical data to find patterns and test hypotheses. This methodology uses statistical analysis to quantify data and uncover relationships between variables.
There are several approaches in quantitative research:
Experimental Research: This is the gold standard when you aim to determine causality. By manipulating one variable and controlling others, you observe changes in the dependent variables.
Survey Research: A popular approach, because of its efficiency in collecting data from a large sample of participants. By using standardised questions, you can gather data that are easy to analyse statistically.
Correlational Research: This approach tries to identify relationships between two or more variables without establishing a causal link. The strength and direction of these relationships are quantified, albeit without confirming one variable causes another.
Longitudinal Studies: You track variables over time, providing a dynamic view of how situations evolve. This approach requires commitment and can be resource-intensive, but the depth of data it provides is unparalleled.
Cross-sectional Studies: These offer a snapshot of a population at a single point in time. They are quicker and cheaper than longitudinal studies.
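To make the correlational approach above concrete, here is a minimal sketch that computes a Pearson correlation coefficient in plain Python. The study-hours and exam-score figures are invented for illustration; the coefficient quantifies the strength and direction of a linear association without establishing that one variable causes the other.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation: strength and direction of a linear
    relationship between two variables (makes no causal claim)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: weekly study hours vs. exam scores for 6 students.
hours = [2, 4, 6, 8, 10, 12]
scores = [55, 60, 62, 70, 74, 80]
r = pearson_r(hours, scores)  # close to +1: strong positive association
```

A value near +1 or -1 indicates a strong linear relationship; a value near 0 indicates little or no linear relationship. Even a coefficient close to 1 only quantifies association, not causation.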
Mixed methods research combines both approaches to benefit from the depth of qualitative data and the breadth of quantitative analysis.
You might start with qualitative interviews to develop hypotheses about health behaviours in a community. Then, you could conduct a large-scale survey to test these hypotheses quantitatively.
This approach is particularly useful when you want to explore a new area where previous data may not exist, giving you a comprehensive insight into both the empirical and social dimensions of a research problem.
When you dive into a research project, choosing the right methodology is akin to selecting the best tools for building a house.
It shapes how you approach the research question, gather data, and interpret the results. Here are a couple of crucial factors to keep in mind.
The type of research question you pose can heavily influence the methodology you choose. Qualitative methodologies are superb for exploratory research where you aim to understand concepts, perceptions, and experiences.
If you’re exploring how patients feel about a new healthcare policy, interviews and focus groups would be instrumental.
Quantitative methods are your go-to for questions that require measurable and statistical data, like assessing the prevalence of a medical condition across different regions.
Consider what data is necessary to address your research question effectively. Qualitative data can provide depth and detail, making qualitative methods ideal for understanding complex social interactions or historical contexts.
Quantitative data, however, offers the breadth and is often numerical, allowing for a broad analysis of patterns and correlations.
If your study aims to investigate both the breadth and depth, a mixed methods approach might be necessary, enabling you to draw on the strengths of both qualitative and quantitative data.
While deciding on a research methodology, you must evaluate the resources available, including your time and budget. Quantitative research often requires larger samples and hence might be more costly and time-consuming.
Qualitative research, while generally less resource-intensive, demands substantial time for data collection and analysis, especially if you conduct lengthy interviews or detailed content analysis.
If resources are limited, adapting your methodology to fit these constraints without compromising the integrity of your research is crucial.
Your familiarity and comfort level with various research methodologies will significantly affect your choice.
Conducting sophisticated statistical analyses requires a different skill set than carrying out in-depth qualitative interviews.
If your background is in social science, you might find qualitative methods more within your wheelhouse; whereas, a postgraduate student in epidemiology might be more adept at quantitative methods.
It’s also worth considering the availability of workshops, courses, or collaborators who could complement your skills.
Different methodologies raise different ethical concerns.
In qualitative research, maintaining anonymity and dealing with sensitive information can be challenging, especially when using direct quotes or detailed descriptions from participants.
Quantitative research might involve considerations around participant consent for large surveys or experiments.
Practically, you need to think about the sampling design to ensure it is representative of the population studied. Non-probability sampling might be quicker and cheaper but can introduce bias, limiting the generalisability of your findings.
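To see how a non-probability design can bias an estimate, here is a deliberately artificial sketch (all numbers are hypothetical): a population split between two groups with different values, where a convenience sample over-recruits one group while a systematic probability sample preserves the balance.

```python
# Hypothetical population: 50 older respondents reporting 30 minutes of
# daily social media use and 50 younger respondents reporting 90 minutes.
population = [30] * 50 + [90] * 50
pop_mean = sum(population) / len(population)      # true mean: 60.0

# Convenience sample (non-probability): recruiting only on campus might
# reach 18 younger and 2 older respondents, overrepresenting one group.
convenience = [90] * 18 + [30] * 2
conv_mean = sum(convenience) / len(convenience)   # biased estimate: 84.0

# Systematic probability sample: every 10th member of the population,
# which here preserves the 50/50 group balance.
systematic = population[::10]
sys_mean = sum(systematic) / len(systematic)      # unbiased here: 60.0
```

The convenience estimate overstates the population mean by 24 minutes, illustrating why non-probability sampling limits the generalisability of findings.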
By meticulously considering these factors, you tailor your research design to not just answer the research questions effectively but also to reflect the realities of your operational environment.
This thoughtful approach helps ensure that your research is not only robust but also practical and ethical, standing up to both academic scrutiny and real-world application.
Research methodology is a crucial framework that guides the entire research process. It involves choosing between various qualitative and quantitative approaches, each tailored to specific research questions and objectives.
Your chosen methodology shapes how data is gathered, analysed, and interpreted, ultimately influencing the reliability and validity of your research findings.
Understanding these methodologies ensures that researchers can effectively write research proposals, address their study’s aims, and contribute valuable insights to their field.
Dr Andrew Stapleton has a Masters and PhD in Chemistry from the UK and Australia. He has many years of research experience and has worked as a Postdoctoral Fellow and Associate at a number of Universities. Although having secured funding for his own research, he left academia to help others with his YouTube channel all about the inner workings of academia and how to make it work for you.
BMC Medical Research Methodology volume 20 , Article number: 226 ( 2020 ) Cite this article
Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.
We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?
Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.
The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 , 2 , 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 , 7 , 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).
In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig. 1 .
Fig. 1 Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed.
The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.
The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.
Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 , 13 , 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.
Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.
Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as pre-cursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items of Systematic reviews and Meta-Analyses (PRISMA) guidelines were preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.
These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].
There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.
Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise the quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p -values in baseline tables in randomized trials published in high impact journals [ 26 ]; Chen et al. described adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [ 27 ]; and Hopewell et al. described the effect of editors’ implementation of CONSORT guidelines on the reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been the cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines, including the highly cited CONSORT statement [ 5 ].
Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.
In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.
Q: How should I select research reports for my methodological study?
A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].
The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
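The sampling strategies described above can be sketched in a few lines. The example below is a hypothetical illustration (the sampling frame, group labels, and counts are invented, and the seed is fixed only for reproducibility), not the procedure used in the cited studies:

```python
import random

# Hypothetical sampling frame: 40 Cochrane and 160 non-Cochrane reviews
# retrieved by a search (labels and counts are invented for illustration).
frame = ([f"cochrane-{i}" for i in range(40)]
         + [f"non-cochrane-{i}" for i in range(160)])

rng = random.Random(2020)  # fixed seed so the draw is reproducible

# Simple random sample: the smaller group may end up underrepresented.
simple = rng.sample(frame, 40)

# Stratified sample: draw equally from each stratum so the two groups
# can be compared with equal numbers, as in a Cochrane vs. non-Cochrane
# comparison.
cochrane = [r for r in frame if r.startswith("cochrane-")]
non_cochrane = [r for r in frame if r.startswith("non-")]
stratified = rng.sample(cochrane, 20) + rng.sample(non_cochrane, 20)
```

Whatever strategy is used, reporting the seed, frame, and draw sizes makes the selection process transparent and reproducible, which matters for appraising the study later.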
Q: How many databases should I search?
A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.
Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.
Q: Should I publish a protocol for my methodological study?
A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting, the use of post hoc methodologies to embellish results, and to help avoid duplication of efforts [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.
Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include delays associated with manuscript handling and peer review, as well as costs: few journals publish study protocols, and those that do mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in a scholarly journal could deposit it in a publicly available repository, such as the Open Science Framework ( https://osf.io/ ).
Q: How to appraise the quality of a methodological study?
A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These include biases related to selection, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.
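The stratified sampling mentioned above can be sketched in a few lines. This is a minimal illustration, not a prescribed procedure; the journal strata, article counts, and sample size below are hypothetical:

```python
import random

# Hypothetical sampling frame: article IDs grouped by journal tier (stratum).
frame = {
    "high_impact": [f"hi-{i}" for i in range(200)],
    "mid_impact":  [f"mid-{i}" for i in range(500)],
    "low_impact":  [f"low-{i}" for i in range(300)],
}

def stratified_sample(frame, n_total, seed=42):
    """Draw a proportionate stratified random sample from the frame."""
    rng = random.Random(seed)  # fixed seed -> transparent, reproducible selection
    total = sum(len(articles) for articles in frame.values())
    sample = {}
    for stratum, articles in frame.items():
        # Proportionate allocation: each stratum contributes its share of n_total.
        k = round(n_total * len(articles) / total)
        sample[stratum] = rng.sample(articles, k)
    return sample

sample = stratified_sample(frame, n_total=100)
print({stratum: len(articles) for stratum, articles in sample.items()})
# {'high_impact': 20, 'mid_impact': 50, 'low_impact': 30}
```

Fixing the random seed keeps the selection reproducible, which supports the transparency requirement noted above.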
Q: Should I justify a sample size?
A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:
Comparing two groups
Determining a proportion, mean or another quantifier
Determining factors associated with an outcome using regression-based analyses
For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
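As an illustration of a formal sample size estimation for the second case (determining a proportion with a given precision), the standard normal-approximation formula n = z² p(1 − p) / d² can be computed directly. The expected proportion and margin below are hypothetical, not taken from any cited study:

```python
import math

def sample_size_proportion(p_expected, margin, confidence=0.95):
    """Articles needed to estimate a proportion to within +/- margin.

    Uses the normal-approximation formula n = z^2 * p(1-p) / d^2.
    """
    # Two-sided z values for common confidence levels.
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
    n = (z ** 2) * p_expected * (1 - p_expected) / margin ** 2
    return math.ceil(n)  # round up: sample sizes are whole articles

# e.g. expecting ~30% of trials to report an item, estimated to within +/- 5%:
print(sample_size_proportion(0.30, 0.05))  # 323
```

When no expected proportion is available, 0.5 is the conservative choice, since p(1 − p) is maximized there.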
Q: What should I call my study?
A: Other terms which have been used to describe/label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review”, as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.
Q: Should I account for clustering in my methodological study?
A: Data from methodological studies are often clustered. For example, articles from a specific source (e.g. the Cochrane Library) may share reporting standards. Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section: “What variables are relevant to methodological studies?”
A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimation equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p -values, unduly narrow confidence intervals, and biased estimates [ 45 ].
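A full analysis would fit one of the regression models cited above (e.g. generalized estimating equations, typically via a statistical package). As a minimal, self-contained illustration of why clustering matters, the standard design effect, DEFF = 1 + (m − 1) × ICC, shows how within-journal correlation inflates a naive standard error. The cluster size, intraclass correlation, and standard error below are hypothetical:

```python
import math

def design_effect(avg_cluster_size, icc):
    """Variance inflation due to clustering: DEFF = 1 + (m - 1) * ICC."""
    return 1 + (avg_cluster_size - 1) * icc

def cluster_adjusted_se(naive_se, avg_cluster_size, icc):
    """Inflate a naive standard error to account for within-cluster correlation."""
    return naive_se * math.sqrt(design_effect(avg_cluster_size, icc))

# Hypothetical: 10 articles per journal, ICC of 0.05 for the reporting outcome.
deff = design_effect(10, 0.05)                   # 1.45
adjusted = cluster_adjusted_se(0.02, 10, 0.05)   # ~0.0241 (vs naive 0.02)
print(round(deff, 2), round(adjusted, 4))
```

Even a modest ICC widens confidence intervals noticeably, which is exactly the "unduly narrow confidence intervals" problem that ignoring clustering creates.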
Q: Should I extract data in duplicate?
A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and therefore should be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, this area will likely see rapid advances in machine learning and natural language processing technologies to support researchers with screening and data extraction [ 47 , 48 ]. However, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].
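One way to operationalize duplicate extraction is to flag disagreements between the two extractors for resolution, and to quantify agreement, for example with Cohen's kappa. The sketch below uses made-up binary judgments (1 = item reported, 0 = not reported); it is an illustration, not a procedure prescribed by the cited studies:

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two extractors' categorical judgments."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    categories = set(rater1) | set(rater2)
    # Observed agreement: proportion of items where the extractors match.
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected agreement by chance, from each rater's marginal frequencies.
    p_expected = sum(
        (rater1.count(c) / n) * (rater2.count(c) / n) for c in categories
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical duplicate extraction: did each trial report blinding?
extractor_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
extractor_b = [1, 1, 0, 0, 0, 0, 1, 1, 1, 1]

disagreements = [i for i, (a, b) in enumerate(zip(extractor_a, extractor_b)) if a != b]
print(disagreements)                                  # [3, 8] -> resolve by discussion
print(round(cohens_kappa(extractor_a, extractor_b), 2))  # 0.58
```

Flagged items would then be resolved by discussion or by a third, more experienced extractor, in line with the pairing advice above.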
Q: Should I assess the risk of bias of research reports included in my methodological study?
A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but whose intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al., investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].
Q: What variables are relevant to methodological studies?
A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:
Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.
Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].
Source of funding and conflicts of interest: Some studies have found that funded studies report better [ 56 , 57 ], while others do not [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrant assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry funded studies were better [ 60 ]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ]
Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 , 66 , 67 ].
Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].
Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].
Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].
Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.
Q: Should I focus only on high impact journals?
A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, restricting the sample by JIF may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.
Q: Can I conduct a methodological study of qualitative research?
A: Yes. Although most methodological research to date has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research, including the Cumulative Index to Nursing & Allied Health Literature (CINAHL), have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.
Q: What reporting guidelines should I use for my methodological study?
A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. In the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.
Q: What are the potential threats to validity and how can I avoid them?
A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection and confounding bias. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing only high-impact journals would be misleading.
Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p -values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to apply to trials in other fields. Therefore, investigators must ensure that their sample truly represents the target sample, either by a) conducting a comprehensive and exhaustive search, or b) using an appropriate, justified, randomly selected sample of research reports.
Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine ( n = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM ( n = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM ( n = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.
Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.
To inform discussions about methodological studies and the development of guidance on what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:
What is the aim?
Methodological studies that investigate bias
A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is in the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Richie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].
Methodological studies that investigate quality (or completeness) of reporting
Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croituro et al. report on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].
Methodological studies that investigate the consistency of reporting
Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].
Methodological studies that investigate factors associated with reporting
In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].
Methodological studies that investigate methods
Methodological studies may also be used to describe methods or compare methods, and the factors associated with methods. Muller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].
Methodological studies that summarize other methodological studies
Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].
Methodological studies that investigate nomenclature and terminology
Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].
Other types of methodological studies
In addition to the previously mentioned experimental methodological studies, there may exist other types of methodological studies not captured here.
What is the design?
Methodological studies that are descriptive
Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].
Methodological studies that are analytical
Some methodological studies are analytical wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease.” [ 89 ] In the case of methodological studies all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
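A comparison like the one by Tricco et al. amounts to a two-proportion hypothesis test. The sketch below shows the standard pooled z statistic with entirely hypothetical counts, not the study's actual data:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)            # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: 60/100 non-Cochrane vs 40/100 Cochrane reviews with positive conclusions.
z = two_proportion_z(60, 100, 40, 100)
print(round(z, 2))  # 2.83; |z| > 1.96 -> reject H0 at the 5% level
```

A real analysis of such data would also need to account for clustering (e.g. reviews within journals), as discussed earlier in this tutorial.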
What is the sampling strategy?
Methodological studies that include the target population
Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n = 103) [ 30 ].
Methodological studies that include a sample of the target population
Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, or in journals with a certain ranking or on a topic. Systematic sampling can also be used when random sampling may be challenging to implement.
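Systematic sampling, as described above, selects every k-th report after a random start. A minimal sketch with a hypothetical sampling frame:

```python
import random

def systematic_sample(records, n, seed=7):
    """Select every k-th record after a random start (k = len(records) // n)."""
    k = len(records) // n                      # sampling interval
    start = random.Random(seed).randrange(k)   # random start within the first interval
    return [records[start + i * k] for i in range(n)]

# Hypothetical frame: 500 trial reports, e.g. sorted by publication date.
records = [f"trial-{i:03d}" for i in range(500)]
sample = systematic_sample(records, n=50)
print(len(sample), sample[:3])
```

Because the frame is ordered (here by date), a systematic sample spreads selections evenly across the ordering, which random sampling does not guarantee.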
What is the unit of analysis?
Methodological studies with a research report as the unit of analysis
Many methodological studies use a research report (e.g. full manuscript of study, abstract portion of the study) as the unit of analysis, and inferences can be made at the study-level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries etc.
Methodological studies with a design, analysis or reporting item as the unit of analysis
Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].
This framework is outlined in Fig. 2 .
A proposed framework for methodological studies
Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.
In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
CONSORT: Consolidated Standards of Reporting Trials
EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe
GRADE: Grading of Recommendations, Assessment, Development and Evaluations
PICOT: Participants, Intervention, Comparison, Outcome, Timeframe
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
SWAR: Studies Within a Review
SWAT: Studies Within a Trial
Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.
Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, Krumholz HM, Ghersi D, van der Worp HB. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.
Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.
Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.
Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357.
Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.
Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62(10):1013–20.
Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358:j4008.
Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Studies. 2020;6(1):13.
Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020.
Abbade LPF, Wang M, Sriganesh K, Jin Y, Mbuagbaw L, Thabane L. The framing of research questions using the PICOT format in randomized controlled trials of venous ulcer disease is suboptimal: a systematic survey. Wound Repair Regen. 2017;25(5):892–900.
Gohari F, Baradaran HR, Tabatabaee M, Anijidani S, Mohammadpour Touserkani F, Atlasi R, Razmgir M. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review. J Diabetes Metab Disord. 2015;15(1):36.
Wang M, Jin Y, Hu ZJ, Thabane A, Dennis B, Gajic-Veljanoski O, Paul J, Thabane L. The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: a systematic survey of the literature. Contemp Clin Trials Commun. 2017;8:1–10.
Shanthanna H, Kaushal A, Mbuagbaw L, Couban R, Busse J, Thabane L. A cross-sectional study of the reporting quality of pilot or feasibility trials in high-impact anesthesia journals. Can J Anaesth. 2018;65(11):1180–95.
Kosa SD, Mbuagbaw L, Borg Debono V, Bhandari M, Dennis BB, Ene G, Leenus A, Shi D, Thabane M, Valvasori S, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemporary Clinical Trials. 2018;65:144–50.
Zhang Y, Florez ID, Colunga Lozano LE, Aloweni FAB, Kennedy SA, Li A, Craigie S, Zhang S, Agarwal A, Lopes LC, et al. A systematic survey on reporting and methods for handling missing participant data for continuous outcomes in randomized controlled trials. J Clin Epidemiol. 2017;88:57–66.
Hernández AV, Boersma E, Murray GD, Habbema JD, Steyerberg EW. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J. 2006;151(2):257–64.
Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, Fruci V, Dennis B, Bawor M, Thabane L. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc. 2013;6:169–88.
Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.
Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary-of-findings tables with a new format. J Clin Epidemiol. 2016;74:7–18.
The Northern Ireland Hub for Trials Methodology Research: SWAT/SWAR Information [ https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/ ]. Accessed 31 Aug 2020.
Chick S, Sánchez P, Ferrin D, Morrice D. How to conduct a successful simulation study. In: Proceedings of the 2003 Winter Simulation Conference; 2003. p. 66–70.
Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.
Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mount Sinai J Med New York. 1996;63(3–4):216–24.
Areia M, Soares M, Dinis-Ribeiro M. Quality reporting of endoscopic diagnostic studies in gastrointestinal journals: where do we stand on the use of the STARD and CONSORT statements? Endoscopy. 2010;42(2):138–47.
Knol M, Groenwold R, Grobbee D. P-values in baseline tables of randomised controlled trials are inappropriate but still common in high impact journals. Eur J Prev Cardiol. 2012;19(2):231–2.
Chen M, Cui J, Zhang AL, Sze DM, Xue CC, May BH. Adherence to CONSORT items in randomized controlled trials of integrative medicine for colorectal Cancer published in Chinese journals. J Altern Complement Med. 2018;24(2):115–24.
Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors' implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ. 2012;344:e4178.
The Cochrane Methodology Register Issue 2 2009 [ https://cmr.cochrane.org/help.htm ]. Accessed 31 Aug 2020.
Mbuagbaw L, Kredo T, Welch V, Mursleen S, Ross S, Zani B, Motaze NV, Quinlan L. Critical EPICOT items were absent in Cochrane human immunodeficiency virus systematic reviews: a bibliometric analysis. J Clin Epidemiol. 2016;74:66–72.
Barton S, Peckitt C, Sclafani F, Cunningham D, Chau I. The influence of industry sponsorship on the reporting of subgroup analyses within phase III randomised controlled trials in gastrointestinal oncology. Eur J Cancer. 2015;51(18):2732–9.
Setia MS. Methodology series module 5: sampling strategies. Indian J Dermatol. 2016;61(5):505–9.
Wilson B, Burnett P, Moher D, Altman DG, Al-Shahi Salman R. Completeness of reporting of randomised controlled trials including people with transient ischaemic attack or stroke: a systematic review. Eur Stroke J. 2018;3(4):337–46.
Kahale LA, Diab B, Brignardello-Petersen R, Agarwal A, Mustafa RA, Kwong J, Neumann I, Li L, Lopes LC, Briel M, et al. Systematic reviews do not adequately report or address missing outcome data in their analyses: a methodological survey. J Clin Epidemiol. 2018;99:14–23.
De Angelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJPM, et al. Is this clinical trial fully registered?: a statement from the International Committee of Medical Journal Editors*. Ann Intern Med. 2005;143(2):146–8.
Ohtake PJ, Childs JD. Why publish study protocols? Phys Ther. 2014;94(9):1208–9.
Rombey T, Allers K, Mathes T, Hoffmann F, Pieper D. A descriptive analysis of the characteristics and the peer review process of systematic review protocols published in an open peer review journal from 2012 to 2017. BMC Med Res Methodol. 2019;19(1):57.
Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet. 2002;359(9302):248–52.
Porta M (ed.): A dictionary of epidemiology, 5th edn. Oxford: Oxford University Press, Inc.; 2008.
El Dib R, Tikkinen KAO, Akl EA, Gomaa HA, Mustafa RA, Agarwal A, Carpenter CR, Zhang Y, Jorge EC, Almeida R, et al. Systematic survey of randomized trials evaluating the impact of alternative diagnostic strategies on patient-important outcomes. J Clin Epidemiol. 2017;84:61–9.
Helzer JE, Robins LN, Taibleson M, Woodruff RA Jr, Reich T, Wish ED. Reliability of psychiatric diagnosis. I. a methodological review. Arch Gen Psychiatry. 1977;34(2):129–33.
Chung ST, Chacko SK, Sunehag AL, Haymond MW. Measurements of gluconeogenesis and Glycogenolysis: a methodological review. Diabetes. 2015;64(12):3996–4010.
CAS PubMed PubMed Central Google Scholar
Sterne JA, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Stat Med. 2002;21(11):1513–24.
Moen EL, Fricano-Kugler CJ, Luikart BW, O’Malley AJ. Analyzing clustered data: why and how to account for multiple observations nested within a study participant? PLoS One. 2016;11(1):e0146721.
Zyzanski SJ, Flocke SA, Dickinson LM. On the nature and analysis of clustered data. Ann Fam Med. 2004;2(3):199–200.
Mathes T, Klassen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol. 2017;17(1):152.
Bui DDA, Del Fiol G, Hurdle JF, Jonnalagadda S. Extractive text summarization system to aid data extraction from full text in systematic review development. J Biomed Inform. 2016;64:265–72.
Bui DD, Del Fiol G, Jonnalagadda S. PDF text classification to leverage information extraction from publication reports. J Biomed Inform. 2016;61:141–8.
Maticic K, Krnic Martinic M, Puljak L. Assessment of reporting quality of abstracts of systematic reviews with meta-analysis using PRISMA-A and discordance in assessments between raters without prior experience. BMC Med Res Methodol. 2019;19(1):32.
Speich B. Blinding in surgical randomized clinical trials in 2015. Ann Surg. 2017;266(1):21–2.
Abraha I, Cozzolino F, Orso M, Marchesi M, Germani A, Lombardo G, Eusebi P, De Florio R, Luchetta ML, Iorio A, et al. A systematic review found that deviations from intention-to-treat are common in randomized trials and systematic reviews. J Clin Epidemiol. 2017;84:37–46.
Zhong Y, Zhou W, Jiang H, Fan T, Diao X, Yang H, Min J, Wang G, Fu J, Mao B. Quality of reporting of two-group parallel randomized controlled clinical trials of multi-herb formulae: A survey of reports indexed in the Science Citation Index Expanded. Eur J Integrative Med. 2011;3(4):e309–16.
Farrokhyar F, Chu R, Whitlock R, Thabane L. A systematic review of the quality of publications reporting coronary artery bypass grafting trials. Can J Surg. 2007;50(4):266–77.
Oltean H, Gagnier JJ. Use of clustering analysis in randomized controlled trials in orthopaedic surgery. BMC Med Res Methodol. 2015;15:17.
Fleming PS, Koletsi D, Pandis N. Blinded by PRISMA: are systematic reviewers focusing on PRISMA and ignoring other guidelines? PLoS One. 2014;9(5):e96407.
Balasubramanian SP, Wiener M, Alshameeri Z, Tiruvoipati R, Elbourne D, Reed MW. Standards of reporting of randomized controlled trials in general surgery: can we do better? Ann Surg. 2006;244(5):663–7.
de Vries TW, van Roon EN. Low quality of reporting adverse drug reactions in paediatric randomised controlled trials. Arch Dis Child. 2010;95(12):1023–6.
Borg Debono V, Zhang S, Ye C, Paul J, Arya A, Hurlburt L, Murthy Y, Thabane L. The quality of reporting of RCTs used within a postoperative pain management meta-analysis, using the CONSORT statement. BMC Anesthesiol. 2012;12:13.
Kaiser KA, Cofield SS, Fontaine KR, Glasser SP, Thabane L, Chu R, Ambrale S, Dwary AD, Kumar A, Nayyar G, et al. Is funding source related to study reporting quality in obesity or nutrition randomized control trials in top-tier medical journals? Int J Obes. 2012;36(7):977–81.
Thomas O, Thabane L, Douketis J, Chu R, Westfall AO, Allison DB. Industry funding and the reporting quality of large long-term weight loss trials. Int J Obes. 2008;32(10):1531–6.
Khan NR, Saad H, Oravec CS, Rossi N, Nguyen V, Venable GT, Lillard JC, Patel P, Taylor DR, Vaughn BN, et al. A review of industry funding in randomized controlled trials published in the neurosurgical literature-the elephant in the room. Neurosurgery. 2018;83(5):890–7.
Hansen C, Lundh A, Rasmussen K, Hrobjartsson A. Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality. Cochrane Database Syst Rev. 2019;8:Mr000047.
Kiehna EN, Starke RM, Pouratian N, Dumont AS. Standards for reporting randomized controlled trials in neurosurgery. J Neurosurg. 2011;114(2):280–5.
Liu LQ, Morris PJ, Pengel LH. Compliance to the CONSORT statement of randomized controlled trials in solid organ transplantation: a 3-year overview. Transpl Int. 2013;26(3):300–6.
Bala MM, Akl EA, Sun X, Bassler D, Mertz D, Mejza F, Vandvik PO, Malaga G, Johnston BC, Dahm P, et al. Randomized trials published in higher vs. lower impact journals differ in design, conduct, and analysis. J Clin Epidemiol. 2013;66(3):286–95.
Lee SY, Teoh PJ, Camm CF, Agha RA. Compliance of randomized controlled trials in trauma surgery with the CONSORT statement. J Trauma Acute Care Surg. 2013;75(4):562–72.
Ziogas DC, Zintzaras E. Analysis of the quality of reporting of randomized controlled trials in acute and chronic myeloid leukemia, and myelodysplastic syndromes as governed by the CONSORT statement. Ann Epidemiol. 2009;19(7):494–500.
Alvarez F, Meyer N, Gourraud PA, Paul C. CONSORT adoption and quality of reporting of randomized controlled trials: a systematic analysis in two dermatology journals. Br J Dermatol. 2009;161(5):1159–65.
Mbuagbaw L, Thabane M, Vanniyasingam T, Borg Debono V, Kosa S, Zhang S, Ye C, Parpia S, Dennis BB, Thabane L. Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: a systematic review. Contemporary Clin trials. 2014;38(2):245–50.
Thabane L, Chu R, Cuddy K, Douketis J. What is the quality of reporting in weight loss intervention studies? A systematic review of randomized controlled trials. Int J Obes. 2007;31(10):1554–9.
Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evidence Based Med. 2017;22(4):139.
METRIC - MEthodological sTudy ReportIng Checklist: guidelines for reporting methodological studies in health research [ http://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-other-study-designs/#METRIC ]. Accessed 31 Aug 2020.
Jager KJ, Zoccali C, MacLeod A, Dekker FW. Confounding: what it is and how to deal with it. Kidney Int. 2008;73(3):256–60.
Parker SG, Halligan S, Erotocritou M, Wood CPJ, Boulton RW, Plumb AAO, Windsor ACJ, Mallett S. A systematic methodological review of non-randomised interventional studies of elective ventral hernia repair: clear definitions and a standardised minimum dataset are needed. Hernia. 2019.
Bouwmeester W, Zuithoff NPA, Mallett S, Geerlings MI, Vergouwe Y, Steyerberg EW, Altman DG, Moons KGM. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12.
Schiller P, Burchardi N, Niestroj M, Kieser M. Quality of reporting of clinical non-inferiority and equivalence randomised trials--update and extension. Trials. 2012;13:214.
Riado Minguez D, Kowalski M, Vallve Odena M, Longin Pontzen D, Jelicic Kadic A, Jeric M, Dosenovic S, Jakus D, Vrdoljak M, Poklepovic Pericic T, et al. Methodological and reporting quality of systematic reviews published in the highest ranking journals in the field of pain. Anesth Analg. 2017;125(4):1348–54.
Thabut G, Estellat C, Boutron I, Samama CM, Ravaud P. Methodological issues in trials assessing primary prophylaxis of venous thrombo-embolism. Eur Heart J. 2005;27(2):227–36.
Puljak L, Riva N, Parmelli E, González-Lorenzo M, Moja L, Pieper D. Data extraction methods: an analysis of internal reporting discrepancies in single manuscripts and practical advice. J Clin Epidemiol. 2020;117:158–64.
Ritchie A, Seubert L, Clifford R, Perry D, Bond C. Do randomised controlled trials relevant to pharmacy meet best practice standards for quality conduct and reporting? A systematic review. Int J Pharm Pract. 2019.
Babic A, Vuka I, Saric F, Proloscic I, Slapnicar E, Cavar J, Pericic TP, Pieper D, Puljak L. Overall bias methods and their use in sensitivity analysis of Cochrane reviews were not consistent. J Clin Epidemiol. 2019.
Tan A, Porcher R, Crequit P, Ravaud P, Dechartres A. Differences in treatment effect size between overall survival and progression-free survival in immunotherapy trials: a Meta-epidemiologic study of trials with results posted at ClinicalTrials.gov. J Clin Oncol. 2017;35(15):1686–94.
Croitoru D, Huang Y, Kurdina A, Chan AW, Drucker AM. Quality of reporting in systematic reviews published in dermatology journals. Br J Dermatol. 2020;182(6):1469–76.
Khan MS, Ochani RK, Shaikh A, Vaduganathan M, Khan SU, Fatima K, Yamani N, Mandrola J, Doukky R, Krasuski RA: Assessing the Quality of Reporting of Harms in Randomized Controlled Trials Published in High Impact Cardiovascular Journals. Eur Heart J Qual Care Clin Outcomes 2019.
Rosmarakis ES, Soteriades ES, Vergidis PI, Kasiakou SK, Falagas ME. From conference abstract to full paper: differences between data presented in conferences and journals. FASEB J. 2005;19(7):673–80.
Mueller M, D’Addario M, Egger M, Cevallos M, Dekkers O, Mugglin C, Scott P. Methods to systematically review and meta-analyse observational studies: a systematic scoping review of recommendations. BMC Med Res Methodol. 2018;18(1):44.
Li G, Abbade LPF, Nwosu I, Jin Y, Leenus A, Maaz M, Wang M, Bhatt M, Zielinski L, Sanger N, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol. 2017;17(1):181.
Krnic Martinic M, Pieper D, Glatt A, Puljak L. Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks. BMC Med Res Methodol. 2019;19(1):203.
Analytical study [ https://medical-dictionary.thefreedictionary.com/analytical+study ]. Accessed 31 Aug 2020.
Tricco AC, Tetzlaff J, Pham B, Brehaut J, Moher D. Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: cross-sectional study. J Clin Epidemiol. 2009;62(4):380–6 e381.
Schalken N, Rietbergen C. The reporting quality of systematic reviews and Meta-analyses in industrial and organizational psychology: a systematic review. Front Psychol. 2017;8:1395.
Ranker LR, Petersen JM, Fox MP. Awareness of and potential for dependent error in the observational epidemiologic literature: A review. Ann Epidemiol. 2019;36:15–9 e12.
Paquette M, Alotaibi AM, Nieuwlaat R, Santesso N, Mbuagbaw L. A meta-epidemiological study of subgroup analyses in cochrane systematic reviews of atrial fibrillation. Syst Rev. 2019;8(1):241.
Download references
Funding
This work did not receive any dedicated funding.
Authors and affiliations
Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada
Lawrence Mbuagbaw, Daeria O. Lawson & Lehana Thabane
Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario, L8N 4A6, Canada
Lawrence Mbuagbaw & Lehana Thabane
Centre for the Development of Best Practices in Health, Yaoundé, Cameroon
Lawrence Mbuagbaw
Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000, Zagreb, Croatia
Livia Puljak
Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN, 47405, USA
David B. Allison
Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON, Canada
Lehana Thabane
Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON, Canada
Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON, Canada
LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.
Correspondence to Lawrence Mbuagbaw.
Ethics approval and consent to participate
Not applicable.
Competing interests
DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Cite this article
Mbuagbaw, L., Lawson, D.O., Puljak, L. et al. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol 20, 226 (2020). https://doi.org/10.1186/s12874-020-01107-7
Received: 27 May 2020
Accepted: 27 August 2020
Published: 07 September 2020
DOI: https://doi.org/10.1186/s12874-020-01107-7
ISSN: 1471-2288
What is a research methodology? Steps and tips
Published on 25 February 2019 by Shona McCombes. Revised on 10 October 2022.
Your research methodology discusses and explains the data collection and analysis methods you used in your research. A key part of your thesis, dissertation, or research paper, the methodology chapter explains what you did and how you did it, allowing readers to evaluate the reliability and validity of your research.
It should include:
In this guide:
- How to write a research methodology
- Why is a methods section important?
- Step 1: Explain your methodological approach
- Step 2: Describe your data collection methods
- Step 3: Describe your analysis method
- Step 4: Evaluate and justify the methodological choices you made
- Tips for writing a strong methodology chapter
- Frequently asked questions about methodology
Your methods section is your opportunity to share how you conducted your research and why you chose the methods you chose. It’s also the place to show that your research was rigorously conducted and can be replicated .
It gives your research legitimacy and situates it within your field, and also gives your readers a place to refer to if they have any questions or critiques in other sections.
You can start by introducing your overall approach to your research. You have two options here.
- What research problem or question did you investigate?
- What type of data did you need to achieve this aim?
Depending on your discipline, you can also start with a discussion of the rationale and assumptions underpinning your methodology. In other words, why did you choose these methods for your study?
Once you have introduced your reader to your methodological approach, you should share full details about your data collection methods .
In order to be considered generalisable, you should describe quantitative research methods in enough detail for another researcher to replicate your study.
Here, explain how you operationalised your concepts and measured your variables. Discuss your sampling method or inclusion/exclusion criteria, as well as any tools, procedures, and materials you used to gather your data.
- Surveys: Describe where, when, and how the survey was conducted.
- Experiments: Share full details of the tools, techniques, and procedures you used to conduct your experiment.
- Existing data: Explain how you gathered and selected the material (such as datasets or archival data) that you used in your analysis.
The survey consisted of 5 multiple-choice questions and 10 questions measured on a 7-point Likert scale.
The goal was to collect survey responses from 350 customers visiting the fitness apparel company’s brick-and-mortar location in Boston on 4–8 July 2022, between 11:00 and 15:00.
Here, a customer was defined as a person who had purchased a product from the company on the day they took the survey. Participants were given 5 minutes to fill in the survey anonymously. In total, 408 customers responded, but not all surveys were fully completed. Due to this, 371 survey results were included in the analysis.
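The screening step in the example above (408 responses collected, 371 retained after removing incomplete surveys) can be sketched in a few lines. This is an illustrative sketch only; the `screen_responses` helper and the toy data are invented, not the example study's actual code or data.

```python
# Hypothetical sketch of screening survey responses for completeness.
# The helper name and toy data are invented for illustration.

def screen_responses(responses, n_questions):
    """Keep only surveys in which every question was answered."""
    return [
        r for r in responses
        if len(r) == n_questions and all(a is not None for a in r)
    ]

# Toy data: four respondents, three-question survey (None = skipped question)
raw = [
    [5, 3, 4],      # complete
    [2, None, 4],   # one question skipped -> excluded
    [1, 1, 2],      # complete
    [4, 5],         # abandoned early -> excluded
]

kept = screen_responses(raw, n_questions=3)
print(f"{len(kept)} of {len(raw)} responses retained for analysis")
```

Stating the exclusion rule explicitly, as the example does, lets readers judge whether the excluded responses could bias the results.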
In qualitative research , methods are often more flexible and subjective. For this reason, it’s crucial to robustly explain the methodology choices you made.
Be sure to discuss the criteria you used to select your data, the context in which your research was conducted, and the role you played in collecting your data (e.g., were you an active participant or a passive observer?).
- Interviews or focus groups: Describe where, when, and how the interviews were conducted.
- Participant observation: Describe where, when, and how you conducted the observation or ethnography.
- Existing data: Explain how you selected case study materials for your analysis.
In order to gain better insight into possibilities for future improvement of the fitness shop’s product range, semi-structured interviews were conducted with 8 returning customers.
Here, a returning customer was defined as someone who usually bought products at least twice a week from the store.
Surveys were used to select participants. Interviews were conducted in a small office next to the cash register and lasted approximately 20 minutes each. Answers were recorded by note-taking, and seven interviews were also filmed with consent. One interviewee preferred not to be filmed.
Mixed methods research combines quantitative and qualitative approaches. If a standalone quantitative or qualitative study is insufficient to answer your research question, mixed methods may be a good fit for you.
Mixed methods are less common than standalone analyses, largely because they require a great deal of effort to pull off successfully. If you choose to pursue mixed methods, it’s especially important to robustly justify your methods here.
Next, you should indicate how you processed and analysed your data. Avoid going into too much detail: you should not start introducing or discussing any of your results at this stage.
In quantitative research, your analysis will be based on numbers. In your methods section, you can describe how you prepared the data before analysing it, which software you used, and which statistical methods you applied.
In qualitative research, your analysis will be based on language, images, and observations (often involving some form of textual analysis ).
Specific methods might include content analysis, thematic analysis, or discourse analysis.
Mixed methods combine the above two research methods, integrating both qualitative and quantitative approaches into one coherent analytical process.
Above all, your methodology section should clearly make the case for why you chose the methods you did. This is especially true if you did not take the most standard approach to your topic. In this case, discuss why other methods were not suitable for your objectives, and show how this approach contributes new knowledge or understanding.
In any case, it should be overwhelmingly clear to your reader that you set yourself up for success in terms of your methodology’s design. Show how your methods should lead to results that are valid and reliable, while leaving the analysis of the meaning, importance, and relevance of your results for your discussion section .
Remember that your aim is not just to describe your methods, but to show how and why you applied them. Again, it’s critical to demonstrate that your research was rigorously conducted and can be replicated.
The methodology section should clearly show why your methods suit your objectives and convince the reader that you chose the best possible approach to answering your problem statement and research questions .
Your methodology can be strengthened by referencing existing research in your field. This can help you to show that you followed established practice for your kind of research, and to justify any departures from standard approaches.
Consider how much information you need to give, and avoid getting too lengthy. If you are using methods that are standard for your discipline, you probably don’t need to give a lot of background or justification.
Regardless, your methodology should be a clear, well-structured text that makes an argument for your approach, not just a list of technical details and procedures.
Methodology refers to the overarching strategy and rationale of your research. Developing your methodology involves studying the research methods used in your field and the theories or principles that underpin them, in order to choose the approach that best matches your objectives.
Methods are the specific tools and procedures you use to collect and analyse data (e.g. interviews, experiments , surveys , statistical tests ).
In a dissertation or scientific paper, the methodology chapter or methods section comes after the introduction and before the results , discussion and conclusion .
Depending on the length and type of document, you might also include a literature review or theoretical framework before the methodology.
Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.
Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.
A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.
For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.
Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.
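The 100-students example above corresponds to simple random sampling, where every member of the population has an equal chance of selection. A minimal sketch; the population size and student identifiers are invented:

```python
# Minimal sketch of simple random sampling without replacement.
# The sampling frame (5000 hypothetical students) is invented.
import random

random.seed(42)  # fixed seed so the draw is reproducible

population = [f"student_{i:04d}" for i in range(5000)]
sample = random.sample(population, k=100)  # each student equally likely

print(len(sample), "students sampled;", len(set(sample)), "distinct")
```

Because `random.sample` draws without replacement, no student can appear twice; documenting the randomisation procedure (including any seed) is part of making the sampling method replicable.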
McCombes, S. (2022, October 10). What Is a Research Methodology? | Steps & Tips. Scribbr. Retrieved 9 September 2024, from https://www.scribbr.co.uk/thesis-dissertation/methodology/
Qualitative vs quantitative vs mixed methods
By: Derek Jansen (MBA). Expert Reviewed By: Dr Eunice Rautenbach | June 2021
Without a doubt, one of the most common questions we receive at Grad Coach is “ How do I choose the right methodology for my research? ”. It’s easy to see why – with so many options on the research design table, it’s easy to get intimidated, especially with all the complex lingo!
In this post, we’ll explain the three overarching types of research – qualitative, quantitative and mixed methods – and how you can go about choosing the best methodological approach for your research.
Overview:
- Understanding the options: qualitative research, quantitative research, mixed methods-based research
- Choosing a research methodology: nature of the research, research area norms, practicalities
Before we jump into the question of how to choose a research methodology, it’s useful to take a step back to understand the three overarching types of research – qualitative , quantitative and mixed methods -based research. Each of these options takes a different methodological approach.
Qualitative research utilises data that is not numbers-based. In other words, qualitative research focuses on words , descriptions , concepts or ideas – while quantitative research makes use of numbers and statistics. Qualitative research investigates the “softer side” of things to explore and describe, while quantitative research focuses on the “hard numbers”, to measure differences between variables and the relationships between them.
Importantly, qualitative research methods are typically used to explore and gain a deeper understanding of the complexity of a situation – to draw a rich picture . In contrast to this, quantitative methods are usually used to confirm or test hypotheses . In other words, they have distinctly different purposes. The table below highlights a few of the key differences between qualitative and quantitative research – you can learn more about the differences here.
Mixed methods -based research, as you’d expect, attempts to bring these two types of research together, drawing on both qualitative and quantitative data. Quite often, mixed methods-based studies will use qualitative research to explore a situation and develop a potential model of understanding (this is called a conceptual framework), and then go on to use quantitative methods to test that model empirically.
In other words, while qualitative and quantitative methods (and the philosophies that underpin them) are completely different, they are not at odds with each other. It’s not a competition of qualitative vs quantitative. On the contrary, they can be used together to develop a high-quality piece of research. Of course, this is easier said than done, so we usually recommend that first-time researchers stick to a single approach , unless the nature of their study truly warrants a mixed-methods approach.
The key takeaway here, and the reason we started by looking at the three options, is that it’s important to understand that each methodological approach has a different purpose – for example, to explore and understand situations (qualitative), to test and measure (quantitative) or to do both. They’re not simply alternative tools for the same job.
Right – now that we’ve got that out of the way, let’s look at how you can go about choosing the right methodology for your research.
To choose the right research methodology for your dissertation or thesis, you need to consider three important factors . Based on these three factors, you can decide on your overarching approach – qualitative, quantitative or mixed methods. Once you’ve made that decision, you can flesh out the finer details of your methodology, such as the sampling , data collection methods and analysis techniques (we discuss these separately in other posts ).
The three factors you need to consider are:
1. The nature of your research aims, objectives and research questions
2. The methodological norms in your research area
3. The practical constraints you're working within
Let’s take a look at each of these.
As I mentioned earlier, each type of research (and therefore, research methodology), whether qualitative, quantitative or mixed, has a different purpose and helps solve a different type of question. So, it’s logical that the key deciding factor in terms of which research methodology you adopt is the nature of your research aims, objectives and research questions .
But, what types of research exist?
Broadly speaking, research can fall into one of three categories: exploratory, confirmatory, or a combination of the two.
As a rule of thumb, exploratory research tends to adopt a qualitative approach , whereas confirmatory research tends to use quantitative methods . This isn’t set in stone, but it’s a very useful heuristic. Naturally then, research that combines a mix of both, or is seeking to develop a theory from the ground up and then test that theory, would utilize a mixed-methods approach.
Let’s look at an example in action.
If your research aims were to understand the perspectives of war veterans regarding certain political matters, you’d likely adopt a qualitative methodology, making use of interviews to collect data and one or more qualitative data analysis methods to make sense of the data.
If, on the other hand, your research aims involved testing a set of hypotheses regarding the link between political leaning and income levels, you’d likely adopt a quantitative methodology, using numbers-based data from a survey to measure the links between variables and/or constructs .
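Measuring the link between two numeric variables, as in the quantitative example above, typically comes down to a correlation or regression coefficient. Below is a sketch of Pearson's r computed from its textbook formula; the income and leaning values are invented for illustration, not real survey data:

```python
# Pearson correlation computed from its textbook formula.
# The income and leaning values below are invented for illustration.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

income = [32, 45, 51, 60, 72, 80]         # hypothetical, thousands per year
leaning = [2.1, 2.8, 3.0, 3.6, 4.1, 4.4]  # hypothetical 1-5 scale scores

r = pearson_r(income, leaning)
print(f"r = {r:.3f}")  # close to 1 for this contrived near-linear data
```

In practice you would use a statistics package and also report a p-value or confidence interval; the methods section itself only needs to name the test and justify why it suits the data.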
So, the first (and most important) thing you need to consider when deciding which methodological approach to use for your research project is the nature of your research aims, objectives and research questions. Specifically, you need to assess whether your research leans in an exploratory or confirmatory direction or involves a mix of both.
The importance of achieving solid alignment between these three factors and your methodology can’t be overstated. If they’re misaligned, you’re going to be forcing a square peg into a round hole. In other words, you’ll be using the wrong tool for the job, and your research will become a disjointed mess.
If your research is a mix of both exploratory and confirmatory, but you have a tight word count limit, you may need to consider trimming down the scope a little and focusing on one or the other. One methodology executed well has a far better chance of earning marks than a poorly executed mixed methods approach. So, don’t try to be a hero, unless there is a very strong underpinning logic.
Choosing the right methodology for your research also involves looking at the approaches used by other researchers in the field, and studies with similar research aims and objectives to yours. Oftentimes, within a discipline, there is a common methodological approach (or set of approaches) used in studies. While this doesn’t mean you should follow the herd “just because”, you should at least consider these approaches and evaluate their merit within your context.
A major benefit of reviewing the research methodologies used by similar studies in your field is that you can often piggyback on the data collection techniques that other (more experienced) researchers have developed. For example, if you’re undertaking a quantitative study, you can often find tried and tested survey scales with high Cronbach’s alphas. These are usually included in the appendices of journal articles, so you don’t even have to contact the original authors. By using these, you’ll save a lot of time and ensure that your study stands on the proverbial “shoulders of giants” by using high-quality measurement instruments .
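To make the Cronbach's alpha mentioned above concrete, the reliability coefficient can be computed directly from raw scale responses. The sketch below is a minimal illustration using invented data — the respondents, items, and scores are hypothetical, not from any real instrument:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a matrix of respondents (rows) by scale items (columns)."""
    k = items.shape[1]                          # number of items in the scale
    item_variances = items.var(axis=0, ddof=1)  # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: six respondents answering a four-item 1-5 Likert scale.
responses = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(responses), 2))  # → 0.95 (above the usual 0.7 threshold)
```

An alpha this high suggests the items measure the same underlying construct, which is exactly the property you get "for free" when reusing a tried and tested scale.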
Of course, when reviewing existing literature, keep point #1 front of mind. In other words, your methodology needs to align with your research aims, objectives and questions. Don’t fall into the trap of adopting the methodological “norm” of other studies just because it’s popular. Only adopt that which is relevant to your research.
When choosing a research methodology, there will always be a tension between doing what’s theoretically best (i.e., the most scientifically rigorous research design ) and doing what’s practical , given your constraints . This is the nature of doing research and there are always trade-offs, as with anything else.
But what constraints, you ask?
When you're evaluating your methodological options, you need to consider the following practical constraints:

- Access to data
- Time limits
- Money and budget
- Equipment and software
- Your knowledge and skills
Let’s look at each of these.
The first practical constraint you need to consider is your access to data . If you’re going to be undertaking primary research , you need to think critically about the sample of respondents you realistically have access to. For example, if you plan to use in-person interviews , you need to ask yourself how many people you’ll need to interview, whether they’ll be agreeable to being interviewed, where they’re located, and so on.
If you’re wanting to undertake a quantitative approach using surveys to collect data, you’ll need to consider how many responses you’ll require to achieve statistically significant results. For many statistical tests, a sample of a few hundred respondents is typically needed to develop convincing conclusions.
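As a rough illustration of where "a few hundred respondents" comes from, the standard formula for the minimum sample size needed to estimate a population proportion can be sketched as follows. This is a simplification for a single proportion, not a substitute for a proper power analysis for your specific statistical tests:

```python
import math

def sample_size(z: float = 1.96, margin_of_error: float = 0.05,
                proportion: float = 0.5) -> int:
    """Minimum sample size for estimating a population proportion.

    Standard formula n = z^2 * p * (1 - p) / e^2, using p = 0.5 as the
    most conservative assumption. Defaults: 95% confidence, +/-5% margin."""
    n = (z ** 2) * proportion * (1 - proportion) / (margin_of_error ** 2)
    return math.ceil(n)

print(sample_size())                      # → 385 respondents
print(sample_size(margin_of_error=0.03))  # → 1068 respondents
```

Note how quickly the requirement grows as the margin of error tightens — a useful reality check when assessing whether you can realistically reach enough respondents.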
So, think carefully about what data you’ll need access to, how much data you’ll need and how you’ll collect it. The last thing you want is to spend a huge amount of time on your research only to find that you can’t get access to the required data.
The next constraint is time. If you’re undertaking research as part of a PhD, you may have a fairly open-ended time limit, but this is unlikely to be the case for undergrad and Masters-level projects. So, pay attention to your timeline, as the data collection and analysis components of different methodologies have a major impact on time requirements . Also, keep in mind that these stages of the research often take a lot longer than originally anticipated.
Another practical implication of time limits is that it will directly impact which time horizon you can use – i.e. longitudinal vs cross-sectional . For example, if you’ve got a 6-month limit for your entire research project, it’s quite unlikely that you’ll be able to adopt a longitudinal time horizon.
As with so many things, money is another important constraint you’ll need to consider when deciding on your research methodology. While some research designs will cost near zero to execute, others may require a substantial budget .
Some of the costs that may arise include:
These are just a handful of costs that can creep into your research budget. Like most projects, the actual costs tend to be higher than the estimates, so be sure to err on the conservative side and expect the unexpected. It’s critically important that you’re honest with yourself about these costs, or you could end up getting stuck midway through your project because you’ve run out of money.
Another practical consideration is the hardware and/or software you’ll need in order to undertake your research. Of course, this variable will depend on the type of data you’re collecting and analysing. For example, you may need lab equipment to analyse substances, or you may need specific analysis software to analyse statistical data. So, be sure to think about what hardware and/or software you’ll need for each potential methodological approach, and whether you have access to these.
The final practical constraint is a big one. Naturally, the research process involves a lot of learning and development along the way, so you will accrue knowledge and skills as you progress. However, when considering your methodological options, you should still consider your current position on the ladder.
Some of the questions you should ask yourself are:
Answering these questions honestly will provide you with another set of criteria against which you can evaluate the research methodology options you’ve shortlisted.
So, as you can see, there is a wide range of practicalities and constraints that you need to take into account when you’re deciding on a research methodology. These practicalities create a tension between the “ideal” methodology and the methodology that you can realistically pull off. This is perfectly normal, and it’s your job to find the option that presents the best set of trade-offs.
In this post, we've discussed how to go about choosing a research methodology. The three major deciding factors we looked at were:

1. The nature of your research aims, objectives and research questions
2. The methodological approaches used in similar studies within your field
3. The practical constraints you're working within (data access, time, money, equipment and skills)
If you have any questions, feel free to leave a comment below. If you’d like a helping hand with your research methodology, check out our 1-on-1 research coaching service , or book a free consultation with a friendly Grad Coach.
This post was based on one of our popular Research Bootcamps . If you're working on a research project, you'll definitely want to check this out ...
This guide covers quantitative research methodologies, qualitative research methodologies, mixed-methods methodologies, and how to select a methodology.
According to Dawson (2019), a research methodology is the primary principle that will guide your research. It becomes the general approach to conducting research on your topic and determines what research method you will use. A research methodology is different from a research method because research methods are the tools you use to gather your data (Dawson, 2019). You must consider several issues when it comes to selecting the most appropriate methodology for your topic. Issues might include research limitations and ethical dilemmas that might impact the quality of your research. Descriptions of each type of methodology are included below.
Quantitative research methodologies are meant to create numeric statistics by using survey research to gather data (Dawson, 2019). This approach tends to reach a larger number of people in a shorter amount of time. According to Labaree (2020), there are three parts that make up a quantitative research methodology:
Once you decide on a methodology, you can consider the method to which you will apply your methodology.
Qualitative research methodologies examine the behaviors, opinions, and experiences of individuals through methods of examination (Dawson, 2019). This type of approach typically requires fewer participants, but more time with each participant. It gives research subjects the opportunity to provide their own opinion on a certain topic.
Examples of Qualitative Research Methodologies
A mixed methodology allows you to implement the strengths of both qualitative and quantitative research methods. In some cases, you may find that your research project would benefit from this. This approach is beneficial because it allows each methodology to counteract the weaknesses of the other (Dawson, 2019). You should consider this option carefully, as it can make your research complicated if not planned correctly.
What should you do to decide on a research methodology? The most logical way to determine your methodology is to decide whether you plan on conducting qualitative or quantitative research. You also have the option to implement a mixed-methods approach. Looking back on Dawson's (2019) five "W's" on the previous page may help you with this process. You should also look for key words that indicate a specific type of research methodology in your hypothesis or proposal. Some words may lean more towards one methodology over another.
Quantitative Research Key Words
Qualitative Research Key Words
The methods section of a research paper provides the information by which a study’s validity is judged. The method section answers two main questions: 1) How was the data collected or generated? 2) How was it analyzed? The writing should be direct and precise and written in the past tense.
You must explain how you obtained and analyzed your results for the following reasons:
Bem, Daryl J. Writing the Empirical Journal Article . Psychology Writing Center. University of Washington; Lunenburg, Frederick C. Writing a Successful Thesis or Dissertation: Tips and Strategies for Students in the Social and Behavioral Sciences . Thousand Oaks, CA: Corwin Press, 2008.
I. Groups of Research Methods
There are two main groups of research methods in the social sciences:
II. Content
An effectively written methodology section should:
NOTE: Once you have written all of the elements of the methods section, subsequent revisions should focus on how to present those elements as clearly and as logically as possible. The description of how you prepared to study the research problem, how you gathered the data, and the protocol for analyzing the data should be organized chronologically. For clarity, when a large amount of detail must be presented, information should be presented in sub-sections according to topic.
III. Problems to Avoid
Irrelevant Detail
The methodology section of your paper should be thorough but to the point. Don't provide any background information that doesn't directly help the reader to understand why a particular method was chosen, how the data was gathered or obtained, and how it was analyzed.

Unnecessary Explanation of Basic Procedures
Remember that you are not writing a how-to guide about a particular method. You should make the assumption that readers possess a basic understanding of how to investigate the research problem on their own and, therefore, you do not have to go into great detail about specific methodological procedures. The focus should be on how you applied a method, not on the mechanics of doing a method. NOTE: An exception to this rule is if you select an unconventional approach to doing the method; if this is the case, be sure to explain why this approach was chosen and how it enhances the overall research process.

Problem Blindness
It is almost a given that you will encounter problems when collecting or generating your data. Do not ignore these problems or pretend they did not occur. Often, documenting how you overcame obstacles can form an interesting part of the methodology. It demonstrates to the reader that you can provide a cogent rationale for the decisions you made to minimize the impact of any problems that arose.

Literature Review
Just as the literature review section of your paper provides an overview of sources you have examined while researching a particular topic, the methodology section should cite any sources that informed your choice and application of a particular method [i.e., the choice of a survey should include any citations to the works you used to help construct the survey].
It’s More than Sources of Information! A description of a research study's method should not be confused with a description of the sources of information. Such a list of sources is useful in itself, especially if it is accompanied by an explanation about the selection and use of the sources. The description of the project's methodology complements a list of sources in that it sets forth the organization and interpretation of information emanating from those sources.
Azevedo, L.F. et al. How to Write a Scientific Paper: Writing the Methods Section. Revista Portuguesa de Pneumologia 17 (2011): 232-238; Butin, Dan W. The Education Dissertation A Guide for Practitioner Scholars . Thousand Oaks, CA: Corwin, 2010; Carter, Susan. Structuring Your Research Thesis . New York: Palgrave Macmillan, 2012; Lunenburg, Frederick C. Writing a Successful Thesis or Dissertation: Tips and Strategies for Students in the Social and Behavioral Sciences . Thousand Oaks, CA: Corwin Press, 2008. Methods Section . The Writer’s Handbook. Writing Center. University of Wisconsin, Madison; Writing the Experimental Report: Methods, Results, and Discussion . The Writing Lab and The OWL. Purdue University; Methods and Materials . The Structure, Format, Content, and Style of a Journal-Style Scientific Paper. Department of Biology. Bates College.
Statistical Designs and Tests? Do Not Fear Them!
Don't avoid using a quantitative approach to analyzing your research problem just because you fear the idea of applying statistical designs and tests. A qualitative approach, such as conducting interviews or content analysis of archival texts, can yield exciting new insights about a research problem, but it should not be undertaken simply because you have a disdain for running a simple regression. A well-designed quantitative research study can often be accomplished in very clear and direct ways, whereas a similar study of a qualitative nature usually requires considerable time to analyze large volumes of data and carries the burden of creating new paths for analysis where no path associated with your research problem previously existed.
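To show just how approachable a "simple regression" is in practice, the sketch below fits an ordinary least squares line to a small set of invented data — the study-hours and exam-score numbers are hypothetical, chosen purely for illustration:

```python
import numpy as np

# Hypothetical data: weekly study hours vs. exam scores for eight students.
hours = np.array([2, 4, 5, 7, 8, 10, 12, 14], dtype=float)
scores = np.array([52, 58, 60, 65, 70, 74, 80, 85], dtype=float)

# Ordinary least squares fit: scores ≈ intercept + slope * hours.
slope, intercept = np.polyfit(hours, scores, deg=1)

# R-squared: share of the variance in scores explained by the fitted line.
predicted = intercept + slope * hours
r_squared = 1 - ((scores - predicted) ** 2).sum() / ((scores - scores.mean()) ** 2).sum()

print(f"slope={slope:.2f}, intercept={intercept:.2f}, R^2={r_squared:.3f}")
# → slope=2.77, intercept=46.56, R^2=0.997
```

A few lines of code summarize the relationship between two variables — far less daunting than the reputation of statistical analysis might suggest.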
Knowing the Relationship Between Theories and Methods
There can be multiple meanings associated with the term "theories" and the term "methods" in social sciences research. A helpful way to delineate between them is to understand "theories" as representing different ways of characterizing the social world when you research it and "methods" as representing different ways of generating and analyzing data about that social world. Framed in this way, all empirical social sciences research involves theories and methods, whether they are stated explicitly or not. However, while theories and methods are often related, it is important that, as a researcher, you deliberately separate them in order to avoid your theories playing a disproportionate role in shaping what outcomes your chosen methods produce.
Introspectively engage in an ongoing dialectic between theories and methods to help enable you to use the outcomes from your methods to interrogate and develop new theories, or ways of framing conceptually the research problem. This is how scholarship grows and branches out into new intellectual territory.
Reynolds, R. Larry. Ways of Knowing. Alternative Microeconomics. Part 1, Chapter 3. Boise State University; The Theory-Method Relationship . S-Cool Revision. United Kingdom.
The basic purpose of any position paper is to present an arguable opinion about a particular topic.
The primary objective is to convince the audience that your opinion on the topic is valid and worth considering, using well-researched facts and other expert opinions to support your claims. The secondary objective is to factually refute the validity of the opposing side's counter-claims. This indicates to your audience that you are a credible, well-informed resource.
Abstract: The abstract contains the purpose of the paper. Give a very high-level overview of the content of the paper and include a clear case for action. The intent is to convey to the reader why this is important. Most abstracts are written in the future tense, because the reader hasn’t read the paper yet, so explain what they will read, not what they already know.
Background: Describe the issue(s) and give any relevant background. (Basically, a literature review)
Position: Include a description of the position paper subject with focus on the issue the paper is addressing. Include two or three paragraphs describing the issue. Provide background details that are required to understand the scenario. This is one of the most important sections – work to clearly convey thoughts and issues.
Conclusion: Conclude gracefully. If applicable, make a recommendation of one solution over another to solve the problem. Explain succinctly why the association is taking this position.
References: Include a bibliography of resources used during the preparation of the paper. Be sure to cite references actually used in the paper.
( ALA website )
The structure of a position paper is flexible, but it should generally follow a simple flow that clearly conveys the problem and the position of the author(s). A position paper should begin by clearly stating the problem and its relevance to the scientific community or even to society as a whole. It should then address the main position of the author. For example:
Background: For decades, the WHO has urged the adoption of a tax on unhealthy foods to discourage the consumption of products that are harmful to our health.
Relevance: Sugar has been shown to have a negative impact on health and to play a major role in the rising obesity rates in America.
Position: The United States should adopt a tax on drinks with added sugar, to reduce the consumption of sugar, and promote healthier eating habits.
The author should then clearly list the common arguments and possible objections against this position. To continue with our example:
Argument 1: A sugary drink tax that focuses on soda may not impact other products that have an equally negative health impact such as fruit juice or candy.
Argument 2: A sugary drink tax is regressive and places a financial burden on the poorest consumers.
A strong position paper acknowledges the validity of the counter-arguments and then puts forth reasons why the author’s position is still the correct one. In our example paper, the author can address the counter-arguments in the next section like so:
Counter-argument 1 : It is true that a sugary drink tax would not impact all sources of added sugar in the average American diet. However, it would still have a significant impact on a major source of added sugar to achieve its goal of reducing overall sugar consumption.
Counter-argument 2: All consumption taxes are regressive. A sugary drink tax would be most effective accompanied by subsidies for healthy foods such as fruit and vegetables.
Finally, summarize your main points and re-state your position in your conclusion. All arguments in the paper should be backed up by facts, data, and evidence , with proper citation attributed to your sources. In this way, a position paper is no different from an ordinary research paper .
( enago.com )
___________________________________________________________________________________________________________________
Note: A position paper should not restate the obvious facts about the text or topic. Position papers rely on critical evaluation that goes beyond a mere surface reading or a passionate personal reaction. The thesis statement of the paper should be crafted in such a way as to ensure that discussion of the subject is necessary and relevant.
Ineffective: Legislators continue to debate the extent to which government should be involved in the lives of individual citizens.
Effective: A responsible government must respect the rights of individuals and agree not to interfere with citizens’ abilities to make sensible decisions for themselves.
The second point can easily be debated while the first states a well-known fact that is not open to individual interpretation.
( agnesscott.edu )
1) Gather evidence – prioritize relevance and credibility. VERIFY the credibility of your sources and be sure that the evidence you’re using is grounded in the most recent research.
2) You can use expert quotes to support your stance, but sparingly. The strength of your argument will come from the critical evaluation, interpretation and presentation of a multitude of sources – don’t over-rely on a single voice.
3) Keep it simple! Position papers don't need to go into excessive detail. Present your points clearly and briefly.
4) Each paragraph in the paper should discuss a single idea.
5) Avoid using the passive voice and words such as "maybe," "perhaps," and "possibly" that weaken your argument. Phrases like "in my opinion" are also needless and sound apologetic instead of certain. If you're writing the paper, it's obviously your opinion.
6) Don’t be afraid to be argumentative. That’s why it’s called a “position” paper.
( grammarly and agnesscott.edu )
25 Pages Posted: 7 Sep 2024
As global climate change intensifies, the frequency and severity of forest fires are escalating at an unprecedented rate. Effective emergency prevention and control measures are essential to forestall forest fire incidents and mitigate their devastating impact. Both in pre-disaster wildfire prediction and during emergency response, the intricate analysis, interpretation, and management of complex environmental factors are involved. The complexity inherent in forest fire incidents, characterized by diverse data types, various scenarios, and disparate computational models, poses significant challenges to providing efficient decision support during emergency response. To address these challenges, this study leverages the concept of knowledge graphs and, for the first time, introduces a novel data-driven integrated computing framework. This framework constructs a tuple model architecture that encapsulates environmental data, computational models, and their interrelationships within a cohesive graph structure. This approach unifies diverse multi-source heterogeneous thematic data and computational models within a singular computational framework, culminating in an integrated system for forest fire decision support. The study conducts experiments using pre-disaster wildfire prediction and the strategic avoidance of casualty-prone zones during fire spread rescue operations as illustrative examples. The experimental results demonstrate that the tuple model architecture effectively facilitates data and model sharing across multiple scenarios in forest fire emergency response, offering an innovative, data-driven, integrated computing methodology for addressing these challenges.
Keywords: multi-source heterogeneous data, graph data representation, tuple model architecture, integrated computing framework, computational model, decision support
Methodology
Published on June 7, 2021 by Shona McCombes . Revised on September 5, 2024 by Pritha Bhandari.
A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about:
A well-planned research design helps ensure that your methods match your research objectives and that you use the right kind of analysis for your data.
You might have to write up a research design as a standalone assignment, or it might be part of a larger research proposal or other project. In either case, you should carefully consider which methods are most appropriate and feasible for answering your question.
Step 1: Consider your aims and approach
Step 2: Choose a type of research design
Step 3: Identify your population and sampling method
Step 4: Choose your data collection methods
Step 5: Plan your data collection procedures
Step 6: Decide on your data analysis strategies
Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.
There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities—start by thinking carefully about what you want to achieve.
The first choice you need to make is whether you’ll take a qualitative or quantitative approach.
Qualitative approach | Quantitative approach
---|---
Explores ideas and experiences in depth; data are expressed in words | Measures variables and describes frequencies, averages, and correlations about relationships between variables; data are expressed in numbers
Qualitative research designs tend to be more flexible and inductive , allowing you to adjust your approach based on what you find throughout the research process.
Quantitative research designs tend to be more fixed and deductive , with variables and hypotheses clearly defined in advance of data collection.
It’s also possible to use a mixed-methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.
As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .
At each stage of the research design process, make sure that your choices are practically feasible.
Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.
Quantitative designs can be split into four main types.
Type of design | Purpose and characteristics
---|---
Experimental | Tests cause-and-effect relationships by manipulating an independent variable and measuring its effect on a dependent variable
Quasi-experimental | Tests cause-and-effect relationships, but without full experimental control (e.g., no random assignment to groups)
Correlational | Measures and describes relationships between variables without manipulating them
Descriptive | Describes the characteristics of a population or phenomenon without testing relationships between variables
With descriptive and correlational designs, you can get a clear picture of characteristics, trends and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).
Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.
Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.
The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analyzing the data.
Type of design | Purpose and characteristics
---|---
Grounded theory | Develops a theory inductively from data that are collected and analyzed systematically
Phenomenology | Investigates a phenomenon through the lived experiences and perceptions of those who have encountered it
Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.
In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.
A population can be made up of anything you want to study—plants, animals, organizations, texts, countries, etc. In the social sciences, it most often refers to a group of people.
For example, will you focus on people from a specific demographic, region or background? Are you interested in people with a certain job or medical condition, or users of a particular product?
The more precisely you define your population, the easier it will be to gather a representative sample.
Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.
To select a sample, there are two main approaches: probability sampling and non-probability sampling . The sampling method you use affects how confidently you can generalize your results to the population as a whole.
Probability sampling | Non-probability sampling
---|---
Every member of the population has a known, non-zero chance of being selected (e.g., simple random or stratified sampling), so results can be statistically generalized | Selection is based on convenience or judgment (e.g., convenience or snowball sampling), which is easier to achieve but carries a greater risk of sampling bias
Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.
For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
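As a concrete illustration of the distinction, a simple random sample — the most basic probability sampling method — can be drawn in a few lines. The population of 500 student ID numbers here is hypothetical:

```python
import random

# Hypothetical sampling frame: ID numbers for a population of 500 students.
population = list(range(1, 501))

# Simple random sampling: every member has the same known chance of
# selection (here 50/500 = 10%), so results can be generalized statistically.
random.seed(42)  # fixed seed only so the draw is reproducible
sample = random.sample(population, k=50)  # draws without replacement

print(len(sample))  # → 50
```

The hard part in practice is rarely the draw itself but obtaining a complete sampling frame — a list of every member of the population — which is why probability sampling is often infeasible for large or hard-to-reach populations.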
In some types of qualitative designs, sampling may not be relevant.
For example, in an ethnography or a case study , your aim is to deeply understand a specific context, not to generalize to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.
In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question .
For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.
Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.
You can choose just one data collection method, or use several methods in the same study.
Surveys allow you to collect data about opinions, behaviors, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews .
Questionnaires | Interviews
---|---
Respondents answer a fixed set of written questions on their own (online, by mail, or in person); efficient for reaching large samples | A researcher asks questions orally, one-on-one or in groups; allows follow-up questions, but is more time-consuming per participant
Observational studies allow you to collect data unobtrusively, observing characteristics, behaviors or social interactions without relying on self-reporting.
Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.
Quantitative observation | Qualitative observation
---|---
Systematically counting or measuring predefined behaviors or events | Taking open-ended notes that describe behavior and context in detail
There are many other ways you might collect data depending on your field and topic.
Field | Examples of data collection methods |
---|---|
Media & communication | Collecting a sample of texts (e.g., speeches, articles, or social media posts) for data on cultural norms and narratives |
Psychology | Using technologies like neuroimaging, eye-tracking, or computer-based tasks to collect data on things like attention, emotional response, or reaction time |
Education | Using tests or assignments to collect data on knowledge and skills |
Physical sciences | Using scientific instruments to collect data on things like weight, blood pressure, or chemical composition |
If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what kinds of data collection methods they used.
If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected—for example, datasets from government surveys or previous studies on your topic.
With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.
Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.
However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.
As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.
Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are high in reliability and validity.
Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalization means turning these fuzzy ideas into measurable indicators.
If you’re using observations , which events or actions will you count?
If you’re using surveys , which questions will you ask and what range of responses will be offered?
You may also choose to use or adapt existing materials designed to measure the concept you’re interested in—for example, questionnaires or inventories whose reliability and validity have already been established.
Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.
For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.
If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.
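One common way to check internal-consistency reliability on pilot data is Cronbach’s alpha. The sketch below is illustrative only — the pilot scores are made up, and the 0.7 threshold is a rough convention, not a rule from this article:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical pilot data: 5 respondents answering 4 Likert-scale items
pilot = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])
alpha = cronbach_alpha(pilot)
print(round(alpha, 2))  # values above roughly 0.7 are conventionally seen as acceptable
```

A low alpha on the pilot sample would suggest revising or dropping items before the main study.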
As well as choosing an appropriate sampling method , you need a concrete plan for how you’ll actually contact and recruit your selected sample.
That means making decisions about things like:
If you’re using a probability sampling method , it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?
If you’re using a non-probability method , how will you avoid research bias and ensure a representative sample?
It’s also important to create a data management plan for organizing and storing your data.
Will you need to transcribe interviews or perform data entry for observations? You should anonymize and safeguard any sensitive data, and make sure it’s backed up regularly.
Keeping your data well-organized will save time when it comes to analyzing it. It can also help other researchers validate and add to your findings (high replicability ).
On its own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyze the data.
In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarize your sample data, make estimates, and test hypotheses.
Using descriptive statistics , you can summarize your sample data in terms of:
The specific calculations you can do depend on the level of measurement of your variables.
Using inferential statistics , you can:
Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.
Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
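As a minimal sketch of how these two stages fit together — descriptive summaries of each sample, then an inferential comparison test — with entirely made-up exam scores:

```python
import statistics
from scipy import stats

# Hypothetical exam scores for two groups of students
group_a = [72, 85, 78, 90, 66, 81, 74, 88]
group_b = [65, 70, 62, 75, 68, 71, 60, 73]

# Descriptive statistics: summarize each sample
print(statistics.mean(group_a), statistics.stdev(group_a))
print(statistics.mean(group_b), statistics.stdev(group_b))

# Inferential statistics: an independent-samples t test for a group difference
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A small p-value would suggest the difference in group means is unlikely under the null hypothesis; which test is appropriate still depends on your variables and data distribution, as noted above.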
In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.
Two of the most common approaches to doing this are thematic analysis and discourse analysis .
Approach | Characteristics |
---|---|
Thematic analysis | Codes the data to identify, analyze, and report recurring themes |
Discourse analysis | Examines how language is used, and what it accomplishes, within its social context |
There are many other ways of analyzing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.
If you want to know more about the research process, methodology, research bias, or statistics, make sure to check out some of our other articles with explanations and examples.
A research design is a strategy for answering your research question . It defines your overall approach and determines how you will collect and analyze data.
A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data, and that you use the right kind of analysis to answer your questions, utilizing credible sources . This allows you to draw valid , trustworthy conclusions.
Quantitative research designs can be divided into two main categories:
Qualitative research designs tend to be more flexible. Common types of qualitative design include case study , ethnography , and grounded theory designs.
The priorities of a research design can vary depending on the field, but you usually have to specify:
A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.
In statistics, sampling allows you to test a hypothesis about the characteristics of a population.
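A minimal sketch of drawing a simple random sample — a probability sampling method in which every member has an equal chance of selection — from a hypothetical student population:

```python
import random

# Hypothetical population of 2,000 student IDs
population = [f"student_{i:04d}" for i in range(2000)]

random.seed(42)  # fix the seed so the draw is reproducible
sample = random.sample(population, k=100)  # without replacement, equal chance each

print(len(sample))       # 100
print(len(set(sample)))  # 100 — no duplicates, since sampling is without replacement
```

In practice the hard part is not the draw itself but obtaining a complete, accurate sampling frame (the list of the whole population) to draw from.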
Operationalization means turning abstract conceptual ideas into measurable observations.
For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.
Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.
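As a sketch of what an operational definition can look like in practice, the function below scores a hypothetical “social anxiety” measure as the mean of several 1–5 self-rating items. The items, scale, and scoring rule are all illustrative assumptions, not a validated instrument:

```python
def social_anxiety_score(ratings: list[int]) -> float:
    """Operational definition (illustrative): mean of 1-5 self-rating items."""
    if not all(1 <= r <= 5 for r in ratings):
        raise ValueError("each item must be rated on a 1-5 scale")
    return sum(ratings) / len(ratings)

# One hypothetical participant's responses to four items
responses = [4, 3, 5, 4]
print(social_anxiety_score(responses))  # 4.0
```

Writing the definition down this explicitly is the point of operationalization: anyone repeating the study can compute exactly the same indicator.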
A research project is an academic, scientific, or professional undertaking to answer a research question . Research projects can take many forms, such as qualitative or quantitative , descriptive , longitudinal , experimental , or correlational . What kind of research approach you choose will depend on your topic.
McCombes, S. (2024, September 05). What Is a Research Design | Types, Guide & Examples. Scribbr. Retrieved September 9, 2024, from https://www.scribbr.com/methodology/research-design/
Holly Bennion, PhD graduate at Durham University 9 Sep 2024
This blog post focuses on my approach to using poetry as an analytical tool in a recent empirical study. There is an exciting body of research highlighting the potential for incorporating poetry into the various stages of the research process. Writing and sharing poems can be an effective data collection method, whereby poems are constructed by/with participants to explore their stories, feelings and memories. Poetry can also be used as an analytical/interpretative lens – for example, Carr (2003) created poems to document the experiences of family members of hospitalised relatives, transforming interview transcripts into poetry. Researchers can also use poetry to disseminate educational research and extend the tone and scope of research communication. The growing emergence of poetry in research, underpinned by arts-based research, is also connected to theoretical insights by postmodern, poststructural and feminist theories, which invites transformative and inclusive possibilities for research that goes beyond hegemonic and traditional forms of knowledge (Cutts & Sankofa Waters, 2019).
My PhD research explored children’s experiences and perspectives of belonging and school inclusion. I explored the interconnectivity in discourses on self-identification, otherness and school inclusion in multilingual and multicultural spaces. The methods included focus groups, children’s artwork, co-analysis with participants, and dance and drama workshops. As part of the data analysis process, I chose to experiment with poems. This process involved going back and forth between the transcriptions, the NVivo coding, and looking closely at the participants’ artwork and what they said about it.
To begin the process, I experimented with free-verse poetry, whereby I attempted to use poetry to identify connections between participants’ comments, further identify themes and keywords, and document my own reflections and feelings as I delved into the data.
Then, I began experimenting with structure and specific words and phrases. I used linguistic devices such as repetition to illustrate aspects that the participants felt strongly about or things they mentioned frequently. I experimented with using short, snappy lines or long, stream-of-consciousness lines to imply the tone of voice and the atmosphere of the workshops. I selected six poems to include in my thesis. Below is one example, which takes verbatim the words of the participants:
Something for you
It belongs to me and
I own it, just mine, not sharing
I may share it sometimes
My life, my bed
The first part of this poem reflects Aasab’s comment: ‘belonging is something for you, it’s like a surprise for you and we have to keep it’. I was interested in her view of belonging as a ‘surprise’. The exclamation mark was used to convey her excited tone of voice. The repetition of ‘my’ – ‘my life, my bed, my things’ – was utilised to highlight how participants often distinguished between what is ‘mine’ and ‘yours’.
‘Through poetry, I was liberated from the structured form of academic writing; I could experiment with themes, form, language, tone and imagery to interpret and represent the children’s comments about belonging and school inclusion.’
The notion of material possessions and human–object relationships was significant in the findings. Furman and colleagues (2007) note that poetry can be a powerful tool for communication through the playfulness of metaphor, alliteration and visual elements. Through poetry, I was liberated from the structured form of academic writing; I could experiment with themes, form, language, tone and imagery to interpret and represent the children’s comments about belonging and school inclusion. I found that poetry as an analysis tool gave me enthusiasm for and confidence in my data.
Reflecting on my research approach, I advocate that poetry can serve as a valuable analysis tool for research, and it can be utilised as part of a multi-level approach. Poetry can be a powerful tool for communicating the researcher’s reflections and interpretations of the data and representing the voices of participants in engaging ways. Importantly, I was not seeking to create a single narrative through the poetry. Poetry is open to interpretation; it is evocative and invites emotional engagement. Like my data collection methods – which invited collaboration, imagination and contradictions among participants – the poetry was an interesting tool that enabled multiple narratives, opinions and clarifications for the researcher and audience.
To conclude, I quote poet and academic Neil McBride (2009, p. 43):
‘[Poetry] questions, it leaves frayed edges and loose ends. It draws out the hidden, the spiritual, the underlying rhythms of life that we swamp with information, noise and news channels.’
Holly will be presenting at the BERA Conference 2024 and WERA Focal Meeting on Monday 9 September at 12:45pm for a symposium panel on ‘Migration and Education across the Four Nations of the UK’.
Carr, J. (2003). Poetic expressions of vigilance. Qualitative Health Research, 13(9), 1324–1331. https://doi.org/10.1177/1049732303254018
Cutts, Q., & Sankofa Waters, M. (2019). Poetic approaches to qualitative data analysis. Education Publications, 145. https://doi.org/10.1093/acrefore/9780190264093.013.993
Furman, R., Langer, C., Davis, C. S., Gallardo, H. P., & Kulkarni, S. (2007). Expressive, research and reflective poetry as qualitative inquiry: A study of adolescent identity. Qualitative Research, 7(3), 301–315. https://doi.org/10.1177/1468794107078511
McBride, N. (2009, December 3). Poetry cornered. Times Higher Education, 1(925), 42–44. https://www.timeshighereducation.com/features/poetry-cornered/409334.article
P. Giuliani, K. Godbey, V. Kejzlar, and W. Nazarewicz, Phys. Rev. Research 6, 033266 – Published 9 September 2024.
One can improve predictability in the unknown domain by combining forecasts of imperfect complex computational models using a Bayesian statistical machine learning framework. In many cases, however, the models used in the mixing process are similar. In addition to contaminating the model space, the existence of such similar, or even redundant, models during the multimodeling process can result in misinterpretation of results and deterioration of predictive performance. In this paper we describe a method based on the principal component analysis that eliminates model redundancy. We show that by adding model orthogonalization to the proposed Bayesian model combination framework, one can arrive at better prediction accuracy and reach excellent uncertainty quantification performance.
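One way to read the orthogonalization step described above is as a principal component analysis of the collection of model outputs: centering the predictions and taking an SVD, so that redundant models collapse onto shared directions. The sketch below uses synthetic predictions and is an illustrative reading of the approach, not the authors’ code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: 7 models predicting the same quantity at 50 points.
# Models 0-3 are near-duplicates of one another (a redundant model class).
truth = np.sin(np.linspace(0, 3, 50))
models = np.stack(
    [truth + 0.05 * rng.standard_normal(50) for _ in range(4)]
    + [truth + 0.5 * rng.standard_normal(50) for _ in range(3)]
)

phi0 = models.mean(axis=0)  # constant (mean) term, analogous to the phi_0 term
U, s, Vt = np.linalg.svd(models - phi0, full_matrices=False)

# Fraction of variance carried by each principal direction; keeping only the
# leading components removes the redundancy among near-duplicate models
explained = s**2 / (s**2).sum()
print(np.round(explained[:3], 3))
```

Mixing weights would then be estimated over the retained components rather than over the raw (partly redundant) models, which is the benefit the paper reports for prediction accuracy and uncertainty quantification.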
DOI: https://doi.org/10.1103/PhysRevResearch.6.033266
Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
Published by the American Physical Society
Vol. 6, Iss. 3 — September - November 2024
Schematic representation of the PCA approach for the model combination. Here, two model classes consist of two and five models, respectively, and are represented as vectors in a space R^n. This collection of seven models is approximated in the affine space (gray rectangle) spanned by the constant ϕ_0 term (dashed light-gray arrow) and the two principal components ϕ_1 and ϕ_2 (dashed black arrows).
Illustration of S3 of Table 1. (a) Forecasts of the binding energy per nucleon produced by 19 different models: one perfect model (black), three good models (blue), five intermediate models (green), and ten bad models (red). The spread of the results comes from the noise terms added. The inset shows the projection ν_j^(k) defined in Eq. (7) for each of the 19 models onto the first two principal components, clearly identifying the existence of three model classes, with the perfect model and three good models being nearly aligned. (b) Decay of the singular values s_j. The inset shows the evolution of the RMSE (15) for the training (cyan blue squares), validation (yellow stars), and testing (dark red circles) datasets as the number of principal components kept in the expansion (8) is increased (zero corresponds to ϕ_0). The BMC + PCA results are marked by solid lines. The dashed lines show the RMSE obtained when combining all 19 models without projecting on principal components (pure BMC), which shows signs of overfitting: lower RMSE, training dataset; higher RMSE, testing set.
Case study II results. (a) Training (squares), validation (stars), and testing (circles) datasets of binding energies of 629 even-even nuclei used in this paper. The stable isotopes are marked by small black squares. (b) Projections ν_1 and ν_2 of 15 realistic models of the nuclear binding energy into the first two principal components. This representation allows us to visualize intermodel relationships. (c) Similar to Fig. 2 but for the realistic mass models. The colors and symbols follow the same convention as in panel (a), with solid lines representing the BMC + PCA model of Eq. (8) and dashed lines representing the BMC of Eq. (3) with f_0 = 0. (d) Distribution of the weights ω_k for the individual models in the expansion (9) in the unconstrained (top) and simplex-constrained (bottom) settings [see Eq. (2)]. The vertical error bars represent a 95% region obtained from the sampled posterior.
(a) Predictive posterior distribution for the binding energy per nucleon of the Sn isotopes. The mean prediction and 95% credible interval of the unconstrained combined model is shown in purple, while the simplex-constrained (simplex) combined model is shown in khaki. The inset shows the detail of the plot for N = 54, 56, and 58. (b) ECP for unconstrained and simplex-constrained variants for training (blue), validation (yellow), and testing (red) datasets. The diagonal black line shows a reference of what perfect statistical coverage would entail, with points above it being conservative, and those below being overconfident.
It is not necessary to obtain permission to reuse this article or its components as it is available under the terms of the Creative Commons Attribution 4.0 International license. This license permits unrestricted use, distribution, and reproduction in any medium, provided attribution to the author(s) and the published article's title, journal citation, and DOI are maintained. Please note that some figures may have been included with permission from other third parties. It is your responsibility to obtain the proper permission from the rights holder directly for these figures.
Objectives This cohort study reported descriptive statistics in athletes engaged in Summer and Winter Olympic sports who sustained a sport-related concussion (SRC) and assessed the impact of access to multidisciplinary care and injury modifiers on recovery.
Methods 133 athletes formed two subgroups treated in a Canadian sport institute medical clinic: earlier (≤7 days) and late (≥8 days) access. Descriptive sample characteristics were reported and unrestricted return to sport (RTS) was evaluated based on access groups as well as injury modifiers. Correlations were assessed between time to RTS, history of concussions, the number of specialist consults and initial symptoms.
Results 160 SRC (median age 19.1 years; female=86 (54%); male=74 (46%)) were observed with a median (IQR) RTS duration of 34.0 (21.0–63.0) days. Median days to care access differed between the early (1; n_SRC = 77) and late (20; n_SRC = 83) groups, resulting in median (IQR) RTS durations of 26.0 (17.0–38.5) and 45.0 (27.5–84.5) days, respectively (p<0.001). Initial symptoms displayed a meaningful correlation with prognosis in this study (p<0.05), and female athletes (52 days (95% CI 42 to 101)) had longer recovery trajectories than male athletes (39 days (95% CI 31 to 65)) in the late access group (p<0.05).
Conclusions Olympic athletes in this cohort experienced an RTS time frame of about a month, partly due to limited access to multidisciplinary care and resources. Earlier access to care shortened the RTS delay. Greater initial symptoms and female sex in the late access group were meaningful modifiers of a longer RTS.
Data are available on reasonable request. Due to the confidential nature of the dataset, it will be shared through a controlled access repository and made available on specific and reasonable requests.
https://doi.org/10.1136/bjsports-2024-108211
Most data regarding the impact of sport-related concussion (SRC) guidelines on return to sport (RTS) are derived from collegiate or recreational athletes. In these groups, time to RTS has steadily increased in the literature since 2005, coinciding with the evolution of RTS guidelines. However, current evidence suggests that earlier access to care may accelerate recovery and RTS time frames.
This study reports epidemiological data on the occurrence of SRC in athletes from several Summer and Winter Olympic sports with either early or late access to multidisciplinary care. We found the median time to RTS for Olympic athletes with an SRC was 34.0 days which is longer than that reported in other athletic groups such as professional or collegiate athletes. Time to RTS was reduced by prompt access to multidisciplinary care following SRC, and sex-influenced recovery in the late access group with female athletes having a longer RTS timeline. Greater initial symptoms, but not prior concussion history, were also associated with a longer time to RTS.
Considerable differences exist in access to care for athletes engaged in Olympic sports, which impact their recovery. In this cohort, several concussions occurred during international competitions where athletes are confronted with poor access to organised healthcare. Pathways for prompt access to multidisciplinary care should be considered by healthcare authorities, especially for athletes who travel internationally and may not have the guidance or financial resources to access recommended care.
After two decades of consensus statements, sport-related concussion (SRC) remains a high focus of research, with incidence ranging from 0.1 to 21.5 SRC per 1000 athlete exposures, varying according to age, sex, sport and level of competition. 1 2 Evidence-based guidelines have been proposed by experts to improve its identification and management, such as those from the Concussion in Sport Group. 3 Notably, they recommend specific strategies to improve SRC detection and monitoring such as immediate removal, 4 prompt access to healthcare providers, 5 evidence-based interventions 6 and multidisciplinary team approaches. 7 It is believed that these guidelines contribute to improving the early identification and management of athletes with an SRC, thereby potentially mitigating its long-term consequences.
Nevertheless, evidence regarding the impact of SRC guidelines implementation remains remarkably limited, especially within high-performance sport domains. In fact, most reported SRC data focus on adolescent student-athletes, collegiate and sometimes professional athletes in the USA but often neglect Olympians. 1 2 8–11 Athletes engaged in Olympic sports, often referred to as elite amateurs, are typically classified among the highest performers in elite sport, alongside professional athletes. 12 13 They train year-round and uniquely compete regularly on the international stage in sports that often lack professional leagues and rely on highly variable resources and facilities, mostly dependent on winning medals. 14 Unlike professional athletes, Olympians do not have access to large financial rewards. Although some Olympians work or study in addition to their intensive sports practice, they can devote more time to full-time sports practice compared with collegiate athletes. Competition calendars in Olympians differ from collegiate athletes, with periodic international competitions (eg, World Cups, World Championships) throughout the whole year rather than regular domestic competitions within a shorter season (eg, semester). Olympians outclass most collegiate athletes, and only the best collegiate athletes will have the chance to become Olympians and/or professionals. 12 13 15 In Canada, a primary reason for limited SRC data in Olympic sports is that the Canadian Olympic and Paralympic Sports Institute (COPSI) network only adopted official guidelines in 2018 to standardise care for athletes’ SRC nationwide. 16 17 The second reason could be the absence of a centralised medical structure and surveillance systems, identified as key factors contributing to the under-reporting and underdiagnosis of athletes with an SRC. 18
Among the available evidence on the evolution of SRC management, a 2023 systematic review and meta-analysis in athletic populations including children, adolescents and adults indicated that a full return to sport (RTS) could take up to a month but is estimated to require 19.8 days on average (15.4 days in adults), as opposed to the initial expectation of approximately 10.0 days based on studies published prior to 2005. 19 In comparison, studies focusing strictly on American collegiate athletes report median times to RTS of 16 days. 9 20 21 Notably, a recent study of military cadets reported an even longer return to duty times of 29.4 days on average, attributed to poorer access to care and fewer incentives to return to play compared with elite sports. 22 In addition, several modifiers have also been identified as influencing the time to RTS, such as the history of concussions, type of sport, sex, past medical problems (eg, preinjury modifiers), as well as the initial number of symptoms and their severity (eg, postinjury modifiers). 20 22 The evidence regarding the potential influence of sex on the time to RTS has yielded mixed findings in this area. 23–25 In fact, females are typically under-represented in SRC research, highlighting the need for additional studies that incorporate more balanced sample representation across sexes and control for known sources of bias. 26 Interestingly, a recent Concussion Assessment, Research and Education Consortium study, which included a high representation of concussed female athletes (615 out of 1071 patients), revealed no meaningful differences in RTS between females and males (13.5 and 11.8 days, respectively). 27 Importantly, findings in the sporting population suggested that earlier initiation of clinical care is linked to shorter recovery after concussion. 
5 28 However, these factors affecting the time to RTS require a more thorough investigation, especially among athletes engaged in Olympic sports who may or may not have equal access to prompt, high-quality care.
Therefore, the primary objective of this study was to provide descriptive statistics among athletes with SRC engaged in both Summer and Winter Olympic sport programmes over a quadrennial, and to assess the influence of recommended guidelines of the COPSI network and the fifth International Consensus Conference on Concussion in Sport on the duration of RTS performance. 16 17 Building on available evidence, the international schedule constraints, variability in resources 14 and high-performance expectation among this elite population, 22 prolonged durations for RTS, compared with what is typically reported (eg, 16.0 or 15.4 days), were hypothesised in Olympians. 3 19 The secondary objective was to more specifically evaluate the impact of access to multidisciplinary care and injury modifiers on the time to RTS. Based on current evidence, 5 7 29 30 the hypothesis was formulated that athletes with earlier multidisciplinary access would experience a faster RTS. Regarding injury modifiers, it was expected that female and male athletes would show similar time to RTS despite presenting sex-specific characteristics of SRC. 31 The history of concussions, the severity of initial symptoms and the number of specialist consults were expected to be positively correlated to the time to RTS. 20 32
A total of 133 athletes (F=72; M=61; mean age±SD: 20.7±4.9 years old) who received medical care at the Institut national du sport du Québec, a COPSI training centre set up with a medical clinic, were included in this cohort study with retrospective analysis. They participated in 23 different Summer and Winter Olympic sports which were classified into six categories: team (soccer, water polo), middle distance/power (rowing, swimming), speed/strength (alpine skiing, para alpine skiing, short and long track speed skating), precision/skill-dependent (artistic swimming, diving, equestrian, figure skating, gymnastics, skateboard, synchronised skating, trampoline) and combat/weight-making (boxing, fencing, judo, para judo, karate, para taekwondo, wrestling) sports. 13 This sample consists of two distinct groups: (1) early access group in which athletes had access to a medical integrated support team of multidisciplinary experts within 7 days following their SRC and (2) late access group composed of athletes who had access to a medical integrated support team of multidisciplinary experts eight or more days following their SRC. 5 30 Inclusion criteria for the study were participation in a national or international-level sports programme 13 and having sustained at least one SRC diagnosed by an authorised healthcare practitioner (eg, physician and/or physiotherapist).
The institute clinic provides multidisciplinary services for care of patients with SRC including a broad range of recommended tests for concussion monitoring ( table 1 ). The typical pathway for the athletes consisted of an initial visit to either a sports medicine physician or their team sports therapist. A clinical diagnosis of SRC was then confirmed by a sports medicine physician, and referral for the required multidisciplinary assessments ensued based on the patient’s signs and symptoms. Rehabilitation progression was based on the evaluation of exercise tolerance, 33 priority to return to cognitive tasks and additional targeted support based on clinical findings of a cervical, visual or vestibular nature. 17 The expert team worked in an integrated manner with the athlete and their coaching staff for the rehabilitation phase, including regular round tables and ongoing communication. 34 For some athletes, access to recommended care was fee based, without a priori agreements with a third party payer (eg, National Sports Federation).
Main evaluations performed to guide the return to sport following sport-related concussion
Data were collected at the medical clinic using a standardised injury surveillance form based on International Olympic Committee guidelines. 35 All injury characteristics were extracted from the central injury database between 1 July 2018 and 31 July 2022. This period corresponds to a Winter Olympic sports quadrennial but also covers 3 years for Summer Olympic sports due to the postponing of the Tokyo 2020 Olympic Games. Therefore, the observation period includes a typical volume of competitions across sports and minimises differences in exposure based on major sports competition schedules. The information extracted from the database included: participant ID, sex, date of birth, sport, date of injury, type of injury, date of their visit at the clinic, clearance date of unrestricted RTS (eg, defined as step 6 of the RTS strategy with a return to normal gameplay including competitions), the number and type of specialist consults, mechanism of injury (eg, fall, hit), environment where the injury took place (eg, training, competition), history of concussions, history of modifiers (eg, previous head injury, migraines, learning disability, attention deficit disorder or attention deficit/hyperactivity disorder, depression, anxiety, psychotic disorder), as well as the number of symptoms and the total severity score from the first Sport Concussion Assessment Tool 5 (SCAT5) assessment following SRC. 17
Following a Shapiro-Wilk test, medians, IQR and non-parametric tests were used for the analyses because none of the variables in the dataset was normally distributed (all p<0.001). The skewness was introduced by the presence of individuals who required lengthy recovery periods. One participant was removed from the analysis because their time to consult with the multidisciplinary team was extremely delayed (>1 year).
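The descriptive approach above (median with IQR after a normality screen) can be sketched as follows. The paper's analyses were run in R; this is a minimal Python equivalent using only the standard library, with entirely hypothetical recovery durations. In practice a Shapiro-Wilk test would come from a statistics package (eg, `scipy.stats.shapiro`), which is omitted here.

```python
import statistics

def median_iqr(values):
    """Return (median, Q1, Q3) using Python's default 'exclusive' quartile method.
    Skewed data such as recovery durations are summarised this way rather than
    with mean and SD."""
    q1, med, q3 = statistics.quantiles(values, n=4)
    return med, q1, q3

# Hypothetical times to RTS in days, right-skewed by a few long recoveries
days = [14, 18, 21, 26, 30, 34, 41, 55, 63, 120]
med, q1, q3 = median_iqr(days)
print(f"median {med} days (IQR {q1}-{q3})")
```

Note that different quartile conventions (exclusive vs inclusive) shift the IQR slightly; reports should state which was used.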
Descriptive statistics were used to describe the participant’s demographics, SRC characteristics and risk factors in the total sample. Estimated incidences of SRC were also reported for seven resident sports at the institute for which it was possible to quantify a detailed estimate of training volume based on the annual number of training and competition hours as well as the number of athletes in each sport.
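An incidence estimate of the kind described above (events per 1000 athlete-hours of training and competition) can be sketched as below. This is an illustrative Python fragment with hypothetical numbers, using a simple normal approximation to the Poisson standard error for the 95% CI; the paper does not specify which interval method was used.

```python
import math

def incidence_per_1000h(events, athletes, hours_per_athlete):
    """Estimated incidence per 1000 athlete-hours with an approximate 95% CI
    (normal approximation: sqrt(events) as the Poisson SE on the count)."""
    exposure = athletes * hours_per_athlete  # total athlete-hours of exposure
    rate = events / exposure * 1000
    se = math.sqrt(events) / exposure * 1000
    lo, hi = max(0.0, rate - 1.96 * se), rate + 1.96 * se
    return rate, lo, hi

# Hypothetical sport: 12 SRCs among 30 athletes training ~850 h/year over 4 years
rate, lo, hi = incidence_per_1000h(12, 30, 850 * 4)
print(f"{rate:.2f}/1000 h (95% CI {lo:.2f} to {hi:.2f})")
```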
To assess if access to multidisciplinary care modified the time to RTS, we compared time to RTS between early and late access groups using a method based on median differences described elsewhere. 36 Wilcoxon rank sum tests were also performed to make between-group comparisons on single variables of age, time to first consult, the number of specialists consulted and medical visits. Fisher’s exact tests were used to compare count data between groups on variables of sex, history of concussion, time since the previous concussion, presence of injury modifiers, environment and mechanism of injury. Bonferroni corrections were applied for multiple comparisons where meaningful differences were found.
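A median-difference comparison with a confidence interval, as used above, can be sketched with a generic bootstrap. The specific method the paper cites (reference 36) is not reproduced here; this is a seeded percentile-bootstrap sketch on hypothetical group data, standard library only.

```python
import random
import statistics

def median_diff_ci(a, b, n_boot=5000, seed=1):
    """Point estimate and 95% percentile-bootstrap CI for median(a) - median(b).
    A generic sketch; the paper cites a specific median-difference method."""
    rng = random.Random(seed)  # seeded for reproducibility
    point = statistics.median(a) - statistics.median(b)
    diffs = []
    for _ in range(n_boot):
        ra = [rng.choice(a) for _ in a]  # resample each group with replacement
        rb = [rng.choice(b) for _ in b]
        diffs.append(statistics.median(ra) - statistics.median(rb))
    diffs.sort()
    return point, diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]

# Hypothetical times to RTS (days) for late vs early access groups
late = [27, 33, 45, 52, 60, 84, 90, 116]
early = [17, 21, 26, 30, 34, 38, 41, 44]
point, lo, hi = median_diff_ci(late, early)
```

With such small groups the bootstrap interval is wide; rank-based tests (Wilcoxon, Fisher's exact) would come from a statistics package rather than being hand-rolled.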
To assess if injury modifiers modified time to RTS in the total sample, we compared time to RTS between sexes, history of concussions, time since previous concussion or other injury modifiers using a method based on median differences described elsewhere. 36 Kaplan-Meier curves were drawn to illustrate time to RTS differences between sexes (origin and start time: date of injury; end time: clearance date of unrestricted RTS). Trajectories were then assessed for statistical differences using Cox proportional hazards model. Wilcoxon rank sum tests were employed for comparing the total number of symptoms and severity scores on the SCAT5. The association of multilevel variables on return to play duration was evaluated in the total sample with Kruskal-Wallis rank tests for environment, mechanism of injury, history of concussions and time since previous concussion. For all subsequent analyses of correlations between SCAT5 results and secondary variables, only data obtained from SCAT5 assessments within the acute phase of injury (≤72 hours) were considered (n=65 SRC episodes in the early access group). 37 Spearman rank correlations were estimated between RTS duration, history of concussions, number of specialist consults and total number of SCAT5 symptoms or total symptom severity. All statistical tests were performed using RStudio (R V.4.1.0, The R Foundation for Statistical Computing). The significance level was set to p<0.05.
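The Kaplan-Meier curves described above rest on the product-limit estimator, sketched below in plain Python (the paper used R). The data here are hypothetical; a Cox proportional hazards fit would additionally require a package such as `lifelines` in Python or `survival` in R.

```python
def kaplan_meier(durations, events):
    """Product-limit survival estimate.
    durations: time to RTS or censoring; events: 1 = cleared for RTS, 0 = censored.
    Returns (time, survival probability) pairs at each event time."""
    pairs = sorted(zip(durations, events))
    n_at_risk = len(pairs)
    surv, curve = 1.0, []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        d = sum(e for time, e in pairs if time == t)        # events at t
        removed = sum(1 for time, _ in pairs if time == t)  # events + censored at t
        if d:
            surv *= (n_at_risk - d) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= removed  # censored subjects leave the risk set after t
        i += removed
    return curve

# Hypothetical: four athletes, one still unrecovered (censored) at day 2
print(kaplan_meier([1, 2, 2, 3], [1, 1, 0, 1]))
```

Here "survival" is remaining unrecovered, so a curve that drops faster indicates quicker RTS; comparing female and male curves as in the paper amounts to fitting this estimator per group.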
The study population is representative of the Canadian athletic population in terms of age, gender and demographics, and includes a balanced representation of female and male athletes. The study team consists of investigators from different disciplines and countries, but with a predominantly white composition and under-representation of other ethnic groups. Our study population encompasses data from the Institut national du sport du Québec, covering individuals of all genders, ethnicities and geographical regions across Canada.
The patients or the public were not involved in the design, conduct, reporting or dissemination plans of our research.
During the 4-year period covered by this retrospective chart review, a total of 160 SRC episodes were recorded in 132 athletes with a median (IQR) age of 19.1 (17.8–22.2) years ( table 2 ). Thirteen female and 10 male athletes had multiple SRC episodes during this time. The sample included a relatively balanced number of female (53.8%) and male (46.2%) athletes with SRC. Sixty per cent of the sample reported a history of concussion, with 35.0% reporting having experienced more than two episodes. However, most of these concussions had occurred more than 1 year before the SRC for which they were being treated. Within this sample, 33.1% of participants reported a history of injury modifiers. Importantly, the median (IQR) time to first clinic consult was 10.0 (1.0–20.0) days and the median (IQR) time to RTS was 34.0 (21.0–63.0) days in this sample ( table 3 ). The majority of SRCs occurred during training (56.3%) rather than competition (33.1%) and were mainly due to a fall (63.7%) or a hit (31.3%). The median (IQR) number of follow-up consultations and specialists consulted after the SRC were, respectively, 9 (5.0–14.3) and 3 (2.0–4.0).
Participants demographics
Sport-related concussion characteristics
Among the seven sports of the total sample (n=89 SRC), the estimated incidence of athletes with SRC was highest in short-track speed skating (0.47/1000 hours; 95% CI 0.3 to 0.6) and lower in boxing (0.24; 95% CI 0.0 to 0.5), trampoline (0.16; 95% CI 0.0 to 0.5), water polo (0.13; 95% CI 0.1 to 0.2), judo (0.11; 95% CI 0.1 to 0.2), artistic swimming (0.09; 95% CI 0.0 to 0.2) and diving (0.06; 95% CI 0.0 to 0.1) per 1000 hours ( online supplemental material ). Furthermore, most athletes sustained an SRC in training (66.5%; 95% CI 41.0 to 92.0) rather than competition (26.0%; 95% CI 0.0 to 55.0), except for judo athletes (20.0% (95% CI 4.1 to 62.0) and 80.0% (95% CI 38.0 to 96.0), respectively). Falls were the most common injury mechanism in speed skating, trampoline and judo, while hits were the most common injury mechanism in boxing, water polo, artistic swimming and diving.
Access to care.
The median difference in time to RTS was 19 days (95% CI 9.3 to 28.7; p<0.001) between the early (26 (IQR 17.0–38.5) days) and late (45 (IQR 27.5–84.5) days) access groups ( table 3 ; figure 1 ). Importantly, the distribution of SRC environments differed between the two groups (p=0.008). The post hoc analysis demonstrated a meaningful difference in the distribution of SRC in training and competition environments between groups (p=0.029) but not for the other comparisons. There was a meaningful difference between the groups in time to first consult (p<0.001; 95% CI −23.0 to −15.0), but no meaningful differences between groups in median age (p=0.176; 95% CI −0.3 to 1.6), sex distribution (p=0.341; 95% CI 0.7 to 2.8), concussion history (p=0.210), time since last concussion (p=0.866), mechanisms of SRC (p=0.412), the presence of modifiers (p=0.313; 95% CI 0.3 to 1.4) and the number of consulted specialists (p=0.368; 95% CI −5.4 to 1.0) or medical visits (p=0.162; 95% CI −1.0 to 3.0).
Time to return to sport following sport-related concussion as a function of group’s access to care and sex. Outliers: below=Q1−1.5×IQR; above=Q3+1.5×IQR.
The median difference in time to RTS was 6.5 days (95% CI −19.3 to 5.3; p=0.263; figure 1 ) between female (37.5 (IQR 22.0–65.3) days) and male (31.0 (IQR 20.0–48.0) days) athletes. Survival analyses highlighted an increased hazard of a longer recovery trajectory in female compared with male athletes (HR 1.4; 95% CI 1.4 to 0.7; p=0.052; figure 2A ), which was mainly driven by the late (HR 1.8; 95% CI 1.8 to 0.6; p=0.019; figure 2C ) rather than the early (HR 1.1; 95% CI 1.1 to 0.9; p=0.700; figure 2B ) access group. Interestingly, a greater number of female athletes (n=15) required longer than 100 days for RTS as opposed to the male athletes (n=6). There were no meaningful differences between sexes for the total number of symptoms recorded on the SCAT5 (p=0.539; 95% CI −1.0 to 2.0) or the total symptom severity score (p=0.989; 95% CI −5.0 to 5.0).
Time analysis of sex differences in the time to return to sport following sport-related concussion in the (A) total sample, as well as (B) early, and (C) late groups using survival curves with 95% confidence bands and tables of time-specific number of patients at risk (censoring proportion: 0%).
SRC modifiers are presented in table 2 , and their influence on time to RTS is shown in table 4 . The median difference in time to RTS was 1.5 days (95% CI −10.6 to 13.6; p=0.807) between athletes with no and one episode of previous concussion, 3.5 days (95% CI −13.9 to 19.9; p=0.728) between athletes with no and two or more episodes of previous concussion, and 2 days (95% CI −12.4 to 15.4; p=0.832) between athletes with one and two or more episodes of previous concussion. The history of concussions (none, one, two or more) had no meaningful impact on the time to RTS (p=0.471). The median difference in time to RTS was 4.5 days (95% CI −21.0 to 30.0; p=0.729) between athletes with no episode and one episode of concussion in the previous year, 2 days (95% CI −10.0 to 14.0; p=0.744) between athletes with no episode and one episode of concussion more than 1 year ago, and 2.5 days (95% CI −27.7 to 22.7; p=0.846) between athletes with an episode of concussion in the previous year and more than 1 year ago. Time since the most recent concussion did not change the time to RTS (p=0.740). The longest time to RTS was observed in the late access group in which athletes had a concussion in the previous year, with a very large spread of durations (65.0 (IQR 33.0–116.5) days). The median difference in time to RTS was 3 days (95% CI −13.1 to 7.1; p=0.561) between athletes with and without other injury modifiers. The history of other injury modifiers had no meaningful influence on the time to RTS (95% CI −6.0 to 11.0; p=0.579).
Preinjury modifiers of time to return to sport following SRC
Positive associations were observed between the time to RTS and the number of initial symptoms (r=0.3; p=0.010; 95% CI 0.1 to 0.5) or initial severity score (r=0.3; p=0.008; 95% CI 0.1 to 0.5) from the SCAT5. The associations were not meaningful between the number of specialist consultations and the initial number of symptoms (r=−0.1; p=0.633; 95% CI −0.3 to 0.2) or initial severity score (r=−0.1; p=0.432; 95% CI −0.3 to 0.2). Anecdotally, most reported symptoms following SRC were ‘headache’ (86.2%) and ‘pressure in the head’ (80.0%), followed by ‘fatigue’ (72.3%), ‘neck pain’ (70.8%) and ‘not feeling right’ (67.7%; online supplemental material ).
This study is the first to report descriptive data on athletes with SRC collected across several sports during an Olympic quadrennial, including athletes who received the most recent evidence-based care at the time of data collection. Primarily, results indicate that the time to RTS in athletes engaged in Summer and Winter Olympic sports may require a median (IQR) of 34.0 (21.0–63.0) days. Importantly, findings demonstrated that athletes with earlier (≤7 days) access to multidisciplinary concussion care showed faster RTS compared with those with late access. Time to RTS exhibited large variability where sex had a meaningful influence on the recovery pathway in the late access group. Initial symptoms, but not history of concussion, were correlated with prognosis in this sample. The main reported symptoms were consistent with previous studies. 38 39
This study provides descriptive data on the impact of SRC monitoring programmes on recovery in elite athletes engaged in Olympic sports. As hypothesised, the median time to RTS found in this study (ie, 34.0 days) was about three times longer than those found in reports from before 2005, and 2 weeks longer than the typical median values (ie, 19.8 days) recently reported across athletic levels including youth (high heterogeneity, I²=99.3%). 19 These durations were also twice as long as the median unrestricted time to RTS observed among American collegiate athletes, which averages around 16 days. 9 20 21 However, they were more closely aligned with findings from collegiate athletes with slow recovery (ie, 34.7 days) and evidence from military cadets with poor access to care, where return to duty duration was 29.4 days. 8 22 Several reasons could explain such an extended time to RTS, but the most likely seems to be the diversity in access to multidisciplinary services among these sports (median 10.0 days (IQR 1–20)), well beyond the delays experienced by collegiate athletes, for example (median 0.0 days (IQR 0–2)). 40 In the total sample, the delays to first consult with the multidisciplinary clinic were notably driven by the group with late access, whose athletes sustained more SRC during international competition. One of the issues for athletes engaged in Olympic sports is that they travel abroad year-round for competitions, in contrast with collegiate athletes who compete domestically. These circumstances likely make access to quality care highly variable and the follow-up of care less centralised. Also, access to resources among these sports is highly variable (eg, medal-dependent) 14 and at the discretion of the sport’s leadership (eg, sport federation), who may decide to allocate more or fewer resources to concussion management considering the relatively low incidence of this injury.
Another explanation for the longer recovery times in these athletes could be the lack of financial incentives to return to play faster, which are less prevalent among Olympic sports compared with professionals. However, the stakes of performance and return to play are still very high among these athletes.
Additionally, it is plausible that studies vary in their outcomes because of shifting operational definitions such as resolution of symptoms, return to activities, graduated return to play or unrestricted RTS. 19 40 It is understood that resolution of symptoms may occur much earlier than return to preinjury performance levels. Finally, an aspect that has been little studied to date is the influence of the sport’s demands on RTS. For example, acrobatic sports requiring precision/technical skills such as figure skating, trampoline and diving, which involve high visuospatial and vestibular demands, 41 might require more time to recover or provoke symptoms for longer. Anecdotally, athletes who experienced a long time to RTS (>100 days) in this sample were mostly from precision/skill-dependent sports. Sport demands should be further considered as an injury modifier. More epidemiological reports that consider the latest guidelines are therefore necessary to gain a better understanding of the true time to RTS and the impact of SRC in Olympians.
In this study, athletes who obtained early access to multidisciplinary care after SRC recovered faster than those with late access. This result aligns with findings showing that delayed access to a healthcare practitioner delays recovery, 19 including previous evidence in a sample of patients from a sports medicine clinic (ages 12–22) indicating that a delayed first clinical visit (eg, 8–20 days) was associated with a 5.8 times greater likelihood of a recovery longer than 30 days. 5 A prompt multidisciplinary approach for patients with SRC is suggested to yield greater effectiveness than usual care 3 6 17 and is currently being evaluated in a randomised controlled trial. 42 Notably, early physical exercise and prescribed exercise (eg, 48 hours postinjury) are effective in improving recovery compared with strict rest or stretching. 43 44 In fact, preclinical and clinical studies have shown that exercise has the potential to improve neurotransmission, neuroplasticity and cerebral blood flow, which supports the notion that a physically trained brain enhances recovery. 45 46 Prompt access to specialised healthcare professionals can be challenging in some contexts (eg, during international travel), and the cost of accessing medical care privately may prove further prohibitive. This barrier to recovery should be a priority for stakeholders in Olympic sports and given more consideration by health authorities.
The estimated incidences of SRC were in the lower range compared with what is reported in other elite sport populations. 1 2 However, the burden of injury remained high for these sports, and the financial resources as well as the expertise required to facilitate athletes’ rehabilitation were considerable (median number of consultations: 9.0). Notably, the current standard of public healthcare in Canada does not subsidise the level of support recommended following SRC as first-line care, and the financial subsidisation of this recommended care within each federation is highly dependent on the available funding, varying significantly between sports. 14 Therefore, the ongoing efforts to improve education, prevention and early recognition, modification of rules to make the environments safer and multidisciplinary care access for athletes remain crucial. 7
This unique study provides multisport characteristics following the evolution of concussion guidelines in Summer and Winter Olympic sports in North America. Notably, it features a balance between the number of female and male athletes, allowing the analysis of sex differences. 23 26 In a previous review of 171 studies informing consensus statements, samples were mostly composed of more than 80% male participants, and more than 40% of these studies did not include female participants at all. 26 This study also included multiple non-traditional sports typically not encompassed in SRC research, a feature previously identified as a key requirement of future epidemiological research. 47
However, it must be acknowledged that potential confounding factors could influence the results. For example, the number of SRC detected during the study period does not account for potentially unreported concussions. Nevertheless, the number of unreported concussions should be minimal because these athletes are supervised both in training and in competition by medical staff. Next, the sport types were heterogeneous, with inconsistent risk of head impacts and inconsistent sport demands, which might influence recovery. Furthermore, the number of participants and the sex distribution in each sport were uneven, with short-track speed skaters representing a large portion of the overall sample (32.5%), for example. Additionally, the number of participants with specific modifiers was too small in the current sample to conclude whether the presence of precise characteristics (eg, history of concussion) impacted the time to RTS. Also, the group with late access was more likely to consist of athletes who sought specialised care for persistent symptoms. These complex cases are often expected to require additional time to recover. 48 Furthermore, athletes in the late group may have sought support outside of the institute medical clinic, without a coordinated multidisciplinary approach. Therefore, the estimation of clinical consultations was tentative for this group and may represent a potential confounding factor in this study.
This is the first study to provide evidence of the prevalence of athletes with SRC and modifiers of recovery in both female and male elite-level athletes across a variety of Summer and Winter Olympic sports. There was high variability in access to care in this group, and the median (IQR) time to RTS following SRC was 34.0 (21.0–63.0) days. Athletes with earlier access to multidisciplinary care took nearly half the time to RTS compared with those with late access. Sex had a meaningful influence on the recovery pathway in the late access group. Initial symptom number and severity score, but not history of concussion, were meaningful modifiers of recovery. Injury surveillance programmes targeting national sport organisations should be prioritised to help evaluate the efficacy of recommended injury monitoring programmes and to help athletes engaged in Olympic sports, who travel internationally throughout the year, have better access to care. 35 49
Patient consent for publication.
Not applicable.
This study involves human participants and was approved by the ethics board of Université de Montréal (certificate #2023-4052). Participants gave informed consent to participate in the study before taking part.
The authors would like to thank the members of the concussion interdisciplinary clinic of the Institut national du sport du Québec for collecting the data and for their unconditional support to the athletes.
Supplementary data.
This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.
X @ThomasRomeas
Correction notice This article has been corrected since it published Online First. The ORCID details have been added for Dr Croteau.
Contributors TR, FC and SL were involved in planning, conducting and reporting the work. François Bieuzen and Magdalena Wojtowicz critically reviewed the manuscript. TR is guarantor.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.