• Open access
  • Published: 27 May 2020

How to use and assess qualitative research methods

  • Loraine Busetto   ORCID: orcid.org/0000-0002-9228-7875 1 ,
  • Wolfgang Wick 1 , 2 &
  • Christoph Gumbinger 1  

Neurological Research and Practice, volume 2, Article number: 14 (2020)


This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.

The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.

What is qualitative research?

Qualitative research is defined as “the study of the nature of phenomena”, including “their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived”, but excluding “their range, frequency and place in an objectively determined chain of cause and effect” [ 1 ]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in the form of words rather than numbers [ 2 ].

Why conduct qualitative research?

Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.

While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in " research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...) " [ 4 ]. This focus on quantitative research and specifically randomised controlled trials (RCT) is visible in the idea of a hierarchy of research evidence which assumes that some research designs are objectively better than others, and that choosing a "lesser" design is only acceptable when the better ones are not practically or ethically feasible [ 5 , 6 ]. Others, however, argue that an objective hierarchy does not exist, and that, instead, the research design and methods should be chosen to fit the specific research question at hand – "questions before methods" [ 2 , 7 , 8 , 9 ]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as " a complex, multicomponent intervention – essentially a process of social change" susceptible to a range of different context factors including leadership or organisation history. According to him, "[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect" [ 8 ] . Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods , including qualitative ones, which for "these specific applications, (...) are not compromises in learning how to improve; they are superior" [ 8 ].

Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 , 10 , 11 , 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.

How to conduct qualitative research?

Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [ 13 , 14 ]. As Fossey puts it: “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [ 15 ]. The researcher can make educated decisions with regard to the choice of method, how they are implemented, and to which and how many units they are applied [ 13 ]. As shown in Fig.  1 , this can involve several back-and-forth steps between data collection and analysis where new insights and experiences can lead to adaptation and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well-documented.

figure 1

Iterative research process

While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].

Data collection

The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].

Document study

Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.

Observations

Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].

Semi-structured interviews

Hijmans & Kuyper describe qualitative interviews as “an exchange with an informal character, a conversation with a goal” [ 19 ]. Interviews are used to gain insights into a person’s subjective experiences, opinions and motivations – as opposed to facts or behaviours [ 13 ]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [ 2 , 13 ]. Semi-structured interviews are characterized by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [ 19 ]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [ 20 ]. Across interviews the focus on the different (blocks of) questions may differ and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer the questions or for concerns about the total length of the interview) [ 20 ]. Qualitative interviews are usually not conducted in written format as it impedes on the interactive component of the method [ 20 ]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing for unexpected topics to emerge and to be taken up by the researcher. This can also help overcome a provider or researcher-centred bias often found in written surveys, which by nature, can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped; but sometimes it is only feasible or acceptable for the interviewer to take written notes [ 14 , 16 , 20 ].

Focus groups

Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and a lesser extent to which each individual may participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.

Choosing the “right” method

As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.

Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, with the aim of identifying possible causes for delay and/or other causes for sub-optimal treatment outcome. As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like.

If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice.

In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet), do they actively disagree with them, or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed. In this case, it is possible to ask those involved to report on their actions – being aware that this is not the same as the actual observation. A senior physician’s or hospital manager’s description of certain situations might differ from that of a nurse or junior physician, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or know to be in a potentially “dangerous” power relationship with them.

Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care or between different physicians) and motivations to the researchers as well as to the focus group members that they might not have been aware of themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example, to make sure that all participants feel safe to disclose sensitive or potentially problematic information or that the discussion is not dominated by (senior) physicians only. The resulting combination of data collection methods is shown in Fig.  2 .

figure 2

Possible combination of data collection methods

Attributions for icons: “Book” by Serhii Smirnov, “Interview” by Adrien Coquet, FR, “Magnifying Glass” by anggun, ID, “Business communication” by Vectors Market; all from the Noun Project

The combination of multiple data sources as described for this example can be referred to as “triangulation”, in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [ 22 , 23 ].

Data analysis

To analyse the data collected through observations, interviews and focus groups, these need to be transcribed into protocols and transcripts (see Fig.  3 ). Interviews and focus groups can be transcribed verbatim, with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded, that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [ 2 , 15 , 23 ]. Jansen describes coding as “connecting the raw data with “theoretical” terms” [ 20 ]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [ 15 , 20 ]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [ 20 ]. The coding process is performed using qualitative data management software, the most common ones being NVivo, MAXQDA and ATLAS.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [ 14 ].
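The coding and retrieval logic that such software supports can be illustrated with a short sketch. The snippet below, with invented documents, excerpts and codes, tags text segments with codes and then retrieves every segment that shares a code across data sources; it is a conceptual illustration only, not a substitute for dedicated software or for the researcher’s interpretive work.

```python
# Minimal sketch of what qualitative coding software does conceptually:
# tag text segments with codes and retrieve all segments per code.
# The sources, excerpts and codes are hypothetical illustrations, not real data.

from collections import defaultdict

# Each coded segment: (source document, text excerpt, assigned codes)
coded_segments = [
    ("SOP_stroke_unit", "Tele-neurology consult must be requested within 10 minutes.",
     ["tele-neurology", "process time"]),
    ("ER_observation_03", "Consultation started late because the video cart was in use.",
     ["tele-neurology", "equipment", "delay"]),
    ("staff_interview_07", "We often skip the video consult at night; no one is available.",
     ["tele-neurology", "staffing", "night shift"]),
]

# Build an index from code -> all segments tagged with it
index = defaultdict(list)
for source, text, codes in coded_segments:
    for code in codes:
        index[code].append((source, text))

# Retrieve every segment coded "tele-neurology", regardless of data source
for source, text in index["tele-neurology"]:
    print(f"[{source}] {text}")
```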

figure 3

From data collection to data analysis

Attributions for icons: see Fig. 2 , also “Speech to text” by Trevor Dsouza, “Field Notes” by Mike O’Brien, US, “Voice Record” by ProSymbols, US, “Inspection” by Made, AU, and “Cloud” by Graphic Tigers; all from the Noun Project

How to report qualitative research?

Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [ 20 ]. Here it is important to support main findings by relevant quotations, which may add information, context, emphasis or real-life examples [ 20 , 23 ]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [ 21 ].

How to combine qualitative with quantitative research?

Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods [ …] within the same study or research program rather than confining the research to one single method” [ 24 ]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [ 1 , 17 , 24 , 25 , 26 ]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed method designs are the convergent parallel design , the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig.  4 .

figure 4

Three common mixed methods designs

In the convergent parallel design, a qualitative study is conducted in parallel to and independently of a quantitative study, and the results of both studies are compared and combined at the stage of interpretation of results. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents’ subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry. In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the results from the quantitative study. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study was then used to understand where and why these occurred, and how they could be improved. In the exploratory sequential design, the qualitative study is carried out first and its results help inform and build the quantitative study in the next step [ 26 ]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative findings on the topics about which dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.
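To make the explanatory sequential logic concrete, the sketch below uses invented registry entries and an assumed 60-minute target to flag cases with long door-to-needle times; in a real study, such flagged cases could then guide the follow-up qualitative work (interviews, observations).

```python
# Sketch of the explanatory sequential idea: use (hypothetical) registry
# data to flag cases with unusually long door-to-needle times, which could
# then be selected for qualitative follow-up. All values are invented.

registry = [  # hypothetical EVT registry entries: (case id, door-to-needle minutes)
    ("case-01", 38), ("case-02", 95), ("case-03", 41),
    ("case-04", 120), ("case-05", 52),
]

threshold = 60  # assumed local target, for illustration only

delayed_cases = [case for case, minutes in registry if minutes > threshold]
print("Select for qualitative follow-up:", delayed_cases)
```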

How to assess qualitative research?

A variety of assessment criteria and lists have been developed for qualitative research, ranging in their focus and comprehensiveness [ 14 , 17 , 27 ]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.

Checklists

Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [ 23 , 28 ]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to such a checklist contributes to an important quality criterion for qualitative research, namely transparency [ 15 , 17 , 23 ].

Reflexivity

While methodological transparency and complete reporting are relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, or the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and population to be researched, this can be limited to professional experience, but it may also include gender, age or ethnicity [ 17 , 27 ]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [ 23 ]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and therefore the reader must be made aware of these details [ 19 ].

Sampling and saturation

The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample, “to see the issue and its meanings from as many angles as possible” [ 1 , 16 , 19 , 20 , 27 ], and to ensure “information-richness” [ 15 ]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – which is called saturation [ 1 , 15 ]. In other words: qualitative data collection finds its end point not a priori, but when the research team determines that saturation has been reached [ 29 , 30 ].
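As a simple illustration of this stopping logic, the sketch below loops between collecting a batch of data and coding it, and stops once a batch yields no new codes. The helper functions are hypothetical placeholders, and the stopping rule is only one pragmatic operationalisation of saturation; in practice, the judgement is made by the research team rather than by a script.

```python
# Illustrative sketch of sampling until saturation. It assumes two hypothetical
# helpers: collect_batch(n), which gathers and transcribes n more units
# (e.g. interviews), and extract_codes(batch), which returns the set of codes
# found in the new material. Real saturation decisions rest with the team.

def sample_until_saturation(collect_batch, extract_codes, batch_size=5):
    seen_codes = set()
    all_data = []
    while True:
        batch = collect_batch(batch_size)         # e.g. five more interviews
        all_data.extend(batch)
        new_codes = extract_codes(batch) - seen_codes
        if not new_codes:                         # no relevant new information
            break                                 # -> saturation reached
        seen_codes |= new_codes                   # keep sampling for missing variants
    return all_data, seen_codes
```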

This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as “ purposive sampling” , in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [ 14 , 20 ]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [ 2 ]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).

Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [ 14 ].

Piloting

Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this is the pilot interview, where different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [ 19 ]. In doing so, the interviewer learns which wording or types of questions work best, or what length of interview works best for patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups, which can also be piloted.

Co-coding

Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process when a common approach must be defined, including the establishment of a useful coding list (or tree) and a common meaning of individual codes [ 23 ]. An initial sub-set or all transcripts can be coded independently by the coders and then compared and consolidated after regular discussions in the research team. This is to make sure that codes are applied consistently to the research data.

Member checking

Member checking, also called respondent validation , refers to the practice of checking back with study respondents to see if the research is in line with their views [ 14 , 27 ]. This can happen after data collection or analysis or when first results are available [ 23 ]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [ 17 ]. Respondents’ feedback on these issues then becomes part of the data collection and analysis [ 27 ].

Stakeholder involvement

In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [ 31 , 32 , 33 ]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [ 34 , 35 ]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [ 32 , 36 ]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.

How not to assess qualitative research

The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.

Protocol adherence

Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.

Sample size

For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [ 1 , 14 , 27 , 37 , 38 , 39 ]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.

Randomisation

While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [ 13 , 27 ]. Relevant disadvantages include the negative impact of an overly large sample size as well as the possibility (or probability) of selecting “quiet, uncooperative or inarticulate individuals” [ 17 ]. Qualitative studies do not use control groups, either.

Interrater reliability, variability and other “objectivity checks”

The concept of “interrater reliability” is sometimes used in qualitative research to assess the extent to which the coding of two co-coders overlaps. However, it is not clear what this measure tells us about the quality of the analysis [ 23 ]. This means that these scores can be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but it is not a requirement. Relatedly, it is not relevant for the quality or “objectivity” of qualitative research to separate those who recruited the study participants and collected and analysed the data. Experience even shows that it might be better to have the same person or team perform all of these tasks [ 20 ]. First, when researchers introduce themselves during recruitment, this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio-recording is transcribed for analysis, the researcher conducting the interviews will usually remember the interviewee and the specific interview situation during data analysis. This might be helpful in providing additional context information for interpretation of data, e.g. on whether something might have been meant as a joke [ 18 ].
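Where such a score is nevertheless reported, intercoder agreement is commonly expressed as Cohen’s kappa, which corrects the observed agreement between two coders for the agreement expected by chance. The sketch below computes it for two hypothetical coders who each assigned one code per segment; as noted above, the resulting number says little by itself about the quality of the analysis.

```python
# Cohen's kappa for two coders who each assigned one code per segment.
# kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and
# p_e the agreement expected by chance. The codes below are hypothetical.

from collections import Counter

coder_a = ["delay", "equipment", "delay", "staffing", "delay", "equipment"]
coder_b = ["delay", "equipment", "staffing", "staffing", "delay", "delay"]

n = len(coder_a)
p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n       # observed agreement

freq_a, freq_b = Counter(coder_a), Counter(coder_b)
labels = set(coder_a) | set(coder_b)
p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)  # chance agreement

kappa = (p_o - p_e) / (1 - p_e)
print(f"Observed agreement: {p_o:.2f}, chance agreement: {p_e:.2f}, kappa: {kappa:.2f}")
```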

Not being quantitative research

Being qualitative research instead of quantitative research should not be used as an assessment criterion if it is used irrespectively of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research. In this case, the same criterion should be applied for quantitative studies without a qualitative component.

Conclusion

The main take-away points of this paper are summarised in Table 1 . We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the “fit” between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.

Availability of data and materials

Not applicable.

Abbreviations

EVT: Endovascular treatment

RCT: Randomised Controlled Trial

SOP: Standard Operating Procedure

SRQR: Standards for Reporting Qualitative Research

References

Philipsen, H., & Vernooij-Dassen, M. (2007). Kwalitatief onderzoek: nuttig, onmisbaar en uitdagend. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk . [Qualitative research: useful, indispensable and challenging. In: Qualitative research: Practical methods for medical practice (pp. 5–12). Houten: Bohn Stafleu van Loghum.


Punch, K. F. (2013). Introduction to social research: Quantitative and qualitative approaches . London: Sage.

Kelly, J., Dwyer, J., Willis, E., & Pekarsky, B. (2014). Travelling to the city for hospital care: Access factors in country aboriginal patient journeys. Australian Journal of Rural Health, 22 (3), 109–113.


Nilsen, P., Ståhl, C., Roback, K., & Cairney, P. (2013). Never the twain shall meet? - a comparison of implementation science and policy implementation research. Implementation Science, 8 (1), 1–12.

Howick J, Chalmers I, Glasziou, P., Greenhalgh, T., Heneghan, C., Liberati, A., Moschetti, I., Phillips, B., & Thornton, H. (2011). The 2011 Oxford CEBM evidence levels of evidence (introductory document) . Oxford Center for Evidence Based Medicine. https://www.cebm.net/2011/06/2011-oxford-cebm-levels-evidence-introductory-document/ .

Eakin, J. M. (2016). Educating critical qualitative health researchers in the land of the randomized controlled trial. Qualitative Inquiry, 22 (2), 107–118.

May, A., & Mathijssen, J. (2015). Alternatieven voor RCT bij de evaluatie van effectiviteit van interventies!? Eindrapportage. In Alternatives for RCTs in the evaluation of effectiveness of interventions!? Final report .


Berwick, D. M. (2008). The science of improvement. Journal of the American Medical Association, 299 (10), 1182–1184.


Christ, T. W. (2014). Scientific-based research and randomized controlled trials, the “gold” standard? Alternative paradigms and mixed methodologies. Qualitative Inquiry, 20 (1), 72–80.

Lamont, T., Barber, N., Jd, P., Fulop, N., Garfield-Birkbeck, S., Lilford, R., Mear, L., Raine, R., & Fitzpatrick, R. (2016). New approaches to evaluating complex health and care systems. BMJ, 352:i154.

Drabble, S. J., & O’Cathain, A. (2015). Moving from Randomized Controlled Trials to Mixed Methods Intervention Evaluation. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford Handbook of Multimethod and Mixed Methods Research Inquiry (pp. 406–425). London: Oxford University Press.

Chambers, D. A., Glasgow, R. E., & Stange, K. C. (2013). The dynamic sustainability framework: Addressing the paradox of sustainment amid ongoing change. Implementation Science : IS, 8 , 117.

Hak, T. (2007). Waarnemingsmethoden in kwalitatief onderzoek. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk . [Observation methods in qualitative research] (pp. 13–25). Houten: Bohn Stafleu van Loghum.

Russell, C. K., & Gregory, D. M. (2003). Evaluation of qualitative research studies. Evidence Based Nursing, 6 (2), 36–40.

Fossey, E., Harvey, C., McDermott, F., & Davidson, L. (2002). Understanding and evaluating qualitative research. Australian and New Zealand Journal of Psychiatry, 36 , 717–732.

Yanow, D. (2000). Conducting interpretive policy analysis (Vol. 47). Thousand Oaks: Sage University Papers Series on Qualitative Research Methods.

Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22 , 63–75.

van der Geest, S. (2006). Participeren in ziekte en zorg: meer over kwalitatief onderzoek. Huisarts en Wetenschap, 49 (4), 283–287.

Hijmans, E., & Kuyper, M. (2007). Het halfopen interview als onderzoeksmethode. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk . [The half-open interview as research method (pp. 43–51). Houten: Bohn Stafleu van Loghum.

Jansen, H. (2007). Systematiek en toepassing van de kwalitatieve survey. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk . [Systematics and implementation of the qualitative survey (pp. 27–41). Houten: Bohn Stafleu van Loghum.

Pv, R., & Peremans, L. (2007). Exploreren met focusgroepgesprekken: de ‘stem’ van de groep onder de loep. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk . [Exploring with focus group conversations: the “voice” of the group under the magnifying glass (pp. 53–64). Houten: Bohn Stafleu van Loghum.

Carter, N., Bryant-Lukosius, D., DiCenso, A., Blythe, J., & Neville, A. J. (2014). The use of triangulation in qualitative research. Oncology Nursing Forum, 41 (5), 545–547.

Boeije, H. (2012). Analyseren in kwalitatief onderzoek: Denken en doen [Analysis in qualitative research: Thinking and doing]. Den Haag: Boom Lemma uitgevers.

Hunter, A., & Brewer, J. (2015). Designing Multimethod Research. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford Handbook of Multimethod and Mixed Methods Research Inquiry (pp. 185–205). London: Oxford University Press.

Archibald, M. M., Radil, A. I., Zhang, X., & Hanson, W. E. (2015). Current mixed methods practices in qualitative research: A content analysis of leading journals. International Journal of Qualitative Methods, 14 (2), 5–33.

Creswell, J. W., & Plano Clark, V. L. (2011). Choosing a Mixed Methods Design. In Designing and Conducting Mixed Methods Research . Thousand Oaks: SAGE Publications.

Mays, N., & Pope, C. (2000). Assessing quality in qualitative research. BMJ, 320 (7226), 50–52.

O'Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine : Journal of the Association of American Medical Colleges, 89 (9), 1245–1251.

Saunders, B., Sim, J., Kingstone, T., Baker, S., Waterfield, J., Bartlam, B., Burroughs, H., & Jinks, C. (2018). Saturation in qualitative research: Exploring its conceptualization and operationalization. Quality and Quantity, 52 (4), 1893–1907.

Moser, A., & Korstjens, I. (2018). Series: Practical guidance to qualitative research. Part 3: Sampling, data collection and analysis. European Journal of General Practice, 24 (1), 9–18.

Marlett, N., Shklarov, S., Marshall, D., Santana, M. J., & Wasylak, T. (2015). Building new roles and relationships in research: A model of patient engagement research. Quality of Life Research : an international journal of quality of life aspects of treatment, care and rehabilitation, 24 (5), 1057–1067.

Demian, M. N., Lam, N. N., Mac-Way, F., Sapir-Pichhadze, R., & Fernandez, N. (2017). Opportunities for engaging patients in kidney research. Canadian Journal of Kidney Health and Disease, 4 , 2054358117703070–2054358117703070.

Noyes, J., McLaughlin, L., Morgan, K., Roberts, A., Stephens, M., Bourne, J., Houlston, M., Houlston, J., Thomas, S., Rhys, R. G., et al. (2019). Designing a co-productive study to overcome known methodological challenges in organ donation research with bereaved family members. Health Expectations . 22(4):824–35.

Piil, K., Jarden, M., & Pii, K. H. (2019). Research agenda for life-threatening cancer. European Journal Cancer Care (Engl), 28 (1), e12935.

Hofmann, D., Ibrahim, F., Rose, D., Scott, D. L., Cope, A., Wykes, T., & Lempp, H. (2015). Expectations of new treatment in rheumatoid arthritis: Developing a patient-generated questionnaire. Health Expectations : an international journal of public participation in health care and health policy, 18 (5), 995–1008.

Jun, M., Manns, B., Laupacis, A., Manns, L., Rehal, B., Crowe, S., & Hemmelgarn, B. R. (2015). Assessing the extent to which current clinical research is consistent with patient priorities: A scoping review using a case study in patients on or nearing dialysis. Canadian Journal of Kidney Health and Disease, 2 , 35.

Elsie Baker, S., & Edwards, R. (2012). How many qualitative interviews is enough? In National Centre for Research Methods Review Paper . National Centre for Research Methods. http://eprints.ncrm.ac.uk/2273/4/how_many_interviews.pdf .

Sandelowski, M. (1995). Sample size in qualitative research. Research in Nursing & Health, 18 (2), 179–183.

Sim, J., Saunders, B., Waterfield, J., & Kingstone, T. (2018). Can sample size in qualitative research be determined a priori? International Journal of Social Research Methodology, 21 (5), 619–634.

Download references

Acknowledgements

No external funding.

Author information

Authors and Affiliations

Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120, Heidelberg, Germany

Loraine Busetto, Wolfgang Wick & Christoph Gumbinger

Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany

Wolfgang Wick


Contributions

LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final versions.

Corresponding author

Correspondence to Loraine Busetto .

Ethics declarations


The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Cite this article

Busetto, L., Wick, W. & Gumbinger, C. How to use and assess qualitative research methods. Neurol. Res. Pract. 2 , 14 (2020). https://doi.org/10.1186/s42466-020-00059-z


Received : 30 January 2020

Accepted : 22 April 2020

Published : 27 May 2020

DOI : https://doi.org/10.1186/s42466-020-00059-z


Keywords: Qualitative research, Mixed methods, Quality assessment



What Is Qualitative Research? | Methods & Examples

Published on June 19, 2020 by Pritha Bhandari . Revised on June 22, 2023.

Qualitative research involves collecting and analyzing non-numerical data (e.g., text, video, or audio) to understand concepts, opinions, or experiences. It can be used to gather in-depth insights into a problem or generate new ideas for research.

Qualitative research is the opposite of quantitative research , which involves collecting and analyzing numerical data for statistical analysis.

Qualitative research is commonly used in the humanities and social sciences, in subjects such as anthropology, sociology, education, health sciences, history, etc.

Examples of qualitative research questions:

  • How does social media shape body image in teenagers?
  • How do children and adults interpret healthy eating in the UK?
  • What factors influence employee retention in a large organization?
  • How is anxiety experienced around the world?
  • How can teachers integrate social issues into science curriculums?


Qualitative research is used to understand how people experience the world. While there are many approaches to qualitative research, they tend to be flexible and focus on retaining rich meaning when interpreting data.

Common approaches include grounded theory, ethnography , action research , phenomenological research, and narrative research. They share some similarities, but emphasize different aims and perspectives.

Qualitative research approaches

Grounded theory: Researchers collect rich data on a topic of interest and develop theories.
Ethnography: Researchers immerse themselves in groups or organizations to understand their cultures.
Action research: Researchers and participants collaboratively link theory to practice to drive social change.
Phenomenological research: Researchers investigate a phenomenon or event by describing and interpreting participants’ lived experiences.
Narrative research: Researchers examine how stories are told to understand how participants perceive and make sense of their experiences.

Note that qualitative research is at risk for certain research biases including the Hawthorne effect , observer bias , recall bias , and social desirability bias . While not always totally avoidable, awareness of potential biases as you collect and analyze your data can prevent them from impacting your work too much.


Each of the research approaches involves using one or more data collection methods. These are some of the most common qualitative methods:

  • Observations: recording what you have seen, heard, or encountered in detailed field notes.
  • Interviews:  personally asking people questions in one-on-one conversations.
  • Focus groups: asking questions and generating discussion among a group of people.
  • Surveys : distributing questionnaires with open-ended questions.
  • Secondary research: collecting existing data in the form of texts, images, audio or video recordings, etc.

For example, in a study of a company’s culture, several of these methods might be combined:

  • You take field notes with observations and reflect on your own experiences of the company culture.
  • You distribute open-ended surveys to employees across all the company’s offices by email to find out if the culture varies across locations.
  • You conduct in-depth interviews with employees in your office to learn about their experiences and perspectives in greater detail.

Qualitative researchers often consider themselves “instruments” in research because all observations, interpretations and analyses are filtered through their own personal lens.

For this reason, when writing up your methodology for qualitative research, it’s important to reflect on your approach and to thoroughly explain the choices you made in collecting and analyzing the data.

Qualitative data can take the form of texts, photos, videos and audio. For example, you might be working with interview transcripts, survey responses, fieldnotes, or recordings from natural settings.

Most types of qualitative data analysis share the same five steps:

  • Prepare and organize your data. This may mean transcribing interviews or typing up fieldnotes.
  • Review and explore your data. Examine the data for patterns or repeated ideas that emerge.
  • Develop a data coding system. Based on your initial ideas, establish a set of codes that you can apply to categorize your data.
  • Assign codes to the data. For example, in qualitative survey analysis, this may mean going through each participant’s responses and tagging them with codes in a spreadsheet. As you go through your data, you can create new codes to add to your system if necessary.
  • Identify recurring themes. Link codes together into cohesive, overarching themes.
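As a minimal illustration of steps 3 to 5, the sketch below applies a small, invented coding system to a few hypothetical open-ended survey responses and tallies how often each code occurs. Real coding relies on the researcher’s judgement rather than keyword matching, so treat this only as a toy example of how coded data become countable and sortable.

```python
# Sketch of steps 3-5: apply a small coding system to hypothetical
# open-ended survey responses and tally code frequencies as a starting
# point for identifying themes. Keyword matching stands in here for the
# researcher's judgement, on which real coding relies.

from collections import Counter

coding_system = {                      # step 3: develop a coding system
    "workload": ["overtime", "workload", "too much work"],
    "recognition": ["appreciated", "recognition", "praise"],
    "communication": ["informed", "communication", "meetings"],
}

responses = [                          # invented survey responses
    "I constantly work overtime and never feel appreciated.",
    "Nobody keeps us informed; meetings are cancelled all the time.",
    "The workload is fine, but recognition from managers is rare.",
]

code_counts = Counter()
for response in responses:             # step 4: assign codes to the data
    text = response.lower()
    for code, keywords in coding_system.items():
        if any(keyword in text for keyword in keywords):
            code_counts[code] += 1

print(code_counts)                     # step 5: recurring codes hint at themes
```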

There are several specific approaches to analyzing qualitative data. Although these methods share similar processes, they emphasize different concepts.

Qualitative data analysis approaches

Content analysis: Used to describe and categorize common words, phrases, and ideas in qualitative data. Example: A market researcher could perform content analysis to find out what kind of language is used in descriptions of therapeutic apps.
Thematic analysis: Used to identify and interpret patterns and themes in qualitative data. Example: A psychologist could apply thematic analysis to travel blogs to explore how tourism shapes self-identity.
Textual analysis: Used to examine the content, structure, and design of texts. Example: A media researcher could use textual analysis to understand how news coverage of celebrities has changed in the past decade.
Discourse analysis: Used to study communication and how language is used to achieve effects in specific contexts. Example: A political scientist could use discourse analysis to study how politicians generate trust in election campaigns.
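A very basic form of content analysis, for instance, simply counts how often particular words occur in a set of texts. The sketch below does this for a few invented app descriptions; real content analysis also involves defining categories and reading words in context, so this is only a starting point.

```python
# Very basic content analysis: count word frequencies in a few invented
# descriptions of hypothetical therapeutic apps, ignoring common stopwords.

import re
from collections import Counter

descriptions = [
    "A calming app that helps you manage stress and sleep better.",
    "Track your mood, reduce stress, and build healthy sleep habits.",
    "Guided breathing exercises for stress relief and relaxation.",
]

stopwords = {"a", "an", "and", "the", "that", "you", "your", "for"}

words = []
for text in descriptions:
    words += [w for w in re.findall(r"[a-z']+", text.lower()) if w not in stopwords]

print(Counter(words).most_common(5))   # e.g. "stress" and "sleep" dominate
```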

Qualitative research often tries to preserve the voice and perspective of participants and can be adjusted as new research questions arise. Qualitative research is good for:

  • Flexibility

The data collection and analysis process can be adapted as new ideas or patterns emerge. They are not rigidly decided beforehand.

  • Natural settings

Data collection occurs in real-world contexts or in naturalistic ways.

  • Meaningful insights

Detailed descriptions of people’s experiences, feelings and perceptions can be used in designing, testing or improving systems or products.

  • Generation of new ideas

Open-ended responses mean that researchers can uncover novel problems or opportunities that they wouldn’t have thought of otherwise.


Researchers must consider practical and theoretical limitations in analyzing and interpreting their data. Qualitative research suffers from:

  • Unreliability

The real-world setting often makes qualitative research unreliable because of uncontrolled factors that affect the data.

  • Subjectivity

Due to the researcher’s primary role in analyzing and interpreting data, qualitative research cannot be replicated . The researcher decides what is important and what is irrelevant in data analysis, so interpretations of the same data can vary greatly.

  • Limited generalizability

Small samples are often used to gather detailed data about specific contexts. Despite rigorous analysis procedures, it is difficult to draw generalizable conclusions because the data may be biased and unrepresentative of the wider population .

  • Labor-intensive

Although software can be used to manage and record large amounts of text, data analysis often has to be checked or performed manually.

Frequently asked questions about qualitative research

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

Cite this article

Bhandari, P. (2023, June 22). What Is Qualitative Research? | Methods & Examples. Scribbr. Retrieved June 9, 2024, from https://www.scribbr.com/methodology/qualitative-research/


  • Published: 05 October 2018

Interviews and focus groups in qualitative research: an update for the digital age

  • P. Gill 1 &
  • J. Baillie 2  

British Dental Journal volume  225 ,  pages 668–672 ( 2018 ) Cite this article

29k Accesses

48 Citations

20 Altmetric

Metrics details

  • Highlights that qualitative research is used increasingly in dentistry. Interviews and focus groups remain the most common qualitative methods of data collection.
  • Suggests the advent of digital technologies has transformed how qualitative research can now be undertaken.
  • Suggests interviews and focus groups can offer significant, meaningful insight into participants' experiences, beliefs and perspectives, which can help to inform developments in dental practice.

Qualitative research is used increasingly in dentistry, due to its potential to provide meaningful, in-depth insights into participants' experiences, perspectives, beliefs and behaviours. These insights can subsequently help to inform developments in dental practice and further related research. The most common methods of data collection used in qualitative research are interviews and focus groups. While these are primarily conducted face-to-face, the ongoing evolution of digital technologies, such as video chat and online forums, has further transformed these methods of data collection. This paper therefore discusses interviews and focus groups in detail, outlines how they can be used in practice, how digital technologies can further inform the data collection process, and what these methods can offer dentistry.

Introduction

Traditionally, research in dentistry has primarily been quantitative in nature. 1 However, in recent years, there has been a growing interest in qualitative research within the profession, due to its potential to further inform developments in practice, policy, education and training. Consequently, in 2008, the British Dental Journal (BDJ) published a four-paper qualitative research series, 2 , 3 , 4 , 5 to help increase awareness and understanding of this particular methodological approach.

Since the papers were originally published, two scoping reviews have demonstrated the ongoing proliferation in the use of qualitative research within the field of oral healthcare. 1 , 6 To date, the original four-paper series continues to be well cited and two of the main papers remain widely accessed among the BDJ readership. 2 , 3 The potential value of well-conducted qualitative research to evidence-based practice is now also widely recognised by service providers, policy makers, funding bodies and those who commission, support and use healthcare research.

Besides increasing standalone use, qualitative methods are now also routinely incorporated into larger mixed method study designs, such as clinical trials, as they can offer additional, meaningful insights into complex problems that simply could not be provided by quantitative methods alone. Qualitative methods can also be used to further facilitate in-depth understanding of important aspects of clinical trial processes, such as recruitment. For example, Ellis et al . investigated why edentulous older patients, dissatisfied with conventional dentures, decline implant treatment, despite its established efficacy, and frequently refuse to participate in related randomised clinical trials, even when financial constraints are removed. 7 Through the use of focus groups in Canada and the UK, the authors found that fears of pain and potential complications, along with perceived embarrassment, exacerbated by age, are common reasons why older patients typically refuse dental implants. 7

The last decade has also seen further developments in qualitative research, due to the ongoing evolution of digital technologies. These developments have transformed how researchers can access and share information, communicate and collaborate, recruit and engage participants, collect and analyse data and disseminate and translate research findings. 8 Where appropriate, such technologies are therefore capable of extending and enhancing how qualitative research is undertaken. 9 For example, it is now possible to collect qualitative data via instant messaging, email or online/video chat, using appropriate online platforms.

These innovative approaches to research are therefore cost-effective and convenient, reduce geographical constraints, and are often useful for accessing 'hard to reach' participants (for example, those who are immobile or socially isolated). 8 , 9 However, digital technologies are still relatively new and constantly evolving and therefore present a variety of pragmatic and methodological challenges. Furthermore, given their very nature, their use in many qualitative studies and/or with certain participant groups may be inappropriate and should therefore always be carefully considered. While it is beyond the scope of this paper to provide a detailed explication regarding the use of digital technologies in qualitative research, insight is provided into how such technologies can be used to facilitate the data collection process in interviews and focus groups.

In light of such developments, it is perhaps therefore timely to update the main paper 3 of the original BDJ series. As with the previous publications, this paper has been purposely written in an accessible style, to enhance readability, particularly for those who are new to qualitative research. While the focus remains on the most common qualitative methods of data collection – interviews and focus groups – appropriate revisions have been made to provide a novel perspective, and the paper should therefore be helpful to those who would like to know more about qualitative research. This paper specifically focuses on undertaking qualitative research with adult participants only.

Overview of qualitative research

Qualitative research is an approach that focuses on people and their experiences, behaviours and opinions. 10 , 11 The qualitative researcher seeks to answer questions of 'how' and 'why', providing detailed insight and understanding, 11 which quantitative methods cannot reach. 12 Within qualitative research, there are distinct methodologies influencing how the researcher approaches the research question, data collection and data analysis. 13 For example, phenomenological studies focus on the lived experience of individuals, explored through their description of the phenomenon. Ethnographic studies explore the culture of a group and typically involve the use of multiple methods to uncover the issues. 14

While methodology is the 'thinking tool', the methods are the 'doing tools'; 13 the ways in which data are collected and analysed. There are multiple qualitative data collection methods, including interviews, focus groups, observations, documentary analysis, participant diaries, photography and videography. Two of the most commonly used qualitative methods are interviews and focus groups, which are explored in this article. The data generated through these methods can be analysed in one of many ways, according to the methodological approach chosen. A common approach is thematic data analysis, involving the identification of themes and subthemes across the data set. Further information on approaches to qualitative data analysis has been discussed elsewhere. 1

Qualitative research is an evolving and adaptable approach, used by different disciplines for different purposes. Traditionally, qualitative data, specifically interviews, focus groups and observations, have been collected face-to-face with participants. In more recent years, digital technologies have contributed to the ongoing evolution of qualitative research. Digital technologies offer researchers different ways of recruiting participants and collecting data, and offer participants opportunities to be involved in research that is not necessarily face-to-face.

Research interviews are a fundamental qualitative research method 15 and are utilised across methodological approaches. Interviews enable the researcher to learn in depth about the perspectives, experiences, beliefs and motivations of the participant. 3 , 16 Examples include exploring patients' perspectives of fear/anxiety triggers in dental treatment, 17 patients' experiences of oral health and diabetes, 18 and dental students' motivations for their choice of career. 19

Interviews may be structured, semi-structured or unstructured, 3 according to the purpose of the study, with less structured interviews facilitating a more in-depth and flexible interviewing approach. 20 Structured interviews are similar to verbal questionnaires and are used if the researcher requires clarification on a topic; however, they produce less in-depth data about a participant's experience. 3 Unstructured interviews may be used when little is known about a topic and involve the researcher asking an opening question; 3 the participant then leads the discussion. 20 Semi-structured interviews are commonly used in healthcare research, enabling the researcher to ask predetermined questions, 20 while ensuring the participant discusses issues they feel are important.

Interviews can be undertaken face-to-face or using digital methods when the researcher and participant are in different locations. Audio-recording the interview, with the consent of the participant, is essential for all interviews regardless of the medium as it enables accurate transcription, the process of turning the audio file into a word-for-word transcript. This transcript is the data, which the researcher then analyses according to the chosen approach.

Types of interview

Qualitative studies often utilise one-to-one, face-to-face interviews with research participants. This involves arranging a mutually convenient time and place to meet the participant, signing a consent form and audio-recording the interview. However, digital technologies have expanded the potential for interviews in research, enabling individuals to participate in qualitative research regardless of location.

Telephone interviews can be a useful alternative to face-to-face interviews and are commonly used in qualitative research. They enable participants from different geographical areas to participate and may be less onerous for participants than meeting a researcher in person. 15 A qualitative study explored patients' perspectives of dental implants and utilised telephone interviews due to the quality of the data that could be yielded. 21 The researcher needs to consider how they will audio record the interview, which can be facilitated by purchasing a recorder that connects directly to the telephone. One potential disadvantage of telephone interviews is the inability of the interviewer and participant to see each other. This can be resolved by using software for audio and video calls online – such as Skype – to conduct interviews with participants in qualitative studies. Advantages of this approach include being able to see the participant if video calls are used, enabling observation of non-verbal communication, and the fact that the software can be free to use. However, participants are required to have a device and internet connection, as well as being computer literate, potentially limiting who can participate in the study. One qualitative study explored the role of dental hygienists in reducing oral health disparities in Canada. 22 The researcher conducted interviews using Skype, which enabled dental hygienists from across Canada to be interviewed within the research budget, accommodating the participants' schedules. 22

A less commonly used approach to qualitative interviews is the use of social virtual worlds. A qualitative study accessed a social virtual world – Second Life – to explore the health literacy skills of individuals who use social virtual worlds to access health information. 23 The researcher created an avatar and interview room, and undertook interviews with participants using voice and text methods. 23 This approach to recruitment and data collection enables individuals from diverse geographical locations to participate, while remaining anonymous if they wish. Furthermore, for interviews conducted using text methods, transcription of the interview is not required as the researcher can save the written conversation with the participant, with the participant's consent. However, the researcher and participant need to be familiar with how the social virtual world works to engage in an interview this way.

Conducting an interview

Ensuring informed consent before any interview is a fundamental aspect of the research process. Participants in research must be afforded autonomy and respect; consent should be informed and voluntary. 24 Individuals should have the opportunity to read an information sheet about the study, ask questions, understand how their data will be stored and used, and know that they are free to withdraw at any point without reprisal. The qualitative researcher should take written consent before undertaking the interview. In a face-to-face interview, this is straightforward: the researcher and participant both sign copies of the consent form, keeping one each. However, this approach is less straightforward when the researcher and participant do not meet in person. A recent protocol paper outlined an approach for taking consent for telephone interviews, which involved: audio recording the participant agreeing to each point on the consent form; the researcher signing the consent form and keeping a copy; and posting a copy to the participant. 25 This process could be replicated in other interview studies using digital methods.

There are advantages and disadvantages of using face-to-face and digital methods for research interviews. Ultimately, for both approaches, the quality of the interview is determined by the researcher. 16 Appropriate training and preparation are thus required. Healthcare professionals can use their interpersonal communication skills when undertaking a research interview, particularly questioning, listening and conversing. 3 However, the purpose of an interview is to gain information about the study topic, 26 rather than offering help and advice. 3 The researcher therefore needs to listen attentively to participants, enabling them to describe their experience without interruption. 3 The use of active listening skills also helps to facilitate the interview. 14 Spradley outlined elements and strategies for research interviews, 27 which are a useful guide for qualitative researchers:

Greeting and explaining the project/interview

Asking descriptive (broad), structural (explore response to descriptive) and contrast (difference between) questions

Asymmetry between the researcher and participant talking

Expressing interest and cultural ignorance

Repeating, restating and incorporating the participant's words when asking questions

Creating hypothetical situations

Asking friendly questions

Knowing when to leave.

For semi-structured interviews, a topic guide (also called an interview schedule) is used to guide the content of the interview – an example of a topic guide is outlined in Box 1 . The topic guide, usually based on the research questions, existing literature and, for healthcare professionals, their clinical experience, is developed by the research team. The topic guide should include open-ended questions that elicit in-depth information, and offer participants the opportunity to talk about issues important to them. This is vital in qualitative research where the researcher is interested in exploring the experiences and perspectives of participants. It can be useful for qualitative researchers to pilot the topic guide with the first participants, 10 to ensure the questions are relevant and understandable, and to amend the questions if required.

Regardless of the medium of interview, the researcher must consider the setting of the interview. For face-to-face interviews, this could be in the participant's home, in an office or another mutually convenient location. A quiet location is preferable to promote confidentiality, enable the researcher and participant to concentrate on the conversation, and to facilitate accurate audio-recording of the interview. For interviews using digital methods the same principles apply: a quiet, private space where the researcher and participant feel comfortable and confident to participate in an interview.

Box 1: Example of a topic guide

Study focus: Parents' experiences of brushing their child's (aged 0–5) teeth

1. Can you tell me about your experience of cleaning your child's teeth?

How old was your child when you started cleaning their teeth?

Why did you start cleaning their teeth at that point?

How often do you brush their teeth?

What do you use to brush their teeth and why?

2. Could you explain how you find cleaning your child's teeth?

Do you find anything difficult?

What makes cleaning their teeth easier for you?

3. How has your experience of cleaning your child's teeth changed over time?

Has it become easier or harder?

Have you changed how often and how you clean their teeth? If so, why?

4. Could you describe how your child finds having their teeth cleaned?

What do they enjoy about having their teeth cleaned?

Is there anything they find upsetting about having their teeth cleaned?

5. Where do you look for information/advice about cleaning your child's teeth?

What did your health visitor tell you about cleaning your child's teeth? (If anything)

What has the dentist told you about caring for your child's teeth? (If visited)

Have any family members given you advice about how to clean your child's teeth? If so, what did they tell you? Did you follow their advice?

6. Is there anything else you would like to discuss about this?

Focus groups

A focus group is a moderated group discussion on a pre-defined topic, for research purposes. 28 , 29 While not aligned to a particular qualitative methodology (for example, grounded theory or phenomenology) as such, focus groups are used increasingly in healthcare research, as they are useful for exploring collective perspectives, attitudes, behaviours and experiences. Consequently, they can yield rich, in-depth data and illuminate agreement and inconsistencies 28 within and, where appropriate, between groups. Examples include public perceptions of dental implants and subsequent impact on help-seeking and decision making, 30 and general dental practitioners' views on patient safety in dentistry. 31

Focus groups can be used alone or in conjunction with other methods, such as interviews or observations, and can therefore help to confirm, extend or enrich understanding and provide alternative insights. 28 The social interaction between participants often results in lively discussion and can therefore facilitate the collection of rich, meaningful data. However, they are complex to organise and manage, due to the number of participants, and may also be inappropriate for exploring particularly sensitive issues that many participants may feel uncomfortable about discussing in a group environment.

Focus groups are primarily undertaken face-to-face but can now also be undertaken online, using appropriate technologies such as email, bulletin boards, online research communities, chat rooms, discussion forums, social media and video conferencing. 32 Using such technologies, data collection can also be synchronous (for example, online discussions in 'real time') or, unlike traditional face-to-face focus groups, asynchronous (for example, online/email discussions in 'non-real time'). While many of the fundamental principles of focus group research are the same, regardless of how they are conducted, a number of subtle nuances are associated with the online medium, 32 some of which are discussed further in the following sections.

Focus group considerations

Some key considerations associated with face-to-face focus groups are: how many participants are required; whether participants within each group should know each other (or not); and how many focus groups are needed within a single study. These issues are much debated and there is no definitive answer. However, the number of focus groups required will largely depend on the topic area, the depth and breadth of data needed, the desired level of participation required 29 and the necessity (or not) for data saturation.

The optimum group size is around six to eight participants (excluding researchers), although groups can work effectively with between three and 14 participants. 3 If the group is too small, it may limit discussion, but if it is too large, it may become disorganised and difficult to manage. It is, however, prudent to over-recruit for a focus group by approximately two to three participants, to allow for potential non-attenders. For many researchers, particularly novice researchers, group size may also be informed by pragmatic considerations, such as the type of study, resources available and moderator experience. 28 Similar size and mix considerations exist for online focus groups. Typically, synchronous online focus groups will have around three to eight participants but, as the discussion does not happen simultaneously, asynchronous groups may have as many as 10–30 participants. 33

The topic area and potential group interaction should guide group composition considerations. Pre-existing groups, where participants know each other (for example, work colleagues) may be easier to recruit, have shared experiences and may enjoy a familiarity, which facilitates discussion and/or the ability to challenge each other courteously. 3 However, if there is a potential power imbalance within the group or if existing group norms and hierarchies may adversely affect the ability of participants to speak freely, then 'stranger groups' (that is, where participants do not already know each other) may be more appropriate. 34 , 35

Focus group management

Face-to-face focus groups should normally be conducted by two researchers: a moderator and an observer. 28 The moderator facilitates group discussion, while the observer typically monitors group dynamics, behaviours, non-verbal cues, seating arrangements and speaking order, which is essential for transcription and analysis. The same principles of informed consent, as discussed in the interview section, also apply to focus groups, regardless of medium. However, the consent process for online discussions will probably be managed somewhat differently. For example, while an appropriate participant information leaflet (and consent form) would still be required, the process is likely to be managed electronically (for example, via email) and would need to specifically address issues relating to technology (for example, anonymity and use, storage and access to online data). 32

The venue in which a face-to-face focus group is conducted should be of a suitable size, private, quiet, free from distractions and in a collectively convenient location. It should also be conducted at a time appropriate for participants, 28 as this is likely to promote attendance. As with interviews, the same ethical considerations apply (as discussed earlier). However, online focus groups may present additional ethical challenges associated with issues such as informed consent, appropriate access and secure data storage. Further guidance can be found elsewhere. 8 , 32

Before the focus group commences, the researchers should establish rapport with participants, as this will help to put them at ease and result in a more meaningful discussion. Consequently, researchers should introduce themselves, provide further clarity about the study and how the process will work in practice and outline the 'ground rules'. Ground rules are designed to assist, not hinder, group discussion and typically include: 3 , 28 , 29

Discussions within the group are confidential to the group

Only one person can speak at a time

All participants should have sufficient opportunity to contribute

There should be no unnecessary interruptions while someone is speaking

Everyone can expect to be listened to and to have their views respected

Challenging contrary opinions is appropriate, but ridiculing is not.

Moderating a focus group requires considered management and good interpersonal skills to help guide the discussion and, where appropriate, keep it sufficiently focused. Avoid, therefore, participating, leading, expressing personal opinions or correcting participants' knowledge 3 , 28 as this may bias the process. A relaxed, interested demeanour will also help participants to feel comfortable and promote candid discourse. Moderators should also prevent the discussion being dominated by any one person, ensure differences of opinions are discussed fairly and, if required, encourage reticent participants to contribute. 3 Asking open questions, reflecting on significant issues, inviting further debate, probing responses accordingly, and seeking further clarification, as and where appropriate, will help to obtain sufficient depth and insight into the topic area.

Moderating online focus groups requires comparable skills, particularly if the discussion is synchronous, as the discussion may be dominated by those who can type proficiently. 36 It is therefore important that sufficient time and respect are accorded to those who may not be able to type as quickly. Asynchronous discussions are usually less problematic in this respect, as interactions are less instant. However, moderating an asynchronous discussion presents additional challenges, particularly if participants are geographically dispersed, as they may be online at different times. Consequently, the moderator will not always be present and the discussion may therefore need to occur over several days, which can be difficult to manage and facilitate and invariably requires considerable flexibility. 32 It is also worth recognising that establishing rapport with participants via an online medium is often more challenging than face-to-face and may therefore require additional time, skills, effort and consideration.

As with research interviews, focus groups should be guided by an appropriate interview schedule, as discussed earlier in the paper. For example, the schedule will usually be informed by the review of the literature and study aims, and will merely provide a topic guide to help inform subsequent discussions. To provide a verbatim account of the discussion, focus groups must be recorded, using an audio-recorder with a good quality multi-directional microphone. While videotaping is possible, some participants may find it obtrusive, 3 which may adversely affect group dynamics. The use (or not) of a video recorder, should therefore be carefully considered.

At the end of the focus group, a few minutes should be spent rounding up and reflecting on the discussion. 28 Depending on the topic area, it is possible that some participants may have revealed deeply personal issues and may therefore require further help and support, such as a constructive debrief or possibly even referral on to a relevant third party. It is also possible that some participants may feel that the discussion did not adequately reflect their views and, consequently, may no longer wish to be associated with the study. 28 Such occurrences are likely to be uncommon, but should they arise, it is important to further discuss any concerns and, if appropriate, offer them the opportunity to withdraw (including any data relating to them) from the study. Immediately after the discussion, researchers should compile notes regarding thoughts and ideas about the focus group, which can assist with data analysis and, if appropriate, any further data collection.

Qualitative research is increasingly being utilised within dental research to explore the experiences, perspectives, motivations and beliefs of participants. The contributions of qualitative research to evidence-based practice are increasingly being recognised, both as standalone research and as part of larger mixed-method studies, including clinical trials. Interviews and focus groups remain commonly used data collection methods in qualitative research, and with the advent of digital technologies, their utilisation continues to evolve. Digital methods of qualitative data collection present additional methodological, ethical and practical considerations, but they also potentially offer considerable flexibility to participants and researchers. Consequently, regardless of format, qualitative methods have significant potential to inform important areas of dental practice, policy and further related research.

Gussy M, Dickson-Swift V, Adams J. A scoping review of qualitative research in peer-reviewed dental publications. Int J Dent Hygiene 2013; 11: 174–179.

Burnard P, Gill P, Stewart K, Treasure E, Chadwick B. Analysing and presenting qualitative data. Br Dent J 2008; 204: 429–432.

Gill P, Stewart K, Treasure E, Chadwick B. Methods of data collection in qualitative research: interviews and focus groups. Br Dent J 2008; 204: 291–295.

Gill P, Stewart K, Treasure E, Chadwick B. Conducting qualitative interviews with school children in dental research. Br Dent J 2008; 204: 371–374.

Stewart K, Gill P, Chadwick B, Treasure E. Qualitative research in dentistry. Br Dent J 2008; 204: 235–239.

Masood M, Thaliath E, Bower E, Newton J. An appraisal of the quality of published qualitative dental research. Community Dent Oral Epidemiol 2011; 39: 193–203.

Ellis J, Levine A, Bedos C et al. Refusal of implant supported mandibular overdentures by elderly patients. Gerodontology 2011; 28: 62–68.

Macfarlane S, Bucknall T. Digital Technologies in Research. In Gerrish K, Lathlean J (editors) The Research Process in Nursing. 7th edition. pp. 71–86. Oxford: Wiley Blackwell; 2015.

Lee R, Fielding N, Blank G. Online Research Methods in the Social Sciences: An Editorial Introduction. In Fielding N, Lee R, Blank G (editors) The Sage Handbook of Online Research Methods. pp. 3–16. London: Sage Publications; 2016.

Creswell J. Qualitative inquiry and research design: Choosing among five designs. Thousand Oaks, CA: Sage, 1998.

Guest G, Namey E, Mitchell M. Qualitative research: Defining and designing. In Guest G, Namey E, Mitchell M (editors) Collecting Qualitative Data: A Field Manual For Applied Research. pp. 1–40. London: Sage Publications, 2013.

Pope C, Mays N. Qualitative research: Reaching the parts other methods cannot reach: an introduction to qualitative methods in health and health services research. BMJ 1995; 311: 42–45.

Giddings L, Grant B. A Trojan Horse for positivism? A critique of mixed methods research. Adv Nurs Sci 2007; 30: 52–60.

Hammersley M, Atkinson P. Ethnography: Principles in Practice. London: Routledge, 1995.

Oltmann S. Qualitative interviews: A methodological discussion of the interviewer and respondent contexts. Forum Qualitative Sozialforschung/Forum: Qualitative Social Research 2016; 17: Art. 15.

Patton M. Qualitative Research and Evaluation Methods. Thousand Oaks, CA: Sage, 2002.

Wang M, Vinall-Collier K, Csikar J, Douglas G. A qualitative study of patients' views of techniques to reduce dental anxiety. J Dent 2017; 66: 45–51.

Lindenmeyer A, Bowyer V, Roscoe J, Dale J, Sutcliffe P. Oral health awareness and care preferences in patients with diabetes: a qualitative study. Fam Pract 2013; 30: 113–118.

Gallagher J, Clarke W, Wilson N. Understanding the motivation: a qualitative study of dental students' choice of professional career. Eur J Dent Educ 2008; 12: 89–98.

Tod A. Interviewing. In Gerrish K, Lacey A (editors) The Research Process in Nursing. Oxford: Blackwell Publishing, 2006.

Grey E, Harcourt D, O'Sullivan D, Buchanan H, Kipatrick N. A qualitative study of patients' motivations and expectations for dental implants. Br Dent J 2013; 214: 10.1038/sj.bdj.2012.1178.

Farmer J, Peressini S, Lawrence H. Exploring the role of the dental hygienist in reducing oral health disparities in Canada: A qualitative study. Int J Dent Hygiene 2017; 10.1111/idh.12276.

McElhinney E, Cheater F, Kidd L. Undertaking qualitative health research in social virtual worlds. J Adv Nurs 2013; 70: 1267–1275.

Health Research Authority. UK Policy Framework for Health and Social Care Research. Available at https://www.hra.nhs.uk/planning-and-improving-research/policies-standards-legislation/uk-policy-framework-health-social-care-research/ (accessed September 2017).

Baillie J, Gill P, Courtenay P. Knowledge, understanding and experiences of peritonitis among patients, and their families, undertaking peritoneal dialysis: A mixed methods study protocol. J Adv Nurs 2017; 10.1111/jan.13400.

Kvale S. Interviews. Thousand Oaks (CA): Sage, 1996.

Spradley J. The Ethnographic Interview. New York: Holt, Rinehart and Winston, 1979.

Goodman C, Evans C. Focus Groups. In Gerrish K, Lathlean J (editors) The Research Process in Nursing. pp. 401–412. Oxford: Wiley Blackwell, 2015.

Shaha M, Wenzell J, Hill E. Planning and conducting focus group research with nurses. Nurse Res 2011; 18: 77–87.

Wang G, Gao X, Edward C. Public perception of dental implants: a qualitative study. J Dent 2015; 43: 798–805.

Bailey E. Contemporary views of dental practitioners on patient safety. Br Dent J 2015; 219: 535–540.

Abrams K, Gaiser T. Online Focus Groups. In Field N, Lee R, Blank G (editors) The Sage Handbook of Online Research Methods. pp. 435–450. London: Sage Publications, 2016.

Poynter R. The Handbook of Online and Social Media Research. West Sussex: John Wiley & Sons, 2010.

Kevern J, Webb C. Focus groups as a tool for critical social research in nurse education. Nurse Educ Today 2001; 21: 323–333.

Kitzinger J, Barbour R. Introduction: The Challenge and Promise of Focus Groups. In Barbour RS, Kitzinger J (eds) Developing Focus Group Research. pp. 1–20. London: Sage Publications, 1999.

Krueger R, Casey M. Focus Groups: A Practical Guide for Applied Research. 4th ed. Thousand Oaks, California: SAGE; 2009.

Author information

Authors and affiliations

P. Gill: Senior Lecturer (Adult Nursing), School of Healthcare Sciences, Cardiff University

J. Baillie: Lecturer (Adult Nursing) and RCBC Wales Postdoctoral Research Fellow, School of Healthcare Sciences, Cardiff University

Corresponding author

Correspondence to P. Gill.

Cite this article

Gill, P., Baillie, J. Interviews and focus groups in qualitative research: an update for the digital age. Br Dent J 225, 668–672 (2018). https://doi.org/10.1038/sj.bdj.2018.815

Qualitative research methods: when to use them and how to judge them

K. Hammarberg, M. Kirkman, S. de Lacey, Qualitative research methods: when to use them and how to judge them, Human Reproduction , Volume 31, Issue 3, March 2016, Pages 498–501, https://doi.org/10.1093/humrep/dev334

In March 2015, an impressive set of guidelines for best practice on how to incorporate psychosocial care in routine infertility care was published by the ESHRE Psychology and Counselling Guideline Development Group ( ESHRE Psychology and Counselling Guideline Development Group, 2015 ). The authors report that the guidelines are based on a comprehensive review of the literature and we congratulate them on their meticulous compilation of evidence into a clinically useful document. However, when we read the methodology section, we were baffled and disappointed to find that evidence from research using qualitative methods was not included in the formulation of the guidelines. Despite stating that ‘qualitative research has significant value to assess the lived experience of infertility and fertility treatment’, the group excluded this body of evidence because qualitative research is ‘not generally hypothesis-driven and not objective/neutral, as the researcher puts him/herself in the position of the participant to understand how the world is from the person's perspective’.

Qualitative and quantitative research methods are often juxtaposed as representing two different world views. In quantitative circles, qualitative research is commonly viewed with suspicion and considered lightweight because it involves small samples which may not be representative of the broader population, it is seen as not objective, and the results are assessed as biased by the researchers' own experiences or opinions. In qualitative circles, quantitative research can be dismissed as over-simplifying individual experience in the cause of generalisation, failing to acknowledge researcher biases and expectations in research design, and requiring guesswork to understand the human meaning of aggregate data.

As social scientists who investigate psychosocial aspects of human reproduction, we use qualitative and quantitative methods, separately or together, depending on the research question. The crucial part is to know when to use what method.

The peer-review process is a pillar of scientific publishing. One of the important roles of reviewers is to assess the scientific rigour of the studies from which authors draw their conclusions. If rigour is lacking, the paper should not be published. As with research using quantitative methods, research using qualitative methods is home to the good, the bad and the ugly. It is essential that reviewers know the difference. Rejection letters are hard to take but more often than not they are based on legitimate critique. However, from time to time it is obvious that the reviewer has little grasp of what constitutes rigour or quality in qualitative research. The first author (K.H.) recently submitted a paper that reported findings from a qualitative study about fertility-related knowledge and information-seeking behaviour among people of reproductive age. In the rejection letter one of the reviewers (not from Human Reproduction ) lamented, ‘Even for a qualitative study, I would expect that some form of confidence interval and paired t-tables analysis, etc. be used to analyse the significance of results'. This comment reveals the reviewer's inappropriate application to qualitative research of criteria relevant only to quantitative research.

In this commentary, we give illustrative examples of questions most appropriately answered using qualitative methods and provide general advice about how to appraise the scientific rigour of qualitative studies. We hope this will help the journal's reviewers and readers appreciate the legitimate place of qualitative research and ensure we do not throw the baby out with the bath water by excluding or rejecting papers simply because they report the results of qualitative studies.

In psychosocial research, ‘quantitative’ research methods are appropriate when ‘factual’ data are required to answer the research question; when general or probability information is sought on opinions, attitudes, views, beliefs or preferences; when variables can be isolated and defined; when variables can be linked to form hypotheses before data collection; and when the question or problem is known, clear and unambiguous. Quantitative methods can reveal, for example, what percentage of the population supports assisted conception, their distribution by age, marital status, residential area and so on, as well as changes from one survey to the next ( Kovacs et al. , 2012 ); the number of donors and donor siblings located by parents of donor-conceived children ( Freeman et al. , 2009 ); and the relationship between the attitude of donor-conceived people to learning of their donor insemination conception and their family ‘type’ (one or two parents, lesbian or heterosexual parents; Beeson et al. , 2011 ).

In contrast, ‘qualitative’ methods are used to answer questions about experience, meaning and perspective, most often from the standpoint of the participant. These data are usually not amenable to counting or measuring. Qualitative research techniques include ‘small-group discussions’ for investigating beliefs, attitudes and concepts of normative behaviour; ‘semi-structured interviews’, to seek views on a focused topic or, with key informants, for background information or an institutional perspective; ‘in-depth interviews’ to understand a condition, experience, or event from a personal perspective; and ‘analysis of texts and documents’, such as government reports, media articles, websites or diaries, to learn about distributed or private knowledge.

Qualitative methods have been used to reveal, for example, potential problems in implementing a proposed trial of elective single embryo transfer, where small-group discussions enabled staff to explain their own resistance, leading to an amended approach ( Porter and Bhattacharya, 2005 ). Small-group discussions among assisted reproductive technology (ART) counsellors were used to investigate how the welfare principle is interpreted and practised by health professionals who must apply it in ART ( de Lacey et al. , 2015 ). When legislative change meant that gamete donors could seek identifying details of people conceived from their gametes, parents needed advice on how best to tell their children. Small-group discussions were convened to ask adolescents (not known to be donor-conceived) to reflect on how they would prefer to be told ( Kirkman et al. , 2007 ).

When a population cannot be identified, such as anonymous sperm donors from the 1980s, a qualitative approach with wide publicity can reach people who do not usually volunteer for research and reveal (for example) their attitudes to proposed legislation to remove anonymity with retrospective effect ( Hammarberg et al. , 2014 ). When researchers invite people to talk about their reflections on experience, they can sometimes learn more than they set out to discover. In describing their responses to proposed legislative change, participants also talked about people conceived as a result of their donations, demonstrating various constructions and expectations of relationships ( Kirkman et al. , 2014 ).

Interviews with parents in lesbian-parented families generated insight into the diverse meanings of the sperm donor in the creation and life of the family ( Wyverkens et al. , 2014 ). Oral and written interviews also revealed the embarrassment and ambivalence surrounding sperm donors evident in participants in donor-assisted conception ( Kirkman, 2004 ). The way in which parents conceptualise unused embryos and why they discard rather than donate was explored and understood via in-depth interviews, showing how and why the meaning of those embryos changed with parenthood ( de Lacey, 2005 ). In-depth interviews were also used to establish the intricate understanding by embryo donors and recipients of the meaning of embryo donation and the families built as a result ( Goedeke et al. , 2015 ).

It is possible to combine quantitative and qualitative methods, although great care should be taken to ensure that the theory behind each method is compatible and that the methods are being used for appropriate reasons. The two methods can be used sequentially (first a quantitative then a qualitative study or vice versa), where the first approach is used to facilitate the design of the second; they can be used in parallel as different approaches to the same question; or a dominant method may be enriched with a small component of an alternative method (such as qualitative interviews ‘nested’ in a large survey). It is important to note that free text in surveys represents qualitative data but does not constitute qualitative research. Qualitative and quantitative methods may be used together for corroboration (hoping for similar outcomes from both methods), elaboration (using qualitative data to explain or interpret quantitative data, or to demonstrate how the quantitative findings apply in particular cases), complementarity (where the qualitative and quantitative results differ but generate complementary insights) or contradiction (where qualitative and quantitative data lead to different conclusions). Each has its advantages and challenges ( Brannen, 2005 ).

Qualitative research is gaining increased momentum in the clinical setting and carries different criteria for evaluating its rigour or quality. Quantitative studies generally involve the systematic collection of data about a phenomenon, using standardized measures and statistical analysis. In contrast, qualitative studies involve the systematic collection, organization, description and interpretation of textual, verbal or visual data. The particular approach taken determines to a certain extent the criteria used for judging the quality of the report. However, research using qualitative methods can be evaluated ( Dixon-Woods et al. , 2006 ; Young et al. , 2014 ) and there are some generic guidelines for assessing qualitative research ( Kitto et al. , 2008 ).

Although the terms ‘reliability’ and ‘validity’ are contentious among qualitative researchers ( Lincoln and Guba, 1985 ) with some preferring ‘verification’, research integrity and robustness are as important in qualitative studies as they are in other forms of research. It is widely accepted that qualitative research should be ethical, important, intelligibly described, and use appropriate and rigorous methods ( Cohen and Crabtree, 2008 ). In research investigating data that can be counted or measured, replicability is essential. When other kinds of data are gathered in order to answer questions of personal or social meaning, we need to be able to capture real-life experiences, which cannot be identical from one person to the next. Furthermore, meaning is culturally determined and subject to evolutionary change. The way of explaining a phenomenon—such as what it means to use donated gametes—will vary, for example, according to the cultural significance of ‘blood’ or genes, interpretations of marital infidelity and religious constructs of sexual relationships and families. Culture may apply to a country, a community, or other actual or virtual group, and a person may be engaged at various levels of culture. In identifying meaning for members of a particular group, consistency may indeed be found from one research project to another. However, individuals within a cultural group may present different experiences and perceptions or transgress cultural expectations. That does not make them ‘wrong’ or invalidate the research. Rather, it offers insight into diversity and adds a piece to the puzzle to which other researchers also contribute.

In qualitative research the objective stance is obsolete, the researcher is the instrument, and ‘subjects’ become ‘participants’ who may contribute to data interpretation and analysis ( Denzin and Lincoln, 1998 ). Qualitative researchers defend the integrity of their work by different means: trustworthiness, credibility, applicability and consistency are the evaluative criteria ( Leininger, 1994 ).

Trustworthiness

A report of a qualitative study should contain the same robust procedural description as any other study. The purpose of the research, how it was conducted, procedural decisions, and details of data generation and management should be transparent and explicit. A reviewer should be able to follow the progression of events and decisions and understand their logic because there is adequate description, explanation and justification of the methodology and methods ( Kitto et al. , 2008 ).

Credibility

Credibility is the criterion for evaluating the truth value or internal validity of qualitative research. A qualitative study is credible when its results, presented with adequate descriptions of context, are recognizable to people who share the experience and those who care for or treat them. As the instrument in qualitative research, the researcher defends its credibility through practices such as reflexivity (reflection on the influence of the researcher on the research), triangulation (where appropriate, answering the research question in several ways, such as through interviews, observation and documentary analysis) and substantial description of the interpretation process; verbatim quotations from the data are supplied to illustrate and support their interpretations ( Sandelowski, 1986 ). Where excerpts of data and interpretations are incongruent, the credibility of the study is in doubt.

Applicability

Applicability, or transferability of the research findings, is the criterion for evaluating external validity. A study is considered to meet the criterion of applicability when its findings can fit into contexts outside the study situation and when clinicians and researchers view the findings as meaningful and applicable in their own experiences.

Larger sample sizes do not produce greater applicability. Depth may be sacrificed to breadth or there may be too much data for adequate analysis. Sample sizes in qualitative research are typically small. The term ‘saturation’ is often used in reference to decisions about sample size in research using qualitative methods. Emerging from grounded theory, where filling theoretical categories is considered essential to the robustness of the developing theory, data saturation has been expanded to describe a situation where data tend towards repetition or where data cease to offer new directions and raise new questions ( Charmaz, 2005 ). However, the legitimacy of saturation as a generic marker of sampling adequacy has been questioned ( O'Reilly and Parker, 2013 ). Caution must be exercised to ensure that a commitment to saturation does not assume an ‘essence’ of an experience in which limited diversity is anticipated; each account is likely to be subtly different and each ‘sample’ will contribute to knowledge without telling the whole story. Increasingly, it is expected that researchers will report the kind of saturation they have applied and their criteria for recognising its achievement; an assessor will need to judge whether the choice is appropriate and consistent with the theoretical context within which the research has been conducted.
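Reporting of saturation can be made more transparent by documenting, interview by interview, whether new codes are still emerging. The Python sketch below is a hypothetical illustration of such a log (the interviews and code names are invented); it does not decide saturation, which, as noted above, remains a judgement the researcher must justify.

```python
# Illustrative only: a hypothetical log of the codes identified in each
# successive interview, used to document (not decide) when new codes stop
# appearing - one possible way to report the saturation criterion applied.
interview_codes = [
    {"stigma", "cost", "waiting_times"},      # interview 1
    {"cost", "family_pressure", "stigma"},    # interview 2
    {"waiting_times", "information_gaps"},    # interview 3
    {"cost", "stigma"},                       # interview 4
    {"stigma", "family_pressure"},            # interview 5
]

seen = set()
for i, codes in enumerate(interview_codes, start=1):
    new = codes - seen          # codes not encountered in earlier interviews
    seen |= codes
    print(f"Interview {i}: {len(new)} new code(s) {sorted(new)}")

# A run of interviews with no new codes is one (contested) signal of saturation;
# the researcher still has to judge whether diversity has been adequately sampled.
```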

Sampling strategies are usually purposive, convenient, theoretical or snowballed. Maximum variation sampling may be used to seek representation of diverse perspectives on the topic. Homogeneous sampling may be used to recruit a group of participants with specified criteria. The threat of bias is irrelevant; participants are recruited and selected specifically because they can illuminate the phenomenon being studied. Rather than being predetermined by statistical power analysis, qualitative study samples are dependent on the nature of the data, the availability of participants and where those data take the investigator. Multiple data collections may also take place to obtain maximum insight into sensitive topics. For instance, the question of how decisions are made for embryo disposition may involve sampling within the patient group as well as from scientists, clinicians, counsellors and clinic administrators.

Consistency

Consistency, or dependability of the results, is the criterion for assessing reliability. This does not mean that the same result would necessarily be found in other contexts but that, given the same data, other researchers would find similar patterns. Researchers often seek maximum variation in the experience of a phenomenon, not only to illuminate it but also to discourage fulfilment of limited researcher expectations (for example, negative cases or instances that do not fit the emerging interpretation or theory should be actively sought and explored). Qualitative researchers sometimes describe the processes by which verification of the theoretical findings by another team member takes place ( Morse and Richards, 2002 ).
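Where a team does choose to quantify agreement between two coders as part of such verification, a simple calculation can be used. The example below, with invented ratings, computes percent agreement and Cohen's kappa for a single code; it is offered only as an optional illustration, since many qualitative researchers prefer discussion and consensus between coders over agreement statistics, in keeping with the evaluative criteria described here.

```python
# Hypothetical example: two coders independently record whether the code
# "fear_of_pain" applies to each of ten transcript excerpts (1 = applied).
coder_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
coder_b = [1, 0, 0, 1, 0, 0, 1, 0, 1, 0]

n = len(coder_a)
po = sum(a == b for a, b in zip(coder_a, coder_b)) / n   # observed agreement

# Chance-expected agreement from each coder's marginal "yes"/"no" rates.
pa_yes, pb_yes = sum(coder_a) / n, sum(coder_b) / n
pe = pa_yes * pb_yes + (1 - pa_yes) * (1 - pb_yes)

kappa = (po - pe) / (1 - pe)   # Cohen's kappa for a single binary code
print(f"Percent agreement: {po:.2f}, Cohen's kappa: {kappa:.2f}")
```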

Research that uses qualitative methods is not, as it seems sometimes to be represented, the easy option, nor is it a collation of anecdotes. It usually involves a complex theoretical or philosophical framework. Rigorous analysis is conducted without the aid of straightforward mathematical rules. Researchers must demonstrate the validity of their analysis and conclusions, resulting in longer papers and occasional frustration with the word limits of appropriate journals. Nevertheless, we need the different kinds of evidence that are generated by qualitative methods. The experience of health, illness and medical intervention cannot always be counted and measured; researchers need to understand what they mean to individuals and groups. Knowledge gained from qualitative research methods can inform clinical practice, indicate how to support people living with chronic conditions and contribute to community education and awareness about people who are (for example) experiencing infertility or using assisted conception.

Each author drafted a section of the manuscript and the manuscript as a whole was reviewed and revised by all authors in consultation.

No external funding was either sought or obtained for this study.

The authors have no conflicts of interest to declare.

Beeson D, Jennings P, Kramer W. Offspring searching for their sperm donors: how family types shape the process. Hum Reprod 2011;26:2415–2424.

Brannen J. Mixing methods: the entry of qualitative and quantitative approaches into the research process. Int J Soc Res Methodol 2005;8:173–184.

Charmaz K. Grounded theory in the 21st century: applications for advancing social justice studies. In: Denzin NK, Lincoln YS (eds). The Sage Handbook of Qualitative Research. California: Sage Publications Inc., 2005.

Cohen D, Crabtree B. Evaluative criteria for qualitative research in health care: controversies and recommendations. Ann Fam Med 2008;6:331–339.

de Lacey S. Parent identity and ‘virtual’ children: why patients discard rather than donate unused embryos. Hum Reprod 2005;20:1661–1669.

de Lacey SL, Peterson K, McMillan J. Child interests in assisted reproductive technology: how is the welfare principle applied in practice? Hum Reprod 2015;30:616–624.

Denzin N, Lincoln Y. Entering the field of qualitative research. In: Denzin NK, Lincoln YS (eds). The Landscape of Qualitative Research: Theories and Issues. Thousand Oaks: Sage, 1998, 1–34.

Dixon-Woods M, Bonas S, Booth A, Jones DR, Miller T, Shaw RL, Smith JA, Young B. How can systematic reviews incorporate qualitative research? A critical perspective. Qual Res 2006;6:27–44.

ESHRE Psychology and Counselling Guideline Development Group. Routine Psychosocial Care in Infertility and Medically Assisted Reproduction: A Guide for Fertility Staff, 2015. http://www.eshre.eu/Guidelines-and-Legal/Guidelines/Psychosocial-care-guideline.aspx.

Freeman T, Jadva V, Kramer W, Golombok S. Gamete donation: parents' experiences of searching for their child's donor siblings or donor. Hum Reprod 2009;24:505–516.

Goedeke S, Daniels K, Thorpe M, Du Preez E. Building extended families through embryo donation: the experiences of donors and recipients. Hum Reprod 2015;30:2340–2350.

Hammarberg K, Johnson L, Bourne K, Fisher J, Kirkman M. Proposed legislative change mandating retrospective release of identifying information: consultation with donors and Government response. Hum Reprod 2014;29:286–292.

Kirkman M. Saviours and satyrs: ambivalence in narrative meanings of sperm provision. Cult Health Sex 2004;6:319–336.

Kirkman M, Rosenthal D, Johnson L. Families working it out: adolescents' views on communicating about donor-assisted conception. Hum Reprod 2007;22:2318–2324.

Kirkman M, Bourne K, Fisher J, Johnson L, Hammarberg K. Gamete donors' expectations and experiences of contact with their donor offspring. Hum Reprod 2014;29:731–738.

Kitto S, Chesters J, Grbich C. Quality in qualitative research. Med J Aust 2008;188:243–246.

Kovacs GT, Morgan G, Levine M, McCrann J. The Australian community overwhelmingly approves IVF to treat subfertility, with increasing support over three decades. Aust N Z J Obstet Gynaecol 2012;52:302–304.

Leininger M. Evaluation criteria and critique of qualitative research studies. In: Morse J (ed). Critical Issues in Qualitative Research Methods. Thousand Oaks: Sage, 1994, 95–115.

Lincoln YS, Guba EG. Naturalistic Inquiry. Newbury Park, CA: Sage Publications, 1985.

Morse J, Richards L. Readme First for a User's Guide to Qualitative Methods. Thousand Oaks: Sage, 2002.

O'Reilly M, Parker N. ‘Unsatisfactory saturation’: a critical exploration of the notion of saturated sample sizes in qualitative research. Qual Res 2013;13:190–197.

Porter M, Bhattacharya S. Investigation of staff and patients' opinions of a proposed trial of elective single embryo transfer. Hum Reprod 2005;20:2523–2530.

Sandelowski M. The problem of rigor in qualitative research. Adv Nurs Sci 1986;8:27–37.

Wyverkens E, Provoost V, Ravelingien A, De Sutter P, Pennings G, Buysse A. Beyond sperm cells: a qualitative study on constructed meanings of the sperm donor in lesbian families. Hum Reprod 2014;29:1248–1254.

Young K, Fisher J, Kirkman M. Women's experiences of endometriosis: a systematic review of qualitative research. J Fam Plann Reprod Health Care 2014;41:225–234.

Keywords: conflict of interest, credibility, qualitative research, quantitative methods


Qualitative research: its value and applicability

Published online by Cambridge University Press:  02 January 2018

Qualitative research has a rich tradition in the study of human social behaviour and cultures. Its general aim is to develop concepts which help us to understand social phenomena in, wherever possible, natural rather than experimental settings, to gain an understanding of the experiences, perceptions and/or behaviours of individuals, and the meanings attached to them. The effective application of qualitative methods to other disciplines, including clinical, health service and education research, has a rapidly expanding and robust evidence base. Qualitative approaches have particular potential in psychiatry research, singularly and in combination with quantitative methods. This article outlines the nature and potential application of qualitative research as well as attempting to counter a number of misconceptions.

Qualitative research has a rich tradition in the social sciences. Since the late 19th century, researchers interested in studying the social behaviour and cultures of humankind have perceived limitations in trying to explain the phenomena they encounter in purely quantifiable, measurable terms. Anthropology, in its social and cultural forms, was one of the foremost disciplines in developing what would later be termed a qualitative approach, founded as it was on ethnographic studies which sought an understanding of the culture of people from other societies, often hitherto unknown and far removed in geography (Bernard). Early researchers would spend extended periods of time living in societies, observing, noting and photographing the minutiae of daily life, with the most committed often learning the language of the peoples they observed, in the hope of gaining greater acceptance by them and a more detailed understanding of the cultural norms at play. All academic disciplines concerned with human and social behaviour, including anthropology, sociology and psychology, now make extensive use of qualitative research methods whose systematic application was first developed by these colonial-era social scientists.

Their methods, involving observation, participation and discussion of the individuals and groups being studied, as well as reading related textual and visual media and artefacts, form the bedrock of all qualitative social scientific inquiry. The general aim of qualitative research is thus to develop concepts which help us to understand social phenomena in, wherever possible, natural rather than experimental settings, to gain an understanding of the experiences, perceptions and/or behaviours of those studied, and the meanings attached to them (Bryman). Researchers interested in finding out why people behave the way they do, how people are affected by events, how attitudes and opinions are formed, and how and why cultures and practices have developed as they have might well consider qualitative methods to answer their questions.

It is fair to say that clinical and health-related research is still dominated by quantitative methods, of which the randomised controlled trial, focused on hypothesis-testing through experiment controlled by randomisation, is perhaps the quintessential method. Qualitative approaches may seem obscure to the uninitiated when directly compared with the experimental, quantitative methods used in clinical research. There is increasing recognition among researchers in these fields, however, that qualitative methods such as observation, in-depth interviews, focus groups, consensus methods, case studies and the interpretation of texts can be more effective than quantitative approaches in exploring complex phenomena and as such are valuable additions to the methodological armoury available to them (Denzin and Lincoln).

In considering what kind of research questions are best answered using a qualitative approach, it is important to remember that, first and foremost, unlike quantitative research, inquiry conducted in the qualitative tradition seeks to answer the question ‘What?’ as opposed to ‘How often?’. Qualitative methods are designed to reveal what is going on by describing and interpreting phenomena; they do not attempt to measure how often an event or association occurs. Research conducted using qualitative methods is normally done with an intent to preserve the inherent complexities of human behaviour as opposed to assuming a reductive view of the subject in order to count and measure the occurrence of phenomena. Qualitative research normally takes an inductive approach, moving from observation to hypothesis rather than hypothesis-testing or deduction, although the latter is perfectly possible.

When conducting research in this tradition, the researcher should, if possible, avoid separating the stages of study design, data collection and analysis, but instead weave backwards and forwards between the raw data and the process of conceptualisation, thereby making sense of the data throughout the period of data collection. Although there are inevitable tensions among methodologists concerned with qualitative practice, there is broad consensus that a priori categories and concepts reflecting a researcher's own preconceptions should not be imposed on the process of data collection and analysis. The emphasis should be on capturing and interpreting research participants' true perceptions and/or behaviours.

Using combined approaches

The polarity between qualitative and quantitative research has largely been assuaged, to the benefit of all disciplines, which now recognise the value, and compatibility, of both approaches. Indeed, there can be particular value in using quantitative methods in combination with qualitative methods (Barbour). In the exploratory stages of a research project, qualitative methodology can be used to clarify or refine the research question, to aid conceptualisation and to generate a hypothesis. It can also help to identify the correct variables to be measured, as researchers have been known to measure before they fully understand the underlying issues pertaining to a study and, as a consequence, may not always target the most appropriate factors. Qualitative work can be valuable in the interpretation, qualification or illumination of quantitative research findings. This is particularly helpful when focusing on anomalous results, as these test the main hypothesis that has been formulated. Qualitative methods can also be used in combination with quantitative methods to triangulate findings and support the validation process, for example, where three or more methods are used and the results compared for similarity (e.g. a survey, interviews and a period of observation in situ).

‘There is little value in qualitative research findings because we cannot generalise from them’

Generalisability refers to the extent to which an account can be applied to people, times and settings other than those actually studied. A common criticism of qualitative research is that the results of a study are rarely, if ever, generalisable to a larger population because the sample groups are small and the participants are not chosen randomly. Such criticism fails to recognise the distinctiveness of qualitative research where sampling is concerned. In quantitative research, the intent is to secure a large random sample that is representative of the general population, with the purpose of eliminating individual variations, focusing on generalisations and thereby allowing for statistical inference of results that are applicable across an entire population. In qualitative research, generalisability is based on the assumption that it is valuable to begin to understand similar situations or people, rather than being representative of the target population. Qualitative research is rarely based on the use of random samples, so the kinds of inference to wider populations made on the basis of surveys cannot be made in qualitative analysis.

Qualitative researchers utilise purposive sampling, whereby research participants are selected deliberately to test a particular theoretical premise. The purpose of sampling here is not to identify a random subgroup of the general population from which statistically significant results can be extrapolated, but rather to identify, in a systematic way, individuals who possess relevant characteristics for the question being considered (Strauss and Corbin). The researchers must instead ensure that any reference to people and settings beyond those in the study is justified, which is normally achieved by defining, in detail, the type of settings and people to whom the explanation or theory applies, based on the identification of similar settings and people in the study. The intent is to permit a detailed examination of the phenomenon, resulting in a text-rich interpretation that can deepen our understanding and produce a plausible explanation of the phenomenon under study. The results are not intended to be statistically generalisable, although any theory they generate might well be.

‘Qualitative research cannot really claim reliability or validity’

In quantitative research, reliability is the extent to which different observers, or the same observers on different occasions, make the same observations or collect the same data about the same object of study. The changing nature of social phenomena scrutinised by qualitative researchers inevitably makes the possibility of the same kind of reliability problematic in their work. A number of alternative concepts to reliability have been developed by qualitative methodologists, however, known collectively as forms of trustworthiness (Guba).

One way to demonstrate trustworthiness is to present detailed evidence in the form of quotations from interviews and field notes, along with thick textual descriptions of episodes, events and settings. To be trustworthy, qualitative analysis should also be auditable, making it possible to retrace the steps leading to a certain interpretation or theory to check that no alternatives were left unexamined and that no researcher biases had any avoidable influence on the results. Usually, this involves the recording of information about who did what with the data and in what order so that the origin of interpretations can be retraced.
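One concrete way to support such an audit trail is an append-only log of analytic decisions. The sketch below is a minimal illustration under assumed conventions; the field names, file name and example entry are not a required or standard format.

```python
import csv
import datetime

# Minimal sketch of an append-only audit trail for coding decisions, so that
# later interpretations can be retraced to specific analytic steps.
# Field names, file name and the example entry are assumptions, not a standard.

AUDIT_FILE = "coding_audit_log.csv"
FIELDS = ["timestamp", "researcher", "document", "action", "detail"]

def log_decision(researcher, document, action, detail):
    """Append one analytic decision to the audit log."""
    with open(AUDIT_FILE, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:          # write a header the first time the file is used
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
            "researcher": researcher,
            "document": document,
            "action": action,
            "detail": detail,
        })

log_decision("JW", "interview_04.txt",
             "merged codes", "'travel cost' merged into 'financial strain'")
```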

In general, within the research traditions of the natural sciences, findings are validated by their repeated replication, and if a second investigator cannot replicate the findings when they repeat the experiment, then the original results are questioned. If no one else can replicate the original results, then they are rejected as fatally flawed and therefore invalid. Natural scientists have developed a broad spectrum of procedures and study designs to ensure that experiments are dependable and that replication is possible. In the social sciences, particularly when using qualitative research methods, replication is rarely possible given that, when observed or questioned again, respondents will almost never say or do precisely the same things. Whether results have been successfully replicated is always a matter of interpretation. There are, however, procedures that, if followed, can significantly reduce the possibility of producing analyses that are partial or biased (Altheide and Johnson).

Triangulation is one way of doing this. It essentially means combining multiple views, approaches or methods in an investigation to obtain a more accurate interpretation of the phenomena, thereby creating an analysis of greater depth and richness. As the process of analysing qualitative data normally involves some form of coding, whereby data are broken down into units of analysis, constant comparison can also be used. Constant comparison involves checking the consistency and accuracy of interpretations, and especially the application of codes, by constantly comparing one interpretation or code with others, both of a similar sort and in other cases and settings. This is, in effect, a form of interrater reliability: multiple researchers or teams code the same passages, areas of agreement and disagreement are identified, and consensus is reached about each code's definition, improving consistency and rigour. It is also good practice in qualitative analysis to look constantly for outliers – results that are out of line with the main findings, or that directly contradict what the explanations would predict – and to re-examine the data to find a way of explaining the atypical finding, producing a modified and more complex theory and explanation.
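Where two coders have independently applied codes to the same passages, their agreement can also be quantified as a complement to consensus discussion. The following is a minimal sketch; the codes and passages are hypothetical, and the statistic shown is the standard Cohen's kappa rather than a procedure prescribed by the sources discussed here.

```python
from collections import Counter

# Minimal sketch: Cohen's kappa for two coders who have each assigned one
# code to the same passages. The codes and data below are purely illustrative.

coder_a = ["stigma", "access", "stigma", "cost", "access", "stigma", "cost", "access"]
coder_b = ["stigma", "access", "access", "cost", "access", "stigma", "cost", "stigma"]

n = len(coder_a)
observed_agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Expected chance agreement, from each coder's marginal code frequencies.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected_agreement = sum(
    (freq_a[code] / n) * (freq_b[code] / n) for code in set(coder_a) | set(coder_b)
)

kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)
print(f"Observed agreement: {observed_agreement:.2f}")
print(f"Cohen's kappa:      {kappa:.2f}")
```

A kappa value only summarises consistency of code application; disagreements still have to be discussed and resolved substantively.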

Qualitative research has been established for many decades in the social sciences and encompasses a valuable set of methodological tools for data collection, analysis and interpretation. Their effective application to other disciplines, including clinical, health service and education research, has a rapidly expanding and robust evidence base. The use of qualitative approaches to research in psychiatry has particular potential, singularly and in combination with quantitative methods (Crabb and Chur-Hansen). When devising research questions in the specialty, careful thought should always be given to the most appropriate methodology, and consideration given to the great depth and richness of empirical evidence which a robust qualitative approach is able to provide.


Steven J. Agius. Qualitative research: its value and applicability. The Psychiatrist, Volume 37, Issue 6. DOI: https://doi.org/10.1192/pb.bp.113.042770



Qualitative and Mixed Methods Research


APA-published content featuring qualitative and mixed methods research

Open Access Article: Psychotherapy

The Implementation of a Team Training Intervention for School Mental Health: Lessons Learned

Using qualitative methods, Wolk and colleagues examine how to enhance the implementation of evidence-based practice with school mental health teams.

Open Access Article: Journal of Experimental Psychology: General

The Persuasive Power of Knowledge: Testing the Confidence Heuristic

Pulford and colleagues use a mixed methods approach to test the confidence heuristic—the notion that people are confident when they know they are right, which in turn makes them more persuasive.

October 2018

Open Access Article: American Psychologist

Journal Article Reporting Standards for Qualitative Primary, Qualitative Meta-Analytic, and Mixed Methods Research in Psychology: The APA Publications and Communications Board Task Force Report

In recognition of the growth in qualitative research, the journal article reporting standards have been updated to include guidelines for reporting qualitative and mixed methods.

January 2018

Journal: Qualitative Psychology

Articles that underscore the distinctive contributions of qualitative research to the advancement of psychological knowledge.

Journals Special Issues

Qualitative Methods in Asian American Psychology, Part II

The second part of this special issue on qualitative research begins with two articles that explore aspects of ethnic socialization.

Qualitative Methods in Asian American Psychology, Part I

With an open access introduction, the issue is comprised of articles devoted to studies that use qualitative methods to examine a range of Asian American topics.

December 2017

Using Qualitative Research Methods to Improve Clinical Care

Including an open access introduction, the articles within this special issue illustrate the use of qualitative methods in research focused upon improving clinical practice in pediatric psychology.

Qualitative Strategies for Ethnocultural Research

This book presents the state-of-the-art discourse on qualitative methods in psychology and community studies.

APA Journals

APA Journals produces an array of scholarly journals that cover the spectrum of modern psychology and feature the latest research in the field.

APA Books

APA Books publishes books to meet the needs of scholars, clinicians, researchers, and students in all areas of psychology.

APA Journals Article Spotlight

APA Journals Article Spotlight is a free summary of recently published articles in an APA journal.

APA Style JARS

The APA Style Journal Article Reporting Standards (JARS) website describes best practices for reporting quantitative, qualitative, and mixed methods research in scholarly articles.


Qualitative Research – Methods, Analysis Types and Guide

Qualitative Research

Qualitative research is a type of research methodology that focuses on exploring and understanding people’s beliefs, attitudes, behaviors, and experiences through the collection and analysis of non-numerical data. It seeks to answer research questions through the examination of subjective data, such as interviews, focus groups, observations, and textual analysis.

Qualitative research aims to uncover the meaning and significance of social phenomena, and it typically involves a more flexible and iterative approach to data collection and analysis compared to quantitative research. Qualitative research is often used in fields such as sociology, anthropology, psychology, and education.

Qualitative Research Methods


Qualitative Research Methods are as follows:

One-to-One Interview

This method involves conducting an interview with a single participant to gain a detailed understanding of their experiences, attitudes, and beliefs. One-to-one interviews can be conducted in-person, over the phone, or through video conferencing. The interviewer typically uses open-ended questions to encourage the participant to share their thoughts and feelings. One-to-one interviews are useful for gaining detailed insights into individual experiences.

Focus Groups

This method involves bringing together a group of people to discuss a specific topic in a structured setting. The focus group is led by a moderator who guides the discussion and encourages participants to share their thoughts and opinions. Focus groups are useful for generating ideas and insights, exploring social norms and attitudes, and understanding group dynamics.

Ethnographic Studies

This method involves immersing oneself in a culture or community to gain a deep understanding of its norms, beliefs, and practices. Ethnographic studies typically involve long-term fieldwork and observation, as well as interviews and document analysis. Ethnographic studies are useful for understanding the cultural context of social phenomena and for gaining a holistic understanding of complex social processes.

Text Analysis

This method involves analyzing written or spoken language to identify patterns and themes. Text analysis can be quantitative or qualitative. Qualitative text analysis involves close reading and interpretation of texts to identify recurring themes, concepts, and patterns. Text analysis is useful for understanding media messages, public discourse, and cultural trends.
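As a contrast with close interpretive reading, the simplest quantitative pass over a text is a frequency count. The sketch below is illustrative only; the transcript snippet and the stop-word list are invented for the example.

```python
import re
from collections import Counter

# Minimal sketch of a quantitative first pass over text: count word
# frequencies after removing a few common stop words. The transcript
# snippet and stop-word list are illustrative only.

text = (
    "The clinic was far away and the travel was expensive. "
    "Travel to the clinic took a whole day, and the cost kept rising."
)

stop_words = {"the", "and", "a", "to", "was", "took", "kept"}
words = re.findall(r"[a-z']+", text.lower())
counts = Counter(w for w in words if w not in stop_words)

print(counts.most_common(5))
# e.g. [('clinic', 2), ('travel', 2), ('far', 1), ('away', 1), ('expensive', 1)]
```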

Case Study

This method involves an in-depth examination of a single person, group, or event to gain an understanding of complex phenomena. Case studies typically involve a combination of data collection methods, such as interviews, observations, and document analysis, to provide a comprehensive understanding of the case. Case studies are useful for exploring unique or rare cases, and for generating hypotheses for further research.

Process of Observation

This method involves systematically observing and recording behaviors and interactions in natural settings. The observer may take notes, use audio or video recordings, or use other methods to document what they see. Process of observation is useful for understanding social interactions, cultural practices, and the context in which behaviors occur.

Record Keeping

This method involves keeping detailed records of observations, interviews, and other data collected during the research process. Record keeping is essential for ensuring the accuracy and reliability of the data, and for providing a basis for analysis and interpretation.

Surveys

This method involves collecting data from a large sample of participants through a structured questionnaire. Surveys can be conducted in person, over the phone, through mail, or online. Surveys are useful for collecting data on attitudes, beliefs, and behaviors, and for identifying patterns and trends in a population.

Qualitative data analysis is the process of turning unstructured data into meaningful insights. It involves extracting and organizing information from sources such as interviews, focus groups, and surveys. The goal is to understand people’s attitudes, behaviors, and motivations.

Qualitative Research Analysis Methods

Qualitative Research analysis methods involve a systematic approach to interpreting and making sense of the data collected in qualitative research. Here are some common qualitative data analysis methods:

Thematic Analysis

This method involves identifying patterns or themes in the data that are relevant to the research question. The researcher reviews the data, identifies keywords or phrases, and groups them into categories or themes. Thematic analysis is useful for identifying patterns across multiple data sources and for generating new insights into the research topic.
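Much of this grouping is interpretive and is usually supported by qualitative data analysis software, but a first mechanical pass can be sketched in a few lines. In the illustration below, the keyword-to-theme map and the excerpts are hypothetical, and keyword matching is a starting point for coding, not a substitute for interpretation.

```python
import re

# Minimal sketch of a first-pass thematic grouping: flag excerpts whose
# wording matches keywords that the researcher has mapped to draft themes.
# Both the keyword map and the excerpts are hypothetical examples.

theme_keywords = {
    "access barriers": ["transport", "distance", "appointment"],
    "financial strain": ["cost", "afford", "expensive"],
    "emotional impact": ["anxious", "worry", "relief"],
}

excerpts = [
    "There is no bus to the hospital, so transport is the main problem.",
    "We could not afford the second round of treatment.",
    "I felt anxious every time the phone rang with results.",
]

for excerpt in excerpts:
    matched = [
        theme
        for theme, keywords in theme_keywords.items()
        if any(re.search(rf"\b{kw}", excerpt, re.IGNORECASE) for kw in keywords)
    ]
    print(f"{', '.join(matched) or '(uncoded)'}: {excerpt}")
```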

Content Analysis

This method involves analyzing the content of written or spoken language to identify key themes or concepts. Content analysis can be quantitative or qualitative. Qualitative content analysis involves close reading and interpretation of texts to identify recurring themes, concepts, and patterns. Content analysis is useful for identifying patterns in media messages, public discourse, and cultural trends.

Discourse Analysis

This method involves analyzing language to understand how it constructs meaning and shapes social interactions. Discourse analysis can involve a variety of methods, such as conversation analysis, critical discourse analysis, and narrative analysis. Discourse analysis is useful for understanding how language shapes social interactions, cultural norms, and power relationships.

Grounded Theory Analysis

This method involves developing a theory or explanation based on the data collected. Grounded theory analysis starts with the data and uses an iterative process of coding and analysis to identify patterns and themes in the data. The theory or explanation that emerges is grounded in the data, rather than preconceived hypotheses. Grounded theory analysis is useful for understanding complex social phenomena and for generating new theoretical insights.

Narrative Analysis

This method involves analyzing the stories or narratives that participants share to gain insights into their experiences, attitudes, and beliefs. Narrative analysis can involve a variety of methods, such as structural analysis, thematic analysis, and discourse analysis. Narrative analysis is useful for understanding how individuals construct their identities, make sense of their experiences, and communicate their values and beliefs.

Phenomenological Analysis

This method involves analyzing how individuals make sense of their experiences and the meanings they attach to them. Phenomenological analysis typically involves in-depth interviews with participants to explore their experiences in detail. Phenomenological analysis is useful for understanding subjective experiences and for developing a rich understanding of human consciousness.

Comparative Analysis

This method involves comparing and contrasting data across different cases or groups to identify similarities and differences. Comparative analysis can be used to identify patterns or themes that are common across multiple cases, as well as to identify unique or distinctive features of individual cases. Comparative analysis is useful for understanding how social phenomena vary across different contexts and groups.

Applications of Qualitative Research

Qualitative research has many applications across different fields and industries. Here are some examples of how qualitative research is used:

  • Market Research: Qualitative research is often used in market research to understand consumer attitudes, behaviors, and preferences. Researchers conduct focus groups and one-on-one interviews with consumers to gather insights into their experiences and perceptions of products and services.
  • Health Care: Qualitative research is used in health care to explore patient experiences and perspectives on health and illness. Researchers conduct in-depth interviews with patients and their families to gather information on their experiences with different health care providers and treatments.
  • Education: Qualitative research is used in education to understand student experiences and to develop effective teaching strategies. Researchers conduct classroom observations and interviews with students and teachers to gather insights into classroom dynamics and instructional practices.
  • Social Work: Qualitative research is used in social work to explore social problems and to develop interventions to address them. Researchers conduct in-depth interviews with individuals and families to understand their experiences with poverty, discrimination, and other social problems.
  • Anthropology: Qualitative research is used in anthropology to understand different cultures and societies. Researchers conduct ethnographic studies and observe and interview members of different cultural groups to gain insights into their beliefs, practices, and social structures.
  • Psychology: Qualitative research is used in psychology to understand human behavior and mental processes. Researchers conduct in-depth interviews with individuals to explore their thoughts, feelings, and experiences.
  • Public Policy : Qualitative research is used in public policy to explore public attitudes and to inform policy decisions. Researchers conduct focus groups and one-on-one interviews with members of the public to gather insights into their perspectives on different policy issues.

How to Conduct Qualitative Research

Here are some general steps for conducting qualitative research:

  • Identify your research question: Qualitative research starts with a research question or set of questions that you want to explore. This question should be focused and specific, but also broad enough to allow for exploration and discovery.
  • Select your research design: There are different types of qualitative research designs, including ethnography, case study, grounded theory, and phenomenology. You should select a design that aligns with your research question and that will allow you to gather the data you need to answer your research question.
  • Recruit participants: Once you have your research question and design, you need to recruit participants. The number of participants you need will depend on your research design and the scope of your research. You can recruit participants through advertisements, social media, or through personal networks.
  • Collect data: There are different methods for collecting qualitative data, including interviews, focus groups, observation, and document analysis. You should select the method or methods that align with your research design and that will allow you to gather the data you need to answer your research question.
  • Analyze data: Once you have collected your data, you need to analyze it. This involves reviewing your data, identifying patterns and themes, and developing codes to organize your data. You can use different software programs to help you analyze your data, or you can do it manually (a minimal sketch of organizing coded data follows this list).
  • Interpret data: Once you have analyzed your data, you need to interpret it. This involves making sense of the patterns and themes you have identified, and developing insights and conclusions that answer your research question. You should be guided by your research question and use your data to support your conclusions.
  • Communicate results: Once you have interpreted your data, you need to communicate your results. This can be done through academic papers, presentations, or reports. You should be clear and concise in your communication, and use examples and quotes from your data to support your findings.
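As a small illustration of the analysis step above, coded segments can be tallied into a code-by-transcript matrix before interpretation. This is a minimal sketch only; the transcripts and codes are hypothetical, and real projects would typically use dedicated qualitative data analysis software rather than ad hoc scripts.

```python
from collections import Counter, defaultdict

# Minimal sketch of organizing coded data for analysis: build a simple
# code-by-transcript frequency matrix from already-coded segments.
# The transcripts and codes below are hypothetical examples.

coded_segments = [
    ("interview_01", "access barriers"),
    ("interview_01", "financial strain"),
    ("interview_01", "access barriers"),
    ("interview_02", "emotional impact"),
    ("interview_02", "access barriers"),
    ("interview_03", "financial strain"),
]

matrix = defaultdict(Counter)
for transcript, code in coded_segments:
    matrix[transcript][code] += 1

for transcript in sorted(matrix):
    print(transcript, dict(matrix[transcript]))
```

Such a matrix only shows where codes occur; relating codes to themes and to the research question remains interpretive work.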

Examples of Qualitative Research

Here are some real-time examples of qualitative research:

  • Customer Feedback: A company may conduct qualitative research to understand the feedback and experiences of its customers. This may involve conducting focus groups or one-on-one interviews with customers to gather insights into their attitudes, behaviors, and preferences.
  • Healthcare: A healthcare provider may conduct qualitative research to explore patient experiences and perspectives on health and illness. This may involve conducting in-depth interviews with patients and their families to gather information on their experiences with different health care providers and treatments.
  • Education: An educational institution may conduct qualitative research to understand student experiences and to develop effective teaching strategies. This may involve conducting classroom observations and interviews with students and teachers to gather insights into classroom dynamics and instructional practices.
  • Social Work: A social worker may conduct qualitative research to explore social problems and to develop interventions to address them. This may involve conducting in-depth interviews with individuals and families to understand their experiences with poverty, discrimination, and other social problems.
  • Anthropology: An anthropologist may conduct qualitative research to understand different cultures and societies. This may involve conducting ethnographic studies and observing and interviewing members of different cultural groups to gain insights into their beliefs, practices, and social structures.
  • Psychology: A psychologist may conduct qualitative research to understand human behavior and mental processes. This may involve conducting in-depth interviews with individuals to explore their thoughts, feelings, and experiences.
  • Public Policy: A government agency or non-profit organization may conduct qualitative research to explore public attitudes and to inform policy decisions. This may involve conducting focus groups and one-on-one interviews with members of the public to gather insights into their perspectives on different policy issues.

Purpose of Qualitative Research

The purpose of qualitative research is to explore and understand the subjective experiences, behaviors, and perspectives of individuals or groups in a particular context. Unlike quantitative research, which focuses on numerical data and statistical analysis, qualitative research aims to provide in-depth, descriptive information that can help researchers develop insights and theories about complex social phenomena.

Qualitative research can serve multiple purposes, including:

  • Exploring new or emerging phenomena: Qualitative research can be useful for exploring new or emerging phenomena, such as new technologies or social trends. This type of research can help researchers develop a deeper understanding of these phenomena and identify potential areas for further study.
  • Understanding complex social phenomena: Qualitative research can be useful for exploring complex social phenomena, such as cultural beliefs, social norms, or political processes. This type of research can help researchers develop a more nuanced understanding of these phenomena and identify factors that may influence them.
  • Generating new theories or hypotheses: Qualitative research can be useful for generating new theories or hypotheses about social phenomena. By gathering rich, detailed data about individuals’ experiences and perspectives, researchers can develop insights that may challenge existing theories or lead to new lines of inquiry.
  • Providing context for quantitative data: Qualitative research can be useful for providing context for quantitative data. By gathering qualitative data alongside quantitative data, researchers can develop a more complete understanding of complex social phenomena and identify potential explanations for quantitative findings.

When to use Qualitative Research

Here are some situations where qualitative research may be appropriate:

  • Exploring a new area: If little is known about a particular topic, qualitative research can help to identify key issues, generate hypotheses, and develop new theories.
  • Understanding complex phenomena: Qualitative research can be used to investigate complex social, cultural, or organizational phenomena that are difficult to measure quantitatively.
  • Investigating subjective experiences: Qualitative research is particularly useful for investigating the subjective experiences of individuals or groups, such as their attitudes, beliefs, values, or emotions.
  • Conducting formative research: Qualitative research can be used in the early stages of a research project to develop research questions, identify potential research participants, and refine research methods.
  • Evaluating interventions or programs: Qualitative research can be used to evaluate the effectiveness of interventions or programs by collecting data on participants’ experiences, attitudes, and behaviors.

Characteristics of Qualitative Research

Qualitative research is characterized by several key features, including:

  • Focus on subjective experience: Qualitative research is concerned with understanding the subjective experiences, beliefs, and perspectives of individuals or groups in a particular context. Researchers aim to explore the meanings that people attach to their experiences and to understand the social and cultural factors that shape these meanings.
  • Use of open-ended questions: Qualitative research relies on open-ended questions that allow participants to provide detailed, in-depth responses. Researchers seek to elicit rich, descriptive data that can provide insights into participants’ experiences and perspectives.
  • Sampling based on purpose and diversity: Qualitative research often involves purposive sampling, in which participants are selected based on specific criteria related to the research question. Researchers may also seek to include participants with diverse experiences and perspectives to capture a range of viewpoints.
  • Data collection through multiple methods: Qualitative research typically involves the use of multiple data collection methods, such as in-depth interviews, focus groups, and observation. This allows researchers to gather rich, detailed data from multiple sources, which can provide a more complete picture of participants’ experiences and perspectives.
  • Inductive data analysis: Qualitative research relies on inductive data analysis, in which researchers develop theories and insights based on the data rather than testing pre-existing hypotheses. Researchers use coding and thematic analysis to identify patterns and themes in the data and to develop theories and explanations based on these patterns.
  • Emphasis on researcher reflexivity: Qualitative research recognizes the importance of the researcher’s role in shaping the research process and outcomes. Researchers are encouraged to reflect on their own biases and assumptions and to be transparent about their role in the research process.

Advantages of Qualitative Research

Qualitative research offers several advantages over other research methods, including:

  • Depth and detail: Qualitative research allows researchers to gather rich, detailed data that provides a deeper understanding of complex social phenomena. Through in-depth interviews, focus groups, and observation, researchers can gather detailed information about participants’ experiences and perspectives that may be missed by other research methods.
  • Flexibility: Qualitative research is a flexible approach that allows researchers to adapt their methods to the research question and context. Researchers can adjust their research methods in real-time to gather more information or explore unexpected findings.
  • Contextual understanding: Qualitative research is well-suited to exploring the social and cultural context in which individuals or groups are situated. Researchers can gather information about cultural norms, social structures, and historical events that may influence participants’ experiences and perspectives.
  • Participant perspective: Qualitative research prioritizes the perspective of participants, allowing researchers to explore subjective experiences and understand the meanings that participants attach to their experiences.
  • Theory development: Qualitative research can contribute to the development of new theories and insights about complex social phenomena. By gathering rich, detailed data and using inductive data analysis, researchers can develop new theories and explanations that may challenge existing understandings.
  • Validity: Qualitative research can offer high validity by using multiple data collection methods, purposive and diverse sampling, and researcher reflexivity. This can help ensure that findings are credible and trustworthy.

Limitations of Qualitative Research

Qualitative research also has some limitations, including:

  • Subjectivity: Qualitative research relies on the subjective interpretation of researchers, which can introduce bias into the research process. The researcher’s perspective, beliefs, and experiences can influence the way data is collected, analyzed, and interpreted.
  • Limited generalizability: Qualitative research typically involves small, purposive samples that may not be representative of larger populations. This limits the generalizability of findings to other contexts or populations.
  • Time-consuming: Qualitative research can be a time-consuming process, requiring significant resources for data collection, analysis, and interpretation.
  • Resource-intensive: Qualitative research may require more resources than other research methods, including specialized training for researchers, specialized software for data analysis, and transcription services.
  • Limited reliability: Qualitative research may be less reliable than quantitative research, as it relies on the subjective interpretation of researchers. This can make it difficult to replicate findings or compare results across different studies.
  • Ethics and confidentiality: Qualitative research involves collecting sensitive information from participants, which raises ethical concerns about confidentiality and informed consent. Researchers must take care to protect the privacy and confidentiality of participants and obtain informed consent.

About the author

Muhammad Hassan, Researcher, Academic Writer, Web developer



Research Methods -- Quantitative, Qualitative, and More: Overview

  • Quantitative Research
  • Qualitative Research
  • Data Science Methods (Machine Learning, AI, Big Data)
  • Text Mining and Computational Text Analysis
  • Evidence Synthesis/Systematic Reviews
  • Get Data, Get Help!

About Research Methods

This guide provides an overview of research methods, how to choose and use them, and supports and resources at UC Berkeley. 

As Patten and Newhart note in the book Understanding Research Methods , "Research methods are the building blocks of the scientific enterprise. They are the "how" for building systematic knowledge. The accumulation of knowledge through research is by its nature a collective endeavor. Each well-designed study provides evidence that may support, amend, refute, or deepen the understanding of existing knowledge...Decisions are important throughout the practice of research and are designed to help researchers collect evidence that includes the full spectrum of the phenomenon under study, to maintain logical rules, and to mitigate or account for possible sources of bias. In many ways, learning research methods is learning how to see and make these decisions."

The choice of methods varies by discipline, by the kind of phenomenon being studied and the data being used to study it, by the technology available, and more.  This guide is an introduction, but if you don't see what you need here, always contact your subject librarian, and/or take a look to see if there's a library research guide that will answer your question. 

Suggestions for changes and additions to this guide are welcome! 

START HERE: SAGE Research Methods

Without question, the most comprehensive resource available from the library is SAGE Research Methods. An online guide to this one-stop shopping collection is available, and some helpful links are below:

  • SAGE Research Methods
  • Little Green Books  (Quantitative Methods)
  • Little Blue Books  (Qualitative Methods)
  • Dictionaries and Encyclopedias  
  • Case studies of real research projects
  • Sample datasets for hands-on practice
  • Streaming video--see methods come to life
  • Methodspace -- a community for researchers
  • SAGE Research Methods Course Mapping

Library Data Services at UC Berkeley

Library Data Services Program and Digital Scholarship Services

The LDSP offers a variety of services and tools! From this link, check out pages for each of the following topics: discovering data, managing data, collecting data, GIS data, text data mining, publishing data, digital scholarship, open science, and the Research Data Management Program.

Be sure also to check out the visual guide to where to seek assistance on campus with any research question you may have!

Library GIS Services

Other Data Services at Berkeley

  • D-Lab: supports Berkeley faculty, staff, and graduate students with research in data-intensive social science, including a wide range of training and workshop offerings.
  • Dryad: a simple self-service tool for researchers to use in publishing their datasets. It provides tools for the effective publication of and access to research data.
  • Geospatial Innovation Facility (GIF): provides leadership and training across a broad array of integrated mapping technologies on campus.
  • Research Data Management: a UC Berkeley guide and consulting service for research data management issues.

General Research Methods Resources

Here are some general resources for assistance:

  • Assistance from ICPSR (must create an account to access): Getting Help with Data, and Resources for Students
  • Wiley Stats Ref for background information on statistics topics
  • Survey Documentation and Analysis (SDA): a program for easy web-based analysis of survey data.

Consultants

  • D-Lab/Data Science Discovery Consultants Request help with your research project from peer consultants.
  • Research data (RDM) consulting Meet with RDM consultants before designing the data security, storage, and sharing aspects of your qualitative project.
  • Statistics Department Consulting Services A service in which advanced graduate students, under faculty supervision, are available to consult during specified hours in the Fall and Spring semesters.

Related Resources

  • IRB / CPHS: Qualitative research projects with human subjects often require that you go through an ethics review.
  • OURS (Office of Undergraduate Research and Scholarships): OURS supports undergraduates who want to embark on research projects and assistantships. In particular, check out their "Getting Started in Research" workshops.
  • Sponsored Projects: Sponsored Projects works with researchers applying for major external grants.
  • Last Updated: Apr 25, 2024 11:09 AM
  • URL: https://guides.lib.berkeley.edu/researchmethods

Conducting Qualitative Research Online: Challenges and Solutions

  • Practical Application
  • Open access
  • Published: 11 June 2021
  • Volume 14 , pages 711–718, ( 2021 )


  • Stacy M. Carter   ORCID: orcid.org/0000-0003-2617-8694 1 ,
  • Patti Shih   ORCID: orcid.org/0000-0002-9628-7987 1 ,
  • Jane Williams   ORCID: orcid.org/0000-0002-0142-0299 2 ,
  • Chris Degeling   ORCID: orcid.org/0000-0003-4279-3443 1 &
  • Julie Mooney-Somers   ORCID: orcid.org/0000-0003-4047-3403 2  

33k Accesses

66 Citations

14 Altmetric


What ways of thinking and concrete strategies can assist qualitative health researchers to transition their research practice to online environments? We propose that researchers should foreground inclusion when designing online qualitative research, and suggest ethical, technological and social adaptations required to move data collection online. Existing research shows that this move can aid in meeting recruitment targets, but can also reduce the richness of the data generated, as well as how much participants enjoy participating, and the ability to achieve consensus in groups. Mindful and consultative choices are required to prevent these problems. To adapt to ethical challenges, researchers should especially consider participant privacy, and ways to build rapport and show appropriate care for participants, including protocols for dealing with distress or disengagement, managing data, and supporting consent. To adapt to technological challenges, research plans should choose between online modalities and platforms based on a clear understanding of their particular affordances and the implications of these. Finally, successful research in virtual social environments requires new protocols for engagement before data collection, attention to group numbers and dynamics, altered moderator teams and roles, and new logistical tasks for researchers. The increasing centrality of online environments to everyday life is driving traditional qualitative research methods to online environments and generating new qualitative research methods that respond to the particularities of online worlds. With strong design principles and attention to ethical, technical and social challenges, online methods can make a significant contribution to qualitative research in health.

Qualitative research can thrive in online modalities if supported by sound methodology and carefully adapted methods.

In moving to online data collection, equity must be a central consideration; online modalities may increase opportunities to participate for some and exclude others.

Different technological platforms offer different strengths; adaptation is required to manage the virtual social environment and address particular ethical challenges in online engagement.

Interviewer: Now, you were just about to say something when you froze.

Participant: Yeah …

Interviewer: Oh, now you’re freezing again.

Participant: Let me just close this other …

Interviewer: No, I’ve got you again, that’s, you’re coming back.

Participant: Ok, good, I just closed a window I had open.

Interviewer: Just give me one second and I’ll just shout upstairs at my daughter who is probably watching something.

Participant: Ok.

Interviewer: (has conversation with daughter) Sorry about that.

Participant: That’s ok. It’s part of, part of the world we live in.

Interviewer: It is. The cat’s been trying to come and have a look at you as well, but I’ve managed to keep her down.

Excerpt from qualitative interview conducted on a videoconferencing platform in 2020

Many readers will recognise the encounter above, and may have had interactions like it, attempting to balance the personal and the professional, attempting to transpose rules and norms of one milieu into another, attempting to connect against distraction and technological difficulties. These issues are perhaps more acute for research interactions—like the one above—than for everyday interactions. In research, the need to generate meaningful findings, the requirements of human research ethics, and limits of time and resources increase the stakes. The challenge is arguably greater still for qualitative research, where participants are asked to speak in depth about often very personal, private or challenging issues, and rapport and support for participants can be critical to success. Our aim here is to provide practical assistance to help qualitative researchers and participants succeed in this online terrain.

Qualitative methods are a natural fit for patient-centred outcomes and health preferences research, as they allow the study of participants’ experiences, choices and actions from the participant’s perspective. While qualitative methods are often used as a preliminary step in the development of quantitative instruments or studies [ 1 ], qualitative studies provide complex and patient-centred insights in their own right [ 2 ], and are now commonly synthesised to inform health policy, health services, and health technology assessment [ 3 ]. Qualitative health researchers are increasingly turning to online platforms to collect data, whether in response to social distancing requirements during the COVID-19 pandemic [ 4 ], to research online worlds as unique cultures and communication environments [ 5 ], or because innovative methods can achieve novel aims [ 6 ]. Moving research online is not a simple ‘like-for-like’ transfer, however; the transition can be a disorienting struggle even for experienced researchers.

Qualitative research is diverse and heterogeneous, with different underpinning assumptions, aims, methods for data collection and analysis, and reporting styles [ 7 ]. We will concentrate only on interview and focus group methods because they are frequently used in patient preferences research. The online environment is reinventing these methods, with adaptations including online focus groups, email interviewing, Instant Messaging (IM) interviewing, and the use of internet-based video interviewing [ 8 ]. There are many other qualitative methods that can be used in the online environment, including netnography [ 9 ], online visual research methods [ 10 ], and social media research methods [ 5 ], but these are beyond the scope of this paper.

Qualitative researchers have adapted repeatedly to technological change, both in the mode of engagement with participants, and the collection, transformation and storage of data. A longitudinal view reveals multiple moments of technological recalibration for qualitative researchers. For some time, researchers accustomed to face-to-face interviews asked whether telephone interviews were acceptable, but they are now both commonplace and recognised as highly suitable for interaction with certain participants, e.g. with elites [ 11 , 12 ]. As natural language processing improves and data storage and processing speed increases, human transcribers are being replaced with automated transcription software, and transcripts with clipping and coding digital recordings directly [ 13 ]. These changes have not been linear—technologies are reinvented and recombined over time—but change and technological adaptation have been a constant. In each of these transformations, new issues arise that need to be considered.
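
As one concrete illustration of this shift, the sketch below shows how an interview recording might be transcribed automatically with the open-source Whisper speech-to-text model. This is a generic, hedged example rather than a workflow used in any of the studies cited here; the file name is a placeholder for a consented, locally stored recording.

```python
# Illustrative sketch of automated interview transcription using the
# open-source openai-whisper package (pip install openai-whisper).
# "interview_01.mp3" is a placeholder for a consented, locally stored recording.
import whisper

model = whisper.load_model("base")            # smaller models run on a laptop CPU
result = model.transcribe("interview_01.mp3")

# Save a plain-text transcript for checking and later coding.
with open("interview_01_transcript.txt", "w", encoding="utf-8") as f:
    f.write(result["text"])

# Per-segment timestamps support clipping and coding recordings directly.
for segment in result["segments"]:
    print(f"[{segment['start']:.1f}s - {segment['end']:.1f}s] {segment['text']}")
```

Automated output of this kind still needs to be checked manually against the audio, for the same reasons that human-produced transcripts are verified.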

The authors are experienced qualitative researchers who share an interest in methodology, methods and research ethics. This paper emerged through discussion of issues that had arisen in our online experience to date and potential issues we could foresee given the different topics and specific populations we research, along with looking to the literature for answers to questions we faced in our practice. We are writing in early 2021, when social distancing requirements in many countries have greatly accelerated a nascent move towards greater online data collection. As the qualitative research community continues to come to terms with these changes, we consider the opportunities and challenges of online data collection that pandemic conditions have made evident.

1 Doing Qualitative Research in a Virtual Environment: Opportunities, Challenges and Solutions

A recent scoping review compared face-to-face with online research studies of health and illness experiences. The authors concluded that while online methods appear to increase the likelihood of obtaining the desired sample, responses are typically shorter, less contextual information is obtained, and relational satisfaction and consensus development are lower [ 14 ]. This does not mean that online methods are inferior, but it does mean that researchers should deliberately plan to mitigate their potential weaknesses.

In the following sections, we consider a set of interconnected issues, taking a lead from Davies and colleagues’ scoping review [ 14 ]. First, we will argue that while the online environment may facilitate participation, the move online can enable or hinder inclusion. We will then consider the ethical, technological and social adaptations required in online data collection to, among other things, maximise data quality and care for participants. We note as a background premise that usual qualitative study design considerations—the need for sound aims, research questions, recruitment and sampling strategies, interview or focus group guides and analysis strategies—still hold. We will focus on adaptation of procedures, with sound research design principles assumed [ 15 , 16 ].

2 Moving Online Can Enable or Hinder Inclusion

Unjustly excluding people because of their technological or material circumstances is an old research ethics problem that takes a new form in online research, potentially altering the accessibility of research for participants in positive or negative ways. Transitioning from face-to-face to online data collection can broaden access by lifting geographic limits. Online data collection can reduce the burdens of time and cost of participating in research. Participants do not have to travel or host a researcher, and it may be more convenient to conduct interviews and focus groups outside of working hours. These adjustments are likely to make participation easier or more appealing for some groups that previously faced practical limitations to taking part in qualitative research. For example, people with limited mobility, as well as caregivers, may find online participation from home inviting because they do not need to make the same sorts of accommodations that can stand in the way of in-person research [ 17 ].

Conversely, online data collection may also limit participation only to those who have a web-enabled device, and sometimes authority to install software. Online video platforms require a good-quality internet connection and relatively high data usage. People without access to fast and reliable internet, as well as people with limited access to data, may find it difficult or less appealing to participate. Online data collection risks excluding, or creating additional burdens and considerable stress for, participants who do not feel competent in the use of technology. Finally, not all technology can accommodate the needs of participants living with specific disabilities.

Researchers can mitigate these barriers to participation and inclusion through mindful and consultative technological and logistical choices. For those with limited access to technology, video conferencing platforms may be inappropriate; inclusion may require conducting an interview without video (audio only) or via telephone to reduce the need for a high-quality internet connection. Researchers may also consider methods such as email interviewing or IM interviewing, which offer accessibility benefits (e.g. more time for participant reflection, less data-intensive technology) but also disadvantages (e.g. requires sufficient literacy) [ 8 ]. Researchers can provide participants with data credit vouchers so that they can participate in video calls without the burden of additional data costs. Different platforms offer different participation options for people with disabilities (Table 1), and accessibility options are improving. Accessibility experts and advocacy groups are a good source of information (e.g. [ 18 , 19 ]). As in face-to-face data collection, specialist advice, including from participants themselves, can assist inclusion of people who use augmentative or alternative communication devices. Researchers should also be flexible with, and take the lead from, participants to maximise inclusion, as participants may have identified or developed solutions that make video conferencing platforms more accessible for them. People with impaired hearing, for example, may find it difficult to rely on lip-reading in video calls, but could participate via a synchronous text chat interview, or on a video platform with the right speech-to-text captioning tool, or with a sign language interpreter pinned next to the main speaker on screen [ 20 , 21 , 22 ].

Traditionally, meeting in person has helped shape sampling and recruitment strategies for studies. The location of the research team has often determined the geographic parameters of the study population because face-to-face interviews and focus groups have been the norm for data collection. Online platforms potentially eradicate some geographic barriers and may prompt researchers to think differently about their research questions. While it may be tempting to substantially widen sampling and recruitment because online methods have made it possible, researchers should remain mindful of the importance of methodological concerns. Study populations are shaped by considerations other than practicality. Researchers must be clear about why they have identified the population of interest and how that sample will help them answer their study questions. It may be that geographic location or experience of a particular healthcare system remains an important factor to capture.

3 Practical Ways to Adapt to Technological, Social and Ethical Challenges in Online Research

Successful online data collection requires three kinds of adaptation: to ethical challenges, to a new technological environment, and to a new social environment. These are interconnected but for clarity we deal with each of them in turn below.

3.1 Adapting to Ethical Challenges

In addition to usual research ethics considerations, online data collection raises special challenges. For example, online data collection creates different privacy risks. Online engagement with video means a researcher (and if a focus group, other group members) can potentially see and hear a participant’s domestic space. There are other privacy considerations—some communication platforms require a participant profile, including name, date of birth, email address and/or mobile phone number; participants may not want a profile, or if they have one they may not want to disclose it. Supporting people to participate anonymously may be vital for some populations/research topics. Participants also need access to a quiet and private space. For example, participants who rely on public libraries for internet access are unlikely to be able to do this with privacy.

During in-person research, we use ordinary actions to show our presence and care, or to create rapport: small talk, sharing a beverage, handing a tissue to a distressed participant, closing an encounter by walking a participant out of the building. Online data collection means the loss of this embodied care. Researchers need to develop strategies to establish rapport or comfort a distressed participant; these protocols should be included in ethics applications. We suggest the following adaptations to address these and other important ethical concerns.

Develop a protocol for dealing with distress or disengagement. Such protocols are common in research with vulnerable participants or on sensitive topics; we recommend them for all online qualitative research. Develop clear strategies for how you will deal with an interview participant who becomes visibly distressed or unresponsive, moves away from the screen, shuts down the platform, does not return from an agreed comfort break, or where you witness problematic interactions with other people in the participant’s setting. A similar protocol is advisable for focus groups to deal with distress, or with abuse or discriminatory actions between participants. Ensure you have an alternative means to contact each participant and let participants know in advance under what circumstances you will contact them via this alternative channel.

Ensure video and/or audio recordings are stored appropriately. Researchers should check where an online platform stores recordings and review its privacy policy. Using a platform’s cloud service can contravene local privacy legislation (e.g. the European Union’s General Data Protection Regulation [GDPR]) or ethical approval; choose a platform that allows researchers to store recordings on their own computer or institutional cloud service. For sensitive research topics, recording via an offline audio device (e.g. digital recorder) provides greater security.

Decide how consent will be recorded. Consent processes can be less straightforward for online research; several methods are available, each with benefits and disadvantages. Asking participants to return a written consent form prior to data collection places a burden on participants and requires a printer and scanner/smartphone. Online platforms (e.g. DocuSign) can be efficient but raise participant access, competency and data security concerns. Adobe Acrobat offers several methods, including allowing participants to ‘sign’ via a smartphone screen, or to print and scan. Researchers can seek and record verbal consent (if acceptable to their ethics review board); this may be preferable, both for its lower burden on participants and because it encourages the participant to ask questions before participating. Consider doing this in an introductory interaction (before the data collection event), especially for focus groups; this allows more attention to individual questions, and greater confidentiality. Flexibility is important, as methods should suit participants’ comfort and capabilities.

Address online data collection challenges in ethics applications. Ethics review boards will vary in their understanding of and tolerance for online data collection. As with face-to-face research, anticipate and address concerns: provide a logic for your study design, and explain how the chosen data collection method(s) and platform meet the needs of the participants and the research topic. Be transparent about challenges and outline specific strategies for enhancing participation and offsetting risk. If your online research engages participants in new and unfamiliar locations, investigate whether your local ethics board approval will be sufficient to work in that context. Seeking advice from ethics review boards in advance can reveal common concerns and offer solutions.

3.2 Adapting to Technological Challenges: Hardware and Software

Planning ahead. As online research events rely on the functionality and management of technology, both hardware and software, technological logistics should be central to research planning. Before commencing data collection, researchers should ensure that prospective participants have (1) access to hardware (e.g. phone, tablet, computer); (2) a reliable internet connection; (3) familiarity with the chosen platform; and (4) adequate support to respond to technological problems. Participants may need technical coaching and support before data collection occurs.

Affordances that facilitate desired social interactions. Different online communication platforms have different affordances [ 4 ], and these functionalities enable, for example, different degrees of interactivity, data recording, confidentiality and privacy, and security (Box 1 ). Although platforms would ideally be chosen to suit the participants, in some instances a researcher’s institution or local legislation will dictate the use of certain platforms for reasons including licensing or security. Issues to consider in selecting and managing the technological aspects of online research include the following.

Microphone and camera control: allows either, or both, participant or host to manually control their own or others’ cameras and microphones, helpful for managing background noise or speaking order if required.

Chat functions: allows short textual comments or questions to be posted by participants, usually in a sidebar from the main screen, and usually without disrupting the verbal conversation.

Breakout rooms: small subgroup discussions that can be separated out from the main meeting; host/s can join in and out, for example to answer or ask questions, or to facilitate discussions. Some platforms can automatically assign participants into rooms, with a mandatory timed finish, and automatically rejoin participants back into the main meeting.

Participant polling: short surveys or votes to gauge participant sentiments or show preferences.

Screen sharing: allows any participant to share the contents of their own screen, which is useful for sharing digital images or other materials the participant might want to introduce to the discussion.

Screen annotation: interactive screen-based textual and drawing tools, enabling participants to visually mark the content shown on screen.

Live subtitles and captioning: an additional service, often requiring subscription, that enables live subtitling of video calls, using a ‘speech to text’ recognition software. This may aid the participation of people living with hearing impairment [ 19 ].

Anonymity of participants. If anonymity of participants is important, choose a platform that can easily control username displays and prepare participants to control how they present themselves. Some platforms display both first and surnames by default when entering an online meeting, so ensure participants know how to edit their display name. Avoid online platforms that require an account sign-up and automatically display the user’s account name or contact phone number, as this compromises privacy and confidentiality. As participants may join the virtual research from their own homes or private offices, pre-research coaching should include the option of using virtual backgrounds for greater privacy protection.

Recording, screenshots and transcription. Certain platforms offer recording of online interactions and transcription of audio data. Be sure to check how and where these data files will be stored and secured (see ‘Adapting to Ethical Challenges’ section). A screenshot function allows anyone accessing the online event to take a photograph of the screen. This can be a useful tool in research, but it also allows participants to take recordings and screenshots without the knowledge of researchers and others. Consent for recording should be discussed with everyone taking part prior to commencing any online data collection activities, recording should be turned off for participants, and participants should be instructed not to make their own offline recordings.

Manually controlled or password entry. Controlled entry by the host usually comes in the form of a ‘waiting room’, whereby the host manually admits participants. This gives hosts a greater degree of control but also requires more time and attention, particularly for larger groups. Password entry allows anyone with the password to enter the meeting automatically and may save time. Many research institutions and Human Research Ethics Committees (HRECs) already require password protection for online research.

Box 1 Platform functions checklist

When choosing an appropriate platform, check these specific technological affordances against the needs and suitability for your research method and participants (a brief illustrative sketch follows the checklist):

For managing privacy, confidentiality and security of the participants and the research space:

✓ Password entry

✓ Admission and removal of participants

✓ Username display control

✓ Virtual background

For facilitating effective social interactions online:

✓ Microphone and camera control

✓ Chat functions

✓ Breakout rooms

✓ Participant polling

✓ Screen sharing

✓ Screen annotation

For managing data collection and storage:

✓ Built-in video and audio recording

✓ Subtitles and captioning

✓ Secure storage of recorded data

✓ Screenshot
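
As a purely illustrative sketch, and not part of the original guidance, the checklist above can also be treated as structured data so that candidate platforms are screened against the affordances a particular study requires. The platform names and feature sets below are invented placeholders, not assessments of real products.

```python
# Illustrative only: encode the Box 1 affordances as data and screen
# hypothetical platforms against the features a study requires.
REQUIRED_FEATURES = {
    "password_entry",
    "username_display_control",
    "breakout_rooms",
    "secure_recording_storage",
}

# Placeholder feature sets; in practice these would come from platform documentation.
candidate_platforms = {
    "platform_a": {"password_entry", "username_display_control", "breakout_rooms",
                   "secure_recording_storage", "participant_polling"},
    "platform_b": {"password_entry", "chat_functions", "screen_sharing"},
}

for name, features in candidate_platforms.items():
    missing = REQUIRED_FEATURES - features
    verdict = "meets requirements" if not missing else "missing: " + ", ".join(sorted(missing))
    print(f"{name}: {verdict}")
```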

3.3 Adapting to Social Difference: Knowing the Virtual Social Environment and Working with It

Compared with face-to-face research settings, researchers will have less control over potential interruptions to online data collection activities, as they cannot be physically present to offer alternative arrangements or interventions. Some participants may be practiced in online interactions as part of their daily work or social routine, while others will not [ 23 ]. Being prepared to manage interruptions, unpredictability and diversity of comfort level with online interactions is crucial. Below we suggest some adaptations to manage the social dimensions of online research.

Pre-research briefing/check-in. Conducting a pre-research briefing can help participants be informed about what to expect and ensure they are comfortable using the online technologies and platforms. If you are working with participants who are vulnerable, have challenges in communicating, or are not familiar with using online technologies, supporting their communication and technology-use needs before data collection is crucial. This can also help build rapport to enhance participants’ relational satisfaction with participation.

Determining numbers in a focus group. Compared with face-to-face research, online group interactions demand more cognitive effort for both moderators and participants [ 23 ]. Online interactions can also have a slower flow due to minor lags in screen interactions, which tend to worsen as the number of participants increases. Maximum numbers will likely be smaller than in face-to-face interactions; we recommend four to six participants for online focus groups. The goal is not only to ensure enough ‘energy’ in the room to sustain interaction but also to make facilitation manageable and the experience more enjoyable for participants.

Manage the energy in the ‘room’. Online focus groups and interviews require more than facilitating the content and flow of the discussion. Focused social interactions between people on a research topic, particularly with unfamiliar others, are mentally demanding. Ways to manage this include slowing down the speed of the conversation with slightly longer pauses between sentences or questions, and taking shorter breaks more frequently if a focus group runs for more than an hour. In our experience, online group modalities can encourage participants to take discrete turns rather than interacting in a dynamic flow; this may be offset to some extent by smaller group size and less intrusive moderation that creates more space for participant talk.

Use assistant moderators and make them co-hosts of the online call. Assistants can help manage the technology while remaining muted with the camera off in the background. This can reduce cognitive burden for the moderator, allowing greater focus on the conversation. Ensure the assistant moderator role is explained to participants at the start of research events.

Designate personnel for emotional support. In addition to an assistant moderator, a ‘runner’ or research assistant can act as a point of support for participants in difficulty. The role of this person should also be explained to all participants. Some participants may wish to access support more discreetly, and how this can be done should also be made clear.

Establish a culturally safe research space. In any research, whether face-to-face or online, participants should feel culturally safe [ 24 ]. Managing the cultural safety of online interactions, particularly in group research, may sometimes be more challenging because visual cues that threaten cultural safety may be more difficult to read and respond to. Moderators need to establish ground rules early to set the tone and expectations of the room, and be firm and decisive in using microphone control to temporarily mute disrespectful participants or, in unresolvable situations, have an assistant remove them. Check that the selected technological platforms will allow the host to eject or temporarily mute a participant if necessary. Assistant moderators can also keep track of chat room interactions to help manage any challenging circumstances. While some online platforms (e.g. Zoom) can facilitate the provision of language interpretation via simultaneous audio channels, we note that ensuring cultural safety requires more than interpretation, and that adding additional channels adds technological and interpersonal complexity.

Manage microphones and background noise. While asking participants to mute their microphones can minimise background noise, having to turn the microphone on and off during interactions will also interrupt the flow of interactions. To maximise participation, leaving microphones on is recommended, despite the trade-off with background noise, which can interfere with data quality and the experience of other participants. Asking participants to do their best to minimise background noise, or asking an assistant moderator to mute individual participants if background noise becomes problematic, may help manage this. Discuss the preferred arrangement with participants at the start of the research event, including when and if microphones should be muted, and the best way to manage when to speak.

Have a back-up plan. Sometimes technology goes wrong (computers crash, hardware malfunctions, internet connections go down), either halting the research or producing inaudible content. We have already considered the need for a clear, agreed back-up plan to manage distress and cultural safety; such a plan is also important for managing technical problems. Assistant moderators should hold a list of participants’ contact phone numbers and a clear agreement with participants on when their contact number will be used. Moderators should be decisive about when to abandon the online platform and move to the back-up plan.

Manage unexpected intrusions. ‘Zoom bombers’ join online meetings uninvited. They can cause interruption and embarrassment, and they breach the privacy of a confidential research event. ‘Zoom bombing’ happens mostly when a link to the meeting is posted publicly and becomes searchable online. Use a private password for every online research event and consider using a waiting room for more control. Explicitly ask participants not to post events publicly or share links, and ensure passwords are secure and not publicised (e.g. on social media).

Conduct evaluation, and research online qualitative research. Consider including questions about the use of the technology and online platform in post-research evaluations; feedback can not only be used to refine design and processes in future research but can also support methodological research.

4 Conclusions

Online methods were once marginal in qualitative research, rarely considered a first choice for data collection, and restricted mostly to those researchers who were interested in online worlds such as social media or gaming cultures as a subject of study. This has radically shifted. At the time of writing, the COVID-19 pandemic has driven much of everyday life into virtual worlds, as families, workplaces and existing social networks try to sustain themselves in the face of the risk of transmission. Niels Bohr allegedly quipped that prediction is very difficult, especially about the future; allowing for this caveat, we cannot imagine a future where everyday life or research practices return exactly to a 2019 pre-pandemic status quo. Online qualitative research has opened up a world of options for accessing participants and creating new types of data, and this seems likely to continue to expand. Qualitative researchers, then, need to respond to these new circumstances and opportunities in methodologically and ethically sound ways.

This paper is limited by our knowledge, experience and reading. Others will have expertise that we do not (e.g. in assistive communication technologies). We are also writing in a particular moment—a pandemic-induced flight to online research. As online qualitative research becomes mainstream, it is likely that technologies, practices and understandings will mature. Because change is inevitable, we have focused on principles rather than fine details of different platforms. There may be scope for researchers to engage with platforms over time and demand technological innovations that will more easily serve the ethical and methodological needs of research practice. Researchers themselves will also generate new qualitative methods that respond to the particularities of online platforms and their affordances. If researchers remain focused on design principles and attend to ethical, technical and social challenges, online methods will continue to make a significant contribution to qualitative health preferences research.

Hollin IL, Craig BM, Coast J, Beusterien K, Vass C, DiSantostefano R, et al. Reporting formative qualitative research to support the development of quantitative preference study protocols and corresponding survey instruments: guidelines for authors and reviewers. Patient Patient Centered Outcomes Res. 2020;13(1):121–36.

Sikirica V, Flood E, Dietrich CN, Quintero J, Harpin V, Hodgkins P, et al. Unmet needs associated with attention-deficit/hyperactivity disorder in eight European countries as reported by caregivers and adolescents: results from qualitative research. Patient. 2015;8(3):269–81.

Hansen HP, Draborg E, Kristensen FB. Exploring qualitative research synthesis: the role of patients’ perspectives in health policy design and decision making. Patient. 2011;4(3):143–52.

Lobe B, Morgan D, Hoffman KA. Qualitative data collection in an era of social distancing. Int J Qual Methods. 2020;19:1609406920937875.

Sloan L, Quan-Haase A, editors. The SAGE handbook of social media research methods. London: SAGE; 2017.

Clarke V, Braun V, Gray D. Innovations in qualitative methods. In: Gough B, editor. The Palgrave handbook of critical social psychology. London: Palgrave Macmillan; 2017. p. 243–66.

Gooberman-Hill R. Qualitative approaches to understanding patient preferences. Patient. 2012;5(4):215–23.

Braun V, Clarke V, Gray D. Innovations in qualitative methods. In: Gough B, editor. The Palgrave handbook of critical social psychology. London: Palgrave Macmillan; 2017.

Elvey R, Voorhees J, Bailey S, Burns T, Hodgson D. GPs’ views of health policy changes: a qualitative ‘netnography’ study of UK general practice online magazine commentary. Br J Gen Pract 2018;68(671):e441–e448. https://doi.org/10.3399/bjgp18X696161

Pauwels L, Mannay D, editors. The SAGE handbook of visual research methods. London: SAGE; 2019.

Banner D. Telephone interviews in qualitative health research. Int J Qual Methods. 2011;10:507–8.

Sturges JE, Hanrahan KJ. Comparing telephone and face-to-face qualitative interviewing: a research note. Qual Res. 2004;4(1):107–18. https://doi.org/10.1177/1468794104041110

Tessier S. From field notes, to transcripts, to tape recordings: evolution or combination? Int J Qual Methods. 2012;11(4):446–60.

Davies L, LeClair KL, Bagley P, Blunt H, Hinton L, Ryan S, et al. Face-to-face compared with online collected accounts of health and illness experiences: a scoping review. Qual Health Res. 2020;30:2092–102.

Green J, Thorogood N. Qualitative methods for health research. London: Sage; 2017.

Mason J. Qualitative researching. 2nd ed. Los Angeles: SAGE Publications; 2007.

Hewson C. Qualitative approaches in internet-mediated research: opportunities, issues, possibilities. In: The Oxford handbook of qualitative research, 2nd edn. Oxford University Press, Oxford Handbooks Online; 2020.

Disability Advocacy Resource Unit. Accessible online meetings. https://www.daru.org.au/lesson/accessible-online-meetings . Accessed 22 Feb 2021

Highfield P. Subtitles for video calls—searching for the Holy Grail. Ideas for ears; 2021. https://www.ideasforears.org.uk/blog/subtitles-for-video-calls/ . Accessed 22 Feb 2021

Keast Q. I’m deaf, and this is what happens when I get on a Zoom call. Fast Company. 2020. https://www.fastcompany.com/90565930/im-deaf-and-this-is-what-happens-when-i-get-on-a-zoom-call . Accessed 22 Feb 2021

Tamarov M. Zoom addresses accessibility for deaf and hard of hearing. TechTarget. 2020. https://bit.ly/3cxtyjm . Accessed 22 Feb 2021

Happy meetings for everyone. Zoom Video Communications Inc.; 2020. https://zoom.us/accessibility . Accessed 22 Feb 2021

Nadler R. Understanding "Zoom fatigue": theorizing spatial dynamics as third skins in computer-mediated communication. Comput Compos. 2020;58:102613.

Neville S, Adams J, Cook C. Using internet-based approaches to collect qualitative data from vulnerable groups: reflections from the field. Contemp Nurse. 2016;52(6):657–68.

Acknowledgements

The authors would like to thank Dr. Bridget Haire for permission to use the interview excerpt, and Lucy Carolan for assistance with the submission process.

Author information

Authors and Affiliations

ACHEEV, School of Health and Society, Faculty of the Arts, Social Sciences and Humanities, University of Wollongong, Wollongong, NSW, 2522, Australia

Stacy M. Carter, Patti Shih & Chris Degeling

Sydney School of Public Health, The University of Sydney, Sydney, NSW, 2006, Australia

Jane Williams & Julie Mooney-Somers

Contributions

All authors conceptualised, wrote, read, and approved the final manuscript.

Corresponding author

Correspondence to Stacy M. Carter.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc/4.0/ .

About this article

Carter, S.M., Shih, P., Williams, J. et al. Conducting Qualitative Research Online: Challenges and Solutions. Patient 14, 711–718 (2021). https://doi.org/10.1007/s40271-021-00528-w

Accepted : 18 May 2021

Published : 11 June 2021

Issue Date : November 2021

DOI : https://doi.org/10.1007/s40271-021-00528-w

University Library, University of Illinois at Urbana-Champaign

Qualitative Data Analysis: Find Methods Examples

Locating Methods Examples

Find dissertations that cite a core methods work.

You can conduct a cited reference search in the ProQuest Dissertations & Theses database. 

  • Open access
  • Published: 03 June 2024

The use of evidence to guide decision-making during the COVID-19 pandemic: divergent perspectives from a qualitative case study in British Columbia, Canada

  • Laura Jane Brubacher   ORCID: orcid.org/0000-0003-2806-9539 1 , 2 ,
  • Chris Y. Lovato 1 ,
  • Veena Sriram 1 , 3 ,
  • Michael Cheng 1 &
  • Peter Berman 1  

Health Research Policy and Systems volume  22 , Article number:  66 ( 2024 ) Cite this article

144 Accesses

2 Altmetric

Metrics details

Background

The challenges of evidence-informed decision-making in a public health emergency have never been so notable as during the COVID-19 pandemic. Questions about the decision-making process, including what forms of evidence were used, and how evidence informed—or did not inform—policy have been debated.

Methods

We examined decision-makers' observations on evidence-use in early COVID-19 policy-making in British Columbia (BC), Canada, through a qualitative case study. From July 2021 to January 2022, we conducted 18 semi-structured key informant interviews with BC elected officials, provincial and regional-level health officials, and civil society actors involved in the public health response. The questions focused on: (1) the use of evidence in policy-making; (2) the interface between researchers and policy-makers; and (3) key challenges perceived by respondents as barriers to applying evidence to COVID-19 policy decisions. Data were analyzed thematically, using a constant comparative method. Framework analysis was also employed to generate analytic insights across stakeholder perspectives.

Results

Overall, while many actors’ impressions were that BC's early COVID-19 policy response was evidence-informed, an overarching theme was a lack of clarity and uncertainty as to what evidence was used and how it flowed into decision-making processes. Perspectives diverged on the relationship between 'government' and public health expertise, and whether or not public health actors had an independent voice in articulating evidence to inform pandemic governance. Respondents perceived a lack of coordination and continuity across data sources, and a lack of explicit guidelines on evidence-use in the decision-making process, which resulted in a sense of fragmentation. The tension between the processes involved in research and the need for rapid decision-making was perceived as a barrier to using evidence to inform policy.

Conclusions

Areas to be considered in planning for future emergencies include: information flow between policy-makers and researchers, coordination of data collection and use, and transparency as to how decisions are made—all of which reflect a need to improve communication. Based on our findings, clear mechanisms and processes for channeling varied forms of evidence into decision-making need to be identified, and doing so will strengthen preparedness for future public health crises.

The challenges of evidence-informed decision-making in a public health emergency have never been so salient as during the COVID-19 pandemic, given its unprecedented scale, rapidly evolving virology, and multitude of global information systems to gather, synthesize, and disseminate evidence on the SARS-CoV-2 virus and associated public health and social measures [ 1 , 2 , 3 ]. Early in the COVID-19 pandemic, rapid decision-making became central for governments globally as they grappled with crucial decisions for which there was limited evidence. Critical questions exist, in looking retrospectively at these decision-making processes and with an eye to strengthening future preparedness: Were decisions informed by 'evidence'? What forms of evidence were used, and how, by decision-makers? [ 4 , 5 , 6 ].

Scientific evidence, including primary research, epidemiologic research, and knowledge synthesis, is one among multiple competing influences that inform decision-making processes in an outbreak such as COVID-19 [ 7 ]. Indeed, the use of multiple forms of evidence has been particularly notable as it applies to COVID-19 policy-making. Emerging research has also documented the important influence of ‘non-scientific’ evidence such as specialized expertise and experience, contextual information, and level of available resources [ 8 , 9 , 10 ]. The COVID-19 pandemic has underscored the politics of evidence-use in policy-making [ 11 ]; what evidence is used, and how, can be unclear and shaped by political bias [ 4 , 5 ]. Moreover, while many governments have established scientific advisory boards, the perspectives of these advisors were reportedly largely absent from COVID-19 policy processes [ 6 ]. How evidence and public health policy interface—and intersect—is a complex question, particularly in the dynamic context of a public health emergency.

Within Canada, evidence-informed decision-making is a hallmark of the public health system and is endorsed by government [ 12 ]. In British Columbia (BC), Canada, during the early phases of COVID-19 (March–June 2020), provincial public health communication focused primarily on voluntary compliance with recommended public health and social measures, and on supporting those most affected by the pandemic. Later, the response shifted from voluntary compliance to mandatory enforceable government orders [ 13 ]. Like many other jurisdictions, the BC government's public messaging asserted that the province's approach to managing the COVID-19 pandemic and developing related policy was based specifically on scientific evidence. For example, in March 2021, in announcing changes to vaccination plans, Dr. Bonnie Henry, the Provincial Health Officer, stated, " This is science in action " [ 14 ]. As a public health expert with scientific voice, the Provincial Health Officer has been empowered to speak on behalf of the BC government across the COVID-19 pandemic progression. While this suggests BC is a jurisdiction that has institutionalized scientifically informed decision-making as a core tenet of effective public health crisis response, it remains unclear whether BC’s COVID-19 response could, in fact, be considered evidence-informed—particularly from the perspectives of those involved in pandemic decision-making and action. Moreover, if evidence-informed, what types of evidence were utilized and through what mechanisms, how did this evidence shape decision-making, and what challenges existed in moving evidence to policy and praxis in BC’s COVID-19 response?

The objectives of this study were: (1) to explore and characterize the perspectives of BC actors involved in the COVID-19 response with respect to evidence-use in COVID-19 decision-making; and (2) to identify opportunities for and barriers to evidence-informed decision-making in BC’s COVID-19 response, and more broadly. This inquiry may contribute to identifying opportunities for further strengthening the synthesis and application of evidence (considered broadly) to public health policy and decision-making, particularly in the context of future public health emergencies, both in British Columbia and other jurisdictions.

Study context

This qualitative study was conducted in the province of British Columbia (BC), Canada, a jurisdiction with a population of approximately five million people [ 15 ]. Within BC’s health sector, key actors involved in the policy response to COVID-19 included: elected officials, the BC Government’s Ministry of Health (MOH), the Provincial Health Services Authority (PHSA), the Office of the Provincial Health Officer (PHO), the BC Centre for Disease Control (BCCDC), and Medical Health Officers (MHOs) and Chief MHOs at regional and local levels.

Health research infrastructure within the province includes Michael Smith Health Research BC [ 16 ] and multiple post-secondary research and education institutions (e.g., The University of British Columbia). Unlike other provincial (e.g., Ontario) and international (e.g., UK) jurisdictions, BC did not establish an independent, formal scientific advisory panel or separate organizational structure for public health intelligence in COVID-19. That said, a Strategic Research Advisory Council was established, reporting to the MOH and PHO, to identify COVID-19 research gaps and commission needed research for use within the COVID-19 response [ 17 ].

This research was part of a multidisciplinary UBC case study investigating the upstream determinants of the COVID-19 response in British Columbia, particularly related to institutions, politics, and organizations and how these interfaced with, and affected, pandemic governance [ 18 ]. Ethics approval for this study was provided by the University of British Columbia (UBC)’s Institutional Research Ethics Board (Certificate #: H20-02136).

Data collection

From July 2021 to January 2022, 18 semi-structured key informant interviews were conducted with BC elected officials, provincial and regional-level health officials, and civil society actors (e.g., within non-profit research organizations, unions) (Table 1 ). Initially, respondents were purposively sampled, based on their involvement in the COVID-19 response and their positioning within the health system organizational structure. Snowball sampling was used to identify additional respondents, with the intent of representing a range of organizational roles and actor perspectives. Participants were recruited via email invitation and provided written informed consent to participate.

Interviews were conducted virtually using Zoom® videoconferencing, with the exception of one hybrid in-person/Zoom® interview. Each interview was approximately one hour in duration. One to two research team members led each interview. The full interview protocol focused on actors’ descriptions of decision-making processes across the COVID-19 pandemic progression, from January 2020 to the date of the interviews, and they were asked to identify key decision points (e.g., emergency declaration, business closures) [see Additional File 1 for the full semi-structured interview guide]. For this study, we used a subset of interview questions focused on evidence-use in the decision-making process, and the organizational structures or actors involved, in BC's early COVID-19 pandemic response (March–August 2020). Questions were adapted to be relevant to a respondent’s expertise and particular involvement in the response. ‘Evidence’ was left undefined and considered broadly by the research team (i.e., both ‘scientific’/research-based and ‘non-scientific’ inputs) within interview questions, and therefore at the discretion of the participant as to what inputs they perceived and described as ‘evidence’ that informed or did not inform pandemic decision-making. Interviews were audio-recorded over Zoom® with permission and transcribed using NVivo Release 1.5© software. Each transcript was then manually verified for accuracy by 1–2 members of the research team.

Data analysis

An inductive thematic analysis was conducted, using a constant comparative method, to explore points of divergence and convergence across interviews and stakeholder perspectives [ 19 ]. Transcripts were inductively coded in NVivo Release 1.5© software, which was used to further organize and consolidate codes, generate a parsimonious codebook to fit the data, and retrieve interview excerpts [ 20 ]. Framework analysis was also employed as an additional method for generating analytic insights across stakeholder perspectives and contributed to refining the overall coding [ 21 ]. Triangulation across respondents and analytic methods, as well as team collaboration in reviewing and refining the codebook, contributed to validity of the analysis [ 22 ].
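
The coding described here was done in NVivo; purely as a hypothetical illustration of the kind of output that a consolidated codebook and framework analysis can produce, the sketch below tabulates coded excerpts into a code-by-stakeholder-group matrix using pandas. The respondent labels, groups and codes are invented placeholders, not study data.

```python
# Hypothetical illustration only; the study itself coded transcripts in NVivo.
# This sketch shows how coded excerpts might be tabulated into a simple
# framework-style matrix (codes by stakeholder group) using pandas.
import pandas as pd

# Placeholder coded excerpts; in practice these would come from applying the
# codebook to interview transcripts.
coded_excerpts = pd.DataFrame([
    {"respondent": "R01", "group": "provincial official", "code": "evidence-informed decisions"},
    {"respondent": "R02", "group": "regional official",   "code": "skepticism about evidence-use"},
    {"respondent": "R03", "group": "civil society actor", "code": "lack of transparency"},
    {"respondent": "R01", "group": "provincial official", "code": "lack of transparency"},
])

# Framework-style matrix: how often each code was applied within each group.
matrix = pd.crosstab(coded_excerpts["code"], coded_excerpts["group"])
print(matrix)
```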

How did evidence inform early COVID-19 policy-making in BC?

Decision-makers described their perceptions on the use of evidence in policy-making; the interface between researchers and policy-makers; and specific barriers to evidence-use in policy-making within BC’s COVID-19 response. In discussing the use of evidence, respondents focused on ‘scientific’ evidence; however, they noted a lack of clarity as to how and what evidence flowed into decision-making. They also acknowledged that ‘scientific’ evidence was one of multiple factors influencing decisions. The themes described below reflect the narrative underlying their perspectives.

Perceptions of evidence-use

Multiple provincial actors generally expressed confidence or had an overall impression that decisions were evidence-based (IDI5,9), stating definitively that, "I don’t think there was a decision we made that wasn’t evidence-informed" (IDI9) and that "the science became a driver of decisions that were made" (IDI5). However, at the regional health authority level, one actor voiced skepticism that policy decisions were consistently informed by scientific evidence specifically, stating, "a lot of decisions [the PHO] made were in contrast to science and then shifted to be by the science" ( IDI6). The evolving nature of the available evidence and scientific understanding of the virus throughout the pandemic was acknowledged. For instance, one actor stated that, "I’ll say the response has been driven by the science; the science has been changing…from what I’ve seen, [it] has been a very science-based response" (IDI3).

Some actors narrowed in on certain policy decisions they believed were or were not evidence-informed. Policy decisions in 2020 that actors believed were directly informed by scientific data included the early decision to restrict informal, household gatherings; to keep schools open for in-person learning; to implement a business safety plan requirement across the province; and to delay the second vaccine dose for maximum efficacy. One provincial public health actor noted that an early 2020 decision made, within local jurisdictions, to close playgrounds was not based on scientific evidence. Further, the decision prompted public health decision-makers to centralize some decision-making to the provincial level, to address decisions being made 'on the ground' that were not based on scientific evidence (IDI16). Similarly, they added that the policy decision to require masking in schools was not based on scientific evidence; rather, "it's policy informed by the noise of your community." As parents and other groups within the community pushed for masking, this was "a policy decision to help schools stay open."

Early in the pandemic response, case data in local jurisdictions were reportedly used for monitoring and planning. These "numerator data" (IDI1), for instance case or hospitalization counts, were identified as being the primary mode of evidence used to inform decisions related to the implementation or easing of public health and social measures. The ability to generate epidemiological count data early in the pandemic due to efficient scaling up of PCR testing for COVID-19 was noted as a key advantage (IDI16). As the pandemic evolved in 2020, however, perspectives diverged in relation to the type of data that decision-makers relied on. For example, it was noted that BCCDC administered an online, voluntary survey to monitor unintended consequences of public health and social measures and inform targeted interventions. Opinions varied on whether this evidence was successfully applied in decision-making. One respondent emphasized this lack of application of evidence and perceived that public health orders were not informed by the level and type of evidence available, beyond case counts: "[In] a communicable disease crisis like a pandemic, the collateral impact slash damage is important and if you're going to be a public health institute, you actually have to bring those to the front, not just count cases" (IDI1).

There also existed some uncertainty and a perceived lack of transparency or clarity as to how or whether data analytic ‘entities’, such as BCCDC or research institutions, fed directly into decision-making. As a research actor shared, "I’m not sure that I know quite what all those channels really look like…I’m sure that there’s a lot of improvement that could be driven in terms of how we bring strong evidence to actual policy and practice" (IDI14). Another actor explicitly named the way information flowed into decision-making in the province as "organic" (IDI7). They also noted the lack of a formal, independent science advisory panel for BC’s COVID-19 response, which existed in other provincial and international jurisdictions. Relatedly, one regional health authority actor perceived that the committee that was convened to advise the province on research, and established for the purpose of applying research to the COVID-19 response, "should have focused more on knowledge translation, but too much time was spent commissioning research and asking what kinds of questions we needed to ask rather than looking at what was happening in other jurisdictions" (IDI6). Overall, multiple actors noted a lack of clarity around application of evidence and who is responsible for ensuring evidence is applied. As a BCCDC actor expressed, in relation to how to prevent transmission of COVID-19:

We probably knew most of the things that we needed to know about May of last year [2020]. So, to me, it’s not even what evidence you need to know about, but who’s responsible for making sure that you actually apply the evidence to the intervention? Because so many of our interventions have been driven by peer pressure and public expectation rather than what we know to be the case [scientifically] (IDI1).

Some described the significance of predictive disease modelling to understand the COVID-19 trajectory and inform decisions, as well as to demonstrate to the public the effectiveness of particular measures, which "help[ed] sustain our response" (IDI2). Others, however, perceived that "mathematical models were vastly overused [and] overvalued in decision-making around this pandemic" (IDI1) and that modellers stepped outside their realm of expertise in providing models and policy recommendations through the public media.

Overall, while many actors’ impressions were that the response was evidence-informed, an overarching theme was a lack of clarity and uncertainty with respect to how evidence actually flowed into decision-making processes, as well as what specific evidence was used and how. Participants noted various mechanisms created or already in place prior to COVID-19 that fed data into, and facilitated, decision-making. There was an acknowledgement that multiple forms of evidence—including scientific data, data on public perceptions, as well as public pressure—appeared to have influenced decision-making.

Interface between researchers and policy-makers

There was a general sense that the Ministry supported the use of scientific and research-based evidence specifically. Some actors identified particular Ministry personnel as especially amenable to research and focused on data to inform decisions and implementation. More broadly, one actor characterized the government-research interface as an amicable one, describing a "research-friendly government", and noted that the Ministry of Health (MOH), specifically, has a research strategy whereby "it’s literally within their bureaucracy to become a more evidence-informed organization" (IDI11). The MOH was also noted to have funded a research network intended to channel evidence into health policy and practice, which reported to the research side of the MOH.

Other actors perceived relatively limited engagement with the broader scientific community. Some perceived an overreliance on 'in-house expertise' or a "we can do that [ourselves] mentality" within government that precluded academic researchers’ involvement, as well as a sense of "not really always wanting to engage with academics to answer policy questions because they don’t necessarily see the value that comes" (IDI14). With respect to the role of research, an actor stated:

There needs to be a provincial dialogue around what evidence is and how it gets situated, because there’s been some tension around evidence being produced and not used or at least not used in the way that researchers think that it should be (IDI11).

Those involved in data analytics within the MOH acknowledged a challenge in making epidemiological data available to academic researchers, because "at the time, you’re just trying to get decisions made" (IDI7). Relatedly, a research actor described the rapid instigation of COVID-19 research and pivoting of academic research programs to respond to the pandemic, but perceived a slow uptake of these research efforts from the MOH and PHSA for decision-making and action. Nevertheless, they too acknowledged the challenge of using research evidence, specifically, in an evolving and dynamic pandemic:

I think we’ve got to be realistic about what research in a pandemic situation can realistically contribute within very short timelines. I mean, some of these decisions have to be made very quickly...they were intuitive decisions, I think some of them, rather than necessarily evidence-based decisions (IDI14).

Relatedly, perspectives diverged on the relationship between 'government' and public health expertise, and whether or not public health actors had an independent voice in articulating evidence to inform governance during the pandemic. Among Ministry stakeholders and those within the PHSA, the impression was largely that Ministry actors were relying on public health advice and scientific expertise. As one actor articulated, "[the] government actually respected and acknowledged and supported public health expertise" (IDI9). Others emphasized a "trust of the people who understood the problem" (IDI3), namely those within public health, and perceived that public health experts were enabled "to take a lead role in the health system, over politics" (IDI12). This perspective was less widely held within the public health sector itself. As one public health actor expressed, "politicians and bureaucrats waded into public health practice in a way that I don't think was appropriate" and, "in the context of a pandemic, it’s actually relatively challenging to bring true expert advice because there’s too many right now. Suddenly, everybody’s a public health expert, but especially bureaucrats and politicians." They went on to share that the independence of public health to speak and act, and for politicians to accept independent public health advice, needs to be protected and institutionalized as "core to good governance" (IDI1). Relatedly, an elected official linked this to the absence of a formal, independent science table to advise government and stated that, "I think we should have one established permanently. I think we need to recognize that politicians aren't always the best at discerning scientific evidence and how that should play into decision-making" (IDI15).

These results highlight the divergent perspectives participants had as to the interface between research and policy-making and a lack of understanding regarding process and roles.

Challenges in applying evidence to policy decisions

Perspectives converged with respect to the existence of numerous challenges with and barriers to applying evidence to health policy and decision-making. These related to the quality and breadth of available data, both in terms of absence and abundance. For instance, as one public health actor noted in relation to health policy-making, "you never have enough information. You always have an information shortage, so you're trying to make the best decisions you can in the absence of usually really clear information" (IDI8). On the other hand, as evidence emerged en masse across jurisdictions in the pandemic, there were challenges with synthesizing evidence in a timely fashion for 'real-time' decision-making. A regional health authority actor highlighted this challenge early in the COVID-19 pandemic and perceived that there was not a provincial group bringing new synthesized information to decision-makers on a daily basis (IDI6). Other challenges related to the complexity of the political-public health interface with respect to data and scientific expertise, which "gets debated and needs to be digested by the political process. And then decisions are made" (IDI5). This actor further expressed that debate among experts needs to be balanced with efficient crisis response, that one has to "cut the debate short. For the sake of expediency, you need to react."

It was observed that, in BC’s COVID-19 response, data was gathered from multiple sources with differing data collection procedures, and sometimes with conflicting results—for instance, 'health system data' analyzed by the PHSA and 'public health data' analyzed by the BCCDC. This was observed to present challenges from a political perspective in discerning "who’s actually getting the 'right' answers" (IDI7). An added layer of complexity was reportedly rooted in how to communicate such evidence to the public and "public trust in the numbers" (IDI7), particularly as public understanding of what evidence is, how it is developed, and why it changes, can influence public perceptions of governance.

Finally, as one actor from within the research sector noted, organizationally and governance-wise, the system was "not very well set up to actually use research evidence…if we need to do better at using evidence in practice, we need to fix some of those things. And we actually know what a lot of those things are." For example, "there’s no science framework for how organizations work within that" and "governments shy away from setting science policy" (IDI11). This challenge was framed as having both a macro-level dimension, in that higher-level leadership structures were observed not to incentivize the development and effective use of research among constituent organizations, and micro-level implications. From their perspective, without such policy frameworks researchers will struggle to obtain the necessary data-sharing agreements with health authorities and to navigate other barriers to conducting action-oriented research that informs policy and practice.

Similarly, a research actor perceived that the COVID-19 pandemic highlighted pre-existing fragmentation, "a pretty disjointed sort of enterprise" in how research is organized in the province:

I think pandemics need strong leadership and I think pandemic research response needed probably stronger leadership than it had. And I think that’s to do with [how] no one really knew who was in charge because no one really was given the role of being truly in charge of the research response (IDI14).

This individual underscored that, at the time of the interview, there were nearly 600 separate research projects being conducted in BC that focused on COVID-19. From their perspective, this reflected the need for more centralized direction to provide leadership, coordinate research efforts, and catalyze collaborations.

Overall, respondents perceived a lack of coordination and continuity across data sources, and a lack of explicit guidelines on evidence-use in the decision-making process, which resulted in a sense of fragmentation. The tension between the processes involved in research and the need for rapid decision-making was perceived as a barrier to using evidence to inform policy.

Discussion

This study explored the use of evidence to inform early COVID-19 decision-making within British Columbia, Canada, from the perspectives of decision-makers themselves. Findings underscore the complexity of synthesizing and applying evidence (most commonly ‘scientific’ or research-based evidence) to support public health policy in 'real-time', particularly in the context of public health crisis response. Despite a substantial and long-established literature on evidence-based clinical decision-making [ 23 , 24 ], understanding is more limited as to how public health crisis decision-making can be evidence-informed or evidence-based. By contributing to a growing global scholarship of retrospective examinations of COVID-19 decision-making processes [ 25 , 26 , 27 , 28 ], our study aimed to broaden this understanding and, thus, to support the strengthening of public health emergency preparedness in Canada and globally.

Specifically, with respect to evidence-based public health practice, we found that decision-makers clearly understood ‘evidence-based’ or ‘evidence-informed’ as meaning ‘scientific’ evidence, while acknowledging other forms of evidence, such as professional expertise and contextual information, as influencing factors. We identified four key points related to the process of evidence-use in BC's COVID-19 decision-making, each with broader implications as well:

Role Differences: The tensions we observed primarily related to a lack of clarity among the various agencies involved as to their respective roles and responsibilities in a public health emergency, a finding that aligns with research on evidence-use in prior pandemics in Canada [ 29 ]. Relatedly, scientists and policy-makers experienced challenges with communication and information-flow between one another and the public, which may reflect their different values and standards, framing of issues and goals, and language [ 30 ].

Barriers to Evidence-Use: Gaps in coordination and consistency in how data are collected across jurisdictions reportedly impeded the efficiency and timeliness of decision-making. Lancaster and Rhodes (2020) suggest that evidence itself should be treated as a process, rather than a commodity, in evidence-based practice [ 31 ]. Thus, shifting the dialogue from 'barriers to evidence use' to an approach that fosters dialogue across different forms of evidence and different actors in the process may be beneficial.

Use of Evidence in Public Health versus Medicine: Evidence-based public health can be conflated with the concept of evidence-based medicine, though the two are distinct in the type of information that needs to be considered. While ‘research evidence’ was the primary type of evidence used, other important types of evidence informed policy decisions in the COVID-19 public health emergency, for example, previous experience, public values, and preferences. This concurs with Brownson’s (2009) framework of factors driving decision-making in evidence-based public health [ 32 ], namely, that a balance between multiple factors, situated in a particular environmental and organizational context, shapes decision-making: 1) the best available research evidence; 2) clients'/population characteristics, state, needs, values, and preferences; and 3) resources, including a practitioner’s expertise. Thus, any evaluation of evidence-use in public health policy must take into consideration this multiplicity of factors at play, and draw on frameworks specific to public health [ 33 ]. Moreover, public health decision-making requires much more attention to behavioural factors and non-clinical impacts, which is distinct from the largely biology-focused lens of evidence-based medicine.

Transparency: Many participants emphasized a lack of explanation about why certain decisions were made and a lack of understanding about who was involved in decisions and how those decisions were made. This point was confirmed by a recent report on lessons learned in BC during the COVID-19 pandemic in which the authors describe "the desire to know more about the reasons why decisions were taken" as a "recurring theme" ([ 13 ], p. 66). These findings point to a need for clear and transparent mechanisms for channeling evidence, irrespective of the form used, into public health crisis decision-making.

Our findings also pointed to challenges associated with the infrastructure for utilizing research evidence in BC policy-making, specifically a need for more centralized authority on the research side of the public health emergency response to avoid duplication of efforts and more effectively synthesize findings for efficient use. Yet, as a participant questioned, what is the realistic role of research in a public health crisis response? Generally, most evidence used to inform crisis response measures is local epidemiological data or modelling data [ 7 ]. As corroborated by our findings, challenges exist in coordinating data collection and synthesis of these local data across jurisdictions to inform 'real-time' decision-making, let alone to feed into primary research studies [ 34 ].

On the other hand, as was the case in the COVID-19 pandemic, a 'high noise' research environment soon became another challenge as data became available to researchers. Various mechanisms have been established to try to address these challenges amid the COVID-19 pandemic, both to synthesize scientific evidence globally and to create channels for research evidence to support timely decision-making. For instance: 1) research networks and collaborations are working to coordinate research efforts (e.g., the COVID-END network [ 35 ]); 2) independent research panels or committees within jurisdictions provide scientific advice to inform decision-making; and 3) research foundations, funding agencies, and platforms for knowledge mobilization (e.g., academic journals) continue to streamline funding through targeted calls for COVID-19 research grant proposals, or for publication of COVID-19 research articles. While our findings describe the varied forms of evidence used in COVID-19 policy-making, beyond scientific evidence, they also point to the opportunity for further investments in infrastructure that coordinates, streamlines, and strengthens collaborations between health researchers and decision-makers, so that results are taken up into policy decisions in a timely manner.

Finally, in considering these findings, it is important to note the study's scope and limitations. We focused on evidence use in a single public health emergency, in a single province. Future research could expand this inquiry to a multi-site analysis of evidence-use in pandemic policy-making, with an eye to synthesizing lessons learned and best practices. Additionally, our sample of participants included only one elected official, so perspectives from this type of role were limited. The majority of participants were health officials who primarily referred to and discussed evidence as ‘scientific’ or research-based evidence. Further work could explore the facilitators of and barriers to evidence-use from the perspectives of elected officials and Ministry personnel, particularly with respect to the forms of evidence, considered broadly, and other varied inputs that shape decision-making in the public sphere. This could include a more in-depth examination of policy implementation and how the potential societal consequences of implementation factor into public health decision-making.

Conclusions

We found that the policy decisions made during the initial stages of the COVID-19 pandemic were perceived by actors in BC's response as informed by, though not always based on, scientific evidence specifically; decision-makers also considered other contextual factors and drew on prior pandemic-related experience to inform decision-making, as is common in evidence-based public health practice [ 32 ]. The respondents' experiences point to specific areas that need to be considered in planning for future public health emergencies, including information flow between policy-makers and researchers, coordination in how data are collected, and transparency in how decisions are made, all of which reflect a need to improve communication. Furthermore, shifting the discourse from evidence as a commodity to evidence-use as a process will help address barriers to evidence-use and increase understanding of the public health decision-making process as distinct from clinical medicine. Finally, there is a critical need for clear mechanisms that channel evidence (whether ‘scientific’, research-based, or otherwise) into health crisis decision-making, including identifying and communicating the decision-making process to those producing and synthesizing evidence. The COVID-19 pandemic experience is an opportunity to reflect on what needs to be done to build our public health systems for the future [ 36 , 37 ]. Understanding and responding to the complexities of decision-making as we move forward, particularly with respect to the synthesis and use of evidence, can contribute to strengthening preparedness for future public health emergencies.

Availability of data and materials

The data that support the findings of this study are not publicly available to maintain the confidentiality of research participants.

Notes

The terms 'evidence-informed' and 'evidence-based' decision-making are used throughout this paper, though they are distinct. The term 'evidence-informed' suggests that evidence is used and considered, though not necessarily solely determinative in decision-making [ 38 ].

The Provincial Health Services Authority (PHSA) works with the Ministry of Health (MOH) and regional health authorities to oversee the coordination and delivery of programs.

The Office of the Provincial Health Officer (PHO) has binding legal authority in the case of an emergency, and responsibility to monitor the health of BC’s population and provide independent advice to Ministers and public offices on public health issues.

The British Columbia Centre for Disease Control (BCCDC) is a program of the PHSA and provides provincial and national disease surveillance, detection, treatment, prevention, and consultation.

Abbreviations

BC: British Columbia
BCCDC: British Columbia Centre for Disease Control
COVID-19: Coronavirus Disease 2019
MHO: Medical Health Officer
MOH: Ministry of Health
PHO: Provincial Health Officer
PHSA: Provincial Health Services Authority
SARS-CoV-2: Severe Acute Respiratory Syndrome Coronavirus 2
UBC: University of British Columbia

Rubin O, Errett NA, Upshur R, Baekkeskov E. The challenges facing evidence-based decision making in the initial response to COVID-19. Scand J Public Health. 2021;49(7):790–6.


Williams GA, Ulla Díez SM, Figueras J, Lessof S, Ulla SM. Translating evidence into policy during the COVID-19 pandemic: bridging science and policy (and politics). Eurohealth (Lond). 2020;26(2):29–48.


Vickery J, Atkinson P, Lin L, Rubin O, Upshur R, Yeoh EK, et al. Challenges to evidence-informed decision-making in the context of pandemics: qualitative study of COVID-19 policy advisor perspectives. BMJ Glob Heal. 2022;7(4):1–10.

Piper J, Gomis B, Lee K. “Guided by science and evidence”? The politics of border management in Canada’s response to the COVID-19 pandemic. Front Polit Sci. 2022;4

Cairney P. The UK government’s COVID-19 policy: what does “Guided by the science” mean in practice? Front Polit Sci. 2021;3(March):1–14.

Colman E, Wanat M, Goossens H, Tonkin-Crine S, Anthierens S. Following the science? Views from scientists on government advisory boards during the COVID-19 pandemic: a qualitative interview study in five European countries. BMJ Glob Heal. 2021;6(9):1–11.

Salajan A, Tsolova S, Ciotti M, Suk JE. To what extent does evidence support decision making during infectious disease outbreaks? A scoping literature review. Evid Policy. 2020;16(3):453–75.


Cairney P. The UK government’s COVID-19 policy: assessing evidence-informed policy analysis in real time. Br Polit. 2021;16(1):90–116.

Lancaster K, Rhodes T, Rosengarten M. Making evidence and policy in public health emergencies: lessons from COVID-19 for adaptive evidence-making and intervention. Evid Policy. 2020;16(3):477–90.

Yang K. What can COVID-19 tell us about evidence-based management? Am Rev Public Adm. 2020;50(6–7):706–12.

Parkhurst J. The politics of evidence: from evidence-based policy to the good governance of evidence. Abingdon: Routledge; 2017.

Office of the Prime Minister. Minister of Health Mandate Letter [Internet]. 2021. https://pm.gc.ca/en/mandate-letters/2021/12/16/minister-health-mandate-letter

de Faye B, Perrin D, Trumpy C. COVID-19 lessons learned review: Final report. Victoria, BC; 2022.

First Nations Health Authority. Evolving vaccination plans is science in action: Dr. Bonnie Henry. First Nations Health Authority. 2021.

BC Stats. 2021 Sub-provincial population estimates highlights. Vol. 2021. Victoria, BC; 2022.

Michael Smith Health Research BC [Internet]. 2023. healthresearchbc.ca. Accessed 25 Jan 2023.

Michael Smith Health Research BC. SRAC [Internet]. 2023. https://healthresearchbc.ca/strategic-provincial-advisory-committee-srac/ . Accessed 25 Jan 2023.

Brubacher LJ, Hasan MZ, Sriram V, Keidar S, Wu A, Cheng M, et al. Investigating the influence of institutions, politics, organizations, and governance on the COVID-19 response in British Columbia, Canada: a jurisdictional case study protocol. Heal Res Policy Syst. 2022;20(1):1–10.

Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77–101.

DeCuir-Gunby JT, Marshall PL, McCulloch AW. Developing and using a codebook for the analysis of interview data: an example from a professional development research project. Field Methods. 2011;23(2):136–55.

Gale NK, Heath G, Cameron E, Rashid S, Redwood S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol. 2013;13(117):1–8.

Creswell JW, Miller DL. Determining validity in qualitative inquiry. Theory Pract. 2000;39(3):124–30.

Sackett D. How to read clinical journals: I. Why to read them and how to start reading them critically. Can Med Assoc J. 1981;124(5):555–8.

Evidence Based Medicine Working Group. Evidence-based medicine: a new approach to teaching the practice of medicine. JAMA. 1992;268(17):2420–5.

Allin S, Fitzpatrick T, Marchildon GP, Quesnel-Vallée A. The federal government and Canada’s COVID-19 responses: from “we’re ready, we’re prepared” to “fires are burning.” Heal Econ Policy Law. 2022;17(1):76–94.

Bollyky TJ, Hulland EN, Barber RM, Collins JK, Kiernan S, Moses M, et al. Pandemic preparedness and COVID-19: an exploratory analysis of infection and fatality rates, and contextual factors associated with preparedness in 177 countries, from Jan 1, 2020, to Sept 30, 2021. Lancet. 2022;6736(22):1–24.

Kuhlmann S, Hellström M, Ramberg U, Reiter R. Tracing divergence in crisis governance: responses to the COVID-19 pandemic in France, Germany and Sweden compared. Int Rev Adm Sci. 2021;87(3):556–75.

Haldane V, De Foo C, Abdalla SM, Jung AS, Tan M, Wu S, et al. Health systems resilience in managing the COVID-19 pandemic: lessons from 28 countries. Nat Med. 2021;27(6):964–80.


Rosella LC, Wilson K, Crowcroft NS, Chu A, Upshur R, Willison D, et al. Pandemic H1N1 in Canada and the use of evidence in developing public health policies—a policy analysis. Soc Sci Med. 2013;83:1–9.


Saner M. A map of the interface between science & policy. Ottawa, Ontario; 2007. Report No.: January 1.

Lancaster K, Rhodes T. What prevents health policy being “evidence-based”? New ways to think about evidence, policy and interventions in health. Br Med Bull. 2020;135(1):38–49.

Brownson RC, Fielding JE, Maylahn CM. Evidence-based public health: a fundamental concept for public health practice. Annu Rev Public Health. 2009;30:175–201.

Rychetnik L, Frommer M, Hawe P, Shiell A. Criteria for evaluating evidence on public health interventions. J Epidemiol Community Health. 2002;56:119–27.


Khan Y, Brown A, Shannon T, Gibson J, Généreux M, Henry B, et al. Public health emergency preparedness: a framework to promote resilience. BMC Public Health. 2018;18(1):1–16.

COVID-19 Evidence Network to Support Decision-Making. COVID-END [Internet]. 2023. https://www.mcmasterforum.org/networks/covid-end . Accessed 25 Jan 2023.

Canadian Institutes of Health Research. Moving forward from the COVID-19 pandemic: 10 opportunities for strengthening Canada’s public health systems. 2022.

Di Ruggiero E, Bhatia D, Umar I, Arpin E, Champagne C, Clavier C, et al. Governing for the public’s health: Governance options for a strengthened and renewed public health system in Canada. 2022.

Adjoa Kumah E, McSherry R, Bettany-Saltikov J, Hamilton S, Hogg J, Whittaker V, et al. Evidence-informed practice versus evidence-based practice educational interventions for improving knowledge, attitudes, understanding, and behavior toward the application of evidence into practice: a comprehensive systematic review of undergraduate students. Campbell Syst Rev. 2019;15(e1015):1–19.


Acknowledgements

We would like to extend our gratitude to current and former members of the University of British Columbia Working Group on Health Systems Response to COVID-19 who contributed to various aspects of this study, including Shelly Keidar, Kristina Jenei, Sydney Whiteford, Dr. Md Zabir Hasan, Dr. David M. Patrick, Dr. Maxwell Cameron, Mahrukh Zahid, Dr. Yoel Kornreich, Dr. Tammi Whelan, Austin Wu, Shivangi Khanna, and Candice Ruck.

Financial support for this work was generously provided by the University of British Columbia's Faculty of Medicine (Grant No. GR004683) and Peter Wall Institute for Advanced Studies (Grant No. GR016648), as well as a Canadian Institutes of Health Research Operating Grant (Grant No. GR019157). These funding bodies were not involved in the design of the study, the collection, analysis or interpretation of data, or in the writing of this manuscript.

Author information

Authors and affiliations.

School of Population and Public Health, University of British Columbia, Vancouver, Canada

Laura Jane Brubacher, Chris Y. Lovato, Veena Sriram, Michael Cheng & Peter Berman

School of Public Health Sciences, University of Waterloo, Waterloo, Canada

Laura Jane Brubacher

School of Public Policy and Global Affairs, University of British Columbia, Vancouver, Canada

Veena Sriram


Contributions

CYL, PB, and VS obtained funding for and designed the study. LJB, MC, and PB conducted data collection. LJB and VS analyzed the qualitative data. CYL and LJB collaboratively wrote the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Laura Jane Brubacher .

Ethics declarations

Ethics approval and consent to participate.

This case study received the approval of the UBC Behavioural Research Ethics Board (Certificate # H20-02136). Participants provided written informed consent.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Semi-structured interview guide [* = questions used for this specific study]

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Brubacher, L.J., Lovato, C.Y., Sriram, V. et al. The use of evidence to guide decision-making during the COVID-19 pandemic: divergent perspectives from a qualitative case study in British Columbia, Canada. Health Res Policy Sys 22 , 66 (2024). https://doi.org/10.1186/s12961-024-01146-2


Received : 08 February 2023

Accepted : 29 April 2024

Published : 03 June 2024

DOI : https://doi.org/10.1186/s12961-024-01146-2


Keywords

  • Decision-making
  • Public health
  • Policy-making
  • Qualitative



  • Open access
  • Published: 03 June 2024

Experience and training needs of nurses in military hospital on emergency rescue at high altitude: a qualitative meta-synthesis

  • Ruixuan Zhao 1 ,
  • Shijie Fang 1 ,
  • Dongwen Li 2 &
  • Cheng Zhang 3  

BMC Nursing volume  23 , Article number:  370 ( 2024 ) Cite this article

110 Accesses

Metrics details

Nurses play an important role in the treatment of war wounds on the plateau, and they face multiple challenges and a variety of needs in their caregiving process. This study aimed to systematically integrate and evaluate qualitative research data to understand the altitude emergency rescue experience and training needs of nurses in military hospitals and provide them with targeted assistance.

We critically appraised the included studies using the Joanna Briggs Institute (JBI) Critical Appraisal Checklist for Qualitative Research, and extracted, summarized, and meta-synthesized the qualitative data. The Cochrane Library, PubMed, Embase, FMRS, CINAHL, PsycINFO, Chinese National Knowledge Infrastructure (CNKI), Wanfang Database (CECDB), VIP Database, and China Biomedical Database (CBM) were searched for relevant studies published from the establishment of each database to May 2023. Additionally, we conducted a manual search of the references of the identified studies. The review was registered in the PROSPERO database (CRD42024537104).

A total of 17 studies including 428 participants were included; 139 findings were extracted, summarized into 10 new categories, and synthesized into 3 meta-themes. Meta-theme 1: the mental state of military nurses during deployment. Meta-theme 2: the experience of military nurses during deployment. Meta-theme 3: training needs for emergency care.

Conclusions

Emergency rescue of high-altitude war injuries is a challenging process. Leaders should pay full attention to the feelings and needs of military nurses during the first aid process and provide them with appropriate support.

Peer Review reports

Introduction

Plateau regions are characterized by high altitude, year-round cold, numerous ice peaks and snow mountains, and hypoxia [ 1 ]. These characteristics pose major obstacles to both military and non-military operations; at the same time, the complex terrain and limited transportation make the detection, handling, treatment, and evacuation of the wounded very difficult. These special natural environments place higher demands on medical rescue [ 2 , 3 ]. As an important part of military or non-military missions, military nurses play an important role in emergency rescue [ 4 ]. Military nurses have a long history of engagement in war, military operations and humanitarian missions; they are required to provide not only routine health care during peacetime, but also medical services during conflict and humanitarian assistance in response to disasters, public emergencies and epidemics [ 5 , 6 ]. The rescue process is arduous, and nurses may face great challenges. In high-altitude environments, they are prone to hypoxia, frostbite, sunburn, falls and blindness, and may develop high-altitude pulmonary edema or high-altitude coma. In war and non-war military operations, military nurses are required to care for a variety of trauma patients, including those with burns, traumatic amputations, shock, bleeding, penetrating injuries, spinal cord injuries, head injuries, crush injuries, radiation injuries, chemical injuries, infectious diseases, and more. This places higher demands on the physical and psychological condition and the professional knowledge of military nurses [ 4 , 7 , 8 ].

To provide better care for the wounded and respond to various emergency situations, military nurses must continuously improve their competence. In addition, according to the literature [ 4 , 6 , 9 ], military nurses' demand for emergency rescue training is gradually increasing, with nurses who have deployment experience reporting limited first-aid proficiency and a lack of practical training, and related qualitative studies are also increasing; however, a single qualitative study can hardly reflect the needs of military nurses fully and accurately. Therefore, this study uses a meta-synthesis approach to analyze and summarize such studies in order to understand the experience and training needs of nurses in military hospitals regarding high-altitude war injury emergency rescue, to provide a reference for formulating high-altitude emergency rescue training strategies, and to better meet their needs and provide them with appropriate support.

Study design

The Joanna Briggs Institute (JBI) methodology for systematic reviews of qualitative evidence [ 10 ] guided this systematic review and qualitative meta-synthesis. We used PROSPERO to identify published or ongoing research relevant to the topic and registered this review (CRD42024537104). In addition, we report our findings in accordance with the Enhancing Transparency in Reporting the Synthesis of Qualitative Research (ENTREQ) statement [ 11 ].

Search strategy

We performed systematic searches in the Cochrane Library, PubMed, Embase, FMRS, CINAHL, PsycINFO, Chinese National Knowledge Infrastructure (CNKI), Wanfang Database (CECDB), VIP Database, and China Biomedical Database (CBM), covering the period from the establishment of each database to May 2023. The following search terms were used in different combinations: plateau, qualitative study, emergency rescue, training, military nurses, education, disaster, public health emergency, rescue, army, war readiness, war. Additionally, we conducted a manual search of the references of the identified studies to find additional eligible articles.
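The published strategy lists the search terms but not the exact Boolean logic used to combine them. Purely as an illustrative sketch, and not the authors' actual query, the snippet below shows one way such term groups might be assembled into a PubMed-style Boolean string; the grouping of terms into population, phenomenon and design blocks is an assumption made for this example.

```python
# Illustrative only: one possible way to combine the reported search terms
# into a PubMed-style Boolean query string. The grouping of terms below is
# an assumption for this example, not the authors' documented strategy.

population = ["military nurses", "army"]
phenomenon = ["plateau", "emergency rescue", "rescue", "disaster",
              "public health emergency", "war readiness", "war"]
design_or_focus = ["qualitative study", "training", "education"]

def or_block(terms):
    """Join one group of terms with OR, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

# Combine the three blocks with AND, as in a typical PICo-style search.
query = " AND ".join(or_block(group) for group in (population, phenomenon, design_or_focus))
print(query)
```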

Inclusion and exclusion criteria

Articles that satisfied the following criteria were included in the qualitative synthesis: 1) study population (P): military nurses; 2) phenomenon of interest (I): experiences of highland or mountain emergency rescue, or emergency rescue experiences and training needs; 3) context (Co): the military nurse emergency rescue process or training process; 4) type of study: qualitative research, including phenomenological and descriptive qualitative research, grounded theory, ethnography, etc.

The exclusion criteria were as follows: 1) duplicate literature, literature with unavailable full text or incomplete data, or literature of substandard quality (graded C in the JBI qualitative research critical appraisal); 2) literature not in English; 3) secondary research.

Article filtering and quality assessment

Literature screening was done independently by two researchers following strict inclusion and exclusion criteria, and they independently assessed the quality of the included literature using the JBI Manual for Systematic Reviews of Qualitative Evidence [ 10 ]. The checklist has 10 evaluation items, each of which is answered with "yes", "no", or "not provided". In this study, literature quality was graded A, B, or C: A indicates that the literature meets all of the evaluation items, B that it partially meets them, and C that it meets none of them. During article selection and quality evaluation, disagreements were settled by discussion or with a third author's assistance.
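To make the grading rule concrete, the following is a minimal sketch assuming the reading of the A/B/C rule given above (A = all items met, C = none met, B = anything in between); it is an illustration of that rule, not software used by the review authors.

```python
# Minimal sketch of the A/B/C quality grading described above. The thresholds
# (A = all 10 items "yes", C = none "yes", B = otherwise) follow the text's
# description and are an assumption, not the authors' actual tooling.

def jbi_grade(responses):
    """responses: the 10 item answers, each 'yes', 'no', or 'not provided'."""
    assert len(responses) == 10, "the JBI qualitative checklist has 10 items"
    met = sum(r == "yes" for r in responses)
    if met == 10:
        return "A"  # meets all evaluation items
    if met == 0:
        return "C"  # meets none of the items; excluded from this review
    return "B"      # partially meets the items

print(jbi_grade(["yes"] * 8 + ["no", "not provided"]))  # prints "B"
```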

Data extraction

Data management was enabled by the reference management program EndNote 20. For data extraction, two researchers independently read the content of each study to extract relevant and useful information, cross-reviewed each other's work, and resolved any disagreement through discussion with a third experienced researcher. The relevant content of each study was extracted using a standardized data extraction tool from the Joanna Briggs Institute Qualitative Assessment and Review Instrument (JBI-QARI). The JBI-QARI levels of credibility are: (1) unequivocal (U)—findings that are a matter of fact, beyond reasonable doubt; (2) credible (C)—findings that are plausible interpretations of the primary data within the theoretical framework; (3) unsupported (Un)—findings that are not supported by the data [ 12 ]. The researchers extracted data according to these criteria. Data extraction included author, country, objective, study population, research methodology, and main results.

Data synthesis

This data extraction was carried out and checked independently by two researchers; when disagreements were encountered, a third researcher was consulted and consensus was reached on the results. We used Thomas and Harden's three-stage thematic synthesis approach [ 13 ]: (1) coding the text; (2) developing descriptive themes; and (3) generating analytical themes. First, two researchers independently coded the findings based on the text content and meaning; then, the researchers looked for similarities and differences across the textual data and classified the meaning of the original dataset; finally, the categories were reviewed repeatedly to identify similarities and obtain the synthesized results.
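As a purely illustrative sketch of the three stages, the structure below shows how coded excerpts might roll up into descriptive and then analytical themes; the codes and placeholder excerpts are invented for the example, while the theme labels echo categories reported later in this review.

```python
# Illustration of Thomas and Harden's three stages. All codes and excerpt
# placeholders are invented; the theme labels mirror categories in this review.

codes_to_excerpts = {                                  # stage 1: line-by-line codes
    "frustration at lack of supplies": ["excerpt A", "excerpt B"],
    "keeping a journal to cope": ["excerpt C"],
    "request for scenario simulation": ["excerpt D"],
}

descriptive_themes = {                                 # stage 2: descriptive themes
    "Feeling down": ["frustration at lack of supplies"],
    "Emotion management": ["keeping a journal to cope"],
    "Training methods needs": ["request for scenario simulation"],
}

analytical_themes = {                                  # stage 3: analytical themes
    "Mental state during deployment": ["Feeling down", "Emotion management"],
    "Training needs for emergency care": ["Training methods needs"],
}

# Walk the hierarchy from meta-theme down to the supporting excerpts.
for meta_theme, categories in analytical_themes.items():
    print(meta_theme)
    for category in categories:
        for code in descriptive_themes[category]:
            print(f"  {category} <- {code}: {codes_to_excerpts[code]}")
```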

Study characteristics

A total of 1070 articles were retrieved from the database searches, and two additional articles were found by checking reference lists; after removal of duplicate publications, 783 articles remained. After reading the titles and abstracts, 708 articles were excluded. After reading the remaining 75 articles in full, 58 were excluded: 52 with content mismatches, 3 with the wrong study population, and 3 for which the full text could not be obtained. Finally, 17 studies [ 7 , 9 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 ] were identified for inclusion in this analysis. The results of the search are shown in the PRISMA flowchart in Fig. 1. The 17 included studies were published between 2005 and 2023; 16 were qualitative studies [ 7 , 9 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 28 ] and one was a mixed-methods study [ 27 ]. They included a total of 428 participants across seven countries: China (2 studies [ 7 , 27 ]), the USA (6 studies [ 9 , 15 , 17 , 20 , 21 , 22 ]), Sweden (2 studies [ 14 ]), Iran (3 studies [ 19 , 24 , 25 ]), Israel (1 study [ 28 ]), South Korea (2 studies [ 23 , 26 ]), and the United Kingdom (1 study [ 18 ]). The characteristics of the included literature are shown in Table 1.

Quality assessment of studies

The included studies were evaluated separately by two trained researchers using the JBI qualitative research quality evaluation criteria, who then discussed their assessments together. When disagreements arose, the help of a third researcher was sought, and the final results were approved unanimously by the researchers. All literature included in this study was graded either A or B: three studies were rated A and 13 were rated B. Table 2 presents the results of the critical appraisal of the 17 studies.

Fig. 1 PRISMA flowchart and literature selection results

Results of synthesis

This study used an aggregative approach to integration [ 12 ]: the collected findings were further organized and their meanings summarized so as to make the results more convincing, targeted and general. With an understanding of the philosophy and methodology of the various qualitative studies, the researchers repeatedly read, analyzed and interpreted each study's findings, summarized and integrated them, and formed new categories and integrated results. Ultimately, the findings of the 17 studies [ 7 , 9 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 ] were extracted, summarized into 10 new categories, and synthesized into 3 meta-themes. The categories are presented below with supporting subcategories and illustrative quotes from the original studies.

Theme 1: Mental state of military nurses during deployment

Feeling down.

Military nurses are often frustrated by complex battlefield environments or natural disasters. For example, some nurses may be frustrated by a lack of equipment or supplies, may despair that they cannot save the lives of the wounded, or may be frustrated that they cannot do more for them. Other nurses became depressed about life after witnessing the brutality of war.

“I am afraid of the battlefield situation on the plateau, and do not understand the local dialect, I do not know how to carry out the rescue work, and I am worried that I have not done anything, dragging everyone down.” [ 7 ]. “You are going to be frustrated at the lack of resources”; “you are going to see young people slaughtered more or less and feel hopelessness at not being able to save their lives.’’ [ 14 ]. “Nurses reported frustration at the time it took for patients to arrive, the extent of injuries, and that they could not do more to save some patients.” [ 9 ].

Emotion management

During deployment, nurses use a variety of methods to vent their emotions and stay positive, such as taking a shower, keeping a journal, talking to others, and mutual acceptance and respect. By adopting positive coping measures, they enable themselves to be competent in their caring role and strengthen their belief in caring.

“After each surgery I went to take a shower, pouring out my heart in tears, washing myself changing to a clean uniform, then going back like a new person” [ 28 ]. “I’ve had some depression on and off since I came back from Vietnam. If I kept a journal maybe I could get a better handle on some of the things that happened to me over there” [ 15 ]. “Confide in you colleagues and don’t hold things in…I think that’s what kept us going real well” [ 15 ].

Sense of responsibility

It is crucially important for a nurse to understand the mission, policies, and procedures of the armed forces and the part one is asked to play as a military nurse. They need to understand that the purpose of the military is to support, protect, and defend a country's national security interests, and that performing military missions enables them to serve a greater purpose in life. As both soldiers and nurses, their sense of responsibility drives them to protect and serve the people in a state of crisis, which makes them proud. Military nurses also have an inspiring role to play by example.

“We worked together in the implementation of emergency rescue support tasks, filled with positive energy and a sense of honor, and strengthened our sense of mission” [ 7 ]. “To be something of a father-figure, to give the soldiers a feeling of safety. Keep your eye on your men so that they know they will be looked after if anything happens” [ 14 ].

Theme 2: the experience of military nurses during deployment

Chaos

Three main types of "chaos" emerged: natural disasters and wars make the environment itself chaotic; the environment of disaster or war often fills nurses' rescue work with uncertainty, which leads to confusion within the team; and nurses' roles during deployment can be chaotic.

“You get over there, [combat] it [the chaos] becomes real, bullets are flying, we’re being mortared … all these injuries, people with broken bones, blown off arms, burns … [In disasters, initially] “It was pure chaos, triage was going on, treatment was going on, people [were] everywhere, lying on the conveyor belt, in wheelchairs, tons of elderly, some had no clothing, it was just a sea of people that you could not see through” [ 22 ]. “One of our biggest challenges in critical situations is ambiguity or confusion in roles. These programs help us to clarify different roles in critical situations” [ 24 ].

Unique environment

This environment differs from the usual one. Its uniqueness is manifested in the uncertainty of the war zone; patients with complex injuries, such as blast and penetrating injuries; a lack of resources and poor health care; and the special natural environment at high altitudes.

“We did not know what to expect in a war zone” [ 28 ]. “I usually have the habit of taking a bath every day, the most difficult to adapt to the field toilet and bathing, bathing like a market, the toilet is very simple, what flying animals can appear, often the toilet has not yet waited, it is necessary to gather training” [ 7 ]. “The biggest headache for me was the sweltering heat of the tent during the day and the shivering cold at night” [ 7 ].

Team support

Team support is important. Maintaining a cohesive team relationship not only improves the efficiency of casualty rescue but also allows members to provide psychological support to one another. During deployment, team members help and support each other like a family. In addition, successful teams need strong leadership to ensure that tasks are completed smoothly.

“We were working in harmony, with collaboration between us. In this way, we could overcome this difficult and stressful time” [ 28 ]. “The chief nurse knew her people. She knew the nurses. She had a feel for what was going on in the unit and she knew who and when she could pull them, and where the staff needed to be to get the job done to cut down on the confusion ” [ 9 ].

The need for specialized skills

Due to the special nature of war trauma, medical personnel lack knowledge of and experience with its mechanisms of injury and the principles of its management. Other nurses noted their lack of experience in military nursing because they had not been deployed before. Therefore, according to the included studies, military nurses need to improve their professional skills before deployment.

“I have not systematically received the training of the professional theoretical knowledge of war injury rescue, and I have a sense of panic about the lack of professional knowledge when facing the practical rescue” [ 7 ].

Theme 3: Training needs for emergency care

Psychological training needs.

Military nursing differs from traditional nursing in its military obligations and requirements. First, nurses need to cultivate military values, responsibility, patriotism, and a sense of sacrifice. Second, in a battlefield or disaster environment, military nurses face a variety of scenarios, which requires them to develop a positive mindset. Finally, they need to maintain their confidence and overcome their fear.

“I think professional education should begin with enforcement in mind, and it is necessary for nurses to cultivate a spirit of sacrifice and patriotism.” [ 27 ]. “Be secure in yourself and in your professional abilities and limitations. Be realistic in your expectations. You have to cope with the reality and deal with it, even though it is very, very hard” [ 15 ].

Military training content needs

Nurses play an increasingly important role in military missions and are often deployed on different missions, such as humanitarian operations, natural disasters and public health emergencies. It is therefore necessary that they have the relevant knowledge, skills and abilities, and participants suggested that it is best to train them in the local customs and languages before deployment. Because of the special nature of military medicine, they have a lot to learn, including combat and trauma care and chemical, biological, radiological or nuclear (CBRN) preparation and response (for example, the Combat Casualty Care Course, the Emergency War Surgery Course, or the Trauma Nursing Core Course). In addition, in plateau regions, they also need to learn medical care under extreme conditions.

“I think the emergency response capacity should be enforced, such as when we run into public health emergencies and natural disasters” [ 27 ]. “Now, I think we are dealing with these cultural aspects in all our operational readiness courses” [ 15 ]. “Fluid resuscitation on plains and plateaus is different; thus, we also need to learn medical care and nursing skills for extreme environments” [ 27 ].

Training methods needs

Mixed training methods should be adopted in teaching. Practice, scenario simulation and distance learning are effective training methods; for example, participants took part in training exercises in a field training environment or a simulation laboratory. At the same time, teamwork training should not be forgotten.

“I think scenario simulation is a good way, because theory lectures are too boring and we need to put theory into practice” [ 27 ]. “When participating in professional education, trainees should take part in exercise to avoid only talking on paper” [ 27 ]. “We had teamwork training during that education program, and I was impressed with this activity, which provided training on team cohesion” [ 27 ]. “Tabletop exercises were unrealistic and less helpful. We did not practice for a mass casualty.” [ 9 ].

Discussion

This systematic review and meta-synthesis examined the experience and training needs of nurses in military hospitals with respect to high-altitude emergency rescue. The findings show that military nurses faced considerable physical and emotional stress during deployment. These stressors stemmed from a lack of professional ability, inadequate professional preparation, chaotic battlefield environments, extreme natural environments, and the like. Military nurses found reasonable ways to cope with stress across a variety of military settings. They receive training to improve professional competence and self-efficacy, while external support from nursing managers and colleagues also plays a vital role. However, more strategies are needed to enhance these effects.

The comprehensive quality of the individual (including physical and psychological qualities) has a crucial impact on the rescue missions of military nurses [ 8 ]. For rescue in various environments (aircraft carriers, hospital ships, evacuation aircraft, plateaus, hypoxic, cold, desert, Gobi, high-humidity, low-pressure, jungle, and other areas), rescuers need good physical fitness, a positive and optimistic psychological disposition and the ability to self-adjust, in order to bring their knowledge and skills fully into play [ 29 ]. However, the findings of this review [ 7 , 9 , 16 , 17 ] indicate that military nurses may experience altitude sickness, fatigue, nausea, and even acute pulmonary edema when faced with a cold, oxygen-deprived high-altitude environment; when faced with many casualties, they may feel depressed, helpless and sad, and may even develop depression. Therefore, military nurses should pay attention to physical training and enhance their physical fitness to resist and adapt to extreme environments, and nursing managers should accurately assess their psychological state and provide timely guidance and follow-up support. The findings of this review also suggest strengthening teamwork and support, which can help nurses support each other during periods of loneliness and provide quality care to wounded patients [ 6 , 7 , 22 , 24 ]. Bonnie et al. [ 30 ] also suggest trying to change thinking and manage emotions by changing feelings and reframing experiences.

Knowledge and skills are the fundamental prerequisites for military nurses to accomplish rescue operations [ 31 ]. This review found that knowledge and skills were mentioned most frequently, indicating that they were the greatest concern of nurses participating in deployment; rich knowledge and skilled nursing techniques are crucial to the first aid of the wounded. Other studies have drawn similar conclusions. For example, Harris [ 32 ] found that one unique aspect of clinical expertise in the context of military nursing is clinical diversity: military nurses should not specialize in just one specialty but should have multidisciplinary nursing knowledge and skills. Formulating scientific and effective training programs helps to improve the ability of military nurses. Caporiccio et al. [ 33 ] found that continuing professional education (CPE) is widely recognized by nurses, who learn the latest knowledge and skills through CPE; it has become the primary means of maintaining their competencies and ensuring better outcomes worldwide. Training content includes trauma care, combat knowledge, field nursing, the cultural customs and languages of the deployment location, and chemical, biological, radiological or nuclear (CBRN) preparation and response (such as the Combat Casualty Care Course, Emergency War Surgery Course, or Trauma Nursing Core Course) [ 8 , 9 , 14 , 15 , 27 ]. Barriers to learning include family and work factors: trainees often did not want to attend training because they were worried about their children or heavy workloads. The learning environment is also important, and a positive learning atmosphere organized by staff can fill trainees with enthusiasm for learning [ 34 ]. In addition, appropriate training methods have a positive effect on improving nurses' professional skills; the main methods include practice, scenario simulation and distance learning, and leaders should pay attention to teamwork training among medical staff [ 9 , 27 , 35 ]. Overall, developing scientific training programs and creating a good learning atmosphere help to improve the knowledge and skills of military nurses.

Competency is key to the rescue missions of military nurses [ 31 ]. Competency is an important, less visible attribute that military nurses need to complete rescue tasks, and it is the driving force that allows other skills to come into play. When performing rescue tasks, military nurses need abilities in organization and management, nursing risk prediction, nursing decision-making, emergency handling and so on [ 29 , 36 ]. These are essential conditions for successful treatment. Some studies have shown that having team members from different specialties simulate operational and rescue tasks in non-mission environments can effectively prevent the repetition of erroneous behaviours by improving leadership, communication skills, teamwork and the like [ 24 , 37 ]. Good communication and teamwork can also reduce the occurrence of adverse events during rescue [ 24 ]. Decisive decision-making becomes the key to winning survival time, and good emergency response ability can often prevent further harm [ 4 , 29 , 38 ]. Therefore, military nurses with good comprehensive ability can achieve rescue that is both efficient and of high quality. Through simulation-based training, military nurses can improve their personal knowledge, skills, abilities, thinking and teamwork [ 4 ]. For example, high-fidelity simulation can improve emergency management capabilities, team leadership, and basic nursing skills [ 39 ]; human patient simulators can improve cognitive and critical thinking skills [ 40 ]; and hyper-realistic immersive training can improve the performance of multidisciplinary medical team members and facilitate effective collaboration between members and teams [ 41 ]. We found that military nurses are more willing to improve their ability through practice [ 27 ]. Consequently, it is suggested that managers expand practical training modes and combine various simulated training with simulated extreme environments to enhance the comprehensive ability of military nurses and their adaptability to special environments.

Strengths and limitations

A strength of this study is that we not only searched medical databases but also supplemented these with manual searches to ensure that relevant studies were retrieved as fully as possible. Secondly, we conducted quality control, data extraction, and assessment of study quality. Finally, the review largely reflects the dilemmas and needs of military nurses and is of considerable significance for military emergency care. However, there are some limitations. Although the search strategy was thorough, some articles may have been missed, such as the gray literature. In addition, the lack of detailed discussion of the researchers’ potential influence in some of the included studies suggests a possible bias in the findings of the original studies.

Conclusion

This qualitative systematic review synthesized the experiences of military nurses during deployment and analyzed their feelings, experiences, and needs during military duty. By contrast, there is less research on emergency rescue operations in extreme environments such as high altitude, which should be the focus of future exploratory research. Qualitative research in this area should address the lack of mental, physical, and professional preparedness of deployers by understanding the experiences of those who have been deployed in extreme environments. In the future, managers should design diversified, personalized training programs and training methods that are suitable for the deployment of military nurses in a variety of environments.

Data availability

Data used to support the findings of this study are available from the corresponding author upon request.

References

Nicholson B, Neskey J, Stanfield R, Fetterolf B, Ersando J, Cohen J, Kue R. Integrating prolonged Field Care into Rough Terrain and Mountain Warfare Training: the Mountain critical care course. J Spec Oper Med. 2019;19(1):66–9.


Xu T, Wang Z, Li T, Pei V, Wen L, Wan L, Wang Y, Yu X. Tibetan plateau earthquake: Altitude challenges to medical rescue work. Emerg Med J. 2013;30(3):232–5.

Larsen J, Blagnys H, Cooper B, Press C, Sambridge N, Livesey M, Watt C, Allewell C, Chapman N. Mountain Rescue Casualty Care and the Undergraduate Medical Elective. Wilderness Environ Med. 2019;30(2):210–6.


Niu A, Ma H, Zhang S, Zhu X, Deng J, Luo Y. The effectiveness of simulation-based training on the competency of military nurses: a systematic review. Nurse Educ Today. 2022;119:105536.

Lu Y, Rong G, Yu SP, Sun Z, Duan X, Dong Z, Xia H, Zhan N, Jin C, Ji J, et al. Chinese military medical teams in the Ebola outbreak of Sierra Leone. J R Army Med Corps. 2016;162(3):198–202.


Ma H, Huang J, Deng Y, Zhang Y, Lu F, Yang Y, Luo Y. Deployment experiences of military nurses: a systematic review and qualitative meta-synthesis. J Nurs Manag. 2021;29(5):869–77.

X L, X F. A qualitative study of army civilian nurses’ experience on battlefield rescue in the northwest plateau theater. J Nurs Adm. 2020;20(03):184–8.


Ma H, Chihava TN, Fu J, Zhang S, Lei L, Tan J, Lin L, Luo Y. Competencies of military nurse managers: a scoping review and unifying framework. J Nurs Manag. 2020;28(6):1166–76.


De Jong MJ, Benner R, Benner P, Richard ML, Kenny DJ, Kelley P, Bingham M, Debisette AT. Mass casualty care in an expeditionary environment: developing local knowledge and expertise in context. J Trauma Nurs. 2010;17(1):45–58.

Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis. JBI; 2020. https://synthesismanual.jbi.global/ ; https://jbi-global-wiki.refined.site/space/MANUAL .

Tong A, Flemming K, McInnes E, Oliver S, Craig J. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med Res Methodol. 2012;12:181.

Lockwood C, Munn Z, Porritt K. Qualitative research synthesis: methodological guidance for systematic reviewers utilizing meta-aggregation. Int J Evid Based Healthc. 2015;13(3):179–87.

Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol. 2008;8:45.

Lindblad C, Sjöström B. Battlefield emergency care: a study of nurses’ perspectives. Accid Emerg Nurs. 2005;13(1):29–35.

Scannell-Desch EA. Lessons learned and advice from Vietnam war nurses: a qualitative study. J Adv Nurs. 2005;49(6):600–7.

Ekfeldt B, Österberg R, Nyström M. Preparing for care in a combat environment. Int J Caring Sci. 2015;8(1):1–8.

Elliott B. Military nurses’ experiences returning from war. J Adv Nurs. 2015;71(5):1066–75.

Finnegan A, Finnegan S, Bates D, Ritsperis D, McCourt K, Thomas M. Preparing British military nurses to deliver nursing care on deployment. An Afghanistan study. Nurse Educ Today. 2015;35(1):104–12.

Rahimaghaee F, Hatamopour K, Seylani K, Delfan V. Nurses’ perceptions of care during wartime: a qualitative study. Int Nurs Rev. 2016;63(2):218–25.

Rivers FM. US Military nurses: serving within the Chaos of Disaster. Nurs Clin North Am. 2016;51(4):613–23.

Tow JC, Hudson DB. Lived experience of the Warrior nurse as an Advisor. Mil Med. 2016;181(4):328–33.

Rivers F, Gordon S. Military nurse deployments: similarities, differences, and resulting issues. Nurs Outlook. 2017;65(5s):S100–8.

Han JJ. The lived experience of Korean female military nursing officers during the Vietnam War. J Transcult Nurs. 2019;30(5):471–7.

Vafadar Z, Aghaei MH, Ebadi A. Military nurses’ experiences of interprofessional education in Crisis Management: a qualitative content analysis. J Adv Med Educ Prof. 2021;9(2):85–93.

Varpio L, Bader-Larsen K, Hamwey M, Durning S, Meyer H, Cruthirds D, Artino A. Delivering patient care during large-scale emergency situations: lessons from military care providers. PLoS ONE. 2021;16(3):e0248286.


Kwon YH, Han HJ, Park E. Nursing experience of New nurses Caring for COVID-19 patients in military hospitals: a qualitative study. Healthc (Basel) 2022, 10(4).

Ma H, Zhang S, Zhu X, Huang J, Cheng Z, Luo Y. Continuing professional education experiences and expectations of nurses in Chinese military hospitals: a quantitative and qualitative study. Nurse Educ Today. 2023;120:105645.

Segev R. Learning from critical care nurses’ wartime experiences and their long-term impacts. Nurs Crit Care. 2023;28(2):253–60.

Ross MC. Military nursing competencies. Nurs Clin North Am. 2010;45(2):169–77.

Hagerty BM, Williams RA, Bingham M, Richard M. Military nurses and combat-wounded patients: a qualitative analysis of psychosocial care. Perspect Psychiatr Care. 2011;47(2):84–92.

Zhu XL, Ni W, Linlin L, Yanfei S, Anting L, Jianmei W. The Core Competence Construction of Helicopter Rescue Nurses in Plateau and Alpine areas: a qualitative research. Military Nurs. 2023;40(01):102–5.

Harris RA. A qualitative descriptive study that identifies essential competencies and leadership characteristics of army adult medical-surgical critical care head nurses. George Mason University; 2008.

Caporiccio J, Louis KR, Lewis-O’Connor A, Son KQ, Raymond N, Garcia-Rodriguez IA, Dollar E, Gonzalez L. Continuing Education for Haitian nurses: evidence from qualitative and quantitative Inquiry. Ann Glob Health 2019, 85(1).

Ma H, Niu A, Sun L, Luo Y. Development and evaluation of competency-based curriculum for continuing professional development among military nurses: a mixed methods study. BMC Med Educ. 2022;22(1):793.

Kellicut DC, Kuncir EJ, Williamson HM, Masella PC, Nielsen PE. Surgical Team Assessment Training: improving surgical teams during deployment. Am J Surg. 2014;208(2):275–83.

Suresh MR, Valdez-Delgado KK, Staudt AM, Trevino JD, Mann-Salinas EA, VanFosson CA. An Assessment of Pre-deployment Training for Army nurses and medics. Mil Med. 2021;186(1–2):203–11.

Webster CS. Crisis Management in Acute Care settings: human factors and team psychology in a high-stakes environment. Anesth Analgesia. 2017;125(3):1069.

Savage E, Forestier C, Withers N, Tien H, Pannell D. Tactical combat casualty care in the Canadian forces: lessons learned from the Afghan war. Can J Surg. 2011;54(6):S118–123.

Hughes RV, Smith SJ, Sheffield CM, Wier G. Assessing performance outcomes of new graduates utilizing simulation in a military transition program. J Nurses Prof Dev. 2013;29(3):143–8.

Johnson D, Johnson S. The effects of using a human patient simulator compared to a CD-ROM in teaching critical thinking and performance. US Army Med Dep J 2014:59–64.

Hoang TN, LaPorta AJ, Malone JD, Champagne R, Lavell K, De La Rosa GM, Gaul L, Dukovich M. Hyper-realistic and immersive surgical simulation training environment will improve team performance. Trauma Surg Acute Care Open. 2020;5(1):e000393.


Funding

This systematic review is supported by the military medical research project of General Hospital of Western Theater Command (2019ZY08).

Author information

Authors and Affiliations

North Sichuan Medical University, Nanchong, Sichuan, 637000, China

Ruixuan Zhao & Shijie Fang

General Hospital of Western Theater Command, Chengdu, Sichuan, 610083, China

Chengdu Medical College, Chengdu, Sichuan, 610500, China

Cheng Zhang


Contributions

Ruixuan Zhao wrote the main manuscript text; Ruixuan Zhao, Shijie Fang and Dongwen Li collected and analyzed the data; Ruixuan Zhao, Shijie Fang and Cheng Zhang were involved in data synthesis; Dongwen Li reviewed the writing; Ruixuan Zhao and Dongwen Li prepared Fig. 1 and Tables 1 and 2; Ruixuan Zhao, Shijie Fang, Cheng Zhang, and Dongwen Li prepared Additional files 1–4; all authors reviewed the manuscript.

Corresponding author

Correspondence to Dongwen Li .

Ethics declarations

Ethics approval and consent to participate

Not Applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Supplementary Material 3

Supplementary Material 4

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Zhao, R., Fang, S., Li, D. et al. Experience and training needs of nurses in military hospital on emergency rescue at high altitude: a qualitative meta-synthesis. BMC Nurs 23 , 370 (2024). https://doi.org/10.1186/s12912-024-02029-1


Received : 18 September 2023

Accepted : 20 May 2024

Published : 03 June 2024

DOI : https://doi.org/10.1186/s12912-024-02029-1


Keywords

  • Military nurse
  • High altitude
  • Training needs
  • Meta-synthesis


research articles using qualitative methods

  • Open access
  • Published: 05 June 2024

Experiences of medical students and faculty regarding the use of long case as a formative assessment method at a tertiary care teaching hospital in a low resource setting: a qualitative study

  • Jacob Kumakech 1 ,
  • Ian Guyton Munabi 2 ,
  • Aloysius Gonzaga Mubuuke 3 &
  • Sarah Kiguli 4  

BMC Medical Education volume  24 , Article number:  621 ( 2024 ) Cite this article

117 Accesses

Metrics details

Introduction

The long case is used to assess medical students’ proficiency in performing clinical tasks. As a formative assessment, the purpose is to offer feedback on performance, aiming to enhance and expedite clinical learning. The long case stands out as one of the primary formative assessment methods for clinical clerkship in low-resource settings but has received little attention in the literature.

To explore the experiences of medical students and faculty regarding the use of the Long Case Study as a formative assessment method at a tertiary care teaching hospital in a low-resource setting.

Methodology

A qualitative study design was used. The study was conducted at Makerere University, a low-resource setting. The study participants were third- and fifth-year medical students as well as lecturers. Purposive sampling was utilized to recruit participants. Data collection comprised six Focus Group Discussions with students and five Key Informant Interviews with lecturers. The qualitative data were analyzed by inductive thematic analysis.

Three themes emerged from the study: ward placement, case presentation, and case assessment and feedback. The findings revealed that students conduct their long cases at patients’ bedside within specific wards/units assigned for the entire clerkship. Effective supervision, feedback, and marks were highlighted as crucial practices that positively impact the learning process. However, challenges such as insufficient orientation to the long case, the super-specialization of the hospital wards, pressure to hunt for marks, and inadequate feedback practices were identified.

The long case offers students exposure to real patients in a clinical setting. However, in tertiary care teaching hospitals, it’s crucial to ensure proper design and implementation of this practice to enable students’ exposure to a variety of cases. Adequate and effective supervision and feedback create valuable opportunities for each learner to present cases and receive corrections.


The long case serves as an authentic assessment method for evaluating medical students’ competence in clinical tasks [ 1 ]. This form of assessment requires students to independently spend time with patients taking their medical history, conducting physical examinations, and formulating diagnosis and management plans. Subsequently, students present their findings to senior clinicians for discussion and questioning [ 2 , 3 ]. While developed countries increasingly adopt simulation-based assessments for formative evaluation, logistical challenges hinder the widespread use of such methods in developing countries [ 4 ]. Consequently, the low-resource countries heavily rely on real patient encounters for formative assessment. The long case is one such method predominantly used as a primary formative assessment method during clinical clerkship and offers a great opportunity for feedback [ 5 ]. The assessment grounds students’ learning into practice by providing them with rich opportunities to interact with patients and have the feel of medical practice. The long case thus bridges the gap between theory and practice, immersing students in the real tasks of a physician [ 1 ]. The complexity of clinical scenarios and the anxiety associated with patient encounters may not be well replicated in simulation-based assessments because diseases often have atypical presentations not found in textbooks. Assessment methods should thus utilize authentic learning experiences to provide learners with applications of learning that they would expect to encounter in real life [ 6 ]. This requires medical education and the curriculum to focus attention on assessment because it plays a significant role in driving learning [ 7 ]. The long case thus remains crucial in medical education as one of the best ways of preparing for practice. It exposes the student repeatedly to taking medical history, examining patients, making clinical judgments, deciding treatment plans, and collaborating with senior clinicians.

The long case, however, has faced significant criticism in the medical education literature due to perceived psychometric deficiencies [ 8 , 9 , 10 ]. Consequently, many universities have begun to adopt assessment methods that yield more reliable and easily defensible results [ 2 ], driven by concerns over the low reliability, generalizability, and validity of the long case, coupled with rising litigation and student appeals [ 11 , 12 ]. Despite these shortcomings, the long case remains an educationally valuable assessment tool that provides diagnostic feedback essential for the learning process during clinical clerkship [ 13 ]. Teachers can use long-case results to pinpoint neglected areas or teaching deficiencies and to align teaching with course outcomes.

However, there is a paucity of research into the long case as a formative assessment tool. A few studies conducted in developed countries highlighted its role in promoting a holistic approach to patient care, fostering students’ clinical skills, and acting as a driving force for students to spend time with patients [ 2 , 13 ]. There is a notable absence of literature on the use of the long case as a formative assessment method in low-resource countries, and no published work is available at Makerere University, where it has been used for decades. This underscores the importance of conducting research in this area to provide insight into its effectiveness, challenges, and potential for improvement. Therefore, this study aimed to investigate the experiences of medical students and faculty regarding the use of the long case as a formative assessment method within the context of a tertiary care teaching hospital in a low-resource setting.

Study design

This was an exploratory qualitative study.

Study setting

The research was conducted at Makerere University within the Department of Internal Medicine. The Bachelor of Medicine and Bachelor of Surgery (MBChB) degree at Makerere University is a five-year program, with the first two years devoted to pre-clinical (biomedical sciences) courses and the last three years dedicated to clinical clerkship. Medical students undertake Internal Medicine clerkships in their third and fifth years at the two tertiary teaching hospitals, namely Mulago and Kiruddu National Referral Hospitals. The students are introduced to the long case in the third year as Junior Clerks and later in the fifth year as Senior Clerks. During clerkship, students are assigned to various medical wards, where they interact with patients, take medical histories, perform physical examinations, and develop diagnosis and management plans. Subsequently, students present their long cases to lecturers or postgraduate students, often in the presence of their peers, followed by feedback and comprehensive case discussion. Students are afforded ample time to prepare and present their cases during ward rounds, at their discretion. The students are formatively assessed, and a mark is awarded on a scale of one to ten in the student’s logbook. Each student is required to complete a minimum of ten long cases over the seven weeks of clerkship.

Study participants

The study participants were third- and fifth-year medical students who had completed junior and senior clerkship respectively, as well as lecturers who possessed at least five years of experience with the long case. The participants were selected through purposive sampling. The sample size for the study was determined by data saturation.

Data collection

Data were collected through Focus Group Discussions (FGDs) and Key Informant Interviews (KIIs). A total of 36 medical students participated in FGDs, reflecting on their experiences with the long case. Five faculty members participated in individual KIIs. The students were mobilized by their class representative and a brief recruitment presentation was made at the study site while the lecturers were approached via email and telephone invitation.

Six FGDs were conducted, three for junior clerks and three for senior clerks. Each FGD comprised 5–7 participants with balanced male and female representation. Data saturation was achieved by the fifth FGD, at which point no new information emerged. A research assistant proficient in qualitative research methods moderated the FGDs. The discussions lasted between 55 min and 1 h 10 min and were audio recorded. The Principal Investigator attended all the FGDs to document interactions and to record his own observations and participants’ non-verbal cues.

Semi-structured KIIs were used to collect data from Internal Medicine faculty. Five KIIs were conducted, and data saturation was achieved by the fourth interview, at which point no new theme emerged. The Principal Investigator conducted the KIIs via Zoom. Each interview lasted between 25 and 50 min and all were audio recorded. A research assistant proficient in qualitative methods attended all the Zoom meetings. The data collected were securely stored on a hard drive and Google Drive with password protection to prevent unauthorized access.

Data analysis

Data analysis was done using inductive thematic analysis. Following each FGD or KII session, the data collection team listened to the recordings to familiarize themselves with the data and develop general ideas regarding the participants’ perspectives. The data were transcribed verbatim by the researchers to generate text data. Two separate transcripts were generated by the Principal Investigator and a research assistant. The transcripts were then compared and manually reviewed by the research team to check their accuracy against the audio recordings. After transcript harmonization, data cleaning was done for both the FGD and KII transcripts.

The transcribed data from both FGDs and KIIs underwent inductive thematic analysis as aggregated data. This involved initial line-by-line coding, followed by focused coding where the relationships between initial codes were explored and similar codes were grouped. Throughout the analysis, the principle of constant comparison was applied, where emerging codes were compared for similarities and differences.
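The coding itself is an interpretive task carried out by the researchers, typically supported by qualitative data management software rather than by scripts. Purely to illustrate the mechanics of moving from line-by-line codes to focused codes grouped under candidate themes, the following minimal Python sketch uses invented sources, excerpts and code labels; none of them are taken from the study’s data.

```python
# Illustrative sketch only: grouping hypothetical line-by-line codes under
# candidate themes. Real thematic analysis is an iterative, interpretive
# process (e.g., supported by NVivo or ATLAS.ti); this is not a substitute.

from collections import defaultdict

# Hypothetical initial (line-by-line) codes: (source, code, candidate theme).
initial_codes = [
    ("FGD-A, P1", "no orientation on arrival", "ward placement"),
    ("FGD-B, P3", "only saw one type of patient", "ward placement"),
    ("FGD-A, P6", "examination never observed", "case presentation"),
    ("FGD-C, P2", "chasing signatures for marks", "assessment and feedback"),
    ("KII-X", "no routine for student feedback", "assessment and feedback"),
]

# Focused coding: collect related codes (with their sources) under each theme.
themes = defaultdict(list)
for source, code, theme in initial_codes:
    themes[theme].append((source, code))

# Print a simple code book grouped by theme, to support constant comparison.
for theme, coded_segments in themes.items():
    print(f"Theme: {theme}")
    for source, code in coded_segments:
        print(f"  [{source}] {code}")
```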

Study results

Socio-demographics

A total of 36 medical students participated in the FGDs, comprising 18 junior clerks and 19 senior clerks. The participants were aged between 21 and 25 years, except for two participants who were older (30 and 36 years). Among the third-year students, there were 10 male and 9 female participants, while the fifth-year students comprised 8 male and 10 female participants.

Five lecturers participated in the Key Informant Interviews, three of whom were female and two male. They were aged between 40 and 50 years, and all had over 10 years of experience with the long case. The faculty members included one consultant physician, one associate professor, two senior lecturers, and one lecturer.

Themes that emerged

Three themes emerged from the study: ward placement, case presentations, and case assessment and feedback.

The themes and their constituent codes were as follows:

  • Theme 1 (Ward placement): allocation to a specific ward; specialization of the wards; orientation on the ward; exposure to other wards
  • Theme 2 (Case presentation): variation in the mode of presentation; limited observation of skills; unreliable presence of lecturers
  • Theme 3 (Case assessment and feedback): marks awarded for the long case; case write-up; marks as motivators; pressure to hunt for marks; feedback given to the student; feedback to the lecturer; limitations of the feedback practice

Theme 1: Ward placement

The study findings disclosed that medical students are assigned to specific wards for the duration of their clerkship. The specialization of medical wards was found to significantly restrict students’ exposure, limiting it to the disease conditions found in their allocated ward.

With the super-specialization of the units, there is some bias on what they do learn; if a particular group is rotating on the cardiology unit, they will obviously have a bias to learn the history and physical exam related to cardiovascular disease (KII 1).

The students, particularly junior clerks, expressed dissatisfaction with the lack of proper and standardized orientation to the long case on the wards. This deficiency led to wastage of time and a feeling of being unwelcome in the clerkship.

Some orient you when you reach the ward but others you reach and you are supposed to pick up on your own. I expect orientation, then taking data from us, what they expect us to do, and what we expect from them, taking us through the clerkship sessions (FGD 4 Participant 1).

Students’ exposure to cases in other wards poses significant challenges; the study found that while some lecturers facilitate visits to different wards for scheduled teaching sessions, others do not, resulting in missed learning opportunities. Additionally, some lecturers leave it to students’ personal initiative to explore cases in other wards.

We actually encourage them to go through the different specialties because when you are faced with a patient, you will not have to choose which one to see and not to see (KII 4).

Imagine landing on a stroke patient when you have been in the infectious disease ward or getting a patient with renal condition when you have been in the endocrinology ward can create problems (FGD 6 Participant 3).

Theme 2: Case presentation

Medical students present their long case to lecturers and postgraduate students. However, participants revealed variations among lecturers regarding their preferences on how they want students to present their cases. While some prefer to listen to the entire history and examination, others prefer only a summary, and some prefer starting from the diagnosis.

The practice varies depending on the lecturer, as everyone does it their own way. There are some, who listen to your history, examination, and diagnosis, and then they go into basic discussion of the case; others want only a summary. Some lecturers come and tell you to start straight away from your diagnosis, and then they start treating you backward (FGD 6 Participant 3).

The students reported limited observation of their skills owing to the little emphasis placed by examiners on physical examination techniques, as well as examiners not providing the students with the opportunity to propose treatment plans.

When we are doing these physical examinations on the ward no one is seeing you. You present your physical examination findings, but no one saw how you did it. You may think you are doing the right thing during the ward rotations, but actually your skills are bad (FGD 4 Participant 6).

They don’t give us time to propose management plans. The only time they ask for how you manage a patient is during the summative long case, yet during the ward rotation, they were not giving us the freedom to give our opinion on how we would manage the patient. (FGD 2, Participant 6).

Supervision was reportedly dependent on the ward to which the student was allocated. Additionally, the participants believe that the large student-to-lecturer ratio negatively affects the opportunity to present.

My experience was different in years three and five. In year three, we had a specialist every day on the ward, but in year five, we would have a specialist every other day, sometimes even once a week. When I compare year five with year three, I think I was even a better doctor in year three than right now (FGD 1 Participant 1).

Clinical training is like nurturing somebody to behave or conduct themselves in a certain way. Therefore, if the numbers are large, the impacts per person decrease, and the quality decreases (KII 5).

Theme 3: Case assessment and feedback

The study found that a student’s long case is assessed both during the case presentation on the ward and through the case write-up, with marks awarded accordingly.

They present to the supervisor and then also write it up, so at a later time you also mark the sheet where they have written up the cases; so they are assessed at presentation and write up (KII 2).

The mark awarded was reportedly a significant motivator for students to visit wards and clerk patients, but students also believe that the pressure to hunt for marks tends to override the goal of the formative assessment.

Your goal there is to learn, but most of us go with the goal of getting signatures; signature-based learning. The learning, you realize probably comes on later if you have the individual morale to go and learn (FGD 1 participant 1).

Feedback is an integral part of any formative assessment. While students receive feedback from lecturers, the participants were concerned about the absence of a formal channel for soliciting feedback from students.

Of course, teachers provide feedback to students because it is a normal part of teaching. However, it is not a common routine to solicit feedback about how teaching has gone. So maybe that is something that needs to be improved so that we know if we have been effective teachers (KII 3).

While the feedback prompts students to read more to compensate for their knowledge gaps, they decried several encounters with demeaning, intimidating, insulting, demotivating, and embarrassing feedback from assessors.

Since we are given a specific target of case presentation we are supposed to make in my training, if I make the ten, I wouldn’t want to present again. Why would I receive other negative comments for nothing? They truly have a personality effect on the student, and students feel low self-esteem (FGD 1, Participant 4).

Discussion

This study aimed to investigate the experiences of medical students and faculty regarding the use of the long case as a formative assessment method at a tertiary care teaching hospital in a low-resource setting. This qualitative research provides valuable insights into the current practices surrounding the long case as a formative assessment method in such a setting.

The study highlighted the patient bedside as the primary learning environment for medical students. Bedside teaching plays a crucial role in fostering the development of skills such as history-taking and physical examination, as well as modeling professional behaviors and directly observing learners [ 14 , 15 ]. However, the specialization of wards in tertiary hospitals means that students may not be exposed to certain conditions found in other wards. This lack of exposure can lead to issues of case specificity, which has been reported in various literature as a cause of low reliability and generalizability of the long case [ 16 , 17 ]. Participants in the study expressed feeling like pseudo-specialists based on their ward allocations. This is partly attributed to missing scheduled teachings and poor management of opportunities to clerk and present patients on other wards. Addressing these challenges is essential for enhancing the effectiveness of the long case as a formative assessment method in medical education.

Proper orientation at the beginning of a clerkship is crucial for clarifying the structure and organization, defining students’ roles, and providing insights into clinical supervisors’ perspectives [ 18 ]. However, the study revealed that orientation into the long case was unsatisfactory, resulting in time wastage and potentially hindering learning. Effective orientation requires dedicated time and should involve defining expectations and goals, as well as guiding students through the steps of history-taking and physical examination during the initial weeks of the rotation. Contrary to this ideal approach, the medical students reported being taken through systemic examinations when the clerkship was nearing its end, highlighting a significant gap in the orientation process. Proper orientation is very important since previous studies have also documented the positive impact of orientation on student performance [ 19 ]. Therefore, addressing the shortcomings in orientation practices identified in this study is essential for optimizing learning outcomes and ensuring that students are adequately prepared to engage in the long case.

There was reportedly a significant variation in the way students present their long cases, with some lecturers preferring only a case summary, while others expect a complete presentation or begin with a diagnosis. While this diversity in learning styles may expose students to both familiar and unfamiliar approaches, providing a balance of comfort and tension [ 20 ], it’s essential for students to first be exposed to familiar methods before transitioning to less familiar ones to expand their ability to use diverse learning styles. The variation observed in this context may be attributed to time constraints, as lecturers may aim to accommodate the large number of students within the available time. Additionally, a lack of standardized practices could also contribute to this variation. Therefore, there is a pressing need for standardized long-case practices to ensure a consistent experience for students and to meet the desired goals of the assessment. Standardizing the long case practice would not only provide a uniform experience for students but also enhance the reliability, validity, and perception of fairness of the assessment [ 9 , 21 ]. It would ensure that all students are evaluated using the same criteria, reducing potential biases and disparities in grading. Additionally, standardized practices facilitate better alignment with learning objectives and promote more effective feedback mechanisms [ 22 ].

Related to the above, students reported limited observation of skills and little emphasis placed on them to learn physical examination techniques. This finding resonates with the research conducted by Abdalla and Shorbagi in 2018, where many students reported a lack of observation during history-taking and physical examination [ 23 ]. The importance of observation is underscored by the fact that students often avoid conducting physical examinations, as highlighted in Pavlakis & Laurent’s study among postgraduate trainees in 2001 [ 24 ]. This study sheds more light on the critical role of observation in pushing medical students to master clinical assessment and practical skills. The study also uncovered that students are rarely given the opportunity to propose management plans during case presentations, which hampers their confidence and learning of clinical decision-making. These findings likely stem from the large student-to-lecturer ratio and the little attention given to these aspects of the long case during the planning of the assessment method. The result is students not receiving the necessary guidance and support to develop their clinical and decision-making skills. Therefore, addressing these issues, by placing more emphasis on observing student–patient interactions and management planning and by working with smaller student groups, is vital to ensure that medical students receive comprehensive training and are adequately prepared for their future roles as physicians.

The study found that the marks awarded for the long case serve as the primary motivator for students. This finding aligns with previous research indicating that the knowledge that each long case is part of assessment drives students to perform their duties diligently [ 2 , 25 ]. It underscores the crucial role that assessment plays in driving learning processes. However, the pressures to obtain marks and signatures reportedly hinder students’ engagement in learning. This could be attributed to instances where some lecturers relax on supervision or are absent, leaving students to struggle to find someone to assess them. Inadequate supervision by attending physicians has been identified in prior studies as one of the causes of insufficient clinical experience [ 26 ], something that needs to be dealt with diligently. While the marks awarded are a motivating factor, it is essential to understand other underlying motivations of medical students to engage in the long case and their impact on the learning process.

Feedback is crucial for the long case to fulfill its role as an assessment for learning. The study participants reported that feedback is provided promptly as students present their cases. This immediate feedback is essential for identifying errors and learning appropriate skills to enhance subsequent performance. However, the feedback process appears to be unilateral, with students receiving feedback from lecturers but lacking a structured mechanism for providing feedback themselves. One reason for the lack of student feedback may be a perceived intimidating approach from lecturers which discourages students from offering their input. It is thus important to establish a conducive environment where students feel comfortable providing feedback without fear of negative repercussions. The study underscores the significance of feedback from students in improving the learning process. This aligns with the findings of Hattie and Timperley (2007), who emphasized that feedback received from learners contributes significantly to improvements in student learning [ 27 ]. Therefore, it is essential to implement strategies to encourage and facilitate bidirectional feedback between students and lecturers in the context of the long case assessment. This could involve creating formal channels for students to provide feedback anonymously or in a structured format, fostering open communication, and addressing any perceived barriers to feedback exchange [ 28 ]. By promoting a culture of feedback reciprocity, educators can enhance the effectiveness of the long case as an assessment tool.

Conclusions

In conclusion, the long case remains a cornerstone of formative assessment during clerkship in many medical schools, particularly in low-resource countries. However, its effectiveness is challenged by limitations such as case specificity in tertiary care hospitals, which can affect the assessment’s reliability and generalizability. The practice of awarding marks in formative assessment serves as a strong motivator for students but also creates tension, especially when there is inadequate contact with lecturers. This can lead to a focus on hunting for marks at the expense of genuine learning. Thus adequate supervision and feedback practices are vital for ensuring the success of the long case as an assessment for learning.

Furthermore, there is a need to foster standardized long case practice to ensure that scheduled learning activities are completed and that all students clerk and present patients with different conditions from various wards. This will promote accountability among both lecturers and students and ensure a consistent and uniform experience with the long case as an assessment for learning, regardless of the ward a student is assigned.

Data availability

The data supporting the study results of this article can be accessed from the Makerere University repository, titled “Perceptions of Medical Students and Lecturers of the Long Case Practices as Formative Assessment in Internal Medicine Clerkship at Makerere University,” available on DSpace. The identifier is http://hdl.handle.net/10570/13032 . Additionally, the raw data are securely stored with the researchers in Google Drive.

References

Dare AJ, Cardinal A, Kolbe J, Bagg W. What can the history tell us? An argument for observed history-taking in the trainee intern long case assessment. N Z Med J. 2008;121(1282):51–7.


Tey C, Chiavaroli N, Ryan A. Perceived educational impact of the medical student long case: a qualitative study. BMC Med Educ. 2020;20(1):1–9.


Jayasinghe R. Mastering the Medical Long Case. Elsevier Health Sciences; 2009.

Martinerie L, Rasoaherinomenjanahary F, Ronot M, Fournier P, Dousset B, Tesnière A, Mariette C, Gaujoux S, Gronnier C. Health care simulation in developing countries and low-resource situations. J Continuing Educ Health Professions. 2018;38(3):205–12.

van der Vleuten C. Making the best of the long case. Lancet (London England). 1996;347(9003):704–5.

Reeves TC, Okey JR. Alternative assessment for constructivist learning environments. Constructivist Learn Environments: Case Stud Instructional Des. 1996;191:202.

Biggs J. What the student does: teaching for enhanced learning. High Educ Res Dev. 1999;18(1):141.

Michael A, Rao R, Goel V. The long case: a case for revival? Psychiatrist. 2013;37(12):377–81.

Benning T, Broadhurst M. The long case is dead–long live the long case: loss of the MRCPsych long case and holism in psychiatry. Psychiatr Bull. 2007;31(12):441–2.

Burn W, Brittlebank A. The long case: the case against its revival: Commentary on… the long case. Psychiatrist. 2013;37(12):382–3.

Norcini JJ. The death of the long case? Bmj 2002;324(7334):408–9.

Pell G, Roberts T. Setting standards for student assessment. Int J Res Method Educ. 2006;29(1):91–103.

Masih CS, Benson C. The long case as a formative Assessment Tool–views of medical students. Ulster Med J. 2019;88(2):124.

Peters M, Ten Cate O. Bedside teaching in medical education: a literature review. Perspect Med Educ. 2014;3(2):76–88.

Wölfel T, Beltermann E, Lottspeich C, Vietz E, Fischer MR, Schmidmaier R. Medical ward round competence in internal medicine–an interview study towards an interprofessional development of an Entrustable Professional Activity (EPA). BMC Med Educ. 2016;16(1):1–10.

Wilkinson TJ, Campbell PJ, Judd SJ. Reliability of the long case. Med Educ. 2008;42(9):887–93.

Sood R. Long case examination-can it be improved. J Indian Acad Clin Med. 2001;2(4):252–5.

Atherley AE, Hambleton IR, Unwin N, George C, Lashley PM, Taylor CG. Exploring the transition of undergraduate medical students into a clinical clerkship using organizational socialization theory. Perspect Med Educ. 2016;5:78–87.

Owusu GA, Tawiah MA, Sena-Kpeglo C, Onyame JT. Orientation impact on performance of undergraduate students in University of Cape Coast (Ghana). Int J Educational Adm Policy Stud. 2014;6(7):131–40.

Vaughn L, Baker R. Teaching in the medical setting: balancing teaching styles, learning styles and teaching methods. Med Teach. 2001;23(6):610–2.

Olson CJ, Rolfe I, Hensley. The effect of a structured question grid on the validity and perceived fairness of a medical long case assessment. Med Educ. 2000;34(1):46–52.

Jensen-Doss A, Hawley KM. Understanding barriers to evidence-based assessment: clinician attitudes toward standardized assessment tools. J Clin Child Adolesc Psychol. 2010;39(6):885–96.

Abdalla ME, Shorbagi S. Challenges faced by medical students during their first clerkship training: a cross-sectional study from a medical school in the Middle East. J Taibah Univ Med Sci. 2018;13(4):390–4.

Pavlakis N, Laurent R. Role of the observed long case in postgraduate medical training. Intern Med J. 2001;31(9):523–8.

Teoh NC, Bowden FJ. The case for resurrecting the long case. BMJ. 2008;336(7655):1250–1250.

Mulindwa F, Andia I, McLaughlin K, Kabata P, Baluku J, Kalyesubula R, Kagimu M, Ocama P. A quality improvement project assessing a new mode of lecture delivery to improve postgraduate clinical exposure time in the Department of Internal Medicine, Makerere University, Uganda. BMJ Open Qual. 2022;11(2):e001101.

Hattie J, Timperley H. The power of feedback. Rev Educ Res. 2007;77(1):81–112.

Weallans J, Roberts C, Hamilton S, Parker S. Guidance for providing effective feedback in clinical supervision in postgraduate medical education: a systematic review. Postgrad Med J. 2022;98(1156):138–49.


Acknowledgements

Not applicable.

Funding

This research was supported by the Fogarty International Center of the National Institutes of Health under award number 1R25TW011213. The content is solely the responsibility of the author and does not necessarily represent the official views of the National Institutes of Health.

Author information

Authors and Affiliations

School of Medicine, Department of Paediatrics & Child Health, Makerere University, Kampala, Uganda

Jacob Kumakech

School of Biomedical Sciences, Department of Anatomy, Makerere University, Kampala, Uganda

Ian Guyton Munabi

School of Medicine, Department of Radiology, Makerere University, Kampala, Uganda

Aloysius Gonzaga Mubuuke

School of Medicine, Department of Pediatrics & Child Health, Makerere University, Kampala, Uganda

Sarah Kiguli


Contributions

JK contributed to the conception and design of the study, as well as the acquisition, analysis, and interpretation of the data. He also drafted the initial version of the work and approved the submitted version. He agrees to be personally accountable for his contribution and to ensure that any questions related to the accuracy or integrity of any part of the work, even those in which he was not personally involved, are appropriately investigated and resolved, with the resolution documented in the literature.

IMG contributed to the analysis and interpretation of the data. He also made major corrections to the first draft of the manuscript and approved the submitted version. He agrees to be personally accountable for his contribution and to ensure that any questions related to the accuracy or integrity of any part of the work, even those in which he was not personally involved, are appropriately investigated and resolved, with the resolution documented in the literature.

MA contributed to the analysis and interpretation of the data. He made major corrections to the first draft of the manuscript and approved the submitted version. He agrees to be personally accountable for his contribution and to ensure that any questions related to the accuracy or integrity of any part of the work, even those in which he was not personally involved, are appropriately investigated and resolved, with the resolution documented in the literature.

SK made major corrections to the first draft and the final corrections for the submitted version of the work. She agrees to be personally accountable for her contribution and to ensure that any questions related to the accuracy or integrity of any part of the work, even those in which she was not personally involved, are appropriately investigated and resolved, with the resolution documented in the literature.

Corresponding author

Correspondence to Jacob Kumakech .

Ethics declarations

Ethical approval

Ethical approval to conduct the study was obtained from the Makerere University School of Medicine Research and Ethics Committee, with ethics ID Mak-SOMREC-2022-524. Informed consent was obtained from all participants using the Mak-SOMREC informed consent form.

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Kumakech, J., Munabi, I.G., Mubuuke, A.G. et al. Experiences of medical students and faculty regarding the use of long case as a formative assessment method at a tertiary care teaching hospital in a low resource setting: a qualitative study. BMC Med Educ 24 , 621 (2024). https://doi.org/10.1186/s12909-024-05589-7


Received : 04 April 2024

Accepted : 22 May 2024

Published : 05 June 2024

DOI : https://doi.org/10.1186/s12909-024-05589-7


Keywords

  • Formative assessment
  • Medical education
  • Low-resource setting


research articles using qualitative methods

  • Open access
  • Published: 01 July 2022

Challenges to implementing artificial intelligence in healthcare: a qualitative interview study with healthcare leaders in Sweden

  • Lena Petersson 1 ,
  • Ingrid Larsson 1 ,
  • Jens M. Nygren 1 ,
  • Per Nilsen 1 , 2 ,
  • Margit Neher 1 , 3 ,
  • Julie E. Reed 1 ,
  • Daniel Tyskbo 1 &
  • Petra Svedberg 1  

BMC Health Services Research volume  22 , Article number:  850 ( 2022 ) Cite this article

28k Accesses

61 Citations

30 Altmetric

Metrics details

Artificial intelligence (AI) for healthcare presents potential solutions to some of the challenges faced by health systems around the world. However, it is well established in implementation and innovation research that novel technologies are often resisted by healthcare leaders, which contributes to their slow and variable uptake. Although research on various stakeholders’ perspectives on AI implementation has been undertaken, very few studies have investigated leaders’ perspectives on the issue of AI implementation in healthcare. It is essential to understand the perspectives of healthcare leaders, because they have a key role in the implementation process of new technologies in healthcare. The aim of this study was to explore challenges perceived by leaders in a regional Swedish healthcare setting concerning the implementation of AI in healthcare.

The study takes an explorative qualitative approach. Individual, semi-structured interviews were conducted from October 2020 to May 2021 with 26 healthcare leaders. The analysis was performed using qualitative content analysis, with an inductive approach.

The analysis yielded three categories, representing three types of challenge perceived to be linked with the implementation of AI in healthcare: 1) Conditions external to the healthcare system; 2) Capacity for strategic change management; 3) Transformation of healthcare professions and healthcare practice.

Conclusions

In conclusion, healthcare leaders highlighted several implementation challenges in relation to AI within and beyond the healthcare system in general and their organisations in particular. The challenges comprised conditions external to the healthcare system, internal capacity for strategic change management, along with transformation of healthcare professions and healthcare practice. The results point to the need to develop implementation strategies across healthcare organisations to address challenges to AI-specific capacity building. Laws and policies are needed to regulate the design and execution of effective AI implementation strategies. There is a need to invest time and resources in implementation processes, with collaboration across healthcare, county councils, and industry partnerships.


The use of artificial intelligence (AI) in healthcare can potentially enable solutions to some of the challenges faced by healthcare systems around the world [ 1 , 2 , 3 ]. AI generally refers to a computerized system (hardware or software) that is equipped with the capacity to perform tasks or reasoning processes that we usually associate with the intelligence level of a human being [ 4 ]. AI is thus not one single type of technology but rather many different types within various application areas, e.g., diagnosis and treatment, patient engagement and adherence, and administrative activities [ 5 , 6 ]. However, when implementing AI technology in practice, certain problems and challenges may require an optimization of the method in combination with the specific setting. We may therefore define AI as complex sociotechnical interventions as their success in a clinical healthcare setting depends on more than the technical performance [ 7 ]. Research suggests that AI technology may be able to improve the treatment of many health conditions, provide information to support decision-making, minimize medical errors and optimize care processes, make healthcare more accessible, provide better patient experiences and care outcomes as well as reduce the per capita costs of healthcare [ 8 , 9 , 10 ]. Even if the expectations for AI in healthcare are great [ 2 ], the potential of its use in healthcare is far from having been realized [ 5 , 11 , 12 ].

Most of the research on AI in healthcare focuses heavily on the development, validation, and evaluation of advanced analytical techniques, and the most significant clinical specialties for this are oncology, neurology, and cardiology [ 2 , 3 , 11 , 13 , 14 ]. There is, however, a current research gap between the development of robust algorithms and the implementation of AI systems in healthcare practice. The conclusion of newly published reviews addressing regulation, privacy and legal aspects [ 15 , 16 ], ethics [ 16 , 17 , 18 ], clinical and patient outcomes [ 19 , 20 , 21 ] and economic impact [ 22 ] is that further research is needed in real-world clinical settings, although the clinical implementation of AI technology is still at an early stage. There are no studies describing implementation frameworks or models that could inform us concerning the role of barriers and facilitators in the implementation process and relevant implementation strategies for AI technology [ 23 ]. This illustrates a significant knowledge gap on how to implement AI in healthcare practice and how to understand the variation in acceptance of this technology among healthcare leaders, healthcare professionals, and patients [ 14 ]. It is well established in implementation and innovation research that novel technologies, such as AI, are often resisted by healthcare leaders, which contributes to their slow and variable uptake [ 13 , 24 , 25 , 26 ]. New technologies often fail to be implemented and embedded in practice because healthcare leaders do not consider how they fit with or impact existing healthcare work practices and processes [ 27 ]. However, how AI technologies should be implemented in healthcare practice remains largely unexplored.

Based on literature from other scientific fields, we know that leaders’ interest and commitment are widely recognized as important factors for the successful implementation of new innovations and interventions [ 28 , 29 ]. The implementation of AI in healthcare is thus expected to require leaders who understand the state of various AI systems. The leaders have to drive and support the introduction of AI systems, their integration into existing or altered work routines and processes, and the ways in which AI systems can be deployed to improve efficiency, safety, and access to healthcare services [ 30 , 31 ]. There is convincing evidence from outside the healthcare field of the importance of leadership for organizational culture and performance [ 32 ], the implementation of planned organizational change [ 33 ], and the implementation and stimulation of organizational innovation [ 34 ]. The relevance of leadership to implementing new practices in healthcare is reflected in many of the theories, frameworks, and models used in implementation research that analyses barriers to and facilitators of its implementation [ 35 ]. For example, Promoting Action on Research Implementation in Health Services [ 36 ], the Consolidated Framework for Implementation Research (CFIR) [ 37 ], Active Implementation Frameworks [ 38 ], and Tailored Implementation for Chronic Diseases [ 39 ] all refer to leadership as a determinant of successful implementation. Although these implementation models are available and frequently used in healthcare research, they are highly abstract and not tailored to the implementation of AI systems in healthcare practices. We thus do not know if these models are applicable to AI as a socio-technical system or if other determinants are important for the implementation process. Likewise, based on a new literature study, we found no AI-specific implementation theories, frameworks, or models that could provide guidance on how leaders could facilitate the implementation and realize the potential of AI in healthcare [ 23 ]. We thus need to understand what the unique challenges are when implementing AI in healthcare practices.

Research on various stakeholder perspectives on AI implementation in healthcare has been undertaken, including studies involving professionals [ 40 , 41 , 42 , 43 ], patients [ 44 ], and industry partners [ 42 ]. However, very few studies have investigated the perspectives of healthcare leaders. This is a major shortcoming, given that healthcare leaders are expected to have a key role in the implementation and use of AI for the development of healthcare. Petitgand et al.’s study [ 45 ] serves as a notable exception. They interviewed healthcare managers, providers, and organizational developers to identify barriers to integrating an AI decision-support system to enhance diagnostic procedures in emergency care. However, that study did not focus specifically on the leaders’ perspectives and was limited to one particular type of AI solution in one specific care department. Our present study extends beyond any specific technology and encompasses the whole socio-technical system around AI technology. The present study thus aimed to explore challenges perceived by leaders in a regional Swedish healthcare setting regarding the implementation of AI systems in healthcare.

This study took an explorative qualitative approach to understanding healthcare leaders’ perceptions in contexts in which AI will be developed and implemented. The knowledge generated from this study will inform the development of strategies to support AI implementation and help avoid potential barriers. The analysis was based on qualitative content analysis with an inductive approach [ 46 ]. Qualitative content analysis is widely used in healthcare research [ 46 ] to find similarities and differences in the data, in order to understand human experiences [ 47 ]. To ensure trustworthiness, the study is reported in accordance with the Consolidated Criteria for Reporting Qualitative Research (COREQ) 32‐item checklist [ 48 ].

The study was conducted in a county council (also known as “region”) in the south of Sweden. The Swedish healthcare system is publicly financed based on local taxation; residents are insured by the state and there is a vision that healthcare should be equally accessible across the population. Healthcare responsibility is decentralized to 21 county councils, whose responsibilities include healthcare provision and promotion of good health for citizens.

The county council under investigation has since 2016 invested financial, personnel and service resources to enable agile analysis (based on machine learning models) of clinical and administrative patient data in healthcare [ 49 , 50 ]. The ambition is to gain more value from the data, utilizing insights drawn from machine learning on healthcare data to make fact-based decisions on how healthcare is managed, organized, and structured in routines and processes. The focus is thus on overall issues around management, staffing, planning and standardization for the optimization of resource use, workflows, patient trajectories and quality improvement at system level. This includes several layers within the socio-technical ecosystem around the technology, dealing with: a) generating, cleaning, and labeling data, b) developing models and verifying, assuring, and auditing AI tools and algorithms, c) incorporating AI outputs into clinical decisions and resource allocation, and d) shaping new organizational structures, roles, and practices. Given that AI thus extends beyond any specific technology and encompasses the whole socio-technical system around the technology, in the context of this article it is hereafter referred to generically as ‘AI systems’. We deliberately sought to understand the broad perspectives of healthcare leaders in a region that has a high level of support for AI developments, and our study thus focuses on the potential of a wide range of AI systems that could emerge from the regional investments, rather than on a specific AI application or AI algorithm.
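Purely as an illustration of the layered structure described above, the short sketch below expresses the four socio-technical layers as ordered stages around an AI system. The enum and its labels are ours, introduced only for illustration; they are not part of the county council's actual tooling.

```python
from enum import Enum

# Illustrative only: the four socio-technical layers around an AI system,
# as described in the text, expressed as ordered pipeline stages.
class AISystemLayer(Enum):
    DATA = "generating, cleaning and labeling data"
    MODELS = "developing, verifying, assuring and auditing AI tools and algorithms"
    CLINICAL_USE = "incorporating AI outputs into clinical decisions and resource allocation"
    ORGANIZATION = "shaping new organizational structures, roles and practices"

for layer in AISystemLayer:
    print(f"{layer.name}: {layer.value}")
```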

Participants

Given the focus on understanding healthcare leaders’ perceptions, we purposively recruited leaders who were in a position to potentially influence the implementation and use of AI systems in relation to the setting described above. To achieve variation, these leaders belonged to three groups: (1) politicians at the highest county council level; (2) managers at various levels, such as the hospital director, the manager for primary care, the manager for knowledge and evidence, and the head of the research and development center; and (3) quality developers and strategists with responsibilities for strategy-based work at county council level or development work in various divisions of the county council healthcare organization.

The ambition was to include leaders with a range of experiences and interests and with different mandates and responsibilities in relation to funding, running, and sustaining the implementation of AI systems in practice. A sample of 28 healthcare leaders was invited through snowball recruitment; two declined and 26 agreed to participate (Table 1 ). The sample started from five individuals originally identified on the basis of their knowledge and insights; after being interviewed, they identified and suggested further leaders to interview.

Data collection

Individual semi-structured interviews were conducted between October 2020 and May 2021, via phone or video communication, by one of the authors (LP or DT). We started from a broad perspective on AI, focusing bottom-up on healthcare leaders’ perceptions rather than on the views of AI experts or healthcare professionals who work with specific AI algorithms in clinical practice. The interviews were based on an interview guide structured around: 1) the roles and previous experiences of the informants regarding the application of AI systems in practice, 2) the opportunities and problems that need to be considered to support implementation of AI systems, 3) beliefs and attitudes towards the possibilities of using AI systems to support healthcare improvements, and 4) the obstacles, opportunities and facilitating factors that need to be considered to enable AI systems to fit into existing processes, methods and systems. The interview guide was thus based on important factors previously identified for implementing technology in healthcare [ 51 , 52 ]. Interviews lasted between 30 and 120 min, with a total length of 23 h and 49 min, and were audio-recorded.
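As a quick consistency check of the figures reported above (an illustrative calculation only, not part of the study protocol), the mean interview length can be derived from the total recording time and the number of interviews:

```python
# Hypothetical sanity check based on the figures reported in the text.
n_interviews = 26
total_minutes = 23 * 60 + 49              # 23 h 49 min of audio in total
mean_minutes = total_minutes / n_interviews
print(f"Mean interview length: {mean_minutes:.0f} min")  # ~55 min
assert 30 <= mean_minutes <= 120          # within the reported 30-120 min range
```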

Data analysis

An inductive qualitative content analysis [ 46 ] was used to analyze the data. First, the interviews were transcribed verbatim and read several times by the first (LP) and second (IL) authors, to gain familiarity. Then, the first (LP) and second (IL) authors conducted the initial analyses of the interviews, by identifying and extracting meaning units and/or phrases with information relevant to the object of the study. The meaning units were then abstracted into codes, subcategories, and categories. The analytical process was discussed continuously between authors (LP, IL, JMN, PN, MN, PS). Finally, all authors, who are from different disciplines, reviewed and discussed the analysis to increase the trustworthiness and rigour of the analysis. To further strengthen the trustworthiness, the leaders’ quotations used in this paper were translated from Swedish to English by a native English-speaking professional proofreader and were edited only slightly to improve readability.
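To make the analytic hierarchy concrete (meaning units abstracted into codes, which are grouped into subcategories and categories), the sketch below shows one possible way of representing the coding structure. It is not the software used by the authors, and the example meaning unit and code are hypothetical; only the category and subcategory names are taken from the results reported below.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative representation of the coding hierarchy in inductive
# qualitative content analysis: meaning unit -> code -> subcategory -> category.

@dataclass
class MeaningUnit:
    interview_id: str   # e.g. "Leader X" (hypothetical)
    text: str           # verbatim extract relevant to the study aim
    code: str           # condensed label abstracted from the meaning unit

@dataclass
class SubCategory:
    name: str
    codes: List[str] = field(default_factory=list)

@dataclass
class Category:
    name: str
    subcategories: List[SubCategory] = field(default_factory=list)

# Hypothetical example of how a single coded unit rolls up into the hierarchy
unit = MeaningUnit(
    interview_id="Leader X",
    text="Who is responsible when the advice from the algorithm turns out to be wrong?",
    code="uncertainty about accountability for AI-based advice",
)
sub = SubCategory("Addressing liability issues and legal information sharing", [unit.code])
cat = Category("Conditions external to the healthcare system", [sub])

print(f"{cat.name} > {sub.name} > {sub.codes[0]}")
```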

Three categories consisting of nine sub-categories emerged from the analysis of the interviews with the healthcare leaders (Fig.  1 ). Conditions external to the healthcare system concern various exogenous conditions and circumstances beyond the direct control of the healthcare system that the leaders believed could affect AI implementation. Capacity for strategic change management reflects endogenous influences and internal requirements related to the healthcare system that the leaders suggested could pose challenges to AI implementation. Transformation of healthcare professions and healthcare practice concerns challenges to AI implementation observed by the leaders, in terms of how AI might change professional roles and relations and its impact on existing work practices and routines.

Fig. 1 Categories and subcategories

Conditions external to the healthcare system

Addressing liability issues and legal information sharing

The healthcare leaders described the management of existing laws and policies as a challenge for the implementation of AI systems in healthcare and an issue that was essential to address. According to them, the existing laws and policies have not kept pace with technological developments and the organization of healthcare in today’s society, and need to be revised to clarify liability.

The division of accountability among individuals, organizations, and AI systems for decisions made with support from an AI algorithm was perceived as a risk and an element that needed to be addressed. However, accountability is not addressed in existing laws, which were perceived by the leaders to present problematic uncertainties in terms of responsibilities. They raised concerns about where responsibility lies for decisions made by AI algorithms, such as when an AI algorithm run in one part of the system identifies actions that should be taken in another part of the system. For example, if a patient is given AI-based advice suggesting self-care from a county council-operated patient portal for triaging, when the advice should instead have been to visit the emergency department, who is responsible: the AI system itself, the developers of the system, or the county council? Concerns were also raised about accountability if it turns out that the advice was not accurate.

The issue of accountability is a very difficult one. If I agree with what doctor John (AI systems) recommended, where does the burden of proof lie? I may have looked at this advice and thought that it worked quite well. I chose to follow this advice, but can I blame Doctor John? The legislation is a risk that we have to deal with. Leader 7.

Concerns were raised as to how errors would be handled when AI systems contributed to decision making, highlighting the need for clear laws and policies. The leaders emphasized that, if healthcare professionals made erroneous decisions based on AI systems, they could be reported to the Patients Advisory Committee or have their medical license revoked. This impending threat could create a stressful situation for healthcare professionals. The leaders expressed major concerns about whether AI systems would be support systems for healthcare professionals’ decisions or systems that could take automated and independent decisions. They believed that, in the latter case, changes in the laws would be needed before such systems could be implemented in practice. Nevertheless, some leaders anticipated a development where some aspects of care could be provided without any human involvement.

If the legislation is changed so that the management information can be automated, that is to say that they start acting themselves, but they’re not allowed to do that yet. It could, however, be so that you open an app in a few years’ time, then you furnish the app with the information that it needs about your health status. Then the app can write a prescription for medication for you, because it has all the information that is needed. That is not allowed at present, because the judicial authority still need an individual to blame when something goes wrong. But even that aspect will be gradually developed. Leader 2.

According to the leaders, legislation and policies also constituted obstacles to the foundation of AI implementation in healthcare: collecting, using, merging, and analyzing patient information. The limited opportunities to legally access and share information about patients within and between organizations were described as a crucial obstacle to implementing and using AI systems. Another issue was the legal problems that arise when a care provider wants to merge information about patients from different providers, such as the county council and a municipality. For this to take place, it was perceived that a considerable change to the laws regulating the possibilities of sharing information across different care providers would be required. Additionally, there are challenges in how personal data is defined in the laws regulating personal integrity, and in the risk of individuals being identified when the data is used for computerized advanced analytics. The law states that it is not legal to share personal data, but the boundaries of what constitutes personal data in today’s society are changing, due to the increasing amounts of data and the opportunities for complex and intelligent analysis.

You are not allowed to share any personal information. No, we understand that but what is personal information and when is personal information no longer personal information? Because legally speaking it is definitely not just the case of removing the personal identity number and the name, as a computer can still identify who you are at an individual level. When can it not do that? Leader 2.

Thus, according to the healthcare leaders, laws and regulations presented challenges for an organization that wants to implement AI systems in healthcare practice, as laws and regulations have different purposes and may conflict with one another, e.g., the Health and Medical Services Act, the Patient Act and the Secrecy Act. The leaders described how outdated laws and regulations are handled in healthcare practice, by stretching current regulations and by attempting to contribute to changing the laws. They aimed not to give up on visions and ideas, but to try to find gaps in existing laws and to use rather than break the laws. When possible, another way to approach this was to try to influence decision-makers at the national political level to change the laws. The leaders reported that civil servants and politicians in the county council do this lobbying work in different contexts, such as the parliament or the Swedish Association of Local Authorities and Regions (SALAR).

We discuss this regularly with our members of parliament with the aim of influencing the legislative work towards an enabling of the flow of information over boundaries. It’s all a bit old-fashioned. Leader 16.

Complying with standards and quality requirements

The healthcare leaders believed it could be challenging to follow standardized care processes when AI systems are implemented in healthcare. Standardized care processes are an essential feature that has contributed to development and improved quality in Swedish healthcare. However, some leaders expressed that the implementation of AI systems could be problematic because of uncertainties regarding when an AI algorithm is valid enough to be a part of a standardized care process. They were uncertain about which guarantees would be required for a product or service before it would be considered “good enough” and safe to use in routine care. An important legal aspect for AI implementation is the updated EU regulation for medical devices (MDR) that came into force in May 2021. According to one of the leaders, this regulation could be problematic for small innovative companies, as they are not used to these demands and will not always have the resources needed to live up to the requirements. Therefore, the leaders perceived that the county council should support AI companies to navigate these demands, if they are to succeed in bringing their products or services to implementation in standardized care processes.

We have to probably help the narrow, supersmart and valuable ideas to be realized, so that there won’t be a cemetery of ideas with things that could have been good for our patients, if only the companies had been given the conditions and support to live up to the demands that the healthcare services have and must have in terms of quality and security. Leader 2.

Integrating AI-relevant learning in higher education for healthcare staff

The healthcare leaders described that changes needed to be made in professional training, so that new healthcare professionals would be prepared to use digital technology in their practical work. Some leaders were worried that basic-level education for healthcare professionals, such as physicians, nurses, and assistant nurses, has too little focus on digital technology in general, and AI systems in particular. They stated that it is crucial that these educational programs are restructured and adapted to prepare students for the ongoing digitalization of the healthcare sector. Otherwise, recently graduated healthcare professionals will not be ready to take part in utilizing and implementing new AI systems in practice.

I am fundamentally quite concerned that our education, mainly when it comes to the healthcare services. Both for doctors and nurses and also assistant nurses for that matter. That it isn’t sufficiently proactive and prepare those who educate themselves for what will come in the future. // I can feel a certain concern for the fact that our educations do not actually sufficiently prepare our future co-workers for what everybody is talking now about that will take place in the healthcare services. Leader 15.

Capacity for strategic change management

Developing a systematic approach to AI implementation

The healthcare leaders described a need for a systematic approach and for shared plans and strategies at the county council level in order to meet the challenge of implementing AI systems in practice. They recognized that the change will not be successful if it is built on individual interests instead of organizational perspectives. According to the leaders, the county council has focused on building the technical infrastructure that enables the use of AI algorithms. The county council has tried to establish a way of working with multi-professional teams around each application area for AI-based analysis. However, the leaders expressed that it is necessary to look beyond the technology development and plan for the implementation at a much earlier stage in the development process. They believed that their organization generally underestimated the challenges of implementation in practice. Therefore, the leaders believed that it was essential that the politicians and the highest leadership in the county council both support and prioritize the change process. This requires an infrastructure for strategic change management, together with clear leadership that has the mandate and the power to prioritize and support both the development of AI systems and their implementation in practice. This is critical for strategic change to be successful.

If the County Council management does not believe in this, then nothing will come of it either, the County Council management have to indicate in some way that this is a prioritized issue. It is this we are going to work with, then it’s not sufficient for a single executive director who pursues this and who thinks it’s interesting. It has to start at the top and then filter right through, but then the politicians have to also believe in this and think that it’s important. Leader 4.

Additionally, the healthcare leaders perceived an increasing interest among unit managers within the organization in using data for AI-based analysis, and that requests for data analysis might need to be prioritized more actively in the future. The leaders expressed that it would not be enough to simply have a shared core facility supporting this. Instead, management at all levels should also be involved and active in prioritization, based on their needs. They also perceived that the implementation of AI systems will demand skilled and structured change management that can prioritize and that is open to new types of leadership and decision-making processes. Support for innovative work will be needed, but also caution, so that change does not proceed too quickly and is sufficiently anchored among the staff. The implementation of AI systems in healthcare was anticipated to challenge old routines and replace them with new ones and, as a result, to meet resistance from the staff. Therefore, a prepared plan at the county council level was perceived to be required for the purpose of “anchoring” with managers at the unit level, so that the overall strategy would be aligned with the needs and views of those who would have to implement it and supported by the knowledge needed to lead the implementation work.

It’s in the process of establishing legitimacy that we have often erred, where we’ve made mistakes and mistakes and mistakes all the time, I’ve said. That we’re not at the right level to make the decisions and that we don’t follow up and see that they understand what it’s about and take it in. It’s from the lowest manager to the middle manager to executive directors to politicians, the decisions have to have been gained legitimacy otherwise we’ll not get the impetus. Leader 21.

The leaders believed that it was essential to consider how to evaluate different parts of the implementation process. They expressed that method development is required within the county council because, at the moment, there is a lack of knowledge and guidelines on how to establish an evidence base for the use of AI systems in practice. There will be a need for a support organization spanning different levels within the county council, to guide and supervise units in the systematic evaluation of AI implementations. There will also be a need for quantitative evaluation of the clinical and organizational effects, and for qualitative assessment that focuses on how healthcare professionals and patients experience the implementation. Additionally, validation and evaluation of AI algorithms will be needed, both before they can be used in routine care and afterwards, to provide evidence of quality improvements and optimization of resources.

I believe that one needs to get an approval in some way, perhaps not from the Swedish Medical Products Agency, but the AI Agency or something similar. I don’t know. The Swedish National Board of Health and Welfare or some agency needs to go in and check that it is a sufficiently good foundation that they have based this algorithm on. So that it can be approved for clinical use. Leader 10.

Furthermore, the leaders described a challenge around how the implementation of AI systems in practice could be made sustainable and last over time. They expressed that the county council should develop strategies in the organization so that it is prepared for sustainability and long-term implementation. At the same time, this is an area of rapid development and high uncertainty about the future, and thus about what AI systems and services will look like in five or ten years and how healthcare professionals and patients will use them. This is a challenge and requires that both leaders and staff are prepared to adjust and change their ways of working during the implementation process, including continuous improvement and the uptake, updating and evolution of technologies and work practices.

The rate of change where digitalization, technology, new technology and AI is concerned is so high and the rate of implementation is low, so this will entail that as soon as we are about to implement something then there is something else in the market that is better. So I think it’s important to dare to implement something that is a little further on in the future. Leader 13.

Ascertaining resources for AI implementation

The leaders emphasized the importance of training for the implementation of AI systems in healthcare. According to them, the county council should provide customized training at the workplace and extra knowledge support for certain professions. This could result in difficult decisions regarding what and whom to prioritize. The leaders discussed whether there was a need to provide all staff with basic training on AI systems, or whether it would be enough to train some of them, such as quality developers, and provide targeted training for some healthcare professionals who are close to the implementation of the AI system at a care unit. Furthermore, the leaders described that the training had to be connected to the implementation of the AI system at a specific care unit, which could present a challenge for planning and realization. They emphasized that it could be a waste of resources to educate the staff beforehand. Staff need to be trained in close connection to the implementation of a specific AI system in their workplace, which thus demands organizational resources and planning.

I think that we often make the mistake of educating first, and then you have to use it. But you have been educated, so now you should know this? Yes, but it is not until we use something that the questions arise. Leader 13.

There could also be a need for patient education and guidance if patients are to use AI systems for self-care or remote monitoring. The leaders thus considered it vital to give all citizens the same opportunities to access and utilize new technical solutions in healthcare.

We treat all our patients equally now, everyone will receive the same invitation, and everyone will need to ring about their appointment, although 99% could really book and do this themselves. Then we should focus on that, and thus return the impetus and the power to the patient and the population for them to take care of this themselves to a greater extent. But then of course information is needed and that in turn needs intuitive systems. That is not something we are known for. Leader 14.

Many of the healthcare leaders found financial resources and time, especially the prioritization of time, to be critical to the implementation process of AI systems. There is already time pressure in many care units, and it can be challenging to set aside time and other resources for the implementation.

Involving staff throughout the implementation process of AI systems

The healthcare leaders stated that anchoring and involving staff and citizens is crucial to the successful implementation of AI systems. The management has to be responsible for the implementation process, but also has to ensure that the staff are aware of and interested in the implementation, based on their needs. Involvement of the staff, together with representatives from patient groups, was considered key to successful implementation and to limiting the risk that the AI system is perceived as unnecessary or is used erroneously. At the same time, the leaders described that it would be important for unit managers to “stand up” for the change that is required, if their staff questioned the implementation.

I think for example that if you’re going to make a successful implementation then you have to perhaps involve the co-workers. You can’t involve all of them, but a representative sample of co-workers and patients and the population who are part of it. // We mess it up time after time, and something comes that we have to implement with short notice. So we try to force it on the organization, so we forget that we need to get the support of the co-workers. Leader 4.

The propensity for change differs both among individuals and within the organization. According to the leaders, this could pose a challenge, since the support required and the needs differ between individuals. The motivational aspect could also vary between different actors, and some leaders claimed that it is crucial to arouse curiosity among healthcare professionals. If the leaders are not motivated and do not believe that the change benefits them, implementation will not be successful. To increase healthcare professionals’ motivation and engagement, the value that will be created for the clinicians has to be made obvious, along with whether the AI system will support them in their daily work.

It has to be beneficial for the clinics otherwise it’s meaningless so to speak. A big risk with AI is that you work and work with data and then algorithms emerge that are sort of obvious. Everyone can do this. It’s why it’s important to have clinical staff in the small agile teams, that there really is a clinical benefit, this actually improves it. Leader 10.

Developing new strategies for internal and external collaboration

The healthcare leaders believed that there was a need for new forms of collaboration and communication within the county council, at both organizational and professional levels. Professionals need to interact with professions other than their own, thus enabling new teamwork and new knowledge. The challenge is for different groups to talk to each other, since they do not always have the same professional language. However, it was perceived that, when these kinds of team collaborations are successful, there will be benefits, such as automation of care processes that are currently handled by humans.

To be successful in getting a person with expert knowledge in computer science to talk to a person with expert knowledge in integrity legislation, to a one who has expert knowledge in the clinical care of a patient. Even if all of them go to work with exactly the same objective, that one person or a few people can live a bit longer or feel a bit better, then it’s difficult to talk with each other because they use essentially different languages. They don’t know much about what knowledge the other has, so just getting that altogether. Leader 2.

In the leaders’ view, the implementation of AI systems would require the involvement and collaboration of several departments in the county council, across organizational boundaries, and with external actors. A perceived challenge was that half of the primary care units are owned by private care providers, over which the county council has limited jurisdiction, which complicates the dissemination of common ways of working. Additionally, the organization of the county council and its boundaries might have to be reviewed to enable different professions to work together and interact on an everyday basis.

The complexity in terms of for example apps is very, very, very much greater, we see that now. Besides there being this app, so perhaps the procurement department must be involved, the systems administration must definitely be involved, the knowledge department must be involved and the digitalization department, there are so many and the finance department of course and the communication department, the system is thus so complex. Leader 9.

There was also consensus among the healthcare leaders that the county council should collaborate with companies in the implementation of AI systems and should not handle such processes on its own. An eco-system of actors working on AI systems implementation, with shared goals for the joint work, is required. The leaders expressed that companies must be supported and invited to collaborate within the county council’s organization at an early stage. In that way, pitfalls regarding legal or technical aspects can be discovered early in product development. Similar relations and dialogues are also needed with patients, to succeed with an implementation that is not primarily based on technical possibilities but on patients’ needs. Transparency is essential for patients’ awareness of the AI systems’ functions and for the reliability of outcomes.

This is born out of a management philosophy, which is based on the principle of not being able to command everything oneself, one has to be humble, perceptive about not being able to do it. One needs to invite others to be there and help with the solution. Leader 16.

Transformation of healthcare professions and healthcare practice

Managing new roles in care processes

The healthcare leaders described a need for new professions and professional roles in healthcare for AI systems implementation. All professional groups in today’s healthcare sector were expected to be affected by these changes, particularly the work unit managers responsible for daily work processes and the physicians accountable for the medical decisions. The leaders argued that the changes could challenge traditions, hierarchies, conventional professional roles and division of labour. There might be changes regarding the responsibilities for specific work tasks, changes in professional roles, a need for new professions that do not exist in today’s labour market and the AI systems might replace some work tasks and even professions. A change towards more combined positions at both the county council and a company or a university might also be a result of the development and implementation of AI systems. However, the leaders perceived that, for some healthcare professionals, these ideas are unthinkable, and it may take several years before these changes in roles and care processes become a reality in the healthcare sector.

I think I will be seeing other professions in the healthcare services who have perhaps not received a healthcare education. It will be a culture shock, I think. It also concerns that you may perhaps not need to be medically trained, for sitting there and checking those yellow flags or whatever they are, or it could perhaps be another type of professional group. I think that it would actually be good. We have to start economizing with the competencies we now have and it’s difficult enough to manage. Leader 15.

The acceptance of AI systems may vary within and between professional groups, ages, and areas of specialized care. The leaders feared that the implementation of AI systems would change physicians’ knowledge base and that there would be a loss of knowledge that could be problematic in the long run. The leaders argued that younger, more recently graduated physicians would never be able to accumulate experience-based knowledge to the extent that their older colleagues have done, as they will rely more on AI systems to support their decisions. Thus, on the one hand, professional roles and self-images might be threatened when output from the AI systems is argued to be more valid than the recommendation of an experienced physician. On the other hand, physicians who do not “work with their hands” can utilize such output as decision support to complement their experience-based knowledge. It is therefore important that healthcare professionals have trust in recommendations from the AI systems in clinical practice. If some healthcare professionals do not trust the AI systems and their output, there is a risk that they will not use them in clinical practice and will continue to work in the way they are used to, resulting in two parallel systems. This might be problematic, both for the work environment and for the healthcare professionals’ wellbeing. The leaders emphasized that this would represent a challenge for the implementation of AI systems in healthcare.

We can’t add anything more today without taking something else away, I’d say it was impossible. // The level of burden is so high today so it’s difficult to see, it’s not sufficient to say that this will be of use to us in two years’ time. Leader 20.

Implementing AI systems can change existing care processes and the role of the patient. The leaders described that, in primary care, AI systems have the best potential to change existing work processes and make care more efficient, for example through automatic AI-based triage of patients. The AI system could take the anamnesis instead of the healthcare professionals, and do this while patients are still at home, so that the healthcare professionals will not meet the patient unless the AI system has decided that it is necessary. The AI system can also autonomously discover something in a patient’s health status and suggest that the patient contact healthcare staff for follow-up. This use of AI systems could open up opportunities for more proactive and personalized care.

The leaders also described that the implementation of AI systems in practice could facilitate an altered patient role. The development that is taking place in the healthcare sector with, for instance, patient-reported data, enables and, in some cases, requires an active and committed patient that takes part in his or her care process. The leaders mentioned that there might be a need for patient support. Otherwise, there might be a risk that only patients with high digital literacy would be able to participate with valid data. The leaders described that AI systems could facilitate this development, by recommending self-care advice to patients or empowering them to make decisions. Still, there were concerns that not all patients would benefit from AI systems, due to variations in patients’ capabilities and literacy.

We also deal with people who are ill, we must also have respect for that. Everyone will not be able to use these tools. Leader 7.

Building trust for AI systems acceptance in clinical practice

A challenge and prerequisite for implementing AI systems in healthcare is that the technology meets expectations on quality in order to support healthcare professionals in their practical work, such as having a solid evidence base, being thoroughly validated and meeting requirements for equality. It is important to have confidence in the validity of the data, the algorithms and their output. A key challenge pointed out was the need to have a sufficiently large population base, the “right” type of data and the right populations to build valid AI systems. For common conditions, where rich data exists on which to base AI algorithms, the leaders believed the reliability would be high. For unusual conditions, there were concerns that accuracy would be lower. Questions were also raised about how AI systems take aspects of equity and equality into account, such as gender and ethnicity. The leaders expressed concern that, due to these obstacles, AI systems might not be suitable for certain unusual or complex conditions.

Then there is a challenge with the new technology, whether it’s Ok to apply it. Because it’s people who are affected, people’s health and lives that are affected by the new technology. How can we guarantee that it delivers what it says it will deliver? It must be safe and reviewed, validated and evidence-based in order for us to be able to use it. If a bug is built in then the consequences can be enormous. Leader 2.

A lack of confidence in the reliability of AI systems was also described, and was expected to place higher demands and requirements on their accuracy than on similar assessments made by humans. Thus, acceptance depends on confidence that AI systems are highly sensitive and can diagnose conditions at earlier stages than skilled healthcare professionals. The leaders perceived that the “black box”, i.e. what the AI algorithms’ calculations are based on, needs to be understood in order for the systems to be considered reliable. Thus, reliance on the outputs from AI algorithms depends on reliance on the algorithm itself and on the data used for its calculations.

There are a number of inherent problems with AI. It’s a little black box. AI looks at all the data. AI is not often easy to explain, “oh, you’ve got a risk, that it passed the cut-off value for that person or patient”, no because it weighs up perhaps a hundred different dimensions in a mathematical model. AI models are often called a black box and there have been many attempts at opening that box. The clinics are a bit skeptical then when they are not able to, they just get a risk score, I would say. Leader 10.

Big data sets are important for quality, but the leaders stated that too much information about a patient could also be problematic. There is a risk that information about a patient is available to healthcare professionals who should not have that information. The leaders believed that this could already be a problem today, but that the risk would increase in the future. This challenge needs to be handled as the amount of patient information increases and as more healthcare professionals get access to such information when it is used in AI systems, regardless of the reason for the patient’s contact with the healthcare unit. Another challenge and prerequisite for implementing AI systems in healthcare is that the technology is user-friendly and creates value for both healthcare professionals and patients. The leaders expected AI systems to be user-friendly, self-instructing, and easy to use, without requiring too much prior knowledge or training. In addition to being easy to use, the AI systems must also be time-saving, never time-consuming, and not dependent on the addition of yet more digital operative systems to work with. Using AI systems should, in some cases, be equated with having a second opinion from a colleague, when it comes to simplicity and time consumption.

An easy way to receive this support is needed. One needs to ask a number of questions in order to receive the correct information. But it mustn’t be too complicated, and it mustn’t take time, then nothing will come of it. Leader 4.

The leaders expected that AI systems would place the patients in focus and thereby contribute to more person-centred care. These expectations were based on the large amounts of data on which AI algorithms are built, which the leaders perceived would make it possible to individualize assessments and treatment options, enabling more person-centred and value-creating care for patients. AI systems could also potentially contribute to making healthcare more efficient without compromising quality. This was seen as an opportunity to meet citizens’ increasing future needs for care with a reduced number of healthcare professionals. Smart and efficient AI systems used in investigations, assessments, and treatments can streamline care and allow more patients to receive care. Making healthcare efficient was also about the idea that AI systems should contribute to improved communication within and between caregivers, in both public and private care. Using AI systems to follow up the care given and to evaluate the quality of care with other caregivers was highlighted, along with the risk that the increased efficiency provided by AI systems could result in a loss of essential values for healthcare and in impaired care.

I think that automatization via AI would be a safe way and it would be perfect for the primary care services. It would have entailed that we have more hands, that we can meet the patients who need to be met and that we can meet more often and for longer periods and perhaps do more house calls and just be there where we are needed a little more and help these a bit more easily. Leader 13.

The challenges described by the leaders in the present study are an important contribution to improving knowledge regarding the determinants influencing the implementation of AI systems in healthcare. Our results showed that healthcare leaders perceived challenges to AI implementation concerning the handling of conditions external to the healthcare system, the building of internal capacity for strategic change management, and the transformation of professional roles and practices. While implementation science has advanced knowledge concerning determinants for successful implementation of digital technology in healthcare [ 53 ], our study is one of the few that have investigated leaders’ perceptions of the implementation of AI systems in healthcare. Our findings demonstrate that the leaders’ concerns do not lie so much with the specific technological nuances of AI as with the more general factors relating to how such AI systems can be channeled into routine service organization, regulation and practice delivery. These findings demonstrate the breadth of concerns that leaders perceive to be important for the successful application of AI systems and therefore suggest areas for further advancement in research and practice. However, the findings also demonstrate a potential risk that, even in a county council with a high level of investment and strategic support for AI systems, there is a lack of technical expertise and awareness of the AI-specific challenges that might be encountered. This could cause challenges for the collaboration between the developers of AI systems and healthcare leaders if there is a mismatch in understanding of the nature and scope of the problem they are seeking to address, and of the practical and technical details of both AI systems and healthcare operational issues [ 7 ]. This suggests that people who are conversant in the languages of both stakeholder groups may be necessary to facilitate communication and collaboration across professional boundaries [ 54 ]. Importantly, these findings demonstrate that addressing the technological challenges of AI alone is unlikely to be sufficient to support its adoption into healthcare services, and AI developers are likely to need to collaborate with experts in healthcare implementation and improvement science in order to address the wider systems issues that this study has identified.

The healthcare leaders perceived challenges resulting from external conditions and circumstances, such as ambiguities in existing laws and in sharing data between organizations. The external conditions highlighted in our study resonate with the outer setting in the implementation framework CFIR [ 37 ], which is described in terms of governmental and other bodies that exercise control, with the help of policies and incentives that influence readiness to implement innovations in practice. The challenges described in our study resulted in uncertainties concerning responsibilities in relation to the development and implementation of AI systems and what one was allowed to do, giving rise to legal and ethical considerations. The external conditions and circumstances were recognized by the leaders as having considerable impact on the possibility of implementing AI systems in practice, although they acknowledged that these were beyond their direct influence. This suggests that, when it comes to the implementation of AI systems, the influence of individual leaders is largely restricted and bounded. Healthcare leaders in our study perceived that policy and regulation cannot keep up with the national interest in implementing AI systems in healthcare. According to the leaders, concerted and unified initiatives by national authorities are required here. Despite the fact that the introduction of AI systems in healthcare appears to be inevitable, the consideration of existing regulatory and ethical mechanisms appears to be slow [ 16 , 18 ]. Additionally, another challenge attributable to the setting was the lack of education to increase the competence and expertise in AI systems among professionals, which could be a potential barrier to the implementation of AI in practice. The leaders reflected on the need for future higher education programs to provide healthcare professionals with better knowledge of AI systems and their use in practice. Although digital literacy is described as important for healthcare professionals [ 55 , 56 ], higher education faces many challenges in meeting the emerging requirements and demands of society and healthcare.

The healthcare leaders addressed the fact that the healthcare system’s internal capacity for strategic change management is a huge challenge, but at the same time of great importance for a successful and sustainable implementation of AI systems in the county council. The leaders highlighted the need to create an infrastructure and joint venture, with common structures and processes, to promote the capability to work with implementation strategies for AI systems at a regional level. This was needed to obtain a lasting improvement throughout the organization and to meet organizational goals, objectives, and missions. This highlights that the implementation of change within an organization is a complex process that does not depend solely on individual healthcare professionals’ change responses [ 57 ]. We need to focus on factors such as organisational capacity, climate, culture and leadership, which are common factors within the “inner setting” in CFIR [ 37 ]. The capacity to put innovations into practice consists of activities related to maintaining a functioning organization and delivery system [ 58 ]. Implementation research has most often focused on the implementation of various individual, evidence-based practices, typically (digital) health interventions [ 59 ]. However, AI implementation represents a more substantial and more disruptive form of change than is typically involved in implementing new practices in healthcare [ 60 ]. Although there are likely many similarities between AI systems and other new digital technologies implemented in healthcare, there may also be important differences. For example, our results and other AI research have acknowledged that the lack of transparency (i.e. the “black box” problem) might yield resistance to some AI systems [ 61 ]. This problem is probably less apparent when implementing various evidence-based practices that are based on empirical research conducted according to well-established principles and are therefore considered trustworthy [ 62 ]. Ethical and trust issues were also highlighted in our study as playing a prominent role in AI implementation, perhaps more prominently than in “traditional” implementation of evidence-based practices. There might thus be AI-specific characteristics that are not really part of the existing frameworks and models currently used in implementation science.

Transformation of healthcare professions and healthcare practice

The healthcare leaders perceived that the use of AI in practice could transform professional roles and practices, and that this could be an implementation challenge. They reflected on how the implementation of AI systems would potentially impact provider-patient relationships and how shifts in professional roles and responsibilities in the service system could potentially lead to changes in clinical processes of care. The leaders’ concerns related to the compatibility of new ways of working with existing practice, which is an important innovation characteristic highlighted in the Diffusion of Innovations theory [ 63 ]. According to the theory, compatibility with existing values and past experiences facilitates implementation. The leaders in our study also argued that it was important to see the value of AI systems for both professionals and service-users. Unless the benefits of using AI systems are observable, healthcare professionals will be reluctant to drive the implementation forward. The importance of observability for the adoption of innovations is also addressed in the Diffusion of Innovations theory [ 63 ], observability being the degree to which the results of an innovation are visible to its users. The leaders in our study conveyed the importance for healthcare professionals of having trust and confidence in the use of AI systems. They discussed uncertainties regarding accountability and liability in situations where AI systems impact directly or indirectly on human healthcare, and how ambiguity and uncertainty about AI systems could lead to healthcare workers lacking trust in the technology. Trust in relation to AI systems is widely reflected on as a challenge in healthcare research [ 30 , 41 , 64 , 65 , 66 ]. The leaders also perceived that the expectations of patient-centeredness and usability (efficacy and usefulness) for service users could be a potential challenge in connection with AI implementation. Their concerns are echoed in a review by Buchanan et al. [ 67 ], in which it was observed that the use of AI systems could serve to weaken the person-centred relationships between healthcare professionals and patients.

In summary, the expectations for AI in healthcare are high in society and the technological impetus is strong. A lack of “translation” of the technology is in some ways part of the initial difficulty of implementing AI, because implementation strategies that might facilitate testing and clinical use of AI, and thereby demonstrate its value in regular healthcare practice, still need to be developed. Our results relate well to the implementation science literature, identifying implementation challenges attributable to both external and internal conditions and circumstances [ 37 , 68 , 69 ] and to the characteristics of the innovation [ 37 , 63 ]. However, the leaders in our study also pointed out the importance of establishing an infrastructure and common strategies for change management at the system level in healthcare. Thus, introducing AI systems and the required changes in healthcare practice should not be dependent only on early adopters at particular units. This resonates with the Theory of Organizational Readiness for Change [ 70 ], which emphasizes the importance of an organization being both willing and able to implement an innovation [ 71 ]. The theory posits that, although organizational willingness is one of the factors that may facilitate the introduction of an innovation into practice, both the organization’s general capacities and its innovation-specific capacities for adoption and sustained use of an innovation are key to all phases of the implementation process [ 71 ].

Methodological considerations

In qualitative research, the concepts of credibility, dependability, and transferability are used to describe different aspects of trustworthiness [ 72 ]. Credibility was strengthened by the purposeful sampling of participants with a variety of experiences and a crucial role in any implementation process. It is considered highly relevant to investigate the challenges that leaders in the county council expressed concerning the implementation of various AI systems in healthcare, given that preparation for implementing AI systems is a current issue in many Swedish county councils. Furthermore, the research team members’ familiarity with the methodology, together with their complementary knowledge and backgrounds, enabled a nuanced and in-depth analysis of the empirical material and was another strength of the study.

Dependability was strengthened by using an interview guide to ensure that the same opening questions were put to all participants and that they were encouraged to talk openly. Because this study took place during the COVID-19 pandemic, the interviews were performed either at a distance, using the Microsoft Teams application, or face-to-face; this variation might be a limitation. However, according to Archibald et al. [ 73 ], distance interviewing with videoconferencing services, such as Microsoft Teams, can be beneficial and even preferred. Given the knowledge gap regarding the implementation of AI systems in healthcare, the authors chose an inductive qualitative approach to explore healthcare leaders’ perceptions of implementation challenges. It might be that the implementation of AI systems largely aligns with the implementation of other digital technologies or techniques in healthcare. A strength of our study is that it focuses on perceptions of AI systems in general, regardless of the type of AI algorithm or the context or area of application. However, one potential limitation of this approach is the possibility that more specific AI systems and/or areas of application may be associated with somewhat different challenges. Further studies specifying such boundaries will provide more specific answers, but will probably also require that the investigation be conducted in connection with the actual implementation of a specific AI system and be based on participants’ experiences of having taken part in the implementation process. With this in mind, we encourage future research to take this into account when deciding upon study designs.

Transferability was strengthened by a rich presentation of the results along with appropriate quotations. However, a limitation could be that all the healthcare leaders work in the same county council, so transferability to other county councils must be considered with caution. In addition, an important contextual factor that might influence whether, and how, the findings observed in this study occur in other settings concerns the nature of, and approach to, AI implementation. AI could be considered a rather broad concept, and while we adopted a broad and general approach to AI systems in order to understand healthcare leaders’ perceptions, we would perhaps expect more specific AI systems and/or areas of application to be associated with different challenges. Taken together, these are aspects that may affect the extent to which our results can be transferred to other contexts. We thus suggest that future research investigate the perceptions of healthcare leaders in other empirical contexts and include both more specific and broader AI systems in the study designs.

In conclusion, the healthcare leaders highlighted several implementation challenges in relation to AI within the healthcare system and beyond the healthcare organization. The challenges comprised conditions external to the healthcare system, internal capacity for strategic change management, and the transformation of healthcare professions and healthcare practice. Based on our findings, the implementation of AI systems in healthcare needs to be seen as an ongoing learning process at all organizational levels, requiring a healthcare system that applies more nuanced systems thinking. It is crucial to involve and collaborate with stakeholders and users inside the regional healthcare system and with other actors outside the organization in order to succeed in developing and applying systems thinking to the implementation of AI. Given that preparing for the implementation of AI systems is a current and shared issue in many Swedish county councils and in other countries, and that our study is limited to one specific county council context, we encourage future studies in other contexts in order to corroborate the findings.

Availability of data and materials

The empirical material generated and/or analyzed during the current study is not publicly available, but is available from the corresponding author on reasonable request.

References

Buch VH, Ahmed I, Maruthappu M. Artificial intelligence in medicine: current trends and future possibilities. Br J Gen Pract. 2018;68(668):143–4.

Mehta N, Pandit A, Shukla S. Transforming healthcare with big data analytics and artificial intelligence: A systematic mapping study. J Biomed Inform. 2019;100: 103311.

Horgan D, Romao M, Morré SA, Kalra D. Artificial Intelligence: Power for Civilisation - and for Better Healthcare. Public Health Genomics. 2019;22(5–6):145–61.

European Union. A definition of AI: main capabilities and scientific disciplines. 2018. https://ec.europa.eu/futurium/en/system/files/ged/ai_hleg_definition_of_ai_18_december_1.pdf

Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019;6(2):94–8.

Shaw J, Rudzicz F, Jamieson T, Goldfarb A. Artificial Intelligence and the Implementation Challenge. J Med Internet Res. 2019;21(7):e13659.

Elish MC. The Stakes of Uncertainty: Developing and Integrating Machine Learning in Clinical Care. Ethnographic Praxis in Industry Conference Proceedings. 2018;2018(1):364–80.

Lee JC. The perils of artificial intelligence in healthcare: Disease diagnosis and treatment. J Comput Biol Bioinformatics Res. 2019;9(1):1–6.

Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44–56.

European Institute of Innovation and Technology (EIT Health). Transforming healthcare with AI. 2020. https://eithealth.eu/wp-content/uploads/2020/03/EIT-Health-and-McKinsey_Transforming-Healthcare-with-AI.pdf

Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4):230–43.

De Nigris S, Craglia M, Nepelski D, Hradec J, Gomez-Gonzales E, Gomez Gutierrez E, et al. AI Watch: AI uptake in health and healthcare. 2020.

He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nat Med. 2019;25(1):30–6.

Alhashmi S, Alshurideh M, Al Kurdi B, Salloum S. A Systematic Review of the Factors Affecting the Artificial Intelligence Implementation in the Health Care Sector. 2020. p. 37–49.

Asan O, Bayrak AE, Choudhury A. Artificial Intelligence and Human Trust in Healthcare: Focus on Clinicians. J Med Internet Res. 2020;22(6):e15154.

Gooding P, Kariotis T. Ethics and Law in Research on Algorithmic and Data-Driven Technology in Mental Health Care: Scoping Review. JMIR Ment Health. 2021;8(6):e24668.

Beil M, Proft I, van Heerden D, Sviri S, van Heerden PV. Ethical considerations about artificial intelligence for prognostication in intensive care. Intensive Care Med Exp. 2019;7(1):70.

Murphy K, Di Ruggiero E, Upshur R, Willison DJ, Malhotra N, Cai JC, et al. Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics. 2021;22(1):14.

Choudhury A, Asan O. Role of Artificial Intelligence in Patient Safety Outcomes: Systematic Literature Review. JMIR Med Inform. 2020;8(7):e18599.

Fernandes M, Vieira SM, Leite F, Palos C, Finkelstein S, Sousa JMC. Clinical Decision Support Systems for Triage in the Emergency Department using Intelligent Systems: a Review. Artif Intell Med. 2020;102:101762.

Yin J, Ngiam KY, Teo HH. Role of Artificial Intelligence Applications in Real-Life Clinical Practice: Systematic Review. J Med Internet Res. 2021;23(4):e25759.

Wolff J, Pauling J, Keck A, Baumbach J. The Economic Impact of Artificial Intelligence in Health Care: Systematic Review. J Med Internet Res. 2020;22(2):e16866.

Gama F, Tyskbo D, Nygren J, Barlow J, Reed J, Svedberg P. Implementation Frameworks for Artificial Intelligence Translation Into Health Care Practice: Scoping Review. J Med Internet Res. 2022;24(1):e32215.

Safi S, Thiessen T, Schmailzl KJ. Acceptance and Resistance of New Digital Technologies in Medicine: Qualitative Study. JMIR Res Protoc. 2018;7(12):e11072.

Whitelaw S, Mamas MA, Topol E, Van Spall HGC. Applications of digital technology in COVID-19 pandemic planning and response. Lancet Digit Health. 2020;2(8):e435–40.

Alami H, Lehoux P, Denis J-L, Motulsky A, Petitgand C, Savoldelli M, et al. Organizational readiness for artificial intelligence in health care: insights for decision-making and practice. J Health Organ Manag. 2021;35(1):106–14.

Reed JE, Howe C, Doyle C, Bell D. Simple rules for evidence translation in complex systems: A qualitative study. BMC Med. 2018;16(1):92.

Reichenpfader U, Carlfjord S, Nilsen P. Leadership in evidence-based practice: a systematic review. Leadersh Health Serv (Bradf Engl). 2015;28(4):298–316.

Nilsen P, Bernhardsson S. Context matters in implementation science: a scoping review of determinant frameworks that describe contextual determinants for implementation outcomes. BMC Health Serv Res. 2019;19(1):189.

Chen M, Decary M. Artificial intelligence in healthcare: An essential guide for health leaders. Healthc Manage Forum. 2020;33(1):10–8.

Loh E. Medicine and the rise of the robots: a qualitative review of recent advances of artificial intelligence in health. BMJ Leader. 2018;2(2):59.

Ogbonna E, Harris LC. Leadership style, organizational culture and performance: empirical evidence from UK companies. Int J Human Res Manage. 2000;11(4):766–88.

Battilana J, Gilmartin M, Sengul M, Pache A-C, Alexander JA. Leadership competencies for implementing planned organizational change. Leadersh Q. 2010;21(3):422–38.

Denti L, Hemlin S. Leadership and innovation in organizations: a systematic review of factors that mediate or moderate the relationship. Int J Innov Manag. 2012;16(03):1240007.

Nilsen P. Overview of theories, models and frameworks in implementation science. In: Nilsen P, Birken SA, editors. Handbook on Implementation Science. Cheltenham: Edward Elgar Publishing Limited; 2020. p. 8–31. https://www.elgaronline.com/view/edcoll/9781788975988/9781788975988.xml

Kitson A, Harvey G, McCormack B. Enabling the implementation of evidence based practice: a conceptual framework. Qual Health Care. 1998;7(3):149–58.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.

Blase KA, Van Dyke M, Fixsen DL, Bailey FW. Key concepts, themes, and evidence for practitioners in educational psychology. In: Handbook of implementation science for psychology in education. New York, NY, US: Cambridge University Press; 2012. p. 13–34.

Flottorp SA, Oxman AD, Krause J, Musila NR, Wensing M, Godycki-Cwirko M, et al. A checklist for identifying determinants of practice: A systematic review and synthesis of frameworks and taxonomies of factors that prevent or enable improvements in healthcare professional practice. Implement Sci. 2013;8(1):35.

Wangmo T, Lipps M, Kressig RW, Ienca M. Ethical concerns with the use of intelligent assistive technology: findings from a qualitative study with professional stakeholders. BMC Med Ethics. 2019;20(1):98.

Shinners L, Aggar C, Grace S, Smith S. Exploring healthcare professionals’ understanding and experiences of artificial intelligence technology use in the delivery of healthcare: An integrative review. Health Informatics J. 2020;26(2):1225–36.

Laï MC, Brian M, Mamzer MF. Perceptions of artificial intelligence in healthcare: findings from a qualitative survey study among actors in France. J Transl Med. 2020;18(1):14.

Diprose WK, Buist N, Hua N, Thurier Q, Shand G, Robinson R. Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. J Am Med Inform Assoc. 2020;27(4):592–600.

Nelson CA, Pérez-Chada LM, Creadore A, Li SJ, Lo K, Manjaly P, et al. Patient Perspectives on the Use of Artificial Intelligence for Skin Cancer Screening: A Qualitative Study. JAMA Dermatol. 2020;156(5):501–12.

Petitgand C, Motulsky A, Denis JL, Régis C. Investigating the Barriers to Physician Adoption of an Artificial Intelligence- Based Decision Support System in Emergency Care: An Interpretative Qualitative Study. Stud Health Technol Inform. 2020;270:1001–5.

Graneheim UH, Lindgren BM, Lundman B. Methodological challenges in qualitative content analysis: A discussion paper. Nurse Educ Today. 2017;56:29–34.

Krippendorff K. Content analysis : an introduction to its methodology. Thousand Oaks, Calif: SAGE; 2013.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

Ashfaq A, Lingman M, Sant’Anna A, Nowaczyk S. Readmission prediction using deep learning on electronic health records. J Biomed Inform. 2019;97:103256.

Ashfaq A, Lönn S, Nilsson H, Eriksson JA, Kwatra J, Yasin ZM, et al. Data Resource Profile: Regional healthcare information platform in Halland, Sweden. Int J Epidemiol. 2020;49(3):738–9.

Wärnestål P, Nygren J. Building an experience framework for a digital peer support service for children surviving from cancer. New York, New York, USA: Proceedings of the 12th International Conference on Interaction Design and Children Association for Computing Machinery; 2013. p. 269–72.

Reed MS, Curzon R. Stakeholder mapping for the governance of biosecurity: a literature review. J Integr Environ Sci. 2015;12(1):15–38.

Vis C, Bührmann L, Riper H, Ossebaard HC. Health technology assessment frameworks for eHealth: A systematic review. Int J Technol Assess Health Care. 2020;36(3):204–16.

Rosen MA, DiazGranados D, Dietz AS, Benishek LE, Thompson D, Pronovost PJ, et al. Teamwork in healthcare: Key discoveries enabling safer, high-quality care. Am Psychol. 2018;73(4):433–50.

Booth RG, Strudwick G, McBride S, O’Connor S, Solano López AL. How the nursing profession should adapt for a digital future. BMJ. 2021;373:n1190.

Foadi N, Varghese J. Digital competence - A Key Competence for Todays and Future Physicians. J Eur CME. 2022;11(1):2015200.

Nilsen P, Schildmeijer K, Ericsson C, Seing I, Birken S. Implementation of change in health care in Sweden: a qualitative study of professionals’ change responses. Implement Sci. 2019;14(1):51.

Wandersman A, Duffy J, Flaspohler P, Noonan R, Lubell K, Stillman L, et al. Bridging the gap between prevention research and practice: the interactive systems framework for dissemination and implementation. Am J Community Psychol. 2008;41(3–4):171–81.

Nilsen P, Birken S. Prologue. In: Nilsen P, Birken SA, editors. Handbook on Implementation Science. Cheltenham: Edward Elgar Publishing Limited; 2020. p. 1–6. https://www.elgaronline.com/view/edcoll/9781788975988/9781788975988.xml

Scott WR. Institutional change and healthcare organizations : from professional dominance to managed care. Chicago: University of Chicago Press; 2000.

von Eschenbach WJ. Transparency and the Black Box Problem: Why We Do Not Trust AI. Philosophy & Technology. 2021;34:1607+.

Li S-A, Jeffs L, Barwick M, Stevens B. Organizational contextual features that influence the implementation of evidence-based practices across healthcare settings: a systematic integrative review. Syst Rev. 2018;7(1):72.

Rogers EM. Diffusion of innovations. New York: Free Press; 1995.

Matheny ME, Whicher D, Thadaney IS. Artificial Intelligence in Health Care: A Report From the National Academy of Medicine. JAMA. 2020;323(6):509–10.

Moorman LP. Principles for Real-World Implementation of Bedside Predictive Analytics Monitoring. Appl Clin Inform. 2021;12(4):888–96.

Lee D, Yoon SN. Application of Artificial Intelligence-Based Technologies in the Healthcare Industry: Opportunities and Challenges. Int J Environ Res Public Health. 2021;18(1):271.

Buchanan C, Howitt ML, Wilson R, Booth RG, Risling T, Bamford M. Predicted Influences of Artificial Intelligence on the Domains of Nursing: Scoping Review. JMIR Nursing. 2020;3(1):e23939.

Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, et al. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev. 2012;69(2):123–57.

Powell BJ, Fernandez ME, Williams NJ, Aarons GA, Beidas RS, Lewis CC, et al. Enhancing the Impact of Implementation Strategies in Healthcare: A Research Agenda. Front Public Health. 2019;7:3.

Weiner BJ. A theory of organizational readiness for change. Implement Sci. 2009;4(1):67.

Scaccia JP, Cook BS, Lamont A, Wandersman A, Castellow J, Katz J, et al. A practical implementation science heuristic for organizational readiness: R = MC(2). J Community Psychol. 2015;43(4):484–501.

Graneheim UH, Lundman B. Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness. Nurse Educ Today. 2004;24(2):105–12.

Archibald MM, Ambagtsheer RC, Casey MG, Lawless M. Using Zoom Videoconferencing for Qualitative Data Collection: Perceptions and Experiences of Researchers and Participants. Int J Qual Methods. 2019;18:1609406919874596.

World Medical Association. Declaration of Helsinki: ethical principles for medical research involving human subjects. JAMA. 2013;310(20):2191–4.

Swedish Research Council. Good Research Practice. 2017. Available from: https://www.vr.se/download/18.5639980c162791bbfe697882/1555334908942/Good-Research-Practice_VR_2017.pdf

Acknowledgements

The authors would like to thank the participants who contributed to this study with their experiences.

All authors belong to the Healthcare Improvement Research Group at Halmstad University, https://hh.se/english/research/our-research/research-at-the-school-of-health-and-welfare/healthcare-improvement.html

Funding

Open access funding provided by Halmstad University. The funders for this study are the Swedish Government Innovation Agency Vinnova (grant 2019–04526) and the Knowledge Foundation (grant 20200208 01H). The funders were not involved in any aspect of study design, collection, analysis, interpretation of data, or in the writing or publication process.

Author information

Authors and Affiliations

School of Health and Welfare, Halmstad University, Box 823, 301 18, Halmstad, Sweden

Lena Petersson, Ingrid Larsson, Jens M. Nygren, Per Nilsen, Margit Neher, Julie E. Reed, Daniel Tyskbo & Petra Svedberg

Department of Health, Medicine and Caring Sciences, Division of Public Health, Faculty of Health Sciences, Linköping University, Linköping, Sweden

Department of Rehabilitation, School of Health Sciences, Jönköping University, Jönköping, Sweden

Margit Neher

Contributions

LP, JMN, JR, DT and PS together identified the research question and designed the study. Applications for funding and coproduction agreements were put in place by PS and JMN. Data collection (the interviews) was carried out by LP and DT. Data analysis was performed by LP, IL, JMN, PN, MN and PS and then discussed with all authors. The manuscript was drafted by LP, IL, JMN, PN, MN and PS. JR and DT provided critical revision of the paper in terms of important intellectual content. All authors have read and approved the final submitted version.

Corresponding author

Correspondence to Lena Petersson .

Ethics declarations

Ethics approval and consent to participate

The study conforms to the principles outlined in the Declaration of Helsinki [ 74 ] and was approved by the Swedish Ethical Review Authority (no. 2020–06246). The study fulfilled the Swedish research requirements regarding information, consent, confidentiality, and safety of the participants, and was guided by the ethical principles of autonomy, beneficence, non-maleficence, and justice [ 75 ]. The participants were first informed about the study by e-mail and, at the same time, asked whether they wanted to participate. If they agreed to participate, they were verbally informed at the beginning of the interview about the purpose and structure of the study and that they could withdraw their consent to participate at any time. Participation was voluntary and the respondents were informed about the ethical considerations of confidentiality. Informed consent was obtained from all participants prior to the interview.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no potential conflicts of interest with respect to the research, authorship, and publication of this article.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Petersson, L., Larsson, I., Nygren, J.M. et al. Challenges to implementing artificial intelligence in healthcare: a qualitative interview study with healthcare leaders in Sweden. BMC Health Serv Res 22 , 850 (2022). https://doi.org/10.1186/s12913-022-08215-8

Received : 04 April 2022

Accepted : 20 June 2022

Published : 01 July 2022

DOI : https://doi.org/10.1186/s12913-022-08215-8

Keywords

  • Artificial intelligence
  • Digital transformation
  • Implementation
  • Healthcare leaders
  • Organizational change
  • Qualitative methods
  • Stakeholders

Can J Hosp Pharm. 2015;68(3), May–Jun 2015

Qualitative Research: Data Collection, Analysis, and Management

INTRODUCTION

In an earlier paper, 1 we presented an introduction to using qualitative research methods in pharmacy practice. In this article, we review some principles of the collection, analysis, and management of qualitative data to help pharmacists interested in doing research in their practice to continue their learning in this area. Qualitative research can help researchers to access the thoughts and feelings of research participants, which can enable development of an understanding of the meaning that people ascribe to their experiences. Whereas quantitative research methods can be used to determine how many people undertake particular behaviours, qualitative methods can help researchers to understand how and why such behaviours take place. Within the context of pharmacy practice research, qualitative approaches have been used to examine a diverse array of topics, including the perceptions of key stakeholders regarding prescribing by pharmacists and the postgraduation employment experiences of young pharmacists (see “Further Reading” section at the end of this article).

In the previous paper, 1 we outlined 3 commonly used methodologies: ethnography 2 , grounded theory 3 , and phenomenology. 4 Briefly, ethnography involves researchers using direct observation to study participants in their “real life” environment, sometimes over extended periods. Grounded theory and its later modified versions (e.g., Strauss and Corbin 5 ) use face-to-face interviews and interactions such as focus groups to explore a particular research phenomenon and may help in clarifying a less-well-understood problem, situation, or context. Phenomenology shares some features with grounded theory (such as an exploration of participants’ behaviour) and uses similar techniques to collect data, but it focuses on understanding how human beings experience their world. It gives researchers the opportunity to put themselves in another person’s shoes and to understand the subjective experiences of participants. 6 Some researchers use qualitative methodologies but adopt a different standpoint, and an example of this appears in the work of Thurston and others, 7 discussed later in this paper.

Qualitative work requires reflection on the part of researchers, both before and during the research process, as a way of providing context and understanding for readers. When being reflexive, researchers should not try to simply ignore or avoid their own biases (as this would likely be impossible); instead, reflexivity requires researchers to reflect upon and clearly articulate their position and subjectivities (world view, perspectives, biases), so that readers can better understand the filters through which questions were asked, data were gathered and analyzed, and findings were reported. From this perspective, bias and subjectivity are not inherently negative but they are unavoidable; as a result, it is best that they be articulated up-front in a manner that is clear and coherent for readers.

THE PARTICIPANT’S VIEWPOINT

What qualitative study seeks to convey is why people have thoughts and feelings that might affect the way they behave. Such study may occur in any number of contexts, but here, we focus on pharmacy practice and the way people behave with regard to medicines use (e.g., to understand patients’ reasons for nonadherence with medication therapy or to explore physicians’ resistance to pharmacists’ clinical suggestions). As we suggested in our earlier article, 1 an important point about qualitative research is that there is no attempt to generalize the findings to a wider population. Qualitative research is used to gain insights into people’s feelings and thoughts, which may provide the basis for a future stand-alone qualitative study or may help researchers to map out survey instruments for use in a quantitative study. It is also possible to use different types of research in the same study, an approach known as “mixed methods” research, and further reading on this topic may be found at the end of this paper.

The role of the researcher in qualitative research is to attempt to access the thoughts and feelings of study participants. This is not an easy task, as it involves asking people to talk about things that may be very personal to them. Sometimes the experiences being explored are fresh in the participant’s mind, whereas on other occasions reliving past experiences may be difficult. However the data are being collected, a primary responsibility of the researcher is to safeguard participants and their data. Mechanisms for such safeguarding must be clearly articulated to participants and must be approved by a relevant research ethics review board before the research begins. Researchers and practitioners new to qualitative research should seek advice from an experienced qualitative researcher before embarking on their project.

DATA COLLECTION

Whatever philosophical standpoint the researcher is taking and whatever the data collection method (e.g., focus group, one-to-one interviews), the process will involve the generation of large amounts of data. In addition to the variety of study methodologies available, there are also different ways of making a record of what is said and done during an interview or focus group, such as taking handwritten notes or video-recording. If the researcher is audio- or video-recording data collection, then the recordings must be transcribed verbatim before data analysis can begin. As a rough guide, it can take an experienced researcher/transcriber 8 hours to transcribe one 45-minute audio-recorded interview, a process that will generate 20–30 pages of written dialogue.

Many researchers will also maintain a folder of “field notes” to complement audio-taped interviews. Field notes allow the researcher to maintain and comment upon impressions, environmental contexts, behaviours, and nonverbal cues that may not be adequately captured through the audio-recording; they are typically handwritten in a small notebook at the same time the interview takes place. Field notes can provide important context to the interpretation of audio-taped data and can help remind the researcher of situational factors that may be important during data analysis. Such notes need not be formal, but they should be maintained and secured in a similar manner to audio tapes and transcripts, as they contain sensitive information and are relevant to the research. For more information about collecting qualitative data, please see the “Further Reading” section at the end of this paper.

DATA ANALYSIS AND MANAGEMENT

If, as suggested earlier, doing qualitative research is about putting oneself in another person’s shoes and seeing the world from that person’s perspective, the most important part of data analysis and management is to be true to the participants. It is their voices that the researcher is trying to hear, so that they can be interpreted and reported on for others to read and learn from. To illustrate this point, consider the anonymized transcript excerpt presented in Appendix 1 , which is taken from a research interview conducted by one of the authors (J.S.). We refer to this excerpt throughout the remainder of this paper to illustrate how data can be managed, analyzed, and presented.

Interpretation of Data

Interpretation of the data will depend on the theoretical standpoint taken by researchers. For example, the title of the research report by Thurston and others, 7 “Discordant indigenous and provider frames explain challenges in improving access to arthritis care: a qualitative study using constructivist grounded theory,” indicates at least 2 theoretical standpoints. The first is the culture of the indigenous population of Canada and the place of this population in society, and the second is the social constructivist theory used in the constructivist grounded theory method. With regard to the first standpoint, it can be surmised that, to have decided to conduct the research, the researchers must have felt that there was anecdotal evidence of differences in access to arthritis care for patients from indigenous and non-indigenous backgrounds. With regard to the second standpoint, it can be surmised that the researchers used social constructivist theory because it assumes that behaviour is socially constructed; in other words, people do things because of the expectations of those in their personal world or in the wider society in which they live. (Please see the “Further Reading” section for resources providing more information about social constructivist theory and reflexivity.) Thus, these 2 standpoints (and there may have been others relevant to the research of Thurston and others 7 ) will have affected the way in which these researchers interpreted the experiences of the indigenous population participants and those providing their care. Another standpoint is feminist standpoint theory which, among other things, focuses on marginalized groups in society. Such theories are helpful to researchers, as they enable us to think about things from a different perspective. Being aware of the standpoints you are taking in your own research is one of the foundations of qualitative work. Without such awareness, it is easy to slip into interpreting other people’s narratives from your own viewpoint, rather than that of the participants.

To analyze the example in Appendix 1 , we will adopt a phenomenological approach because we want to understand how the participant experienced the illness and we want to try to see the experience from that person’s perspective. It is important for the researcher to reflect upon and articulate his or her starting point for such analysis; in this example, the coder could reflect upon her own experience as a female of a majority ethnocultural group who has lived within middle-class and upper-middle-class settings. This personal history therefore forms the filter through which the data will be examined. This filter does not diminish the quality or significance of the analysis, since every researcher has his or her own filters; however, by explicitly stating and acknowledging what these filters are, the researcher makes it easier for readers to contextualize the work.

Transcribing and Checking

For the purposes of this paper it is assumed that interviews or focus groups have been audio-recorded. As mentioned above, transcribing is an arduous process, even for the most experienced transcribers, but it must be done to convert the spoken word to the written word to facilitate analysis. For anyone new to conducting qualitative research, it is beneficial to transcribe at least one interview and one focus group. It is only by doing this that researchers realize how difficult the task is, and this realization affects their expectations when asking others to transcribe. If the research project has sufficient funding, then a professional transcriber can be hired to do the work. If this is the case, then it is a good idea to sit down with the transcriber, if possible, and talk through the research and what the participants were talking about. This background knowledge for the transcriber is especially important in research in which people are using jargon or medical terms (as in pharmacy practice). Involving your transcriber in this way makes the work both easier and more rewarding, as he or she will feel part of the team. Transcription editing software is also available, but it is expensive. For example, ELAN (more formally known as EUDICO Linguistic Annotator, developed at the Max Planck Institute for Psycholinguistics in Nijmegen) 8 is a tool that can help keep data organized by linking media and data files (particularly valuable if, for example, video-taping of interviews is complemented by transcriptions). It can also be helpful in searching complex data sets. Products such as ELAN do not actually automatically transcribe interviews or complete analyses, and they do require some time and effort to learn; nonetheless, for some research applications, it may be valuable to consider such software tools.

All audio recordings should be transcribed verbatim, regardless of how intelligible the transcript may be when it is read back. Lines of text should be numbered. Once the transcription is complete, the researcher should read it while listening to the recording and do the following: correct any spelling or other errors; anonymize the transcript so that the participant cannot be identified from anything that is said (e.g., names, places, significant events); insert notations for pauses, laughter, looks of discomfort; insert any punctuation, such as commas and full stops (periods) (see Appendix 1 for examples of inserted punctuation), and include any other contextual information that might have affected the participant (e.g., temperature or comfort of the room).
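
For researchers who keep their transcripts as plain-text files, parts of this checking routine can be scripted. The following minimal Python sketch is offered only as an illustration of the idea, not as a tool used or endorsed by the methods described here; the file names and the anonymization table are hypothetical, and manual checking against the audio recording is still required.

```python
# Minimal sketch: number transcript lines and apply an anonymization table.
# File names, replacements, and notation conventions are hypothetical examples.

from pathlib import Path

# Researcher-defined mapping of identifying details to neutral placeholders.
ANONYMIZATION_TABLE = {
    "Dr Smith": "Dr XXX",                 # names of clinicians
    "St Mary's Hospital": "[hospital]",   # places
}

def prepare_transcript(raw_path: str, out_path: str) -> None:
    """Read a verbatim transcript, anonymize it, and write numbered lines."""
    text = Path(raw_path).read_text(encoding="utf-8")
    for identifying, placeholder in ANONYMIZATION_TABLE.items():
        text = text.replace(identifying, placeholder)
    numbered = [
        f"{number:>4}  {line}"            # line numbers support later coding
        for number, line in enumerate(text.splitlines(), start=1)
    ]
    Path(out_path).write_text("\n".join(numbered) + "\n", encoding="utf-8")

if __name__ == "__main__":
    # Hypothetical file names; the numbered copy is what the researcher then
    # checks against the audio recording, adding notations by hand.
    prepare_transcript("interview_01_raw.txt", "interview_01_numbered.txt")
```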

Dealing with the transcription of a focus group is slightly more difficult, as multiple voices are involved. One way of transcribing such data is to “tag” each voice (e.g., Voice A, Voice B). In addition, the focus group will usually have 2 facilitators, whose respective roles will help in making sense of the data. While one facilitator guides participants through the topic, the other can make notes about context and group dynamics. More information about group dynamics and focus groups can be found in resources listed in the “Further Reading” section.

Reading between the Lines

During the process outlined above, the researcher can begin to get a feel for the participant’s experience of the phenomenon in question and can start to think about things that could be pursued in subsequent interviews or focus groups (if appropriate). In this way, one participant’s narrative informs the next, and the researcher can continue to interview until nothing new is being heard or, as it says in the text books, “saturation is reached”. While continuing with the processes of coding and theming (described in the next 2 sections), it is important to consider not just what the person is saying but also what they are not saying. For example, is a lengthy pause an indication that the participant is finding the subject difficult, or is the person simply deciding what to say? The aim of the whole process from data collection to presentation is to tell the participants’ stories using exemplars from their own narratives, thus grounding the research findings in the participants’ lived experiences.

Smith 9 suggested a qualitative research method known as interpretative phenomenological analysis, which has 2 basic tenets: first, that it is rooted in phenomenology, attempting to understand the meaning that individuals ascribe to their lived experiences, and second, that the researcher must attempt to interpret this meaning in the context of the research. That the researcher has some knowledge and expertise in the subject of the research means that he or she can have considerable scope in interpreting the participant’s experiences. Larkin and others 10 discussed the importance of not just providing a description of what participants say. Rather, interpretative phenomenological analysis is about getting underneath what a person is saying to try to truly understand the world from his or her perspective.

Once all of the research interviews have been transcribed and checked, it is time to begin coding. Field notes compiled during an interview can be a useful complementary source of information to facilitate this process, as the gap in time between an interview, transcribing, and coding can result in memory bias regarding nonverbal or environmental context issues that may affect interpretation of data.

Coding refers to the identification of topics, issues, similarities, and differences that are revealed through the participants’ narratives and interpreted by the researcher. This process enables the researcher to begin to understand the world from each participant’s perspective. Coding can be done by hand on a hard copy of the transcript, by making notes in the margin or by highlighting and naming sections of text. More commonly, researchers use qualitative research software (e.g., NVivo, QSR International Pty Ltd; www.qsrinternational.com/products_nvivo.aspx ) to help manage their transcriptions. It is advised that researchers undertake a formal course in the use of such software or seek supervision from a researcher experienced in these tools.
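
Where dedicated software such as NVivo is unavailable, even a very simple data structure can keep codes linked to the excerpts that support them. The sketch below is a generic Python illustration, not the NVivo API or any specific product; the code names, line references, and excerpts are hypothetical examples loosely modelled on the kind of material in Appendix 1.

```python
# Minimal sketch of a code book: each code maps to the transcript excerpts
# (with their sources and line references) that support it.
# All code names, sources, and excerpts below are hypothetical.

from collections import defaultdict

codebook: dict[str, list[tuple[str, str]]] = defaultdict(list)

def add_code(code: str, source: str, excerpt: str) -> None:
    """Attach a transcript excerpt (with its source and line range) to a code."""
    codebook[code].append((source, excerpt))

add_code("diagnosis of mental health condition",
         "interview_01, lines 8-11",
         "when did somebody tell you then that you have schizophrenia")
add_code("lack of interest in personal experiences",
         "interview_01, line 19",
         "nobody asked me any questions about my life")

# Print the code book so that codes and their supporting excerpts can be
# reviewed and discussed with a co-coder.
for code, excerpts in codebook.items():
    print(code)
    for source, excerpt in excerpts:
        print(f'  {source}: "{excerpt}"')
```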

Returning to Appendix 1 and reading from lines 8–11, a code for this section might be “diagnosis of mental health condition”, but this would just be a description of what the participant is talking about at that point. If we read a little more deeply, we can ask ourselves how the participant might have come to feel that the doctor assumed he or she was aware of the diagnosis or indeed that they had only just been told the diagnosis. There are a number of pauses in the narrative that might suggest the participant is finding it difficult to recall that experience. Later in the text, the participant says “nobody asked me any questions about my life” (line 19). This could be coded simply as “health care professionals’ consultation skills”, but that would not reflect how the participant must have felt never to be asked anything about his or her personal life, about the participant as a human being. At the end of this excerpt, the participant just trails off, recalling that no-one showed any interest, which makes for very moving reading. For practitioners in pharmacy, it might also be pertinent to explore the participant’s experience of akathisia and why this was left untreated for 20 years.

One of the questions that arises about qualitative research relates to the reliability of the interpretation and representation of the participants’ narratives. There are no statistical tests that can be used to check reliability and validity as there are in quantitative research. However, work by Lincoln and Guba 11 suggests that there are other ways to “establish confidence in the ‘truth’ of the findings” (p. 218). They call this confidence “trustworthiness” and suggest that there are 4 criteria of trustworthiness: credibility (confidence in the “truth” of the findings), transferability (showing that the findings have applicability in other contexts), dependability (showing that the findings are consistent and could be repeated), and confirmability (the extent to which the findings of a study are shaped by the respondents and not researcher bias, motivation, or interest).

One way of establishing the “credibility” of the coding is to ask another researcher to code the same transcript and then to discuss any similarities and differences in the 2 resulting sets of codes. This simple act can result in revisions to the codes and can help to clarify and confirm the research findings.
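
This comparison of two coders' work can be made concrete with a trivial script that lists agreements and differences as a starting point for discussion. The sketch below is a hypothetical illustration, not a validated reliability measure; the code names are invented.

```python
# Minimal sketch: compare two coders' code lists for the same transcript and
# list agreements and differences for discussion. Code names are hypothetical.

coder_a = {"diagnosis of mental health condition", "untreated akathisia",
           "lack of interest in personal experiences"}
coder_b = {"diagnosis of mental health condition", "consultation skills",
           "lack of interest in personal experiences"}

print("Agreed codes: ", sorted(coder_a & coder_b))
print("Only coder A: ", sorted(coder_a - coder_b))
print("Only coder B: ", sorted(coder_b - coder_a))
```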

Theming refers to the drawing together of codes from one or more transcripts to present the findings of qualitative research in a coherent and meaningful way. For example, there may be examples across participants’ narratives of the way in which they were treated in hospital, such as “not being listened to” or “lack of interest in personal experiences” (see Appendix 1 ). These may be drawn together as a theme running through the narratives that could be named “the patient’s experience of hospital care”. The importance of going through this process is that at its conclusion, it will be possible to present the data from the interviews using quotations from the individual transcripts to illustrate the source of the researchers’ interpretations. Thus, when the findings are organized for presentation, each theme can become the heading of a section in the report or presentation. Underneath each theme will be the codes, examples from the transcripts, and the researcher’s own interpretation of what the themes mean. Implications for real life (e.g., the treatment of people with chronic mental health problems) should also be given.
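
As a rough illustration of how codes can be drawn together under themes and turned into report headings, with each quotation used only once, the following Python sketch uses hypothetical theme names, codes, and quotations; it is not derived from any actual analysis.

```python
# Minimal sketch: group codes under themes and print a report outline in which
# each supporting quotation appears only once. Themes, codes, and quotations
# are hypothetical examples, not findings from any study.

themes = {
    "The patient's experience of hospital care": [
        "not being listened to",
        "lack of interest in personal experiences",
    ],
    "Living with medication side effects": [
        "untreated akathisia",
    ],
}

quotes = {
    "not being listened to": ["nobody actually sat down and had a talk"],
    "lack of interest in personal experiences": [
        "nobody asked me any questions about my life"],
    "untreated akathisia": ["I suffered that akathisia ... for about 20 years"],
}

used_quotes: set[str] = set()
for theme, codes in themes.items():
    print(theme)                           # each theme becomes a section heading
    for code in codes:
        print(f"  Code: {code}")
        for quote in quotes.get(code, []):
            if quote not in used_quotes:   # use each quotation only once
                used_quotes.add(quote)
                print(f'    "{quote}"')
```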

DATA SYNTHESIS

In this final section of this paper, we describe some ways of drawing together or “synthesizing” research findings to represent, as faithfully as possible, the meaning that participants ascribe to their life experiences. This synthesis is the aim of the final stage of qualitative research. For most readers, the synthesis of data presented by the researcher is of crucial significance—this is usually where “the story” of the participants can be distilled, summarized, and told in a manner that is both respectful to those participants and meaningful to readers. There are a number of ways in which researchers can synthesize and present their findings, but any conclusions drawn by the researchers must be supported by direct quotations from the participants. In this way, it is made clear to the reader that the themes under discussion have emerged from the participants’ interviews and not the mind of the researcher. The work of Latif and others 12 gives an example of how qualitative research findings might be presented.

Planning and Writing the Report

As has been suggested above, if researchers code and theme their material appropriately, they will naturally find the headings for sections of their report. Qualitative researchers tend to report “findings” rather than “results”, as the latter term typically implies that the data have come from a quantitative source. The final presentation of the research will usually be in the form of a report or a paper and so should follow accepted academic guidelines. In particular, the article should begin with an introduction, including a literature review and rationale for the research. There should be a section on the chosen methodology and a brief discussion about why qualitative methodology was most appropriate for the study question and why one particular methodology (e.g., interpretative phenomenological analysis rather than grounded theory) was selected to guide the research. The method itself should then be described, including ethics approval, choice of participants, mode of recruitment, and method of data collection (e.g., semistructured interviews or focus groups), followed by the research findings, which will be the main body of the report or paper. The findings should be written as if a story is being told; as such, it is not necessary to have a lengthy discussion section at the end. This is because much of the discussion will take place around the participants’ quotes, such that all that is needed to close the report or paper is a summary, limitations of the research, and the implications that the research has for practice. As stated earlier, it is not the intention of qualitative research to allow the findings to be generalized, and therefore this is not, in itself, a limitation.

Planning out the way that findings are to be presented is helpful. It is useful to insert the headings of the sections (the themes) and then make a note of the codes that exemplify the thoughts and feelings of your participants. It is generally advisable to put in the quotations that you want to use for each theme, using each quotation only once. After all this is done, the telling of the story can begin as you give your voice to the experiences of the participants, writing around their quotations. Do not be afraid to draw assumptions from the participants’ narratives, as this is necessary to give an in-depth account of the phenomena in question. Discuss these assumptions, drawing on your participants’ words to support you as you move from one code to another and from one theme to the next. Finally, as appropriate, it is possible to include examples from literature or policy documents that add support for your findings. As an exercise, you may wish to code and theme the sample excerpt in Appendix 1 and tell the participant’s story in your own way. Further reading about “doing” qualitative research can be found at the end of this paper.

CONCLUSIONS

Qualitative research can help researchers to access the thoughts and feelings of research participants, which can enable development of an understanding of the meaning that people ascribe to their experiences. It can be used in pharmacy practice research to explore how patients feel about their health and their treatment. Qualitative research has been used by pharmacists to explore a variety of questions and problems (see the “Further Reading” section for examples). An understanding of these issues can help pharmacists and other health care professionals to tailor health care to match the individual needs of patients and to develop a concordant relationship. Doing qualitative research is not easy and may require a complete rethink of how research is conducted, particularly for researchers who are more familiar with quantitative approaches. There are many ways of conducting qualitative research, and this paper has covered some of the practical issues regarding data collection, analysis, and management. Further reading around the subject will be essential to truly understand this method of accessing peoples’ thoughts and feelings to enable researchers to tell participants’ stories.

Appendix 1. Excerpt from a sample transcript

The participant (age late 50s) had suffered from a chronic mental health illness for 30 years. The participant had become a “revolving door patient,” someone who is frequently in and out of hospital. As the participant talked about past experiences, the researcher asked:

  • What was treatment like 30 years ago?
  • Umm—well it was pretty much they could do what they wanted with you because I was put into the er, the er kind of system er, I was just on
  • endless section threes.
  • Really…
  • But what I didn’t realize until later was that if you haven’t actually posed a threat to someone or yourself they can’t really do that but I didn’t know
  • that. So wh-when I first went into hospital they put me on the forensic ward ’cause they said, “We don’t think you’ll stay here we think you’ll just
  • run-run away.” So they put me then onto the acute admissions ward and – er – I can remember one of the first things I recall when I got onto that
  • ward was sitting down with a er a Dr XXX. He had a book this thick [gestures] and on each page it was like three questions and he went through
  • all these questions and I answered all these questions. So we’re there for I don’t maybe two hours doing all that and he asked me he said “well
  • when did somebody tell you then that you have schizophrenia” I said “well nobody’s told me that” so he seemed very surprised but nobody had
  • actually [pause] whe-when I first went up there under police escort erm the senior kind of consultants people I’d been to where I was staying and
  • ermm so er [pause] I . . . the, I can remember the very first night that I was there and given this injection in this muscle here [gestures] and just
  • having dreadful side effects the next day I woke up [pause]
  • . . . and I suffered that akathesia I swear to you, every minute of every day for about 20 years.
  • Oh how awful.
  • And that side of it just makes life impossible so the care on the wards [pause] umm I don’t know it’s kind of, it’s kind of hard to put into words
  • [pause]. Because I’m not saying they were sort of like not friendly or interested but then nobody ever seemed to want to talk about your life [pause]
  • nobody asked me any questions about my life. The only questions that came into was they asked me if I’d be a volunteer for these student exams
  • and things and I said “yeah” so all the questions were like “oh what jobs have you done,” er about your relationships and things and er but
  • nobody actually sat down and had a talk and showed some interest in you as a person you were just there basically [pause] um labelled and you
  • know there was there was [pause] but umm [pause] yeah . . .

This article is the 10th in the CJHP Research Primer Series, an initiative of the CJHP Editorial Board and the CSHP Research Committee. The planned 2-year series is intended to appeal to relatively inexperienced researchers, with the goal of building research capacity among practising pharmacists. The articles, presenting simple but rigorous guidance to encourage and support novice researchers, are being solicited from authors with appropriate expertise.

Previous articles in this series:

Bond CM. The research jigsaw: how to get started. Can J Hosp Pharm. 2014;67(1):28–30.

Tully MP. Research: articulating questions, generating hypotheses, and choosing study designs. Can J Hosp Pharm. 2014;67(1):31–4.

Loewen P. Ethical issues in pharmacy practice research: an introductory guide. Can J Hosp Pharm. 2014;67(2):133–7.

Tsuyuki RT. Designing pharmacy practice research trials. Can J Hosp Pharm. 2014;67(3):226–9.

Bresee LC. An introduction to developing surveys for pharmacy practice research. Can J Hosp Pharm. 2014;67(4):286–91.

Gamble JM. An introduction to the fundamentals of cohort and case–control studies. Can J Hosp Pharm. 2014;67(5):366–72.

Austin Z, Sutton J. Qualitative research: getting started. Can J Hosp Pharm. 2014;67(6):436–40.

Houle S. An introduction to the fundamentals of randomized controlled trials in pharmacy research. Can J Hosp Pharm. 2015;68(1):28–32.

Charrois TL. Systematic reviews: What do you need to know to get started? Can J Hosp Pharm. 2015;68(2):144–8.

Competing interests: None declared.

Further Reading

Examples of Qualitative Research in Pharmacy Practice

  • Farrell B, Pottie K, Woodend K, Yao V, Dolovich L, Kennie N, et al. Shifts in expectations: evaluating physicians’ perceptions as pharmacists integrated into family practice. J Interprof Care. 2010;24(1):80–9.
  • Gregory P, Austin Z. Postgraduation employment experiences of new pharmacists in Ontario in 2012–2013. Can Pharm J. 2014;147(5):290–9.
  • Marks PZ, Jennings B, Farrell B, Kennie-Kaulbach N, Jorgenson D, Pearson-Sharpe J, et al. “I gained a skill and a change in attitude”: a case study describing how an online continuing professional education course for pharmacists supported achievement of its transfer to practice outcomes. Can J Univ Contin Educ. 2014;40(2):1–18.
  • Nair KM, Dolovich L, Brazil K, Raina P. It’s all about relationships: a qualitative study of health researchers’ perspectives on interdisciplinary research. BMC Health Serv Res. 2008;8:110.
  • Pojskic N, MacKeigan L, Boon H, Austin Z. Initial perceptions of key stakeholders in Ontario regarding independent prescriptive authority for pharmacists. Res Soc Adm Pharm. 2014;10(2):341–54.

Qualitative Research in General

  • Breakwell GM, Hammond S, Fife-Schaw C. Research methods in psychology. Thousand Oaks (CA): Sage Publications; 1995.
  • Given LM. 100 questions (and answers) about qualitative research. Thousand Oaks (CA): Sage Publications; 2015.
  • Miles MB, Huberman AM. Qualitative data analysis. Thousand Oaks (CA): Sage Publications; 2009.
  • Patton M. Qualitative research and evaluation methods. Thousand Oaks (CA): Sage Publications; 2002.
  • Willig C. Introducing qualitative research in psychology. Buckingham (UK): Open University Press; 2001.

Group Dynamics in Focus Groups

  • Farnsworth J, Boon B. Analysing group dynamics within the focus group. Qual Res. 2010;10(5):605–24.

Social Constructivism

  • Social constructivism. Berkeley (CA): University of California, Berkeley, Berkeley Graduate Division, Graduate Student Instruction Teaching & Resource Center; [cited 2015 June 4]. Available from: http://gsi.berkeley.edu/gsi-guide-contents/learning-theory-research/social-constructivism/

Mixed Methods

  • Creswell J. Research design: qualitative, quantitative, and mixed methods approaches. Thousand Oaks (CA): Sage Publications; 2009.

Collecting Qualitative Data

  • Arksey H, Knight P. Interviewing for social scientists: an introductory resource with examples. Thousand Oaks (CA): Sage Publications; 1999.
  • Guest G, Namey EE, Mitchell ML. Collecting qualitative data: a field manual for applied research. Thousand Oaks (CA): Sage Publications; 2013.

Constructivist Grounded Theory

  • Charmaz K. Grounded theory: objectivist and constructivist methods. In: Denzin N, Lincoln Y, editors. Handbook of qualitative research. 2nd ed. Thousand Oaks (CA): Sage Publications; 2000. p. 509–35.


  15. What is Qualitative in Qualitative Research

    A fourth issue is that the "implicit use of methods in qualitative research makes the field far less standardized than the quantitative paradigm" (Goertz and Mahoney 2012:9). Relatedly, the National Science Foundation in the US organized two workshops in 2004 and 2005 to address the scientific foundations of qualitative research involving ...

  16. Qualitative Research

    Qualitative Methods in Housing Research. H.C.C.H. Coolen, in International Encyclopedia of Housing and Home, 2012. Abstract. The field of qualitative research is a vast and complex area of research methodology that has rich and diverse traditions. In this article we start with positioning qualitative research with respect to quantitative ...

  17. Conducting Qualitative Research Online: Challenges and Solutions

    The increasing centrality of online environments to everyday life is driving traditional qualitative research methods to online environments and generating new qualitative research methods that respond to the particularities of online worlds. With strong design principles and attention to ethical, technical and social challenges, online methods ...

  18. Learning to Do Qualitative Data Analysis: A Starting Point

    In this article, we take up this open question as a point of departure and offer thematic analysis, an analytic method commonly used to identify patterns across language-based data (Braun & Clarke, 2006), as a useful starting point for learning about the qualitative analysis process.In doing so, we do not advocate for only learning the nuances of thematic analysis, but rather see it as a ...

  19. LibGuides: Qualitative Data Analysis: Find Methods Examples

    Resources on conducting qualitative data analysis. Skip to Main Content. About . Employee Directory ... Qualitative Methods Texts; Qualitative Data Analysis Strategies Toggle Dropdown. ... Tags: ATLAS.ti, caqdas, maxqda, qdas, qualitative data, qualitative data analysis, qualitative research, taguette. Main Library Information Desk (217) 333 ...

  20. The use of evidence to guide decision-making during the COVID-19

    Developing and using a codebook for the analysis of interview data: an example from a professional development research project. Field Methods. 2011;23(2):136-55. Article Google Scholar Gale NK, Heath G, Cameron E, Rashid S, Redwood S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research.

  21. Unprofessional behaviours experienced by hospital staff: qualitative

    A growing body of literature has presented evidence demonstrating the negative impact that unprofessional behaviours amongst healthcare staff has on organisational outcomes, patient safety, and staff well-being [1,2,3,4,5,6,7,8,9,10].Waterson et al. examined the enactment of patient safety culture across hospitals and highlighted the need to further explore the complex range of factors that ...

  22. Perceptions of Patients and Nurses about Bedside Nursing Handover: A

    Methods. A meta-synthesis review was conducted to identify qualitative studies that reported patients and nurses' perceptions about bedside handover using seven electronic databases, including CINAHL, PsycINFO, Embase, Education Database (ProQuest), Web of Science, The Cochrane Library, and PubMed, from January 2013 to November 2023.

  23. Choosing a Qualitative Research Approach

    In this Rip Out, we describe 3 different qualitative research approaches commonly used in medical education: grounded theory, ethnography, and phenomenology. Each acts as a pivotal frame that shapes the research question (s), the method (s) of data collection, and how data are analyzed. 4, 5. Go to:

  24. Experience and training needs of nurses in military hospital on

    Results of synthesis. This study uses the method of aggregative integration [] to integrate the results, that is, to further organize and summarize the meaning of the collected results, so as to make the results more convincing, targeted and general.Researchers in understanding the various qualitative research philosophy and methodology of the premise, through repeated reading, analysis and ...

  25. Generic Qualitative Approaches: Pitfalls and Benefits of Methodological

    As qualitative research has evolved, researchers in the field have struggled with a persistent tension between a need for both methodological flexibility and structure (Holloway & Todres, 2003).In the development of qualitative research, three major methodologies are discussed most frequently and are often viewed as foundational: phenomenology, ethnography, and grounded theory (Holloway ...

  26. The implementation of person-centred plans in the community-care sector

    Background Person-centred planning refers to a model of care in which programs and services are developed in collaboration with persons receiving care (i.e., persons-supported) and tailored to their unique needs and goals. In recent decades, governments around the world have enacted policies requiring community-care agencies to adopt an individualized or person-centred approach to service ...

  27. Experiences of medical students and faculty regarding the use of long

    The long case is used to assess medical students' proficiency in performing clinical tasks. As a formative assessment, the purpose is to offer feedback on performance, aiming to enhance and expedite clinical learning. The long case stands out as one of the primary formative assessment methods for clinical clerkship in low-resource settings but has received little attention in the literature.

  28. Challenges to implementing artificial intelligence in healthcare: a

    Artificial intelligence (AI) for healthcare presents potential solutions to some of the challenges faced by health systems around the world. However, it is well established in implementation and innovation research that novel technologies are often resisted by healthcare leaders, which contributes to their slow and variable uptake. Although research on various stakeholders' perspectives on ...

  29. Qualitative Research: Data Collection, Analysis, and Management

    INTRODUCTION. In an earlier paper, 1 we presented an introduction to using qualitative research methods in pharmacy practice. In this article, we review some principles of the collection, analysis, and management of qualitative data to help pharmacists interested in doing research in their practice to continue their learning in this area.