
Open access | Published: 31 October 2023

The sub-dimensions of metacognition and their influence on modeling competency

Riyan Hidayat, Hermandra & Sharon Tie Ding Ying

Humanities and Social Sciences Communications, volume 10, Article number: 763 (2023)


Mathematical modeling is a versatile skill that goes beyond solving real-world problems. Numerous studies show that many students struggle with the intricacies of mathematical modeling and find it a challenging and complex task. One important factor related to mathematical modeling is metacognition, which can significantly impact expert and student success in a modeling task. However, a notable research gap has been identified concerning the influence of metacognition on mathematical modeling. The study’s main goal was to assess whether the different sub-dimensions of metacognition can predict the sub-constructs of a student’s modeling competence: horizontal and vertical mathematization. The study used a correlational research design and involved 538 participants who were university students studying mathematics education in Riau Province, Indonesia. We employed structural equation modeling (SEM) using AMOS version 18.0 to evaluate the proposed model. The measurement model used to assess metacognition and modeling ability showed a satisfactory fit to the data. The study found that the direct influence of awareness on horizontal mathematization was insignificant. However, the use of cognitive strategies, planning, and self-checking had a significant positive effect on horizontal mathematization. Concerning vertical mathematization, the direct effects of cognitive strategy, planning, and awareness were insignificant, but self-checking was positively related to this type of mathematization. The results suggest that metacognition, i.e., awareness of and control over a person’s thinking processes, plays an important role in modeling proficiency. The research provides valuable insights into metacognitive processes in mathematical modeling, which could inform teaching approaches and strategies for improving mathematical modeling. Further studies can build on these findings to deepen our understanding of how cognitive strategies, planning, self-assessment, and awareness influence mathematical modeling in both horizontal and vertical contexts.


Introduction

Changing curriculum content and instructional styles in teaching and learning processes for regular mathematics classes is critical to promote more meaningful engagement with mathematics (Schoenfeld, 2016 ). A shift to searching for solutions, exploring patterns, and formulating conjectures rather than simply memorizing procedures and formulas or completing exercises can lead to deeper understanding and more versatile problem-solving skills. Incorporating mathematical modeling into classroom activities by engaging students in authentic problem-solving within complex systems and interdisciplinary contexts can help develop the competencies to tackle increasingly complex problems. Mathematical modeling can strengthen problem-solving skills and connect mathematics to real-world situations, making it relevant to students’ current and future lives (Hidayat and Wardat, 2023 ). The importance of mathematical modeling is further underscored by its inclusion as a primary component in the mathematics assessment of the Program for International Student Assessment (PISA) (Niss, 2015 ). Students can tackle non-routine real-life challenges by engaging in modeling activities and working collaboratively on realistic and authentic mathematical tasks. However, traditional instructional methods for assessing student modeling proficiency are inadequate. This information underscores the need for improved methods of evaluation that capture the full range of students’ modeling abilities and the development of their problem-solving skills. Educators should consider incorporating alternative assessment methods such as project-based assessments, performance tasks, or reflective journals to better assess student modeling skills. In addition, professional development opportunities for teachers to learn effective strategies for integrating mathematical modeling into their instruction can contribute to more successful implementation and assessment of these skills.

Mathematical modeling is a multifaceted skill beyond solving real-world problems (Mohd Saad et al., 2023 ; Niss et al., 2007 ). As Minarni and Napitupulu ( 2020 ) point out, students can apply modeling abilities to describe context problems mathematically, organize tools, discover relationships, transfer between real-world and mathematical problems, and visualize problems in various ways. In modeling real-world problems, students activate other competencies, such as representing mathematical objects, arguing, and justifying (National Council of Teachers of Mathematics, 1989 ). Engaging in mathematical modeling in the classroom helps students clarify and interpret phenomena, solve problems, and develop social competencies necessary for effective teamwork and collaborative knowledge building. Mathematical modeling instruction aims to improve students’ mathematical knowledge, promote critical and creative thinking, and foster positive attitudes toward mathematics (Blum, 2002 ). Cognitive modeling combined with task orientation is more effective in increasing the likelihood of success. In high school curricula, students can connect mathematical modeling to different courses, reinforcing the importance of this skill in different contexts (Hernández et al., 2016 ). Integrating mathematical modeling into different subject areas can help students develop a comprehensive understanding of the relevance and applicability of mathematics in real-world situations, ultimately leading to better problem-solving abilities and an appreciation for the power of mathematical thinking.

Numerous studies have shown that mathematical modeling is challenging for many students (Anhalt et al., 2018 ; Corum and Garofalo, 2019 ; Czocher, 2017 ; Kannadass et al., 2023 ). Metacognitive competencies improve students’ modeling abilities (Galbraith, 2017 ; Vorhölter, 2019 ; Wendt et al., 2020 ). Metacognition, the ability to reflect on and regulate one’s thinking, can significantly impact expert and student success in problem-solving (Schoenfeld, 1983 , 2007 ). Productive metacognitive behaviors can help students better understand the given problem, search for and distinguish relevant and irrelevant information, and focus on the overall structure of the problem (Kramarski et al., 2002 ). These behaviors can lead to improved understanding and problem-solving abilities. Although the benefits of metacognition to learning are widely recognized, there is limited research on the specific types of metacognitive strategies that are most effective in helping students (Wilson and Clarke, 2004 ). Future research should focus on identifying these strategies and understanding how they can best be used in educational settings to improve students’ mathematical modeling and problem-solving abilities. This research could include exploring the most effective methods for teaching metacognitive skills, examining how metacognition can be tailored to individual student needs, and examining the impact of metacognitive interventions on student modeling performance. Thus, this study aimed to investigate how the sub-dimensions of metacognition can predict modeling performance. The study questions are as follows: (a) Do the sub-constructs of metacognition (awareness, cognitive strategy, planning, and self-checking) predict horizontal mathematization? (b) Do the sub-constructs of metacognition (awareness, cognitive strategy, planning, and self-checking) predict vertical mathematization?

Theoretical perspective

Models and modeling perspective (MMP)

The term ‘model’ refers to a collection of elements, connections between elements, and actions that describe or explain how the elements interact (English, 2007; Lesh and Doerr, 2003). Modeling exercises allow students to reveal their multiple forms of reasoning, create conceptual frameworks, and develop effective ways to represent the structural features of the topic (Carreira and Baioa, 2018). The Models and Modeling Perspective (MMP), also known as contextual modeling (Kaiser and Sriraman, 2006), is considered a method to understand real-life situations and develop formal mathematical knowledge based on students’ understanding (Csapó and Funke, 2017; Lesh and Doerr, 2003). Students must move from a real-world situation to a mathematical world using their previously learned mathematical concepts as a modeling tool that goes beyond calculational prescriptions (Sevinc, 2022) and learning theories (Abassian et al., 2019). Moreover, MMP considers the mathematical model as a conceptual tool of a mathematical system that emerges from a specific real-world situation (Lesh and Lehrer, 2003). In brief, MMP is a new concept that incorporates real-world context into the teaching and learning of mathematical problem-solving because it prepares students to be mentally active in modeling. An important feature of MMP is the recognition that problem-solving typically involves numerous modeling cycles in which descriptions, explanations, and predictions are continuously refined, and solutions are modified or discarded depending on how well they fit the real-world situation.

Students will use their internal conceptual systems to organize, understand, and make connections between events, experiences, or issues (Erbas et al., 2014) as they adapt to MMP. Student learning through MMP will also facilitate communication between peers and teachers through project-based or problem-based learning (Ärlebäck, 2017) as students practice solving authentic problem situations by engaging in mathematical thinking that involves interpreting situations, describing and explaining, computing through procedures, and deductive reasoning (English et al., 2008). MMP summarizes a cycle of activities that first requires students to understand the real-world situation, followed by structuring the situation model, mathematizing to develop a mathematical model, working with the mathematical model to develop results that are interpreted and validated within the real-world situation, and finally presenting a solution to the real-world situation.

Mathematical modeling and mathematization

Modeling is also known as organizing representative descriptions in which symbolic representations and formal model structures develop (Hidayat et al., 2018; Niss, 2015). According to the South African Department of Basic Education (2011), mathematical modeling is an important curriculum focus, and real-world situations should be included in all areas, such as economics, health, social services, and others. Mathematical modeling is a process of mathematization in which students can discover relevant issues or assumptions in a given real-world scenario by mathematizing, interpreting, and evaluating solutions to the resulting mathematical problems related to the given circumstance (Leong and Tan, 2020). The mathematization method can be applied as a series of activities directed toward the activity system object, with the goal of the modeling project serving as the activity object itself (Araújo and Lima, 2020). Students with mathematical skills can acquire mathematical knowledge through logical reasoning using problem-solving. Formal mathematical information is obtained during the mathematization process by referring to informal knowledge, including components of actual problem situations (Freudenthal, 2002). Mathematical modeling can be divided into several sub-processes: simplifying, mathematizing, computing, interpreting, and validating. When students are proficient in the modeling process, they can independently and insightfully perform all components of a mathematical modeling process (Hankeln et al., 2019), with the focus of the competencies being on identifying specific fundamental capabilities.

A mathematical model is created through mathematization (Yilmaz and Dede, 2016). The concept of mathematization involves using mathematical methods to organize and examine various aspects of reality. The idea of the mathematization of actual reality is formulated in two forms of mathematization (Treffers, 1978; Treffers and Goffree, 1985), namely horizontal and vertical mathematization. Horizontal and vertical mathematization are complementary processes in mathematical modeling and problem-solving (Freudenthal, 1991). The process of horizontal mathematization begins with understanding the problem and extends to problem-solving (Galbraith, 2017). Horizontal mathematization involves translating real-world problems into mathematical representations, while vertical mathematization involves working within mathematics to solve the problem. Both processes are important for students to develop a comprehensive understanding of mathematics and its applications in real-world situations. Horizontal mathematization refers to translating a real-world problem into a mathematical problem or representation. Students identify relevant mathematical structures, concepts, and relationships related to the given problem in this phase. They may simplify the problem by making assumptions, recognizing patterns, or constructing a model. Horizontal mathematization aims to create a mathematical representation that captures the essence of the real-world situation and can be analyzed using mathematical tools. Simplification is about understanding the core problem and using mathematics to construct a model based on reality (Kaiser and Schwarz, 2006). Students must be able to clarify the essential elements of the situation, formulate the problem, and create a simplified version that can be analyzed mathematically. A further step is to identify relevant mathematical concepts, variables, and relationships that capture the essence of the real situation (mathematization). Students must be able to translate the problem into mathematical language using appropriate notations or visual representations (Kaiser and Stender, 2013). This study defines horizontal mathematization as simplifying assumptions, clarifying the objective, formulating the problem, assigning variables, establishing parameters and constants, formulating mathematical expressions, and selecting a model (Yilmaz and Dede, 2016).

Vertical mathematization occurs after the problem has been translated into a mathematical representation through horizontal mathematization. In this phase, students work within the domain of mathematics to solve the problem by using mathematical techniques, calculations, proofs, or manipulations. Vertical mathematization is about delving deeper into mathematical concepts, exploring connections, and gaining new insights. The focus here is on applying mathematical knowledge and reasoning to find a solution to the problem. Vertical mathematization refers to exploring the realm of formal symbols (Selter and Walter, 2019 ). Vertical mathematization also refers to the mathematical processing and improvement of real-world problems transformed into mathematics (Treffers and Goffree, 1985 ). Learners apply their mathematical knowledge or intuitive procedures to solve the problem within the framework of the mathematical model (Maaß, 2006 ). This model may involve calculations, manipulations, or proof to derive a mathematical solution. Once a mathematical solution is found, students must interpret the results in the context of the original problem (Garfunkel and Montgomery, 2016 ). To do this interpretation, they must understand the relationship between the mathematical solution and the real-world situation and place the solution in terms of the problem’s context. The final step is to review the solution for accuracy and critically evaluate the assumptions made, the model used, and the overall process (Kaiser and Stender, 2013 ). Students must determine if their solution is reasonable and sensible and if improvements or changes can be made to the model or assumptions. This paper defines vertical mathematization as interpreting, validating, and relating the result to a real-world context.

Metacognition

Metacognition encompasses two aspects: the capacity to recognize and understand one’s cognitive processes (referred to as metacognitive knowledge) and the ability to manage and adapt these processes (known as metacognitive control) (Fleur et al., 2021). This study must consider metacognition because modeling tasks are typically worked on in small groups (Biccard and Wessels, 2011). Metacognition includes students’ understanding of their cognitive processes and their capability to regulate and manipulate them (Kwarikunda et al., 2022). Metacognition is the knowledge or cognitive activity that targets or controls any component of a cognitive effort (Flavell, 1979); for example, students use metacognition to solve problems while studying. Students must manage their cognitive processes during learning so that their learning achievement can be measured afterward (Bedel, 2012). Metacognition is often divided into two parts, metacognitive knowledge and metacognitive strategies, which are often complemented by an affective-motivational aspect (Efklides, 2008; Veenman et al., 2006). Planning cognitive activities, monitoring progress toward goals, selecting methods to solve difficulties, and reflecting on past performance to improve future outcomes are all examples of metacognitive strategies (Kim and Lim, 2019). Furthermore, O’Neil and Abedi (1996) operationalize students’ metacognitive inventory as a construct that includes planning, self-checking, cognitive strategy, and awareness. Metacognition is understanding how individuals gain information and manage the process (Schraw and Dennison, 1994).

Metacognitive abilities have a significant impact on student learning and performance. They enable students to identify areas of difficulty and select appropriate learning strategies to understand new concepts. Metacognition has been found to improve students’ problem-solving abilities (García et al., 2016 ). However, metacognitive skills differ among students with varying levels of modeling competence, with some putting little effort into organizing or expressing knowledge differences (García et al., 2016 ). Students with high levels of modeling competence tend to pay more attention to time management, which may contribute to their success in problem-solving tasks. Interestingly, metacognitive training is particularly beneficial for lower-performing students because it allows them to improve while working on the same tasks as their peers (Karaali, 2015 ). This finding suggests that metacognitive instruction can help level the playing field for students with different abilities and allow all learners to develop their problem-solving skills more effectively. In summary, metacognition is critical in mathematics and affects students’ abilities differently. Educators should integrate metacognitive training into their instructional practices to support all learners and help them develop self-awareness, reflection, and regulation skills to benefit their mathematical problem-solving efforts.

Relationship between metacognition and modeling competency

Metacognition can help with goal-oriented modeling and overcoming various challenges (Stillman, 2004 ), depending on students’ knowledge and experience. The success of metacognitive activity can be attributed to students’ responses to specific problem-solving scenarios that can activate metacognition (Vorhölter, 2021 ). Metacognition is an essential method associated with mathematical proficiency and problem-solving skills. Teachers can help students develop appropriate individual techniques for dealing with modeling challenges and various metacognitive activities, such as mathematizing across different circumstances and environments (Blum, 2011 ). Mathematizing is a horizontally sequential process of translating parts of the real world into the language of symbols and abstracting in a vertical direction (Freudenthal, 2002 ). The mathematization process is horizontal mathematization because it requires the learner to transform real life into mathematical symbols. Horizontal mathematization leads to results based on different problem-solving strategies and the concrete problem case (Gravemeijer, 2008 ). The process of horizontal mathematization focuses primarily on organizing, schematizing, and constructing a model of reality so that it can be treated mathematically (Piñero Charlo, 2020 ). Horizontal mathematization is highlighted as a learning difficulty in an instructional strategy where teachers do not recognize horizontal mathematization as a learning problem (Yvain-Prébiski and Chesnais, 2019 ), and students also have difficulty discovering connections and transferring real-world problems to known mathematical models. Changing models, merging and defining a connection in a formula, and improving and integrating models are challenges of vertical mathematization (Suaebah et al., 2020 ). Real-world modeling activities that promote the horizontal mathematization process can help students experience mathematics as a value by strengthening their understanding and tangible connection between mathematics and the effort expended, i.e., by improving their metacognition skills (Suh et al., 2017 ).

Awareness of metacognition is critical in developing and improving students’ problem-solving skills. Studies have shown a significant positive correlation between metacognition awareness and problem-solving abilities (Sevgi and Karakaya, 2020 ). Effective mathematical problem-solving is also associated with planning and revision techniques (García et al., 2019 ). Students can improve their problem-solving skills through self-reflection on planning, monitoring, and evaluating their thinking processes (Herawaty et al., 2018 ). This finding highlights the link between metacognition and modeling abilities such as awareness, self-checking, planning, and cognitive strategy. By using planning techniques, students can improve their problem-solving abilities, for example, through verbalization (Zhang et al., 2019 ). Although the transfer of metacognitive knowledge to mathematical modeling is modest, using planning and revision procedures still contributes positively to student success. The sub-dimension of monitoring can predict a student’s engagement in a discussion (Akman and Alagöz, 2018 ). Using cognitive strategies during the formulation phase of the modeling process provides a sense of guidance (Krüger et al., 2020 ). Awareness of metacognition and using metacognitive strategies such as planning, monitoring, and revising are essential to improve students’ problem-solving and mathematical modeling abilities. Educators should aim to incorporate metacognitive strategies into their teaching methods to support the development of these skills in students.

Metacognition has been recognized as critical for solving complicated tasks, such as modeling tasks (Wilson and Clarke, 2004 ). Individuals can cultivate a more methodical and comprehensive approach to horizontal mathematization by integrating the sub-constructs of metacognition (awareness, planning, self-checking, and cognitive strategies). For example, horizontal mathematization is enhanced by providing students with useful tools and tactics for planning, analyzing, and solving modeling tasks through awareness, planning, self-checking, and cognitive strategies. Students can recognize mathematical patterns and structures within a modeling task when they know the relevance and use of mathematics in everyday situations. Creating a plan allows students to break difficult tasks into manageable parts. Students can be disciplined and avoid errors or omissions by setting goals, outlining necessary mathematical operations, and choosing a sequence of tasks. Cognitive techniques enable effective information processing, allow students to connect different mathematical ideas, and promote creative thinking when solving modeling tasks. Finally, self-checking promotes error detection and correction, leading to a better understanding of mathematical ideas. At the same time, the sub-constructs of metacognition (awareness, planning, self-checking, and cognitive strategies) would help enhance vertical mathematization skills. For example, students can identify the relevant mathematical relationships and structures needed to build a mathematical model by improving their awareness. To fulfill this aim, they must recognize the mathematical concepts and principles that apply to the current real-world problem. Again, the objectives are set in the planning phase, variables and parameters are selected, and the mathematical operations and transformations are described. The problem is analyzed using cognitive techniques, and the mathematical solution is found through reasoning, pattern recognition, and visualization. Finally, self-validation assures that the mathematical model is accurate and reliable. Students can locate any errors or inconsistencies and correct them by examining and checking the model frequently.

The hypotheses of the research are as follows:

H1: A significant relationship will exist between awareness and horizontal mathematization.
H2: A significant relationship will exist between cognitive strategy and horizontal mathematization.
H3: A significant relationship will exist between planning and horizontal mathematization.
H4: A significant relationship will exist between self-checking and horizontal mathematization.
H5: A significant relationship will exist between awareness and vertical mathematization.
H6: A significant relationship will exist between cognitive strategy and vertical mathematization.
H7: A significant relationship will exist between planning and vertical mathematization.
H8: A significant relationship will exist between self-checking and vertical mathematization.

Methodology

Participants and design

This study used a correlational research design (Creswell, 2012; Shanmugam and Hidayat, 2022), which explores the level of interrelation between metacognition and mathematical modeling using structural equation modeling (SEM). The sample consisted of college students studying mathematics education in Riau Province, Indonesia, with similar modeling experiences. These students were prospective mathematics teachers being prepared to teach mathematics at the secondary level. First-year (133, or 24.7%), second-year (223, or 41.4%), and third-year (182, or 33.8%) students participated in the study, for a total of 538 participants. Fourth-year students were not included because of their practical exercises. All participants were selected using cluster random sampling from universities with similar characteristics, such as location and modeling experience. We used this type of sampling because the research focused on groups rather than individuals, meaning that students from the selected universities took the test together. Although the sample included more female (483, or 89.8%) than male (55, or 10.2%) students, we did not use gender as a moderator or covariate in the analyses. The Department of Investment and Integrated One Stop Services, Indonesia, approved the study. All selected participants subsequently provided written informed consent. We explained the study’s objectives and the voluntary nature of participation before the test was administered. All students from the selected universities took 60 min to complete the metacognitive inventory and the mathematical modeling test.

To measure mathematical modeling competence, we developed and used the Modeling Test (Haines and Crouch, 2001), which we divided into two sub-constructs: horizontal and vertical mathematization. The items were multiple-choice questions with three-level scoring (0 = wrong answer, 1 = partially correct answer, and 2 = correct answer). The modeling test had 22 questions and a maximum score of 44. The test is also suitable for this study because the study included a large sample (Lingefjärd and Holmquist, 2005). Figure 1 shows an example item measuring horizontal mathematization.

Figure 1: Examples from the horizontal mathematization test.

Reliability scores for modeling competence followed the sub-constructs: horizontal mathematization (18 items, α = 0.861) and vertical mathematization (4 items, α = 0.740). These reliability values were acceptable (α > 0.70) (Tavakol and Dennick, 2011). The internal consistency of the mathematical modeling test was good, with composite reliability (CR) values ranging from 0.775 to 0.925 (>0.6). The Average Variance Extracted (AVE) ranged from 0.500 to 0.501 (>0.5), indicating good convergent validity. At the same time, the square roots of all AVE values were larger than the correlations between the corresponding constructs, which underlined the discriminant validity of the mathematical modeling test. All these values were consistent with the recommendations of researchers (Fornell and Larcker, 1981; Hair et al., 2010; Nunnally and Bernstein, 1994) and were satisfactory.
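
As a worked illustration of these psychometric checks, the sketch below computes composite reliability (CR) and average variance extracted (AVE) from standardized factor loadings using the Fornell and Larcker (1981) formulas. The loading values are hypothetical placeholders, not the item-level loadings reported for this instrument.

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    loadings = np.asarray(loadings, dtype=float)
    error_var = 1.0 - loadings ** 2            # error variance of each standardized indicator
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + error_var.sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    loadings = np.asarray(loadings, dtype=float)
    return np.mean(loadings ** 2)

# Hypothetical standardized loadings for one sub-construct (e.g., vertical mathematization)
loadings = [0.66, 0.68, 0.70, 0.70]
print(f"CR  = {composite_reliability(loadings):.3f}")       # acceptable when > 0.60
print(f"AVE = {average_variance_extracted(loadings):.3f}")  # acceptable when > 0.50
```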

The metacognitive inventory (O’Neil and Abedi, 1996) was adopted to measure metacognition and comprised four sub-scales: awareness (5 items), cognitive strategy (5 items), planning (5 items), and self-checking (5 items). An example item for each sub-construct is provided: awareness (“I am always aware of my thoughts in modeling task”), cognitive strategy (“I am trying to find the main idea in the modeling task”), planning (“I am trying to understand the purpose of the modeling task before attempting to solve it”), and self-checking (“If I notice any mistakes while working on the modeling task, I always correct them”). Reliability scores for metacognition followed the sub-constructs of awareness (α = 0.825), cognitive strategy (α = 0.853), planning (α = 0.842), and self-checking (α = 0.828). These reliability values were acceptable (α > 0.70) (Tavakol and Dennick, 2011). The internal consistency of the metacognitive inventory was high, with composite reliability (CR) values ranging from 0.775 to 0.925 (>0.6). The Average Variance Extracted (AVE) ranged from 0.500 to 0.526 (>0.5), indicating good convergent validity. The square roots of all AVE values were higher than the correlations between the corresponding constructs, underlining the discriminant validity of the metacognition scale. These values were consistent with what researchers proposed (Fornell and Larcker, 1981; Hair et al., 2010; Nunnally and Bernstein, 1994) and were satisfactory.
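
The discriminant-validity check described above (the Fornell–Larcker criterion) can be illustrated with the short sketch below, which compares the square root of each construct’s AVE with its correlations with the other constructs. The AVE values and the latent correlation matrix here are hypothetical placeholders, not the paper’s reported matrices.

```python
import numpy as np
import pandas as pd

constructs = ["awareness", "cognitive_strategy", "planning", "self_checking"]
ave = pd.Series([0.51, 0.53, 0.50, 0.52], index=constructs)   # hypothetical AVE per construct
corr = pd.DataFrame(                                          # hypothetical latent correlations
    [[1.00, 0.68, 0.60, 0.58],
     [0.68, 1.00, 0.63, 0.61],
     [0.60, 0.63, 1.00, 0.59],
     [0.58, 0.61, 0.59, 1.00]],
    index=constructs, columns=constructs)

sqrt_ave = np.sqrt(ave)
for c in constructs:
    others = corr.loc[c].drop(c)                 # correlations with the other constructs
    passed = bool((sqrt_ave[c] > others).all())  # Fornell–Larcker: sqrt(AVE) should exceed every correlation
    print(f"{c}: sqrt(AVE)={sqrt_ave[c]:.3f}, max r={others.max():.3f}, discriminant validity: {passed}")
```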

Strategy of data analyses

In the first analysis, we screened all sub-constructs with descriptive statistics: missing data, outliers (boxplots), means, standard deviations, skewness, and kurtosis. At the same time, the relationships between latent variables were calculated using Pearson correlations to check for multicollinearity. According to Kline (2005), the correlations between the latent variables should be less than 0.900 for the observed variables to be free from multicollinearity. For the cut-off values of univariate normality, we used skewness (±2.0) (Tabachnick and Fidell, 2013) and kurtosis (±8.0) (Kline, 2005). Then, SEM (AMOS version 18.0) was used to evaluate the hypothesized model. First, we calculated a measurement model (confirmatory factor analysis, CFA) for each variable to test whether the dimensional structures of the instruments could be confirmed for the present sample. For the construct of metacognition, we assessed the awareness, cognitive strategy, planning, and self-checking models sequentially. The next measurement model assessed the two-dimensional modeling competence (horizontal and vertical mathematization). Next, we set up the hypothesized model to test the effect of the sub-dimensions of metacognition on mathematical modeling (horizontal and vertical mathematization). Model fit was assessed using the standardized root mean square residual (SRMR) (<0.080), chi-square values (P > 0.05), the comparative fit index (CFI) (>0.950), the Tucker-Lewis index (TLI) (>0.950), the root mean square error of approximation (RMSEA) (<0.080) (Bandalos and Finney, 2018; Dash and Paul, 2021), and the goodness-of-fit index (GFI) (>0.900) (Dash and Paul, 2021). SRMR was determined by taking the average of the residuals from the comparison of the observed and implied matrices (Bandalos and Finney, 2018). The chi-square test assessed the discrepancy between the observed sample data and the covariance matrices implied by the model. CFI and TLI compare the fit of the specified model to that of a null or independence model. Finally, to assess the reliability, convergent validity, and discriminant validity of the measures, we used composite reliability (CR) (>0.60), Cronbach’s alpha values (0.60–0.70), and average variance extracted (AVE) (>0.50).
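
The authors fitted all models in AMOS 18.0, a graphical tool. Purely as an illustration of the analysis strategy, the sketch below shows how a comparable measurement-plus-structural specification could be written in lavaan-style syntax with the Python package semopy; the package choice, the item names (aw1…, cs1…, hm1…, vm1…), and the data file are assumptions for illustration, not the authors’ actual tooling or variable names.

```python
import pandas as pd
import semopy

MODEL_DESC = """
# Measurement part (CFA): four metacognition sub-constructs and two mathematization factors.
awareness     =~ aw1 + aw2 + aw3 + aw4 + aw5
cog_strategy  =~ cs1 + cs2 + cs3 + cs4 + cs5
planning      =~ pl1 + pl2 + pl3 + pl4 + pl5
self_checking =~ sc1 + sc2 + sc3 + sc4 + sc5
horizontal    =~ hm1 + hm2 + hm3 + hm4 + hm5   # 18 items in the actual test; truncated here
vertical      =~ vm1 + vm2 + vm3 + vm4

# Structural part: metacognition sub-dimensions predicting each type of mathematization.
horizontal ~ awareness + cog_strategy + planning + self_checking
vertical   ~ awareness + cog_strategy + planning + self_checking
"""

data = pd.read_csv("item_responses.csv")   # hypothetical item-level data file
model = semopy.Model(MODEL_DESC)
model.fit(data)
print(model.inspect())                     # parameter estimates, standard errors, p-values
print(semopy.calc_stats(model))            # fit statistics (chi-square, CFI, TLI, RMSEA, ...)
```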

Descriptive results

Table 1 shows the descriptive results and correlation matrix for the sub-construct of metacognition (awareness, cognitive strategy, planning, and self-checking) and the sub-construct of modeling competency (horizontal and vertical mathematization).

As indicated in Table 1, the strongest relationship was between awareness and cognitive strategy (r = 0.677), while horizontal and vertical mathematization (r = 0.342) were the least correlated. The students’ awareness, cognitive strategy, planning, and self-checking were moderate (M = 3.940, M = 3.737, M = 3.951, and M = 3.910, respectively). The skewness scores ranged between −0.658 and −0.124 (±2.0), while the kurtosis values ranged between 0.087 and 2.343 (±8.0). No values exceeded the cut-off scores for any of the four sub-constructs (Kline, 2005; Tabachnick and Fidell, 2013), indicating univariate normality. At the same time, the students’ horizontal and vertical mathematization scores were also moderate (M = 0.914 and M = 0.848, respectively). The skewness scores ranged between 0.095 and 0.195 (±2.0), while the kurtosis scores ranged between −0.670 and 0.032 (±8.0). No scores exceeded the cut-off values for the two sub-constructs (Kline, 2005; Tabachnick and Fidell, 2013), again indicating normal distributions.
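
A minimal sketch of this univariate screening is shown below, assuming the sub-construct scores are available in a CSV file with the column names used here (both the file and the column names are hypothetical).

```python
import pandas as pd
from scipy.stats import skew, kurtosis

df = pd.read_csv("subconstruct_scores.csv")   # hypothetical file of per-student sub-construct scores
cols = ["awareness", "cognitive_strategy", "planning", "self_checking",
        "horizontal_math", "vertical_math"]

for col in cols:
    s = skew(df[col], bias=False)
    k = kurtosis(df[col], bias=False)         # excess kurtosis, the form usually reported by SPSS/AMOS
    print(f"{col}: M={df[col].mean():.3f}, SD={df[col].std():.3f}, "
          f"skew={s:.3f} (ok: {abs(s) <= 2.0}), kurtosis={k:.3f} (ok: {abs(k) <= 8.0})")
```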

Measurement models

The measurement model was employed to confirm that the observed variables reflected the unobserved (latent) variables before evaluating the hypothesized structural model. We employed CFA to measure the fit of the latent variables of metacognition (20 indicators) and mathematical modeling competency (22 indicators). The maximum likelihood estimates revealed that the measurement model of metacognition with four sub-constructs showed an acceptable fit: χ² = 325.454, χ²/df = 1.984, RMSEA = 0.043, SRMR = 0.036, CFI = 0.965, GFI = 0.955, TLI = 0.959 (Table 2). Moreover, the measurement model of mathematical modeling competency with two sub-constructs also indicated an adequate fit of the model to the data: χ² = 261.077, χ²/df = 1.305, RMSEA = 0.024, SRMR = 0.041, CFI = 0.975, GFI = 0.958, TLI = 0.971. Despite the significant chi-square results, the χ²/df, RMSEA, SRMR, CFI, GFI, and TLI values suggested that the a priori models had an adequate factor structure.
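
For readers unfamiliar with how these indices are obtained, the sketch below reproduces the standard formulas for χ²/df, RMSEA, CFI, and TLI. The model χ² and N come from the paragraph above (df is back-calculated from the reported χ²/df), while the baseline (independence) model values are hypothetical placeholders, since they are not reported in the paper.

```python
import math

def rmsea(chi2, df, n):
    # RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    # CFI = 1 - max(chi2_m - df_m, 0) / max(chi2_b - df_b, chi2_m - df_m, 0)
    return 1.0 - max(chi2_m - df_m, 0.0) / max(chi2_b - df_b, chi2_m - df_m, 1e-12)

def tli(chi2_m, df_m, chi2_b, df_b):
    # TLI = ((chi2_b/df_b) - (chi2_m/df_m)) / ((chi2_b/df_b) - 1)
    return ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)

n = 538
chi2_m, df_m = 325.454, 164        # metacognition CFA; df back-calculated from chi2/df = 1.984
chi2_b, df_b = 5000.0, 190         # hypothetical baseline (independence) model values
print(f"chi2/df = {chi2_m / df_m:.3f}")           # approximately 1.984
print(f"RMSEA   = {rmsea(chi2_m, df_m, n):.3f}")  # approximately 0.043, matching the reported value
print(f"CFI     = {cfi(chi2_m, df_m, chi2_b, df_b):.3f}")
print(f"TLI     = {tli(chi2_m, df_m, chi2_b, df_b):.3f}")
```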

Factor loadings and SEM regression coefficients are shown in Table 3. All factor loadings for the sub-constructs of horizontal mathematization (0.617–0.837), vertical mathematization (0.660–0.703), awareness (0.662–0.738), cognitive strategy (0.758–0.770), planning (0.660–0.757), and self-checking (0.662–0.760) were significant. Each item within every sub-construct exhibited statistically significant factor loadings (P < 0.001), affirming the correlation among items for each sub-construct. The standardized estimates indicated that all items had factor loadings greater than 0.50, surpassing the desired criteria (Hair et al., 2010).

Testing the hypothesized models

As with the measurement models, cut-off scores were applied for each index to evaluate the hypothesized model: χ²/df < 5.00, RMSEA < 0.080, SRMR < 0.080, CFI > 0.950, GFI > 0.900, TLI > 0.950. The results of SEM indicated a highly satisfactory fit to the data: χ² = 1163.570, χ²/df = 1.460, RMSEA = 0.029, SRMR = 0.043, CFI = 0.950, GFI = 0.908, TLI = 0.950 (see Fig. 2). The hypothesized model shown in Fig. 2 was the final structural model and indicated the relationship between the sub-constructs of metacognition and mathematical modeling competency. The significance of the individual structural paths in this model is examined below.

Figure 2: The final structural model.

Next, Table 4 shows detailed statistics on the final model (e.g., standardized estimate, unstandardized estimate, standard errors, CR, and P value).

As shown in Table 4, the following direct path coefficients were significant: (a) cognitive strategy → horizontal mathematization [β = 0.26, P < 0.05, t = 2.535], (b) planning → horizontal mathematization [β = 0.23, P < 0.05, t = 2.369], and (c) self-checking → horizontal mathematization [β = 0.23, P < 0.05, t = 2.470]. Hypotheses H2, H3, and H4 were therefore supported: students who used cognitive strategy, planning, and self-checking performed well in horizontal mathematization. Conversely, the direct path coefficient of awareness to horizontal mathematization was insignificant [β = 0.17, P > 0.05, t = 1.685], so H1 was not supported; awareness alone may not strongly predict success in horizontal mathematization. At the same time, the following direct path coefficients were not significant: (a) cognitive strategy → vertical mathematization [β = 0.24, P > 0.05, t = 1.763], (b) planning → vertical mathematization [β = 0.15, P > 0.05, t = 1.180], and (c) awareness → vertical mathematization [β = 0.08, P > 0.05, t = 0.635]. This result shows that awareness, cognitive strategy, and planning alone may not strongly predict success in vertical mathematization, so H5, H6, and H7 were not supported. The direct path coefficient of self-checking → vertical mathematization was significant [β = 0.27, P < 0.05, t = 2.138], supporting H8: students who used self-checking performed well in vertical mathematization. In conclusion, cognitive strategy (26%), planning (23%), and self-checking (23%) accounted for variance in horizontal mathematization, while self-checking (27%) accounted for variance in vertical mathematization.
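
The significance decisions above follow directly from the critical ratios (t-values): with a large sample, a path is significant at P < 0.05 (two-tailed) when |t| exceeds roughly 1.96. The small sketch below applies that rule to the coefficients reported in the text (the standard errors from Table 4 are not reproduced here).

```python
# Standardized beta and critical ratio (t) for each structural path, as reported in the text.
paths = {
    "cognitive strategy -> horizontal": (0.26, 2.535),
    "planning -> horizontal":           (0.23, 2.369),
    "self-checking -> horizontal":      (0.23, 2.470),
    "awareness -> horizontal":          (0.17, 1.685),
    "cognitive strategy -> vertical":   (0.24, 1.763),
    "planning -> vertical":             (0.15, 1.180),
    "awareness -> vertical":            (0.08, 0.635),
    "self-checking -> vertical":        (0.27, 2.138),
}

CRITICAL_T = 1.96  # two-tailed, alpha = 0.05, large-sample approximation
for path, (beta, t) in paths.items():
    print(f"{path}: beta = {beta:.2f}, t = {t:.3f}, significant = {abs(t) > CRITICAL_T}")
```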

Discussion

Integrating mathematical modeling across subject areas can give students a more meaningful and context-rich understanding of mathematics. Numerous studies have shown that many students find mathematical modeling difficult and complex (Anhalt et al., 2018; Corum and Garofalo, 2019; Czocher, 2017). For example, some students have difficulty translating real-world problems into mathematical terms, while others have difficulty finding appropriate mathematical models to represent complex systems and phenomena. This study aimed to examine whether the different sub-dimensions of metacognition could be used to predict a student’s level of competency in modeling.

We found no significant positive relationship between awareness and horizontal or vertical mathematization. Although this finding conflicts with numerous studies that report a significant and positive relationship between these variables (Kreibich et al., 2022; Sevgi and Karakaya, 2020; Toraman et al., 2020), that previous research has primarily focused on metacognitive awareness in general rather than on the sub-domain of awareness within metacognition. Indeed, much of the research in mathematics education has focused on problem-solving and not specifically on the context of mathematical modeling. This focus on problem-solving has led to valuable insights into how students learn, think, and apply mathematical concepts. However, certain aspects of mathematical modeling may have been less explored or understood in the process. One possible explanation could be insufficient mathematical knowledge in mathematical modeling. Leong (2014) indicated that incorporating mathematical modeling into the curriculum may face challenges, including teacher readiness, time constraints, and educator dispositions. The extent of a student’s mathematical understanding can influence the connection between awareness and horizontal or vertical mathematization. Students who do not have the requisite mathematical foundations may have difficulty making connections or applying problem-solving techniques, regardless of their level of awareness. In principle, increased awareness can help students identify relevant information, recognize patterns and relationships, develop appropriate assumptions, select mathematical tools, and reflect on their modeling process.

Our results show a positive and significant correlation between cognitive strategy and horizontal mathematization; however, no significant relationship was found between cognitive strategy and vertical mathematization. This result confirms previous research in this area (Hidayat et al., 2020, 2022; Krüger et al., 2020). This observation can be attributed to the complexity of the tasks. Horizontal mathematization involves translating real-world problems into mathematical representations, whereas vertical mathematization involves working within the domain of mathematics to solve problems. Cognitive strategies, such as organizing information, recognizing patterns, and selecting appropriate tools, may be more applicable to horizontal mathematization. This result is consistent with Krüger et al.’s (2020) view that using cognitive strategies provides direction in the formulation phase of the modeling process. Conversely, in vertical mathematization, tasks may be more complex or abstract and require higher mathematical knowledge or skills. Vertical mathematization involves going deeper into the mathematical domain, working with more abstract concepts, and using advanced problem-solving techniques. Cognitive strategies typically focus on organizing, planning, and selecting tools, which may not be as influential in this more abstract and complex domain. Consequently, cognitive strategies alone may not be sufficient to influence vertical mathematization. Another possible explanation is that students’ different cognitive styles may lead to different approaches to the mathematization processes (Mariani and Hendikawati, 2017).

This research’s results indicate a significant and positive relationship between planning and horizontal mathematization, but no significant correlation was found between planning and vertical mathematization. This result is consistent with previous research (García et al., 2019; Herawaty et al., 2018; Zhang et al., 2019). In the horizontal mathematization context, verbalization can potentially explain this observation. Zhang et al. (2019) indicated that students can improve their problem-solving skills through planning strategies such as verbalization. Verbalization, i.e., talking about the problem and their thought processes, can also help students clarify their thinking and identify possible errors or inconsistencies in their reasoning. By breaking down complex problems into smaller, more manageable steps, students can more easily understand the problem and develop an action plan for solving it. In horizontal mathematization, students must be able to analyze the problem, identify the most important variables and relationships, and develop a plan to solve the problem using mathematical concepts and procedures. However, the sub-domain of planning does not appear to be used as effectively in vertical mathematization. Vertical mathematization requires students to engage in a more analytical and abstract form of thinking, which can be more challenging than the more concrete and tangible aspects of horizontal mathematization. In addition, vertical mathematization often involves multiple mathematical concepts and procedures, making it more challenging to plan a clear and effective problem-solving strategy. Students may rely on trial-and-error methods or intuitive problem-solving approaches rather than explicit planning.

Our study shows a significant positive correlation between self-checking and both horizontal and vertical mathematization. This result is consistent with previous studies on this topic, such as those by Akman and Alagöz (2018), García et al. (2019), and Herawaty et al. (2018). This consistency across studies highlights the importance of self-checking or monitoring in mathematical modeling. One possible explanation for this consistent finding is that self-checking helps students identify errors, ensure accuracy, and build confidence in their mathematical abilities. Using self-checking techniques, students monitor their understanding and progress as they work through the problem. This monitoring can help them identify errors or misunderstandings early on and correct their thought processes or methods accordingly. Self-checking can also help students stay organized and focused as they solve the problem, reducing the chance of making mistakes or overlooking important details; for example, it can help modelers confirm that they have correctly identified the relevant variables and relationships in the problem. Similarly, monitoring strategies can improve vertical mathematization by helping students stay organized and focused, reflect on their problem-solving approaches, and interpret the outcomes of their solutions. For example, monitoring or self-checking can help students interpret the results of their problem-solving efforts in the context of the original problem. By reflecting on the meaning of the solution and its relation to the real world, students can develop a deeper understanding of mathematical concepts and their applications. Research has also shown that the sub-dimension of monitoring can predict student engagement in classroom discussions (Akman and Alagöz, 2018).

Mathematical modeling involves applying mathematical concepts and techniques to real-world situations and requires students to think critically, creatively, and systematically about problems. Students need opportunities to engage in various tasks that require applying their mathematical knowledge to real-world situations and sufficient time to gain experience and develop their skills. Metacognition plays an important role in mathematical modeling by helping students become more aware of their thinking processes, monitor their understanding, and decide when to seek help or additional support. According to this research, awareness alone did not significantly impact horizontal mathematization. However, using cognitive techniques, making deliberate plans, and self-checking significantly improved horizontal mathematization. To improve learners’ horizontal mathematization skills, it is important to motivate them to use proper cognitive methods, acquire efficient planning techniques, and develop the habit of self-checking. In addition, the results pave the way for further research on the exact cognitive strategies, planning methods, and self-checking procedures that support effective horizontal mathematization. By analyzing how these variables interact and influence student performance, insights can be gained into instructional strategies and interventions that support successful mathematical modeling. Finally, these findings improve our understanding of the intricate connection between metacognition and mathematical modeling. Awareness may not directly affect horizontal mathematization, but cognitive techniques, planning, and self-checking are critical. The unique processes and techniques associated with different types of mathematical modeling must also be considered, as demonstrated by the differential effects on vertical mathematization. These findings extend our theoretical understanding of the relationship between mastery of mathematical modeling, metacognitive processes, and specific cognitive skills.

Limitations and suggestions

It is common for research studies to have limitations, and the current study is no exception. Acknowledging and considering the study’s limitations in future research is essential. Firstly, some hypotheses are fully supported by the research findings, while others are not. It is possible that other factors, such as students’ prior mathematical knowledge and experience, their motivation and engagement in mathematical modeling, and the quality of instruction, play a more important role in promoting horizontal and vertical mathematization. Further research is needed to fully understand the complex interplay of factors contributing to horizontal and vertical mathematization and to identify effective strategies for promoting mathematization in students. Secondly, although the current study found correlations among variables, it is important to note that correlational studies cannot prove causality. Future research may therefore benefit from using experimental designs or other methods to establish causal relationships among variables. These methods may involve interventions or manipulations designed to directly change the independent variable and observe its effects on the dependent variable. Such methods allow researchers to understand the causal relationships between variables better and draw more meaningful conclusions about the effects of various factors on the outcome of interest. Finally, a potential limitation of the current study is that it relied on self-reported measures of variables that could be susceptible to bias or error. Future research could benefit from using objective measurements or multiple data sources to increase the validity of the results. Objective measurements may include direct observation or physiological measurements, providing more accurate and reliable data. In addition, using multiple data sources can contribute to a more comprehensive understanding of the phenomenon under study, as different data sources may capture different aspects of the measured construct. Using such methods, researchers can increase the validity and reliability of their findings and draw more meaningful conclusions about the relationship between different variables.

Data availability

All relevant data can be found in the manuscript and its accompanying supplementary files.

Abassian A, Safi F, Bush S, Bostic J (2019) Five different perspectives on mathematical modeling in mathematics education. Investig Math Learn 12(1):53–65. https://doi.org/10.1080/19477503.2019.1595360


Akman Ö, Alagöz B (2018) Relation between metacognitive awareness and participation to class discussion of university students. Univers J Educ Res 6(1):11–24. https://doi.org/10.13189/ujer.2018.060102

Anhalt CO, Cortez R, Bennett AB (2018) The emergence of mathematical modeling competencies: an investigation of prospective secondary mathematics teachers. Math Think Learn 20(3):202–221. https://doi.org/10.1080/10986065.2018.1474532

Araújo JDL, Lima FHD (2020) The mathematization process as object-oriented actions of a modelling activity system. Bolema Boletim de Educação Matemática 34(68):847–868. https://doi.org/10.1590/1980-4415v34n68a01

Ärlebäck J (2017) Using a models and modeling perspective (MMP) to frame and combine research, practice- and teachers’ professional development. CERME 10, Dublin, Ireland, https://shorturl.at/pyNT8


Bandalos DL, Finney SJ (2018) Factor analysis. In: The reviewer’s guide to quantitative methods in the social sciences. Routledge, p. 98–122. https://doi.org/10.4324/9781315755649-8

Bedel EF (2012) An examination of locus of control, epistemological beliefs and metacognitive awareness in preservice early childhood teachers. Educ Sci Theory Pract 12(4):3051–3060

Biccard P, Wessels DC (2011) Documenting the development of modelling competencies of grade 7 mathematics students. Trends Teach Learn Math Modell ICTMA 14:375–383. https://doi.org/10.1007/978-94-007-0910-2_37

Blum W (2011) Can modelling be taught and learnt? Some answers from empirical research. In: Kaiser G, Blum W, Borromeo Ferri R, Stillman G (eds). Trends in teaching and learning of mathematical modelling. International perspectives on the teaching and learning of mathematical modelling, 1. Springer, p. 15–30

Blum W (2002) ICMI study 14: applications and modelling in mathematics education—discussion document. Educ Stud Math 51:149–171. https://doi.org/10.1007/BF02655826

Carreira S, Baioa AM (2018) Mathematical modeling with hands-on experimental tasks: on the student’s sense of credibility. ZDM Math Educ 50(1):201–215. https://doi.org/10.1007/s11858-017-0905-1

Corum K, Garofalo J (2019) Engaging preservice secondary mathematics teachers in authentic mathematical modeling: deriving Ampere’s law. Math Teacher Educ 8(1):76–91. https://doi.org/10.5951/mathteaceduc.8.1.0076

Creswell JW (2012) Educational research: planning, conducting, and evaluating quantitative and qualitative research. In: Educational Research, Vol. 4. Pearson

Csapó B, Funke J (2017) The nature of problem solving: using research to inspire 21st century learning. OECD Publishing

Czocher JA (2017) Mathematical modeling cycles as a task design heuristic. Math Enthusiast 14(1–3):129–140. https://doi.org/10.54870/1551-3440.1391

Dash G, Paul J (2021) CB-SEM vs PLS-SEM methods for research in social sciences and technology forecasting. Technol Forecast Soc Change 173:121092. https://doi.org/10.1016/j.techfore.2021.121092

Efklides A (2008) Metacognition: defining its facets and levels of functioning in relation to self-regulation and co-regulation. Eur Psychol 13(4):277–287. https://doi.org/10.1027/1016-9040.13.4.277

English L (2007) Interdisciplinary modelling in the primary mathematics curriculum. In: Watson J, Beswick K (eds) Mathematics: Essential research, essential practice, 1. Mathematics education research group of Australasia, Australia, p. 275–284

English L, Lesh R, Fennewald T (2008) Future directions and perspectives for problem solving research and curriculum development. Paper presented at the 11th international conference on mathematical education, Monterrey, Mexico. http://tsg.icme11.org/document/get/458

Erbas AK, Kertil M, Çetinkaya B, Çakiroglu E, Alacaci C, Bas S (2014) Mathematical modeling in mathematics education: basic concepts and approaches. Educ Sci Theory Pract 14(4):1621–1627. https://doi.org/10.12738/estp.2014.4.2039

Flavell JH (1979) Metacognition and cognitive monitoring: a new area of cognitive–developmental inquiry. Am Psychol 34(10):906. https://doi.org/10.1037/0003-066x.34.10.906

Fleur DS, Bredeweg B, van den Bos W (2021) Metacognition: ideas and insights from neuro-and educational sciences. NPJ Sci Learn 6(1):13. https://doi.org/10.31234/osf.io/zx6f7


Fornell C, Larcker DF (1981) Evaluating structural equation models with unobservable variables and measurement error. J Market Res 18:39–50. https://doi.org/10.2307/3151312

Freudenthal H (1991) Revisiting mathematics education, China lectures. Kluwer Academic Publishers

Freudenthal H (2002) Revisiting mathematics education. China lectures. Kluwer Academic Publishers

Galbraith P (2017) Forty years on: mathematical modelling in and for education. In: Downton A, Livy S, Hall J (eds) 40 Years on: We are still learning! Proceedings of the 40th annual conference of the mathematics education research group of Australasia, MERGA, p. 47–50

García T, Rodríguez C, González-Castro P, González-Pienda JA, Torrance M (2016) Elementary students’ metacognitive processes and post-performance calibration on mathematical problem-solving tasks. Metacogn Learn 11:139–170. https://doi.org/10.1007/s11409-015-9139-1

García T, Boom J, Kroesbergen EH, Núñez JC, Rodríguez C (2019) Planning, execution, and revision in mathematics problem solving: does the order of the phases matter? Stud Educ Eval 61:83–93. https://doi.org/10.1016/j.stueduc.2019.03.001

Garfunkel S, Montgomery M (2016) Guidelines for assessment and instruction in mathematical modeling education (GAIMME) report. Consortium for Mathematics and Its Applications (COMAP)/Society For Industrial and Applied Mathematics (SIAM), Boston/Philadelphia, Pennsylvania, United States

Gravemeijer K (2008) RME theory and mathematics teacher education. In: Tirosh, D, Wood T (eds) The international handbook of mathematics teacher education: tools and processes in mathematics teacher education. Sense Publishers, p. 283–302

Haines C, Crouch R (2001) Recognizing constructs within mathematical modelling. Teach Math Appl 20(3):129–138. https://doi.org/10.1093/teamat/20.3.129

Hair JF, Black WC, Babin BJ, Anderson RE (2010) Multivariate data analysis, 7th Edition. Prentice Hall

Hankeln C, Adamek C, Greefrath G (2019) Assessing sub-competencies of mathematical modelling—development of a new test instrument. Lines of Inquiry in Mathematical Modelling Research in Education, 143–160. https://doi.org/10.1007/978-3-030-14931-4_8

Herawaty D, Widada W, Novita T, Waroka L, Lubis ANMT (2018) Students’ metacognition on mathematical problem solving through ethnomathematics in Rejang Lebong, Indonesia. J Phys Conf Ser 1088(1):012089. https://doi.org/10.1088/1742-6596/1088/1/012089

Hernández ML, Levy R, Felton-Koestler MD, Zbiek RM (2016) Mathematical modeling in the high school curriculum. Math Teacher 110(5):336–342. https://doi.org/10.5951/mathteacher.110.5.0336

Hidayat R, Wardat Y (2023) A systematic review of augmented reality in science, technology, engineering and mathematics education. Educ Inf Technol. https://doi.org/10.1007/s10639-023-12157-x

Hidayat R, Hermandra H, Zetriuslita Z, Lestari S, Qudratuddarsi H (2022) Achievement goals, metacognition and horizontal mathematization: a mediational analysis. TEM J 11(04):1537–1546. https://doi.org/10.18421/TEM114-14

Hidayat R, Syed Zamri SNA, Zulnaidi H, Yuanita P (2020) Meta-cognitive behaviour and mathematical modelling competency: mediating effect of performance goals. Heliyon 6(4). https://doi.org/10.1016/j.heliyon.2020.e03800

Hidayat R, Zulnaidi H, Zamri SNAS (2018) Roles of metacognition and achievement goals in mathematical modeling competency: a structural equation modeling analysis. PLoS ONE 13(11). https://doi.org/10.1371/journal.pone.0206211

Kaiser G, Schwarz B (2006) Mathematical modelling as bridge between school and university. ZDM Math Educ 38(2):196–208. https://doi.org/10.1007/BF02655889

Kaiser G, Sriraman B (2006) A global survey of international perspectives on modelling in mathematics education. ZDM Math Educ 38(3):302–310. https://doi.org/10.1007/BF02652813

Kaiser G, Stender P (2013) Complex modelling problems in co-operative, self-directed learning environments. In: Stillman GA, Kaiser G, Blum W, Brown JP (eds) Teaching mathematical modelling: connecting to research and practice. Springer, Dordrecht, p. 277–293. https://doi.org/10.1007/978-94-007-6540-5_23

Kannadass P, Hidayat R, Siregar PS, Husain AP (2023) Relationship between computational and critical thinking towards modelling competency among pre-service mathematics teachers. TEM J 1370–1382. https://doi.org/10.18421/tem123-17

Karaali G (2015) Metacognition in the classroom: motivation and self-awareness of mathematics learners. Problems Resour Issues Math Undergraduate Stud 25:439–452. https://doi.org/10.1080/10511970.2015.1027837

Kim JY, Lim KY (2019) Promoting learning in online, ill-structured problem solving: the effects of scaffolding type and metacognition level. Comput Educ 138:116–129. https://doi.org/10.1016/j.compedu.2019.05.001

Kline RB (2005) Principles and practice of structural equation modeling. The Guilford Press

Kramarski B, Mevarech Z, Arami M (2002) The effects of metacognitive instruction on solving mathematical authentic tasks. Educ Stud Math 49:225–250. https://doi.org/10.1023/A:1016282811724

Kreibich A, Hennecke M, Brandstätter V (2022) The role of self-awareness and problem-solving orientation for the instrumentality of goal-related means. J Individ Differ 43(2):57–69. https://doi.org/10.1027/1614-0001/a000355

Krüger A, Vorhölter K, Kaiser G (2020) Metacognitive strategies in group work in mathematical modelling activities–The students’ perspective. In: Stillman GA, Kaiser G, Lampen, CE (eds) Mathematical modelling education and sense-making. Springer, p. 311–321

Kwarikunda D, Schiefele U, Muwonge CM, Ssenyonga J (2022) Profiles of learners based on their cognitive and metacognitive learning strategy use: occurrence and relations with gender, intrinsic motivation, and perceived autonomy support. Humanit Soc Sci Commun 9(1). https://doi.org/10.1057/s41599-022-01322-1

Leong KE (2014) Mathematical modelling in the Malaysian secondary curriculum. Learn Sci Math Online J 8:66–74

Leong KE, Tan JY (2020) Exploring secondary students’ modeling competencies. Math Enthusiast 17(1):85–107. https://doi.org/10.54870/1551-3440.1481

Lesh R, Doerr HM (2003) Beyond constructivism: a models & modeling perspective on mathematics problem solving, learning, and teaching. Lawrence Erlbaum Associates, Mahwah

Lesh R, Lehrer R (2003) Models and modeling perspectives on the development of students and teachers. Math Think Learn 5(2):109–129. https://doi.org/10.1207/S15327833MTL0502&3_01

Lingefjärd T, Holmquist M (2005) To assess students’ attitudes, skills, and competencies in mathematical modeling. Teach Math Appl 24(2–3):123–133. https://doi.org/10.1093/teamat/hri021

Maaß K (2006) What are modelling competencies? ZDM Math Educ 38(2):113–142. https://doi.org/10.1007/bf02655885

Mariani S, Hendikawati P (2017) Mathematizing process of junior high school students to improve mathematics literacy refers PISA on RCP learning. J Phys Conf Ser 824(1):012049. https://doi.org/10.1088/1742-6596/824/1/012049

Minarni A, Napitupulu EE (2020) The role of constructivism-based learning in improving mathematical high order thinking skills of Indonesian students. Infinity J 9(1):111–132. https://doi.org/10.22460/infinity.v9i1.p111-132

Mohd Saad MR, Mamat S, Hidayat R, Othman AJ (2023) Integrating technology-based instruction and mathematical modelling for STEAM-based language learning: a sociocultural and self-determination theory perspective. Int J Interact Mobile Technol 17(14):55–80. https://doi.org/10.3991/ijim.v17i14.39477

National Council of Teachers of Mathematics (1989) Curriculum and evaluation standards for school mathematics. NCTM

Niss M (2015) Mathematical competencies and PISA. In: Stacey K, Turner R (eds) Assessing mathematical literacy. Springer, Cham, https://doi.org/10.1007/978-3-319-10121-7_2

Niss M, Blum W, Galbraith P (2007) Introduction. In: Blum W, Galbraith PL, Henn H-W, Niss M (eds) Modelling and applications in mathematics education, 10th edn. Springer, p. 2–32

Nunnally JC, Bernstein IH (1994) Psychometric theory, 3rd ed. McGraw-Hill

O’Neil HF, Abedi J (1996) Reliability and validity of a state metacognitive inventory: potential for alternative assessment. J Educ Res 89:234–245. https://doi.org/10.1037/e650722011-001

Piñero Charlo JC (2020) Educational escape rooms as a tool for horizontal mathematization: learning process evidence. Educ Sci 10(9):213. https://doi.org/10.3390/educsci10090213

Schoenfeld AH (1983) Beyond the purely cognitive: belief systems, social cognitions, and metacognitions as driving forces in intellectual performance. Cogn Sci 7(4):329–363. https://doi.org/10.1016/S0364-0213(83)80003-2

Schoenfeld AH (2007) Method. In: Lester FK, Jr (ed) Second handbook of research on mathematics teaching and learning. Information Age Publishing Inc, p. 69–107

Schoenfeld AH (2016) Learning to think mathematically: problem solving, metacognition, and sense making in mathematics (reprint). J Educ 196(2):1–38. https://doi.org/10.1177/002205741619600202

Schraw G, Dennison RS (1994) Assessing metacognitive awareness. Contemporary Educ Psychol 19(4):460–475. https://doi.org/10.1006/ceps.1994.1033

Selter C, Walter D (2019) Supporting mathematical learning processes by means of mathematics conferences and mathematics language tools. ICME-13 Monographs 229–254. https://doi.org/10.1007/978-3-030-20223-1_13

Sevgi S, Karakaya M (2020) Investigation of metacognition awareness levels and problem-solving skills of middle school students. Int Online J Prim Educ 9(2):260–270. https://tinyurl.com/2vf34tbu

Sevinc S (2022) Toward a reconceptualization of model development from models-and-modeling perspective in mathematics education. Educ Stud Math 109(3):611–638. https://doi.org/10.1007/s10649-021-10096-3

Shanmugam P, Hidayat R (2022) Assessing grit and well-being of Malaysian ESL teachers: application of the PERMA model. Malaysian J Learn Instruct 19(2):153–181. https://doi.org/10.32890/mjli2022.19.2.6

Stillman G (2004) Strategies employed by upper secondary students for overcoming or exploiting conditions affecting accessibility of applications tasks. Math Educ Res J 16(1):41–71. https://doi.org/10.1007/bf03217390

Suaebah E, Mardiyana M, Saputro DRS (2020) How to analyze the students’ mathematization competencies in solving geometrical problems? J Phys Conf Ser 1469(1):012169. https://doi.org/10.1088/1742-6596/1469/1/012169

Suh JM, Matson K, Seshaiyer P (2017) Engaging elementary students in the creative process of mathematizing their world through mathematical modeling. Educ Sci 7(2):62. https://doi.org/10.3390/educsci7020062

Tabachnick BG, Fidell LS (2013) Using multivariate statistics. Pearson

Tavakol M, Dennick R (2011) Making sense of Cronbach’s alpha. Int J Med Educ 2:53–55. https://doi.org/10.5116/ijme.4dfb.8dfd

Toraman Ç, Orakci S, Aktan O (2020) Analysis of the relationships between mathematics achievement, reflective thinking of problem solving and metacognitive awareness. Int J Progres Educ 16(2):72–90. https://doi.org/10.29329/ijpe.2020.241.6

Treffers A (1978) Three dimensions. A model of goal and theory description in mathematics instruction—the Wiskobas project. D. Reidel Publishing Company

Treffers A, Goffree F (1985) Rational analysis of realistic mathematics education: the Wiskobas program. In: Streefland L (ed.) Proceedings of the ninth annual conference of the international group for the psychology of mathematics education. OW&OC, p. 97–121

Veenman MV, Van Hout-Wolters BH, Afflerbach P (2006) Metacognition and learning: conceptual and methodological considerations. Metacogn Learn 1:3–14. https://doi.org/10.1007/s11409-006-6893-0

Vorhölter K (2019) Enhancing metacognitive group strategies for modelling. ZDM Math Educ 51(4):703–716. https://doi.org/10.1007/s11858-019-01055-7

Vorhölter K (2021) Metacognition in mathematical modeling: the connection between metacognitive individual strategies, metacognitive group strategies and modeling competencies. Math Think Learn 1–18. https://doi.org/10.1080/10986065.2021.2012740

Wendt L, Vorhölter K, Kaiser G (2020) Teachers’ perspectives on students’ metacognitive strategies during mathematical modelling processes—a case study. In: Stillman G, Kaiser G, Lampen C (eds) Mathematical modelling education and sense-making: International perspectives on the teaching and learning of mathematical modelling. Springer, p. 335–346

Wilson J, Clarke D (2004) Towards the modelling of mathematical metacognition. Math Educ Res J 16(2):25–48. https://doi.org/10.1007/bf03217394

Yilmaz S, Dede TA (2016) Mathematization competencies of pre-service elementary mathematics teachers in the mathematical modelling process. Int J Educ Math Sci Technol 4(4):284. https://doi.org/10.18404/ijemst.39145

Yvain-Prébiski S, Chesnais A (2019) Horizontal mathematization: a potential lever to overcome obstacles to the teaching of modelling. In: Jankvist UT, van den Heuvel-Panhuizen M, Veldhuis M (eds) Eleventh congress of the European Society for research in mathematics education (No. 28). Freudenthal Group; Freudenthal Institute; ERME, p. 1284–1291

Zhang J, Xie H, Li H (2019) Improvement of students problem-solving skills through project execution planning in civil engineering and construction management education. Eng Constr Archit Manag 26(7):1437–1454. https://doi.org/10.1108/ecam-08-2018-0321

Author information

Authors and Affiliations

Department of Science and Technical Education, Faculty of Educational Studies, Universiti Putra Malaysia, 43400, Serdang, Selangor, Malaysia

Riyan Hidayat

Institut Penyelidikan Matematik, Universiti Putra Malaysia, Serdang, Malaysia

Riyan Hidayat

FKIP, Universitas Riau, Pekanbaru, 28293, Indonesia

Hermandra

Faculty of Science and Mathematics, Universiti Pendidikan Sultan Idris, Perak, Malaysia

Sharon Tie Ding Ying

Contributions

The conception or design of the work: RH. The acquisition, analysis, or interpretation of the data for the work: RH and Hermandra. Drafting the work or revising it critically for important intellectual content: RH and STDY.

Corresponding author

Correspondence to Riyan Hidayat.

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

The study received approval from the Department of Investment and Integrated One Stop Services, Indonesia (permit number 503/DPMPTSP/NON IZIN-RISET/8323).

Informed consent

All selected participants received a written informed consent letter. After the researcher confirmed complete confidentiality and clarified that responses would be used exclusively for academic purposes, all 538 participants took part in the study voluntarily.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Hidayat, R., Hermandra & Ying, S.T.D. The sub-dimensions of metacognition and their influence on modeling competency. Humanit Soc Sci Commun 10, 763 (2023). https://doi.org/10.1057/s41599-023-02290-w

Received: 17 April 2023

Accepted: 19 October 2023

Published: 31 October 2023

DOI: https://doi.org/10.1057/s41599-023-02290-w

This article is cited by

Educational paradigm shift: assessing the prospects of a master's course in green energy transition.

  • Baibhaw Kumar
  • Katalin Voith
  • Marti Rosas-Casals

Discover Sustainability (2024)

Longitudinal and reciprocal links between metacognition, mathematical modeling competencies, and mathematics achievement in grades 7–8: A cross-lagged panel analysis

Metacognition and Learning (2024)

Assessing Metacognitive Regulation during Problem Solving: A Comparison of Three Measures

Cristina D. Zepeda

1 Department of Psychology and Human Development, Vanderbilt University, Nashville, TN 37235, USA

Timothy J. Nokes-Malach

2 Department of Psychology, Learning Research and Development Center, University of Pittsburgh, Pittsburgh, PA 15260, USA

Associated Data

Summary levels of the data presented in this study are available on request from the corresponding author. The data are not publicly available to protect the privacy of the participants.

Metacognition is hypothesized to play a central role in problem solving and self-regulated learning. Various measures have been developed to assess metacognitive regulation, including survey items in questionnaires, verbal protocols, and metacognitive judgments. However, few studies have examined whether these measures assess the same metacognitive skills or are related to the same learning outcomes. To explore these questions, we investigated the relations between three metacognitive regulation measures given at various points during a learning activity and subsequent test. Verbal protocols were collected during the learning activity, questionnaire responses were collected after the learning tasks but before the test, and judgments of knowing (JOKs) were collected during the test. We found that the number of evaluation statements as measured via verbal protocols was positively associated with students’ responses on the control/debugging and evaluation components of the questionnaire. There were also two other positive trends. However, the number of monitoring statements was negatively associated with students’ responses on the monitoring component of the questionnaire and their JOKs on the later test. Each measure was also related to some aspect of performance, but the particular metacognitive skill, the direction of the effect, and the type of learning outcome differed across the measures. These results highlight the heterogeneity of outcomes across the measures, with each having different affordances and constraints for use in research and educational practice.

1. Introduction

Metacognition is a multi-faceted phenomenon that involves both the awareness and regulation of one’s cognitions ( Flavell 1979 ). Past research has shown that metacognitive regulation, or the skills learners use to manage their cognitions, is positively related to effective problem-solving ( Berardi-Coletta et al. 1995 ), transfer ( Lin and Lehman 1999 ), and self-regulated learning ( Zepeda et al. 2015 ). Furthermore, these skills have been shown to benefit student learning across a variety of academic domains, including math, science, reading, and writing ( Hacker et al. 2009 ). With research on metacognition advancing, multiple metacognitive skills have been proposed and evaluated, with researchers using different measures to assess each one ( Azevedo 2020 ). Although many measures and approaches have been proposed (e.g., verbal protocols, questionnaires, metacognitive judgments), less work has compared and contrasted these different measures with one another. This has led to questions about the relations of the measures to one another and concerns about measurement validity ( Veenman 2005 ; Veenman et al. 2003 ). To better understand metacognition conceptually and measure it practically, we need to compare how these different measures are similar to and different from one another.

In this work, we evaluate three types of metacognitive regulation measures: verbal protocols (e.g., students speaking their thoughts aloud during learning activities and then a researcher recording, transcribing, and coding those utterances for evidence of different metacognitive processes), a task-based questionnaire (e.g., asking students questions about how often they think they used different metacognitive processes during the learning task), and metacognitive judgments (specifically, judgments of knowing [JOKs]—a type of metacognitive judgment that asks students how confident they are about their answers on a test that is often based on content from a learning activity, sometimes referred to as retrospective confidence judgments). All three measures have been proposed to capture some aspect of metacognitive regulation. To evaluate the potential overlap of these measures, we conducted a theoretical analysis of each measure to better understand what it is intended to measure and how it has been typically used in research. We do so by reviewing the literature with consideration of each of the three measures in regard to their background theory, implications for what is learned, and attention to different aspects of validity. After this analysis, we investigate the measures in an empirical study, comparing and contrasting whether and how they are related to one another and learning outcomes during a problem-solving learning activity. Critically, this investigation has implications for practitioners trying to understand which aspects of their students’ metacognitive skills need support, as well as for theory and measurement development. Below, we first describe reasons why there might be some misalignment among the measures and then provide a detailed review of prior work using each type of measure and the validity of those measures.

1.1. Theory and Measurement: An Issue of Grain Size

One source of the variation in measurement is likely due to the variation in theories of metacognition (e.g., Brown 1987 ; Brown et al. 1983 ; Flavell 1979 ; Jacobs and Paris 1987 ; Nelson and Narens 1990 ; Schraw and Moshman 1995 ). Although most theories hypothesize that metacognition involves the ability to assess and regulate one’s thoughts, they differ in how they operationalize these constructs and their level of specificity ( Pintrich et al. 2000 ; e.g., Nelson and Narens 1990 ; Schraw and Dennison 1994 ). Two common differences across models of metacognition are the number of constructs specified and the level of analysis at which those constructs are described. Relevant to this study, metacognitive regulation has been represented across models as containing a variety of skills, such as planning, monitoring, control, and evaluating.

To illustrate the different number of constructs and the different levels of description, we compare a few models that conceptualize metacognitive regulation to one another. For example, Nelson and Narens’ ( 1990 ) model describes two higher-level constructs, whereas Schraw and Dennison’s ( 1994 ) model describes five constructs (see Figure 1 for an illustration). Nelson and Narens’ ( 1990 ) model consists of monitoring and control processes that assess the current state of working memory; it then uses that information to regulate and guide subsequent actions. These processes are described at a coarse grain level of analysis (see Figure 1 ), but the measurements of these constructs are operationalized at a more fine-grained level, focusing on different types of metacognitive judgments. Winne and Hadwin ( 1998 ) built upon this model and included additional higher-level metacognitive skills, such as planning and evaluating. Although Nelson and Narens’ ( 1990 ) model does contain aspects of planning (e.g., selection of processing) and evaluation (e.g., confidence in retrieved answers), these are included at the fine-grain level of description of monitoring and control and are not proposed as separate higher-level constructs.

Figure 1. A comparison of the coarse-grain skills of two models that conceptualize metacognitive regulation. The gray and patterned rectangles represent the coarse-grain skills represented in each model. The rounded white rectangles connect to the coarse-grain skills that they are associated with for each of the models, highlighting the potential (mis)alignment between the constructs and measures. The rounded white rectangles also contain the definition for each of the coarse-grain skills and measures we aim to measure in this work. Note that judgments of knowing (JOKs) are shown in gray to represent the misalignment across the models with associations to evaluation for Schraw and Dennison ( 1994 ) and monitoring for Nelson and Narens ( 1990 ).

Schraw and Dennison’s ( 1994 ) model also includes planning, monitoring, evaluating, as well as two additional higher-level skills, information management, and debugging. Similarly, Zimmerman’s ( 2001 ) self-regulated learning model includes the same metacognitive skills of planning, monitoring, and evaluation. Across these different models, each skill is hypothesized to have a distinct process that interacts with the other skills. To further illustrate some of these differences and similarities in the conceptualization of metacognitive regulation, in Figure 1 , we compare Schraw and Dennison’s ( 1994 ) model with Nelson and Narens’ ( 1990 ) model. One clear difference between the two models is that JOKs are represented under monitoring in Nelson and Narens’ representation; however, given the definitions of monitoring and evaluation in Schraw and Dennison’s representation (as well as the other two models mentioned earlier), this might also be related to evaluation. This difference in particular highlights both the misalignment across the theories and the misalignment across theories and measurement.

This misalignment across theories and measurement is also seen with other measures. For example, although some researchers initially sought to capture specific metacognitive skills via a questionnaire, they often ended up combining them into a single factor due to the challenges of establishing each one as a separate construct (e.g., combining metacognitive skills such as monitoring and evaluating, among others, into a single component called metacognitive regulation— Schraw and Dennison 1994 ). Similarly, Pressley and Afflerbach ( 1995 ) had difficulty differentiating monitoring from control processes in verbal protocols and found that they tend to occur at the same time. The challenges in differentiating between the metacognitive skills of monitoring, control, and evaluating could also explain why other researchers have proposed fewer interactive skills ( Howard-Rose and Winne 1993 ; Pintrich et al. 2000 ). In contrast, within hypermedia contexts, some researchers have been able to differentiate between specific, fine-grain skills, which they refer to as micro skills (e.g., learners questioning whether they understand the content) and larger-grain skills, which they refer to as macro skills (e.g., monitoring) ( Azevedo and Witherspoon 2009 ; Greene and Azevedo 2009 ).

In this work, we examine the relation between theory and measurement with respect to a subset of metacognitive skills. This subset includes monitoring, control/debugging, and evaluating. We define monitoring as one’s awareness of one’s thinking and knowledge during the task, control/debugging as goal-directed activities that aim to improve one’s understanding during the task, and evaluation as an assessment of one’s understanding, accuracy, and/or strategy use once the task is completed. For example, if a student identifies what they do not understand (monitoring) while attempting to solve a problem, then they have an opportunity to fill the gap in their knowledge by seeking new information, rereading, summarizing the instructions, trying out new ideas, and so forth (control/debugging). Then, once the solution has been generated, they can reflect on their accuracy, as well as which strategies or knowledge they found most beneficial to prepare them for future tasks (evaluation). We chose this subset of metacognitive skills as they are commonly represented across theories of metacognition and measurements. Students are also more likely to engage in monitoring, control/debugging, and evaluation during problem-solving activities compared to other metacognitive skills such as planning, which appears to happen less frequently, as students often just dive right into solving the problem (e.g., Schoenfeld 1992 ).

1.2. Relation among Measures

In addition to the issues of grain size, there are two additional factors that differ across the measures. These factors concern when (e.g., prospective, concurrent, or retrospective) and how (e.g., think aloud vs. questionnaire vs. judgment) metacognition is assessed. Concurrent or “online” measures such as verbal protocols (e.g., Chi et al. 1989 ) attempt to examine people’s metacognition as it is occurring, whereas retrospective measures such as questionnaires (e.g., Schraw and Dennison 1994 ) and JOKs (i.e., retrospective confidence judgments; see Dunlosky and Metcalfe 2009 for an overview) evaluate metacognition after the skills have been employed and/or a solution has been generated or the answer has been given. Unlike a task-based questionnaire, which typically takes place at a longer interval after completing a learning activity, JOKs that assess one’s confidence on test items take place immediately after each problem is solved. Therefore, in Figure 2 , there is more overlap between the JOKs and the test than there is between the task-based questionnaire and the learning activity. A key difference between the timing of all these measures is that, in contrast with the retrospective measures, concurrent verbal protocols allow access to the contents of working memory without having to rely on one’s long-term memory ( Ericsson and Simon 1980 ). Given that JOKs occur after a problem is solved, but also while the information is still present, they may act more like a concurrent measure than a retrospective measure. See Figure 2 for a visual representation of where some of these measures take place during the learning and assessment sequence that we used in the present study.

Figure 2. Visual representation of our across-methods-and-time design. The arrows indicate what each measure references. The verbal protocols were collected as a concurrent measure in reference to the learning activity. The task-based questionnaire was collected as a delayed retrospective measure in reference to the learning activity. The JOKs were collected as an immediate retrospective measure in reference to the test that was based on the learning content. Note, however, that the JOKs may act more like concurrent measures, as they are generated with the information still present (e.g., problem content); therefore, the box with JOKs overlaps more with the test on the learning activity, whereas the task-based questionnaire does not overlap with the learning activity.

Critically, few studies have directly compared these measures to one another. Those that have done so have shown that student responses to questionnaires rarely correspond to concurrent measures ( Cromley and Azevedo 2006 ; Van Hout-Wolters 2009 ; Veenman 2005 ; Veenman et al. 2003 ; Winne and Jamieson-Noel 2002 ; Winne et al. 2002 ). For example, Veenman et al. ( 2003 ) found weak associations ( r ’s = −.18 to .29) between verbal protocols and a questionnaire assessing students’ metacognitive study habits. Van Hout-Wolters’ ( 2009 ) work revealed similar findings, in which correlations between verbal protocols and dispositional questionnaires were weak ( r ’s = −.07 to .22). In addition, Zepeda et al. ( 2015 ) found that students who received metacognitive training differed from a comparison condition in the discrimination accuracy of their JOKs, but not in their general questionnaire responses. Schraw and Dennison ( 1994 ) and Sperling et al. ( 2004 ) showed similar findings, in which student accuracy regarding their JOKs was not related to their responses on the Metacognitive Awareness Inventory’s (MAI) metacognitive regulation dimension. The lack of associations among the different metacognitive measures may be due to the measures assessing different processes, an imprecise measure, or a combination of the two. Veenman et al. ( 2006 ) suggested that researchers should use a multi-method design to explicitly compare different methodologies and determine their convergent and external validity.

1.3. Relations to Robust Learning

Another way to examine the similarity of the measures is to examine whether they predict similar learning outcomes (e.g., external validity). To what degree do these different measures of the same construct predict similar learning outcomes? Prior research provides some evidence that metacognition is related to school achievement (e.g., grades or GPA) and performance on tests (e.g., quizzes, standardized assessments). However, no work has examined whether all three measures of the same construct predict the same type of learning outcome. Therefore, we investigated whether the different measures predicted different types of robust learning outcomes.

Robust learning is the acquisition of new knowledge or skills, which can be applied to new contexts (transfer) or prepare students for future learning (PFL) ( Bransford and Schwartz 1999 ; Koedinger et al. 2012 ; Schwartz et al. 2005 ; Richey and Nokes-Malach 2015 ). Transfer is defined as the ability to use and apply prior knowledge to solve new problems, and PFL is defined as the ability to use prior knowledge to learn new material (see Figure 3 for a comparison). For example, to assess transfer in the current study, learners attempt to apply knowledge (e.g., concept A) acquired from a statistics learning activity to new questions on a post-test that address the same concept (e.g., concept A’). Schwartz et al. refer to this process as ‘transferring out’ knowledge from learning to test. To assess PFL, an embedded resource (Concept B) is incorporated into the post-test, in which learners have to apply what they learned in the earlier learning activity (i.e., their prior knowledge, Concept A) to understand the content in the resource. This is what Schwartz et al. refer to as ‘transferring in’. Then, that knowledge is assessed with a question to determine how well the students learned that information (i.e., testing with Concept B’). To our knowledge, there is no work examining the relation between metacognition and PFL using these different metacognitive regulation measures. To gain an understanding of how these measures have been related to different learning outcomes, we surveyed the literature.

Figure 3. A comparison of the flow of information and knowledge between transfer and PFL, as derived from Bransford and Schwartz ( 1999 ) and Schwartz et al. ( 2005 ). The top light-gray box represents transfer, and the bottom white box represents PFL. “Out” means that the knowledge learned is then demonstrated on an outside assessment. “In” means the learner takes in the information from the learning activity to inform how they interpret later information. The A’ and B’ on the assessment designate that the problems are not identical to the original problems presented in the learning activity.

1.3.1. Verbal Protocols and Learning

Past work has examined the relation of verbal protocols to different types of learning. For example, Van der Stel and Veenman ( 2010 ) found that increased use of metacognitive skills (e.g., planning, monitoring, and evaluating) was associated with better near transfer (e.g., performance on isomorphic problems with the same problem structure but different surface features). In other work, Renkl ( 1997 ) found that the frequency of positive monitoring statements (e.g., “that makes sense”) was unrelated to transfer performance, but the frequency of negative monitoring statements (e.g., “I do not understand this”) was negatively related to transfer. This result shows that different types of metacognitive phenomena are differentially related to transfer. In this case, monitoring behaviors can be useful for identifying when a learner does not understand something.

1.3.2. Questionnaires and Learning

Metacognitive questionnaires are typically used to capture the relation between metacognitive skills with measures of student achievement as assessed by class grades, GPA, or standardized tests ( Pintrich and De Groot 1990 ; Pintrich et al. 1993 ; Sperling et al. 2004 ). However, a focus on achievement measures makes it difficult to determine how much and what type of knowledge a student gained because the measures are coarse grained and often do not account for prior knowledge. For example, class grades (which determine GPA) typically include other factors in addition to individual learning assessments, such as participation and group work. Unlike prior work with verbal protocols, research using questionnaires has not evaluated the relations of metacognitive skills and different types of learning outcomes, such as transfer or PFL.

1.3.3. Metacognitive Judgments—JOKs and Learning

Judgments of knowing (JOKs) have typically been used in paired-associate learning paradigms ( Winne 2011 ). However, some work has examined JOKs and their relation to test performance and GPA ( Nietfeld et al. 2005 , 2006 ). For example, Nietfeld et al. ( 2005 ) found that students’ JOKs were positively related to learning outcomes across different tests (that included transfer items), even when controlling for GPA.

1.3.4. Summary of the Relations to Robust Learning

From this brief survey of the prior literature, we see that different metacognitive measures have been related to different types of learning outcomes. Questionnaires have primarily been related to achievement outcomes (e.g., grades and GPA), whereas verbal protocols and JOKs have been related to multiple learning outcomes, including achievement and transfer. This variation makes it difficult to determine whether these measures predict the same types of learning. To gain a better understanding of how metacognition is related to learning, we examine the relations among all three measures to transfer and PFL. These empirical and theoretical challenges have direct implications for determining measurement validity.

1.4. Measurement Validity

Given the different approaches used across the three metacognitive measures and drawing inspiration from Pintrich et al.’s ( 2000 ) review, we used aspects of Messick’s ( 1989 ) validity framework to structure our review for the validity and scope of each measure. The components of measurement validity that we focus on include substantive validity, external validity, content validity, generality of the meaning (generality for short), and relevance and utility (utility for short). Substantive validity concerns whether the measure produces the predicted structure of the theoretical constructs (e.g., the type and number of metacognitive skills). External validity concerns the predictive or convergent relations to variables that the theory predicts (e.g., the relation to similar types of learning outcomes and the relation between metacognitive measures). Content validity concerns whether the measure is tailored to a specific activity or material. Generality concerns the applicability of the measure to different populations, while utility examines the ease of implementation. Below, we describe each metacognitive measure and their alignment with each of the five aspects of validity.

1.4.1. Validity of Verbal Protocols

Verbal protocols provide fine-grained verbal data to test hypotheses about what and how metacognition is used when a participant is engaged in some learning or problem-solving activity. However, the level of theoretical specificity depends on the research goals of the work, the research questions asked, and the coding rubrics constructed. For example, Renkl ( 1997 ) only examined negative versus positive monitoring, whereas other verbal protocol analyses have attempted to create a detailed taxonomy for evaluating the metacognitive activity of a learner, regardless of the valence ( Greene and Azevedo 2009 ; Meijer et al. 2006 ). Although Meijer et al. ( 2006 ) originally sought to develop a fine-grain taxonomy, due to difficulties obtaining interrater reliability, they condensed their codes into fewer, more generalized aspects of metacognition. These examples reveal that verbal protocols have not arrived at a consensus for the level of analysis with existing protocols, revealing mixed results for the substantive validity of this approach.

Verbal protocols also have mixed results regarding external validity, as they have been shown to correlate with learning outcomes in some studies (e.g., Van der Stel and Veenman 2010 ), but not others ( Meijer et al. 2012 ; Renkl 1997 ). However, this might be attributed to the way in which the verbal protocols were coded. Some coding rubrics differ in whether they code for the quality of metacognition (e.g., accuracy in application) versus the quantity of a specific metacognitive activity (e.g., the frequency of occurrence) ( Meijer et al. 2012 ).

Within a specific coding rubric, there is evidence that verbal protocols have some content validity, as the approach is domain general. Veenman et al. ( 1997 ) found that the same coding rubric could be applied across three domains and was predictive of learning outcomes within each domain. Verbal protocols have also been successfully employed with a variety of populations (e.g., Veenman et al. 2004 ) and can be applied to a variety of contexts and tasks. They have been used in physics ( Chi et al. 1989 ), biology ( Gadgil et al. 2012 ), probability ( Renkl 1997 ), and reading ( Pressley and Afflerbach 1995 ), among others. These applications reveal the flexibility of the approach across different content and contexts.

One drawback of verbal protocols is that they take a substantial amount of time to administer and evaluate. Instead of administering the measurement to groups of students, researchers typically focus on one student at a time because of the challenges of recording multiple speakers and potential verbal interference across speakers in the same room. These protocols also require more time to transcribe and code, making this a time-consuming task for researchers and practically challenging to use in the classroom. Although think-aloud protocols are more difficult to employ in classrooms, they provide benefits to researchers, such as a fine-grained source of trace data ( Ericsson and Simon 1980 ). So, while there is utility in the fine-grain products, there is a lack of practical utility in classrooms.

1.4.2. Validity of Questionnaires

Questionnaires are often used to determine the degree to which students perceive using various metacognitive skills. The majority of questionnaires ask students to report on their dispositional use of the skills, although a few are specific to a task or context. The similarity between the structure of the measurement and theory is not well aligned. Although many questionnaires attempt to assess fine-grain distinctions between metacognitive skills, they often have difficulty doing so empirically. For example, Schraw and Dennison ( 1994 ) originally sought to capture five distinct metacognitive skills within the MAI; however, the results revealed only a single factor. Similar to verbal protocols, this misalignment reveals that questionnaires have not arrived at a consensus for the level of analysis with existing questionnaires, revealing mixed results for the substantive validity of this approach.

In contrast, there is more evidence for the external validity of questionnaires. Prior work has shown that questionnaires relate to other variables predicted by metacognitive theory, such as achievement ( Pintrich and De Groot 1990 ; Pintrich et al. 1993 ) as well as convergence with similar questionnaires assessing similar processes ( Sperling et al. 2004 ; Muis et al. 2007 ). For example, Sperling et al. ( 2004 ) found that the Regulation of Cognition dimension of the MAI was related to the Metacognitive Self-Regulation scale of the Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich et al. 1991 ) (r = .46).

The content validity of a questionnaire depends on its intended scope. Some questionnaires are designed to capture the general use of metacognitive skills such as the MAI or MSLQ. Other questionnaires assess metacognitive skills for a particular task. For example, work by Van Hout-Wolters ( 2009 ) demonstrated that task-based measures have a stronger positive relation to verbal protocols than dispositional questionnaires. It is difficult to assess the strength of these different types of questionnaires because dispositional questionnaires typically focus on a generalization of the skills over a longer time-period than task-based questionnaires.

Additionally, metacognitive questionnaires have been reliably adapted to serve a variety of ages (e.g., Jr. MAI; Sperling et al. 2002 ). Of particular interest to educators and researchers is the utility of the measure with the ease of administering and scoring the instrument. Researchers have sought to develop easy-to-use retrospective questionnaires that take just a few minutes to complete. Perhaps the ease of this measure is the reason why there are many questionnaires aimed at capturing different types of content, making it difficult to assess the validity of such measures.

1.4.3. Validity of Metacognitive Judgments—JOKs

JOKs assess students’ accuracy in their monitoring of how well they know what they know after they have solved a problem or answered a question. Although often referred to as a monitoring component, some work also refers to these judgments as an evaluative skill (e.g., Winne 2011 ). Therefore, JOKs might measure one or both monitoring and evaluating skills. In some studies, these skills are collapsed together (e.g., Kistner et al. 2010 ). JOKs are one of many types of metacognitive judgments (see Alexander 2013 for an overview). We used JOKs because there has been some evidence suggesting that they have stronger relations to performance outcomes in comparison to the other types of metacognitive judgments ( Hunter-Blanks et al. 1988 ). JOKs also allowed us to gather multiple observations during an assessment, whereas we would have been limited in the number of observations for the other types of judgments given the nature of the learning task (see below for a description of the learning task).

Different types of calculations have been applied to determine the accuracy and consistency of student judgments (see Schraw 2009 ; Schraw et al. 2013 ). The prior literature has shown some evidence for substantive validity in that it is designed to capture one to two metacognitive skills, referred to as monitoring and evaluating. However, this structure may differ depending on the calculations used to assess different types of accuracy (see Schraw 2009 for a review). JOKs also have some evidence of external validity, as Nietfeld et al. ( 2005 , 2006 ) showed that student judgments were related to learning performance and GPA.

The content validity of JOKs is unclear. Some work has demonstrated it is domain general ( Mazancieux et al. 2020 ; Schraw 1996 ; Schraw et al. 1995 ) and other work has shown it is domain specific ( Kelemen et al. 2000 ). For example, Schraw ( 1996 ) showed that when controlling for test difficulty, confidence ratings from three unrelated tests (math, reading comprehension, and syllogism) were moderately related to each other (average r = .42). More recently, work has compared the types of calculations that have been applied to JOKs ( Dentakos et al. 2019 ). For example, calibrations of JOKs are positively related across tasks, but the resolution of the JOKs (e.g., relative accuracy and discrimination) are not positively related across tasks, suggesting that the type of calculation applied to determine one’s accuracy has implications for when and how JOKs are related. Regardless of these limitations, JOKs have also been applied to multiple domains (e.g., physics, general facts) and age groups ( Dunlosky and Metcalfe 2009 ).

In terms of utility, JOKs are moderately easy to implement. It takes more time to determine the accuracy calculations of these judgments than it does to evaluate questionnaire responses, but it is not as time intensive as verbal protocols. Thus, from a practical standpoint, there is utility in the administration of JOKs, but the utility in applying the calculations is challenging, as it requires additional time to apply those calculations, as well as the knowledge of how and what types of calculations to apply.

Drawing from Zepeda et al. ( 2015 ), we focus on the relation between three types of JOK calculations: absolute accuracy, gamma, and discrimination. They found differences in an experimental manipulation for one type of calculation (discrimination) but not others (absolute accuracy and gamma), suggesting that they captured different metacognitive processes. Therefore, in this study, we employ three different types of calculations: absolute accuracy and two measures of relative accuracy, gamma, and discrimination. Absolute accuracy compares judgments to performance, whereas Gamma evaluates confidence judgment accuracy on one item relative to another ( Nelson 1996 ). Schraw ( 1995 ) suggested that since there is not a one-to-one relation between gamma and absolute accuracy, research should report both. Discrimination examines the degree to which students can distinguish their confidence regarding an incorrect or correct performance ( Schraw 2009 ). Positive discrimination indicates that a learner gave higher confidence ratings for correct trials compared to incorrect trials, a negative value indicates higher confidence ratings for incorrect trials compared to correct trials, and a zero indicates no relation between the two. It can be interpreted that those with positive discrimination are aware of their correct performance. In addition to these calculations, we also examined average JOK ratings, given that students are typically poor at calibrating their understanding when the task is difficult ( Howie and Roebers 2007 ).
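
As an illustration of how these quantities can be computed, the short Python sketch below (an illustrative example constructed for this discussion, not code taken from any of the studies reviewed here) derives the average JOK, absolute accuracy, Goodman-Kruskal gamma, and discrimination for one student's test data. It assumes confidence ratings on a 0-100 scale and binary item scores; exact formulations vary across studies (e.g., squared versus unsquared deviations for absolute accuracy).

```python
# Illustrative sketch (not from the cited studies): summary JOK statistics for
# one student. Assumes 0-100 confidence ratings and binary correctness scores.

def jok_summary(confidence, correct):
    n = len(confidence)

    # Average JOK: mean confidence expressed as a proportion.
    avg_jok = sum(confidence) / (100 * n)

    # Absolute accuracy: mean squared deviation between judgment and
    # performance (one common formulation; lower values = better calibration).
    abs_acc = sum((c / 100 - k) ** 2 for c, k in zip(confidence, correct)) / n

    # Goodman-Kruskal gamma: relative accuracy based on concordant and
    # discordant item pairs (tied pairs are excluded).
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            product = (confidence[i] - confidence[j]) * (correct[i] - correct[j])
            if product > 0:
                concordant += 1
            elif product < 0:
                discordant += 1
    gamma = ((concordant - discordant) / (concordant + discordant)
             if (concordant + discordant) else float("nan"))

    # Discrimination: mean confidence on correct items minus mean confidence
    # on incorrect items (positive = more confident when correct).
    correct_conf = [c for c, k in zip(confidence, correct) if k == 1]
    incorrect_conf = [c for c, k in zip(confidence, correct) if k == 0]
    discrimination = (sum(correct_conf) / len(correct_conf)
                      - sum(incorrect_conf) / len(incorrect_conf)
                      if correct_conf and incorrect_conf else float("nan"))

    return {"average_jok": avg_jok, "absolute_accuracy": abs_acc,
            "gamma": gamma, "discrimination": discrimination}

# Hypothetical example: five test items with confidence ratings and scored answers.
print(jok_summary([90, 60, 75, 40, 85], [1, 0, 1, 0, 1]))
```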

1.4.4. Summary of Measurement Validity

The validity across the three types of measurement reveals two consistent patterns, such that they all have been applied to different age groups (generality), and they tend to have mixed or only some support for their substantive validity. For the remaining three types of validity, different patterns emerge. Both questionnaires and JOKs have evidence of external validity and their content validity tends to be more sensitive to context. In contrast, for verbal protocols, there is mixed support for external validity and evidence of content validity. Additionally, all three measurements range in their ease of implementation (their utility) such that questionnaires are more easily applied and scored in educational contexts than JOKs, and both are easier to implement than verbal protocols. Given this landscape, we paid particular attention to the design and development of each measure, especially their alignment with theory (i.e., substantive validity) and their framing to content (e.g., using a task-based questionnaire versus a general one and examining different calculations of JOK accuracy).

1.5. Underlying Processes of the Measures

In addition to their relations to learning outcomes and past work evaluating their validity and scope, these measures likely capture similar and different processes. For example, for the monitoring skills, all three measures likely capture some aspect of monitoring, such as reflecting on one’s use of monitoring during a task-based questionnaire, the actual verbalization of monitoring, and the monitoring that contributes to one’s JOKs. At the same time, each of these measures might also reflect other processes. Reporting one’s use of monitoring requires the person to be aware of their use of monitoring, to monitor their monitoring, and rely on their long-term memory, whereas the verbal protocols capture the monitoring as it unfolds. These verbal protocols also likely contain more information about how the monitoring unfolds and might be more accurate at distinguishing between monitoring, control/debugging, and evaluating one’s learning process. In contrast, self-reporting on these skills might have more cross-over effects when students reflect on using these skills and determining the boundaries between them. The JOKs are similar to the task-based questionnaire, such that they may rely on monitoring the monitoring that took place during the learning task and one’s long-term memory of that experience, but they are different in that JOKs mainly involve monitoring one’s monitoring during the test. Some recent work supports the idea that there may be different monitoring skills at play among the measures. For example, McDonough et al. ( 2021 ) revealed that there appear to be two types of monitoring skills among metacognitive judgments: monitoring skills that occur during the encoding stage versus monitoring skills that occur at the retrieval stage, such that they rely on different pieces of information and cues (e.g., the difficulty of the learning task versus post-test).

As described when comparing the monitoring task-based questionnaire and monitoring statements, the control/debugging skills represented in the task-based questionnaire and the verbal protocols likely have similar overlaps, with some additional differences. Reporting one’s use of control/debugging requires them to be aware and monitor their control/debugging while also relying on their long-term memory. In contrast, the verbalizations capture the control/debugging as it unfolds. The degree of their need to control/debug their learning might also have implications for their reports on the questionnaire, such that in their reports, they might focus on the quantity as well as the productivity of their controlling/debugging.

Evaluating can also be captured across all three types of measures, but more directly by the verbal protocols and the task-based survey. For instance, the processes captured in the task-based questionnaire require learners to be aware of their evaluation process and know of the boundary between the skills. The verbal protocols more directly capture the evaluations as they occur and allow for a potentially more accurate differentiation between monitoring and evaluating. Additionally, the JOKs require students to reflect on their current understanding (i.e., monitoring) but also include aspects in which they evaluate how well they solved the present problem and learned the material during the learning activity. Thus, those measures may be related as well.

Given the different processes, boundaries, and demands of these different types of measures that aim to capture the same set of metacognitive skills, some aspects suggest that they should be related across the measures. Other aspects suggest that these measures may not be well aligned with one another because of the different processes that are required for each skill and measurement type. Therefore, the question remains: when the measures are developed to capture the same metacognitive skills, do they have similar relations to each other and learning outcomes?

1.6. Current Work

In this work, we assessed the relations among three metacognitive regulation measures: a retrospective task-based questionnaire, concurrent verbal protocols recorded during a learning activity, and JOKs elicited during a post-test (outlined in Table 1). The overall goal of this study was to investigate how these measures related to each other and to determine the degree to which they predict similar outcomes for the same task.

Overview of the metacognitive regulation measurements.

Measurement | Metacognitive Skill | Timing | Framing of the Assessment | Analytical Measures | Predicted Learning Outcome
Verbal Protocols | Monitoring, Control/Debugging, and Evaluating | Concurrent | Task based | Inter-rater reliability, Cronbach's alpha | Learning, transfer, and PFL
Questionnaires | Monitoring, Control/Debugging, and Evaluating | Retrospective | Task based | Second-Order CFA, Cronbach's alpha | Learning, transfer, and PFL
Metacognitive Judgments—JOKs | Monitoring and Monitoring Accuracy | Retrospective | Test items | Cronbach's alpha, Average, Mean Absolute accuracy, Gamma, and Discrimination measures | Learning, transfer, and PFL

Therefore, we hypothesized that:

Given that prior work tends to use these measures interchangeably and that they were developed to capture the same set of metacognitive skills, one hypothesis is that they will be positively related, as they assess similar metacognitive processes. Monitoring and evaluating assessed by JOKs will have a small positive association with the monitoring and evaluating assessed by the verbal protocols and the task-based questionnaire (rs between .20 and .30), but the associations for one type of skill might be higher than the other. This relation is expected to be small given that they occur at different time points with different types of activities, although all on the same learning content. We also predict a moderate relation between the verbal protocols and the task-based questionnaire for monitoring, control/debugging, and evaluating (rs between .30 and .50), which would be consistent with past work examining the relations between questionnaires and verbal protocols by Schellings and Van Hout-Wolters (2011) and Schellings et al. (2013). This relation is larger given that both measures are used in the same learning task but at slightly different time points. Alternatively, given the lengthy review we conducted showing that these measures are often not positively related to each other and that the measures themselves may require different processes, these measures might not be related to one another, even after being carefully matched across the metacognitive skills.

Although there are nuances between each of the measures, the prior work we reviewed suggests that they all should predict performance on learning, transfer, and PFL.

Prior studies examining metacognition tend to utilize tell-and-practice activities in which students receive direct instruction on the topic (e.g., Meijer et al. 2006 ). In contrast, we chose a structured-inquiry activity, as it might provide more opportunities for students to engage in metacognitive regulation ( Schwartz and Bransford 1998 ; Schwartz and Martin 2004 ). A core feature of these activities is that students try to invent new ways to think about, explain, and predict various patterns observed in the data. In the task we chose, students attempt to solve a challenging statistics problem in which they have an opportunity to monitor their progress and understanding, try out different strategies, and evaluate their performance. Although there is controversy in the learning sciences about the benefits of inquiry-based instruction ( Alfieri et al. 2011 ), several research groups have accumulated evidence of the benefits of these types of structured inquiry activities in math and science domains (e.g., Belenky and Nokes-Malach 2012 ; Kapur 2008 ; Roll et al. 2009 ; Schwartz and Martin 2004 ). For example, these activities have been shown to engage students in more constructive cognitive processes ( Roll et al. 2009 ) and to facilitate learning and transfer ( Kapur and Bielaczyc 2012 ; Kapur 2008 , 2012 ; Roll et al. 2009 ).

2. Materials and Methods

2.1. Participants

Sixty-four undergraduates (13 female, 51 male) enrolled in an Introductory Psychology course at a large Mid-Atlantic university participated in the study. All students consented to participate in the study and received credit for their participation. We excluded data from 19 students from the analyses, as they were able to correctly solve for mean deviation and/or standard deviation on the pre-test, which were the two mathematical concepts to be learned during the learning activity. The remaining 45 students (9 female, 36 male) were included in the analyses, as they still had an opportunity to learn the material. Within this sample, student GPAs included a broad range, with students self-reporting below a 2.0 (4.4%), 2.0–2.5 (20%), 2.5–3.0 (28.9%), 3.0–3.5 (24.4%), and 3.5–4.0 (22.2%). Within this sample, 77.8% of the students identified as white, 6.7% as African American, 6.7% as Biracial, 4.4% as Hispanic, 2.2% as Asian Indian, and 2.2% did not specify.

2.2. Design

Using an across-method-and-time design, we recorded student behaviors with video-recording software during a learning activity and collected student responses to a task-based questionnaire and JOKs. See Figure 4 for an overview of the experimental design, materials, and procedure.

Figure 4. Design summary. The three metacognitive measures captured in this work are italicized. The arrow shows the direction of the ordered activities.

2.3. Materials

The materials consisted of a pre-test, a learning task, a task-based questionnaire, a post-test, and an additional questionnaire that captured demographic information. The learning task was divided into three segments: an invention task on variability, a lecture on mean deviation, and a learning activity on standard deviation. These were identical to those used by Belenky and Nokes-Malach ( 2012 ), which were adapted from Schwartz and Martin ( 2004 ). Parts of the questionnaires assessed student metacognition, motivation, and cognitive processes; however, for this paper, we focus only on the metacognitive components.

2.3.1. Learning Pre-Test

The pre-test was used as a screening tool to remove data from participants who already knew how to solve mean, mean deviation, and standard deviation problems. These items were adapted from Belenky and Nokes-Malach (2012) and Schwartz and Martin (2004). All students completed a pre-test with three types of items targeting procedural and conceptual knowledge. All items were scored as either correct (1) or incorrect (0). Two questions assessed basic procedural knowledge of mean and mean deviation, and one assessed conceptual knowledge matched to a preparation-for-future-learning (PFL) problem in the post-test (Bransford and Schwartz 1999).

2.3.2. Learning Task

The learning task consisted of two activities and a lecture. The first learning activity was based on calculating variability. Students were asked to invent a mathematical procedure to determine which of four pitching machines was most reliable (see Belenky and Nokes-Malach 2012, Figure 4, p. 12 ; Schwartz and Martin 2004, p. 135 ). The consolidation lecture provided an example that explained how to calculate variability using mean deviation and two practice problems with feedback on how to correctly solve the problems. The second activity asked students to invent a procedure to determine which of two track stars on two different events performed better (Bill on the high jump versus Joe on the long jump). Students received scratch paper and a calculator.

Scoring of the learning activities. Learning materials were evaluated based on the use of correct procedures and the selection of the correct response. Since students could determine the correct answer based on evaluating the means, we coded for every step students took and their interpretations of their final answers. For the variability activity, students could receive a total of 4 points. They received 1 point for calculating the mean, 1 for subtracting the numbers from the mean and taking the absolute value, 1 for taking the mean of those numbers, and 1 for stating that the Fireball Pitching Machine was the most reliable. For the second activity, the standardization activity, students could receive a total of 5 points. They received 1 point for calculating the mean, 1 for subtracting the numbers from the mean and squaring that value, 1 for taking the mean of those numbers, 1 for taking the square root of that value, and 1 for stating that Joe performed better.
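To make the two target procedures concrete, the following Python sketch (not the authors' materials or scoring code) implements mean deviation, which students were expected to invent in the variability task, and a standardized-score comparison for the second task; all data values are hypothetical placeholders.

```python
# Minimal sketch of the two procedures students were asked to invent (hypothetical data).

def mean_deviation(values):
    """Average absolute distance from the mean; a smaller value means a more reliable machine."""
    m = sum(values) / len(values)
    return sum(abs(x - m) for x in values) / len(values)

def standardized_score(value, values):
    """(value - mean) / population standard deviation; used to compare scores across events."""
    m = sum(values) / len(values)
    sd = (sum((x - m) ** 2 for x in values) / len(values)) ** 0.5
    return (value - m) / sd

# Variability task: pick the machine with the smallest mean deviation (made-up pitch distances).
fireball = [2, 3, 2, 3]
smythes_finest = [1, 6, 2, 5]
print(mean_deviation(fireball), mean_deviation(smythes_finest))

# Standardization task: the athlete with the larger standardized score performed better (made-up data).
high_jumps = [5.8, 6.0, 6.2, 6.1]
long_jumps = [24.0, 25.5, 26.0, 25.0]
print(standardized_score(6.3, high_jumps), standardized_score(26.5, long_jumps))
```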

2.3.3. Learning Post-Test

Similar to the pre-test, many of the post-test items were identical to or adapted from Belenky and Nokes-Malach (2012), Gadgil (2014), and Schwartz and Martin (2004). The post-test contained seven items that measured students' conceptual and procedural knowledge of the mean deviation. It also assessed students' abilities to visually represent and reason about data. These items assess a variety of different types of transfer, such as near and intermediate (e.g., Nokes-Malach et al. 2013). For this work, we do not analyze these levels of transfer separately, as there are not enough items for each transfer type to effectively examine outcomes.

Within the assessment, there was also a PFL problem that evaluated students' abilities to apply information from an embedded resource to a standard deviation problem. The embedded learning resource was presented as a worked example in the post-test and showed students how to calculate a standardized score with a simple data set; it was identical to Belenky and Nokes-Malach (2012, Figure 8, p. 16; adapted from Schwartz and Martin 2004, pp. 177–78). This resource also gave another simple problem using standardized scores. The PFL transfer problem appeared five problems after the worked example. The problem was presented later in the post-test so that the application of the information was not due to mere temporal proximity (i.e., the next problem); instead, it required students to notice, recall, and apply the relevant information at a later time. The PFL problem required students to determine which value from two different distributions was more impressive than the other. During the post-test, students were also asked to respond to a JOK for each problem, in which they rated how confident they were in their answer from 1 (not at all confident) to 5 (very confident).

Scoring of post-test items. Each item was coded for accuracy. The post-test comprised two types of problems: 6 transfer items focused on using the correct procedure and understanding the concepts of mean deviation (α = .39) and 1 PFL problem. Two transfer problems involved the use of the correct procedure, in which a correct response was coded as 1 and an incorrect response as 0. The other four transfer problems involved reasoning and were coded for the amount of detail within the reasoning. Each of these conceptual problems included different types of reasoning. One point was granted for a complete understanding of the concept; partial understanding received .67, .50, or .33 (depending on how many ideas were needed to represent the complete concept); otherwise, the item was scored 0. The post-test transfer items were scored out of a total of 6 points. The PFL problem was scored as correct (1) or incorrect (0).
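One way to read the partial-credit scheme is as the fraction of required ideas a student expressed (e.g., 2 of 3 ideas yields .67). The sketch below is our illustration of that reading, not the authors' rubric, and the idea counts are hypothetical.

```python
# Hypothetical illustration of the partial-credit scoring described above.

def conceptual_item_score(ideas_present: int, ideas_required: int) -> float:
    """Fraction of the required ideas expressed, e.g., 2 of 3 ideas -> 0.67."""
    if ideas_required <= 0:
        raise ValueError("ideas_required must be positive")
    return round(min(ideas_present, ideas_required) / ideas_required, 2)

def transfer_total(procedural_correct, conceptual_scores):
    """Sum of 0/1 procedural items and partial-credit conceptual items (max 6 points)."""
    return sum(procedural_correct) + sum(conceptual_scores)

print(conceptual_item_score(2, 3))                     # 0.67
print(transfer_total([1, 0], [1.0, 0.5, 0.33, 0.67]))  # 3.5
```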

2.3.4. Verbal Protocols

To provide practice with talking aloud, we included a 3 min activity where participants solved multiplication problems. Specifically, participants were told, “As you go through the rest of the experiment, there are going to be parts where I ask you to talk aloud, say whatever you are thinking. It is not doing any extra thinking, different thinking, or filtering what you say. Just say whatever it is you are naturally thinking. We’re going to record what you say in order to understand your thinking. So, to practice that, I will give you some multiplication problems; try solving them out loud to practice.” Then, the experimenter was instructed to give them feedback about how they were talking aloud with prompts such as, “That is exactly right, just say what you’re thinking. Talk aloud your thoughts,” or “Remember to say your thoughts out loud,” or “Naturally say whatever you are thinking, related or unrelated to this. Please do not filter what you’re saying.” Once participants completed the practice talking aloud activity, they were instructed to talk aloud for the different learning activities.

Processing and coding of the verbal protocols . To capture the metacognitive processes, we used prior rubrics for monitoring, control/debugging, and evaluating ( Chi et al. 1989 ; Gadgil et al. 2012 ; Renkl 1997 ; see Table 2 ). We also coded for two distinct types of debugging—conceptual error correction and calculation error correction. These were coded separately, as these types of corrections might be more directly related to better performance. Students who focus on their conceptual or procedural (calculation) understanding are aiming to increase a different type of understanding than those who are rereading or trying out other strategies. Those who reread and try out different strategies are still on the path of figuring out what the question is asking them to achieve, whereas those who are focusing on conceptual and calculation errors are further in their problem-solving process. Critically, we coded for the frequency of each metacognitive process as it aligned with prior rubrics that have measured verbal protocols in the past. We hypothesized that the first learning activity would have representative instances of metacognitive regulation, since it was an invention task.

Verbal coding rubric.

Code Type | Definition | Transcript Examples
Monitoring | Checking one’s understanding about what the task is asking them to do; making sure they understand what they are learning/doing. | “I’m gonna figure out a pretty much the range of them from vertically and horizontally? I’m not sure if these numbers work (inaudible)”; “That doesn’t make sense”.
Control/Debugging | An action to correct one’s understanding or to enhance one’s understanding/progress. Often involves using a different strategy or rereading. | “I’m re-reading the instructions a little bit”; “So try a different thing”.
Conceptual Error Correction | A statement that reflects an understanding that something is incorrect with their strategy or reflects noticing a misconception about the problem. | “I’m thinking of finding a better system because, most of these it works but not for Smythe’s finest because it’s accurate, it’s just drifting”.
Calculation Error Correction | Noticing of a small error that is not explicitly conceptual. Small calculator errors would fall into this category. | “4, whoops”.
Evaluation | Reflects on their work to make sure they solved the problem accurately. Reviews for understanding of concepts as well as reflects on accurate problem-solving procedures such as strategies. | “Gotta make sure I added all that stuff together correctly”; “Let’s see, that looks pretty good”; “Let’s check the match on these.”

All videos were transcribed and coded from the first learning activity on variability. Statement length was identified by clauses and natural pauses in speech. Two coders then independently coded 20% of the data and reached agreement, as indexed by an inter-coder reliability analysis (κ > .7). The coders discussed and resolved their discrepancies and then independently coded the rest of the transcripts. The verbal protocol coding was based on prior rubrics and is represented with examples from the transcripts in Table 2. Due to an experimental error, one participant was not recorded and was therefore excluded from all analyses involving the verbal protocols. For each student, we counted the number of statements generated for each coding category and divided this number by their total number of statements. On average, students generated 58.79 statements, with much variation (SD = 34.10). Students engaged in monitoring the most (M = 3.05 statements per student), followed by evaluation (M = 2.71 statements per student). Students rarely employed control/debugging, conceptual error correction, and calculation error correction (M = .23, .05, and .61, respectively). Therefore, we combined these scores into one control/debugging verbal protocol code (M = .88 statements per student).

We also examined the relations between the total number of statements generated (i.e., verbosity) and the number of statements for each type of metacognitive category. The amount students monitored (r = .59, p < .001), controlled/debugged (r = .69, p < .001), and evaluated (r = .72, p < .001) their understanding was related to the total number of utterances. Given this relationship, we divided each type of verbal protocol count by the total number of utterances to control for verbosity.
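As a concrete illustration of that verbosity control, the sketch below converts per-category statement counts into proportions of a participant's total utterances; the category names and numbers are hypothetical, not the coded data.

```python
# Hypothetical illustration of controlling for verbosity in the verbal protocol codes.

def protocol_proportions(counts: dict, total_utterances: int) -> dict:
    """Divide each category's statement count by the participant's total utterances."""
    return {code: n / total_utterances for code, n in counts.items()}

example_counts = {"monitoring": 3, "control_debugging": 1, "evaluation": 2}
print(protocol_proportions(example_counts, total_utterances=59))
```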

2.3.5. Task-Based Metacognitive Questionnaire

We adapted questionnaire items from previously validated questionnaires and verbal protocol coding rubrics ( Chi et al. 1989 ; Gadgil et al. 2012 ; Renkl 1997 ) as indicated in Table 3 . Informed by this research and Schellings and Van Hout-Wolters’ ( 2011 ) in-depth analysis of the use of questionnaires and their emphasis on selecting an appropriate questionnaire given the nature of the to-be-assessed activity, we created a task-based questionnaire and adapted items from the MAI, MSLQ, Awareness of Independent Learning Inventory (AILI, Meijer et al. 2013 ), a problem-solving based questionnaire ( Howard et al. 2000 ; Inventory of Metacognitive Self-Regulation [IMSR] that was developed from the MAI and Jr. MAI as well as Fortunato et al. 1991 ), and a state-based questionnaire ( O’Neil and Abedi 1996 ; State Metacognitive Inventory [SMI]). In total, there were 24 metacognitive questions: 8 for monitoring, 9 for control/debugging, and 7 for evaluation. Students responded to each item using a Likert scale ranging from 1, strongly disagree, to 7, strongly agree . All items and their descriptive statistics are presented in Table 3 . We chose to develop and validate a task-based metacognitive questionnaire for three reasons. First, there is mixed evidence about the generality of metacognitive skills ( Van der Stel and Veenman 2014 ). Second, there are no task-based metacognitive measures for a problem-solving activity. Third, to our knowledge, no existing domain-general questionnaires reliably distinguish between the metacognitive skills of monitoring, control/debugging, and evaluation.

Descriptive statistics and factor loadings for questionnaire items.

Item | Original Construct | [Min, Max] | M (SD) | Standardized Factor Loading | Estimate | Residual Variance
During the activity, I found myself pausing regularly to check my comprehension. | MAI | [1, 7] | 4.20 (1.78) | .90 | .81 | 0.19
During the activity, I kept track of how much I understood the material, not just if I was getting the right answers. | MSLQ Adaptation | [1, 7] | 4.18 (1.60) | .83 | .69 | 0.31
During the activity, I checked whether my understanding was sufficient to solve new problems. | Based on verbal protocols | [1, 7] | 4.47 (1.59) | .77 | .59 | 0.41
During the activity, I tried to determine which concepts I didn’t understand well. | MSLQ | [1, 7] | 4.44 (1.65) | .85 | .73 | 0.27
During the activity, I felt that I was gradually gaining insight into the concepts and procedures of the problems. | AILI | [2, 7] | 5.31 (1.28) | .75 | .56 | 0.44
During the activity, I made sure I understood how to correctly solve the problems. | Based on verbal protocols | [1, 7] | 4.71 (1.46) | .90 | .80 | 0.20
During the activity, I tried to understand why the procedure I was using worked. | Strategies | [1, 7] | 4.40 (1.74) | .78 | .62 | 0.39
During the activity, I was concerned with how well I understood the procedure I was using. | Strategies | [1, 7] | 4.38 (1.81) | .74 | .55 | 0.45
During the activity, I reevaluated my assumptions when I got confused. | MAI | [2, 7] | 5.09 (1.58) | .94 | .89 | 0.11
During the activity, I stopped and went back over new information that was not clear. | MAI | [1, 7] | 5.09 (1.54) | .65 | .42 | 0.58
During the activity, I changed strategies when I failed to understand the problem. | MAI | [1, 7] | 4.11 (1.67) | .77 | .60 | 0.40
During the activity, I kept track of my progress and, if necessary, I changed my techniques or strategies. | SMI | [1, 7] | 4.51 (1.52) | .89 | .79 | 0.21
During the activity, I corrected my errors when I realized I was solving problems incorrectly. | SMI | [2, 7] | 5.36 (1.35) | .50 | .25 | 0.75
During the activity, I went back and tried to figure something out when I became confused about something. | MSLQ | [2, 7] | 5.20 (1.58) | .87 | .75 | 0.25
During the activity, I changed the way I was studying in order to make sure I understood the material. | MSLQ | [1, 7] | 3.82 (1.48) | .70 | .49 | 0.52
During the activity, I asked myself questions to make sure I understood the material. | MSLQ | [1, 7] | 3.60 (1.59) | .49 | .25 | 0.76
REVERSE: During the activity, I did not think about how well I was understanding the material, instead I was trying to solve the problems as quickly as possible. | Based on verbal protocols | [1, 7] | 3.82 (1.72) | .54 | .30 | 0.71
During the activity, I found myself analyzing the usefulness of strategies I was using. | MAI | [1, 7] | 5.02 (1.55) | .48 | .23 | 0.77
During the activity, I reviewed what I had learned. | Based on verbal protocols | [2, 7] | 5.04 (1.40) | .57 | .33 | 0.67
During the activity, I checked my work all the way through each problem. | IMSR | [1, 7] | 4.62 (1.72) | .94 | .88 | 0.12
During the activity, I checked to see if my calculations were correct. | IMSR | [1, 7] | 4.73 (1.97) | .95 | .91 | 0.09
During the activity, I double-checked my work to make sure I did it right. | IMSR | [1, 7] | 4.38 (1.87) | .89 | .79 | 0.21
During the activity, I reviewed the material to make sure I understood the information. | MAI | [1, 7] | 4.49 (1.71) | .69 | .48 | 0.52
During the activity, I checked to make sure I understood how to correctly solve each problem. | Based on verbal protocols | [1, 7] | 4.64 (1.57) | .86 | .75 | 0.26

Note. In the original table, bolded italics mark each of the three factors, with their respective items listed below each factor.

To evaluate the substantive validity of the questionnaire, we used a second-order CFA model consisting of three correlated factors (i.e., monitoring, control/debugging, and evaluation) and one superordinate factor (i.e., metacognitive regulation) with Mplus version 6.11. A robust weighted least squares estimation (WLSMV) was applied. Prior to running the model, normality assumptions were tested and met. The resulting second-order CFA model had an adequate goodness of fit, CFI = .96, TLI = .96, RMSEA = .096, χ²(276) = 2862.30, p < .001 (Hu and Bentler 1999). This finalized model also had high internal reliability for each of the factors: superordinate, α = .95; monitoring, α = .92; control/debugging, α = .86; and evaluation, α = .87. For factor loadings and item descriptive statistics, see Table 3. On average, students reported moderate use of monitoring (M = 4.51), control/debugging (M = 4.51), and evaluation (M = 4.70).
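The CFA itself was estimated in Mplus, but the internal-consistency values follow the standard Cronbach's alpha formula. The sketch below shows that computation on hypothetical Likert responses, not the study data.

```python
# Cronbach's alpha for a set of questionnaire items (hypothetical responses).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
fake_monitoring_items = rng.integers(1, 8, size=(45, 8))  # 45 students x 8 items on a 1-7 scale
print(cronbach_alpha(fake_monitoring_items))
```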

2.3.6. Use of JOKs

We also analyzed the JOKs (α = .86) using several different calculations. As mentioned in the introduction, we calculated the mean absolute accuracy, gamma, and discrimination (see Schraw 2009 for the formulas). Gamma could not be computed for 9 participants (25% of the sample), since they responded with the same confidence rating for all seven items. Therefore, we did not examine gamma in our analyses. Absolute accuracy ranged from .06 to .57, with a lower score indicating better precision in the judgments, whereas discrimination in this study ranged from −3.75 to 4.50, with more positive scores indicating that students could better distinguish what they knew from what they did not.
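The three JOK-based indices can be computed per student from the seven confidence ratings and the item-level accuracy. The sketch below is our reading of the Schraw (2009) formulas (the absolute-accuracy index is shown in an absolute-deviation form with confidence rescaled to 0-1; Schraw also describes a squared-deviation variant), and the example vectors are hypothetical.

```python
# Hypothetical illustration of the JOK calculations (confidence on a 1-5 scale, accuracy 0/1).
import numpy as np

def mean_absolute_accuracy(confidence, accuracy):
    """Mean deviation between rescaled confidence and accuracy; values near 0 indicate better calibration."""
    conf = (np.asarray(confidence, float) - 1) / 4.0  # rescale 1-5 ratings to 0-1
    return float(np.mean(np.abs(conf - np.asarray(accuracy, float))))

def discrimination(confidence, accuracy):
    """Mean confidence on correct items minus mean confidence on incorrect items."""
    conf, acc = np.asarray(confidence, float), np.asarray(accuracy)
    return float(conf[acc == 1].mean() - conf[acc == 0].mean())

def goodman_kruskal_gamma(confidence, accuracy):
    """(concordant - discordant) / (concordant + discordant) pairs; undefined when all ratings are identical."""
    c = d = 0
    for i in range(len(confidence)):
        for j in range(i + 1, len(confidence)):
            s = (confidence[i] - confidence[j]) * (accuracy[i] - accuracy[j])
            c += s > 0
            d += s < 0
    return (c - d) / (c + d) if (c + d) else float("nan")

conf = [4, 5, 3, 2, 5, 4, 3]  # hypothetical ratings for the seven post-test items
acc = [1, 1, 0, 0, 1, 1, 0]   # hypothetical item correctness
print(mean_absolute_accuracy(conf, acc), discrimination(conf, acc), goodman_kruskal_gamma(conf, acc))
```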

2.4. Procedure

The study took approximately 120 min to complete (see Figure 4 for an overview). At the beginning of the study, students were informed that they would be videotaped during the experiment and consented to participate. They then completed the pre-test (15 min), after which the experimenter instructed them to say their thoughts aloud. The experimenter then gave the students a sheet of paper with three multiplication problems on it. If students struggled to think aloud while solving problems (i.e., they did not say anything), the experimenter modeled how to think aloud. Once students completed all three problems and the experimenter was satisfied that they understood how to think aloud (3 min), the experimenter moved on to the learning activity. Students had 15 min to complete the variability learning activity. After the variability activity, students watched a consolidation video (15 min) and worked through a standard deviation activity (15 min). Then, they were asked to complete the task-based questionnaire (10 min). Once the questionnaire was completed, the students had 35 min to complete the post-test. Upon completion of the post-test, students completed several questionnaires and a demographic survey and were then debriefed (12 min).

3. Results

The first set of analyses examined whether the three measures were related to one another. The second set of analyses evaluated the degree to which the different measures related to learning, transfer, and PFL, providing evidence of external validity for the measurements. Descriptive statistics for each measure are presented in Table 4. For all analyses, alpha was set to .05, and results were interpreted as trending if p < .10.

Descriptive statistics for each measure.

Measure | Variable | N | Min | Max | M | SE | SD
Verbal Protocols | Monitoring | 44 | 0.00 | 0.29 | 0.05 | 0.01 | 0.06
 | Control/Debugging | 44 | 0.00 | 0.06 | 0.01 | 0.002 | 0.02
 | Evaluation | 44 | 0.00 | 0.16 | 0.04 | 0.01 | 0.04
Questionnaire | Monitoring | 45 | 1.13 | 6.75 | 4.51 | 0.19 | 1.29
 | Control/Debugging | 45 | 2.33 | 6.44 | 4.51 | 0.16 | 1.08
 | Evaluation | 45 | 2.14 | 7.00 | 4.70 | 0.19 | 1.28
JOKs | Mean | 45 | 2.00 | 5.00 | 4.31 | 0.09 | 0.60
 | Mean Absolute Accuracy | 45 | 0.06 | 0.57 | 0.22 | 0.02 | 0.13
 | Discrimination | 45 | −3.75 | 4.50 | 1.43 | 0.33 | 2.21

Note. To control for the variation in the length of the verbal protocols across participants, the verbal protocol measures were calculated by taking the total number of times the specified verbal protocol measure occurred by a participant and dividing that by the total number of utterances that participant made during the learning activity.

3.1. Relation within and across Metacognitive Measures

To evaluate whether the measures revealed similar associations between the different skills both within and across the measures, we used Pearson correlation analyses. See Table 5 for all correlations. Within the measures, we found no associations among the skills in the verbal protocol codes, but there were positive associations among all the skills in the task-based questionnaire (monitoring, control/debugging, and evaluation). For the JOKs, there was a negative association between mean absolute accuracy and discrimination, meaning that the more accurate participants were at judging their confidence (a score closer to zero for absolute accuracy), the better they discriminated between their correct and incorrect answers (a more positive discrimination score). There was also a positive association between the average JOK ratings and discrimination, meaning that those who gave higher confidence ratings were also better at discriminating their correct from incorrect answers.

Correlations between the task-based questionnaire, verbal protocols, and judgments of knowing.

Variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
VPs: 1. Monitoring | - | .09 | .01 | −.36 * | −.10 | −.16 | −.41 * | −.07 | −.14
2. Control/Debugging |  | - | .16 | .12 | −.08 | .14 | −.16 | .03 | −.08
3. Evaluation |  |  | - | .29 † | .31 * | .37 * | −.10 | .02 | .01
Qs: 4. Monitoring |  |  |  | - | .73 ** | .73 ** | .26 † | .06 | .02
5. Control/Debugging |  |  |  |  | - | .65 ** | .02 | −.02 | −.03
6. Evaluation |  |  |  |  |  | - | .15 | .11 | −.09
JOKs: 7. Average |  |  |  |  |  |  | - | .14 | .39 **
8. Mean Absolute Accuracy |  |  |  |  |  |  |  | - | −.76 **
9. Discrimination |  |  |  |  |  |  |  |  | -

Note. VPs = Verbal Protocols, Qs = Questionnaire, JOKs = Judgments of Knowing. † = p < .10, * = p < .05, and ** = p < .01.

Across the measures, an interesting pattern emerged. The proportion of monitoring statements was negatively associated with the monitoring questionnaire and the average JOK ratings. However, there was no relationship between the monitoring questionnaire and the average JOK ratings. For the other skills, control/debugging and evaluation questionnaire responses positively correlated with the proportion of evaluation statements. There were also two trends for the monitoring questionnaire, such that it was positively related to the proportion of evaluation statements and the average JOK ratings. Otherwise, there were no other associations.

3.2. Relation between Metacognitive Measures and Learning

3.2.1. Learning and Test Performance

The learning materials included the first and second learning activities and a post-test that included transfer items and a PFL item. For the first learning activity, scores ranged from 0 to 3 (out of 4), with an average score of 1.6 points (SD = .72; 40%). For the second learning activity, scores ranged between 0 and 2 (out of 5), with an average score of 1.56 points (SD = .59; 31%). Given the low performance on the second activity and the observation that most students applied mean deviation to it instead of inventing a new procedure, we did not analyze these results further. For the post-test transfer items, scores ranged from 1 to 5.67 (out of 6), with an average score of 3.86 points (SD = 1.26). We did not include the PFL item in the transfer score, as we were particularly interested in examining the relation between the metacognitive measures and PFL. The PFL scores ranged from 0 to 1 (out of 1), with an average score of 0.49 (SD = 0.51). For ease of interpretation, we converted student scores for all learning measures into proportions correct in Table 6.

Descriptive statistics for each learning measure.

Measure | N | Min | Max | M | SE | SD
First Learning Activity | 45 | 0.00 | 0.75 | 0.40 | 0.03 | 0.18
Transfer | 45 | 0.17 | 0.94 | 0.64 | 0.03 | 0.21
PFL | 45 | 0.00 | 1.00 | 0.49 | 0.08 | 0.51

To evaluate the relation between each metacognitive measure and the learning materials, we used a series of regressions. We used multiple linear regressions to test the amount of variance explained in the first learning activity and post-test performance by each measure. Then, to test the amount of variance explained by each metacognitive measure in the PFL performance, we used multiple logistic regression. In addition to these models, we also regressed the learning outcomes on the most predictive variables from each of the measures and entered them into a competing model to evaluate whether and how much they uniquely contribute to the overall variance.
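A minimal sketch of this analysis plan, using statsmodels: ordinary least squares regression for the continuous outcomes and logistic regression for the binary PFL item. The data frame and its column names are hypothetical placeholders, not the study's variables.

```python
# Hypothetical illustration of the regression approach (OLS and logistic regression).
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_outcome_models(df: pd.DataFrame, predictors: list):
    """Regress each learning outcome on the chosen metacognitive predictors."""
    X = sm.add_constant(df[predictors])
    ols_activity = sm.OLS(df["first_activity"], X).fit()  # adjusted R^2 and coefficients
    ols_transfer = sm.OLS(df["transfer"], X).fit()
    logit_pfl = sm.Logit(df["pfl"], X).fit(disp=0)         # Wald tests; Exp(B) = np.exp(params)
    return ols_activity, ols_transfer, logit_pfl

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "q_monitoring": rng.normal(4.5, 1.3, 45),
    "q_control_debugging": rng.normal(4.5, 1.1, 45),
    "q_evaluation": rng.normal(4.7, 1.3, 45),
    "first_activity": rng.uniform(0, 0.75, 45),
    "transfer": rng.uniform(0.17, 0.94, 45),
    "pfl": rng.integers(0, 2, 45),
})
activity_m, transfer_m, pfl_m = fit_outcome_models(df, ["q_monitoring", "q_control_debugging", "q_evaluation"])
print(activity_m.rsquared_adj, pfl_m.prsquared)
```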

3.2.2. Verbal Protocols and Learning Outcomes

For verbal protocols, we entered each of the codes into the model. The model predicting performance on the first learning activity explained 14.2% of the variance as indexed by the adjusted R² statistic, F(3, 40) = 2.21, p = .10. Within the model, there was only an effect of monitoring, β = −0.37, t = −2.51, p = .02, VIF = 1.00 (Table 7). The models predicting transfer, F(3, 40) = 0.19, p = .90, and PFL scores, χ²(3, N = 44) = 5.05, p = .17, were not significant.

Multiple linear regression model predicting performance on the first activity with verbal protocols.

Variable | β | t | p | VIF
Monitoring statements | −0.37 | −2.51 | .02 * | 1.01
Control/Debugging statements | −0.05 | −0.32 | .75 | 1.03
Evaluation statements | −0.03 | −0.17 | .87 | 1.02
Constant |  | 10.06 | <.001 *** | 

Note. * = p < .05 and *** = p < .001.

3.2.3. Task-Based Questionnaire and Learning Outcomes

For the task-based questionnaire, we computed two types of models: one with all three metacognitive skills and the other with each metacognitive skill entered separately. Entering all three skills simultaneously led to no significant relations for the first learning activity, F(3, 41) = 1.46, p = .24, transfer, F(3, 41) = 0.15, p = .93, or PFL, χ²(1, N = 45) = 2.97, p = .40. However, because the three factors were highly correlated, we entered each factor into three separate models (Kraha et al. 2012).

Entering the skills into separate models revealed a marginal effect of self-reported monitoring, β = 0.27, t = 1.87, p = .07, VIF = 1.00, and self-reported evaluation, β = 0.29, t = 2.0, p = .05, VIF = 1.00, on the first learning activity. The model predicting performance on the first learning activity with self-reported monitoring explained 7.5% of the variance as indexed by the adjusted R² statistic, F(1, 43) = 3.50, p = .07, whereas the model with self-reported evaluation explained 8.5% of the variance as indexed by the adjusted R² statistic, F(1, 43) = 4.01, p = .05. Otherwise, there were no significant relations. Self-reported monitoring and evaluation were not related to performance on transfer, F(1, 43) = 0.1, p = .75 and F(1, 43) = 0.02, p = .88, respectively, or to PFL scores, χ²(1, N = 45) = 0.01, p = .91 and χ²(1, N = 45) = 1.29, p = .26, respectively, and self-reported control/debugging had no relation to any of the learning outcomes (learning activity: F(1, 43) = 1.52, p = .22; transfer: F(1, 43) = 0.07, p = .79; PFL: χ²(1, N = 45) = .69, p = .41).

3.2.4. JOKs and Learning Outcomes

The JOK calculations were entered into three separate models for each learning outcome, since they were highly correlated with each other.

Average ratings. The model predicting the first activity explained 10.4% of the variance as indexed by the adjusted R² statistic, F(1, 43) = 6.11, p = .02, in which there was an effect of average JOK ratings, β = 0.35, t = 2.47, p = .02, VIF = 1.00. The model predicting transfer explained 14.1% of the variance as indexed by the adjusted R² statistic, F(1, 43) = 7.07, p = .01, in which there was an effect of average JOK ratings, β = 0.38, t = 2.66, p = .01, VIF = 1.00. The logistic model predicting PFL scores explained 15.6% of the variance as indexed by the adjusted Nagelkerke R² statistic, χ²(1, N = 43) = 5.6, p < .05. There was an effect of average JOK ratings, B = 4.17, Exp(B) = 64.71, Wald's χ²(1, N = 44) = 4.21, p = .04. Thus, higher average JOK ratings were associated with an increase in the likelihood of solving the PFL problem.

Mean absolute accuracy. The model predicting the first activity explained 4.2% of the variance as indexed by the adjusted R² statistic, F(1, 42) = 1.85, p = .18. The model predicting transfer explained 50.8% of the variance as indexed by the adjusted R² statistic, F(1, 42) = 43.42, p < .001, in which there was an effect of mean absolute accuracy, β = −0.71, t = −6.59, p < .001, VIF = 1.00. The logistic model predicting PFL scores explained 8.9% of the variance as indexed by the adjusted Nagelkerke R² statistic, χ²(1, N = 43) = 3.03, p = .08, in which there was a marginal effect of mean absolute accuracy, B = −4.26, Exp(B) = 0.01, Wald's χ²(1, N = 44) = 2.74, p = .098. Thus, increasing mean absolute accuracy (i.e., worse accuracy) was associated with a reduction in the likelihood of solving the PFL problem.

Discrimination. The model predicting performance on the first activity explained 0.1% of the variance as indexed by the adjusted R² statistic, F(1, 42) = 0.05, p = .83. The model predicting transfer explained 88.1% of the variance as indexed by the adjusted R² statistic, F(1, 42) = 318.61, p < .001, in which there was an effect of discrimination, β = 0.94, t = 17.85, p < .001, VIF = 1.00. The logistic model predicting PFL scores explained 33.6% of the variance as indexed by the adjusted Nagelkerke R² statistic, χ²(1, N = 43) = 12.80, p < .001, in which there was an effect of discrimination, B = 0.60, Exp(B) = 1.82, Wald's χ²(1, N = 44) = 8.88, p = .003. Thus, increasing discrimination was associated with an increased likelihood of solving the PFL problem.

3.2.5. Competing Models

We evaluated competing models for the learning activity to determine whether constructs from the different measurements predicted unique variance in the learning outcomes. The models predicting transfer and PFL were not computed, as only the JOKs were predictive. For the model predicting the first learning activity, we regressed it on self-reported evaluation, monitoring statements, and the JOK average. The model explained 24.7% of the variance as indexed by the adjusted R² statistic, F(3, 40) = 4.37, p = .009. Within the model, there was a marginal effect of self-reported evaluation, β = 0.24, t = 1.71, p = .095, VIF = 1.03. Otherwise, there were no other significant effects (Table 8).

Multiple linear regression model predicting performance on the first activity with self-reported evaluation, monitoring statements, and JOK average.

Variable | β | t | p | VIF
Self-reported Evaluation | 0.24 | 1.71 | .095 | 1.03
Monitoring Statements | −0.24 | −1.60 | .12 | 1.22
JOK Average | 0.23 | 1.53 | .13 | 1.21
Constant |  | −0.08 | .93 | 

4. Discussion

From these results, we raise some important questions about the measures of metacognitive regulation, specifically those that assess the skills of monitoring, control/debugging, and evaluation. Not only do we find that the task-based questionnaire, verbal protocols, and JOK measures assessing these skills show little relation to one another, but they also predict different learning outcomes. Although these results suggest that the measures are capturing different processes, one aspect of the results suggests that they capture some overlapping variance: when the most predictive constructs from each measure were entered into the competing model for the learning activity, none remained a significant unique predictor. Below, we discuss these results further by first focusing on the relations among the measures and their relation to learning outcomes and then turning to their implications and areas for future research.

4.1. Relation of Measures

A central goal of this study was to examine the degree to which these different measures of metacognitive regulation relate to each other for a subset of metacognitive skills (monitoring, control/debugging, and evaluation). The results demonstrated that there is little association between the task-based metacognitive regulation questionnaire and the corresponding verbal protocols, suggesting that these measurements are inaccurate, measure different processes than intended, or some combination of the two. For example, self-reported monitoring was negatively related to the monitoring statements. This finding suggests that the more students monitored their understanding, the less likely they were to report doing so on a questionnaire, reflecting a disconnect between what students do versus what they think they do. This misalignment might be particularly true for students who are struggling with the content and are making more monitoring statements. It also implies that students are unaware of the amount they are struggling—or worse, they are aware of it, but when asked about it, they are biased to say the opposite, perhaps because they do not want to appear incompetent. This speculation is also related to the observational finding that when students monitored their understanding, they were more likely to share negative monitoring statements such as “I do not understand this.” Therefore, perhaps a more in-depth analysis of the monitoring statements might provide clarity on the relation between these two measures. Another possibility is a mismatch of monitoring valence across the two measures because the monitoring questionnaire items are almost all positively framed (e.g., “ During the activity , I felt that I was gradually gaining insight into the concepts and procedures of the problems ”), whereas the verbal protocols could capture either positive or negative framings. If what is being expressed in the verbal protocols is just monitoring what one does not understand, then we would expect to see a negative correlation such as the one we found. That is, self-reported monitoring is likely to be negatively aligned with negative monitoring statements but potentially not positive monitoring statements. A similar pattern might also be true of the JOK average ratings and the monitoring statements, as they were also negatively associated with each other, especially since the JOKs capture one’s confidence.

The frequency of evaluation statements was associated with self-reported evaluation as well as self-reported control/debugging, which suggests that the different self-reported constructs capture a similar aspect of metacognitive behavior. There was also a trend in which self-reported monitoring was also positively related to evaluation statements. This partial alignment between the questionnaire and verbal protocols might be due to students’ awareness in the moment in which some processes are more explicit (e.g., evaluation) than others (e.g., control/debugging). The lack of differentiation on the questionnaire could also be attributed to students not being very accurate at knowing what they did and did not do during a learning task. This interpretation is consistent with work by Veenman et al. ( 2003 ), in which students’ self-reports had little relation to their actual behaviors. Instead, students might be self-reporting the gist of their actions and not their specific behaviors which are captured in the verbal protocols. It is also possible that there could have been more overlap between the two measures if we coded the verbal protocols for the entire set of learning activities that the students were self-reporting about (not just the first learning activity). It is also unclear as to what students were referencing when answering the self-reports. They could have been referencing their behaviors on the most recent task (i.e., the standard deviation activity) in which we did not code for their metacognitive verbalizations.

There was also a trend in which the average JOK ratings were positively related to self-reported monitoring, suggesting that the average JOK ratings reflected some aspects of monitoring that were captured in the questionnaire. Otherwise, there were no associations between the JOKs and the monitoring and evaluation statements or questions. As mentioned earlier, JOKs capture the accuracy of one’s monitoring and evaluating, not just the act of performing the skill or recounting how many times they engaged in an instance. This result reveals that perhaps being able to identify when one engages in the skills is different from gauging whether one is understanding information or self-reporting on whether one was engaged in checking one’s understanding. Another interpretation is that the JOK accuracy might benefit from the additional learning experiences that took place after the verbal protocols (i.e., the consolidation video) and after the questionnaire (i.e., the embedded resource). These additional resources may provide a more comprehensive picture of the learner’s understanding and might have allowed them to resolve some of their misunderstandings. Prior research also shows that students can learn from a test ( Pan and Rickard 2018 ), providing them with additional information to inform their judgments.

The learning activity might have also played a role in the relationship across the different measures. As mentioned, the structured inquiry task allows for more opportunities to engage in metacognition. This opportunity might also allow for instances in which the metacognitive skills are difficult to distinguish, as they might co-occur or overlap with each other. Perhaps if the learning activity were designed to elicit a specific metacognitive behavior, different associations would emerge.

4.2. Robust Learning

In terms of learning, we see that students' self-reported use of monitoring and evaluation has a marginal relation to their performance on the first activity, which provides some external validity for those two components. However, there was no relation between the self-reports and the transfer or PFL performance. It could be that the monitoring and evaluation components of the questionnaire were able to predict performance specific to the task on which they were based but not the application of the knowledge beyond the task. This finding suggests that these questionnaire measures are limited in the types of learning outcomes they can predict. It is also important to note the differences between this work and past work; here, the questionnaire was task specific and involved a problem-solving activity, whereas other work has looked at more domain-general content and related the questionnaires to achievement. Therefore, it is difficult to know whether the task-specific framing of the questionnaire limits its predictive power, whether the change in assessment does, or both.

The low internal reliability of the transfer post-test could have also posed difficulties in examining these analyses, as students were responding very differently across the items. The lack of internal reliability might be attributed to the combination of different types of transfer items within the assessment. Future work could employ an assessment with multiple items per concept and per transfer type (e.g., near versus intermediate) to determine the extent to which the reliability of the test items impacted the results.

As predicted, there was an association between monitoring verbal protocols and performance on the first learning activity. The negative association, as well as the observation that the majority of the metacognitive statements reflected a lack of understanding, aligns well with Renkl's (1997) findings, in which negative monitoring was related to transfer outcomes. Although monitoring was not a positive predictor, we used a verbal protocol rubric that differs from those used in studies that have found positive learning outcomes, as we coded for the frequency of the metacognitive statements and not other aspects of a metacognitive event, such as their quality or valence (e.g., Van der Stel and Veenman 2010). For example, the quality of the metacognitive event can be meaningful and add precision to the outcomes it predicts (Binbasaran-Tuysuzoglu and Greene 2015). We did not see an association between the verbal protocols and performance on the transfer or PFL problems. One reason for the lack of relationship might be that the verbal protocols occurred during the encoding stage with different materials and were not identical to the retrieval- and application-based materials used at the post-test. Although there is no prior work evaluating PFL with verbal protocols, other work evaluating transfer suggests that we would have found some relation (e.g., Renkl 1997). It would be productive for research to explore how different verbal protocol rubrics relate to one another and whether the types of verbal protocols elicited from different tasks result in different relations to robust learning.

Students' average JOK ratings, absolute accuracy (knowing when they knew something), and discrimination (rating correct items with higher confidence than incorrect items) were strong predictors of performance on transfer and PFL. These relations could be due to the time-contingent and content-dependent aspects of JOKs, as they were tied to the test, which occurred after the learning, whereas the verbal protocols and questionnaires were tied to the learning materials and occurred during and after the learning materials, respectively. Regardless, these findings suggest that being able to monitor one's understanding is important for learning outcomes. Given that there was a strong negative relation between the average JOK ratings and the monitoring statements, and no relationship between the questionnaire and discrimination or absolute accuracy, this also supports the idea that these measures capture different aspects of metacognition. JOKs might be assessing one's accuracy at identifying their understanding (i.e., monitoring accuracy), whereas the average JOKs and the monitoring questionnaire might be assessing one's awareness of checking one's understanding. However, when comparing the average JOK ratings to the monitoring questionnaire on performance for the first learning activity, the average JOKs have a stronger relationship, implying that after a learning experience and consolidation lecture, students are more accurate at recognizing their understanding.

Although prior work has argued that JOKs are domain general ( Schraw 1996 ), we do not find discrimination or absolute accuracy to be predictive of the learning activity; however, the average JOK ratings were predictive. Students who had higher average JOKs performed better on the learning activity, but it did not matter how accurate their JOKs were. However, for transfer and PFL measures, their accuracy in their monitoring did matter. This finding suggests that students’ ability to monitor their understanding might transfer across different learning measures, but their accuracy is more dependent on the actual learning measure. This assumption is consistent with prior work in which students’ monitoring accuracy varied as a function of the item difficulty ( Pulford and Colman 1997 ).

When generating competing models across the metacognitive measures, we were only able to examine one, in which we predicted performance on the first activity with the evaluation questionnaire, monitoring statements, and JOK average. Although the overall model was significant, none of the individual predictors reached significance. This finding suggests that the measures captured shared variance in their relation to learning, even though they are distinct in that they were not associated with each other.

4.3. Theoretical and Educational Implications

One goal of this study was to explore the relation between the different skills and at what level of specificity to describe the constructs. We were able to establish a second-order factor with the task-based survey, in which the different skills loaded on the higher-order factor of metacognitive regulation yet remained distinguishable as unique factors. We were also able to distinguish between the different metacognitive skills in the verbal protocols, with adequate inter-rater reliability between the two coders and differential relations among the codes and between the codes and the learning and robust learning outcomes. The lack of correlation between the verbal protocol codes suggests that they capture different skills. This interpretation is further supported by the finding that the verbal protocol codes relate to different types of learning outcomes. This work highlights the need for future theory building to incorporate specific types of metacognitive skills and measures into a more cohesive metacognitive framework. Doing so would inform both future research examining how these processes operate and educators who want to understand the particular aspects of metacognition for which their students need more or less support.

This work also has practical implications for education. Although verbal protocols provide insight into what participants were thinking, they were least predictive of subsequent learning performance. However, the utility in using verbal protocols in classroom settings is still meaningful and relevant in certain situations. Of course, a teacher could not conduct verbal protocols for all their students, but it could be applied if they were concerned about how a particular student was engaging in the problem-solving process. In this case, a productive exercise might be to ask the student to verbalize their thoughts as they solve the problem and for the teacher to take notes on whether there are certain metacognitive prompts that may help guide the student during their problem-solving process.

The task-based questionnaire and the metacognitive judgment measures, which are more easily administered to several students at one time and thus more easily applied in educational contexts, had stronger relations to learning outcomes. Given that the JOKs in this study were positively related to multiple learning outcomes, they might have more utility in classroom settings. The use of JOKs would allow teachers to measure how well students are able to monitor their learning performance. To complement this approach, if teachers want to understand whether their students are engaging in different types of metacognitive skills as they learn the content in their courses, the task-based questionnaire could readily capture which types of metacognitive skills they are employing. These measures can thus be used in complementary ways, depending on the goals of the teacher.

4.4. Future Research

This work examines a subset of metacognitive measures, but there are many more in the literature that should be compared to evaluate how metacognitive regulation functions. Given the nature of the monitoring examined in this work, it would be particularly interesting to examine how different metacognitive judgments such as judgments of learning relate to the monitoring assessed by the verbal protocols and the questionnaire. Kelemen et al. ( 2000 ) provide evidence that different metacognitive judgments assess different processes, so we might expect to find different associations. For example, perhaps judgments of learning are more related to monitoring statements than JOKs. Judgments of learning have a closer temporal proximity to the monitoring statements and target the same material as the verbal protocols. In contrast, JOKs typically occur at a delay and assess post-test materials that are not identical to the material presented in the learning activity. In this work, we were not able to capture both judgments of learning and JOKs because the learning activity did not allow for multiple measures of judgments of learning. Therefore, if a learning activity allowed for more flexibility in capturing multiple judgments of learning, then we might see different relations emerge due to the timing of the measures.

Future work could also explore the predictive power of the task-based questionnaire relative to other validated self-report measures, such as a domain-based adaptation of the MAI or MSLQ. It would also be interesting to examine how these different measures relate to other external factors predicted by theories of self-regulated learning. Some of these factors include the degree to which the task-based questionnaire, JOKs, and verbal protocols relate to motivational constructs such as achievement goal orientations, as well as to more cognitive sense-making processes such as analogical comparison and self-explanation. Perhaps this type of research would provide more support for some self-regulated learning theories over others, given their hypothesized relationships. More pertinent to this line of work, this approach has the potential to help refine theories of metacognitive regulation and their associated measures by providing greater insight into the different processes captured by each measure and skill.

Acknowledgments

We thank Christian Schunn, Vincent Aleven, and Ming-Te Wang for their feedback on the study. We also thank research assistants Christina Hlutkowsky, Morgan Everett, Sarah Honsaker, and Christine Ebdlahad for their help in transcribing and/or coding the data.

Funding Statement

This research was supported by National Science Foundation (SBE 0836012) to the Pittsburgh Science of Learning Center ( http://www.learnlab.org ).

Author Contributions

Conceptualization, C.D.Z. and T.J.N.-M.; Formal analysis, C.D.Z.; Writing—original draft, C.D.Z.; Writing—review & editing, C.D.Z. and T.J.N.-M.; Project administration, C.D.Z. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of the University of Pittsburgh (PRO13070080, approved on 2/3/2014).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

  • Alexander Patricia A. Calibration: What is it and why it matters? An introduction to the special issue on calibrating calibration. Learning and Instruction. 2013;24:1–3. doi: 10.1016/j.learninstruc.2012.10.003.
  • Alfieri Louis, Brooks Patricia J., Aldrich Naomi J., Tenenbaum Harriet R. Does discovery-based instruction enhance learning? Journal of Educational Psychology. 2011;103:1–18. doi: 10.1037/a0021017.
  • Azevedo Roger, Witherspoon Amy M. Self-regulated use of hypermedia. In: Hacker Douglas J., Dunlosky John, Graesser Arthur C., editors. Handbook of Metacognition in Education. Erlbaum; Mahwah: 2009.
  • Azevedo Roger. Reflections on the field of metacognition: Issues, challenges, and opportunities. Metacognition and Learning. 2020;15:91–98. doi: 10.1007/s11409-020-09231-x.
  • Belenky Daniel M., Nokes-Malach Timothy J. Motivation and transfer: The role of mastery-approach goals in preparation for future learning. Journal of the Learning Sciences. 2012;21:399–432. doi: 10.1080/10508406.2011.651232.
  • Berardi-Coletta Bernadette, Buyer Linda S., Dominowski Roger L., Rellinger Elizabeth R. Metacognition and problem solving: A process-oriented approach. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1995;21:205–23. doi: 10.1037/0278-7393.21.1.205.
  • Binbasaran-Tuysuzoglu Banu, Greene Jeffrey Alan. An investigation of the role of contingent metacognitive behavior in self-regulated learning. Metacognition and Learning. 2015;10:77–98. doi: 10.1007/s11409-014-9126-y.
  • Bransford John D., Schwartz Daniel L. Rethinking transfer: A simple proposal with multiple implications. Review of Research in Education. 1999;24:61–100. doi: 10.3102/0091732x024001061.
  • Brown Ann L. Metacognition, executive control, self-regulation, and other more mysterious mechanisms. In: Weinert Franz Emanuel, Kluwe Rainer H., editors. Metacognition, Motivation, and Understanding. Lawrence Erlbaum Associates; Hillsdale: 1987. pp. 65–116.
  • Brown Ann L., Bransford John D., Ferrara Roberta A., Campione Joseph C. Learning, remembering, and understanding. In: Flavell John H., Markman Ellen M., editors. Handbook of Child Psychology: Vol. 3. Cognitive Development. 4th ed. Wiley; New York: 1983. pp. 77–166.
  • Chi Michelene T. H., Bassok Miriam, Lewis Matthew W., Reimann Peter, Glaser Robert. Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science. 1989;13:145–82. doi: 10.1207/s15516709cog1302_1.
  • Cromley Jennifer G., Azevedo Roger. Self-report of reading comprehension strategies: What are we measuring? Metacognition and Learning. 2006;1:229–47. doi: 10.1007/s11409-006-9002-5.
  • Dentakos Stella, Saoud Wafa, Ackerman Rakefet, Toplak Maggie E. Does domain matter? Monitoring accuracy across domains. Metacognition and Learning. 2019;14:413–36. doi: 10.1007/s11409-019-09198-4.
  • Dunlosky John, Metcalfe Janet. Metacognition. Sage Publications, Inc.; Thousand Oaks: 2009.
  • Ericsson K. Anders, Simon Herbert A. Verbal reports as data. Psychological Review. 1980;87:215–51. doi: 10.1037/0033-295X.87.3.215.
  • Flavell John H. Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist. 1979;34:906–11. doi: 10.1037/0003-066X.34.10.906.
  • Fortunato Irene, Hecht Deborah, Tittle Carol Kehr, Alvarez Laura. Metacognition and problem solving. Arithmetic Teacher. 1991;38:38–40. doi: 10.5951/AT.39.4.0038.
  • Gadgil Soniya, Nokes-Malach Timothy J., Chi Michelene T. H. Effectiveness of holistic mental model confrontation in driving conceptual change. Learning and Instruction. 2012; 22 :47–61. doi: 10.1016/j.learninstruc.2011.06.002. [ CrossRef ] [ Google Scholar ]
  • Gadgil Soniya. Doctoral dissertation. University of Pittsburgh; Pittsburgh, PA, USA: 2014. Understanding the Interaction between Students’ Theories of Intelligence and Learning Activities. [ Google Scholar ]
  • Greene Jeffrey Alan, Azevedo Roger. A macro-level analysis of SRL processes and their relations to the acquisition of a sophisticated mental model of a complex system. Contemporary Educational Psychology. 2009; 34 :18–29. doi: 10.1016/j.cedpsych.2008.05.006. [ CrossRef ] [ Google Scholar ]
  • Hacker Douglas J., Dunlosky John, Graesser Arthur C. Handbook of Metacognition in Education. Routledge; New York: 2009. [ Google Scholar ]
  • Howard Bruce C., McGee Steven, Shia Regina, Hong Namsoo S. Metacognitive self-regulation and problem-solving: Expanding the theory base through factor analysis; Paper presented at the Annual Meeting of the American Educational Research Association; New Orleans, LA, USA. April 24–28; 2000. [ Google Scholar ]
  • Howard-Rose Dawn, Winne Philip H. Measuring component and sets of cognitive processes in self-regulated learning. Journal of Educational Psychology. 1993; 85 :591–604. doi: 10.1037/0022-0663.85.4.591. [ CrossRef ] [ Google Scholar ]
  • Howie Pauline, Roebers Claudia M. Developmental progression in the confidence-accuracy relationship in event recall: Insights provided by a calibration perspective. Applied Cognitive Psychology. 2007; 21 :871–93. doi: 10.1002/acp.1302. [ CrossRef ] [ Google Scholar ]
  • Hu Li-tze, Bentler Peter M. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal. 1999; 6 :1–55. doi: 10.1080/10705519909540118. [ CrossRef ] [ Google Scholar ]
  • Hunter-Blanks Patricia, Ghatala Elizabeth S., Pressley Michael, Levin Joel R. Comparison of monitoring during study and during testing on a sentence-learning task. Journal of Educational Psychology. 1988; 80 :279–83. doi: 10.1037/0022-0663.80.3.279. [ CrossRef ] [ Google Scholar ]
  • Jacobs Janis E., Paris Scott G. Children’s metacognition about reading: Issues in definition, measurement, and instruction. Educational Psychologist. 1987; 22 :255–78. doi: 10.1080/00461520.1987.9653052. [ CrossRef ] [ Google Scholar ]
  • Kapur Manu, Bielaczyc Katerine. Designing for productive failure. Journal of the Learning Sciences. 2012; 21 :45–83. doi: 10.1080/10508406.2011.591717. [ CrossRef ] [ Google Scholar ]
  • Kapur Manu. Productive failure. Cognition and Instruction. 2008; 26 :379–424. doi: 10.1080/07370000802212669. [ CrossRef ] [ Google Scholar ]
  • Kapur Manu. Productive failure in learning the concept of variance. Instructional Science. 2012; 40 :651–72. doi: 10.1007/s11251-012-9209-6. [ CrossRef ] [ Google Scholar ]
  • Kelemen William L., Frost Peter J., Weaver Charles A. Individual differences in metacognition: Evidence against a general metacognitive ability. Memory & Cognition. 2000; 28 :92–107. doi: 10.3758/BF03211579. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kistner Saskia, Rakoczy Katrin, Otto Barbara, Ewijk Charlotte Dignath-van, Büttner Gerhard, Klieme Eckhard. Promotion of self-regulated learning in classrooms: Investigating frequency, quality, and consequences for student performance. Metacognition and Learning. 2010; 5 :157–71. doi: 10.1007/s11409-010-9055-3. [ CrossRef ] [ Google Scholar ]
  • Koedinger Kenneth R., Corbett Albert T., Perfetti Charles. The knowledge-learning-instruction framework: Bridging the science-practice chasm to enhance robust student learning. Cognitive Science. 2012; 36 :757–98. doi: 10.1111/j.1551-6709.2012.01245.x. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kraha Amanda, Turner Heather, Nimon Kim, Zientek Linda Reichwein, Henson Robin K. Tools to support interpreting multiple regression in the face of multicollinearity. Frontiers in Psychology. 2012; 3 :44. doi: 10.3389/fpsyg.2012.00044. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Lin Xiaodong, Lehman James D. Supporting learning of variable control in a computer-based biology environment: Effects of prompting college students to reflect on their own thinking. Journal of Research in Science Teaching. 1999; 36 :837–58. doi: 10.1002/(SICI)1098-2736(199909)36:7<837::AID-TEA6>3.0.CO;2-U. [ CrossRef ] [ Google Scholar ]
  • Mazancieux Audrey, Fleming Stephen M., Souchay Céline, Moulin Chris J. A. Is there a G factor for metacognition? Correlations in retrospective metacognitive sensitivity across tasks. Journal of Experimental Psychology: General. 2020; 149 :1788–99. doi: 10.1037/xge0000746. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • McDonough Ian M., Enam Tasnuva, Kraemer Kyle R., Eakin Deborah K., Kim Minjung. Is there more to metamemory? An argument for two specialized monitoring abilities. Psychonomic Bulletin & Review. 2021; 28 :1657–67. doi: 10.3758/s13423-021-01930-z. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Meijer Joost, Veenman Marcel V. J., van Hout-Wolters Bernadette H. A. M. Metacognitive activities in text studying and problem solving: Development of a taxonomy. Educational Research and Evaluation. 2006; 12 :209–37. doi: 10.1080/13803610500479991. [ CrossRef ] [ Google Scholar ]
  • Meijer Joost, Veenman Marcel V. J., van Hout-Wolters Bernadette H. A. M. Multi-domain, multi-method measures of metacognitive activity: What is all the fuss about metacognition … indeed? Research Papers in Education. 2012; 27 :597–627. doi: 10.1080/02671522.2010.550011. [ CrossRef ] [ Google Scholar ]
  • Meijer Joost, Sleegers Peter, Elshout-Mohr Marianne, van Daalen-Kapteijns Maartje, Meeus Wil, Tempelaar Dirk. The development of a questionnaire on metacognition for students in higher education. Educational Research. 2013; 55 :31–52. doi: 10.1080/00131881.2013.767024. [ CrossRef ] [ Google Scholar ]
  • Messick Samuel. Validity. In: Linn Robert L., editor. Educational Measurement. 3rd ed. Macmillan; New York: 1989. pp. 13–103. [ Google Scholar ]
  • Muis Krista R., Winne Philip H., Jamieson-Noel Dianne. Using a multitrait-multimethod analysis to examine conceptual similarities of three self-regulated learning inventories. The British Journal of Educational Psychology. 2007; 77 :177–95. doi: 10.1348/000709905X90876. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Nelson Thomas O. Gamma is a measure of the accuracy of predicting performance on one item relative to another item, not the absolute performance on an individual item Comments on Schraw. Applied Cognitive Psychology. 1996; 10 :257–60. doi: 10.1002/(SICI)1099-0720(199606)10:3<257::AID-ACP400>3.0.CO;2-9. [ CrossRef ] [ Google Scholar ]
  • Nelson Thomas O., Narens L. Metamemory: A theoretical framework and new findings. Psychology of Learning and Motivation. 1990; 26 :125–73. doi: 10.1016/S0079-7421(08)60053-5. [ CrossRef ] [ Google Scholar ]
  • Nietfeld John L., Cao Li, Osborne Jason W. Metacognitive monitoring accuracy and student performance in the postsecondary classroom. The Journal of Experimental Education. 2005; 74 :7–28. [ Google Scholar ]
  • Nietfeld John L., Cao Li, Osborne Jason W. The effect of distributed monitoring exercises and feedback on performance, monitoring accuracy, and self-efficacy. Metacognition and Learning. 2006; 1 :159–79. doi: 10.1007/s10409-006-9595-6. [ CrossRef ] [ Google Scholar ]
  • Nokes-Malach Timothy J., Van Lehn Kurt, Belenky Daniel M., Lichtenstein Max, Cox Gregory. Coordinating principles and examples through analogy and self-explanation. European Journal of Education of Psychology. 2013; 28 :1237–63. doi: 10.1007/s10212-012-0164-z. [ CrossRef ] [ Google Scholar ]
  • O’Neil Harold F., Jr., Abedi Jamal. Reliability and validity of a state metacognitive inventory: Potential for alternative assessment. Journal of Educational Research. 1996; 89 :234–45. doi: 10.1080/00220671.1996.9941208. [ CrossRef ] [ Google Scholar ]
  • Pan Steven C., Rickard Timothy C. Transfer of test-enhanced learning: Meta-analytic review and synthesis. Psychological Bulletin. 2018; 144 :710–56. doi: 10.1037/bul0000151. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Pintrich Paul R., De Groot Elisabeth V. Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology. 1990; 82 :33–40. doi: 10.1037/0022-0663.82.1.33. [ CrossRef ] [ Google Scholar ]
  • Pintrich Paul R., Wolters Christopher A., Baxter Gail P. Assessing metacognition and self-regulated learning. In: Schraw Gregory, Impara James C., editors. Issues in the Measurement of Metacognition. Buros Institute of Mental Measurements; Lincoln: 2000. [ Google Scholar ]
  • Pintrich Paul R., Smith David A. F., Garcia Teresa, McKeachie Wilbert J. A Manual for the Use of the Motivated Strategies for Learning Questionnaire (MSLQ) The University of Michigan; Ann Arbor: 1991. [ Google Scholar ]
  • Pintrich Paul R., Smith David A. F., Garcia Teresa, McKeachie Wilbert J. Predictive validity and reliability of the Motivated Strategies for Learning Questionnaire (MSLQ) Educational and Psychological Measurement. 1993; 53 :801–13. doi: 10.1177/0013164493053003024. [ CrossRef ] [ Google Scholar ]
  • Pressley Michael, Afflerbach Peter. Verbal Protocols of Reading: The Nature of Constructively Responsive Reading. Routledge; New York: 1995. [ Google Scholar ]
  • Pulford Briony D., Colman Andrew M. Overconfidence: Feedback and item difficulty effects. Personality and Individual Differences. 1997; 23 :125–33. doi: 10.1016/S0191-8869(97)00028-7. [ CrossRef ] [ Google Scholar ]
  • Renkl Alexander. Learning from worked-out examples: A study on individual differences. Cognitive Science. 1997; 21 :1–29. doi: 10.1207/s15516709cog2101_1. [ CrossRef ] [ Google Scholar ]
  • Richey J. Elizabeth, Nokes-Malach Timothy J. Comparing four instructional techniques for promoting robust learning. Educational Psychology Review. 2015; 27 :181–218. doi: 10.1007/s10648-014-9268-0. [ CrossRef ] [ Google Scholar ]
  • Roll Ido, Aleven Vincent, Koedinger Kenneth R. Helping students know “further”—Increasing the flexibility of students ’ knowledge using symbolic invention tasks. In: Taatgen Niels A., Van Rijn Hedderik., editors. Proceedings of the 33rd Annual Conference of the Cognitive Science Society. Cognitive Science Society; Austin: 2009. pp. 1169–74. [ Google Scholar ]
  • Schellings Gonny L. M., van Hout-Wolters Bernadette H. A. M., Veenman Marcel V. J., Meijer Joost. Assessing metacognitive activities: The in-depth comparison of a task-specific questionnaire with think-aloud protocols. European Journal of Psychology of Education. 2013; 28 :963–90. doi: 10.1007/s10212-012-0149-y. [ CrossRef ] [ Google Scholar ]
  • Schellings Gonny, Van Hout-Wolters Bernadette. Measuring strategy use with self-report instruments: Theoretical and empirical considerations. Metacognition and Learning. 2011; 6 :83–90. doi: 10.1007/s11409-011-9081-9. [ CrossRef ] [ Google Scholar ]
  • Schoenfeld Alan H. Learning to think mathematically: Problem solving, metacognition, and sense making in mathematics. In: Grouws Douglas., editor. Handbook for Research on Mathematics Teaching and Learning. Macmillan; New York: 1992. pp. 334–70. [ Google Scholar ]
  • Schraw Gregory, Moshman David. Metacognitive theories. Educational Psychology Review. 1995; 7 :351–71. doi: 10.1007/BF02212307. [ CrossRef ] [ Google Scholar ]
  • Schraw Gregory, Dennison Rayne Sperling. Assessing metacognitive awareness. Contemporary Educational Psychology. 1994; 19 :460–75. doi: 10.1006/ceps.1994.1033. [ CrossRef ] [ Google Scholar ]
  • Schraw Gregory, Kuch Fred, Gutierrez Antonio P. Measure for measure: Calibrating ten commonly used calibration scores. Learning and Instruction. 2013; 24 :48–57. doi: 10.1016/j.learninstruc.2012.08.007. [ CrossRef ] [ Google Scholar ]
  • Schraw Gregory, Dunkle Michael E., Bendixen Lisa D., Roedel Teresa DeBacker. Does a general monitoring skill exist? Journal of Educational Psychology. 1995; 87 :433–444. doi: 10.1037/0022-0663.87.3.433. [ CrossRef ] [ Google Scholar ]
  • Schraw Gregory. Measures of feeling-of-knowing accuracy: A new look at an old problem. Applied Cognitive Psychology. 1995; 9 :321–32. doi: 10.1002/acp.2350090405. [ CrossRef ] [ Google Scholar ]
  • Schraw Gregory. The effect of generalized metacognitive knowledge on test performance and confidence judgments. The Journal of Experimental Education. 1996; 65 :135–46. doi: 10.1080/00220973.1997.9943788. [ CrossRef ] [ Google Scholar ]
  • Schraw Gregory. A conceptual analysis of five measures of metacognitive monitoring. Metacognition and Learning. 2009; 4 :33–45. doi: 10.1007/s11409-008-9031-3. [ CrossRef ] [ Google Scholar ]
  • Schwartz Daniel L., Bransford John D. A time for telling. Cognition and Instruction. 1998; 16 :475–522. doi: 10.1207/s1532690xci1604_4. [ CrossRef ] [ Google Scholar ]
  • Schwartz Daniel L., Martin Taylor. Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction. 2004; 22 :129–84. doi: 10.1207/s1532690xci2202_1. [ CrossRef ] [ Google Scholar ]
  • Schwartz Daniel L., Bransford John D., Sears David. Efficiency and innovation in transfer. In: Mestre Jose., editor. Transfer of Learning from a Modern Multidisciplinary Perspective. Information Age Publishers; Greenwich: 2005. pp. 1–51. [ Google Scholar ]
  • Sperling Rayne A., Howard Bruce C., Miller Lee Ann, Murphy Cheryl. Measures of children’s knowledge and regulation of cognition. Contemporary Educational Psychology. 2002; 27 :51–79. doi: 10.1006/ceps.2001.1091. [ CrossRef ] [ Google Scholar ]
  • Sperling Rayne A., Howard Bruce C., Staley Richard, DuBois Nelson. Metacognition and self-regulated learning constructs. Educational Research and Evaluation. 2004; 10 :117–39. doi: 10.1076/edre.10.2.117.27905. [ CrossRef ] [ Google Scholar ]
  • Van der Stel Manita, Veenman Marcel V. J. Development of metacognitive skillfulness: A longitudinal study. Learning and Individual Differences. 2010; 20 :220–24. doi: 10.1016/j.lindif.2009.11.005. [ CrossRef ] [ Google Scholar ]
  • Van der Stel Manita, Veenman Marcel V. J. Metacognitive skills and intellectual ability of young adolescents: A longitudinal study from a developmental perspective. European Journal of Psychology of Education. 2014; 29 :117–37. doi: 10.1007/s10212-013-0190-5. [ CrossRef ] [ Google Scholar ]
  • Van Hout-Wolters B. H. A. M. Leerstrategieën meten. Soorten meetmethoden en hun bruikbaarheid in onderwijs en onderzoek. [Measuring learning strategies. Different kinds of assessment methods and their usefulness in education and research] Pedagogische Studiën. 2009; 86 :103–10. [ Google Scholar ]
  • Veenman Marcel V. J. The assessment of metacognitive skills: What can be learned from multi- method designs? In: Artelt Cordula, Moschner Barbara., editors. Lernstrategien und Metakognition: Implikationen für Forschung und Praxis. Waxmann; Berlin: 2005. pp. 75–97. [ Google Scholar ]
  • Veenman Marcel V. J., Van Hout-Wolters Bernadette H. A. M., Afflerbach Peter. Metacognition and learning: Conceptual and methodological considerations. Metacognition and Learning. 2006; 1 :3–14. doi: 10.1007/s11409-006-6893-0. [ CrossRef ] [ Google Scholar ]
  • Veenman Marcel V. J., Prins Frans J., Verheij Joke. Learning styles: Self-reports versus thinking-aloud measures. British Journal of Educational Psychology. 2003; 73 :357–72. doi: 10.1348/000709903322275885. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Veenman Marcel V. J., Elshout Jan J., Meijer Joost. The generality vs. domain-specificity of metacognitive skills in novice learning across domains. Learning and Instruction. 1997; 7 :187–209. doi: 10.1016/S0959-4752(96)00025-4. [ CrossRef ] [ Google Scholar ]
  • Veenman Marcel V. J., Wilhelm Pascal, Beishuizen Jos J. The relation between intellectual and metacognitive skills from a developmental perspective. Learning and Instruction. 2004; 14 :89–109. doi: 10.1016/j.learninstruc.2003.10.004. [ CrossRef ] [ Google Scholar ]
  • Winne Philip H. A cognitive and metacognitive analysis of self-regulated learning. In: Zimmerman Barry J., Schunk Dale H., editors. Handbook of Self-Regulation of Learning and Performance. Routeledge; New York: 2011. pp. 15–32. [ Google Scholar ]
  • Winne Philip H., Hadwin Allyson F. Studying as self-regulated learning. In: Hacker Douglas J., Dunlosky John, Graesser Arthur C., editors. Metacognition in Educational Theory and Practice. Erlbaum; Hillsdale: 1998. pp. 277–304. [ Google Scholar ]
  • Winne Philip H., Jamieson-Noel Dianne. Exploring students’ calibration of self reports about study tactics and achievement. Contemporary Educational Psychology. 2002; 27 :551–72. doi: 10.1016/S0361-476X(02)00006-1. [ CrossRef ] [ Google Scholar ]
  • Winne Philip H., Jamieson-Noel Dianne, Muis Krista. Methodological issues and advances in researching tactics, strategies, and self-regulated learning. In: Pintrich Paul R., Maehr Martin L., editors. Advances in Motivation and Achievement: New Directions in Measures and Methods. Vol. 12. JAI Press; Greenwich: 2002. pp. 121–55. [ Google Scholar ]
  • Wolters Christopher A. Advancing achievement goal theory: Using goal structures and goal orientations to predict students’ motivation, cognition, and achievement. Journal of Educational Psychology. 2004; 96 :236–50. doi: 10.1037/0022-0663.96.2.236. [ CrossRef ] [ Google Scholar ]
  • Zepeda Cristina D., Richey J. Elizabeth, Ronevich Paul, Nokes-Malach Timothy J. Direct Instruction of Metacognition Benefits Adolescent Science Learning, Transfer, and Motivation: An In Vivo Study. Journal of Educational Psychology. 2015; 107 :954–70. doi: 10.1037/edu0000022. [ CrossRef ] [ Google Scholar ]
  • Zimmerman Barry J. Theories of self-regulated learning and academic achievement: An overview and analysis. In: Zimmerman Barry J., Schunk Dale H., editors. Self-Regulated Learning and Academic Achievement: Theoretical Perspectives. Erlbaum; Mahwah: 2001. pp. 1–37. [ Google Scholar ]
Metacognitive strategies improve learning

Metacognition refers to thinking about one's thinking and is a skill students can use as part of a broader collection of skills known as self-regulated learning. Metacognitive strategies for learning include planning and goal setting, monitoring, and reflecting on learning. Students can be instructed in the use of metacognitive strategies. Classroom interventions designed to improve students’ metacognitive approaches are associated with improved learning (Cogliano, 2021; Theobald, 2021).

Strategies to encourage students to use metacognitive techniques

  • Prompt students to develop study plans and to evaluate their approaches to planning for, monitoring, and evaluating their learning. Early in the term, advise and support students in making a study plan. After receiving feedback on the first and subsequent assessments, ask students to reflect on their performance and determine which study strategies worked and which did not. Encourage them to revise their study plans if needed. One way to support this is to ask students to identify their personal learning environment, an activity in which they identify the various resources and support available to them.
  • Offer practice tests. Explain to students the benefits of practice testing for improving retention and performance on exams. Create practice tests with an answer key to help students prepare for exams. Use practice questions for in-class formative feedback throughout the term. Consider creating a bank of practice questions from previous exams to share with students (Stanton, 2021).
  • Call attention to strategies students can adopt to space their practice. This can include explaining the benefits of spaced practice and encouraging students to map out weekly study sessions for your course on their calendar. These study sessions should include the most recent material and revisit older material, perhaps in the form of practice tests (Stanton, 2021).
  • Model your metacognitive processes with students. Show students the thinking process behind your approach to solving problems (Ambrose, 2010). This can take the form of a think-aloud where you talk through the steps you would take to plan, monitor, and reflect on your problem-solving approach.
Metacognition and Self-Efficacy in Action: How First-Year Students Monitor and Use Self-Coaching to Move Past Metacognitive Discomfort During Problem Solving

  • Stephanie M. Halmo
  • Kira A. Yamini
  • Julie Dangremond Stanton

Department of Cellular Biology, University of Georgia, Athens, GA 30602


*Address correspondence to: Julie Dangremond Stanton ([email protected]).

Stronger metacognitive regulation skills and higher self-efficacy are linked to increased academic achievement. Metacognition and self-efficacy have primarily been studied using retrospective methods, but these methods limit access to students’ in-the-moment metacognition and self-efficacy. We investigated first-year life science students’ metacognition and self-efficacy while they solved challenging problems, and asked: 1) What metacognitive regulation skills are evident when first-year life science students solve problems on their own? and 2) What aspects of learning self-efficacy do first-year life science students reveal when they solve problems on their own? Think-aloud interviews were conducted with 52 first-year life science students across three institutions and analyzed using content analysis. Our results reveal that while first-year life science students plan, monitor, and evaluate when solving challenging problems, they monitor in a myriad of ways. One aspect of self-efficacy, which we call self-coaching, helped students move past the discomfort of monitoring a lack of understanding so they could take action. These verbalizations suggest ways we can encourage students to couple their metacognitive skills and self-efficacy to persist when faced with challenging problems. Based on our findings, we offer recommendations for helping first-year life science students develop and strengthen their metacognition to achieve improved problem-solving performance.

INTRODUCTION

Have you ever asked a student to solve a problem, seen their solution, and then wondered what they were thinking while they were problem solving? As college instructors, we often ask students in our classes to solve problems. Sometimes we gain access to our students’ thought process or cognition through strategic question design and direct prompting. Far less often do we gain access to how our students regulate and control their own thinking (metacognition) or their beliefs about their capability to solve the problem (self-efficacy). Retrospective methods can be, and have been, used to access this information from students, but students often cannot remember what they were thinking a week or two later. We lack deep insight into students’ in-the-moment metacognition and self-efficacy because it is challenging to obtain their in-the-moment thoughts.

Educators and students alike are interested in metacognition because of its malleable nature and demonstrated potential to improve academic performance. Not having access to students’ metacognition in-the-moment presents a barrier towards developing effective metacognitive interventions to improve learning. Thus, there is a need to characterize how life science undergraduates use their metacognition during individual problem-solving and to offer evidence-based suggestions to instructors for supporting students’ metacognition. In particular, understanding the metacognitive skills first-year life science students bring to their introductory courses will position us to better support their learning earlier on in their college careers and set them up for future academic success.

Metacognition and Problem Solving

Metacognition, or one’s awareness and control of their own thinking for the purpose of learning ( Cross and Paris, 1988 ), is linked to improved problem-solving performance and academic achievement. In one meta-analysis of studies that spanned developmental stages from elementary school to adulthood, metacognition predicted academic performance when controlling for intelligence ( Ohtani and Hisasaka, 2018 ). In another meta-analysis specific to mathematics, researchers found a significant positive correlation between metacognition and math performance in adolescents, indicating that individuals who demonstrated stronger metacognition also performed better on math tasks ( Muncer et al. , 2022 ). The strong connection between metacognition and problem-solving performance and academic achievement represents a potential leverage point for enhancing student learning and success in the life sciences. If we explicitly teach life science undergraduates how to develop and use their metacognition, we can expect to increase the effectiveness of their learning and subsequent academic success. However, in order to provide appropriate guidance, we must first know how students in the target population are employing their metacognition.

Based on one theoretical framework of metacognition, metacognition comprises two components: metacognitive knowledge and metacognitive regulation ( Schraw and Moshman, 1995 ). Metacognitive knowledge includes one’s awareness of learning strategies and of themselves as a learner. Metacognitive regulation encompasses how students act on their metacognitive knowledge, or the actions they take to learn ( Sandi-Urena et al. , 2011 ). Metacognitive regulation is broken up into three skills: 1) planning how to approach a learning task or goal, 2) monitoring progress towards achieving that learning task or goal, and 3) evaluating achievement of said learning task or goal ( Stanton et al. , 2021 ). These regulation skills can be thought of temporally: planning occurs before learning starts, monitoring occurs during learning, and evaluating takes place after learning has occurred. As biology education researchers, we are particularly interested in life science undergraduates’ metacognitive regulation skills, or the actions they take to learn, because regulation skills have been shown to have a more dramatic impact on learning than awareness alone ( Dye and Stanton, 2017 ).

Importantly, metacognition is context-dependent, meaning metacognition use may vary depending on factors such as the subject matter or learning task ( Kelemen et al. , 2000 ; Kuhn, 2000 ; Veenman and Spaans, 2005 ). For example, the metacognitive regulation skills a student may use to evaluate their learning after reading a text in their literature course may differ from those skills the same student uses to evaluate their learning on a genetics exam. This is why it is imperative to study metacognition in a particular context, like problem solving in the life sciences.

Metacognition helps a problem solver identify and work with the givens or initial problem state, reach the goal or final problem state, and overcome any obstacles presented in the problem ( Davidson and Sternberg, 1998 ). Specifically, metacognitive regulation skills help a solver select strategies, identify obstacles, and revise their strategies to accomplish a goal. Metacognition and problem solving are often thought of as domain-general skills because of their broad applicability across different disciplines. However, metacognitive skills are first developed in a domain-specific way and then those metacognitive skills can become more generalized over time as they are further developed and honed ( Kuhn, 2000 ; Veenman and Spaans, 2005 ). This is in alignment with research from the problem-solving literature that suggests stronger problem-solving skills are a result of deep knowledge within a domain ( Pressley et al. , 1987 ; Frey et al. , 2022 ). For example, experts are known to classify problems based on deep conceptual features because of their well-developed knowledge base whereas novices tend to classify problems based on superficial features ( Chi et al. , 1981 ). Research on problem solving in chemistry indicates that metacognition and self-efficacy are two key components of successful problem solving ( Rickey and Stacy, 2000 ; Taasoobshirazi and Glynn, 2009 ). College students who achieve greater problem-solving success are those who: 1) use their metacognition to conceptualize problems well, select appropriate strategies, and continually monitor and check their work, and 2) tend to have higher self-efficacy ( Taasoobshirazi and Glynn, 2009 ; Cartrette and Bodner, 2010 ).

Metacognition and Self-efficacy

Self-efficacy, or one’s belief in their capability to carry out a task ( Bandura, 1977 , 1997 ), is another construct that impacts problem-solving performance and academic achievement. Research on self-efficacy has revealed its predictive power with regard to performance, academic achievement, and selection of a college major ( Pajares, 1996 ). The large body of research on self-efficacy suggests that students who believe they are capable academically engage in more metacognitive strategies and persist to attain academic achievement compared with those who do not (e.g., Pintrich and De Groot, 1990 ; Pajares, 2002 ; Huang et al. , 2022 ). In STEM in particular, studies tend to reveal gender differences in self-efficacy, with undergraduate men indicating higher self-efficacy in STEM disciplines compared with women ( Stewart et al. , 2020 ). In one study of first-year biology students, women were significantly less confident than men, and students’ biology self-efficacy increased over the course of a single semester when measured at the beginning and end of the course ( Ainscough et al. , 2016 ). However, self-efficacy is known to be a dynamic construct, meaning one’s perception of their capability to carry out a task can vary widely across different task types and over time as struggles are encountered and expertise builds for certain tasks ( Yeo and Neal, 2006 ).

Both metacognition and self-efficacy are strong predictors of academic achievement and performance. For example, one study found that students with stronger metacognitive regulation skills and greater self-efficacy beliefs (as measured by self-reported survey responses) perform better and attain greater academic success (as measured by GPA; Coutinho and Neuman, 2008 ). Additionally, self-efficacy beliefs were strong predictors of metacognition, suggesting students with higher self-efficacy used more metacognition. Together, the results from this quantitative study, which used structural equation modeling of self-reported survey responses, suggest that metacognition may act as a mediator in the relationship between self-efficacy and academic achievement ( Coutinho and Neuman, 2008 ).
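To make the mediation structure described by Coutinho and Neuman (2008) concrete, the sketch below expresses it as a simple path model in the R package lavaan. The variable names (self_efficacy, metacognition, gpa) are hypothetical placeholders, and the code is a schematic of the reported relationship, not a reproduction of their analysis.

```r
# Schematic mediation model: metacognition mediates the relationship between
# self-efficacy and academic achievement. Variable names are hypothetical.
library(lavaan)

mediation_model <- '
  metacognition ~ a * self_efficacy                      # path a
  gpa           ~ b * metacognition + c * self_efficacy  # path b and direct effect c
  indirect := a * b                                      # mediated (indirect) effect
  total    := c + a * b                                  # total effect
'
fit <- sem(mediation_model, data = dat)
summary(fit, standardized = TRUE)
```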

Most of the research on self-efficacy has been quantitative in nature. In one qualitative study of self-efficacy, interviews were conducted with middle school students to explore the sources of their mathematics self-efficacy beliefs ( Usher, 2009 ). In this study, evidence of self-modeling was found. Self-modeling or visualizing one’s own self-coping during difficult tasks can strengthen one’s belief in their capabilities and can be an even stronger source of self-efficacy than observing a less similar peer succeed ( Bandura, 1997 ). Usher (2009) described self-modeling as students’ internal dialogues or what they say to themselves while doing mathematics. For example, students would tell themselves they can do it and that they would do okay as a way of keeping their confidence up or coaching themselves while doing mathematics. Other researchers have called this efficacy self-talk, or “thoughts or subvocal statements aimed at influencing their efficacy for an ongoing academic task” ( Wolters, 2003 , p. 199). For example, one study found that college students reported saying things to themselves like “You can do it, just keep working” in response to an open-ended questionnaire about how they would maintain effort on a given task ( Wolters, 1998 ; Wolters, 2003 ). As qualitative researchers, we were curious to uncover how both metacognition (planning, monitoring, and evaluating) and self-efficacy (such as self-coaching) might emerge out of more qualitative, in-the-moment data streams.

Methods for Studying Metacognition

Researchers use two main methods to study metacognition: retrospective and in-the-moment methods. Retrospective methods ask learners to reflect on learning they’ve done in the past. In contrast, in-the-moment methods ask learners to reflect on learning they’re currently undertaking ( Veenman et al. , 2006 ). Retrospective methods include self-report data from surveys like the Metacognitive Awareness Inventory ( Schraw and Dennison, 1994 ) and exam “wrappers” or self-evaluations ( Hodges et al. , 2020 ). In-the-moment methods include think-aloud interviews, which ask students to verbalize all of their thoughts while they solve problems ( Bannert and Mengelkamp, 2008 ; Ku and Ho, 2010 ; Blackford et al. , 2023 ), and online computer chat log-files collected as groups of students work together to solve problems ( Hurme et al. , 2006 ; Zheng et al. , 2019 ).

Most metacognition research on life science undergraduates, including our own work, has utilized retrospective methods ( Stanton et al. , 2015 , 2019; Dye and Stanton, 2017 ). Important information about first-year life science students’ metacognition has been gleaned using retrospective methods, particularly in regard to planning and evaluating. For example, first-year life science students tend to use strategies that worked for them in high school, even if they do not work for them in college, suggesting first-year life science students may have trouble evaluating their study plans ( Stanton et al. , 2015 ). Additionally, first-year life science students abandon strategies they deem ineffective rather than modifying them for improvement ( Stanton et al. , 2019 ). Lastly, first-year life science students are willing to change their approach to learning, but they may lack knowledge about which approaches are effective or evidence-based ( Tomanek and Montplaisir, 2004 ; Stanton et al. , 2015 ).

In both of the meta-analyses described at the start of this Introduction , the effect sizes were larger for studies that used in-the-moment methods ( Ohtani and Hisasaka, 2018 ; Muncer et al. , 2022 ). This means the predictive power of metacognition for academic performance was more profound in studies that used in-the-moment methods to measure metacognition compared with studies that used retrospective methods. One implication of this finding is that studies using retrospective methods might be failing to capture metacognition’s profound effects on learning and performance. Less research has been done using in-the-moment methods to study metacognition in life science undergraduates, likely because of the time-intensive nature of collecting and analyzing data using these methods. One study that used think-aloud methods to investigate biochemistry students’ metacognition when solving open-ended buffer problems found that monitoring was the most commonly used metacognitive regulation skill ( Heidbrink and Weinrich, 2021 ). Another study that used think-aloud methods to explore Dutch third-year medical school students’ metacognition when solving physiology problems about blood flow also revealed a focus on monitoring, with students also planning and evaluating but to a lesser extent ( Versteeg et al. , 2021 ). We hypothesize that in-the-moment methods like think-aloud interviews are likely to reveal greater insight into students’ monitoring skills because this metacognitive regulation skill occurs during learning tasks. Further investigation into the nature of the metacognition first-year life science students use when solving problems is needed in order to provide guidance to this population and their instructors on how to effectively use and develop their metacognitive regulation skills.

Research Questions

1) What metacognitive regulation skills are evident when first-year life science students solve problems on their own?

2) What aspects of learning self-efficacy do first-year life science students reveal when they solve problems on their own?

Research Participants & Context

This study is a part of a larger longitudinal research project investigating the development of metacognition in life science undergraduates, which was classified by the Institutional Review Board at the University of Georgia (STUDY00006457) and University of North Georgia (2021-003) as exempt. For that project, 52 first-year students at three different institutions in the southeastern United States were recruited from their introductory biology or environmental science courses in the 2021–2022 academic year. Data were collected at three institutions to represent different academic environments because it is known that context can affect metacognition ( Table 1 ). Georgia Gwinnett College is classified as a baccalaureate college predominantly serving undergraduate students, University of Georgia is classified as a doctoral R1 institution, and University of North Georgia is classified as a master’s university. Additionally, in our past work we found that first-year students from different institutions differed in their metacognitive skills ( Stanton et al. , 2015 , 2019). Our goal in collecting data from three different institution types was to ensure our qualitative study could be more generalizable than if we had only collected data from one institution.

TABLE 1. Comparison of data collection sites

                                          | Georgia Gwinnett College | University of Georgia | University of North Georgia
Institution type                          | Baccalaureate College    | Doctoral R1           | Master’s University
Setting                                   | Suburban                 | City                  | Suburban
Number of undergraduates                  | 10,949                   | 30,166                | 18,155
Students from racially minoritized groups | 57.8%                    | 14.4%                 | 19.3%
Students who identify as women            | 58.7%                    | 58.9%                 | 57.8%
Students who identify as first-generation | 37%                      | 9%                    | 20.6%
Average high school GPA                   | 3.0                      | 4.1                   | 3.5
Average SAT score                         | 1065                     | 1355                  | 1135

Students at each institution were invited to complete a survey to provide their contact information, answer the revised 19-item Metacognitive Awareness Inventory ( Harrison and Vallin, 2018 ), 32-item Epistemic Beliefs Inventory ( Schraw et al. , 1995 ), and 8-item Self-efficacy for Learning and Performance subscale from the Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich et al. , 1993 ). They were also asked to self-report their demographic information including their age, gender, race/ethnicity, college experience, intended major, and first-generation status. First-year students who were 18 years or older and majoring in the life sciences were invited to participate in the larger study. We used purposeful sampling to select a sample that matched the demographics of the student body at each institution and also represented a range in metacognitive ability based on students’ responses to the revised Metacognitive Awareness Inventory ( Harrison and Vallin, 2018 ). In total, eight students from Georgia Gwinnett College, 23 students from the University of Georgia, and 21 students from the University of North Georgia participated in the present study ( Table 2 ). Participants received $40 (either in the form of a mailed check, or an electronic Starbucks or Amazon gift card) for their participation in Year 1 of the larger longitudinal study. Their participation in Year 1 included completing the survey, three inventories, and a 2-hour interview, of which the think aloud interview was one quarter of the total interview.

TABLE 2. Participant Demographics by Institution

                                                               | Georgia Gwinnett College | University of Georgia | University of North Georgia
Number of participants                                         | 8                        | 23                    | 21
Participants from underrepresented racially minoritized groups | 4                        | 5                     | 5
Participants who identify as women                             | 8                        | 13                    | 15
Participants who identify as first-generation                  | 5                        | 3                     | 6
Average high school GPA                                        | 3.3                      | 4.0                   | 3.6
Average college GPA                                            | 3.4                      | 3.7                   | 2.9

Note: We are using Ebony McGee’s rephrasing of URM as underrepresented racially minoritized groups ( McGee, 2020 ). In our work this means students who self-reported as Black or African American or Hispanic or Latinx. For average high school GPA, institutional data are missing for two GGC students.

Data Collection

All interviews were conducted over Zoom during the 2021–2022 academic year when participants had returned to the classroom. Participants ( n = 52) were asked to think aloud as they solved two challenging biochemistry problems ( Figure 1 ) that have been previously published ( Halmo et al. , 2018 , 2020; Bhatia et al. , 2022 ). We selected two challenging biochemistry problems for first-year students to solve because we know that students do not use metacognition unless they find a learning task challenging ( Carr and Taasoobshirazi, 2008 ). If the problems were easy, they may have solved them quickly without needing to use their metacognition or by employing metacognition that is so automatic they may have a hard time verbalizing it ( Samuels et al. , 2005 ). By having students solve problems we knew would be challenging, we hoped this would trigger them to use and verbalize their metacognition during their problem-solving process. This would enable us to study how they used their metacognition and what they did in response to their metacognition. The problems we selected met this criterion because participants had not yet taken biochemistry.

FIGURE 1. Think-Aloud Problems. Students were asked to think aloud as they solved two challenging biochemistry problems. Panel A depicts the Protein X Problem previously published in Halmo et al. , 2018 and 2020. Panel B depicts the Pathway Flux problem previously published in Bhatia et al. , 2022 . Both problems are open-ended and ask students to make predictions and provide scientific explanations for their predictions.

The problems were open-ended and asked students to make predictions and provide scientific explanations for their predictions about: 1) noncovalent interactions in a folded protein for the Protein X Problem ( Halmo et al. , 2018 , 2020) and 2) negative feedback regulation in a metabolic pathway for the Pathway Flux Problem ( Bhatia et al. , 2022 ). Even though the problems were challenging, we made it clear to students before they began that we were not interested in the correctness of their solutions but rather we were genuinely interested in their thought process. To elicit student thinking after participants fell silent for more than 5 seconds, interviewers used the following two prompts: “What are you thinking (now)?” and “Can you tell me more about that?” ( Ericsson and Simon, 1980 ; Charters, 2003 ). After participants solved the problems, they shared their written solutions with the interviewer using the chat feature in Zoom. Participants were then asked to describe their problem-solving process out loud and respond to up to four reflection questions (see Supplemental Material for full interview protocol). The think-aloud interviews were audio and video recorded and transcribed using a professional, machine-generated transcription service (Temi.com). All transcripts were checked for accuracy by members of the research team before analysis began.

Data Analysis

The resulting transcripts were analyzed by a team of three researchers in three cycles. In the first cycle of data analysis, half of the transcripts were open coded by members of the research team (S.M.H., J.D.S., and K.A.Y.). S.M.H. entered this analysis as a postdoctoral researcher in biology education research with experience in qualitative methods and deep knowledge about student difficulties with the two problems students were asked to solve in this study. J.D.S., an associate professor of cell biology and a biology education researcher, entered this analysis as an educator and metacognition researcher with extensive experience in qualitative methods. K.A.Y. entered this analysis as an undergraduate student double majoring in biology and psychology and as an undergraduate researcher relatively new to qualitative research. During this open coding process, we individually reflected on the contents of the data, remained open to possible directions suggested by our interpretation of the data, and recorded our initial observations using analytic memos ( Saldaña, 2021 ). The research team (S.M.H., J.D.S., and K.A.Y.) then met to discuss our observations from the open coding process and suggest possible codes that were aligned with our observations, knowledge of metacognition and self-efficacy, and our guiding research questions. This discussion led to the development of an initial codebook consisting of inductive codes discerned from the data and deductive codes derived from theory on metacognition and self-efficacy. In the second cycle of data analysis, the codebook was applied to the dataset iteratively by two researchers (S.M.H. and K.A.Y.) using MaxQDA2020 software (VERBI Software; Berlin, Germany) until the codebook stabilized or no new codes or modifications to existing codes were needed. Coding disagreements between the two coders were discussed by all three researchers until consensus was reached. All transcripts were coded to consensus to identify aspects of metacognition and learning self-efficacy that were verbalized by participants. Coding to consensus allowed the team to consider and discuss their diverse interpretations of the data and ensure trustworthiness of the analytic process ( Tracy, 2010 ; Pfeifer and Dolan, 2023 ). In the third and final cycle of analysis, thematic analysis was used to uncover central themes in our dataset. As a part of thematic analysis, two researchers (S.M.H. and K.A.Y.) synthesized one-sentence summaries of each participant’s think aloud interview. Student quotes presented in the Results & Discussion have been lightly edited for clarity, and all names are pseudonyms.

Problem-Solving Performance as One Context for Studying Metacognition

To compare the potential effects of institution and gender on problem-solving performance, we scored the final problem solutions and then analyzed the scores using R Statistical Software (R Core Team, 2021). A one-way ANOVA was performed to compare the effect of institution on problem-solving performance. This analysis revealed no statistically significant difference in problem-solving performance among the three institutions (F[2, 49] = 0.085, p = 0.92), indicating that students performed similarly on the problems regardless of which institution they attended (Supplemental Data, Table 1). Another one-way ANOVA was performed to compare the effect of gender on problem-solving performance, which revealed no statistically significant difference based on gender (F[1, 50] = 0.956, p = 0.33); students performed similarly on the problems regardless of their gender (Supplemental Data, Table 2). Taken together, these analyses suggest a sample that is homogeneous with regard to problem-solving performance.
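For readers who want to run the same kind of check on their own scored data, a minimal sketch of these two one-way ANOVAs in R is shown below. This is not the study’s analysis script; the data frame and column names (scores, performance, institution, gender) are hypothetical placeholders, and the toy values exist only to make the example runnable.

```r
# Hypothetical data frame: one row per participant, with the rubric score
# (0-10 scale) and institution and gender recorded as factors.
scores <- data.frame(
  performance = c(2, 3, 1, 4, 2, 5, 2, 3, 1, 2, 4, 3),
  institution = factor(rep(c("A", "B", "C"), each = 4)),
  gender      = factor(rep(c("woman", "man"), times = 6))
)

# One-way ANOVA: does mean performance differ by institution?
fit_institution <- aov(performance ~ institution, data = scores)
summary(fit_institution)  # reports the F statistic, degrees of freedom, and p-value

# One-way ANOVA: does mean performance differ by gender?
fit_gender <- aov(performance ~ gender, data = scores)
summary(fit_gender)
```

A nonsignificant p-value in either summary, like those reported above, is consistent with treating the sample as homogeneous with respect to problem-solving performance.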

Participants’ final problem solutions were individually scored by two researchers (S.M.H. and K.A.Y.) using an established rubric, and scores were discussed until complete consensus was reached. The rubric used to score the problems is available from the corresponding author upon request. The median problem-solving performance of students in our sample was two points on a 10-point rubric. Students in our sample scored low on the rubric because they either failed to answer part of the problem or struggled to provide accurate explanations or evidence to support their predictions. Although the prompt included the phrase “provide a scientific explanation to support your prediction,” most students’ solutions contained a prediction but lacked an explanation. For example, the majority of the solutions for the Protein X problem predicted the noncovalent interaction would be affected by the substitution but lacked categorization of the relevant amino acids or identification of the noncovalent interactions involved, which are critical problem-solving steps for this problem (Halmo et al., 2018, 2020). The majority of the Pathway Flux solutions also predicted that flux would be affected but lacked an accurate description of negative feedback inhibition or of the release of that regulation in the pathway, which are critical features of this problem (Bhatia et al., 2022). This lack of accurate explanations is not unexpected. Previous work shows that both introductory biology and biochemistry students struggle to provide accurate explanations to these problems without pedagogical support, and introductory biology students generally struggle more than biochemistry students (Bhatia et al., 2022; Lemons, personal communication).

RESULTS AND DISCUSSION

To address our first research question, we looked for statements and questions related to the three skills of planning, monitoring, and evaluating in our participants’ think aloud data. Because metacognitive regulation skills encompass how students act on their metacognitive awareness, we required evidence of explicit awareness when analyzing our data for these skills. For example, the statement “this is a hydrogen bond” does not display awareness of one’s knowledge but rather the knowledge itself (cognition). In contrast, the statement “I know this is a hydrogen bond” does display awareness of one’s knowledge and is therefore considered evidence of metacognition. We found evidence of all three metacognitive regulation skills in our data: first-year life science students plan, monitor, and evaluate when solving challenging problems. However, our data collection method revealed that students monitored in especially varied ways. We present our findings for each metacognitive regulation skill (Table 3). To further demonstrate how students use these skills in concert when problem solving, we offer problem-solving vignettes of a student from each institution in Supplemental Data.

TABLE 3. Metacognitive regulation skills revealed during individual problem solving & implications for instruction

Planning
• Assessing the task: Student identifies what the problem is asking them to do, either successfully or unsuccessfully. Implications for instruction: Model planning for students by verbalizing how to assess the task and what strategies to use and why before walking through a worked example. Provide students with immediate feedback on the accuracy of their assessment of the task.

Monitoring
• Relevance: Student describes what parts of the prompt or pieces of their own knowledge are relevant or irrelevant to solving the problem. Implications for instruction: Explicitly teach students relevant strategies that can help resolve confusion, a lack of understanding, or uncertainty (see Stanton et al., 2021 for an evidence-based teaching guide on metacognition).
• Confusion: Student expresses a general lack of understanding or knowledge about the problem.
• Familiarity: Student describes what is familiar or not familiar to them or something they remember or forget from class. Implications for instruction: Encourage students to assess the effectiveness of their strategy use in response to their monitoring; for example, was acknowledging and using an assumption helpful in moving forward when you were uncertain?
• Understanding: Student describes specific pieces of knowledge they know or don’t know. Implications for instruction: Provide guidance on how to keep track of the information gleaned from these types of monitoring during problem solving, for example, by writing down what they do and do not know.
• Questions: Student asks themselves a question.
• Correctness: Student corrects themselves while talking out loud.

Evaluating
• Solution: Student assesses the accuracy of their solution, double-checks their answer, or rethinks their solution. Implications for instruction: Provide students with immediate feedback about the accuracy of their solution(s) to help them evaluate and develop well-calibrated self-evaluation skills, for example, by providing answer keys on formative assessments. Encourage students to self-coach during problem solving to overcome potentially negative emotions or feelings of discomfort that may occur when they are metacognitive.
• Experience: Student assesses the problem difficulty or the feelings associated with their thought process.

Planning: Students did not plan before solving but did assess the task in the moment

Planning how to approach the task of solving problems individually involves selecting strategies to use and when to use them before starting the task (Stanton et al., 2021). Planning did not appear in our data in the classical sense. This finding is unsurprising because the task was: 1) well-defined, meaning there were a few potentially accurate solutions rather than an abundant number of accurate solutions, 2) straightforward, meaning the goal of solving the problem was clearly stated, and 3) relatively short, meaning students were not entering and exiting the task multiple times as they might when studying for an exam. Additionally, the stakes were comparatively low, meaning task completion and performance carried little to no weight in participants’ college careers. From other data collected from this same sample, we know that these participants make plans for high-stakes assessments like exams but often admit to not planning for lower-stakes assessments like homework (Stanton, personal communication). Related to the skill of planning, we observed students assessing the task after reading the problem (Table 3). Below, we describe how students assessed the task and what happened after students planned in this way.

Assessing the task

While we did not observe students explicitly planning their approach to problem solving before beginning the task, we did observe students assessing the task, or what other researchers have called “orientation,” after reading the problems (Meijer et al., 2006; Schellings et al., 2013). Students in our study assessed the task either successfully or unsuccessfully. For example, when Gerald states, “So I know that not only do I have to give my answer, but I also have to provide information on how I got my answer…,” he successfully identified what the problem was asking him to do, namely to provide a scientific explanation. In contrast, Simone admits her struggle with figuring out what the problem is asking when she states, “I’m still trying to figure out what the question’s asking. I don’t want to give up on this question just yet, but yeah, it’s just kinda hard because I can’t figure out what the question is asking me if I don’t know the terminology behind it.” In Simone’s case, the terminology she struggled to understand was what was meant by a scientific explanation. Assessing the task unsuccessfully also involved misinterpreting what the problem asked. This was a frequent issue for students in our sample during the Pathway Flux problem because students inaccurately interpreted the negative feedback loop, which is a known problematic visual representation in biochemistry (Bhatia et al., 2022). For example, students like Paulina and Kathleen misinterpreted the negative feedback loop as enzyme B no longer functioning when they stated, respectively, “So if enzyme B is taken out of the graph…” or “…if B cannot catalyze…” Additionally, some students misinterpreted the negative feedback loop as a visual cue of the change described in the problem prompt (IV-CoA can no longer bind to enzyme B). This can be seen in the following example quote from Mila: “So I was looking at it and I see what they’re talking about with the IV-CoA no longer binding to enzyme B and I think that’s what that arrow with the circle and the line through it is representing. It’s just telling me that it’s not binding to enzyme B.”

What happened after assessing the task?

Misinterpretations of what the problem was asking, like those shared above from Simone, Paulina, Kathleen, and Mila, led to inaccurate answers for the Pathway Flux problem. In contrast, when students like Gerald correctly interpreted what the problem asked them to do, they produced fuller and more accurate answers for both problems. Accurately interpreting what a problem is asking you to do is critical for problem-solving success. A related procedural error, categorized as misreading, was identified in other research on written think-aloud protocols from students solving multiple-choice biology problems (Prevost and Lemons, 2016).

Implications for Instruction & Research about Planning

In our study, we did not detect evidence of explicit planning beyond assessing the task. This suggests that first-year students’ approaches were either unplanned or automatic (Samuels et al., 2005). As metacognition researchers and instructors, we find it illuminating that first-year life science students did not plan before solving but did assess the task while solving. It suggests that planning is likely one area in which we can help first-year life science students grow their metacognitive skills through practice. While we do not anticipate that undergraduate students will be able to plan how to solve an unfamiliar problem before reading it, we do think we can help students develop their planning skills through modeling while solving life science problems.

When modeling problem solving for students, we could make our planning explicit by verbalizing how we assess the task and what strategies we plan to use and why. From the problem-solving literature, it is known that experts assess a task by recognizing the deep structure or problem type and what is being asked of them (Chi et al., 1981; Smith et al., 2013). This likely happens rapidly and automatically for experts through the identification of visual and keyword cues. Forcing ourselves to think about what these cues might be and alerting students to them through modeling may help students more rapidly develop expert-level schemas, approaches, and planning skills. Providing students with feedback on their assessment of a task and on whether they misunderstood the problem also seems to be critical for problem-solving success (Prevost and Lemons, 2016). Finally, helping students realize they can plan even for smaller tasks like solving a single problem, for example by listing the pros and cons of relevant strategies and the order in which they plan to use them before they begin, could help students narrow the problem-solving space, approach the task with focus, and achieve the efficiency needed to become “good strategy users” (Pressley et al., 1987).

Monitoring: Students monitored in the moment in a myriad of ways

Monitoring progress towards problem-solving involves assessing conceptual understanding during the task ( Stanton et al. , 2021 ). First-year life science students in our study monitored their conceptual understanding during individual problem solving in a myriad of ways. In our analysis, we captured the specific aspects of conceptual understanding students monitored. Students in our sample monitored: 1) relevance, 2) confusion, 3) familiarity, 4) understanding, 5) questions, and 6) correctness ( Table 3 ). We describe each aspect of conceptual understanding that students monitored and we provide descriptions of what happened after students monitored in this way ( Figure 2 ).

FIGURE 2. How monitoring can impact the problem-solving process. The various ways first-year students in this study monitored are depicted as ovals. See Table 3 for detailed descriptions of the ways students monitored. How students in this study acted on their monitoring are shown as rectangles. In most cases, what happened after students monitored determined whether or not problem solving moved forward. Encouraging oneself using positive self-talk, or self-coaching, helped students move past the discomfort associated with monitoring a lack of conceptual understanding (confusion, lack of familiarity, or lack of understanding) and enabled them to use problem-solving strategies, which moved problem solving forward.

Monitoring Relevance

When students monitored relevance, they described what pieces of their own knowledge or aspects of the problem prompts were relevant or irrelevant to their thought process (Table 3). For the Protein X problem, many students monitored the relevance of the provided information about pH. First-year life science students may have focused on this aspect of the problem prompt because pH is a topic often covered in introductory biology classes, which participants were enrolled in at the time of the study. However, students differed in whether they judged this information relevant or irrelevant. Quinn decided this piece of information was relevant: “The pH of the water surrounding it. I think it’s important because otherwise it wouldn’t really be mentioned.” In contrast, Ignacio decided the same piece of information was irrelevant: “So the pH has nothing to do with it. The water molecules had nothing to do with it as well. So basically, everything in that first half, everything in that first thing, right there is basically useless. So, I’m just going to exclude that information out of my thought process cause the pH has nothing to do with what’s going on right now…” From an instructional perspective, knowing that the pH in the Protein X problem is relevant information for determining the ionization state of acidic and basic amino acids, like amino acids D and E shown in the figure, could be helpful. However, this specific problem asked students to consider amino acids A and B, so Ignacio’s decision that the pH was irrelevant may have helped him focus on more central parts of the problem. In addition to monitoring the relevance of the provided information, sometimes students would monitor the relevance of their own knowledge that they brought to bear on the problem. For example, consider the following quote from Regan: “I just think that it might be a hydrogen bond, which has nothing to do with the question.” Regan made this statement during her think aloud for the Protein X problem, which is intriguing because the Protein X problem deals solely with noncovalent interactions like hydrogen bonding.

What happened after monitoring relevance?

Overall, monitoring relevance helped students narrow their focus during problem solving, but it could be misleading when done inaccurately, as in Regan’s case (Figure 2).

Monitoring Confusion

When students monitored confusion when solving, they expressed a general lack of understanding or knowledge about the problem ( Table 3 ). As Sara put it, “ I have no clue what I’m looking at.” Sometimes monitoring confusion came as an acknowledgement of lack of prior knowledge students felt they needed to solve the problem. Take for instance when Ismail states, “I’ve never really had any prior knowledge on pathway fluxes and like how they work and it obviously doesn ’ t make much sense to me .” Students also expressed confusion about how to approach the problem, which is related to monitoring one’s procedural knowledge. For example, when Harper stated, “ I ’ m not sure how to approach the question ,” she was monitoring a lack of knowledge about how to begin. Similarly, after reading the problem Tiffani shared, “ I am not sure how to solve this one because I’ve actually never done it before…”

What happened after monitoring confusion?

When students monitored their confusion, one of two things happened (Figure 2). Rarely, students would give up on solving altogether. In fact, only one individual (Roland) submitted a final solution that read, “I have no idea.” More often, students persisted despite their confusion. Rereading the problem was a common strategy students in our sample used after identifying general confusion. As Jeffery stated after reading the problem, “I didn’t really understand that, so I’m gonna read that again.” After rereading the problem a few times, Jeffery stated, “Oh, and we have valine here. I didn’t see that before.” Some students, like Valentina, revealed the rationale for their rereading strategy after solving: “First I just read it a couple of times because I wasn’t really understanding what it was saying.” After rereading the problem a few times, Valentina was able to accurately assess the task by stating “amino acid (A) turns into valine.” When solving, some students linked their general confusion with an inability to solve. As Harper shared, “I don’t think that I have enough like basis or learning to where I’m able to answer that question.” Despite making this claim of self-doubt in her ability to solve, Harper monitored in other ways and ultimately came up with a solution beyond a simple “I don’t know.” In sum, when students acknowledged their confusion in this study, they usually did not stop there. They used their confusion as an indicator to use a strategy, like rereading, to resolve it, or as a jumping-off point to monitor further by identifying more specifically what they did not understand. Persisting despite confusion likely depends on other factors, like self-efficacy.

Monitoring Familiarity

When students monitored familiarity, they described knowledge or aspects of the problem prompt that were familiar or not familiar to them (Table 3). This category also captured when students described remembering or forgetting something from class. For example, when Simone states, “I remember learning covalent bonds in chemistry, but I don’t remember right now what that meant,” she is acknowledging her familiarity with the term covalent from her chemistry course. Similarly, Oliver acknowledges his familiarity with tertiary structure from his class when solving the Protein X problem. He first shared, “This reminds me of something that we’ve looked at in class of a tertiary structure. It was shown differently but I do remember something similar to this.” Then later, he acknowledges his lack of familiarity with the term flux when solving the Pathway Flux problem: “That word flux. I’ve never heard that word before.” Quinn aptly pointed out that being familiar with a term or recognizing a word in the problem did not equate to her understanding: “I mean, I know amino acids, but that doesn’t… like I recognize the word, but it doesn’t really mean anything to me. And then non-covalent, I recognize the conjunction of words, but again, it’s like somewhere deep in there…”

What happened after monitoring familiarity?

When students recognized what was familiar to them in the problem, it sometimes helped them connect to related prior knowledge ( Figure 2 ). In some cases, though, students connected words in the problem that were familiar to them to unrelated prior knowledge. Erika, for example, revealed in her problem reflection that she was familiar with the term mutation in the Protein X problem and formulated her solution based on her knowledge of the different types of DNA mutations, not noncovalent interactions. In this case, Erika’s familiarity with the term mutation and failure to monitor the relevance of this knowledge when problem solving impeded her development of an accurate solution to the problem. This is why Quinn’s recognition that her familiarity with terms does not equate to understanding is critical. This recognition can help students like Erika avoid false feelings of knowing that might come from the rapid and fluent recall of unrelated knowledge ( Reber and Greifeneder, 2017 ). When students recognized parts of the problem they were unfamiliar with, they often searched for familiar terms to use as footholds ( Figure 2 ). For example, Lucy revealed the following in her problem reflection: “So first I tried to look at the beginning introduction to see if I knew anything about the topic. Unfortunately, I did not know anything about it. So, I just tried to look for any trigger words that I did recognize.” After stating this, Lucy said she recognized the words protein and tertiary structure and was able to access some prior knowledge about hydrogen bonds for her solution.

Monitoring Understanding

When students monitored understanding, they described specific pieces of knowledge they either knew or did not know, beyond what was provided in the problem prompt ( Table 3 ). Monitoring understanding is distinct from monitoring confusion. When students displayed awareness of a specific piece of knowledge they did not know (e.g., “I don’t know what these arrows really mean.” ) this was considered monitoring (a lack of) understanding. In contrast, monitoring confusion was a more general awareness of their overall lack of understanding (e.g., “Well, I first look at the image and I ’ m already kind of confused with it [laughs].” ). For example, Kathleen demonstrated an awareness of her understanding about amino acid properties when she said, “ I know that like the different amino acids all have different properties like some are, what’s it called? Like hydrophobic, hydrophilic, and then some are much more reactive.” Willibald monitored his understanding using the mnemonic “when in doubt, van der Waals it out” by sharing, “So, cause I know basically everything has, well not basically everything, but a lot of things have van der Waal forces in them. So that’s why I say that a lot of times. But it’s a temporary dipole, I think.” In contrast, Jeffery monitored his lack of understanding of a specific part of the Pathway Flux figure when he stated, “I guess I don ’ t understand what this dotted arrow is meaning.” Ignoring or misinterpreting the negative feedback loop was a common issue as students solved this problem, so it’s notable that Jeffery acknowledged his lack of understanding about this symbol. When students identified what they knew, the incomplete knowledge they revealed sometimes had the potential to lead to a misunderstanding. Take for example Lucy’s quote: “ I know a hydrogen bond has to have a hydrogen. I know that much. And it looks like they both have hydrogen.” This statement suggests Lucy might be displaying a known misconception about hydrogen bonding – that all hydrogens participate in hydrogen bonding ( Villafañe et al. , 2011 ).

What happened after monitoring understanding?

When students could identify what they knew, they used this information to formulate a solution ( Figure 2 ). When students could identify what they did not know, they either did not know what to do next or they used strategies to move beyond their lack of understanding ( Figure 2 ). Two strategies students used after identifying a lack of understanding included disregarding information and writing what they knew. Kyle disregarded information when he didn’t understand the negative feedback loop in the Pathway Flux problem: “…there is another arrow on the side I see with a little minus sign. I’m not sure what that means… it’s not the same as [the arrows by] A and C. So, I’m just going to disregard it sort of for now. It’s not the same. Just like note that in my mind that it’s not the same.” In this example, Kyle disregards a critical part of the problem, the negative feedback loop, and does not revisit the disregarded information which ultimately led him to an incorrect prediction for this problem. We also saw one example of a student, Elaine, use the strategy of writing what she knew when she was struggling to provide an explanation for her answer: “I should know this more, but I don’t know, like a specific scientific explanation answer, but I’m just going to write what I do know so I can try to organize my thoughts.” Elaine’s focus on writing what she knew allowed her to organize the knowledge she did have into a plausible solution that specified which amino acids would participate in new noncovalent interactions (“I predict there will be a bond between A and B and possibly A and C.” ) despite not knowing “what would be required in order for it to create a new noncovalent interaction with another amino acid.” The strategies that Kyle and Elaine used in response to monitoring a lack of understanding shared the common goal of helping them get unstuck in their problem-solving process.

Monitoring Questions

When students monitored through questions, they asked themselves a question out loud ( Table 3 ). These questions were either about the problem itself or their own knowledge. An example of monitoring through a question about the problem itself comes from Elaine who asked herself after reading the problem and sharing her initial thoughts, “ What is this asking me? ” Elaine’s question helped reorient her to the problem and put herself back on track with answering the question asked. After Edith came to a tentative solution, she asked herself, “But what about the other information? How does that pertain to this? ” which helped her initiate monitoring the relevance of the information provided in the prompt. Students also posed questions to themselves about their own content knowledge. Take for instance Phillip when he asked himself, “So, would noncovalent be ionic bonds or would it be something else? Covalent bonds are sharing a bond, but what does noncovalent mean? ” After Phillip asked himself this question, he reread the problem but ultimately acknowledged he was “not too sure what noncovalent would mean.”

What happened after monitoring questions?

After students posed questions to themselves while solving, they either answered their question or they didn’t ( Figure 2 ). Students who answered their self-posed questions relied on other forms of monitoring and rereading the prompt to do so. For example, after questioning themselves about their conceptual knowledge, some students acknowledged they did not know the answer to their question by monitoring their understanding. Students who did not answer their self-posed questions moved on without answering their question directly out loud.

Monitoring Correctness

When students monitored correctness, they corrected their thinking out loud ( Table 3 ). A prime example of this comes from Kyle’s think aloud, where he corrects his interpretation of the problem not once but twice: “It said the blue one highlighted is actually a valine, which substituted the serine, so that’s valine right there. And then I’m reading the question. No, no, no. It ’ s the other way around. So, serine would substitute the valine and the valine is below… Oh wait wait , I had it right the first time. So, the blue highlighted is this serine and that’s supposed to be there, but a mutation occurs where the valine gets substituted.” Kyle first corrects his interpretation of the problem in the wrong direction but corrects himself again to put him on the right track. Icarus also caught himself reading the problem incorrectly by replacing the word noncovalent with the word covalent, which was a common error students made: “ Oh, wait, I think I read that wrong. I think I read it wrong. Well, yeah. Then that will affect it. I didn’t read the noncovalent part. I just read covalent.” Students also corrected their language use during the think aloud interviews, like Edith: “ because enzyme B is no longer functioning… No, not enzyme B… because IV-CoA is no longer functional and able to bind to enzyme B, the metabolic pathway is halted.” Edith’s correction of her own wording, while minor, is worth noting because students in this study often misinterpreted the Pathway Flux problem to read as “enzyme B no longer works”. There were also instances when students corrected their own knowledge that they brought to bear on the problem. This can be seen in the following quote from Tiffani when she says, “And tertiary structure. It has multiple… No, no, no. That ’ s primary structure. Tertiary structure’s when like the proteins are folded in on each other.”

What happened after monitoring correctness?

When students corrected themselves, this resulted in more accurate interpretations of the problem and thus more accurate solutions ( Figure 2 ). Specifically, monitoring correctness helped students avoid common mistakes when assessing the task which was the case for Kyle, Icarus, and Edith described above. When students do not monitor correctness, incorrect ideas can go unchecked throughout their problem-solving process, leading to more inaccurate solutions. In other research, contradicting and misunderstanding content were two procedural errors students experienced when solving multiple-choice biology problems ( Prevost and Lemons, 2016 ), which could be alleviated through monitoring correctness.

Implications for Instruction & Research about Monitoring

Monitoring is the last metacognitive regulation skill to develop, and it develops slowly and well into adulthood (Schraw, 1998). Based on our data, first-year life science students are monitoring in the moment in a myriad of ways. This may suggest that college-aged students have already developed monitoring skills by the time they enter college. This finding has implications for both instruction and research. For instruction, we may need to help our students keep track of, and learn what to do with, the information and insight they glean from their in situ monitoring when solving life science problems. For example, students in our study could readily identify what they did and did not know, but they sometimes struggled to identify ways in which they could resolve their lack of understanding, confusion, or uncertainty, or to use this insight in expert-like ways when formulating a solution.

As instructors who teach students about metacognition, we can normalize the temporary discomfort monitoring may bring as an integral part of the learning process and model for students what to do after they monitor. For example, when students glean insight from monitoring familiarity , we could help them learn how to properly use this information so that they do not equate familiarity with understanding when practicing problem solving on their own. This could help students avoid the fluency fallacy or the false sense that they understand something simply because they recognize it or remember learning about it ( Reber and Greifeneder, 2017 ).

The majority of the research on metacognition, including our own, has been conducted using retrospective methods. However, retrospective methods may provide little insight into true monitoring skills, since these skills are used during learning rather than after learning has occurred (Schraw and Moshman, 1995; Stanton et al., 2021). More research using in-the-moment methods, which are used widely in the problem-solving literature, is needed to fully understand the rich monitoring skills of life science students and how they may develop over time. The monitoring skills of life science students in both individual and small-group settings, and the relationship of monitoring skills across these two settings, warrant further exploration. This seems particularly salient given that questioning and responding to questions appear to be important aspects of both individual metacognition in the present study and social metacognition in our prior study, which also used in-the-moment methods (Halmo et al., 2022).

Evaluating: Students evaluated their solution and experience problem solving

Evaluating achievement of individual problem solving involves appraising an implemented plan and how it could be improved for future learning after completing the task ( Stanton et al. , 2021 ). Students in our sample revealed some of the ways they evaluate when solving problems on their own ( Table 3 ). They evaluated both their solution and their experience of problem solving.

Evaluating A Solution

Evaluating a solution occurred when students assessed the accuracy of their solution, double-checked their answer, or rethought their solution (Table 3). While some students evaluated their accuracy in the affirmative (that their solution is right), most students evaluated the accuracy of their solution in the negative (that their solution is wrong). For example, Kyle stated, “I don’t think hydrogen bonding is correct.” Kyle clarified in his problem reflection, “I noticed [valine] did have hydrogens and the only noncovalent interaction I know of is probably hydrogen bonding. So, I just sort of stuck with that and just said more hydrogen bonding would happen with the same oxygen over there [in glutamine].” Through this quote, we see that Kyle went with hydrogen bonding as his prediction because it was the only noncovalent interaction he could recall. However, Kyle accurately evaluated his solution by noting that hydrogen bonding was not the correct answer. Evaluating accuracy in the negative often seemed like hedging or self-doubt. Take, for instance, Regan’s quote that she shared right after submitting her final solution: “The chances of being wrong are 100%, just like, you know [laughs].”

Students also evaluated their solution by double-checking their work. Kyle used a very clearly-defined approach for double checking his work by solving the problem twice: “So that’s just my initial answer I would put, and then what I do next was I ’ d just like reread the question and sort of see if I come up with the same answer after rereading and redoing the problem. So, I’m just going to do that real quick.” Checking one’s work is a well-established problem-solving step that most successful problem solvers undertake ( Cartrette and Bodner, 2010 ; Prevost and Lemons, 2016 ).

Students also evaluated by rethinking their initial solution. In the following case, Mila’s evaluation of her solution did not improve her final answer. Mila initially predicted that the change described in the Pathway Flux problem would affect flux, which is correct. However, she evaluates her solution when she states, “Oh, wait a minute, now that I’m saying this out loud, I don’t think it’ll affect it because I think IV-CoA will be binding to enzyme B or C. Sorry, hold on. Now I ’ m like rethinking my whole answer .” After this evaluation, Mila changes her prediction to “it won’t affect flux” , which is incorrect. In contrast, some students’ evaluations of their solutions resulted in improved final answers. For example, after submitting his solution and during his problem reflection, Willibald states, “Oh, I just noticed. I said there’ll be no effect on the interaction, but then I said van der Waals forces which is an interaction. So, I just contradicted myself in there .” After this recognition, Willibald decides to amend his first solution, ultimately improving his prediction. We also observed one student, Jeffery, evaluating whether or not his solution answered the problem asked, which is notable because we also observed students evaluating in this way when solving problems in small groups ( Halmo et al. , 2022 ): “I guess I can’t say for sure, but I’ll say this new amino acid form[s] a bond with the neighboring amino acids and results in a new protein shape. The only issue with that answer is I feel like I ’ m not really answering the question : Predict any new noncovalent interactions that might occur with such a mutation.” While the above examples of evaluating solution occurred spontaneously without prompting, having students describe their thinking process after solving the problems may have been sufficient to prompt them to evaluate their solution.

What happened after evaluating a solution?

When students evaluated the accuracy of their solution, double-checked their answer, or rethought their solution, it helped them recognize potential flaws or mistakes in their answers. After evaluating their solution, they decided either to stick with their original answer or to amend it. Evaluating a solution often resulted in students adding to or refining their final answer. However, because of limited content knowledge, these solution amendments were not always beneficial or in the correct direction. In other work on the metacognition involved in changing answers, answer-changing neither reduced nor significantly boosted performance (Stylianou-Georgiou and Papanastasiou, 2017). The fact that Mila’s evaluation of her solution led to a less correct answer, whereas Willibald’s evaluation of his solution led to a more correct one, further illustrates the variable effect of answer-changing on performance.

Evaluating Experience

Evaluating experience occurred when students assessed the difficulty level of the problem or the feelings associated with their thought process ( Table 3 ). This type of evaluation occurred after solving in their problem reflection or in response to the closing questions of the think aloud interview. Students evaluated the problems as difficult based on the confusion, lack of understanding, or low self-efficacy they experienced when solving. For example, Ivy stated, “I just didn’t really have any background knowledge on them, which kind of made it difficult .” In one instance, Willibald’s evaluation of difficulty while amending his solution was followed up with a statement about self-efficacy: “ This one was a difficult one. I told you I’m bad with proteins [laughs].” Students also compared the difficulty of the two problems we asked them to solve. For example, Elena determined that the Pathway Flux problem was easier for her compared with the Protein X problem in her problem reflection: “ I didn ’ t find this question as hard as the last question just cause it was a little bit more simple.” In contrast, Elaine revealed that she found the Protein X problem challenging because of the open-ended nature of the question: “ I just thought that was a little more difficult because it’s just asking me to predict what possibly could happen instead of like something that’s like, definite, like I know the answer to. So, I just tried to think about what I know…” Importantly, Elaine indicated her strategy of thinking about what she knew in the problem in response to her evaluation of difficulty.

Evaluating experience also occurred when students assessed how their feelings were associated with their thought process. The feelings they described were directly tied to aspects of their monitoring. We found that students associated negative emotions (nervousness, worry, and panic) with a lack of understanding or a lack of familiarity. For example, in Renee’s problem reflection, she connected feelings of panic to when she monitored a lack of understanding: “I kind of panicked for a second, not really panicked cause I know this isn’t like graded or anything, but I do not know what a metabolic pathway is.” In contrast, students associated more positive feelings when they reflected on moments of monitoring understanding or familiarity. For example, Renee also stated, “At first I was kind of happy because I knew what was going on.” Additionally, some students revealed their use of a strategy explicitly to engender positive emotions or to avoid negative emotions, like Tabitha: “I looked at the first box, I tried to break it up into certain sections, so I did not get overwhelmed by looking at it.”

What happened after evaluating experience?

When students evaluated their experience problem solving in this study, they usually evaluated the problems as difficult and not easy. Their evaluations of experience were directly connected to aspects of their monitoring while solving. They associated positive emotions and ease with understanding and negative emotions and difficulty with confusion, a lack of familiarity, or a lack of understanding. Additionally, they identified the purpose of some strategy use was to avoid negative experiences. Because their evaluations of experience occurred after solving the problems, most students did not act on this evaluation in the context of this study. We speculate that students may act on evaluations of experience by making plans for future problem solving, but our study design did not necessarily provide students with this opportunity. Exploring how students respond to this kind of evaluation in other study designs would be illuminating.

Implications for Instruction & Research about Evaluating

Our data indicate that some first-year life science students are evaluating their solution and experience after individual problem solving. As instructors, we can encourage students to further evaluate their solutions by prompting them to: 1) rethink or redo a problem to see whether they come up with the same answer or wish to amend their initial solution, and 2) predict whether they think their solution is right or wrong. Encouraging students to evaluate by predicting whether their solution is right or wrong is limited by content knowledge. Therefore, it is imperative to help students develop their self-evaluation accuracy by following up their predictions with immediate feedback to help them become well calibrated (Osterhage, 2021). Additionally, encouraging students to reflect on their experience solving problems might help them identify and verbalize perceived problem-solving barriers to themselves and their instructors. There is likely a highly individualized level of desirable difficulty for each student, at which a problem is difficult enough to engage their curiosity and motivation to solve something unknown but does not generate the negative emotions associated with failure that could prevent problem solving from moving forward (Zepeda et al., 2020; de Bruin et al., 2023). The link between feelings and metacognition in the present study parallels other studies that used retrospective methods and found links between feelings of (dis)comfort and metacognition (Dye and Stanton, 2017). This suggests that the feelings students associate with their metacognition are an important consideration when designing future research studies and interventions. For example, helping students coach themselves through the negative emotions associated with not knowing and pivot to what they do know might increase the self-efficacy needed for problem-solving persistence.

To address our second research question, we looked for statements related to self-efficacy in our participants’ think aloud data. Self-efficacy is defined as one’s belief in one’s capability to carry out a specific task (Bandura, 1997). Alternatively, self-efficacy is sometimes operationalized as one’s confidence in performing specific tasks (Ainscough et al., 2016). While we saw instances of students making high self-efficacy statements (“I’m confident that I was going in somewhat of the right direction”) and low self-efficacy statements (“I’m not gonna understand it anyways”) during their think aloud interviews, we were particularly intrigued by a distinct form of self-efficacy that appeared in our data, which we call “self-coaching” (Table 4). We posit that self-coaching is similar to the ideas of self-modeling or efficacy self-talk that other researchers have described in the past (Wolters, 2003; Usher, 2009). In our data, students used these self-encouraging statements to: 1) reassure themselves about a lack of understanding, 2) reassure themselves that it is okay to be wrong, 3) encourage themselves to keep going despite not knowing, or 4) remind themselves of their prior experience. To highlight the role that self-coaching played in problem solving in our dataset, we first present examples where self-coaching was absent and could have benefited the students in our study. Then we present examples where self-coaching was used.

TABLE 4. Examples of aspects of self-efficacy revealed during individual problem solving

• High self-efficacy: Student expresses confidence in their knowledge or ability to do something.
• Low self-efficacy: Student expresses a lack of confidence in their knowledge or ability to do something.
• Self-coaching: Student makes a self-encouraging statement about their lack of understanding, about being wrong, to keep going despite not knowing, or about their prior experience.

When students monitored without self-coaching, they had a hard time moving forward in their problem-solving

When solving the challenging biochemistry problems in this study, first-year life science students often came across pieces of information or parts of the figures that they were unfamiliar with or did not understand. In the Monitoring section, we described how students monitored their understanding and familiarity, but perhaps what is more interesting is how students responded to not knowing and their lack of familiarity ( Figure 2 ). In a handful of cases, we witnessed students get stuck or hung up on what they did not know. We posit that the feeling of not knowing could increase anxiety, cause concern, and increase self-doubt, all of which can negatively impact a student’s self-efficacy and cause them to stop problem solving. One example of this in our data comes from Tiffani. Tiffani stated her lack of knowledge about how to proceed and followed this up with a statement on her lack of ability to solve the problem, “I am actually not sure how to solve this. I do not think I can solve this one.” A few lines later, Tiffani clarified where her lack of understanding rested, but again stated she cannot solve the problem, “I’m not really sure how these type of amino acids pair up, so I can’t really solve it.” In this instance, Tiffani’s lack of understanding is linked to a perceived inability to solve the problem.

Some students also linked not knowing with perceived deficits. For example, in the following quote Chandra linked not knowing how to answer the second part of the Protein X problem with the idea that she is “not very good” with noncovalent interactions: “ I’m not really sure about the second part. I do not know what to say at all for that, to predict any new noncovalent, I’m not very good with noncovalent at all.” When asked where she got stuck during problem solving, Chandra stated, “The “predict any new noncovalent” cause [I’m] not good with bonds. So, I cannot predict anything really.” In Chandra’s case, her lack of understanding was linked to a perceived deficit and inability to solve the problem. As instructors, it is moments like these where we would hope to intervene and help our students persist in problem solving. However, targeted coaching for all students each time they solve a problem can seem like an impossible feat to accomplish in large, lecture-style college classrooms. Therefore, from our data we suggest that encouraging students to self-coach themselves through these situations is one approach we could use to achieve this goal.

When students monitored and self-coached, they persisted in their problem-solving

In contrast to the cases of Tiffani and Chandra shared above, we found instances of students self-coaching after acknowledging their lack of understanding about parts of the problem by immediately reassuring themselves that it was okay to not know ( Table 4 ). For example, when exploring the arrows in the Pathway Flux problem figure Ivy states, “I don’t really know what that little negative means, but that’s okay .” After making this self-coaching statement Ivy moves on to thinking about the other arrows in the figure and what they mean to formulate an answer. In a similar vein, when some students were faced with their lack of understanding, one strategy they deployed was not dwelling on their lack of knowledge and pivoting to look for a foothold of something they do know. For example, in the following quote we see Viola acknowledge her initial lack of understanding and familiarity with the Pathway Flux problem and then find a foothold with the term enzymes which she knows she has learned about in the past, “I’m thinking there’s very little here that I recognize or understand. Just… okay. So, talking about enzymes, I know we learned a little bit about that.”

Some students acknowledged this strategy of pivoting to what they do know in their problem reflections. Quinn and Gerald, for example, explained that they rely on what they do know, even if it is not accurate. As Quinn put it, “taking what I think I know, even if it’s wrong, like I kind of have to, you have to go off of something.” Similarly, Gerald acknowledged his strategy of “it’s okay to get it wrong” when he doesn’t know and connected this strategy to his experience solving problems on high-stakes exams.

I try to use information that I knew and I didn’t know a lot. So, I had to kind of use my strategy where I’m like, if this was on a test, this is one of the questions that I would either skip and come back to or write down a really quick answer and then come back to . So , my strategy for this one is it ’ s okay to get it wrong. You need to move on and make estimated guess. Like if I wasn’t sure what the arrows meant, so I was like, "okay, make an estimated guess on what you think the arrows mean. And then using the information that you kind of came up with try to get a right answer using that and like, explain your answer so maybe they’ll give you half points…" – Gerald

We also observed students encouraging themselves to persist despite not knowing ( Table 4 ). In the following quote we see Kyle acknowledge a term he doesn’t know at the start of his think aloud and verbally choose to keep going, “So the title is pathway flux problem. I’m not too sure what flux means, but I ’ m going to keep on going .” Sometimes this took the form of persisting to write an answer to the problem despite not knowing. For example, Viola stated, “I’m not even really sure what pathway flux is. So, I’m also not really sure what the little negative sign is and it pointing to B. But I ’ m going to try to type an answer .” Rather than getting stuck on not knowing what the negative feedback loop symbol depicted, she moved past it to come to a solution.

We also saw students use self-coaching to remind themselves of their prior experience ( Table 4 ). In the following example, we see Mila talk herself through the substitution of serine with valine in the Protein X problem: “So, there’s not going to be a hydroxyl anymore, but I don’t know if that even matters, but there, valine, has more to it. I don’t know if that means there would be an effect on the covalent interaction. I haven’t had chemistry in such a long time [pause], but at the same time, this is bio. So , I should still know it. [laughs]” Mila’s tone as she made this statement was very matter-of-fact. Her laugh at the end suggests she did not take what she said too seriously. After making this self-coaching statement, Mila rereads the question a few times and ultimately decides that the noncovalent interaction is affected because of the structural difference in valine and serine. Prior experiences, sometimes called mastery experiences, are one established source of self-efficacy that Mila might have been drawing on when she made this self-coaching statement ( Bandura, 1977 ; Pajares, 1996 ).

Implications for Instruction about Self-Coaching

Students can be encouraged to self-coach by using some of the phrases we identified in our data as prompts (Table 4). However, we would encourage instructors to rephrase some of the self-coaching statements in our data by removing the word “should,” because this term might make students feel inadequate if they think they are expected to know things they don’t yet know. Instead, we could encourage students to remind themselves of times when they have successfully solved challenging biology problems in the past by saying things like, “I’ve solved challenging problems like this before, so I can solve this one.” Taken together, we posit that students could use self-coaching to decrease anxiety and increase confidence when faced with the feeling of not knowing that can result from monitoring, which could in turn positively impact their self-efficacy and metacognitive regulation. Our results reveal that first-year students are monitoring in a myriad of ways. Sometimes when students monitor, they may not act further on the resulting information because it makes them feel bad or uncomfortable. Self-coaching could support students in acting on their metacognition rather than actively avoiding being metacognitive.

LIMITATIONS

Even with the use of in-the-moment methods like think aloud interviews, we are limited to the metacognition that students verbalized. For example, students may have been employing metacognition while solving that they simply did not verbalize. However, using a think aloud approach in this study ensured we were accessing students’ metacognition in use, rather than their remembrance of metacognition they used in the past which is subject to recall bias ( Schellings et al. , 2013 ). Our study, like most education research, may suffer from selection bias where the students who volunteer to participate represent a biased sample ( Collins, 2017 ). To address this potential pitfall, we attempted to ensure our sample represented the student body at each institution by using purposeful sampling based on self-reported demographics and varied responses to the revised Metacognitive Awareness Inventory ( Harrison and Vallin, 2018 ). Lastly, while our sample size is large ( N = 52) for qualitative analyses and includes students from three different institutional types, the data are not necessarily generalizable to contexts beyond the scope of the study.

The goal of this study was to investigate first-year life science students’ metacognition and self-efficacy in-the-moment while they solved challenging problems. Think aloud interviews with 52 students across three institutions revealed that while first-year life science students plan, monitor, and evaluate while solving challenging problems, they predominantly monitor. First-year life science students associated monitoring a lack of conceptual understanding with negative feelings whereas they associated positive feelings with monitoring conceptual understanding. We found that what students chose to do after they monitored a lack of conceptual understanding impacted whether their monitoring moved problem solving forward or not. For example, after monitoring a lack of conceptual understanding, students could either not use a strategy and remain stuck or they could use a strategy to move their problem solving forward. One critical finding revealed in this study was that self-coaching helped students use their metacognition to take action and persist in problem solving. This type of self-efficacy related encouragement helped some students move past the discomfort associated with monitoring a lack of conceptual understanding and enabled them to select and use a strategy. Together these findings about in-the-moment metacognition and self-efficacy offer a positive outlook on ways we can encourage students to couple their developing metacognitive regulation skills and self-efficacy to persist when faced with challenging life science problems.

ACKNOWLEDGMENTS

We would like to thank Dr. Paula Lemons for allowing us to use problems developed in her research program for this study and for her helpful feedback during the writing process, the College Learning Study participants for their willingness to participate in this study, Dr. Mariel Pfeifer for her assistance conducting interviews and continued discussion of the data during the writing of this manuscript, C.J. Zajic for his contribution to preliminary data analysis, and Emily K. Bremers, Rayna Carter, and the UGA BERG community for their thoughtful feedback on earlier versions of this manuscript. We are also grateful for the feedback from the monitoring editor and reviewers at LSE, which strengthened the manuscript. This material is based on work supported by the National Science Foundation under Grant Number 1942318. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

  • Ainscough, L., Foulis, E., Colthorpe, K., Zimbardi, K., Robertson-Dean, M., Chunduri, P., & Lluka, L. (2016). Changes in biology self-efficacy during a first-year university course. CBE—Life Sciences Education, 15(2), ar19. https://doi.org/10.1187/cbe.15-04-0092
  • Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191–215. https://doi.org/10.1037/0033-295X.84.2.191
  • Bandura, A. (1997). Self-Efficacy: The Exercise of Control. New York, NY: W H Freeman and Company.
  • Bannert, M., & Mengelkamp, C. (2008). Assessment of metacognitive skills by means of instruction to think aloud and reflect when prompted. Does the verbalisation method affect learning? Metacognition and Learning, 3(1), 39–58. https://doi.org/10.1007/s11409-007-9009-6
  • Bhatia, K. S., Stack, A., Sensibaugh, C. A., & Lemons, P. P. (2022). Putting the pieces together: Student thinking about transformations of energy and matter. CBE—Life Sciences Education, 21(4), ar60. https://doi.org/10.1187/cbe.20-11-0264
  • Blackford, K. A., Greenbaum, J. C., Redkar, N. S., Gaillard, N. T., Helix, M. R., & Baranger, A. M. (2023). Metacognitive regulation in organic chemistry students: How and why students use metacognitive strategies when predicting reactivity. Chemistry Education Research and Practice, 24(3), 828–851. https://doi.org/10.1039/D2RP00208F
  • Carr, M., & Taasoobshirazi, G. (2008). Metacognition in the gifted: Connections to expertise. In Shaughnessy, M. F., Veenman, M., & Kennedy, C. K. (Eds.), Meta-cognition: A recent review of research, theory and perspectives (pp. 109–125). New York, NY: Nova Science Publishers.
  • Cartrette, D. P., & Bodner, G. M. (2010). Non-mathematical problem solving in organic chemistry. Journal of Research in Science Teaching, 47(6), 643–660. https://doi.org/10.1002/tea.20306
  • Charters, E. (2003). The use of think-aloud methods in qualitative research: An introduction to think-aloud methods. Brock Education Journal, 12(2), 68–82.
  • Chi, M. T. H., Feltovich, P. J., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5(2), 121–152. www.sciencedirect.com/science/article/pii/S0364021381800298
  • Collins, K. M. (2017). Sampling decisions in educational research. In Wyse, D., Selwyn, N., Smith, E., & Suter, L. E. (Eds.), The BERA/SAGE handbook of educational research (pp. 280–292). London, UK: SAGE Publications Ltd.
  • Coutinho, S. A., & Neuman, G. (2008). A model of metacognition, achievement goal orientation, learning style and self-efficacy. Learning Environments Research, 11(2), 131–151. https://doi.org/10.1007/s10984-008-9042-7
  • Cross, D. R., & Paris, S. G. (1988). Developmental and instructional analyses of children's metacognition and reading comprehension. Journal of Educational Psychology, 80(2), 131–142. https://doi.org/10.1037/0022-0663.80.2.131
  • Davidson, J. E., & Sternberg, R. J. (1998). Smart problem solving: How metacognition helps. In Douglas, J. H., John, D., & Arthur, C. G. (Eds.), Metacognition in Educational Theory and Practice (pp. 30–38). Mahwah, NJ: Routledge.
  • de Bruin, A. B. H., Biwer, F., Hui, L., Onan, E., David, L., & Wiradhany, W. (2023). Worth the effort: The start and stick to desirable difficulties (S2D2) framework. Educational Psychology Review, 35(2), 41. https://doi.org/10.1007/s10648-023-09766-w
  • Dye, K. M., & Stanton, J. D. (2017). Metacognition in upper-division biology students: Awareness does not always lead to control. CBE—Life Sciences Education, 16(2), ar31. https://doi.org/10.1187/cbe.16-09-0286
  • Ericsson, K. A., & Simon, H. A. (1980). Verbal reports as data. Psychological Review, 87, 215–251. https://doi.org/10.1037/0033-295X.87.3.215
  • Frey, R. F., Brame, C. J., Fink, A., & Lemons, P. P. (2022). Teaching discipline-based problem solving. CBE—Life Sciences Education, 21(2), fe1. https://doi.org/10.1187/cbe.22-02-0030
  • Halmo, S. M., Bremers, E. K., Fuller, S., & Stanton, J. D. (2022). "Oh, that makes sense": Social metacognition in small-group problem solving. CBE—Life Sciences Education, 21(3), ar58. https://doi.org/10.1187/cbe.22-01-0009
  • Halmo, S. M., Sensibaugh, C. A., Bhatia, K. S., Howell, A., Ferryanto, E. P., Choe, B., ... & Lemons, P. P. (2018). Student difficulties during structure–function problem solving. Biochemistry and Molecular Biology Education, 46(5), 453–463. https://doi.org/10.1002/bmb.21166
  • Halmo, S. M., Sensibaugh, C. A., Reinhart, P., Stogniy, O., Fiorella, L., & Lemons, P. P. (2020). Advancing the guidance debate: Lessons from educational psychology and implications for biochemistry learning. CBE—Life Sciences Education, 19(3), ar41. https://doi.org/10.1187/cbe.19-11-0260
  • Harrison, G. M., & Vallin, L. M. (2018). Evaluating the metacognitive awareness inventory using empirical factor-structure evidence. Metacognition and Learning, 13(1), 15–38. https://doi.org/10.1007/s11409-017-9176-z
  • Heidbrink, A., & Weinrich, M. (2021). Encouraging biochemistry students' metacognition: Reflecting on how another student might not carefully reflect. Journal of Chemical Education, 98(9), 2765–2774. https://doi.org/10.1021/acs.jchemed.1c00311
  • Hodges, L. C., Beall, L. C., Anderson, E. C., Carpenter, T. S., Cui, L., Feeser, E., ... & Wagner, C. (2020). Effect of exam wrappers on student achievement in multiple, large STEM courses. Journal of College Science Teaching, 50(1), 69–79. www.jstor.org/stable/27119232
  • Huang, X., Bernacki, M. L., Kim, D., & Hong, W. (2022). Examining the role of self-efficacy and online metacognitive monitoring behaviors in undergraduate life science education. Learning and Instruction, 80, 101577. https://doi.org/10.1016/j.learninstruc.2021.101577
  • Hurme, T.-R., Palonen, T., & Järvelä, S. (2006). Metacognition in joint discussions: An analysis of the patterns of interaction and the metacognitive content of the networked discussions in mathematics. Metacognition and Learning, 1(2), 181–200. https://doi.org/10.1007/s11409-006-9792-5
  • Kelemen, W. L., Frost, P. J., & Weaver, C. A., 3rd. (2000). Individual differences in metacognition: Evidence against a general metacognitive ability. Memory & Cognition, 28(1), 92–107. https://doi.org/10.3758/bf03211579
  • Ku, K. Y. L., & Ho, I. T. (2010). Metacognitive strategies that enhance critical thinking. Metacognition and Learning, 5(3), 251–267. https://doi.org/10.1007/s11409-010-9060-6
  • Kuhn, D. (2000). Metacognitive development. Current Directions in Psychological Science, 9(5), 178–181. https://doi.org/10.1111/1467-8721.00088
  • McGee, E. O. (2020). Interrogating structural racism in STEM higher education. Educational Researcher, 49(9), 633–644. https://doi.org/10.3102/0013189X20972718
  • Meijer, J., Veenman, M. V. J., & van Hout-Wolters, B. H. A. M. (2006). Metacognitive activities in text-studying and problem-solving: Development of a taxonomy. Educational Research and Evaluation, 12(3), 209–237. https://doi.org/10.1080/13803610500479991
  • Muncer, G., Higham, P. A., Gosling, C. J., Cortese, S., Wood-Downie, H., & Hadwin, J. A. (2022). A meta-analysis investigating the association between metacognition and math performance in adolescence. Educational Psychology Review, 34(1), 301–334. https://doi.org/10.1007/s10648-021-09620-x
  • Ohtani, K., & Hisasaka, T. (2018). Beyond intelligence: A meta-analytic review of the relationship among metacognition, intelligence, and academic performance. Metacognition and Learning, 13(2), 179–212. https://doi.org/10.1007/s11409-018-9183-8
  • Osterhage, J. L. (2021). Persistent miscalibration for low and high achievers despite practice test feedback in an introductory biology course. Journal of Microbiology & Biology Education, 22(2), e00139–e00121. https://doi.org/10.1128/jmbe.00139-21
  • Pajares, F. (1996). Self-efficacy beliefs in academic settings. Review of Educational Research, 66(4), 543–578. https://doi.org/10.3102/00346543066004543
  • Pajares, F. (2002). Gender and perceived self-efficacy in self-regulated learning. Theory Into Practice, 41(2), 116–125. https://doi.org/10.1207/s15430421tip4102_8
  • Pfeifer, M. A., & Dolan, E. L. (2023). Venturing into qualitative research: A practical guide to getting started. Scholarship and Practice of Undergraduate Research, 7(1), 10–20.
  • Pintrich, P. R., & De Groot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82, 33–40. https://doi.org/10.1037/0022-0663.82.1.33
  • Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1993). Reliability and predictive validity of the Motivated Strategies for Learning Questionnaire (MSLQ). Educational and Psychological Measurement, 53(3), 801–813. https://doi.org/10.1177/0013164493053003024
  • Pressley, M., Borkowski, J., & Schneider, W. (1987). Cognitive strategies: Good strategy users coordinate metacognition and knowledge. Annals of Child Development, 4, 89–129.
  • Prevost, L. B., & Lemons, P. P. (2016). Step by step: Biology undergraduates' problem-solving procedures during multiple-choice assessment. CBE—Life Sciences Education, 15(4). https://doi.org/10.1187/cbe.15-12-0255
  • R Core Team. (2021). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. www.R-project.org
  • Reber, R., & Greifeneder, R. (2017). Processing fluency in education: How metacognitive feelings shape learning, belief formation, and affect. Educational Psychologist, 52(2), 84–103. https://doi.org/10.1080/00461520.2016.1258173
  • Rickey, D., & Stacy, A. M. (2000). The role of metacognition in learning chemistry. Journal of Chemical Education, 77(7), 915. https://doi.org/10.1021/ed077p915
  • Saldaña, J. (2021). The coding manual for qualitative researchers (4th ed.). Thousand Oaks, CA: SAGE Publications Inc.
  • Samuels, S. J., Ediger, K.-A. M., Willcutt, J. R., & Palumbo, T. J. (2005). Role of automaticity in metacognition and literacy instruction. In Israel, S. E., Block, C. C., Bauserman, K. L., & Kinnucan-Welsch, K. (Eds.), Metacognition in Literacy Learning (pp. 41–59). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
  • Sandi-Urena, S., Cooper, M. M., & Stevens, R. H. (2011). Enhancement of metacognition use and awareness by means of a collaborative intervention. International Journal of Science Education, 33(3), 323–340. https://doi.org/10.1080/09500690903452922
  • Schellings, G. L. M., van Hout-Wolters, B. H. A. M., Veenman, M. V. J., & Meijer, J. (2013). Assessing metacognitive activities: The in-depth comparison of a task-specific questionnaire with think-aloud protocols. European Journal of Psychology of Education, 28(3), 963–990. www.jstor.org/stable/23581531
  • Schraw, G. (1998). Promoting general metacognitive awareness. Instructional Science, 26(1), 113–125. https://doi.org/10.1023/A:1003044231033
  • Schraw, G., & Dennison, R. S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19(4), 460–475. https://doi.org/10.1006/ceps.1994.1033
  • Schraw, G., Dunkle, M. E., & Bendixen, L. D. (1995). Cognitive processes in well-defined and ill-defined problem solving. Applied Cognitive Psychology, 9(6), 523–538. https://doi.org/10.1002/acp.2350090605
  • Schraw, G., & Moshman, D. (1995). Metacognitive theories. Educational Psychology Review, 7(4), 351–371.
  • Smith, J. I., Combs, E. D., Nagami, P. H., Alto, V. M., Goh, H. G., Gourdet, M. A. A., ... & Tanner, K. D. (2013). Development of the biology card sorting task to measure conceptual expertise in biology. CBE—Life Sciences Education, 12(4), 628–644. https://doi.org/10.1187/cbe.13-05-0096
  • Stanton, J. D., Neider, X. N., Gallegos, I. J., & Clark, N. C. (2015). Differences in metacognitive regulation in introductory biology students: When prompts are not enough. CBE—Life Sciences Education, 14(2), ar15. https://doi.org/10.1187/cbe.14-08-0135
  • Stanton, J. D., Dye, K. M., & Johnson, M. S. (2019). Knowledge of learning makes a difference: A comparison of metacognition in introductory and senior-level biology students. CBE—Life Sciences Education, 18(2), ar24. https://doi.org/10.1187/cbe.18-12-0239
  • Stanton, J. D., Sebesta, A. J., & Dunlosky, J. (2021). Fostering metacognition to support student learning and performance. CBE—Life Sciences Education, 20(2), fe3. https://doi.org/10.1187/cbe.20-12-0289
  • Stewart, J., Henderson, R., Michaluk, L., Deshler, J., Fuller, E., & Rambo-Hernandez, K. (2020). Using the social cognitive theory framework to chart gender differences in the developmental trajectory of STEM self-efficacy in science and engineering students. Journal of Science Education and Technology, 29(6), 758–773. https://doi.org/10.1007/s10956-020-09853-5
  • Stylianou-Georgiou, A., & Papanastasiou, E. C. (2017). Answer changing in testing situations: The role of metacognition in deciding which answers to review. Educational Research and Evaluation, 23(3-4), 102–118. https://doi.org/10.1080/13803611.2017.1390479
  • Taasoobshirazi, G., & Glynn, S. M. (2009). College students solving chemistry problems: A theoretical model of expertise. Journal of Research in Science Teaching, 46(10), 1070–1089. https://doi.org/10.1002/tea.20301
  • Tomanek, D., & Montplaisir, L. (2004). Students' studying and approaches to learning in introductory biology. Cell Biology Education, 3(4), 253–262. https://doi.org/10.1187/cbe.04-06-0041
  • Tracy, S. J. (2010). Qualitative quality: Eight "big-tent" criteria for excellent qualitative research. Qualitative Inquiry, 16(10), 837–851. https://doi.org/10.1177/1077800410383121
  • Usher, E. L. (2009). Sources of middle school students' self-efficacy in mathematics: A qualitative investigation. American Educational Research Journal, 46(1), 275–314. https://doi.org/10.3102/0002831208324517
  • Veenman, M. V. J., & Spaans, M. A. (2005). Relation between intellectual and metacognitive skills: Age and task differences. Learning and Individual Differences, 15(2), 159–176. https://doi.org/10.1016/j.lindif.2004.12.001
  • Veenman, M. V. J., Van Hout-Wolters, B. H. A. M., & Afflerbach, P. (2006). Metacognition and learning: Conceptual and methodological considerations. Metacognition and Learning, 1(1), 3–14. https://doi.org/10.1007/s11409-006-6893-0
  • Versteeg, M., Bressers, G., Wijnen-Meijer, M., Ommering, B. W. C., de Beaufort, A. J., & Steendijk, P. (2021). What were you thinking? Medical students' metacognition and perceptions of self-regulated learning. Teaching and Learning in Medicine, 33(5), 473–482. https://doi.org/10.1080/10401334.2021.1889559
  • Villafañe, S. M., Bailey, C. P., Loertscher, J., Minderhout, V., & Lewis, J. E. (2011). Development and analysis of an instrument to assess student understanding of foundational concepts before biochemistry coursework. Biochemistry and Molecular Biology Education, 39(2), 102–109. https://doi.org/10.1002/bmb.20464
  • Wolters, C. A. (1998). Self-regulated learning and college students' regulation of motivation. Journal of Educational Psychology, 90, 224–235.
  • Wolters, C. A. (2003). Regulation of motivation: Evaluating an underemphasized aspect of self-regulated learning. Educational Psychologist, 38(4), 189–205.
  • Yeo, G. B., & Neal, A. (2006). An examination of the dynamic relationship between self-efficacy and performance across levels of analysis and levels of specificity. Journal of Applied Psychology, 91(5), 1088–1101. https://doi.org/10.1037/0021-9010.91.5.1088
  • Zepeda, C. D., Martin, R. S., & Butler, A. C. (2020). Motivational strategies to engage learners in desirable difficulties. Journal of Applied Research in Memory and Cognition, 9(4), 468–474. https://doi.org/10.1016/j.jarmac.2020.08.007
  • Zheng, J., Xing, W., & Zhu, G. (2019). Examining sequential patterns of self- and socially shared regulation of STEM learning in a CSCL environment. Computers & Education, 136, 34–48. https://doi.org/10.1016/j.compedu.2019.03.005

Submitted: 21 August 2023 Revised: 26 January 2024 Accepted: 9 February 2024

© 2024 S. M. Halmo et al. CBE—Life Sciences Education © 2024 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).


International Journal of Science and Research (IJSR)

ISSN: 2319-7064

Research Paper | Education Management | Philippines | Volume 6 Issue 8, August 2017

Modular Approach in Teaching Problem Solving: A Metacognitive Process

Carmela J. Go Silk, Byron B. Go Silk, Ricardo A. Somblingo

Abstract: The study assessed the mathematics readiness of students and investigated whether a modular approach to teaching mathematical problem solving focused on metacognitive skills is better than conventional teaching. It used a static-groups pretest-posttest design, with 144 and 146 students in the control and experimental groups, respectively. A TIMSS-based mathematics test was used to assess readiness, while a problem solving test was used to measure problem solving proficiency. Both groups showed an intermediate level of math readiness. Also, the experimental group showed significantly higher problem solving proficiency than the control group. Thus, the experimental group showed better metacognitive skills.
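
To make the group comparison implied by this design concrete, here is a minimal sketch (not the authors' actual analysis) of testing whether posttest problem-solving scores differ between an experimental and a control group; the scores, group sizes, and the choice of Welch's t-test are all illustrative assumptions.

```python
# Hypothetical posttest comparison for a static-groups pretest-posttest design.
# Scores below are invented for illustration; they are not the study's data.
import numpy as np
from scipy import stats

control_post = np.array([12, 15, 14, 10, 13, 16, 11, 14])
experimental_post = np.array([16, 18, 15, 17, 19, 14, 18, 17])

# Welch's independent-samples t-test (does not assume equal group variances)
t_stat, p_value = stats.ttest_ind(experimental_post, control_post, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A two-sided p-value below the chosen alpha (commonly 0.05) would indicate a significant difference in posttest proficiency between the groups.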

Keywords: problem solving proficiency, metacognitive skills, modular approach, mathematics education

Edition: Volume 6 Issue 8, August 2017

Pages: 670 - 677

How to cite this article: Carmela J. Go Silk, Byron B. Go Silk, Ricardo A. Somblingo, "Modular Approach in Teaching Problem Solving: A Metacognitive Process", International Journal of Science and Research (IJSR), Volume 6 Issue 8, August 2017, pp. 670-677, https://www.ijsr.net/getabstract.php?paperid=ART20175782

Similar Articles

Analysis Study Research Paper, Education Management, Indonesia, Volume 11 Issue 7, July 2022
Development of Learning Tools Based on Realistic Mathematical Education Approach for Quadrangle Topics for Grade VII Junior High School
Merry Singkoh, Philoteus E. A. Tuerah, Ichdar Domu

Research Paper, Education Management, Indonesia, Volume 12 Issue 6, June 2023
Development of Learning Devices with a Realistic Mathematics Education Approach in Class Geometry Material VIII SMP
Fabian Yoel Paisa, Philoteus E. A. Tuera, Victor R. Sulangi

Analysis Study Research Paper, Education Management, Indonesia, Volume 11 Issue 6, June 2022
Development of Mathematics Learning Tools Using PMR Approach to Teach Trigonometry Comparison for High School Level
Rijani Ivanda Kolibu, Santje M. Salajang, Victor R. Sulangi

Research Paper, Education Management, Indonesia, Volume 6 Issue 9, September 2017
The Effectiveness of Materials Based on Metacognitive Skills
Mas'ud B, Arifin Ahmad, Marwati Abd. Malik, Wa Karmila

Research Paper, Education Management, Indonesia, Volume 6 Issue 7, July 2017
Model Eliciting Activities (MEA) Application in Online Group Discussion for Mathematics Learning
Nurul Husna Lubis, Putri Su'aidah Pulungan, Dr. KMS. M. Amin Fauzi

Cognitive, Metacognitive, and Motivational Aspects of Problem Solving

Richard E. Mayer

Part of the book series: Neuropsychology and Cognition (NPCO, volume 19)

This chapter examines the role of cognitive, metacognitive, and motivational skills in problem solving. Cognitive skills include instructional objectives, components in a learning hierarchy, and components in information processing. Metacognitive skills include strategies for reading comprehension, writing, and mathematics. Motivational skills include motivation based on interest, self-efficacy, and attributions. All three kinds of skills are required for successful problem solving in academic settings.


Similar content being viewed by others

Problem Solving from a Behavioral Perspective: Implications for Behavior Analysts and Educators

Assessing Problem Solving

Building on Schoenfeld's Studies of Metacognitive Control Towards Social Metacognitive Control


Author information

Richard E. Mayer, Department of Psychology, University of California, Santa Barbara, USA

Editor information

Hope J. Hartman, Department of Education, The City College of the City University of New York, New York, NY, USA

Copyright information

© 2001 Springer Science+Business Media Dordrecht

About this chapter

Mayer, R.E. (2001). Cognitive, Metacognitive, and Motivational Aspects of Problem Solving. In: Hartman, H.J. (eds) Metacognition in Learning and Instruction. Neuropsychology and Cognition, vol 19. Springer, Dordrecht. https://doi.org/10.1007/978-94-017-2243-8_5

Publisher Name: Springer, Dordrecht

Print ISBN: 978-90-481-5661-0

Online ISBN: 978-94-017-2243-8


Effectiveness of Modular Instruction in Word Problem Solving of BEED Students


This study used a quasi-experimental design to determine the effects of modular instruction on third-year BEED students of Eastern Samar State University (ESSU) who were exposed to the lecture method and to modular instruction in teaching word problem solving. Its purpose was to seek answers to the following questions: (1) Is there a significant difference in the pretest mean scores? (2) Is there a significant difference in the posttest mean scores? (3) Is there a significant difference between the mean gain scores? Based on the pretest and posttest mean scores of both the control and experimental groups, the following findings were formulated: (1) there is no significant difference between the pretest mean scores of the subjects; (2) there is a significant difference between the posttest mean scores of the subjects; and (3) there is a significant difference between the mean gain scores of the two groups of respondents, experimental and control. The experimental group, who were taught by modular instruction...
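
The three questions above map onto three straightforward comparisons once per-student pretest and posttest scores are available for each group. The sketch below is a hedged illustration of that workflow with invented numbers; the study's actual scores and statistical procedure are not reproduced here.

```python
# Hypothetical pretest/posttest scores for a control and an experimental group.
# Gain = posttest - pretest for each student; group differences are then tested.
import numpy as np
from scipy import stats

pre_ctrl  = np.array([8, 9, 7, 10, 6, 9])
post_ctrl = np.array([11, 12, 10, 13, 9, 12])
pre_exp   = np.array([8, 7, 9, 10, 8, 7])
post_exp  = np.array([15, 14, 16, 17, 15, 14])

gain_ctrl = post_ctrl - pre_ctrl
gain_exp  = post_exp - pre_exp

# (1) pretest equivalence, (2) posttest difference, (3) gain-score difference
for label, a, b in [("pretest", pre_exp, pre_ctrl),
                    ("posttest", post_exp, post_ctrl),
                    ("gain", gain_exp, gain_ctrl)]:
    t, p = stats.ttest_ind(a, b, equal_var=False)
    print(f"{label}: t = {t:.2f}, p = {p:.4f}")
```

A non-significant pretest difference followed by significant posttest and gain-score differences is the pattern the abstract reports.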

Related Papers

Journal of Mathematical Sciences & Computational Mathematics

Ariel Villar

This study is an experimental pretest and posttest design that compared the effectiveness of computer-aided modular instruction with the traditional method of teaching word problems involving fractions to the grade six (6) pupils of Gadgaran Integrated School, Calbayog City, Samar during the school year 2019-2020. Computer-aided modular instruction is a teaching technique that enables pupils to interact with lessons programmed on the computer; it was given to the experimental group. The traditional method, on the other hand, is the usual way of teaching, consisting of lecture-discussion, and was given to the control group. A single class consisting of regular grade six (6) pupils was chosen as the subject of the study. Their average grade in Mathematics during the first grading period was at the approaching-proficiency level in both the experimental and control groups. They were randomly assigned and chosen using an odd-or-even technique. The instrument used in this study was researcher-ma...


International Journal of Science and Research

byron gosilk


SAMUEL QUIROZ

American Journal of Education and Technology

Angel Dela cruz

The study aimed to determine the learning styles and learning abilities of grade 6 pupils in dealing with modular learning. A descriptive design was used in this study. The survey was conducted at Lt. Andres Calungsud Elementary School with 30 elementary pupils who were enrolled in modular learning for School Year 2021-2022, the majority of whom were male. A researcher-made survey questionnaire was used for data gathering. Frequency and percentage distribution, mean and standard deviation, and MegaStat were used in treating the data. The study revealed that the pupils have difficulty dealing with terms in their modules. Data show that the respondents got the highest mean in the visual learning style, interpreted as Often (M=2.60), and the lowest mean in the auditory learning style. The respondents' learning style in reading/writing got the lowest overall mean, Sometimes (OM=1.94). The study found that the highest problem encountered by the students in dealing with modular learning is con...

Majid Haghverdi

This paper focuses on two approaches for facilitating the process of solving word problems. The first approach distinguishes different kinds of errors that occur, and the second recognizes the various kinds of required and underlying knowledge. The first approach applies Kinfong and Holtan's framework of errors, and the second applies Mayer's theory (1992) of the knowledge underlying word problem solving. The main aim of this paper is to examine the relationship between the different kinds of errors and the various kinds of knowledge required in solving arithmetic word problems. The research methodology is a semi-experimental method. The subjects include 89 eighth-grade students (male and female). The research tools are a descriptive math test comprising six word problems and a directed interview. The results indicate that in solving arithmetic word problems, increases in students' errors result from a lack of linguistic, semantic, structural, and communicational knowledge. This ...
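
One common way to examine a relationship like the one described above, between categories of errors and categories of missing knowledge, is a contingency-table analysis. The sketch below is purely illustrative: the error labels, knowledge categories, counts, and the choice of a chi-square test are assumptions for demonstration, not the paper's reported method.

```python
# Hypothetical cross-tabulation of error types (rows) against the knowledge
# category each error was traced to (columns: linguistic, semantic, structural,
# communicational). Counts are invented for illustration.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [12,  5,  3,  2],   # e.g., translation errors
    [ 4, 14,  6,  3],   # e.g., comprehension errors
    [ 3,  4, 11,  5],   # e.g., transformation errors
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```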

Psychology and Education: A Multidisciplinary Journal

Psychology and Education, Meridel Tinonas, Jennifer B. Jalique, Anna Mae Joy T. Tamon

This study was conducted to determine the effectiveness of the modular instruction modality of Central Philippines State University through the lens of students. This study employed a descriptive design with 376 respondents obtained through stratified sampling. The study determined the students' demographic profile, the extent and level of effectiveness of modular instruction in terms of clarity, constructive alignment, and content, and the significant differences in the extent and level of effectiveness of the modular instruction modality when grouped according to students' demographic profile. The respondents were first-year and second-year students on ten campuses of CPSU who were enrolled in the school year 2020-2021. The level of effectiveness of modular learning in the three areas was effective. There was a significant difference in the extent of modular learning in the three areas. In contrast, content showed a significant difference when grouped according to respondents' sex and campus. The same result was obtained regarding clarity when grouped according to respondents' course, but not in terms of constructive alignment and content. However, when grouped according to respondents' age and year level, the extent of modular learning in all aspects showed an insignificant result. There was a significant difference in effectiveness in the three areas when grouped according to campus and sex, except in clarity. However, when grouped according to respondents' age, course, and year level, the level of modular learning in all aspects showed no significant result. A significant relationship between the extent and level of effectiveness of the modular instruction modality was found in all aspects.
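
For a descriptive study of this kind, the summaries mentioned (frequency and percentage distributions for profile variables, means and standard deviations for the rated areas) can be computed in a few lines. The column names, rating scale, and values below are hypothetical placeholders, not data from the study.

```python
# Hedged sketch: descriptive summaries for survey-style data.
import pandas as pd

df = pd.DataFrame({
    "sex": ["M", "F", "F", "M", "F", "M", "F", "F"],
    "clarity": [4, 3, 5, 4, 4, 3, 5, 4],                 # hypothetical 1-5 ratings
    "constructive_alignment": [3, 4, 4, 3, 5, 4, 4, 3],
    "content": [4, 4, 5, 3, 4, 4, 5, 4],
})

# Frequency and percentage distribution of a profile variable
freq = df["sex"].value_counts()
pct = (freq / len(df) * 100).round(1)
print(pd.DataFrame({"frequency": freq, "percent": pct}))

# Mean and standard deviation for each rated area
print(df[["clarity", "constructive_alignment", "content"]].agg(["mean", "std"]).round(2))
```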

Romiro Bautista

This study investigated the effects of personalized instruction on the attitude and performance of Bahraini students in algebraic word problem solving. A total of 49 students enrolled in College Algebra in the first trimester, SY 2011-2012, were used as subjects of the study. A pre-test was administered and scored as the basis for determining students' high and low ability levels in Mathematics. The examination used as the pre-test was formulated by the author and was field-tested by the Algebra professors before it was administered for this purpose. Personalization in instruction was introduced through personalized modular instruction (in terms of content and procedure, with translation into Arabic) followed by exercises/drills (also written in English and translated into Arabic). Students were engaged in active learning through direct instruction using Mayer's model from the teacher, small group discussion, peer mentoring, and follow-up session/s by the teacher. Analysis of transcripts was done to determine the remediation to be utilized. After the execution of the lessons over 6 sessions, the students were given a post-test and a student attitude survey. It was found that students who were exposed to the constructive learning environment through personalized instruction performed better and developed a better attitude towards algebraic word problem solving tasks: a highly significant effect on students' academic performance in problem solving and a moderately high-impact model of variability (90.8%) in their academic performance. Keywords: Personalized Instruction, Academic Performance, Student Attitude, Constructive Learning Environment, Cooperative Learning, Direct Instruction, Active Learning, Small Group Discussion.

james royol

Ma. Victoria Naboya

International Journal of Learning and Teaching





COMMENTS

  1. Modular Approach in Teaching Problem Solving: A Metacognitive Process

    Abstract. The study assessed the mathematics readiness of students, and investigated whether the modular approach to teaching mathematical problem solving focused on metacognitive skills is a ...

  2. PDF Modular Approach in Teaching Problem Solving: A Metacognitive Process

The study started with the development of a module using Kolb's experiential learning theory, focusing on the development of metacognitive skills. The module followed the 4A's approach, i.e ...

  3. Modular Approach in Teaching Problem Solving: A Metacognitive Process

The study assessed the mathematics readiness of students, and investigated whether the modular approach to teaching mathematical problem solving focused on metacognitive skills is better than conventional teaching. It used a static-groups ...

  4. Modular Approach in Teaching Problem Solving: A Metacognitive Process

The study assessed the mathematics readiness of students, and investigated whether the modular approach to teaching ...

  5. Effects of teaching through problem solving on students' metacognition

    The study assessed the mathematics readiness of students, and investigated whether the modular approach to teaching mathematical problem solving focused on metacognitive skills is a better than ...

  6. The sub-dimensions of metacognition and their influence on modeling

    The success of metacognitive activity can be attributed to students' responses to specific problem-solving scenarios that can activate metacognition (Vorhölter, 2021). Metacognition is an ...

  7. PDF Metacognitive Skills and Problem- Solving

    using strategies for solution, whereas metacognitive skills help to regulate the problem-solving process and make decisions (Goos, et al., 2000). Lucangeli and Cornoldi (1997) emphasized the vital role of metacognition in mathematics education. For example, in the early stages, problem-solving such as the representation of a

  8. PDF Models of Metacognition

    A metacognitive process consists of planning, strategies, knowledge, monitoring, evaluating and terminating. The Automation of Cognitive ... A problem-solving approach in teaching argues that thinking is essentially unfinished. It is an ongoing activity not about knowledge which once known becomes dead. Knowledge is an

  9. The Effect of Metacognitive Instruction on Problem Solving Skills in

Self-monitoring is the ability of a person to self-check during the problem-solving process, and planning refers to the ability of an individual to break the problem into secondary objectives that can be separately solved. The metacognitive approach to problem solving instruction was proposed by Kapa (2001). He presented five steps to problem ...

  10. Assessing Metacognitive Regulation during Problem Solving: A Comparison

    1. Introduction. Metacognition is a multi-faceted phenomenon that involves both the awareness and regulation of one's cognitions (Flavell 1979).Past research has shown that metacognitive regulation, or the skills learners use to manage their cognitions, is positively related to effective problem-solving (Berardi-Coletta et al. 1995), transfer (Lin and Lehman 1999), and self-regulated ...

  11. Cognitive, metacognitive, and motivational aspects of problem solving

This article examines the role of cognitive, metacognitive, and motivational skills in problem solving. Cognitive skills include instructional objectives, components in a learning hierarchy, and components in information processing. Metacognitive skills include strategies for reading comprehension, writing, and mathematics. Motivational skills include ...

  12. Modular Approach in Teaching Problem Solving : A Metacognitive Process

The study assessed the mathematics readiness of students, and investigated whether the modular approach to teaching mathematical problem solving focused on metacognitive skills is better than conventional teaching. It used a static-groups pretest-posttest design, with 144 and 146 students for the control and experimental group, respectively.

  13. Metacognitive Theory: A Framework for Teaching Literacy, Writing, and

    This set of articles-on the three Rs-has provided us with innovative, wide-ranging perspectives on how teachers can enhance academic performance. I could devote considerable space to emphasizing th...

  14. (PDF) Metacognitive Skills and Problem-Solving

    Abstract and Figures. The purpose of this study is to investigate the metacognitive strategies that middle school students used in the process of solving problems individually. The study group ...

  15. Metacognitive strategies improve learning

    Model your metacognitive processes with students. Show students the thinking process behind your approach to solving problems (Ambrose, 2010). This can take the form of a think-aloud where you talk through the steps you would take to plan, monitor, and reflect on your problem-solving approach.

  16. Metacognition and Self-Efficacy in Action: How First-Year Students

    Metacognition and Problem Solving. Metacognition, or one's awareness and control of their own thinking for the purpose of learning (Cross and Paris, 1988), is linked to improved problem-solving performance and academic achievement.In one meta-analysis of studies that spanned developmental stages from elementary school to adulthood, metacognition predicted academic performance when ...

  17. Scaffolding students' use of metacognitive activities using discipline

    To become proficient problem solvers, science and engineering students have to acquire the skill of self-regulating their problem-solving processes, a skill supported by their metacognitive abilities. The Disciplinary Learning Companion (DLC) is an online tool designed to scaffold students' use of metacognitive activities through discipline-specific and even topic-specific reflective prompts ...

  18. Modular Approach in Teaching Problem Solving: A Metacognitive Process

Abstract: The study assessed the mathematics readiness of students, and investigated whether the modular approach to teaching mathematical problem solving focused on metacognitive skills is better than conventional teaching. It used a static-groups pretest-posttest design, with 144 and 146 students for the control and experimental group ...

  19. TEAL Center Fact Sheet No. 4: Metacognitive Processes

    Fogarty (1994) suggests that Metacognition is a process that spans three distinct phases, and that, to be successful thinkers, students must do the following: Develop a plan before approaching a learning task, such as reading for comprehension or solving a math problem. Monitor their understanding; use "fix-up" strategies when meaning ...

  20. PDF Metacognitive Processes

    Metacognition refers to awareness of one's own knowledge—what one does and doesn't know—and one's ability to understand, control, and manipulate one's cognitive processes (Meichenbaum, 1985). It includes knowing when and where to use particular strategies for learning and problem solving as well as how and why to use specific ...

  21. Cognitive, Metacognitive, and Motivational Aspects of Problem Solving

    Abstract. This chapter examines the role of cognitive, metacognitive, and motivational skills in problem solving. Cognitive skills include instructional objectives, components in a learning hierarchy, and components in information processing. Metacognitive skills include strategies for reading comprehension, writing, and mathematics.

  22. Metacognition and problem solving: A process-oriented approach

    Four studies were conducted to demonstrate that the positive effects of verbalization on solution transfer found in previous studies were not due to verbalization per se but to the metacognitive processing involved in the effort required to produce explanation for solution behaviors. In Experiments 1, 2, and 3, a distinction was made between process-oriented, problem-oriented, and simple ...

  23. Effectiveness of Modular Instruction in Word Problem Solving of BEED

    The study assessed the mathematics readiness of students, and investigated whether the modular approach to teaching mathematical problem solving focused on metacognitive skills is a better than conventional teaching. It used a static-groups pretest-posttest design, with 144 and 146 students for the control and experimental group, respectively.