All videos from the first learning activity on variability were transcribed and coded. Statements were segmented by clauses and natural pauses in speech. Two coders then independently coded 20% of the data and reached adequate agreement, as indexed by an inter-coder reliability analysis (Cohen’s κ > .70). The coders discussed and resolved their discrepancies and then independently coded the remaining transcripts. The verbal protocol coding was based on prior rubrics and is illustrated with examples from the transcripts in Table 2. Due to an experimental error, one participant was not recorded and was therefore excluded from all analyses involving the verbal protocols. For each student, we counted the number of statements in each coding category and divided this count by the student’s total number of statements. On average, students generated 58.79 statements, with much variation (SD = 34.10). Students engaged in monitoring the most (M = 3.05 statements per student), followed by evaluation (M = 2.71 statements per student). Students rarely employed control/debugging, conceptual error correction, and calculation error correction (M = .23, .05, and .61, respectively). Therefore, we combined these scores into a single control/debugging verbal protocol code (M = .88 statements per student).
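As an illustration of the reliability criterion, Cohen’s kappa can be computed directly from two coders’ labels. The code below is a minimal sketch: the category labels and data are hypothetical stand-ins for the rubric categories in Table 2, not our actual transcripts.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    # Chance agreement: product of the coders' marginal proportions per category
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (observed - expected) / (1 - expected)

# Hypothetical codes for ten statements from two independent coders
a = ["monitor", "monitor", "evaluate", "control", "monitor",
     "evaluate", "monitor", "control", "evaluate", "monitor"]
b = ["monitor", "monitor", "evaluate", "control", "evaluate",
     "evaluate", "monitor", "control", "evaluate", "monitor"]
kappa = cohens_kappa(a, b)  # well above the .70 criterion for this toy data
```

In this toy example the coders disagree on one of ten statements, so observed agreement is .90 and kappa corrects it downward for chance agreement.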
We also examined the relations between the total number of statements generated (i.e., verbosity) and the number of statements in each metacognitive category. The amount students monitored (r = .59, p < .001), controlled/debugged (r = .69, p < .001), and evaluated (r = .72, p < .001) their understanding was related to the total number of utterances. Given this relationship, we divided each verbal protocol count by the total number of utterances to control for verbosity.
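The normalization step described above can be sketched as follows; the counts are invented, and the small Pearson helper is included only to show the verbosity check.

```python
def proportions(counts_per_category, total_statements):
    """Convert raw counts per coding category into proportions of all utterances."""
    return {cat: n / total_statements for cat, n in counts_per_category.items()}

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical tallies for one student with 60 total utterances
counts = {"monitoring": 3, "control_debugging": 1, "evaluation": 2}
props = proportions(counts, 60)  # monitoring becomes 3/60 = 0.05
```

In the actual analysis, `pearson_r` would be applied to the per-student category counts and totals to detect the verbosity relation, and `proportions` to remove it.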
We adapted questionnaire items from previously validated questionnaires and verbal protocol coding rubrics (Chi et al. 1989; Gadgil et al. 2012; Renkl 1997), as indicated in Table 3. Informed by this research and by Schellings and Van Hout-Wolters’ (2011) in-depth analysis of the use of questionnaires, with its emphasis on selecting a questionnaire appropriate to the to-be-assessed activity, we created a task-based questionnaire and adapted items from the MAI, the MSLQ, the Awareness of Independent Learning Inventory (AILI; Meijer et al. 2013), a problem-solving-based questionnaire (the Inventory of Metacognitive Self-Regulation [IMSR; Howard et al. 2000], which was developed from the MAI, the Jr. MAI, and Fortunato et al. 1991), and a state-based questionnaire (the State Metacognitive Inventory [SMI]; O’Neil and Abedi 1996). In total, there were 24 metacognitive questions: 8 for monitoring, 9 for control/debugging, and 7 for evaluation. Students responded to each item on a Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). All items and their descriptive statistics are presented in Table 3. We chose to develop and validate a task-based metacognitive questionnaire for three reasons. First, there is mixed evidence about the generality of metacognitive skills (Van der Stel and Veenman 2014). Second, there are no task-based metacognitive measures for a problem-solving activity. Third, to our knowledge, no existing domain-general questionnaires reliably distinguish between the metacognitive skills of monitoring, control/debugging, and evaluation.
Descriptive statistics and factor loadings for questionnaire items.
Item | Original Construct | [Min, Max] | M (SD) | Standardized Factor Loading | R² | Residual Variance
---|---|---|---|---|---|---
***Monitoring*** | | | | | | |
During the activity, I found myself pausing regularly to check my comprehension. | MAI ( ) | [1, 7] | 4.20 (1.78) | .90 | .81 | 0.19 |
During the activity, I kept track of how much I understood the material, not just if I was getting the right answers. | MSLQ Adaptation ( ) | [1, 7] | 4.18 (1.60) | .83 | .69 | 0.31 |
During the activity, I checked whether my understanding was sufficient to solve new problems. | Based on verbal protocols | [1, 7] | 4.47 (1.59) | .77 | .59 | 0.41 |
During the activity, I tried to determine which concepts I didn’t understand well. | MSLQ ( ) | [1, 7] | 4.44 (1.65) | .85 | .73 | 0.27 |
During the activity, I felt that I was gradually gaining insight into the concepts and procedures of the problems. | AILI ( ) | [2, 7] | 5.31 (1.28) | .75 | .56 | 0.44 |
During the activity, I made sure I understood how to correctly solve the problems. | Based on verbal protocols | [1, 7] | 4.71 (1.46) | .90 | .80 | 0.20 |
During the activity, I tried to understand why the procedure I was using worked. | Strategies ( ) | [1, 7] | 4.40 (1.74) | .78 | .62 | 0.39 |
During the activity, I was concerned with how well I understood the procedure I was using. | Strategies ( ) | [1, 7] | 4.38 (1.81) | .74 | .55 | 0.45 |
***Control/Debugging*** | | | | | | |
During the activity, I reevaluated my assumptions when I got confused. | MAI ( ) | [2, 7] | 5.09 (1.58) | .94 | .89 | 0.11 |
During the activity, I stopped and went back over new information that was not clear. | MAI ( ) | [1, 7] | 5.09 (1.54) | .65 | .42 | 0.58 |
During the activity, I changed strategies when I failed to understand the problem. | MAI ( ) | [1, 7] | 4.11 (1.67) | .77 | .60 | 0.40 |
During the activity, I kept track of my progress and, if necessary, I changed my techniques or strategies. | SMI ( ) | [1, 7] | 4.51 (1.52) | .89 | .79 | 0.21 |
During the activity, I corrected my errors when I realized I was solving problems incorrectly. | SMI ( ) | [2, 7] | 5.36 (1.35) | .50 | .25 | 0.75 |
During the activity, I went back and tried to figure something out when I became confused about something. | MSLQ ( ) | [2, 7] | 5.20 (1.58) | .87 | .75 | 0.25 |
During the activity, I changed the way I was studying in order to make sure I understood the material. | MSLQ ( ) | [1, 7] | 3.82 (1.48) | .70 | .49 | 0.52 |
During the activity, I asked myself questions to make sure I understood the material. | MSLQ ( ) | [1, 7] | 3.60 (1.59) | .49 | .25 | 0.76 |
REVERSE During the activity, I did not think about how well I was understanding the material, instead I was trying to solve the problems as quickly as possible. | Based on verbal protocols | [1, 7] | 3.82 (1.72) | .54 | .30 | 0.71 |
***Evaluation*** | | | | | | |
During the activity, I found myself analyzing the usefulness of strategies I was using. | MAI ( ) | [1, 7] | 5.02 (1.55) | .48 | .23 | 0.77 |
During the activity, I reviewed what I had learned. | Based on verbal protocols | [2, 7] | 5.04 (1.40) | .57 | .33 | 0.67 |
During the activity, I checked my work all the way through each problem. | IMSR ( ) | [1, 7] | 4.62 (1.72) | .94 | .88 | 0.12 |
During the activity, I checked to see if my calculations were correct. | IMSR ( ) | [1, 7] | 4.73 (1.97) | .95 | .91 | 0.09 |
During the activity, I double-checked my work to make sure I did it right. | IMSR ( ) | [1, 7] | 4.38 (1.87) | .89 | .79 | 0.21 |
During the activity, I reviewed the material to make sure I understood the information. | MAI ( ) | [1, 7] | 4.49 (1.71) | .69 | .48 | 0.52 |
During the activity, I checked to make sure I understood how to correctly solve each problem. | Based on verbal protocols | [1, 7] | 4.64 (1.57) | .86 | .75 | 0.26 |
Note. The bolded italics represents each of the three factors with their respective items listed below each factor.
To evaluate the substantive validity of the questionnaire, we used a second-order CFA model consisting of three correlated factors (i.e., monitoring, control/debugging, and evaluation) and one superordinate factor (i.e., metacognitive regulation) in Mplus version 6.11. A robust weighted least squares estimator (WLSMV) was applied. Prior to running the model, normality assumptions were tested and met. The resulting second-order CFA model had an adequate goodness of fit, CFI = .96, TLI = .96, RMSEA = .096, χ²(276) = 2862.30, p < .001 (Hu and Bentler 1999). The finalized model also had high internal reliability for each factor: superordinate, α = .95; monitoring, α = .92; control/debugging, α = .86; and evaluation, α = .87. For factor loadings and item descriptive statistics, see Table 3. On average, students reported moderate use of monitoring (M = 4.51), control/debugging (M = 4.51), and evaluation (M = 4.70).
We also analyzed the JOKs (α = .86) using several calculations. As mentioned in the introduction, we calculated mean absolute accuracy, gamma, and discrimination (see Schraw 2009 for the formulas). Gamma could not be computed for 9 participants (25% of the sample) because they responded with the same confidence rating for all seven items; therefore, we did not examine gamma in our analyses. Absolute accuracy ranged from .06 to .57, with lower scores indicating better-calibrated judgments, whereas discrimination ranged from −3.75 to 4.50, with more positive scores indicating that students could distinguish what they knew from what they did not.
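For readers who want to reproduce these calculations, a sketch of the three measures follows, under our reading of Schraw’s (2009) formulas. The rescaling of confidence ratings to a 0–1 range for absolute accuracy is an assumption (consult the original for exact scaling), and the participant data are invented.

```python
def absolute_accuracy(confidence, correct):
    """Mean squared deviation between confidence (on a 0-1 scale) and accuracy
    (0/1). Lower values indicate better-calibrated judgments."""
    n = len(confidence)
    return sum((c - p) ** 2 for c, p in zip(confidence, correct)) / n

def discrimination(ratings, correct):
    """Mean rating on correct items minus mean rating on incorrect items."""
    right = [r for r, ok in zip(ratings, correct) if ok]
    wrong = [r for r, ok in zip(ratings, correct) if not ok]
    if not right or not wrong:
        return None  # undefined when all items are correct or all incorrect
    return sum(right) / len(right) - sum(wrong) / len(wrong)

def gamma(ratings, correct):
    """Goodman-Kruskal gamma over all item pairs; returns None when undefined,
    as when a participant gives the same rating to every item."""
    concordant = discordant = 0
    n = len(ratings)
    for i in range(n):
        for j in range(i + 1, n):
            sign = (ratings[i] - ratings[j]) * (correct[i] - correct[j])
            if sign > 0:
                concordant += 1
            elif sign < 0:
                discordant += 1
    if concordant + discordant == 0:
        return None
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical participant: 1-5 confidence ratings and scored answers, 7 items
ratings = [5, 4, 2, 5, 3, 1, 4]
correct = [1, 1, 0, 1, 0, 0, 1]
scaled = [(r - 1) / 4 for r in ratings]  # rescale 1-5 ratings to the 0-1 range
```

The `gamma` function returns `None` for a participant with constant ratings, mirroring the 9 participants for whom gamma was undefined here.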
The study took approximately 120 min to complete (see Figure 4 for an overview). At the beginning of the study, students were informed that they would be videotaped during the experiment and consented to participate. They then completed the pre-test (15 min), after which the experimenter instructed them to say their thoughts aloud and gave them a sheet of paper with three multiplication problems on it. If students struggled to think aloud while solving the problems (i.e., they did not say anything), the experimenter modeled how to think aloud. Once students had completed all three problems and the experimenter was satisfied that they understood how to think aloud (3 min), the experimenter moved on to the learning activity. Students had 15 min to complete the variability learning activity. Afterward, students watched a consolidation video (15 min) and worked through a standard deviation activity (15 min). They were then asked to complete the task-based questionnaire (10 min). Once the questionnaire was completed, students had 35 min to complete the post-test. Upon completing the post-test, students filled out several questionnaires and a demographic survey and were then debriefed (12 min).
The first set of analyses examined whether the three measures were related to one another. The second set of analyses evaluated the degree to which the different measures related to learning, transfer, and PFL, providing external validity for the measurements. Descriptive statistics for each measure are presented in Table 4. For all analyses, alpha was set to .05, and results were interpreted as trending if p < .10.
Descriptive statistics for each measure.
Measure | Variable | N | Min | Max | M | SE | SD
---|---|---|---|---|---|---|---
Verbal Protocols | Monitoring | 44 | 0.00 | 0.29 | 0.05 | 0.01 | 0.06
 | Control/Debugging | 44 | 0.00 | 0.06 | 0.01 | 0.002 | 0.02
 | Evaluation | 44 | 0.00 | 0.16 | 0.04 | 0.01 | 0.04
Questionnaire | Monitoring | 45 | 1.13 | 6.75 | 4.51 | 0.19 | 1.29
 | Control/Debugging | 45 | 2.33 | 6.44 | 4.51 | 0.16 | 1.08
 | Evaluation | 45 | 2.14 | 7.00 | 4.70 | 0.19 | 1.28
JOKs | Mean | 45 | 2.00 | 5.00 | 4.31 | 0.09 | 0.60
 | Mean Absolute Accuracy | 45 | 0.06 | 0.57 | 0.22 | 0.02 | 0.13
 | Discrimination | 45 | −3.75 | 4.50 | 1.43 | 0.33 | 2.21
Note. To control for variation in the length of the verbal protocols across participants, each verbal protocol measure was calculated as the number of statements of that type made by a participant divided by that participant’s total number of utterances during the learning activity.
To evaluate whether the measures revealed similar associations between the different skills both within and across the measures, we used Pearson correlation analyses (see Table 5 for all correlations). Within the measures, there were no associations among the skills in the verbal protocol codes, but there were positive associations among all the skills in the task-based questionnaire (monitoring, control/debugging, and evaluation). For the JOKs, there was a negative association between mean absolute accuracy and discrimination, meaning that the more accurate participants were at judging their confidence (absolute accuracy scores closer to zero), the more aware they were of their correct performance (positive discrimination scores). There was also a positive association between the average JOK ratings and discrimination, meaning that those who gave higher confidence ratings were also more aware of their correct performance.
Correlations between the task-based questionnaire, verbal protocols, and judgments of knowing.
 | Variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
---|---|---|---|---|---|---|---|---|---|---
VPs | 1. Monitoring | - | .09 | .01 | −.36 * | −.10 | −.16 | −.41 * | −.07 | −.14
 | 2. Control/Debugging | | - | .16 | .12 | −.08 | .14 | −.16 | .03 | −.08
 | 3. Evaluation | | | - | .29 | .31 * | .37 * | −.10 | .02 | .01
Qs | 4. Monitoring | | | | - | .73 ** | .73 ** | .26 | .06 | .02
 | 5. Control/Debugging | | | | | - | .65 ** | .02 | −.02 | −.03
 | 6. Evaluation | | | | | | - | .15 | .11 | −.09
JOKs | 7. Average | | | | | | | - | .14 | .39 **
 | 8. Mean Absolute Accuracy | | | | | | | | - | −.76 **
 | 9. Discrimination | | | | | | | | | -
Note. VPs = Verbal Protocols, Qs = Questionnaire, JOKs = Judgments of Knowing; † = p < .10, * = p < .05, and ** = p < .01.
Across the measures, an interesting pattern emerged. The proportion of monitoring statements was negatively associated with both the monitoring questionnaire and the average JOK ratings. However, there was no significant relationship between the monitoring questionnaire and the average JOK ratings. For the other skills, control/debugging and evaluation questionnaire responses correlated positively with the proportion of evaluation statements. There were also two trends for the monitoring questionnaire: it was positively related to the proportion of evaluation statements and to the average JOK ratings. Otherwise, there were no other associations.
3.2.1. Learning and Test Performance
The learning materials included the first and second learning activities and a post-test that included transfer items and a PFL item. For the first learning activity, scores ranged from 0 to 3 (out of 4), with an average of 1.6 points (SD = .72; 40%). For the second learning activity, scores ranged from 0 to 2 (out of 5), with an average of 1.56 points (SD = .59; 31%). Given the low performance on the second activity and the observation that most students applied mean deviation to it instead of inventing a new procedure, we did not analyze these results. For the post-test transfer items, scores ranged from 1 to 5.67 (out of 6), with an average of 3.86 points (SD = 1.26). We did not include the PFL item in the transfer score, as we were particularly interested in examining the relation between the metacognitive measures and PFL. PFL scores ranged from 0 to 1 (out of 1), with an average of 0.49 (SD = 0.51). For ease of interpretation, we converted student scores on all learning measures into proportions correct in Table 6.
Descriptive statistics for each learning measure.
Measure | N | Min | Max | M | SE | SD
---|---|---|---|---|---|---
First Learning Activity | 45 | 0.00 | 0.75 | 0.40 | 0.03 | 0.18 |
Transfer | 45 | 0.17 | 0.94 | 0.64 | 0.03 | 0.21 |
PFL | 45 | 0.00 | 1.00 | 0.49 | 0.08 | 0.51 |
To evaluate the relation between each metacognitive measure and the learning outcomes, we used a series of regressions. We used multiple linear regressions to test the amount of variance each measure explained in the first learning activity and post-test performance. Then, to test the amount of variance each metacognitive measure explained in PFL performance, we used multiple logistic regression. In addition to these models, we regressed the learning outcomes on the most predictive variables from each of the measures, entering them into a competing model to evaluate whether and how much each uniquely contributed to the overall variance.
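The linear models can be sketched with a small ordinary-least-squares helper that returns the adjusted R² we report, along with the variance inflation factors (VIFs) shown in the tables. The data below are invented, and the actual analyses were run in a statistics package.

```python
import numpy as np

def ols_adjusted_r2(X, y):
    """Fit OLS with an intercept; return (R^2, adjusted R^2)."""
    n, k = X.shape
    Xd = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    r2 = 1 - ss_res / ss_tot
    adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)
    return r2, adj

def vif(X):
    """Variance inflation factor per predictor: 1 / (1 - R_j^2), where R_j^2
    comes from regressing predictor j on the remaining predictors."""
    out = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        r2_j, _ = ols_adjusted_r2(others, X[:, j])
        out.append(1 / (1 - r2_j))
    return out

# Toy data standing in for per-student predictors (e.g., proportions of
# monitoring, control/debugging, and evaluation statements)
X = np.array([[1., 2.], [2., 1.], [3., 4.], [4., 3.], [5., 6.], [6., 5.]])
y = 0.5 + 2.0 * X[:, 0] - 0.3 * X[:, 1]  # exactly linear, so R^2 = 1
r2, adj_r2 = ols_adjusted_r2(X, y)
```

The adjusted R² penalizes R² for the number of predictors, which is why we report it for the multi-predictor models.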
For verbal protocols, we entered each of the codes into the model. The model predicting performance on the first learning activity explained 14.2% of the variance, as indexed by the adjusted R² statistic, F(3, 40) = 2.21, p = .10. Within the model, there was only an effect of monitoring, β = −0.37, t = −2.51, p = .02, VIF = 1.00 (Table 7). The models predicting transfer, F(3, 40) = 0.19, p = .90, and PFL scores, χ²(3, N = 44) = 5.05, p = .17, were not significant.
Multiple linear regression model predicting performance on the first activity with verbal protocols.
Variable | β | t | p | VIF
---|---|---|---|---
Monitoring statements | −0.37 | −2.51 | .02 * | 1.01
Control/Debugging statements | −0.05 | −0.32 | .75 | 1.03
Evaluation statements | −0.03 | −0.17 | .87 | 1.02
Constant | | 10.06 | <.001 *** |
Note. * = p < .05 and *** = p < .001.
For the task-based questionnaire, we computed two types of models: one with all three metacognitive skills entered simultaneously and one with each metacognitive skill entered separately. Entering all three skills simultaneously yielded no significant relations for the first learning activity, F(3, 41) = 1.46, p = .24, transfer, F(3, 41) = 0.15, p = .93, or PFL, χ²(3, N = 45) = 2.97, p = .40. However, because the three factors were highly correlated, we also entered each factor into three separate models (Kraha et al. 2012).
Entering the skills into separate models revealed a marginal effect of self-reported monitoring, β = 0.27, t = 1.87, p = .07, VIF = 1.00, and self-reported evaluation, β = 0.29, t = 2.00, p = .05, VIF = 1.00, on the first learning activity. The model predicting performance on the first learning activity with self-reported monitoring explained 7.5% of the variance, as indexed by the adjusted R² statistic, F(1, 43) = 3.50, p = .07, whereas the model with self-reported evaluation explained 8.5% of the variance, F(1, 43) = 4.01, p = .05. Otherwise, there were no significant relations. Self-reported monitoring and evaluation were not related to transfer, F(1, 43) = 0.10, p = .75, and F(1, 43) = 0.02, p = .88, respectively, or to PFL scores, χ²(1, N = 45) = 0.01, p = .91, and χ²(1, N = 45) = 1.29, p = .26, respectively, and self-reported control/debugging was not related to any learning outcome (learning activity: F(1, 43) = 1.52, p = .22; transfer: F(1, 43) = 0.07, p = .79; PFL: χ²(1, N = 45) = 0.69, p = .41).
The JOK calculations were entered into three separate models for each learning outcome, since they were highly correlated with each other.
Average ratings. The model predicting the first activity explained 10.4% of the variance, as indexed by the adjusted R² statistic, F(1, 43) = 6.11, p = .02, with an effect of average JOK ratings, β = 0.35, t = 2.47, p = .02, VIF = 1.00. The model predicting transfer explained 14.1% of the variance, F(1, 43) = 7.07, p = .01, with an effect of average JOK ratings, β = 0.38, t = 2.66, p = .01, VIF = 1.00. The logistic model predicting PFL scores explained 15.6% of the variance, as indexed by the Nagelkerke R² statistic, χ²(1, N = 43) = 5.60, p < .05, with an effect of average JOK ratings, B = 4.17, Exp(B) = 64.71, Wald’s χ²(1, N = 44) = 4.21, p = .04. Thus, higher average JOK ratings were associated with a greater likelihood of solving the PFL problem.
Mean absolute accuracy. The model predicting the first activity explained 4.2% of the variance, as indexed by the adjusted R² statistic, F(1, 42) = 1.85, p = .18. The model predicting transfer explained 50.8% of the variance, F(1, 42) = 43.42, p < .001, with an effect of mean absolute accuracy, β = −0.71, t = −6.59, p < .001, VIF = 1.00. The logistic model predicting PFL scores explained 8.9% of the variance, as indexed by the Nagelkerke R² statistic, χ²(1, N = 43) = 3.03, p = .08, with a marginal effect of mean absolute accuracy, B = −4.26, Exp(B) = 0.01, Wald’s χ²(1, N = 44) = 2.74, p = .098. Thus, increasing mean absolute accuracy (i.e., worse calibration) was associated with a reduced likelihood of solving the PFL problem.
Discrimination. The model predicting performance on the first activity explained 0.1% of the variance, as indexed by the adjusted R² statistic, F(1, 42) = 0.05, p = .83. The model predicting transfer explained 88.1% of the variance, F(1, 42) = 318.61, p < .001, with an effect of discrimination, β = 0.94, t = 17.85, p < .001, VIF = 1.00. The logistic model predicting PFL scores explained 33.6% of the variance, as indexed by the Nagelkerke R² statistic, χ²(1, N = 43) = 12.80, p < .001, with an effect of discrimination, B = 0.60, Exp(B) = 1.82, Wald’s χ²(1, N = 44) = 8.88, p = .003. Thus, increasing discrimination was associated with an increased likelihood of solving the PFL problem.
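The Nagelkerke R² reported for the logistic models can be reproduced from the model and null log-likelihoods. The minimal Newton-Raphson fit and the data below are illustrative only, not our analysis code.

```python
import numpy as np

def fit_logistic(x, y, iters=25):
    """Newton-Raphson fit of logit(p) = b0 + b1 * x for a single predictor."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)
        hess = X.T @ (X * (p * (1 - p))[:, None])
        beta = beta + np.linalg.solve(hess, grad)
    return beta

def log_likelihood(x, y, beta):
    X = np.column_stack([np.ones_like(x), x])
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

def nagelkerke_r2(ll_null, ll_model, n):
    """Cox-Snell R^2 rescaled to a 0-1 range (Nagelkerke's formula)."""
    cox_snell = 1.0 - np.exp((2.0 / n) * (ll_null - ll_model))
    max_possible = 1.0 - np.exp((2.0 / n) * ll_null)
    return float(cox_snell / max_possible)

# Illustrative data: a continuous predictor and a 0/1 outcome (e.g., PFL solved)
x = np.array([0., 1., 2., 3., 4., 5., 6., 7.])
y = np.array([0., 0., 1., 0., 1., 1., 0., 1.])
beta = fit_logistic(x, y)
ll_model = log_likelihood(x, y, beta)
p0 = y.mean()  # the intercept-only (null) model predicts the base rate
ll_null = len(y) * (p0 * np.log(p0) + (1 - p0) * np.log(1 - p0))
r2 = nagelkerke_r2(ll_null, ll_model, len(y))
```

Because the fitted model cannot have a lower likelihood than the null model, the resulting R² always lands in the 0 to 1 range.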
We evaluated competing models for the learning activity to determine whether constructs from the different measures explained unique variance in the learning outcomes. Models predicting transfer and PFL were not computed, as only the JOKs predicted those outcomes. For the model predicting the first learning activity, we regressed performance on self-reported evaluation, monitoring statements, and the average JOK ratings. The model explained 24.7% of the variance, as indexed by the adjusted R² statistic, F(3, 40) = 4.37, p = .009. Within the model, there was only a marginal effect of self-reported evaluation, β = 0.24, t = 1.71, p = .095, VIF = 1.03; there were no other significant effects (Table 8).
Multiple linear regression model predicting performance on the first activity with self-reported evaluation, monitoring statements, and JOK average.
Variable | β | t | p | VIF
---|---|---|---|---
Self-reported Evaluation | 0.24 | 1.71 | .095 | 1.03
Monitoring Statements | −0.24 | −1.60 | .12 | 1.22
JOK Average | 0.23 | 1.53 | .13 | 1.21
Constant | | −0.08 | .93 |
From these results, we raise some important questions about measures of metacognitive regulation, specifically those assessing the skills of monitoring, control/debugging, and evaluation. Not only do the task-based questionnaire, verbal protocols, and JOK measures assessing these skills show little relation to one another, but they also predict different learning outcomes. Although these results suggest that the measures capture different processes, one result suggests they also capture some overlapping variance: in the competing model for the learning activity, no individual measure reached significance even though the overall model did. Below, we discuss these results further, first focusing on the relations among the measures and their relations to learning outcomes, and then turning to their implications and areas for future research.
A central goal of this study was to examine the degree to which these different measures of metacognitive regulation relate to each other for a subset of metacognitive skills (monitoring, control/debugging, and evaluation). The results demonstrated that there is little association between the task-based metacognitive regulation questionnaire and the corresponding verbal protocols, suggesting that these measurements are inaccurate, measure different processes than intended, or some combination of the two. For example, self-reported monitoring was negatively related to monitoring statements. This finding suggests that the more students monitored their understanding, the less likely they were to report doing so on a questionnaire, reflecting a disconnect between what students do and what they think they do. This misalignment might be particularly true for students who are struggling with the content and therefore making more monitoring statements. It also implies that students are unaware of how much they are struggling, or worse, that they are aware of it but, when asked about it, are biased to say the opposite, perhaps because they do not want to appear incompetent. This speculation is also related to the observational finding that when students monitored their understanding, they were more likely to share negative monitoring statements such as “I do not understand this.” A more in-depth analysis of the monitoring statements might therefore clarify the relation between these two measures. Another possibility is a mismatch of monitoring valence across the two measures: the monitoring questionnaire items are almost all positively framed (e.g., “During the activity, I felt that I was gradually gaining insight into the concepts and procedures of the problems”), whereas the verbal protocols could capture either positive or negative framings.
If what is being expressed in the verbal protocols is just monitoring what one does not understand, then we would expect to see a negative correlation such as the one we found. That is, self-reported monitoring is likely to be negatively aligned with negative monitoring statements but potentially not positive monitoring statements. A similar pattern might also be true of the JOK average ratings and the monitoring statements, as they were also negatively associated with each other, especially since the JOKs capture one’s confidence.
The frequency of evaluation statements was associated with self-reported evaluation as well as self-reported control/debugging, which suggests that the different self-reported constructs capture a similar aspect of metacognitive behavior. There was also a trend in which self-reported monitoring was positively related to evaluation statements. This partial alignment between the questionnaire and verbal protocols might be due to students’ in-the-moment awareness, in which some processes are more explicit (e.g., evaluation) than others (e.g., control/debugging). The lack of differentiation on the questionnaire could also be attributed to students not being very accurate at knowing what they did and did not do during a learning task. This interpretation is consistent with work by Veenman et al. (2003), in which students’ self-reports had little relation to their actual behaviors. Instead, students might be self-reporting the gist of their actions and not their specific behaviors, which are captured in the verbal protocols. It is also possible that there would have been more overlap between the two measures if we had coded the verbal protocols for the entire set of learning activities that the students were self-reporting about (not just the first learning activity). It is also unclear what students were referencing when answering the self-reports; they could have been referencing their behaviors on the most recent task (i.e., the standard deviation activity), for which we did not code their metacognitive verbalizations.
There was also a trend in which the average JOK ratings were positively related to self-reported monitoring, suggesting that the average JOK ratings reflected some aspects of monitoring captured in the questionnaire. Otherwise, there were no associations between the JOKs and the monitoring and evaluation statements or questionnaire items. As mentioned earlier, JOKs capture the accuracy of one’s monitoring and evaluating, not just the act of performing the skill or recalling how often one engaged in it. This result suggests that identifying when one engages in the skills is different from gauging whether one is understanding information, or from self-reporting on whether one was checking one’s understanding. Another interpretation is that JOK accuracy might have benefited from the additional learning experiences that took place after the verbal protocols (i.e., the consolidation video) and after the questionnaire (i.e., the embedded resource). These additional resources may have provided a more comprehensive picture of the learner’s understanding and allowed them to resolve some of their misunderstandings. Prior research also shows that students can learn from a test (Pan and Rickard 2018), providing them with additional information to inform their judgments.
The learning activity might have also played a role in the relationship across the different measures. As mentioned, the structured inquiry task allows for more opportunities to engage in metacognition. This opportunity might also allow for instances in which the metacognitive skills are difficult to distinguish, as they might co-occur or overlap with each other. Perhaps if the learning activity were designed to elicit a specific metacognitive behavior, different associations would emerge.
In terms of learning, students’ self-reported use of monitoring and evaluation had a marginal relation to their performance on the first activity, which provides some external validity for those two components. However, there was no relation between the self-reports and transfer or PFL performance. It could be that the monitoring and evaluation components of the questionnaire predicted performance specific to the task on which they were based, but not the application of that knowledge beyond the task. This finding suggests that these questionnaire measures are limited in the types of learning outcomes they can predict. It is also important to note the differences between this work and past work: here, the questionnaire was task specific and involved a problem-solving activity, whereas other work has examined more domain-general content and related the questionnaires to achievement. Therefore, it is difficult to know whether the task-specific framing of the questionnaire limits its predictive power, or the change in assessment, or both.
The low internal reliability of the transfer post-test could also have posed difficulties for these analyses, as students responded very differently across the items. The lack of internal reliability might be attributed to the combination of different types of transfer items within the assessment. Future work could employ an assessment with multiple items per concept and per transfer type (e.g., near versus intermediate) to determine the extent to which the reliability of the test items impacted the results.
As predicted, there was an association between the monitoring verbal protocols and performance on the first learning activity. The negative association, together with the observation that the majority of the metacognitive statements reflected a lack of understanding, aligns well with Renkl’s (1997) findings, in which negative monitoring was related to transfer outcomes. Although monitoring was not a positive predictor, our verbal protocol rubric differs from those used in studies that found positive learning outcomes: we coded for the frequency of metacognitive statements and not for other aspects of a metacognitive event, such as quality or valence (e.g., Van der Stel and Veenman 2010). For example, the quality of a metacognitive event can be meaningful and add precision to the outcomes it predicts (Binbasaran-Tuysuzoglu and Greene 2015). We did not see an association between the verbal protocols and performance on the transfer or PFL problems. One reason for this might be that the verbal protocols occurred during the encoding stage with materials that were not identical to the retrieval- and application-based materials used at post-test. Although no prior work has evaluated PFL with verbal protocols, other work evaluating transfer suggests that we should have found some relation (e.g., Renkl 1997). It would be productive for future research to explore how different verbal protocol rubrics relate to one another and whether the verbal protocols elicited by different tasks relate differently to robust learning.
Students’ average JOK ratings, absolute accuracy (knowing when they knew something), and discrimination (rating correct items with higher confidence than incorrect items) were strong predictors of performance on transfer and PFL. These relations could be due to the time-contingent and content-dependent aspects of JOKs: they were tied to the test, which occurred after learning, whereas the verbal protocols and questionnaires were tied to the learning materials and occurred during and after the learning materials, respectively. Regardless, these findings suggest that being able to monitor one’s understanding is important for learning outcomes. Given that there was a strong negative relation between the average JOK ratings and the monitoring questionnaire, and no relationship between the questionnaire and either discrimination or absolute accuracy, these findings also support the view that the measures capture different aspects of metacognition. The JOK accuracy indices might assess how accurately students identify their understanding (i.e., monitoring accuracy), whereas the average JOK ratings and the monitoring questionnaire might assess students’ awareness of checking their understanding. However, when comparing the average JOK ratings to the monitoring questionnaire as predictors of performance on the first learning activity, the average JOKs had the stronger relationship, implying that after a learning experience and consolidation lecture, students are more accurate at recognizing their understanding.
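The calibration indices discussed above have standard definitions in the metacognitive-judgment literature. A minimal sketch (ours, not the study's materials), assuming confidence ratings rescaled to the 0–1 range and per-item correctness scored 0/1:

```python
# Hypothetical illustration of two standard calibration indices:
# absolute accuracy (mean |judgment - performance|; lower = better
# calibrated) and discrimination (mean confidence on correct items
# minus mean confidence on incorrect items; higher = better).
from statistics import mean

def absolute_accuracy(judgments, correct):
    # judgments: confidence ratings on a 0-1 scale; correct: 0/1 per item
    return mean(abs(j - c) for j, c in zip(judgments, correct))

def discrimination(judgments, correct):
    right = [j for j, c in zip(judgments, correct) if c == 1]
    wrong = [j for j, c in zip(judgments, correct) if c == 0]
    return mean(right) - mean(wrong)
```

For example, a student who rates correct items 0.9 and 0.8 and incorrect items 0.4 and 0.3 discriminates well (0.5) while still showing some miscalibration (absolute accuracy 0.25).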
Although prior work has argued that JOKs are domain general ( Schraw 1996 ), we did not find discrimination or absolute accuracy to be predictive of performance on the learning activity; the average JOK ratings, however, were predictive. Students who had higher average JOKs performed better on the learning activity, regardless of how accurate their JOKs were. For the transfer and PFL measures, in contrast, monitoring accuracy did matter. This finding suggests that students’ ability to monitor their understanding might transfer across different learning measures, whereas their accuracy depends more on the particular learning measure. This interpretation is consistent with prior work in which students’ monitoring accuracy varied as a function of item difficulty ( Pulford and Colman 1997 ).
When generating competing models across the metacognitive measures, we were only able to examine one, in which we predicted performance on the first activity from the evaluation questionnaire, monitoring statements, and average JOK ratings. The overall model was not significant. This finding suggests that the measures captured shared variance in their relation to learning, but that they remain distinct in that they were not associated with each other.
One goal of this study was to explore the relations between different metacognitive skills and the level of specificity at which to describe the constructs. We established a second-order factor with a task-based survey in which the different skills loaded on the higher-order factor of metacognitive regulation while retaining unique factors for each skill, such that they were distinguishable. We were also able to distinguish between the different metacognitive skills in the verbal protocols, given adequate inter-rater reliability between the two coders and the differential relations the codes had with each other and with the learning and robust learning outcomes. The lack of correlation between the verbal protocol codes suggests that they capture different skills. This conclusion is further supported by the finding that the verbal protocol codes predicted different types of learning outcomes. This work highlights the need for future theory building to incorporate specific types of metacognitive skills and measures into a more cohesive metacognitive framework. Doing so would inform future research examining how these processes operate, as well as educators who want to understand whether there are particular aspects of metacognition for which their students need more support.
This work also has practical implications for education. Although verbal protocols provide insight into what participants were thinking, they were the least predictive of subsequent learning performance. Nevertheless, verbal protocols can still be meaningful and relevant in certain classroom situations. A teacher could not, of course, conduct verbal protocols with all of their students, but the method could be applied when a teacher is concerned about how a particular student is engaging in the problem-solving process. In this case, a productive exercise might be to ask the student to verbalize their thoughts while solving a problem and for the teacher to note whether certain metacognitive prompts might help guide the student during the problem-solving process.
The task-based questionnaire and the metacognitive judgment measures, which can more easily be administered to several students at one time and are thus better suited to educational contexts, had stronger relations to learning outcomes. Given that the JOKs in this study were positively related to multiple learning outcomes, they might have particular utility in classroom settings: JOKs allow teachers to measure how well students monitor their learning performance. To complement this approach, if teachers want to understand whether their students are engaging in different types of metacognitive skills as they learn course content, the task-based questionnaire can readily capture which metacognitive skills students are employing. These measures can thus be used in complementary ways, depending on the goals of the teacher.
This work examines a subset of metacognitive measures, but there are many more in the literature that should be compared to evaluate how metacognitive regulation functions. Given the nature of the monitoring examined in this work, it would be particularly interesting to examine how different metacognitive judgments, such as judgments of learning, relate to the monitoring assessed by the verbal protocols and the questionnaire. Kelemen et al. ( 2000 ) provide evidence that different metacognitive judgments assess different processes, so we might expect to find different associations. For example, perhaps judgments of learning are more related to monitoring statements than JOKs are, because judgments of learning have a closer temporal proximity to the monitoring statements and target the same material as the verbal protocols. In contrast, JOKs typically occur at a delay and assess post-test materials that are not identical to the material presented in the learning activity. In this work, we were not able to capture both judgments of learning and JOKs because the learning activity did not allow for multiple measures of judgments of learning. If a learning activity allowed more flexibility in capturing multiple judgments of learning, we might see different relations emerge due to the timing of the measures.
Future work could also explore the predictive power the task-based questionnaire has beyond other validated self-report measures, such as a domain-based adaptation of the MAI or MSLQ. It would also be interesting to examine how these different measures relate to other external factors as predicted by theories of self-regulated learning. Some of these factors include the degree to which the task-based questionnaire, JOKs, and verbal protocols relate to motivational constructs such as achievement goal orientations, as well as to more cognitive sense-making processes such as analogical comparison and self-explanation. This type of research could provide more support for some self-regulated learning theories over others, given their hypothesized relationships. More pertinent to this line of work, this approach has the potential to help refine theories of metacognitive regulation and their associated measures by providing greater insight into the different processes captured by each measure and skill.
We thank Christian Schunn, Vincent Aleven, and Ming-Te Wang for their feedback on the study. We also thank research assistants Christina Hlutkowsky, Morgan Everett, Sarah Honsaker, and Christine Ebdlahad for their help in transcribing and/or coding the data.
This research was supported by National Science Foundation (SBE 0836012) to the Pittsburgh Science of Learning Center ( http://www.learnlab.org ).
Conceptualization, C.D.Z. and T.J.N.-M.; Formal analysis, C.D.Z.; Writing—original draft, C.D.Z.; Writing—review & editing, C.D.Z. and T.J.N.-M.; Project administration, C.D.Z. All authors have read and agreed to the published version of the manuscript.
The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of the University of Pittsburgh (PRO13070080, approved on 2/3/2014).
Informed consent was obtained from all participants involved in the study.
Conflicts of interest.
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Department of Cellular Biology, University of Georgia, Athens, GA 30602
*Address correspondence to: Julie Dangremond Stanton ( E-mail Address: [email protected] ).
Stronger metacognitive regulation skills and higher self-efficacy are linked to increased academic achievement. Metacognition and self-efficacy have primarily been studied using retrospective methods, but these methods limit access to students’ in-the-moment metacognition and self-efficacy. We investigated first-year life science students’ metacognition and self-efficacy while they solved challenging problems, and asked: 1) What metacognitive regulation skills are evident when first-year life science students solve problems on their own? and 2) What aspects of learning self-efficacy do first-year life science students reveal when they solve problems on their own? Think-aloud interviews were conducted with 52 first-year life science students across three institutions and analyzed using content analysis. Our results reveal that while first-year life science students plan, monitor, and evaluate when solving challenging problems, they monitor in a myriad of ways. One aspect of self-efficacy, which we call self-coaching, helped students move past the discomfort of monitoring a lack of understanding so they could take action. These verbalizations suggest ways we can encourage students to couple their metacognitive skills and self-efficacy to persist when faced with challenging problems. Based on our findings, we offer recommendations for helping first-year life science students develop and strengthen their metacognition to achieve improved problem-solving performance.
Have you ever asked a student to solve a problem, seen their solution, and then wondered what they were thinking while they were problem solving? As college instructors, we often ask students in our classes to solve problems. Sometimes we gain access to our students’ thought process or cognition through strategic question design and direct prompting. Far less often we gain access to how our students regulate and control their own thinking (metacognition) or their beliefs about their capability to solve the problem (self-efficacy). Retrospective methods can and have been used to access this information from students, but students often cannot remember what they were thinking a week or two later. We lack deep insight into students’ in-the-moment metacognition and self-efficacy because it is challenging to obtain their in-the-moment thoughts.
Educators and students alike are interested in metacognition because of its malleable nature and demonstrated potential to improve academic performance. Not having access to students’ metacognition in the moment presents a barrier to developing effective metacognitive interventions to improve learning. Thus, there is a need to characterize how life science undergraduates use their metacognition during individual problem solving and to offer evidence-based suggestions to instructors for supporting students’ metacognition. In particular, understanding the metacognitive skills first-year life science students bring to their introductory courses will position us to better support their learning earlier in their college careers and set them up for future academic success.
Metacognition, or one’s awareness and control of their own thinking for the purpose of learning ( Cross and Paris, 1988 ), is linked to improved problem-solving performance and academic achievement. In one meta-analysis of studies that spanned developmental stages from elementary school to adulthood, metacognition predicted academic performance when controlling for intelligence ( Ohtani and Hisasaka, 2018 ). In another meta-analysis specific to mathematics, researchers found a significant positive correlation between metacognition and math performance in adolescents, indicating that individuals who demonstrated stronger metacognition also performed better on math tasks ( Muncer et al. , 2022 ). The strong connection between metacognition and problem-solving performance and academic achievement represents a potential leverage point for enhancing student learning and success in the life sciences. If we explicitly teach life science undergraduates how to develop and use their metacognition, we can expect to increase the effectiveness of their learning and subsequent academic success. However, in order to provide appropriate guidance, we must first know how students in the target population are employing their metacognition.
Based on one theoretical framework of metacognition, metacognition comprises two components: metacognitive knowledge and metacognitive regulation ( Schraw and Moshman, 1995 ). Metacognitive knowledge includes one’s awareness of learning strategies and of themselves as a learner. Metacognitive regulation encompasses how students act on their metacognitive knowledge, or the actions they take to learn ( Sandi-Urena et al. , 2011 ). Metacognitive regulation is divided into three skills: 1) planning how to approach a learning task or goal, 2) monitoring progress towards achieving that learning task or goal, and 3) evaluating achievement of said learning task or goal ( Stanton et al. , 2021 ). These regulation skills can be thought of temporally: planning occurs before learning starts, monitoring occurs during learning, and evaluating takes place after learning has occurred. As biology education researchers, we are particularly interested in life science undergraduates’ metacognitive regulation skills, or the actions they take to learn, because regulation skills have been shown to have a more dramatic impact on learning than awareness alone ( Dye and Stanton, 2017 ).
Importantly, metacognition is context-dependent, meaning metacognition use may vary depending on factors such as the subject matter or learning task ( Kelemen et al. , 2000 ; Kuhn, 2000 ; Veenman and Spaans, 2005 ). For example, the metacognitive regulation skills a student may use to evaluate their learning after reading a text in their literature course may differ from those skills the same student uses to evaluate their learning on a genetics exam. This is why it is imperative to study metacognition in a particular context, like problem solving in the life sciences.
Metacognition helps a problem solver identify and work with the givens or initial problem state, reach the goal or final problem state, and overcome any obstacles presented in the problem ( Davidson and Sternberg, 1998 ). Specifically, metacognitive regulation skills help a solver select strategies, identify obstacles, and revise their strategies to accomplish a goal. Metacognition and problem solving are often thought of as domain-general skills because of their broad applicability across different disciplines. However, metacognitive skills are first developed in a domain-specific way and then those metacognitive skills can become more generalized over time as they are further developed and honed ( Kuhn, 2000 ; Veenman and Spaans, 2005 ). This is in alignment with research from the problem-solving literature that suggests stronger problem-solving skills are a result of deep knowledge within a domain ( Pressley et al. , 1987 ; Frey et al. , 2022 ). For example, experts are known to classify problems based on deep conceptual features because of their well-developed knowledge base whereas novices tend to classify problems based on superficial features ( Chi et al. , 1981 ). Research on problem solving in chemistry indicates that metacognition and self-efficacy are two key components of successful problem solving ( Rickey and Stacy, 2000 ; Taasoobshirazi and Glynn, 2009 ). College students who achieve greater problem-solving success are those who: 1) use their metacognition to conceptualize problems well, select appropriate strategies, and continually monitor and check their work, and 2) tend to have higher self-efficacy ( Taasoobshirazi and Glynn, 2009 ; Cartrette and Bodner, 2010 ).
Self-efficacy, or one’s belief in their capability to carry out a task ( Bandura, 1977 , 1997 ), is another construct that impacts problem-solving performance and academic achievement. Research on self-efficacy has revealed its predictive power in regard to performance, academic achievement, and selection of a college major ( Pajares, 1996 ). The large body of research on self-efficacy suggests that students who believe they are academically capable engage more metacognitive strategies and persist toward academic achievement compared with those who do not (e.g., Pintrich and De Groot, 1990 ; Pajares, 2002 ; Huang et al. , 2022 ). In STEM in particular, studies tend to reveal gender differences in self-efficacy, with undergraduate men indicating higher self-efficacy in STEM disciplines compared with women ( Stewart et al. , 2020 ). In one study of first-year biology students, women were significantly less confident than men, and students’ biology self-efficacy increased over the course of a single semester when measured at the beginning and end of the course ( Ainscough et al. , 2016 ). However, self-efficacy is known to be a dynamic construct, meaning one’s perception of their capability to carry out a task can vary widely across different task types and over time as struggles are encountered and expertise builds for certain tasks ( Yeo and Neal, 2006 ).
Both metacognition and self-efficacy are strong predictors of academic achievement and performance. For example, one study found that students with stronger metacognitive regulation skills and greater self-efficacy beliefs (as measured by self-reported survey responses) perform better and attain greater academic success (as measured by GPA; Coutinho and Neuman, 2008 ). Additionally, self-efficacy beliefs were strong predictors of metacognition, suggesting students with higher self-efficacy used more metacognition. Together, the results from this quantitative study using structural equation modeling of self-reported survey responses suggest that metacognition may act as a mediator in the relationship between self-efficacy and academic achievement ( Coutinho and Neuman, 2008 ).
Most of the research on self-efficacy has been quantitative in nature. In one qualitative study of self-efficacy, interviews were conducted with middle school students to explore the sources of their mathematics self-efficacy beliefs ( Usher, 2009 ). In this study, evidence of self-modeling was found. Self-modeling or visualizing one’s own self-coping during difficult tasks can strengthen one’s belief in their capabilities and can be an even stronger source of self-efficacy than observing a less similar peer succeed ( Bandura, 1997 ). Usher (2009) described self-modeling as students’ internal dialogues or what they say to themselves while doing mathematics. For example, students would tell themselves they can do it and that they would do okay as a way of keeping their confidence up or coaching themselves while doing mathematics. Other researchers have called this efficacy self-talk, or “thoughts or subvocal statements aimed at influencing their efficacy for an ongoing academic task” ( Wolters, 2003 , p. 199). For example, one study found that college students reported saying things to themselves like “You can do it, just keep working” in response to an open-ended questionnaire about how they would maintain effort on a given task ( Wolters, 1998 ; Wolters, 2003 ). As qualitative researchers, we were curious to uncover how both metacognition (planning, monitoring, and evaluating) and self-efficacy (such as self-coaching) might emerge out of more qualitative, in-the-moment data streams.
Researchers use two main methods to study metacognition: retrospective and in-the-moment methods. Retrospective methods ask learners to reflect on learning they have done in the past. In contrast, in-the-moment methods ask learners to reflect on learning they are currently undertaking ( Veenman et al. , 2006 ). Retrospective methods include self-report data from surveys like the Metacognitive Awareness Inventory ( Schraw and Dennison, 1994 ) and exam “wrappers” or self-evaluations ( Hodges et al. , 2020 ). In-the-moment methods include think-aloud interviews, which ask students to verbalize all of their thoughts while they solve problems ( Bannert and Mengelkamp, 2008 ; Ku and Ho, 2010 ; Blackford et al. , 2023 ), and online computer chat log-files collected as groups of students work together to solve problems ( Hurme et al. , 2006 ; Zheng et al. , 2019 ).
Most metacognition research on life science undergraduates, including our own work, has utilized retrospective methods ( Stanton et al. , 2015 , 2019; Dye and Stanton, 2017 ). Important information about first-year life science students’ metacognition has been gleaned using retrospective methods, particularly in regard to planning and evaluating. For example, first-year life science students tend to use strategies that worked for them in high school, even if they do not work for them in college, suggesting first-year life science students may have trouble evaluating their study plans ( Stanton et al. , 2015 ). Additionally, first-year life science students abandon strategies they deem ineffective rather than modifying them for improvement ( Stanton et al. , 2019 ). Lastly, first-year life science students are willing to change their approach to learning, but they may lack knowledge about which approaches are effective or evidence-based ( Tomanek and Montplaisir, 2004 ; Stanton et al. , 2015 ).
In both of the meta-analyses described at the start of this Introduction , the effect sizes were larger for studies that used in-the-moment methods ( Ohtani and Hisasaka, 2018 ; Muncer et al. , 2022 ). This means the predictive power of metacognition for academic performance was stronger in studies that used in-the-moment methods to measure metacognition compared with studies that used retrospective methods. One implication of this finding is that studies using retrospective methods might be failing to capture metacognition’s profound effects on learning and performance. Less research has been done using in-the-moment methods to study metacognition in life science undergraduates, likely because of the time-intensive nature of collecting and analyzing data with these methods. One study that used think-aloud methods to investigate biochemistry students’ metacognition when solving open-ended buffer problems found that monitoring was the most commonly used metacognitive regulation skill ( Heidbrink and Weinrich, 2021 ). Another study that used think-aloud methods to explore Dutch third-year medical school students’ metacognition when solving physiology problems about blood flow also revealed a focus on monitoring, with students also planning and evaluating but to a lesser extent ( Versteeg et al. , 2021 ). We hypothesize that in-the-moment methods like think-aloud interviews are likely to reveal greater insight into students’ monitoring skills because this metacognitive regulation skill occurs during learning tasks. Further investigation into the nature of the metacognition first-year life science students use when solving problems is needed in order to provide guidance to this population and their instructors on how to effectively use and develop their metacognitive regulation skills.
Research Questions

1) What metacognitive regulation skills are evident when first-year life science students solve problems on their own?
2) What aspects of learning self-efficacy do first-year life science students reveal when they solve problems on their own?

Research Participants & Context
This study is part of a larger longitudinal research project investigating the development of metacognition in life science undergraduates, which was classified by the Institutional Review Board at the University of Georgia (STUDY00006457) and University of North Georgia (2021-003) as exempt. For that project, 52 first-year students at three different institutions in the southeastern United States were recruited from their introductory biology or environmental science courses in the 2021–2022 academic year. Data were collected at three institutions representing different academic environments because it is known that context can affect metacognition ( Table 1 ). Georgia Gwinnett College is classified as a baccalaureate college predominantly serving undergraduate students, University of Georgia is classified as a doctoral R1 institution, and University of North Georgia is classified as a master’s university. Additionally, in our past work we found that first-year students from different institutions differed in their metacognitive skills ( Stanton et al. , 2015 , 2019). Our goal in collecting data from three different institution types was to ensure our qualitative study could be more generalizable than if we had only collected data from one institution.
| | Georgia Gwinnett College | University of Georgia | University of North Georgia |
| --- | --- | --- | --- |
| Institution type | Baccalaureate College | Doctoral R1 | Master’s University |
| Setting | Suburban | City | Suburban |
| Number of undergraduates | 10,949 | 30,166 | 18,155 |
| Students from racially minoritized groups | 57.8% | 14.4% | 19.3% |
| Students who identify as women | 58.7% | 58.9% | 57.8% |
| Students who identify as first-generation | 37% | 9% | 20.6% |
| Average high school GPA | 3.0 | 4.1 | 3.5 |
| Average SAT score | 1065 | 1355 | 1135 |
Students at each institution were invited to complete a survey to provide their contact information, answer the revised 19-item Metacognitive Awareness Inventory ( Harrison and Vallin, 2018 ), 32-item Epistemic Beliefs Inventory ( Schraw et al. , 1995 ), and 8-item Self-efficacy for Learning and Performance subscale from the Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich et al. , 1993 ). They were also asked to self-report their demographic information including their age, gender, race/ethnicity, college experience, intended major, and first-generation status. First-year students who were 18 years or older and majoring in the life sciences were invited to participate in the larger study. We used purposeful sampling to select a sample that matched the demographics of the student body at each institution and also represented a range in metacognitive ability based on students’ responses to the revised Metacognitive Awareness Inventory ( Harrison and Vallin, 2018 ). In total, eight students from Georgia Gwinnett College, 23 students from the University of Georgia, and 21 students from the University of North Georgia participated in the present study ( Table 2 ). Participants received $40 (either in the form of a mailed check, or an electronic Starbucks or Amazon gift card) for their participation in Year 1 of the larger longitudinal study. Their participation in Year 1 included completing the survey, three inventories, and a 2-hour interview, of which the think-aloud interview was one quarter of the total interview.
| | Georgia Gwinnett College | University of Georgia | University of North Georgia |
| --- | --- | --- | --- |
| Number of participants | 8 | 23 | 21 |
| Participants from underrepresented racially minoritized groups | 4 | 5 | 5 |
| Participants who identify as women | 8 | 13 | 15 |
| Participants who identify as first-generation | 5 | 3 | 6 |
| Average high school GPA | 3.3 | 4.0 | 3.6 |
| Average college GPA | 3.4 | 3.7 | 2.9 |
Note: We are using Ebony McGee’s rephrasing of URM as underrepresented racially minoritized groups ( McGee, 2020 ). In our work this means students who self-reported as Black or African American or Hispanic or Latinx. For average high school GPA, institutional data are missing for two GGC students.
All interviews were conducted over Zoom during the 2021–2022 academic year when participants had returned to the classroom. Participants ( n = 52) were asked to think aloud as they solved two challenging biochemistry problems ( Figure 1 ) that have been previously published ( Halmo et al. , 2018 , 2020; Bhatia et al. , 2022 ). We selected two challenging biochemistry problems for first-year students to solve because we know that students do not use metacognition unless they find a learning task challenging ( Carr and Taasoobshirazi, 2008 ). If the problems were easy, they may have solved them quickly without needing to use their metacognition or by employing metacognition that is so automatic they may have a hard time verbalizing it ( Samuels et al. , 2005 ). By having students solve problems we knew would be challenging, we hoped this would trigger them to use and verbalize their metacognition during their problem-solving process. This would enable us to study how they used their metacognition and what they did in response to their metacognition. The problems we selected met this criterion because participants had not yet taken biochemistry.
FIGURE 1. Think-Aloud Problems. Students were asked to think aloud as they solved two challenging biochemistry problems. Panel A depicts the Protein X Problem previously published in Halmo et al. , 2018 and 2020. Panel B depicts the Pathway Flux Problem previously published in Bhatia et al. , 2022 . Both problems are open-ended and ask students to make predictions and provide scientific explanations for their predictions.
The problems were open-ended and asked students to make predictions and provide scientific explanations for their predictions about: 1) noncovalent interactions in a folded protein for the Protein X Problem ( Halmo et al. , 2018 , 2020) and 2) negative feedback regulation in a metabolic pathway for the Pathway Flux Problem ( Bhatia et al. , 2022 ). Even though the problems were challenging, we made it clear to students before they began that we were not interested in the correctness of their solutions but rather we were genuinely interested in their thought process. To elicit student thinking after participants fell silent for more than 5 seconds, interviewers used the following two prompts: “What are you thinking (now)?” and “Can you tell me more about that?” ( Ericsson and Simon, 1980 ; Charters, 2003 ). After participants solved the problems, they shared their written solutions with the interviewer using the chat feature in Zoom. Participants were then asked to describe their problem-solving process out loud and respond to up to four reflection questions (see Supplemental Material for full interview protocol). The think-aloud interviews were audio and video recorded and transcribed using a professional, machine-generated transcription service (Temi.com). All transcripts were checked for accuracy by members of the research team before analysis began.
The resulting transcripts were analyzed by a team of three researchers in three cycles. In the first cycle of data analysis, half of the transcripts were open coded by members of the research team (S.M.H., J.D.S., and K.A.Y.). S.M.H. entered this analysis as a postdoctoral researcher in biology education research with experience in qualitative methods and deep knowledge about student difficulties with the two problems students were asked to solve in this study. J.D.S., an associate professor of cell biology and a biology education researcher, entered this analysis as an educator and metacognition researcher with extensive experience in qualitative methods. K.A.Y. entered this analysis as an undergraduate student double majoring in biology and psychology and as an undergraduate researcher relatively new to qualitative research. During this open coding process, we individually reflected on the contents of the data, remained open to possible directions suggested by our interpretation of the data, and recorded our initial observations using analytic memos ( Saldaña, 2021 ). The research team (S.M.H., J.D.S., and K.A.Y.) then met to discuss our observations from the open coding process and suggest possible codes that were aligned with our observations, knowledge of metacognition and self-efficacy, and our guiding research questions. This discussion led to the development of an initial codebook consisting of inductive codes discerned from the data and deductive codes derived from theory on metacognition and self-efficacy. In the second cycle of data analysis, the codebook was applied to the dataset iteratively by two researchers (S.M.H. and K.A.Y.) using MaxQDA2020 software (VERBI Software; Berlin, Germany) until the codebook stabilized, that is, until no new codes or modifications to existing codes were needed. Coding disagreements between the two coders were discussed by all three researchers until consensus was reached.
All transcripts were coded to consensus to identify aspects of metacognition and learning self-efficacy that were verbalized by participants. Coding to consensus allowed the team to consider and discuss their diverse interpretations of the data and ensure trustworthiness of the analytic process ( Tracy, 2010 ; Pfeifer and Dolan, 2023 ). In the third and final cycle of analysis, thematic analysis was used to uncover central themes in our dataset. As a part of thematic analysis, two researchers (S.M.H. and K.A.Y.) synthesized one-sentence summaries of each participant’s think aloud interview. Student quotes presented in the Results & Discussion have been lightly edited for clarity, and all names are pseudonyms.
To compare the potential effects of institution and gender on problem-solving performance, we scored the final problem solutions and then analyzed the scores using R Statistical Software (R Core Team, 2021 ). A one-way ANOVA comparing the effect of institution on problem-solving performance revealed no statistically significant difference between the three institutions (F[2, 49] = 0.085, p = 0.92), indicating that students performed similarly on the problems regardless of which institution they attended (Supplemental Data, Table 1 ). Another one-way ANOVA comparing the effect of gender on problem-solving performance likewise revealed no statistically significant difference (F[1, 50] = 0.956, p = 0.33); students performed similarly on the problems regardless of their gender (Supplemental Data, Table 2 ). Taken together, these analyses suggest a sample that is homogeneous with regard to problem-solving performance.
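For readers less familiar with the test, a one-way ANOVA compares the variation of group means around the grand mean (between-group) with the variation of individual scores around their own group mean (within-group). The analyses above were run in R; the following is only a minimal Python sketch of the underlying computation, using hypothetical rubric scores (the per-student scores are not reported here):

```python
from itertools import chain

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a one-way ANOVA.

    groups: a list of lists of numeric scores, one inner list per group
    (here, per institution or per gender).
    """
    all_scores = list(chain.from_iterable(groups))
    n = len(all_scores)          # total number of participants
    k = len(groups)              # number of groups
    grand_mean = sum(all_scores) / n

    # Between-group sum of squares: group means vs. the grand mean
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    # Within-group sum of squares: each score vs. its own group mean
    ss_within = sum(
        (x - sum(g) / len(g)) ** 2 for g in groups for x in g
    )

    df_between = k - 1           # e.g., 3 institutions -> 2
    df_within = n - k            # e.g., 52 students - 3 groups -> 49
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical rubric scores for three groups (not the study's actual data)
f, dfb, dfw = one_way_anova([[1, 2, 3], [2, 3, 4], [1, 3, 2]])
```

In practice, `scipy.stats.f_oneway` performs the same computation and also returns the p value; R's `aov()` or `oneway.test()` are the equivalents used in the analysis above.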
Participants’ final problem solutions were individually scored by two researchers (S.M.H. and K.A.Y.) using an established rubric and scores were discussed until complete consensus was reached. The rubric used to score the problems is available from the corresponding author upon request. The median problem-solving performance of students in our sample was two points on a 10-point rubric. Students in our sample scored low on the rubric because they either failed to answer part of the problem or struggled to provide accurate explanations or evidence to support their predictions. Despite the phrase “provide a scientific explanation to support your prediction” included in the prompt, most students’ solutions contained a prediction, but lacked an explanation. For example, the majority of the solutions for the Protein X problem predicted the noncovalent interaction would be affected by the substitution, but lacked categorization of the relevant amino acids or identification of the noncovalent interactions involved, which are critical problem-solving steps for this problem ( Halmo et al. , 2018 , 2020). The majority of the Pathway Flux solutions also predicted that flux would be affected, but lacked an accurate description of negative feedback inhibition or regulation release of the pathway, which are critical features of this problem ( Bhatia et al. , 2022 ). This lack of accurate explanations is not unexpected. Previous work shows that both introductory biology and biochemistry students struggle to provide accurate explanations to these problems without pedagogical support, and introductory biology students generally struggle more than biochemistry students ( Bhatia et al. , 2022 ; Lemons, personal communication).
To address our first research question, we looked for statements and questions related to the three skills of planning, monitoring, and evaluating in our participants’ think-aloud data. Because metacognitive regulation skills encompass how students act on their metacognitive awareness, we required explicit awareness as evidence when analyzing our data for these skills. For example, the statement “this is a hydrogen bond” does not display awareness of one’s knowledge but rather the knowledge itself (cognition). In contrast, the statement “I know this is a hydrogen bond” does display awareness of one’s knowledge and is therefore considered evidence of metacognition. We found evidence of all three metacognitive regulation skills in our data: first-year life science students plan, monitor, and evaluate when solving challenging problems. However, our data collection method revealed more varied ways in which students monitor. We present our findings for each metacognitive regulation skill ( Table 3 ). To further demonstrate how students use these skills in concert when problem solving, we offer problem-solving vignettes of a student from each institution in Supplemental Data .
| Metacognitive regulation skill | Category | Description | Example Data | Implications for instruction |
|---|---|---|---|---|
| Planning | Assessing the task | Student identifies what the problem is asking them to do, either successfully or unsuccessfully. | | Model planning for students by verbalizing how to assess the task and what strategies to use and why before walking through a worked example. Provide students with immediate feedback on the accuracy of their assessment of the task. |
| Monitoring | Relevance | Student describes what parts of the prompt or pieces of their own knowledge are relevant or irrelevant to solving the problem. | | Explicitly teach students relevant strategies that can help resolve confusion, a lack of understanding, or uncertainty. See Stanton et al. , 2021 for an evidence-based teaching guide on metacognition. |
| | Confusion | Student expresses a general lack of understanding or knowledge about the problem. | | |
| | Familiarity | Student describes what is familiar or not familiar to them or something they remember or forget from class. | | Encourage students to assess the effectiveness of their strategy use in response to their monitoring. For example, was acknowledging and using an assumption helpful in moving forward when you were uncertain? |
| | Understanding | Student describes specific pieces of knowledge they know or don’t know. | | Provide guidance on how to keep track of the information gleaned from these types of monitoring during problem solving, for example, by writing down what they do and do not know. |
| | Questions | Student asks themselves a question. | | |
| | Correctness | Student corrects themselves while talking out loud. | | |
| Evaluating | Solution | Student assesses the accuracy of their solution, double checks their answer, or rethinks their solution. | | Provide students with immediate feedback about the accuracy of their solution(s) to help them evaluate and develop well-calibrated self-evaluation skills. For example, provide answer keys on formative assessments. Encourage students to self-coach during problem solving to overcome potentially negative emotions or feelings of discomfort that may occur when they are metacognitive. |
| | Experience | Student assesses the problem difficulty or the feelings associated with their thought process. | | |
Planning how to approach the task of solving problems individually involves selecting strategies to use, and deciding when to use them, before starting the task ( Stanton et al. , 2021 ). Planning did not appear in our data in the classical sense. This finding is unsurprising because the task was: 1) well-defined, meaning there were only a few potentially accurate solutions rather than an abundance of accurate solutions; 2) straightforward, meaning the goal of solving the problem was clearly stated; and 3) relatively short, meaning students were not entering and exiting the task multiple times as they might when studying for an exam. Additionally, the stakes were comparatively low, meaning task completion and performance carried little to no weight in participants’ college careers. From other data from this same sample, we know that these participants make plans for high-stakes assessments like exams but often admit to not planning for lower-stakes assessments like homework (Stanton, personal communication). Related to the skill of planning, we observed students assessing the task after reading the problem ( Table 3 ). We describe how students assessed the task and what happened after students planned in this way.
While we did not observe students explicitly planning their approach to problem solving before beginning the task, we did observe students assessing the task, or what other researchers have called “orientation,” after reading the problems ( Meijer et al. , 2006 ; Schellings et al. , 2013 ). Students in our study either assessed the task successfully or unsuccessfully. For example, when Gerald states, “So I know that not only do I have to give my answer, but I also have to provide information on how I got my answer…” he successfully identified what the problem was asking him to do by providing a scientific explanation. In contrast, Simone admits her struggle with figuring out what the problem is asking when she states, “I’m still trying to figure out what the question’s asking. I don’t want to give up on this question just yet, but yeah, it’s just kinda hard because I can’t figure out what the question is asking me if I don’t know the terminology behind it.” In Simone’s case, the terminology she struggled to understand is what was meant by a scientific explanation. Assessing the task unsuccessfully also involved misinterpreting what the problem asked. This was a frequent issue for students in our sample during the Pathway Flux problem because students inaccurately interpreted the negative feedback loop, which is a known problematic visual representation in biochemistry ( Bhatia et al. , 2022 ). For example, students like Paulina and Kathleen misinterpreted the negative feedback loop as enzyme B no longer functioning when they stated, respectively, “So if enzyme B is taken out of the graph…” or “…if B cannot catalyze…” Additionally, some students misinterpreted the negative feedback loop as a visual cue of the change described in the problem prompt (IV-CoA can no longer bind to enzyme B).
This can be seen in the following example quote from Mila: “So I was looking at it and I see what they’re talking about with the IV-CoA no longer binding to enzyme B and I think that’s what that arrow with the circle and the line through it is representing. It’s just telling me that it’s not binding to enzyme B.”
Misinterpretations of what the problem was asking like those shared above from Simone, Paulina, Kathleen, and Mila led to inaccurate answers for the Pathway Flux problem. In contrast, when students like Gerald could correctly interpret what the problem asked them to do, this led to more full and accurate answers for both problems. Accurately interpreting what a problem is asking you to do is critical for problem-solving success. A related procedural error identified in other research on written think-aloud protocols from students solving multiple-choice biology problems was categorized as misreading ( Prevost and Lemons, 2016 ).
In our study, we did not detect evidence of explicit planning beyond assessing the task. This suggests that first-year students’ approaches were either unplanned or automatic ( Samuels et al. , 2005 ). As metacognition researchers and instructors, we find it illuminating that first-year life science students did not plan before solving but did assess the task while solving. This means planning is likely one area in which we can help first-year life science students grow their metacognitive skills through practice. While we do not anticipate that undergraduate students will be able to plan how to solve an unfamiliar problem before reading it, we do think we can help students develop their planning skills through modeling when solving life science problems.
When modeling problem solving for students, we could make our planning explicit for students by verbalizing how we assess the task and what strategies we plan to use and why. From the problem-solving literature, it is known that experts assess a task by recognizing the deep structure or problem type and what is being asked of them ( Chi et al. , 1981 ; Smith et al. , 2013 ). This likely happens rapidly and automatically for experts through the identification of visual and key word cues. Forcing ourselves to think about what these cues might be and alerting students to them through modeling may help students more rapidly develop expert-level schema, approaches, and planning skills. Providing students with feedback on their assessment of a task and whether or not they misunderstood the problem also seems to be critical for problem-solving success ( Prevost and Lemons, 2016 ). Helping students realize they can plan for smaller tasks like solving a problem by listing the pros and cons of relevant strategies and what order they plan to use selected strategies before they begin could help students narrow the problem solving space, approach the task with focus, and achieve efficiency to become “good strategy users” ( Pressley et al. , 1987 ).
Monitoring progress towards problem-solving involves assessing conceptual understanding during the task ( Stanton et al. , 2021 ). First-year life science students in our study monitored their conceptual understanding during individual problem solving in a myriad of ways. In our analysis, we captured the specific aspects of conceptual understanding students monitored. Students in our sample monitored: 1) relevance, 2) confusion, 3) familiarity, 4) understanding, 5) questions, and 6) correctness ( Table 3 ). We describe each aspect of conceptual understanding that students monitored and we provide descriptions of what happened after students monitored in this way ( Figure 2 ).
FIGURE 2. How monitoring can impact the problem-solving process. The various ways first-year students in this study monitored are depicted as ovals. See Table 3 for detailed descriptions of the ways students monitored. How students in this study acted on their monitoring are shown as rectangles. In most cases, what happened after students monitored determined whether or not problem solving moved forward. Encouraging oneself using positive self-talk, or self-coaching, helped students move past the discomfort associated with monitoring a lack of conceptual understanding (confusion, lack of familiarity, or lack of understanding) and enabled them to use problem-solving strategies, which moved problem solving forward.
When students monitored relevance, they described what pieces of their own knowledge or aspects of the problem prompts were relevant or irrelevant to their thought process ( Table 3 ). For the Protein X problem, many students monitored the relevance of the provided information about pH. First-year life science students may have focused on this aspect of the problem prompt because pH is a topic often covered in introductory biology classes, which participants were enrolled in at the time of the study. However, students differed in whether they decided this information was relevant or irrelevant. Quinn decided this piece of information was relevant: “The pH of the water surrounding it. I think it’s important because otherwise it wouldn’t really be mentioned.” In contrast, Ignacio decided the same piece of information was irrelevant: “So the pH has nothing to do with it. The water molecules had nothing to do with it as well. So basically, everything in that first half, everything in that first thing, right there is basically useless. So, I’m just going to exclude that information out of my thought process cause the pH has nothing to do with what’s going on right now…” From an instructional perspective, knowing that the pH in the Protein X problem is relevant information for determining the ionization state of acidic and basic amino acids, like amino acids D and E shown in the figure, could be helpful. However, this specific problem asked students to consider amino acids A and B, so Ignacio’s decision that the pH was irrelevant may have helped him focus on more central parts of the problem. In addition to monitoring the relevance of the provided information, sometimes students would monitor the relevance of their own knowledge that they brought to bear on the problem.
For example, consider the following quote from Regan: “I just think that it might be a hydrogen bond, which has nothing to do with the question.” Regan made this statement during her think aloud for the Protein X problem, which is intriguing because the Protein X problem deals solely with noncovalent interactions like hydrogen bonding.
Overall, monitoring relevance helped students narrow their focus during problem solving, but could be misleading if done inaccurately like in Regan’s case ( Figure 2 ).
When students monitored confusion when solving, they expressed a general lack of understanding or knowledge about the problem ( Table 3 ). As Sara put it, “I have no clue what I’m looking at.” Sometimes monitoring confusion came as an acknowledgement of a lack of prior knowledge students felt they needed to solve the problem. Take for instance when Ismail states, “I’ve never really had any prior knowledge on pathway fluxes and like how they work and it obviously doesn’t make much sense to me.” Students also expressed confusion about how to approach the problem, which is related to monitoring one’s procedural knowledge. For example, when Harper stated, “I’m not sure how to approach the question,” she was monitoring a lack of knowledge about how to begin. Similarly, after reading the problem Tiffani shared, “I am not sure how to solve this one because I’ve actually never done it before…”
When students monitored their confusion, one of two things happened ( Figure 2 ). Rarely, students would give up on solving altogether; in fact, only one individual (Roland) submitted a final solution that read, “I have no idea.” More often, students persisted despite their confusion. Rereading the problem was a common strategy students in our sample used after identifying general confusion. As Jeffery stated after reading the problem, “I didn’t really understand that, so I’m gonna read that again.” After rereading the problem a few times, Jeffery stated, “Oh, and we have valine here. I didn’t see that before.” Some students like Valentina revealed their rereading strategy rationale after solving: “First I just read it a couple of times because I wasn’t really understanding what it was saying.” After rereading the problem a few times, Valentina was able to accurately assess the task by stating “amino acid (A) turns into valine.” When solving, some students linked their general confusion with an inability to solve. As Harper shared, “I don’t think that I have enough like basis or learning to where I’m able to answer that question.” Despite making this claim of self-doubt in her ability to solve, Harper monitored in other ways and ultimately came up with a solution beyond a simple “I don’t know.” In sum, when students acknowledged their confusion in this study, they usually did not stop there. They used their confusion as an indicator to use a strategy, like rereading, to resolve their confusion, or as a jumping-off point to further monitor by identifying more specifically what they did not understand. Persisting despite confusion is likely dependent on other factors, like self-efficacy.
When students monitored familiarity, they described knowledge or aspects of the problem prompt that were familiar or not familiar to them ( Table 3 ). This category also captured when students would describe remembering or forgetting something from class. For example, when Simone states, “I remember learning covalent bonds in chemistry, but I don’t remember right now what that meant,” she is acknowledging her familiarity with the term covalent from her chemistry course. Similarly, Oliver acknowledges his familiarity with tertiary structure from his class when solving the Protein X problem. He first shared, “This reminds me of something that we’ve looked at in class of a tertiary structure. It was shown differently but I do remember something similar to this.” Then later, he acknowledges his lack of familiarity with the term flux when solving the Pathway Flux problem: “That word flux. I’ve never heard that word before.” Quinn aptly pointed out that being familiar with a term or recognizing a word in the problem did not equate to her understanding: “I mean, I know amino acids, but that doesn’t… like I recognize the word, but it doesn’t really mean anything to me. And then non-covalent, I recognize the conjunction of words, but again, it’s like somewhere deep in there…”
When students recognized what was familiar to them in the problem, it sometimes helped them connect to related prior knowledge ( Figure 2 ). In some cases, though, students connected words in the problem that were familiar to them to unrelated prior knowledge. Erika, for example, revealed in her problem reflection that she was familiar with the term mutation in the Protein X problem and formulated her solution based on her knowledge of the different types of DNA mutations, not noncovalent interactions. In this case, Erika’s familiarity with the term mutation and failure to monitor the relevance of this knowledge when problem solving impeded her development of an accurate solution to the problem. This is why Quinn’s recognition that her familiarity with terms does not equate to understanding is critical. This recognition can help students like Erika avoid false feelings of knowing that might come from the rapid and fluent recall of unrelated knowledge ( Reber and Greifeneder, 2017 ). When students recognized parts of the problem they were unfamiliar with, they often searched for familiar terms to use as footholds ( Figure 2 ). For example, Lucy revealed the following in her problem reflection: “So first I tried to look at the beginning introduction to see if I knew anything about the topic. Unfortunately, I did not know anything about it. So, I just tried to look for any trigger words that I did recognize.” After stating this, Lucy said she recognized the words protein and tertiary structure and was able to access some prior knowledge about hydrogen bonds for her solution.
When students monitored understanding, they described specific pieces of knowledge they either knew or did not know, beyond what was provided in the problem prompt ( Table 3 ). Monitoring understanding is distinct from monitoring confusion. When students displayed awareness of a specific piece of knowledge they did not know (e.g., “I don’t know what these arrows really mean.”), this was considered monitoring (a lack of) understanding. In contrast, monitoring confusion was a more general awareness of their overall lack of understanding (e.g., “Well, I first look at the image and I’m already kind of confused with it [laughs].”). For example, Kathleen demonstrated an awareness of her understanding about amino acid properties when she said, “I know that like the different amino acids all have different properties like some are, what’s it called? Like hydrophobic, hydrophilic, and then some are much more reactive.” Willibald monitored his understanding using the mnemonic “when in doubt, van der Waals it out” by sharing, “So, cause I know basically everything has, well not basically everything, but a lot of things have van der Waal forces in them. So that’s why I say that a lot of times. But it’s a temporary dipole, I think.” In contrast, Jeffery monitored his lack of understanding of a specific part of the Pathway Flux figure when he stated, “I guess I don’t understand what this dotted arrow is meaning.” Ignoring or misinterpreting the negative feedback loop was a common issue as students solved this problem, so it’s notable that Jeffery acknowledged his lack of understanding about this symbol. When students identified what they knew, the incomplete knowledge they revealed sometimes had the potential to lead to a misunderstanding. Take for example Lucy’s quote: “I know a hydrogen bond has to have a hydrogen. I know that much. And it looks like they both have hydrogen.” This statement suggests Lucy might be displaying a known misconception about hydrogen bonding – that all hydrogens participate in hydrogen bonding ( Villafañe et al. , 2011 ).
When students could identify what they knew, they used this information to formulate a solution ( Figure 2 ). When students could identify what they did not know, they either did not know what to do next or they used strategies to move beyond their lack of understanding ( Figure 2 ). Two strategies students used after identifying a lack of understanding included disregarding information and writing what they knew. Kyle disregarded information when he didn’t understand the negative feedback loop in the Pathway Flux problem: “…there is another arrow on the side I see with a little minus sign. I’m not sure what that means… it’s not the same as [the arrows by] A and C. So, I’m just going to disregard it sort of for now. It’s not the same. Just like note that in my mind that it’s not the same.” In this example, Kyle disregards a critical part of the problem, the negative feedback loop, and does not revisit the disregarded information, which ultimately led him to an incorrect prediction for this problem. We also saw one example of a student, Elaine, using the strategy of writing what she knew when she was struggling to provide an explanation for her answer: “I should know this more, but I don’t know, like a specific scientific explanation answer, but I’m just going to write what I do know so I can try to organize my thoughts.” Elaine’s focus on writing what she knew allowed her to organize the knowledge she did have into a plausible solution that specified which amino acids would participate in new noncovalent interactions (“I predict there will be a bond between A and B and possibly A and C.”) despite not knowing “what would be required in order for it to create a new noncovalent interaction with another amino acid.” The strategies that Kyle and Elaine used in response to monitoring a lack of understanding shared the common goal of helping them get unstuck in their problem-solving process.
When students monitored through questions, they asked themselves a question out loud ( Table 3 ). These questions were either about the problem itself or their own knowledge. An example of monitoring through a question about the problem itself comes from Elaine, who asked herself after reading the problem and sharing her initial thoughts, “What is this asking me?” Elaine’s question helped reorient her to the problem and put her back on track with answering the question asked. After Edith came to a tentative solution, she asked herself, “But what about the other information? How does that pertain to this?” which helped her initiate monitoring the relevance of the information provided in the prompt. Students also posed questions to themselves about their own content knowledge. Take for instance Phillip when he asked himself, “So, would noncovalent be ionic bonds or would it be something else? Covalent bonds are sharing a bond, but what does noncovalent mean?” After Phillip asked himself this question, he reread the problem but ultimately acknowledged he was “not too sure what noncovalent would mean.”
After students posed questions to themselves while solving, they either answered their question or they didn’t ( Figure 2 ). Students who answered their self-posed questions relied on other forms of monitoring and rereading the prompt to do so. For example, after questioning themselves about their conceptual knowledge, some students acknowledged they did not know the answer to their question by monitoring their understanding. Students who did not answer their self-posed questions moved on without answering their question directly out loud.
When students monitored correctness, they corrected their thinking out loud ( Table 3 ). A prime example of this comes from Kyle’s think aloud, where he corrects his interpretation of the problem not once but twice: “It said the blue one highlighted is actually a valine, which substituted the serine, so that’s valine right there. And then I’m reading the question. No, no, no. It’s the other way around. So, serine would substitute the valine and the valine is below… Oh wait wait, I had it right the first time. So, the blue highlighted is this serine and that’s supposed to be there, but a mutation occurs where the valine gets substituted.” Kyle first corrects his interpretation of the problem in the wrong direction but corrects himself again to put him on the right track. Icarus also caught himself reading the problem incorrectly by replacing the word noncovalent with the word covalent, which was a common error students made: “Oh, wait, I think I read that wrong. I think I read it wrong. Well, yeah. Then that will affect it. I didn’t read the noncovalent part. I just read covalent.” Students also corrected their language use during the think aloud interviews, like Edith: “because enzyme B is no longer functioning… No, not enzyme B… because IV-CoA is no longer functional and able to bind to enzyme B, the metabolic pathway is halted.” Edith’s correction of her own wording, while minor, is worth noting because students in this study often misinterpreted the Pathway Flux problem to read as “enzyme B no longer works.” There were also instances when students corrected their own knowledge that they brought to bear on the problem. This can be seen in the following quote from Tiffani when she says, “And tertiary structure. It has multiple… No, no, no. That’s primary structure. Tertiary structure’s when like the proteins are folded in on each other.”
When students corrected themselves, this resulted in more accurate interpretations of the problem and thus more accurate solutions ( Figure 2 ). Specifically, monitoring correctness helped students avoid common mistakes when assessing the task, as was the case for Kyle, Icarus, and Edith described above. When students did not monitor correctness, incorrect ideas could go unchecked throughout the problem-solving process, leading to less accurate solutions. In other research, contradicting and misunderstanding content were two procedural errors students experienced when solving multiple-choice biology problems ( Prevost and Lemons, 2016 ), errors that monitoring correctness could alleviate.
Monitoring is the last metacognitive regulation skill to develop, and it develops slowly and well into adulthood ( Schraw, 1998 ). Based on our data, first-year life science students are monitoring in the moment in a myriad of ways. This may suggest that college-aged students have already developed monitoring skills by the time they enter college. This finding has implications for both instruction and research. For instruction, we may need to help our students keep track of and learn what to do with the information and insight they glean from their in-situ monitoring when solving life science problems. For example, students in our study could readily identify what they did and did not know, but they sometimes struggled to identify ways in which they could resolve their lack of understanding, confusion, or uncertainty, or to use this insight in expert-like ways when formulating a solution.
As instructors who teach students about metacognition, we can normalize the temporary discomfort monitoring may bring as an integral part of the learning process and model for students what to do after they monitor. For example, when students glean insight from monitoring familiarity, we could help them learn how to properly use this information so that they do not equate familiarity with understanding when practicing problem solving on their own. This could help students avoid the fluency fallacy, or the false sense that they understand something simply because they recognize it or remember learning about it ( Reber and Greifeneder, 2017 ).
The majority of the research on metacognition, including our own, has been conducted using retrospective methods. However, retrospective methods may provide little insight into true monitoring skills, since these skills are used during learning rather than after learning has occurred ( Schraw and Moshman, 1995 ; Stanton et al. , 2021 ). More research using in-the-moment methods, which are used widely in the problem-solving literature, is needed to fully understand the rich monitoring skills of life science students and how they may develop over time. The monitoring skills of life science students in both individual and small group settings, and the relationship of monitoring skills across these two settings, warrant further exploration. This seems particularly salient given that questioning and responding to questions seems to be an important aspect of both individual metacognition in the present study and social metacognition in our prior study, which also used in-the-moment methods ( Halmo et al. , 2022 ).
Evaluating achievement of individual problem solving involves appraising an implemented plan and how it could be improved for future learning after completing the task ( Stanton et al. , 2021 ). Students in our sample revealed some of the ways they evaluate when solving problems on their own ( Table 3 ). They evaluated both their solution and their experience of problem solving.
Evaluating a solution occurred when students assessed the accuracy of their solution, double-checked their answer, or rethought their solution ( Table 3 ). While some students evaluated their accuracy in the affirmative (that their solution was right), most students evaluated the accuracy of their solution in the negative (that their solution was wrong). For example, Kyle stated, “I don’t think hydrogen bonding is correct.” Kyle clarified in his problem reflection, “I noticed [valine] did have hydrogens and the only noncovalent interaction I know of is probably hydrogen bonding. So, I just sort of stuck with that and just said more hydrogen bonding would happen with the same oxygen over there [in glutamine].” Through this quote, we see that Kyle went with hydrogen bonding as his prediction because it was the only noncovalent interaction he could recall. At the same time, Kyle accurately judged that hydrogen bonding was not the correct answer. Evaluating accuracy in the negative often seemed like hedging or self-doubt. Take for instance the quote Regan shared right after submitting her final solution: “The chances of being wrong are 100%, just like, you know [laughs].”
Students also evaluated their solution by double-checking their work. Kyle used a clearly defined approach for double-checking his work by solving the problem twice: “So that’s just my initial answer I would put, and then what I do next was I’d just like reread the question and sort of see if I come up with the same answer after rereading and redoing the problem. So, I’m just going to do that real quick.” Checking one’s work is a well-established problem-solving step that most successful problem solvers undertake ( Cartrette and Bodner, 2010 ; Prevost and Lemons, 2016 ).
Students also evaluated by rethinking their initial solution. In the following case, Mila’s evaluation of her solution did not improve her final answer. Mila initially predicted that the change described in the Pathway Flux problem would affect flux, which is correct. However, she evaluates her solution when she states, “Oh, wait a minute, now that I’m saying this out loud, I don’t think it’ll affect it because I think IV-CoA will be binding to enzyme B or C. Sorry, hold on. Now I’m like rethinking my whole answer.” After this evaluation, Mila changes her prediction to “it won’t affect flux”, which is incorrect. In contrast, some students’ evaluations of their solutions resulted in improved final answers. For example, after submitting his solution and during his problem reflection, Willibald states, “Oh, I just noticed. I said there’ll be no effect on the interaction, but then I said van der Waals forces which is an interaction. So, I just contradicted myself in there.” After this recognition, Willibald decides to amend his first solution, ultimately improving his prediction. We also observed one student, Jeffery, evaluating whether his solution answered the problem asked, which is notable because we also observed students evaluating in this way when solving problems in small groups ( Halmo et al. , 2022 ): “I guess I can’t say for sure, but I’ll say this new amino acid form[s] a bond with the neighboring amino acids and results in a new protein shape. The only issue with that answer is I feel like I’m not really answering the question: Predict any new noncovalent interactions that might occur with such a mutation.” While the above examples of evaluating a solution occurred spontaneously without prompting, having students describe their thinking process after solving the problems may have been sufficient to prompt them to evaluate their solution.
When students evaluated the accuracy of their solution, double-checked their answer, or rethought their solution, it helped them recognize potential flaws or mistakes in their answers. After evaluating their solution, they either decided to stick with their original answer or amended their solution. Evaluating a solution often resulted in students adding to or refining their final answer. However, these solution amendments were not always beneficial or in the correct direction because of limited content knowledge. In other work on the metacognition involved in changing answers, answer-changing neither reduced nor significantly boosted performance ( Stylianou-Georgiou and Papanastasiou, 2017 ). The fact that Mila’s evaluation of her solution led to a less correct answer, whereas Willibald’s evaluation of his solution led to a more correct answer, further illustrates the variable effect of answer-changing on performance.
Evaluating experience occurred when students assessed the difficulty level of the problem or the feelings associated with their thought process ( Table 3 ). This type of evaluation occurred after solving, in their problem reflection or in response to the closing questions of the think aloud interview. Students evaluated the problems as difficult based on the confusion, lack of understanding, or low self-efficacy they experienced when solving. For example, Ivy stated, “I just didn’t really have any background knowledge on them, which kind of made it difficult.” In one instance, Willibald’s evaluation of difficulty while amending his solution was followed by a statement about self-efficacy: “This one was a difficult one. I told you I’m bad with proteins [laughs].” Students also compared the difficulty of the two problems we asked them to solve. For example, Elena determined in her problem reflection that the Pathway Flux problem was easier for her than the Protein X problem: “I didn’t find this question as hard as the last question just cause it was a little bit more simple.” In contrast, Elaine revealed that she found the Protein X problem challenging because of the open-ended nature of the question: “I just thought that was a little more difficult because it’s just asking me to predict what possibly could happen instead of like something that’s like, definite, like I know the answer to. So, I just tried to think about what I know…” Importantly, Elaine indicated her strategy of thinking about what she knew in response to her evaluation of difficulty.
Evaluating experience also occurred when students assessed how their feelings were associated with their thought process. The feelings they described were directly tied to aspects of their monitoring. We found that students associated negative emotions (nervousness, worry, and panic) with a lack of understanding or a lack of familiarity. For example, in Renee’s problem reflection, she connected feelings of panic to when she monitored a lack of understanding: “I kind of panicked for a second, not really panicked cause I know this isn’t like graded or anything, but I do not know what a metabolic pathway is.” In contrast, students associated more positive feelings when they reflected on moments of monitoring understanding or familiarity. For example, Renee also stated, “At first I was kind of happy because I knew what was going on.” Additionally, some students revealed their use of a strategy explicitly to engender positive emotions or to avoid negative emotions, like Tabitha: “I looked at the first box, I tried to break it up into certain sections, so I did not get overwhelmed by looking at it.”
When students evaluated their experience problem solving in this study, they usually evaluated the problems as difficult rather than easy. Their evaluations of experience were directly connected to aspects of their monitoring while solving. They associated positive emotions and ease with understanding, and negative emotions and difficulty with confusion, a lack of familiarity, or a lack of understanding. Additionally, they identified that the purpose of some strategy use was to avoid negative experiences. Because their evaluations of experience occurred after solving the problems, most students did not act on this evaluation in the context of this study. We speculate that students may act on evaluations of experience by making plans for future problem solving, but our study design did not necessarily provide students with this opportunity. Exploring how students respond to this kind of evaluation in other study designs would be illuminating.
Our data indicate that some first-year life science students are evaluating their solution and experience after individual problem solving. As instructors, we can encourage students to further evaluate their solutions by prompting them to: 1) rethink or redo a problem to see whether they come up with the same answer or wish to amend their initial solution, and 2) predict whether they think their solution is right or wrong. Encouraging students to evaluate by predicting whether their solution is right or wrong is limited by content knowledge. Therefore, it is imperative to help students develop their self-evaluation accuracy by following up their predictions with immediate feedback to help them become well-calibrated ( Osterhage, 2021 ). Additionally, encouraging students to reflect on their experience solving problems might help students identify and verbalize perceived problem-solving barriers to themselves and their instructors. There is likely a highly individualized level of desirable difficulty for each student, where a problem is difficult enough to engage their curiosity and motivation to solve something unknown but does not generate the negative emotions associated with failure that could prevent problem solving from moving forward ( Zepeda et al. , 2020 ; de Bruin et al. , 2023 ). The link between feelings and metacognition in the present study parallels findings from studies that used retrospective methods and found links between feelings of (dis)comfort and metacognition ( Dye and Stanton, 2017 ). This suggests that the feelings students associate with their metacognition are an important consideration when designing future research studies and interventions. For example, helping students coach themselves through the negative emotions associated with not knowing and pivot to what they do know might increase the self-efficacy needed for problem-solving persistence.
To address our second research question, we looked for statements related to self-efficacy in our participants’ think aloud data. Self-efficacy is defined as one’s belief in their capability to carry out a specific task ( Bandura, 1997 ). Alternatively, self-efficacy is sometimes operationalized as one’s confidence in performing specific tasks ( Ainscough et al. , 2016 ). While we saw instances of students making high self-efficacy statements (“I’m confident that I was going in somewhat of the right direction”) and low self-efficacy statements (“I’m not gonna understand it anyways”) during their think aloud interviews, we were particularly intrigued by a distinct form of self-efficacy that appeared in our data that we call “self-coaching” ( Table 4 ). We posit that self-coaching is similar to the ideas of self-modeling or efficacy self-talk that other researchers have described in the past ( Wolters, 2003 ; Usher, 2009 ). In our data, students made self-encouraging statements in which they: 1) reassured themselves about a lack of understanding, 2) reassured themselves that it’s okay to be wrong, 3) encouraged themselves to keep going despite not knowing, or 4) reminded themselves of their prior experience. To highlight the role that self-coaching played in problem solving in our dataset, we first present examples where self-coaching was absent and could have been beneficial for the students in our study. Then we present examples where self-coaching was used.
| Category | Description | Example data |
|---|---|---|
| High self-efficacy | Student expresses confidence in their knowledge or ability to do something. | |
| Low self-efficacy | Student expresses a lack of confidence in their knowledge or ability to do something. | |
| Self-coaching | Student makes a self-encouraging statement about their lack of understanding. | |
| Self-coaching | Student makes a self-encouraging statement about being wrong. | |
| Self-coaching | Student makes a self-encouraging statement to keep going despite not knowing. | |
| Self-coaching | Student makes a self-encouraging statement about their prior experience. | |
When solving the challenging biochemistry problems in this study, first-year life science students often came across pieces of information or parts of the figures that they were unfamiliar with or did not understand. In the Monitoring section, we described how students monitored their understanding and familiarity, but perhaps what is more interesting is how students responded to not knowing and their lack of familiarity ( Figure 2 ). In a handful of cases, we witnessed students get stuck or hung up on what they did not know. We posit that the feeling of not knowing could increase anxiety, cause concern, and heighten self-doubt, all of which can negatively impact a student’s self-efficacy and cause them to stop problem solving. One example of this in our data comes from Tiffani. Tiffani stated her lack of knowledge about how to proceed and followed this with a statement about her lack of ability to solve the problem: “I am actually not sure how to solve this. I do not think I can solve this one.” A few lines later, Tiffani clarified where her lack of understanding rested, but again stated that she could not solve the problem: “I’m not really sure how these type of amino acids pair up, so I can’t really solve it.” In this instance, Tiffani’s lack of understanding is linked to a perceived inability to solve the problem.
Some students also linked not knowing with perceived deficits. For example, in the following quote Chandra linked not knowing how to answer the second part of the Protein X problem with the idea that she is “not very good” with noncovalent interactions: “I’m not really sure about the second part. I do not know what to say at all for that, to predict any new noncovalent, I’m not very good with noncovalent at all.” When asked where she got stuck during problem solving, Chandra stated, “The ‘predict any new noncovalent’ cause [I’m] not good with bonds. So, I cannot predict anything really.” In Chandra’s case, her lack of understanding was linked to a perceived deficit and inability to solve the problem. As instructors, it is at moments like these that we would hope to intervene and help our students persist in problem solving. However, targeted coaching for all students each time they solve a problem can seem like an impossible feat in large, lecture-style college classrooms. Therefore, based on our data, we suggest that encouraging students to coach themselves through these situations is one approach we could use to achieve this goal.
In contrast to the cases of Tiffani and Chandra shared above, we found instances of students self-coaching after acknowledging their lack of understanding about parts of the problem by immediately reassuring themselves that it was okay not to know ( Table 4 ). For example, when exploring the arrows in the Pathway Flux problem figure, Ivy states, “I don’t really know what that little negative means, but that’s okay.” After making this self-coaching statement, Ivy moves on to thinking about the other arrows in the figure and what they mean to formulate an answer. In a similar vein, when some students were faced with their lack of understanding, one strategy they deployed was to avoid dwelling on their lack of knowledge and instead pivot to look for a foothold of something they did know. For example, in the following quote we see Viola acknowledge her initial lack of understanding and familiarity with the Pathway Flux problem and then find a foothold with the term enzymes, which she knows she has learned about in the past: “I’m thinking there’s very little here that I recognize or understand. Just… okay. So, talking about enzymes, I know we learned a little bit about that.”
Some students acknowledged this strategy of pivoting to what they do know. In their problem reflections, Quinn and Gerald explained that they rely on what they do know, even if it is not accurate. As Quinn put it, “taking what I think I know, even if it’s wrong, like I kind of have to, you have to go off of something.” Similarly, Gerald acknowledged his strategy of “it’s okay to get it wrong” when he doesn’t know and connected this strategy to his experience solving problems on high-stakes exams.
I try to use information that I knew and I didn’t know a lot. So, I had to kind of use my strategy where I’m like, if this was on a test, this is one of the questions that I would either skip and come back to or write down a really quick answer and then come back to. So, my strategy for this one is it’s okay to get it wrong. You need to move on and make estimated guess. Like if I wasn’t sure what the arrows meant, so I was like, "okay, make an estimated guess on what you think the arrows mean. And then using the information that you kind of came up with try to get a right answer using that and like, explain your answer so maybe they’ll give you half points…" – Gerald
We also observed students encouraging themselves to persist despite not knowing ( Table 4 ). In the following quote, we see Kyle acknowledge a term he doesn’t know at the start of his think aloud and verbally choose to keep going: “So the title is pathway flux problem. I’m not too sure what flux means, but I’m going to keep on going.” Sometimes this took the form of persisting to write an answer to the problem despite not knowing. For example, Viola stated, “I’m not even really sure what pathway flux is. So, I’m also not really sure what the little negative sign is and it pointing to B. But I’m going to try to type an answer.” Rather than getting stuck on not knowing what the negative feedback loop symbol depicted, she moved past it to come to a solution.
We also saw students use self-coaching to remind themselves of their prior experience ( Table 4 ). In the following example, we see Mila talk herself through the substitution of serine with valine in the Protein X problem: “So, there’s not going to be a hydroxyl anymore, but I don’t know if that even matters, but there, valine, has more to it. I don’t know if that means there would be an effect on the covalent interaction. I haven’t had chemistry in such a long time [pause], but at the same time, this is bio. So, I should still know it. [laughs]” Mila’s tone as she made this statement was very matter-of-fact. Her laugh at the end suggests she did not take what she said too seriously. After making this self-coaching statement, Mila rereads the question a few times and ultimately decides that the noncovalent interaction is affected because of the structural difference between valine and serine. Prior experiences, sometimes called mastery experiences, are one established source of self-efficacy that Mila might have been drawing on when she made this self-coaching statement ( Bandura, 1977 ; Pajares, 1996 ).
Students can be encouraged to self-coach by using some of the phrases we identified in our data as prompts ( Table 4 ). However, we would encourage instructors to rephrase some of the self-coaching statements in our data by removing the word “should” because this term might make students feel inadequate if they think they are expected to know things they don’t yet know. Instead, we could encourage students to remind themselves of times they have successfully solved challenging biology problems by saying things like, “I’ve solved challenging problems like this before, so I can solve this one.” Taken together, we posit that self-coaching could be used by students to decrease anxiety and increase confidence when faced with the feeling of not knowing that can result from monitoring, which could positively impact a student’s self-efficacy and metacognitive regulation. Our results reveal that first-year students are monitoring in a myriad of ways. Sometimes when students monitor, they may not act further on the resulting information because it makes them feel bad or uncomfortable. Self-coaching could support students in acting on their metacognition rather than actively avoiding being metacognitive.
Even with the use of in-the-moment methods like think aloud interviews, we are limited to the metacognition that students verbalized; students may have been employing metacognition while solving that they simply did not verbalize. However, using a think aloud approach in this study ensured we were accessing students’ metacognition in use, rather than their recollection of metacognition used in the past, which is subject to recall bias ( Schellings et al. , 2013 ). Our study, like most education research, may suffer from selection bias, in which the students who volunteer to participate represent a biased sample ( Collins, 2017 ). To address this potential pitfall, we attempted to ensure our sample represented the student body at each institution by using purposeful sampling based on self-reported demographics and varied responses to the revised Metacognitive Awareness Inventory ( Harrison and Vallin, 2018 ). Lastly, while our sample size is large ( N = 52) for qualitative analyses and includes students from three different institutional types, the data are not necessarily generalizable to contexts beyond the scope of the study.
The goal of this study was to investigate first-year life science students’ metacognition and self-efficacy in-the-moment while they solved challenging problems. Think aloud interviews with 52 students across three institutions revealed that while first-year life science students plan, monitor, and evaluate while solving challenging problems, they predominantly monitor. First-year life science students associated monitoring a lack of conceptual understanding with negative feelings whereas they associated positive feelings with monitoring conceptual understanding. We found that what students chose to do after they monitored a lack of conceptual understanding impacted whether their monitoring moved problem solving forward or not. For example, after monitoring a lack of conceptual understanding, students could either not use a strategy and remain stuck or they could use a strategy to move their problem solving forward. One critical finding revealed in this study was that self-coaching helped students use their metacognition to take action and persist in problem solving. This type of self-efficacy related encouragement helped some students move past the discomfort associated with monitoring a lack of conceptual understanding and enabled them to select and use a strategy. Together these findings about in-the-moment metacognition and self-efficacy offer a positive outlook on ways we can encourage students to couple their developing metacognitive regulation skills and self-efficacy to persist when faced with challenging life science problems.
We would like to thank Dr. Paula Lemons for allowing us to use problems developed in her research program for this study and for her helpful feedback during the writing process, the College Learning Study participants for their willingness to participate in this study, Dr. Mariel Pfeifer for her assistance conducting interviews and continued discussion of the data during the writing of this manuscript, C.J. Zajic for his contribution to preliminary data analysis, and Emily K. Bremers, Rayna Carter, and the UGA BERG community for their thoughtful feedback on earlier versions of this manuscript. We are also grateful for the feedback from the monitoring editor and reviewers at LSE , which strengthened the manuscript. This material is based on work supported by the National Science Foundation under Grant Number 1942318. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Submitted: 21 August 2023 Revised: 26 January 2024 Accepted: 9 February 2024
© 2024 S. M. Halmo et al. CBE—Life Sciences Education © 2024 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Part of the book series: Neuropsychology and Cognition (NPCO, volume 19)
This chapter examines the role of cognitive, metacognitive, and motivational skills in problem solving. Cognitive skills include instructional objectives, components in a learning hierarchy, and components in information processing. Metacognitive skills include strategies for reading comprehension, writing, and mathematics. Motivational skills include motivation based on interest, self-efficacy, and attributions. All three kinds of skills are required for successful problem solving in academic settings.
Anand, P.G. and Ross, S.M. (1987). Using computer-assisted instruction to personalize arithmetic materials for elementary school children. Journal of Educational Psychology, 79, 72–78.
Bean, T.W. and Steenwyk, F.L. (1984). The effect of three forms of summarization instruction on sixth graders' summary writing and comprehension. Journal of Reading Behavior, 16, 297–306.
Block, J.H. and Burns, R.B. (1976). Mastery learning. In L.S. Shulman (Ed.), Review of research in education, Volume 4. Itasca, IL: Peacock.
Bloom, B.S. (1976). Human characteristics and school learning. New York: McGraw-Hill.
Bloom, B.S., Engelhart, M.D., Furst, E.J., Hill, W.H., and Krathwohl, D.R. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York: McKay.
Borkowski, J.G., Weyhing, R.S., and Carr, M. (1988). Effects of attributional retraining on strategy-based reading comprehension in learning disabled students. Journal of Educational Psychology, 80, 46–53.
Brown, A.L. and Day, J.D. (1983). Macrorules for summarizing texts: The development of expertise. Journal of Verbal Learning and Verbal Behavior, 22, 1–14.
Chi, M.T.H., Glaser, R., and Farr, M.J. (Eds.). (1988). The nature of expertise. Hillsdale, NJ: Erlbaum.
Chipman, S.F., Segal, J.W., and Glaser, R. (Eds.). (1985). Thinking and learning skills. Volume 2: Research and open questions. Hillsdale, NJ: Erlbaum.
Cook, L.K. and Mayer, R.E. (1988). Teaching readers about the structure of scientific text. Journal of Educational Psychology, 80, 448–456.
Dewey, J. (1913). Interest and effort in education. Cambridge, MA: Riverside Press.
Ericsson, K.A. and Smith, J. (Eds.). (1991). Toward a general theory of expertise. Cambridge, England: Cambridge University Press.
Fitzgerald, J. and Teasley, A.B. (1986). Effects of instruction in narrative structure on children's writing. Journal of Educational Psychology, 78, 424–432.
Gagné, R.M. (1968). Learning hierarchies. Educational Psychologist, 6, 1–9.
Gagné, R.M., Mayor, J.R., Garstens, H.L., and Paradise, N.E. (1962). Factors in acquiring knowledge in a mathematics task. Psychological Monographs, No. 7 (Whole No. 526).
Garner, R., Gillingham, M.G., and White, C.S. (1989). Effects of "seductive details" on macroprocessing and microprocessing in adults and children. Cognition and Instruction, 6, 41–57.
Graham, S. (1984). Communicating sympathy and anger to black and white children: The cognitive (attributional) consequences of affective cues. Journal of Personality and Social Psychology, 47, 40–54.
Graham, S. and Barker, G.P. (1990). The down side of help: An attributional-developmental analysis of helping behavior as a low-ability cue. Journal of Educational Psychology, 82, 7–14.
Graham, S. and Harris, K.R. (1988). Instructional recommendations for teaching writing to exceptional students. Exceptional Children, 54, 506–512.
Halpern, D.F. (Ed.). (1992). Enhancing thinking skills in the sciences and mathematics. Hillsdale, NJ: Erlbaum.
Hayes, J.R. and Flower, L.S. (1986). Writing research and the writer. American Psychologist, 41, 1106–1113.
Lewis, A.B. (1989). Training students to represent arithmetic word problems. Journal of Educational Psychology, 79, 363–371.
Luchins, A.S. and Luchins, E.H. (1970). Wertheimer's seminars revisited: Problem solving and thinking. Vol. 1. Albany, NY: State University of New York.
Mayer, R.E. (1985). Mathematical ability. In R.J. Sternberg (Ed.), Human abilities: An information processing approach (pp. 127–150). New York: Freeman.
Mayer, R.E. (1987). Educational psychology: A cognitive approach. New York: Harper Collins.
Mayer, R.E. (1992). Thinking, problem solving, cognition (2nd ed.). New York: Freeman.
Mayer, R.E. and Wittrock, M.C. (in press). Problem solving and transfer. In D. Berliner and R. Calfee (Eds.), Handbook of educational psychology. New York: Macmillan.
Nickerson, R.S., Perkins, D.N., and Smith, E.E. (Eds.). (1985). The teaching of thinking. Hillsdale, NJ: Erlbaum.
Pintrich, P.R. and De Groot, E.V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82, 33–40.
Pressley, M. (1990). Cognitive strategy instruction. Cambridge, MA: Brookline Books.
Renninger, K.A., Hidi, S., and Krapp, A. (Eds.). (1992). The role of interest in learning and development. Hillsdale, NJ: Erlbaum.
Rinehart, S.D., Stahl, S.A., and Erickson, L.G. (1986). Some effects of summarization training on reading and studying. Reading Research Quarterly, 21, 422–438.
Robins, S. and Mayer, R.E. (1993). Schema training in analogical reasoning. Journal of Educational Psychology, 85, 529–538.
Ross, S.M., McCormick, D., Krisak, N., and Anand, P. (1985). Personalizing context in teaching mathematical concepts: Teacher-managed and computer-managed models. Educational Communication and Technology Journal, 33, 169–178.
Schiefele, U. (1992). Topic interest and level of text comprehension. In K.A. Renninger, S. Hidi, and A. Krapp (Eds.), The role of interest in learning and development (pp. 151–182). Hillsdale, NJ: Erlbaum.
Schiefele, U., Krapp, A., and Winteler, A. (1992). Interest as a predictor of academic achievement: A meta-analysis of research. In K.A. Renninger, S. Hidi, and A. Krapp (Eds.), The role of interest in learning and development (pp. 183–212). Hillsdale, NJ: Erlbaum.
Schoenfeld, A.H. (1979). Explicit heuristic training as a variable in problem-solving performance. Journal for Research in Mathematics Education, 10, 173–187.
Schoenfeld, A.H. (1985). Mathematical problem solving. Orlando, FL: Academic Press.
Schunk, D. (1991). Self-efficacy and academic motivation. Educational Psychologist, 26, 207–231.
Schunk, D.H. and Hanson, A.R. (1985). Peer models: Influences on children's self-efficacy and achievement. Journal of Educational Psychology, 77, 313–322.
Smith, M.U. (Ed.). (1991). Toward a unified theory of problem solving: Views from the content domains. Hillsdale, NJ: Erlbaum.
Segal, J.W., Chipman, S.F., and Glaser, R. (Eds.). (1985). Thinking and learning skills. Volume 1: Relating instruction to research. Hillsdale, NJ: Erlbaum.
Sternberg, R.J. (1985). Beyond IQ: A triarchic theory of human intelligence. Cambridge, England: Cambridge University Press.
Sternberg, R.J. and Frensch, P.A. (Eds.). (1991). Complex problem solving: Principles and mechanisms. Hillsdale, NJ: Erlbaum.
Sternberg, R.J. and Gardner, M.K. (1983). Unities in inductive reasoning. Journal of Experimental Psychology: General, 112, 80–116.
Taylor, B.M. and Beach, R.W. (1984). The effects of text structure instruction on middle-grade students' comprehension and production of expository text. Reading Research Quarterly, 19, 134–146.
Wade, S.E. (1992). How interest affects learning from text. In K.A. Renninger, S. Hidi, and A. Krapp (Eds.), The role of interest in learning and development (pp. 255–278). Hillsdale, NJ: Erlbaum.
Weiner, B. (1986). An attributional theory of motivation and emotion. New York: Springer-Verlag.
Wertheimer, M. (1959). Productive thinking. New York: Harper and Row.
White, R.T. (1974). The validation of a learning hierarchy. American Educational Research Journal, 11, 121–136.
Zimmerman, B.J. and Martinez-Pons, M. (1990). Student differences in self-regulated learning: Relating grade, sex, and giftedness to self-efficacy and strategy use. Journal of Educational Psychology, 82, 51–59.
Authors and affiliations.
Department of Psychology, University of California, Santa Barbara, USA
Richard E. Mayer
Editors and affiliations.
Department of Education, The City College of the City University of New York, New York, NY, USA
Hope J. Hartman (Professor of Education, Professor of Educational Psychology)
© 2001 Springer Science+Business Media Dordrecht
Mayer, R.E. (2001). Cognitive, Metacognitive, and Motivational Aspects of Problem Solving. In: Hartman, H.J. (eds) Metacognition in Learning and Instruction. Neuropsychology and Cognition, vol 19. Springer, Dordrecht. https://doi.org/10.1007/978-94-017-2243-8_5
Publisher Name : Springer, Dordrecht
Print ISBN : 978-90-481-5661-0
Online ISBN : 978-94-017-2243-8
This study used a quasi-experimental design to determine the effects of modular instruction on third-year BEED students of Eastern Samar State University (ESSU) who were exposed to the lecture method and to modular instruction in teaching word problem solving. Its purpose was to answer the following questions: (1) Is there a significant difference in the pretest mean scores? (2) Is there a significant difference in the posttest mean scores? (3) Is there a significant difference between the mean gain scores? Based on the pretest and posttest mean scores of both the control and experimental groups, the following findings were formulated: (1) there is no significant difference between the pretest mean scores of the subjects; (2) there is a significant difference between the posttest mean scores of the subjects; and (3) there is a significant difference between the mean gain scores of the two groups of respondents, experimental and control. The experimental group who were taught by modul...
Journal of Mathematical Sciences & Computational Mathematics
Ariel Villar
This study used an experimental pretest-posttest design to compare the effectiveness of computer-aided modular instruction with the traditional method of teaching word problems involving fractions to grade six pupils of Gadgaran Integrated School, Calbayog City, Samar, during the school year 2019-2020. Computer-aided modular instruction is a teaching technique that enables pupils to interact with a lesson programmed into the computer; it was given to the experimental group. The traditional method, on the other hand, is the usual way of teaching, consisting of lecture and discussion, and was given to the control group. A single class of regular grade six pupils was chosen as the subject of the study. Their average grade in Mathematics during the first grading period was at the approaching-proficiency level in both the experimental and control groups. They were randomly assigned using an odd-or-even technique. The instrument used in this study was researcher-ma...
American Journal of Education and Technology
Angel Dela cruz
The study aimed to determine the learning styles and learning abilities of grade 6 pupils in dealing with modular learning. A descriptive design was used. The survey was conducted at Lt. Andres Calungsud Elementary School with 30 elementary pupils enrolled in modular learning for School Year 2021-2022, the majority of whom were male. A researcher-made survey questionnaire was used for data gathering. Frequency and percentage distribution, mean and standard deviation, and MegaStat were used in treating the data. The study revealed that the pupils have difficulty dealing with terms in their modules. The data show that the respondents obtained the highest mean in the visual learning style, interpreted as Often (M = 2.60), and the lowest mean in the auditory learning style; the respondents' reading/writing learning style obtained the lowest overall mean, interpreted as Sometimes (OM = 1.94). The study found that the biggest problem encountered by the students in dealing with modular learning is con...
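Verbal labels attached to scale means, such as reading M = 2.60 as "Often", come from fixed interpretation bands on the response scale. A minimal sketch, assuming a 4-point frequency scale with hypothetical cut-offs (the study's actual band boundaries are not reported):

```python
# Hypothetical interpretation bands for a 4-point frequency scale
# (Never / Sometimes / Often / Always). The cut-offs below are
# illustrative only; the study does not report its actual bands.
import statistics

BANDS = [
    (1.00, 1.74, "Never"),
    (1.75, 2.49, "Sometimes"),
    (2.50, 3.24, "Often"),
    (3.25, 4.00, "Always"),
]

def interpret(mean: float) -> str:
    """Map a scale mean onto its verbal interpretation band."""
    for lo, hi, label in BANDS:
        if lo <= mean <= hi:
            return label
    raise ValueError(f"mean {mean} is outside the 1-4 scale")

visual_ratings = [3, 2, 3, 2, 3]  # simulated responses, not study data
m = statistics.mean(visual_ratings)
print(f"M = {m:.2f} -> {interpret(m)}")  # M = 2.60 -> Often
```

Each band is half a scale step wide (1.74 - 1.00 = 0.74, etc.), which is the common convention for converting continuous means back to discrete response anchors.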
Majid Haghverdi
This paper focuses on two approaches for facilitating the process of word problem solving. The first approach distinguishes the different kinds of errors that occur, and the second identifies the various kinds of required underlying knowledge. The first approach applies Knifong and Holtan's framework of error types, and the second applies Mayer's (1992) theory of the knowledge underlying word problem solving. The main aim of this paper is to examine the relationship between the different kinds of errors and the various kinds of knowledge required in solving arithmetic word problems. The research methodology is semi-experimental. The subjects are 89 eighth-grade students (male and female). The research tools are a descriptive mathematics test comprising six word problems and a directed interview. The results indicate that, in solving arithmetic word problems, students' errors largely result from a lack of linguistic, semantic, structural, and communicational knowledge. This ...
Psychology and Education: A Multidisciplinary Journal
Psychology and Education, Meridel Tinonas, Jennifer B. Jalique, Anna Mae Joy T. Tamon
This study was conducted to determine the effectiveness of the modular instruction modality of Central Philippines State University through the lens of students. It employed a descriptive design with 376 respondents obtained through stratified sampling. The study determined the students' demographic profile, the extent and level of effectiveness of modular instruction in terms of clarity, constructive alignment, and content, and the differences in the extent and level of effectiveness of the modular instruction modality when respondents were grouped by demographic profile. The respondents were the first-year and second-year students of the ten campuses of CPSU enrolled in school year 2020-2021. The level of effectiveness of modular learning in the three areas was rated effective. There was a significant difference in the extent of modular learning in the three areas: content showed a significant difference when grouped according to the respondents' sex and campus, and the same result was obtained for clarity when grouped according to the respondents' course, but not for constructive alignment and content. When grouped by the respondents' age and year level, however, the extent of modular learning in all aspects showed no significant result. There was a significant difference in the level of effectiveness in the three areas when grouped according to campus and sex, except in clarity; when grouped according to the respondents' age, course, and year level, the level of modular learning in all aspects showed no significant result. A significant relationship between the extent and level of effectiveness of the modular instruction modality was found in all aspects.
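Stratified sampling of the kind described typically allocates the total sample across strata (here, campuses) in proportion to their sizes. A short sketch of proportional allocation; only the total sample size (376) comes from the abstract, while the campus names and enrolments are hypothetical:

```python
# Proportional allocation for stratified sampling.
# Campus names and enrolments are hypothetical; only the total
# sample size (376) comes from the abstract.
def proportional_allocation(strata: dict, n: int) -> dict:
    """Split a sample of size n across strata in proportion to stratum size."""
    total = sum(strata.values())
    alloc = {name: round(n * size / total) for name, size in strata.items()}
    # Adjust for rounding so the allocations sum exactly to n.
    diff = n - sum(alloc.values())
    largest = max(strata, key=strata.get)
    alloc[largest] += diff
    return alloc

campuses = {"Main": 2400, "North": 900, "South": 700, "East": 500}
print(proportional_allocation(campuses, 376))
# {'Main': 201, 'North': 75, 'South': 58, 'East': 42}
```

Proportional allocation keeps each stratum's share of the sample equal to its share of the population, so campus-level comparisons like those reported are not distorted by over- or under-sampled campuses.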
Romiro Bautista
This study investigated the effects of personalized instruction on the attitude and performance of Bahraini students in algebraic word problem solving. A total of 49 students enrolled in College Algebra in the first trimester of SY 2011-2012 served as the subjects of the study. A pre-test was administered and scored as the basis for determining the high and low ability levels of students in Mathematics. The examination used as the pre-test was formulated by the author and field-tested by the Algebra professors before it was administered. Personalization in instruction was introduced through personalized modular instruction (in terms of content and procedure, with translation into Arabic), followed by exercises and drills (also written in English and translated into Arabic). Students were engaged in active learning through direct instruction using Mayer's model, small-group discussion, peer mentoring, and follow-up sessions by the teacher. Transcripts were analyzed to determine the remediation to be utilized. After six sessions, the students were given a post-test and a student attitude survey. It was found that students exposed to the constructive learning environment through personalized instruction performed better and developed a better attitude towards algebraic word problem-solving tasks: a highly significant effect on students' academic performance in problem solving, with the model accounting for a moderately high share (90.8%) of the variability in their academic performance. Keywords: Personalized Instruction, Academic Performance, Student Attitude, Constructive Learning Environment, Cooperative Learning, Direct Instruction, Active Learning, Small Group Discussion.