GEEs do not allow inference on the correlation structure of the repeated responses, but MERs do.
Study of Total Motor Scores. One way of assessing HD progression is through clinical evaluations using the Unified Huntington’s Disease Rating Scale (UHDRS [ ]). The UHDRS includes components that rate motor, cognitive, functional, and behavioral performance. In all cases, our outcome of interest is the total motor score (TMS), a component of the UHDRS that assesses the subject’s overall motor performance from 0 (no impairment) to 124 (high impairment).
Case 1: Single site study of TMS values collected at two time points. Suppose study participants from a single site are divided into three disease categories: “low,” “medium,” and “high,” corresponding to the likelihood of being diagnosed with HD based on motor signs within the next 5 years. Inclusion into a specific disease category is based on percentile cutoffs of the calculated CAG-Age Product (CAP) formula [ ]: age at baseline × (CAG repeats − 33.66). In general, the upper end of the “low” disease category is the 25th–40th percentile, and the lower end of the “high” disease category is the 60th–75th percentile. Exact cutoffs are based on an algorithm [ ] applied to study data (see the sketch following case 3). For each participant, we collect TMS values at the beginning and end of the study.
Case 2: Single site study of TMS values collected at multiple time points. Similar to case 1, except now we collect TMS values on each participant annually over 10 years.
Case 3: Multiple site study of TMS values collected at multiple time points. Similar to case 2, except now participants come from multiple sites. |
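To make the CAP-based categorization from case 1 concrete, here is a minimal sketch in Python. The data values, column names, and the 33rd/67th percentile cutoffs are illustrative assumptions; exact cutoffs come from the algorithm cited above.

```python
import numpy as np
import pandas as pd

# Illustrative participant data; column names and values are assumptions.
df = pd.DataFrame({
    "age_baseline": [42.0, 35.0, 51.0, 29.0, 46.0, 38.0],
    "cag_repeats":  [41,   44,   39,   46,   42,   43],
})

# CAP formula: age at baseline x (CAG repeats - 33.66)
df["cap"] = df["age_baseline"] * (df["cag_repeats"] - 33.66)

# Assign "low"/"medium"/"high" categories from percentile cutoffs.
# The 33rd/67th percentiles are placeholders; the exact cutoffs are
# derived from an algorithm applied to study data, as noted in the text.
low_cut, high_cut = np.percentile(df["cap"], [33, 67])
df["category"] = pd.cut(df["cap"],
                        bins=[-np.inf, low_cut, high_cut, np.inf],
                        labels=["low", "medium", "high"])
print(df)
```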
In hierarchical modeling terms, cases 1 and 2 are two-level models: level 1 represents the repeated TMS values over time for each participant (the “within-subject” model) and level 2 represents the TMS values between participants (the “between-subject” model). Case 3 is a three-level model which extends the two-level model with level 3 representing the TMS values among sites.
Correlated data.
Measurements in longitudinal studies are correlated by design. Correlation exists between repeated measures on the same individual (e.g., cases 1, 2) or from clustering of individuals across sites (e.g., case 3). Correlation within a site exists because subjects from the same site may have similar responses due to the site investigator, study protocol variations, or equipment (e.g., MRI scanners). Attempts are always made to standardize assessments through training and, in the case of scanners, through the use of phantoms (i.e., specially designed objects that help evaluate and tune a scanner for reliability purposes). Ignoring the different sources of correlation in longitudinal studies has severe consequences: higher false positive rates and invalid confidence intervals from underestimated standard errors [ 8 ].
Another concern of longitudinal studies, particularly multi-site studies, is handling vastly different numbers of participants across sites. Unequal sample sizes between sites have three key consequences. First, they risk violating the constant variance assumption of ANOVA-based methods (“Starter methods for longitudinal data analysis” section), which is not an issue for more advanced modeling approaches (“Modern methods for longitudinal data” section). Second, power is limited by the site with the smallest sample size. Third, even in more advanced modeling approaches, some effects may not be detectable. For example, if one site has a large number of “high” disease category participants and another site has a very small number of “low” disease category participants, then the effect of disease category may not be easily identified.
Longitudinal studies generally encourage regularly occurring visits for data collection. But visit frequency and the total number of study visits vary due to scheduling limitations and dropout. In our TMS example, individuals in the “high” or “medium” disease categories may develop more severe disease, and thus limited mobility, as the study progresses and may miss scheduled visits.
Missing data are the most problematic issue, as there is no universally accepted correction, and inappropriate corrections can have negative consequences.
Missing data can decrease the study’s statistical power and increase bias. Statistical power improves when the study’s sample size increases or when the variability of the study’s outcome measure (e.g., total motor score) is accurately estimated. Unfortunately, missing data negatively impact both the sample size and the variability. First, analyses that exclude participants with missing values inadvertently reduce the study’s sample size, potentially reducing the statistical power. Second, when participants who would have had extreme data values drop out (e.g., participants with very high or very low TMS), the variability of the study’s outcome measure is incorrectly underestimated.
Proper analysis of missing data requires understanding the “missingness mechanism,” which describes why missing data occur [ 9 ]. Three mechanisms exist: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). We provide examples of each mechanism using cases 1, 2, and 3 where the outcome variable is the total motor score (TMS). Table 1 provides a summary of how the methods discussed in this paper behave under each missingness mechanism.
MCAR is when the missingness of the outcome variable is completely unsystematic. For example, consider case 1 where TMS is measured on two occasions. Suppose budget cuts force the investigator to reduce the number of subjects assessed at the second evaluation. If the investigator randomly samples among those participants initially evaluated, then missingness at the second time point is MCAR. This is because the subsample is random and not related to any other variable in the study.
Verifying MCAR can be achieved with Little’s test [ 9 ], which compares group characteristics (e.g., means) of participants with and without missing data. If the characteristics are not equal for both groups, MCAR does not hold.
MAR is when the probability that an outcome is missing is related to some other fully observed variable in the model, but not to the variable with the missing value itself. In case 1, for example, suppose family history information is additionally collected on all participants at the first visit. If participants with at least two family members who have HD are less likely to return for the second evaluation, then the missingness is MAR. This is because the likelihood of missing data depends on the observed family history information.
Testing MAR versus MCAR can be achieved with the SPSS missing data module. The general rule, however, is to assume the missingness mechanism is MAR unless there are strong reasons to assume MCAR.
MNAR is when the missingness depends on the missing values themselves. In case 1, for example, suppose TMS values are fully observed at the first evaluation, but some are missing at the second, and that no family history information was collected at the first visit. If the missing values are from participants who have at least two family members with HD, then this is MNAR because the missingness depends on the unobserved family history information. To better understand this, note the distinction between our MAR and MNAR examples: for MAR, the missingness at the second evaluation depends on observed family history information, whereas missingness in the MNAR example depends on unobserved family history information.
It is impossible to distinguish MNAR from MAR because doing so involves comparisons with unobserved missing data. When missingness is suspected to be MNAR, it is important to consult with a statistician to develop an appropriate model that accounts for this missingness mechanism. The usual approach is a joint model where the missingness model is varied and tested using sensitivity analyses [ 8 ].
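The three mechanisms can be made concrete with a small simulation. This is a hedged sketch echoing the case 1 examples above; the variable names, dropout rates, and distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Toy data echoing case 1: TMS at visit 1 (fully observed), family
# history collected at visit 1, and TMS at visit 2 (subject to dropout).
tms_visit1 = rng.normal(20, 8, n)
relatives_with_hd = rng.poisson(1.0, n)
tms_visit2 = tms_visit1 + rng.normal(2, 3, n)

# MCAR: a purely random 30% subsample misses visit 2 (e.g., budget cuts
# force a random subset of participants to be dropped from re-assessment).
miss_mcar = rng.random(n) < 0.30

# MAR: dropout probability depends on the OBSERVED family history
# (participants with >= 2 affected relatives are less likely to return).
miss_mar = rng.random(n) < np.where(relatives_with_hd >= 2, 0.60, 0.10)

# MNAR: dropout probability depends on the missing value itself (here,
# a higher visit-2 TMS makes dropout more likely) or, as in the text's
# example, on a relevant variable that was never collected.
miss_mnar = rng.random(n) < 1.0 / (1.0 + np.exp(-(tms_visit2 - 25) / 5))

for name, mask in [("MCAR", miss_mcar), ("MAR", miss_mar), ("MNAR", miss_mnar)]:
    print(f"{name}: {mask.mean():.0%} missing at visit 2")
```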
Several simple remedies have been proposed for missing data, but none is generally recommended.
Complete-case analysis uses data only from those participants whose data are observed throughout the entire study. When missingness is MCAR, complete-case analysis yields unbiased parameter estimates. Otherwise, it yields biased, less precise estimates.
Last observation carried forward (LOCF) replaces a participant’s missing values with the last observed one. Assuming a participant will maintain that last observed value is unrealistic in most neurodegenerative disease studies.
Simple mean imputation replaces missing observations with the mean for that variable. Conditional mean imputation (or regression) replaces missing observations with predictions from regressing the outcome on other completely observed variables.
Despite their simplicity, both methods impute missing values only once and thus disregard the uncertainty of the imputed values. Such single imputation biases standard errors downward, leading to artificially narrow confidence intervals that give a false view of the estimate’s precision.
A remedy is multiple imputation, where multiple copies of the original data are generated and missing values are replaced with draws from an appropriate stochastic model. The copies are analyzed as complete data sets and parameter estimates from each set are combined to produce a single estimate. Standard errors take into account the uncertainty of the imputation process. Despite its advantages over single imputation, multiple imputation is still not recommended by the FDA.
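To illustrate the multiple-imputation workflow, the sketch below uses the chained-equations (MICE) implementation in Python's statsmodels on simulated data; the regression model, missingness rate, and numbers of burn-in cycles and imputations are arbitrary choices for demonstration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation.mice import MICE, MICEData

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)
df = pd.DataFrame({"x": x, "y": y})
df.loc[rng.random(n) < 0.2, "y"] = np.nan   # inject ~20% missing outcomes

imp = MICEData(df)                  # builds the chained-equations imputer
mice = MICE("y ~ x", sm.OLS, imp)   # analysis model fit to each imputed copy
# Generate multiple imputed data sets, fit the model on each, and pool
# the estimates; pooled standard errors reflect imputation uncertainty.
results = mice.fit(n_burnin=10, n_imputations=10)
print(results.summary())
```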
The FDA recommends approaches that account for the missing data mechanism such as generalized estimating equations (“Generalized estimating equations (GEE)” section) and mixed effects regression models (“Mixed effects regression (MER)” section), with the latter preferred because of its ability to handle a more general missingness mechanism (MAR, compared to MCAR for generalized estimating equations).
Change score analysis.
When there are only two time points in the study (e.g., case 1), a straightforward approach is analyzing the change score: the difference between the measures at the two time points. For case 1, a change score is the change in TMS between the start and end of the study. To compare change scores between the “low,” “medium,” and “high” disease categories, one could use a one-way analysis of variance (ANOVA). A one-way ANOVA is valid here because we are analyzing change scores, not the repeated measures individually (hence the correlation problem is removed).
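A minimal sketch of the change-score analysis in Python; the change scores are simulated and all values are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical change scores (end-of-study TMS minus baseline TMS)
# for 30 participants in each disease category.
change_low    = rng.normal(1.0, 3.0, 30)
change_medium = rng.normal(3.0, 3.0, 30)
change_high   = rng.normal(6.0, 3.0, 30)

# One-way ANOVA is valid here: each participant contributes a single,
# independent change score, so no repeated-measures correlation remains.
f_stat, p_value = stats.f_oneway(change_low, change_medium, change_high)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```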
Analyzing change scores has been widely used in neurodegenerative disease research. For HD, Sturrock and colleagues [ 10 ] used the approach to evaluate longitudinal in vivo brain metabolite profiles in HD over a 24-month period. Poudel and colleagues [ 11 ] assessed longitudinal changes in white matter microstructure in HD over an 18-month period.
ANOVA approaches for longitudinal data include a repeated measures ANOVA and multivariate ANOVA (MANOVA). Both focus on comparing group means (e.g., the TMS scores between “low,” “medium,” and “high” disease categories), but neither informs about subject-specific trends over time.
ANOVA approaches are limited in handling irregularly timed and missing data. Repeated measures ANOVA requires all participants be measured at the same number of time points, and MANOVA requires fully complete data. Applying ANOVA methods to data with missing observations yields biased parameter estimates [ 12 ].
Repeated measures ANOVA assesses group differences over time. Group sizes may be different, but subjects must be measured at the same number of time points. A repeated measures ANOVA is appropriate for case 2, and we describe the model in terms of this example.
The approach uses two main factors (time and disease category in case 2) and an interaction term (time × disease category) to assess group differences over time. For case 2, the time main effect tests whether TMS significantly changes over time averaged across disease categories. The disease category main effect tests whether, on average, one disease group has higher TMS than another. The interaction term, when statistically significant, indicates that the effect of time varies between disease categories. This variation can be seen by plotting the sample means of TMS over time by disease category: one may observe whether TMS for one disease category increases (or decreases) over time compared to another.
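As a hedged sketch of this analysis for case 2, the code below simulates long-format TMS data, runs a mixed (between-within) ANOVA using the third-party pingouin package (one of several options), and plots the sample means by disease category; all names and values are assumptions.

```python
import numpy as np
import pandas as pd
import pingouin as pg
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)

# Simulated long-format data; slopes are chosen so that disease
# categories diverge over time (a time x category interaction).
rows = []
for cat, slope in [("low", 0.3), ("medium", 0.8), ("high", 1.5)]:
    for i in range(30):
        base = rng.normal(15, 5)              # subject-specific baseline
        for year in range(10):
            rows.append((f"{cat}{i}", cat, year,
                         base + slope * year + rng.normal(0, 2)))
df = pd.DataFrame(rows, columns=["subject", "category", "year", "tms"])

# Mixed ANOVA: time (within), disease category (between), interaction.
print(pg.mixed_anova(data=df, dv="tms", within="year",
                     subject="subject", between="category"))

# Sample means of TMS over time by disease category; diverging lines
# correspond to a time x disease-category interaction.
means = df.groupby(["category", "year"])["tms"].mean().unstack(level=0)
ax = means.plot(marker="o")
ax.set_xlabel("Year")
ax.set_ylabel("Mean TMS")
plt.show()
```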
A downside of repeated measures ANOVA is that it assumes the measured outcomes have equal variances and covariances over time. This may be unrealistic since variances tend to increase with time and covariances tend to decrease as the time interval between measurements increases. The MANOVA model, in comparison, makes more flexible variance-covariance assumptions as discussed next.
MANOVA models treat repeated observations as a vector (i.e., observations are multivariate). For example, in case 2, for each person in each disease category, the multivariate observations are 10-dimensional vectors of the TMS scores measured over 10 years.
MANOVA makes no assumptions about the variance-covariance structure of the repeated measures, and thus removes misspecification concerns. Despite this flexibility, MANOVA requires complete data. Subjects with incomplete data are either removed from the analysis or have missing values imputed, both of which are disadvantageous (“Non-recommended practices for missing data” section). Furthermore, MANOVA models do not allow time-varying predictors, which are critical to modeling disease dynamics.
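A sketch of the MANOVA formulation using Python's statsmodels on simulated, complete data (MANOVA would drop any subject with a missing year); names and values are illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(4)

# Wide-format data: each row is one subject; columns t0..t9 hold the
# 10-dimensional vector of annual TMS values (complete data required).
frames = []
for cat, slope in [("low", 0.3), ("medium", 0.8), ("high", 1.5)]:
    base = rng.normal(15, 5, size=(30, 1))
    tms = base + slope * np.arange(10) + rng.normal(0, 2, size=(30, 10))
    f = pd.DataFrame(tms, columns=[f"t{y}" for y in range(10)])
    f["category"] = cat
    frames.append(f)
wide = pd.concat(frames, ignore_index=True)

# Test whether the mean TMS vector differs across disease categories.
formula = " + ".join(f"t{y}" for y in range(10)) + " ~ category"
print(MANOVA.from_formula(formula, data=wide).mv_test())
```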
The limitations of ANOVA approaches motivate the use of modern approaches that robustly handle the challenges of longitudinal studies, as discussed next.
Two preferred methods for longitudinal data are the generalized estimating equations (GEE) model [ 13 ] and mixed effects regression (MER) [ 14 ]. Both allow time-invariant predictors (e.g., gender, genotype) and time-varying predictors (e.g., age), and handle irregularly timed and missing data without the need for explicit imputation.
A GEE model is designed for analyzing the regression relationship between covariates and repeated responses, but not the correlation structure of the repeated responses. If the latter is of interest, a GEE is inappropriate and one should consider MER (“Mixed effects regression (MER)” section). In estimating the regression parameters, the correlation structure in a GEE is represented using a working, potentially incorrect model (see “Modeling correlation” section). Even when the working model is incorrect, however, the GEE approach yields unbiased parameter estimates. Traditionally, GEEs are intended for two-level hierarchical data (e.g., cases 1 and 2), but recent work [ 15 ] has extended them to three levels (e.g., case 3).
GEEs have been widely used in the neurodegenerative disease literature. For HD, Maroof and colleagues [ 16 ] used GEEs to model trajectories of cognitive scores (repeated response) in relation to time, education and baseline age. Keogh and colleagues [ 1 ] used GEEs to separately assess longitudinal performance of motor, cognitive and neuropsychiatric functions (repeated response) in relation to medication use.
Two primary advantages of GEEs are their robustness to misspecification of the repeated measures’ correlation structure and their computational simplicity. Estimation in GEEs uses a working correlation structure that may be inconsistent with the observed correlations of the repeated measures. Regardless, the regression parameter estimates are consistent, but the associated standard errors are incorrect when the working correlation structure is wrong: standard errors of time-dependent covariates are generally overestimated and those of time-independent covariates underestimated. See [ 17 ] for recommended corrections to standard error estimates.
The ability to yield valid estimates even when the correlation structure is not correctly modeled is a benefit similar to that of MANOVA models (“MANOVA” section), but GEEs are more advantageous in that they do not disregard participants with incomplete data. Finally, estimation in GEEs is carried out with quasi-likelihood methods, which are computationally easier than the full-likelihood methods used for MER.
Limitations of GEEs are threefold. First, GEEs assume missing data are MCAR, which may not hold for neurodegenerative disease studies. Extensions to the more flexible MAR assumption have been proposed, including a weighted estimating equations approach [ 8 ]. Second, one cannot perform hypothesis testing on correlation parameters since these are not directly estimated. Third, usual methods (e.g., likelihood ratio tests, Akaike/Bayesian information criteria) cannot be used to test and compare model fits because the focus is solely on regression parameters, not all model parameters (i.e., regression and correlation parameters). All of these limitations are handled by MER models.
Estimation of regression parameters in a GEE is carried out under a working correlation structure for the repeated measures, meaning that a (possibly incorrect) model is chosen to represent the correlation observed between repeated measures. The working structure is selected at the beginning of the analysis, and we recommend that it resemble the observed correlations for better estimation of standard errors. However, even if the working structure is incorrect, regression parameter estimates remain consistent. We describe next four common working structures and provide guidance on each.
The independent correlation structure assumes there is no correlation between repeated measures. This is a simple, yet unrealistic choice for longitudinal data, and one that results in large efficiency loss for time-varying covariates [ 18 ]. It is a fair choice for initial analyses to quickly assess the regression relationship between covariates and repeated responses.
The exchangeable correlation assumes correlations within a cluster are equal. In our example, consider TMS at baseline for all participants clustered by disease category (or by sites). An exchangeable correlation structure assumes that the correlation between TMS values of any two participants within a disease category (or within a site) is the same regardless of which participants are chosen. That is, participants are exchangeable within a disease category (or within a site). The correlation between participants from different disease categories (or different sites) is zero.
An example where exchangeable correlation is unreasonable is case 2, where clusters are the participant’s TMS values over 10 years. Assuming exchangeable correlation means that the correlation between TMS values in years 1 and 2 is the same as the correlation between TMS values in years 1 and 10. This is unrealistic since TMS values closer in time (years 1 and 2) are likely to be more highly correlated than those farther apart (years 1 and 10). In practice, an exchangeable correlation is reasonable when “objects” in a cluster can be reordered without impact; e.g., participants in the same disease category or site, but not measures over time. An autoregressive correlation is more appropriate for case 2, as described next.
The autoregressive correlation accounts for time-varying correlation by assuming that measurements taken closer in time are more highly correlated than measurements taken farther apart. In practice, this structure is identified using an autocorrelation plot which displays the correlation by time-lag (e.g., ACFPLOT in SAS). A steadily decreasing plot is indicative of autoregressive correlation.
An unstructured correlation makes no assumptions about the correlation form and uses a different parameter for each correlation component (i.e., for n time points, there are n(n − 1)/2 components). Though flexible, this model is computationally costly. In case 2, with 10 time points, there are 10(10 − 1)/2 = 45 separate correlations to be estimated. The large number of parameters decreases the accuracy of estimates and may even lead to model fitting failure. In practice, an unstructured correlation is recommended only when there are few time points.
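The sketch below, using Python's statsmodels on simulated data, first inspects the observed correlations among time points and then fits the same GEE under three of the working structures above; all names and values are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
rows = []
for cat, slope in [("low", 0.3), ("medium", 0.8), ("high", 1.5)]:
    for i in range(30):
        base = rng.normal(15, 5)              # subject-specific baseline
        for year in range(10):
            rows.append((f"{cat}{i}", cat, year,
                         base + slope * year + rng.normal(0, 2)))
df = pd.DataFrame(rows, columns=["subject", "category", "year", "tms"])

# Inspect observed correlations among time points to guide the choice of
# working structure: near-constant values suggest exchangeable, while
# decay with increasing lag suggests autoregressive.
print(df.pivot(index="subject", columns="year", values="tms").corr().round(2))

# Fit the same regression under three working correlation structures;
# regression estimates stay consistent even if the structure is wrong.
structures = {
    "independence": sm.cov_struct.Independence(),
    "exchangeable": sm.cov_struct.Exchangeable(),
    "AR(1)":        sm.cov_struct.Autoregressive(),
}
for name, cov in structures.items():
    fit = smf.gee("tms ~ year * C(category)", groups="subject", data=df,
                  time="year", family=sm.families.Gaussian(),
                  cov_struct=cov).fit()
    print(f"{name:>13}: year effect = {fit.params['year']:.3f}")
```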
MER models provide information regarding the regression relationship between covariates and repeated responses, and about the correlation structure of the repeated responses. MER captures correlations of repeated measures using “random effects” that describe cluster-specific trends over time. In case 2, where clusters are individuals, random effects can describe each participant’s trend over time, and in case 3, an additional random effect can differentiate sites. Random effects allow estimation of cluster-specific effects useful for understanding interindividual variability in longitudinal responses and for cluster-specific predictions.
MERs have been widely used in neurodegenerative disease studies. For HD, Tabrizi and colleagues [ 19 ] used linear MERs to assess the longitudinal changes of different outcomes: clinical, cognitive, quantitative motor, neuropsychiatric assessments and MRI measures of the brain over a 36-month period. Each outcome was separately modeled using MERs, and clusters corresponded to each person’s annual measures over the 36-month period. Long and colleagues [ 20 ] used linear MERs to estimate the timing of motor impairments, and Collins and colleagues used them to assess finger tapping as a longitudinal marker of HD progression.
A MER model is advantageous over GEEs in that (i) it accommodates multi-level hierarchical models that allow predictions at each level of the data hierarchy; (ii) one may perform hypothesis testing on correlation parameters since they are directly estimated; (iii) usual methods (e.g., likelihood ratio tests, Akaike/Bayesian information criteria) can be used to test and compare model fits because all model parameters (i.e., regression and correlation parameters) are estimated; and (iv) it is more robust to missing data, assuming missingness is MAR, which is more general than the MCAR assumption of GEEs.
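A hedged sketch of a linear MER for case 2 in Python's statsmodels, with a random intercept and random slope on time for each participant; the data are simulated and all names are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
rows = []
for cat, slope in [("low", 0.3), ("medium", 0.8), ("high", 1.5)]:
    for i in range(30):
        base = rng.normal(15, 5)               # subject-specific baseline
        subj_slope = slope + rng.normal(0, 0.2)  # subject-specific slope
        for year in range(10):
            rows.append((f"{cat}{i}", cat, year,
                         base + subj_slope * year + rng.normal(0, 2)))
df = pd.DataFrame(rows, columns=["subject", "category", "year", "tms"])

# Linear MER: fixed effects for time, category, and their interaction;
# random intercept and random slope on year within each participant.
model = smf.mixedlm("tms ~ year * C(category)", data=df,
                    groups="subject", re_formula="~year")
result = model.fit()
print(result.summary())

# Estimated random effects: one (hypothetical) participant's deviation
# from the population intercept and slope, i.e., a subject-specific trend.
print(result.random_effects["low0"])
```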
A primary limitation of MER models is their computational complexity relative to GEEs, particularly with nonlinear MERs, which involve time-consuming numerical integration over the random effects. A second limitation is the reliance on correct specification of the mean and correlation structure of the repeated responses for valid hypothesis testing conclusions. We discuss next the impact of misspecification.
Correlation in MERs is captured through random effects and their associated distributions. In theory, correctly estimating model parameters requires accurately specifying the random effect distribution (the standard assumption is a normal distribution) [ 21 ]. But in practice, an incorrect distribution may not have severe consequences.
When the random effects distribution is specified incorrectly, but the covariates and random effects are independent, then parameter estimates and associated standard errors are valid [ 22• ]. Otherwise, when the covariates and random effects are dependent, then bias is incurred [ 23 , 24 ].
Covariates and random effects depend on each other when, for example, the variability of the random effect depends on the patient’s disease category (cases 1 and 2) or site location (case 3). Testing for this dependence can be done using the Hausman chi-squared test [ 24 ]. If there is no evidence of dependence, then we recommend applying the MER assuming random effects are normally distributed. Otherwise, the investigator should consult with a statistician and use a procedure that makes no modeling assumptions about the random effect distribution [ 22• ].
GEEs and MERs can model time-varying predictors useful for understanding disease progression; examples include changes in medication usage (yes/no response), medication dosage, blood pressure, and weight.
Time-varying predictors are typically modeled using linear combinations of splines, which are flexible curves that connect two or more points [ 25 ]. Spline modeling involves two decisions: (i) the choice of the spline functions and (ii) the number of splines used. These decisions impact how precisely and smoothly (i.e., with how much “wiggliness”) the time-varying effects are captured. Fortunately, these decisions have been well studied, and the recommended approach is using P-spline functions with the number of splines automatically selected from a criterion that maximizes accuracy and minimizes wiggliness [ 26 ]. This approach is available in R (mgcv) and SAS (PROC GAM).
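A rough Python analogue, shown below on simulated data, uses patsy's bs() to build a B-spline basis for time inside the model formula. Note this is an unpenalized B-spline with a hand-picked number of basis functions, not the penalized P-spline approach of mgcv or PROC GAM in which the amount of smoothing is selected automatically.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
rows = []
for cat, slope in [("low", 0.3), ("medium", 0.8), ("high", 1.5)]:
    for i in range(30):
        base = rng.normal(15, 5)
        for year in range(10):
            # A mildly nonlinear time trend, for illustration
            rows.append((f"{cat}{i}", cat, year,
                         base + slope * year + 0.05 * year**2
                         + rng.normal(0, 2)))
df = pd.DataFrame(rows, columns=["subject", "category", "year", "tms"])

# bs(year, df=4) expands time into a 4-column B-spline basis, letting
# the fixed-effect time trend bend instead of being forced linear.
model = smf.mixedlm("tms ~ bs(year, df=4) + C(category)", data=df,
                    groups="subject")
print(model.fit().summary())
```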
We discussed challenges of longitudinal data from neurodegenerative disease studies (data that are correlated, irregularly timed and/or missing) and major techniques that handle them (GEEs and MERs). Simpler ANOVA-based approaches cannot handle irregularly timed and missing data. They resort to modeling complete cases or imputing missing values, and their focus rests on comparing group means rather than subject-specific trends over time.
GEEs and MERs overcome these challenges, the former providing population-averaged estimates and the latter providing subject-specific estimates. These two estimates agree only for continuous normal outcomes with the identity link. When the missing data are MCAR, GEE and MER models produce unbiased parameter estimates. But when the missing data are MAR, GEE does not perform well, whereas MER models do as long as the mean and variance-covariance structure are modeled properly. The greater flexibility of MERs lends preference to using them over GEEs for longitudinal data, and they are recommended by the FDA for observational studies and clinical trials.
MERs have become a standard in studies of HD for properly modeling correlated longitudinal data. They have been frequently used in analyses of prospective, observational, multi-center longitudinal studies such as COHORT [ 27 ], PHAROS [ 3 ], PREDICT [ 2• ], and TRACK-HD [ 19 ]. Dorsey and colleagues [ 27 ] used a MER model to reveal a monotonic decline of movement, cognition, behavior and function using data from COHORT consisting of measures from participants and controls who had at least 3 consecutive years of longitudinal data. For PHAROS [ 3 ], Biglan and colleagues used a MER model, adjusted for age and sex, to differentiate linear trends of motor, cognitive, psychiatric, and functional decline between individuals with and without the HD mutation. Paulsen and colleagues [ 2• ] used a MER model on PREDICT data to reveal that imaging variables based on regional brain volumes had the largest effect sizes in detecting differences between premanifest HD participants and controls. Tabrizi and colleagues [ 19 ] also used a MER model to compare phenotypic differences between controls, premanifest HD, and early HD participants.
Each analysis encountered different challenges, particularly in dealing with missingness and timing of data collection. COHORT, PHAROS, and PREDICT were at least 6-year studies, whereas TRACK-HD was only a 3-year study. A challenge in analyzing TRACK-HD was thus dealing with weak statistical power because of few HD converters. The analysis of COHORT also had issues of missing data. Follow-up in COHORT was intermittent, and of the 1514 participants, only 366 had at least 3 consecutive years of longitudinal data, meaning that the analysis was a type of complete-case analysis which may be improved upon using data from all 1514 participants. The analyses of PHAROS and PREDICT had fewer issues with missing data, having used all longitudinal data collected and having dropout rates less than 5%. Follow-up times in data collection also differed: COHORT, PREDICT, and TRACK-HD had 1-year follow-ups and PHAROS had 9-month follow-ups. Despite the regularity of these observations, more frequent observations could help minimize missing data and more accurately detect rates of decline. A modern push towards more frequent data collection is the use of sensors, microelectronics, and telecommunications that now provide inexpensive, wearable systems to track HD impairments more frequently, even in the convenience of a patient’s home setting [ 28 ]. Techniques discussed in this paper can serve as a starting point for analyzing the more frequently collected sensor data, but more advanced techniques [ 29 ] are recommended as the volume of data increases, and validation against clinically collected UHDRS data should be considered.
Acknowledgments The authors would like to give special thanks to Dr. Susan Fox for taking the time to review this manuscript.
Funding This work is supported in part by the National Institute of Neurological Disorders and Stroke of the National Institutes of Health under Award Number K01NS099343, the Huntington’s Disease Society of America Human Biology Project Fellowship, the Texas A&M School of Public Health Research Enhancement and Development Initiative (REDI-23-202059-36000), and the National Center for Advancing Translational Sciences (2UL1RR024156-06).
This article is part of the Topical Collection on Dementia
Compliance with Ethical Standards
Conflict of Interest Tanya P. Garcia declares that she has no conflict of interest. Karen Marder reports grants from the Huntington’s Disease Society of America, CHDI, TEVA, 1UL1 RR024156-01, and non-financial support from Raptor Pharmaceutical.
Human and Animal Rights and Informed Consent This article does not contain any studies with human or animal subjects performed by any of the authors.
Papers of particular interest, published recently, have been highlighted as:
• Of importance