The ERP Reliability Analysis (ERA) Toolbox is an open-source Matlab program that uses generalizability (G) theory to evaluate the reliability of ERP data. The purpose of the toolbox is to characterize the dependability (the G-theory analog of reliability) of ERP scores, to facilitate their calculation on a study-by-study basis, and to increase the reporting of these estimates.

The ERA Toolbox provides information about the minimum number of trials needed for dependable ERP scores and describes the overall dependability of ERP measurements. All information provided by the ERA Toolbox is stratified by group and condition to allow the user to directly compare dependability (e.g., a particular group may require more trials than another to achieve an acceptable level of dependability). The code used by the toolbox is based on the formulas discussed in Baldwin, Larson, and Clayson (2015). The algorithms and their application by the ERA Toolbox are also covered in detail in Clayson and Miller (2017).

Reliability is a property of scores (the data in hand), not a property of measures. This means that the P3, error-related negativity (ERN), late positive potential (LPP), or (insert your favorite ERP component here) is not reliable in some "universal" sense. Because reliability is context dependent, demonstrating the reliability of LPP scores in undergraduates at UCLA does not mean that LPP scores recorded from children in New York can be assumed to be reliable. Reliability needs to be demonstrated on a population-by-population, study-by-study, component-by-component basis.

The purpose of the ERA Toolbox is to facilitate the calculation of dependability estimates that characterize observed ERP scores. Previous studies have been useful in suggesting trial cutoffs and characterizing the overall reliability of ERP components in those studies. That information can help guide decisions about, for example, the number of trials to present to a participant from a given population. However, just because the observed data meet a previously recommended trial cutoff does not mean that the ERP scores are necessarily reliable: ERP score reliability cannot be inferred from trial counts alone.

My hope is that the ERA Toolbox will make it easier to demonstrate the reliability of ERP scores on a study-by-study basis. Mismeasurement of ERPs leads to misunderstood phenomena and mistaken conclusions, and poor ERP score reliability from mismeasurement compromises validity. Improving ERP measurement by ensuring score reliability can improve our trust in the inferences drawn from observed scores and the likelihood of our findings replicating. See the pdf for the poster that I presented at SPR in Minneapolis.

A Little More

This project was started in December 2015 by Peter Clayson. What started as some in-house code turned into a flexible GUI that I hope can help others. After all, there's no need to reinvent the wheel!
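To make the trial-count reasoning concrete, here is a minimal sketch (in Python, not the toolbox's actual Matlab code) of the single-facet G-theory dependability coefficient, Φ(k) = σ²_p / (σ²_p + σ²_e / k), where σ²_p is between-person variance, σ²_e is trial-level error variance, and k is the number of trials averaged. The function names, the 0.70 target, and the variance values are illustrative assumptions only; the ERA Toolbox itself estimates the variance components from the observed data, following the formulas in Baldwin, Larson, and Clayson (2015).

```python
import math


def dependability(var_person: float, var_error: float, k: int) -> float:
    """Dependability of a k-trial average: var_p / (var_p + var_e / k)."""
    return var_person / (var_person + var_error / k)


def min_trials(var_person: float, var_error: float, target: float = 0.70) -> int:
    """Smallest trial count whose dependability meets the target.

    Solving var_p / (var_p + var_e / k) >= target for k gives
    k >= var_e * target / (var_p * (1 - target)).
    """
    return math.ceil(var_error * target / (var_person * (1.0 - target)))


# Hypothetical variance components (made-up numbers, not real ERP data):
vp, ve = 4.0, 30.0
k = min_trials(vp, ve, target=0.70)   # -> 18 trials for these components
phi = dependability(vp, ve, k)        # ~0.706, just above the 0.70 target
```

The key point the sketch illustrates is that the required trial count depends entirely on the variance components of the data in hand, which is why a cutoff derived from one sample cannot certify reliability in another.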