A team led by University of New Mexico Department of Psychology Assistant Professor Jeremy Hogeveen was one of 70 research groups from across the globe to contribute neuroimaging analyses to a project that seeks to understand how the methodological choices made by neuroscientists can affect research outcomes.

Hogeveen’s work is part of a large and highly collaborative new research project created by Rotem Botvinik-Nezer, a postdoctoral scientist at Dartmouth College; Tom Schonberg, a senior lecturer at Tel Aviv University; and Russell Poldrack, a professor of psychology at Stanford University. Hogeveen used Center for Advanced Research Computing (CARC) resources to process the vast quantity of data involved in the study.

Functional magnetic resonance imaging, or fMRI, has become a standard tool in modern neuroscience. This type of scan reveals both brain structure and brain activity, indicating which areas of the brain are particularly active at a given moment. While fMRI technology has yielded a wealth of reliable knowledge about our brains, Hogeveen notes that certain choices a researcher makes can significantly affect the results of a neuroimaging study.

“In human fMRI there are a number of different stages of the analysis pipeline, with each stage requiring a decision from the experimenter," Hogeveen explains. "For example, since each individual’s brain is in a different position in the MRI scanner, all subjects must be ‘registered’ to a common space, and different analysis approaches place registration at different stages of the pipeline, which can affect results.

"Further, even if two teams used an identical pipeline, they may choose different methods for defining the anatomical location of observed brain activity, which can impact their conclusions. Overall, these potential forks in the road in the fMRI analysis pipeline have long been a suspected source of ambiguous findings across labs, and the NARPS project is one of the first attempts to empirically derive an estimate of how much this can impact our inferences in practice."

The study, called the Neuroimaging Analysis, Replication, and Prediction Study (NARPS), began in the Strauss Imaging Center at Tel Aviv University, where fMRI scans were collected from participants performing a monetary decision-making task. The scans, along with nine hypotheses, were then sent to 70 different research teams for analysis; one of these teams was the Hogeveen Lab at UNM. Each team independently used the data to evaluate the hypotheses and reported their conclusions.

Because of the large computational demand of processing fMRI data, Hogeveen worked with Research Assistant Professor Matthew Fricke and Ph.D. candidate Schuyler Liphardt to analyze the imaging using CARC resources. Hogeveen reports, “this analysis would have taken over 1,300 hours of computing time to process in serial . . . Instead, by working with CARC scientists to parallelize the job, it was completed in less than 80 hours of total compute time.”
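The speedup Hogeveen describes works because each participant's scan can be preprocessed independently, so the subject-level jobs can run concurrently across many cluster cores rather than one after another. Below is a minimal sketch of that idea using Python's `multiprocessing` module; the `preprocess` function and subject IDs are hypothetical stand-ins for the real per-subject pipeline, and an actual CARC workflow would submit these jobs through a cluster scheduler rather than a single Python process.

```python
from multiprocessing import Pool

def preprocess(subject_id):
    # Hypothetical stand-in for one subject's fMRI preprocessing
    # (motion correction, registration to a common space, etc.).
    # In practice this step takes hours of compute per subject.
    return f"sub-{subject_id:02d} done"

if __name__ == "__main__":
    subjects = range(1, 9)  # hypothetical subject IDs

    # Subject-level pipelines are independent of one another,
    # so a pool of workers can process several at once. With N
    # workers, total wall time shrinks roughly by a factor of N,
    # the same logic behind cutting ~1,300 serial hours to
    # under 80 on the cluster.
    with Pool(processes=4) as pool:
        results = pool.map(preprocess, subjects)

    print(results)
```

The worked example is illustrative only: the actual analysis was parallelized by CARC staff on their systems, not with this script.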

A comparison of the research teams’ conclusions shows that, for five out of the nine hypotheses, there was substantial disagreement as to whether the results were statistically significant. In other words, when asked to evaluate the same exact hypotheses using the same exact data, neuroscience researchers didn’t always come up with the same answers.

There are, however, tools that researchers can use to improve the consistency of neuroimaging analysis. For example, the Hogeveen Lab is implementing an fMRI data processing pipeline called fMRIPrep to help standardize their neuroimaging analyses. The pipeline was created by Poldrack in an effort to make neuroscience findings more reproducible.

Hogeveen reflects, “There are two main conclusions we should take from this study to help guide us going forwards. First, we need to describe our methods in complete detail when we publish papers, so we can properly understand inconsistent results in the field. Second, in the interest of cumulative science and moving the field forward, it is critical for folks across the world to start adopting gold standard analysis pipelines that produce stable results across labs.”

For more information: Variability in the analysis of a single neuroimaging dataset by many teams