\section{Introduction}
\label{intro}
It is well accepted that poor quality data interferes with the ability of neuroimaging analyses to uncover biological signal and to distinguish meaningful from artefactual findings, yet there is no clear guidance on how to differentiate ``good'' from ``bad'' data. A variety of measures for assessing data quality have been proposed \citep{magnotta2006,atkinson1997,friedman2008,mortamet2009,power2012,giannelli2010}\todo{add reference to new metric}, but there is no consensus on the primacy of one measure over another, or on the ranges of values that indicate poor quality data. As a result, researchers must rely on painstaking visual inspection to assess data quality. This approach is time- and resource-intensive, subjective, and susceptible to inter-rater and test-retest variability. Additionally, some defects may be too subtle to be fully appreciated by visual inspection, yet strong enough to degrade the accuracy of data processing algorithms or bias analysis results. Further, it is difficult to visually assess the quality of data that has already been processed, such as the data being shared through the Preprocessed Connectomes Project (PCP; \url{http://preprocessed-connectomes-project.github.io/}), the Human Connectome Project (HCP) \citep{VanEssen2012, Glasser2013}, and the Addiction Connectomes Preprocessing Initiative (ACPI; \url{http://fcon_1000.projects.nitrc.org/indi/ACPI/html/}). To begin to address this problem, the PCP has assembled several of the quality metrics proposed in the literature into a Quality Assessment Protocol (QAP; \url{http://preprocessed-connectomes-project.github.io/quality-assessment-protocol}).

\todo{need to add new metric}The QAP is an open source software package, implemented in Python, for the automated calculation of quality measures for functional and structural MRI data. The QAP software combines functionality from the AFNI neuroimaging toolkit \citep{cox1996} with custom Python functions, using the Nipype pipelining library \citep{gorgolewski_2016_50186} to achieve efficient, high-throughput processing on a variety of high performance computing systems. The quality of structural MRI data is assessed using contrast-to-noise ratio (CNR) \citep{magnotta2006}, entropy focus criterion (EFC) \citep{atkinson1997}, foreground-to-background energy ratio (FBER), voxel smoothness (FWHM) \citep{friedman2008}, percentage of artifact voxels (QI1) \citep{mortamet2009}, and signal-to-noise ratio (SNR) \citep{magnotta2006}. The QAP includes methods to assess both the spatial and temporal quality of fMRI data. Spatial quality is assessed using EFC, FBER, and FWHM, in addition to ghost-to-signal ratio (GSR) \citep{giannelli2010}. Temporal quality of functional data is assessed using the standardized root mean squared change in fMRI signal between volumes (DVARS) \citep{power2012, Nichols2013}, the mean root mean square deviation (MeanRMSD) \citep{Jenkinson99FD}, the temporal mean of AFNI's \texttt{3dTqual} metric \citep{cox1996}, global correlation (GCOR) \citep{saad2013}, and the average fraction of outliers in each volume found with AFNI's \texttt{3dTout} command \citep{cox1996}.
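
For illustration, the sketch below computes two of the structural measures, SNR and CNR, directly from an anatomical image and a set of tissue masks, loosely following the definitions in \citep{magnotta2006}. It is a minimal sketch rather than the QAP implementation: the file names, the choice of mask inputs, and the use of the background standard deviation as the noise estimate are assumptions made for clarity.

\begin{verbatim}
# Minimal sketch (not the QAP code): SNR and CNR for a structural image.
# File names and mask inputs are hypothetical.
import nibabel as nib
import numpy as np

anat = nib.load("anat.nii.gz").get_fdata()
fg = nib.load("head_mask.nii.gz").get_fdata() > 0  # foreground (head)
gm = nib.load("gm_mask.nii.gz").get_fdata() > 0    # gray matter
wm = nib.load("wm_mask.nii.gz").get_fdata() > 0    # white matter
bg = ~fg                                           # background (air)

noise = anat[bg].std()         # noise estimate from background voxels
snr = anat[fg].mean() / noise  # mean head signal over background noise
cnr = np.abs(anat[gm].mean() - anat[wm].mean()) / noise  # gray/white contrast

print(f"SNR = {snr:.2f}, CNR = {cnr:.2f}")
\end{verbatim}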

Using QAP outputs to quantitatively (or automatically) assess data quality will require learning which of the measures are most sensitive to poor quality and which ranges of their values indicate good data. The answers to these questions are likely to vary with the analyses at hand, and finding them will require the ready availability of QAP metrics calculated on large-scale, heterogeneous datasets. Toward this goal, the QAP has been used to measure structural and temporal data quality on data from the Autism Brain Imaging Data Exchange (ABIDE) \citep{dimartino2014} and the Consortium for Reliability and Reproducibility (CoRR) \citep{zuo2014}, and the results are being openly shared through the PCP. An initial analysis of the resulting values has been performed to evaluate their collinearity, their correspondence to expert-assigned quality labels, and their test-retest reliability.
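
As an example of the kind of secondary analysis this enables, the sketch below loads a spreadsheet of per-scan structural measures and computes their pairwise Spearman correlations as a simple collinearity check. The file name and column labels are assumptions for illustration and may differ from those used in the released PCP spreadsheets.

\begin{verbatim}
# Minimal sketch: collinearity check across QAP structural measures.
# The CSV name and column labels are hypothetical.
import pandas as pd

measures = ["cnr", "efc", "fber", "fwhm", "qi1", "snr"]
qap = pd.read_csv("abide_anat_qap.csv")

# Pairwise Spearman correlations; highly correlated pairs suggest
# redundant (collinear) measures.
print(qap[measures].corr(method="spearman").round(2))
\end{verbatim}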