Publication date: Available online 10 November 2018
Source: Medical Image Analysis
Author(s): Yubing Tong, Jayaram K. Udupa, Dewey Odhner, Caiyun Wu, Stephen J. Schuster, Drew A. Torigian
Abstract
Purpose
The derivation of quantitative information from images in a clinically practical way continues to face a major hurdle because of image segmentation challenges. This paper presents a novel approach, called automatic anatomy recognition-disease quantification (AAR-DQ), for disease quantification (DQ) on positron emission tomography/computed tomography (PET/CT) images. This approach explores how to decouple DQ methods from explicit dependence on object (e.g., organ) delineation by using only the object recognition results of our recently developed automatic anatomy recognition (AAR) method to quantify disease burden.
Method
The AAR-DQ process starts with the AAR approach for modeling anatomy and automatically recognizing objects on the low-dose CT images of PET/CT acquisitions. It incorporates novel aspects of model building that relate to finding an optimal disease map for each organ. The parameters of the disease map are estimated from a set of training image data sets that includes normal subjects and patients with metastatic cancer. The result of recognition for an object on a patient image is the location of a fuzzy model for the object, optimally adjusted for that image. The model is used as a fuzzy mask on the PET image to estimate a fuzzy disease map for the specific patient and subsequently to quantify disease based on this map. This process handles the blur arising in PET images from the partial volume effect entirely through accurate fuzzy mapping, which accounts for heterogeneity and gradation of disease content at the voxel level without explicitly performing partial volume correction. Disease quantification is performed from the fuzzy disease map in terms of total lesion glycolysis (TLG) and standardized uptake value (SUV) statistics. We also demonstrate that the method of disease quantification is applicable even when the "object" of interest is recognized manually with a simple and quick action, such as interactively specifying a 3D box ROI. Depending on the degree of automaticity for object and lesion recognition on PET/CT, DQ can be performed at the object level either semi-automatically (DQ-MO) or automatically (DQ-AO), or at the lesion level either semi-automatically (DQ-ML) or automatically (DQ-AL).
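The abstract gives no implementation details, so the following is only a minimal sketch of the organ-level quantification step it describes, assuming NumPy arrays for the PET SUV volume and the fuzzy object mask, and a sigmoid-shaped disease map whose parameters would be estimated per organ from training data. All function names, the sigmoid form, the default parameter values, and the 0.5 cutoff used for SUVMax are hypothetical, not the authors' method.

```python
import numpy as np

def disease_map(suv, s0, k):
    """Hypothetical sigmoid disease map: maps a voxel's SUV to a fuzzy
    diseasedness value in [0, 1]. The midpoint s0 and steepness k stand
    in for the per-organ parameters estimated from training data."""
    return 1.0 / (1.0 + np.exp(-k * (suv - s0)))

def quantify_disease(suv_volume, fuzzy_mask, voxel_volume_ml, s0=2.5, k=2.0):
    """Compute fuzzy TLG, SUVmean, and SUVmax over one organ region.

    suv_volume     : 3D array of SUV values from the PET image.
    fuzzy_mask     : 3D array in [0, 1] from AAR object recognition
                     (or a binary mask from a manually placed 3D box ROI).
    voxel_volume_ml: volume of one voxel in milliliters.
    """
    # Restrict the disease map to the (fuzzy) organ region.
    d = fuzzy_mask * disease_map(suv_volume, s0, k)

    weight_sum = d.sum()
    if weight_sum == 0:
        return {"TLG": 0.0, "SUVmean": 0.0, "SUVmax": 0.0}

    # Fuzzy analogues of the standard measures: each voxel contributes
    # in proportion to its diseasedness, so no hard lesion boundary
    # (and no explicit partial volume correction) is required.
    tlg = float((d * suv_volume).sum() * voxel_volume_ml)
    suv_mean = float((d * suv_volume).sum() / weight_sum)
    diseased = d > 0.5  # assumed cutoff for "clearly diseased" voxels
    suv_max = float(suv_volume[diseased].max()) if diseased.any() else 0.0
    return {"TLG": tlg, "SUVmean": suv_mean, "SUVmax": suv_max}
```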
Results
We utilized 67 data sets in total: 16 normal data sets used for model building, and 20 phantom data sets plus 31 patient data sets (with various types of metastatic cancer) used for testing the three methods DQ-AO, DQ-MO, and DQ-ML. The parameters of the disease map were estimated using a leave-one-out strategy. The organs of focus were the left and right lungs and the liver, and the disease quantities measured were TLG, SUVMean, and SUVMax. On phantom data sets, the overall errors for the three measures were approximately 6%, 3%, and 0%, respectively, with TLG error varying from 2% for large "lesions" (37 mm diameter) to 37% for small "lesions" (10 mm diameter). On patient data sets, the overall errors for non-conspicuous lesions were approximately 19%, 14%, and 0%, respectively; for conspicuous lesions, they were approximately 9%, 7%, and 0%, respectively. Errors in estimation were generally smaller for liver than for lungs, although the difference was not statistically significant.
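As a rough illustration of the evaluation protocol mentioned above, the sketch below shows a generic leave-one-out loop: disease-map parameters are fitted on all but one data set, and the percent error of a DQ measure (e.g., TLG) is computed on the held-out data set against its reference value. The interface (fit_params, measure, reference) is hypothetical and not from the paper.

```python
def leave_one_out_errors(datasets, fit_params, measure, reference):
    """Generic leave-one-out evaluation (hypothetical interface).

    datasets  : list of data sets (each a PET/CT study with annotations).
    fit_params: fits disease-map parameters from a list of data sets.
    measure   : computes a DQ value (e.g., TLG) for one data set,
                given fitted parameters.
    reference : returns the reference (ground-truth) value for that
                data set and measure.
    """
    errors = []
    for i, held_out in enumerate(datasets):
        training = datasets[:i] + datasets[i + 1:]  # all but the held-out set
        params = fit_params(training)
        estimate = measure(held_out, params)
        truth = reference(held_out)
        errors.append(abs(estimate - truth) / truth * 100.0)  # percent error
    return errors
```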
Conclusions
Accurate disease quantification on PET/CT images is feasible following object recognition, without performing explicit delineation of lesions. Method DQ-MO generally yields more accurate results than DQ-AO, although the difference is not statistically significant. Compared to current methods in the literature, almost all of which focus only on lesion-level DQ rather than organ-level DQ, our results were comparable for large lesions and superior for smaller lesions, with less demand on training data and computational resources. DQ-AO, and even DQ-MO, seem to have the potential for quantifying disease burden body-wide routinely via the AAR-DQ approach.