Publication date: Available online 20 November 2018
Source: Ultrasound in Medicine & Biology
Author(s): Andreas Østvik, Erik Smistad, Svein Arne Aase, Bjørn Olav Haugen, Lasse Lovstakken
Abstract
Transthoracic echocardiography examinations are usually performed according to a protocol comprising different probe postures providing standard views of the heart. These are used as a basis when assessing cardiac function, and it is essential that the morphophysiological representations are correct. Clinical analysis is often initialized with the current view, and automatic classification can thus be useful in improving today's workflow. In this article, convolutional neural networks (CNNs) are used to create classification models predicting up to seven different cardiac views. Data sets of 2-D ultrasound acquired from studies totaling more than 500 patients and 7000 videos were included. State-of-the-art accuracies of 98.3% ± 0.6% and 98.9% ± 0.6% on single frames and sequences, respectively, and real-time performance with 4.4 ± 0.3 ms per frame were achieved. Further, it was found that CNNs have the potential for use in automatic multiplanar reformatting and orientation guidance. Using 3-D data to train models applicable for 2-D classification, we achieved a median deviation of 4° ± 3° from the optimal orientations.
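The abstract reports both single-frame and sequence-level classification accuracies, which implies per-frame CNN predictions aggregated over a video clip. The paper's actual architecture is not given here, so the following is a minimal illustrative sketch in NumPy only: a toy "CNN" (one convolutional layer, global average pooling, linear softmax head) classifying frames into seven hypothetical views, with sequence-level prediction by averaging frame probabilities. All names, sizes, and the aggregation rule are assumptions, not the authors' method.

```python
import numpy as np

NUM_VIEWS = 7  # number of cardiac views, as in the abstract; labels are illustrative
rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D cross-correlation, for illustration only."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class TinyViewClassifier:
    """Toy stand-in for a view-classification CNN (untrained random weights):
    conv layer -> global average pooling -> linear softmax head."""
    def __init__(self, n_filters=4, n_views=NUM_VIEWS):
        self.kernels = rng.standard_normal((n_filters, 3, 3)) * 0.1
        self.W = rng.standard_normal((n_views, n_filters)) * 0.1
        self.b = np.zeros(n_views)

    def predict_frame(self, frame):
        # Global average pooling of each filter response gives one feature per filter.
        feats = np.array([conv2d_valid(frame, k).mean() for k in self.kernels])
        return softmax(self.W @ feats + self.b)

    def predict_sequence(self, frames):
        # One plausible sequence-level rule: average per-frame probabilities,
        # then take the argmax (an assumption, not the paper's method).
        probs = np.mean([self.predict_frame(f) for f in frames], axis=0)
        return int(np.argmax(probs)), probs

model = TinyViewClassifier()
clip = [rng.standard_normal((16, 16)) for _ in range(5)]  # fake 5-frame "video"
view, probs = model.predict_sequence(clip)
```

Averaging probabilities over frames is one common way sequence accuracy can exceed single-frame accuracy, since uncorrelated per-frame errors tend to cancel; a trained network and real B-mode images would of course replace the random weights and synthetic clip above.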