
Less is More: Simultaneous View Classification and Landmark Detection for Abdominal Ultrasound Images

Posted on Friday, December 14, 2018 in News.

Zhoubing Xu, Yuankai Huo, JinHyeong Park, Bennett Landman, Andy Milkowski, Sasa Grbic, and Shaohua Zhou. “Less is More: Simultaneous View Classification and Landmark Detection for Abdominal Ultrasound Images.” In International Conference on Medical Image Computing and Computer-Assisted Intervention, vol. 11071, pp. 711-719. Springer, Cham, 2018.

Open Access ArXiv Download

Abstract

An abdominal ultrasound examination, the most common type of ultrasound examination, requires substantial manual effort to acquire standard abdominal organ views, annotate the views in text, and record clinically relevant organ measurements. Hence, automatic view classification and landmark detection of the organs can be instrumental in streamlining the examination workflow. However, this is a challenging problem given not only the inherent difficulties of the ultrasound modality, e.g., low contrast and large variations, but also the heterogeneity across tasks, i.e., one classification task for all views and one landmark detection task for each relevant view. While convolutional neural networks (CNNs) have demonstrated more promising outcomes on ultrasound image analytics than traditional machine learning approaches, it is impractical to deploy multiple networks (one for each task) due to the limited computational and memory resources on most existing ultrasound scanners. To overcome such limits, we propose a multi-task learning framework that handles all the tasks with a single network. This network performs view classification and landmark detection simultaneously; it is also equipped with global convolutional kernels, coordinate constraints, and a conditional adversarial module to boost performance. In an experimental study based on 187,219 ultrasound images, the proposed simplified approach achieves (1) view classification accuracy better than the agreement between two clinical experts and (2) landmark-based measurement errors on par with inter-user variability. The multi-task approach also benefits from sharing feature extraction across all tasks during training and, as a result, outperforms approaches that address each task individually.
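The core idea is a single shared convolutional feature extractor feeding both a view-classification head and per-view landmark-detection heads. Below is a minimal PyTorch sketch of that multi-task layout, not the paper's actual architecture: the backbone, the number of views, and the landmark counts are illustrative assumptions, and the global convolutional kernels, coordinate constraints, and conditional adversarial module from the paper are omitted.

```python
import torch
import torch.nn as nn

class MultiTaskUltrasoundNet(nn.Module):
    """Sketch of a single network sharing a convolutional encoder across tasks:
    one head classifies the abdominal view, and one heatmap head per relevant
    view localizes its landmarks. All sizes are hypothetical."""

    def __init__(self, num_views=11, landmark_views=(0, 1, 2), landmarks_per_view=4):
        super().__init__()
        # Shared feature extractor (stand-in for the paper's backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # View-classification head over the shared features.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, num_views)
        )
        # One landmark-heatmap head per view that has a detection task.
        self.landmark_heads = nn.ModuleDict({
            str(v): nn.Conv2d(128, landmarks_per_view, kernel_size=1)
            for v in landmark_views
        })

    def forward(self, x):
        feats = self.encoder(x)
        view_logits = self.classifier(feats)
        heatmaps = {v: head(feats) for v, head in self.landmark_heads.items()}
        return view_logits, heatmaps


# Example: a batch of two 1-channel ultrasound frames resized to 256x256.
model = MultiTaskUltrasoundNet()
logits, maps = model(torch.randn(2, 1, 256, 256))
```

In a setup like this, only the landmark head matching each image's (ground-truth) view would contribute to the loss during training, while the encoder receives gradients from every task, which is where the benefit of shared feature extraction comes from.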


<img src="https://cdn.vanderbilt.edu/vu-my/wp-content/uploads/sites/2304/2018/12/14140729/Capture.jpg" width="550" height="431" />

An overview of the tasks for abdominal ultrasound analytics. In each image, the upper left corner indicates its view type. If present, the upper right corner indicates the associated landmark detection task, and the pairs of long- and short-axis landmarks are colored in red and green, respectively. An icon is circled on one image; such icons are masked out when training the view classification.