Quantification of muscle, bones and fat on single slice thigh CT
Qi Yang, Xin Yu, Ho Hin Lee, Yucheng Tang, Shunxing Bao, Kristofer S Gravenstein,
Ann Zenobia Moore, Sokratis Makrogiannis, Luigi Ferrucci, Bennett A Landman
Muscle, bone, and fat segmentation of CT thigh slices is essential for body composition research. Voxel-wise image segmentation enables quantification of tissue properties including area, intensity, and texture. Deep learning approaches have had substantial success in medical image segmentation, but they typically require large amounts of annotated data. Because manual annotation is costly, training deep learning models with limited human-labelled data is desirable but challenging. Inspired by transfer learning, we propose a two-stage deep learning pipeline to address this issue in thigh segmentation. We study 2836 slices from the Baltimore Longitudinal Study of Aging (BLSA) and 121 slices from the Genetic and Epigenetic Signatures of Translational Aging Laboratory Testing (GESTALT) study. First, we generate pseudo-labels with approximate hand-crafted approaches based on CT intensity and anatomical morphology. These pseudo-labels are then used to train deep neural networks from scratch. Finally, the first-stage model is loaded as initialization and fine-tuned on a more limited set of expert human labels. We evaluate the performance of this framework on 56 thigh CT scans and obtain average Dice scores of 0.979, 0.969, 0.953, 0.980, and 0.800 for five tissues: muscle, cortical bone, internal bone, subcutaneous fat, and intermuscular fat, respectively. We evaluate generalizability by manually reviewing an external set of 3504 single-thigh images derived from 1752 BLSA thigh slices. The results are consistent and pass human review, with only 5 thigh images failing, demonstrating that the proposed method generalizes well.
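The Dice scores reported above are the standard overlap metric for segmentation evaluation. As an illustrative sketch (not the authors' evaluation code), the per-tissue Dice coefficient between a predicted and a reference binary mask can be computed as:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks.

    dice = 2 * |pred ∩ gt| / (|pred| + |gt|); returns 1.0 when both
    masks are empty (a common convention, assumed here).
    """
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Example: a multi-label thigh segmentation would be scored by
# extracting one binary mask per tissue label and averaging over scans.
pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
print(dice(pred, gt))  # 2*1 / (2+1) = 0.666...
```

For a five-tissue label map, this function would be applied once per label (muscle, cortical bone, internal bone, subcutaneous fat, intermuscular fat) and the scores averaged across the test scans.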