{"id":1830,"date":"2018-10-26T15:27:21","date_gmt":"2018-10-26T20:27:21","guid":{"rendered":"https:\/\/my.vanderbilt.edu\/masi\/?p=1830"},"modified":"2018-12-14T16:30:20","modified_gmt":"2018-12-14T21:30:20","slug":"synseg-net-synthetic-segmentation-without-target-modality-ground-truth","status":"publish","type":"post","link":"https:\/\/my.vanderbilt.edu\/masi\/2018\/10\/synseg-net-synthetic-segmentation-without-target-modality-ground-truth\/","title":{"rendered":"SynSeg-Net: Synthetic Segmentation Without Target Modality Ground Truth"},"content":{"rendered":"<ol>\n<li>Yuankai Huo, Zhoubing Xu, Hyeonsoo Moon, Shunxing Bao, Albert Assad, Tamara K. Moyo, Michael R. Savona, Richard G. Abramson, and Bennett A. Landman. \u201cSynSeg-Net: Synthetic Segmentation Without Target Modality Ground Truth.\u201d \u00a0<i>IEEE transactions on medical imaging<\/i>\u00a0(2018).<\/li>\n<\/ol>\n<p><a href=\"https:\/\/arxiv.org\/abs\/1810.06498\">Open Access ArXiv Download<\/a><\/p>\n<h2>Abstract<\/h2>\n<p>A key limitation of deep convolutional neural networks (DCNN) based image segmentation methods is the lack of generalizability. Manually traced training images are typically required when segmenting organs in a new imaging modality or from distinct disease cohort. The manual efforts can be alleviated if the manually traced images in one imaging modality (e.g., MRI) are able to train a segmentation network for another imaging modality (e.g., CT). In this paper, we propose an end-to-end synthetic segmentation network (SynSeg-Net) to train a segmentation network for a target imaging modality without having manual labels. SynSeg-Net is trained by using (1) unpaired intensity images from source and target modalities, and (2) manual labels only from source modality. SynSeg-Net is enabled by the recent advances of cycle generative adversarial networks (CycleGAN) and DCNN. 
We evaluate the performance of SynSeg-Net in two experiments: (1) MRI-to-CT splenomegaly synthetic segmentation for abdominal images, and (2) CT-to-MRI total intracranial volume (TICV) synthetic segmentation for brain images. The proposed end-to-end approach achieved superior performance to two-stage methods. Moreover, SynSeg-Net achieved performance comparable to a traditional segmentation network trained with target-modality labels in certain scenarios. The source code of SynSeg-Net is publicly available (https:\/\/github.com\/MASILab\/SynSeg-Net).<\/p>\n<figure style=\"width: 550px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/my.vanderbilt.edu\/masi\/wp-content\/uploads\/sites\/23042661\/2018\/10\/Figure1.png\" width=\"550\" height=\"431\" \/><figcaption class=\"wp-caption-text\">The upper panel shows the network structure of the proposed SynSeg-Net during the training stage. The left side is the CycleGAN synthesis subnet, where \ud835\udc46 is MRI and \ud835\udc47 is CT. \ud835\udc3a\u2081 and \ud835\udc3a\u2082 are the generators, while \ud835\udc37\u2081 and \ud835\udc37\u2082 are the discriminators. The right side is the segmentation subnet \ud835\udc46\ud835\udc52\ud835\udc54 for end-to-end training. Loss functions were added to optimize SynSeg-Net. The lower panel shows the network structure of SynSeg-Net during the testing stage. Only the trained subnet \ud835\udc46\ud835\udc52\ud835\udc54 is used to segment a testing image from the target imaging modality.<\/figcaption><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Yuankai Huo, Zhoubing Xu, Hyeonsoo Moon, Shunxing Bao, Albert Assad, Tamara K. Moyo, Michael R. Savona, Richard G. Abramson, and Bennett A. Landman. \u201cSynSeg-Net: Synthetic Segmentation Without Target Modality Ground Truth.\u201d \u00a0IEEE Transactions on Medical Imaging\u00a0(2018). 
Open Access ArXiv Download Abstract A key limitation of deep convolutional neural networks (DCNN) based image segmentation methods is&#8230;<\/p>\n","protected":false},"author":2823,"featured_media":1837,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-1830","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news"],"_links":{"self":[{"href":"https:\/\/my.vanderbilt.edu\/masi\/wp-json\/wp\/v2\/posts\/1830","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/my.vanderbilt.edu\/masi\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/my.vanderbilt.edu\/masi\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/my.vanderbilt.edu\/masi\/wp-json\/wp\/v2\/users\/2823"}],"replies":[{"embeddable":true,"href":"https:\/\/my.vanderbilt.edu\/masi\/wp-json\/wp\/v2\/comments?post=1830"}],"version-history":[{"count":6,"href":"https:\/\/my.vanderbilt.edu\/masi\/wp-json\/wp\/v2\/posts\/1830\/revisions"}],"predecessor-version":[{"id":1868,"href":"https:\/\/my.vanderbilt.edu\/masi\/wp-json\/wp\/v2\/posts\/1830\/revisions\/1868"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/my.vanderbilt.edu\/masi\/wp-json\/wp\/v2\/media\/1837"}],"wp:attachment":[{"href":"https:\/\/my.vanderbilt.edu\/masi\/wp-json\/wp\/v2\/media?parent=1830"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/my.vanderbilt.edu\/masi\/wp-json\/wp\/v2\/categories?post=1830"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/my.vanderbilt.edu\/masi\/wp-json\/wp\/v2\/tags?post=1830"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}