Unsupervised Cross-Modal Synthesis of Subject-Specific Scans

Raviteja Vemulapalli, Hien Van Nguyen, Shaohua Kevin Zhou; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 630-638


Abstract

Cross-modal synthesis of subject-specific scans has recently received significant attention in the medical imaging community. Although various synthesis approaches have been introduced, most are either tailored to a specific application or designed for the supervised setting, i.e., they assume the availability of training data from the same set of subjects in both source and target modalities. However, collecting multiple scans from each subject is undesirable. To address this issue, we propose a general unsupervised cross-modal medical image synthesis approach that works without paired training data. Given a source-modality image of a subject, we first generate multiple target-modality candidate values for each voxel independently using cross-modal nearest neighbor search. We then select the best candidate values jointly for all voxels by simultaneously maximizing a global mutual information cost function and a local spatial consistency cost function. Finally, we use coupled sparse representation to further refine the synthesized images. Our experiments on generating T1-MRI brain scans from T2-MRI scans, and vice versa, demonstrate that the synthesis capability of the proposed unsupervised approach is comparable to that of various state-of-the-art supervised approaches in the literature.
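The candidate-generation and candidate-selection stages described above can be sketched in a few lines of NumPy. This is a minimal 2D illustration, not the paper's implementation: it assumes zero-mean, unit-norm patch similarity as a crude stand-in for the paper's cross-modal matching, and a simple greedy neighborhood-agreement pass as a stand-in for the joint mutual-information plus spatial-consistency optimization; the coupled sparse representation refinement stage is omitted. All function names are illustrative.

```python
import numpy as np

def extract_patches(img, p):
    """All p x p patches of a 2D image, flattened, plus their center coordinates."""
    h, w = img.shape
    r = p // 2
    patches, centers = [], []
    for i in range(r, h - r):
        for j in range(r, w - r):
            patches.append(img[i - r:i + r + 1, j - r:j + r + 1].ravel())
            centers.append((i, j))
    return np.array(patches), centers

def normalize(P):
    """Zero-mean, unit-norm rows: a crude stand-in for a cross-modal patch metric."""
    P = P - P.mean(axis=1, keepdims=True)
    n = np.linalg.norm(P, axis=1, keepdims=True)
    return P / np.maximum(n, 1e-8)

def candidate_values(src_img, tgt_train, p=3, k=5):
    """Stage 1 (sketch): for each voxel, k candidate target-modality intensities
    found by nearest-neighbor search among patches of an UNPAIRED
    target-modality training image."""
    src_patches, centers = extract_patches(src_img, p)
    tgt_patches, tgt_centers = extract_patches(tgt_train, p)
    S, T = normalize(src_patches), normalize(tgt_patches)
    sims = S @ T.T                              # cosine similarity between patches
    idx = np.argsort(-sims, axis=1)[:, :k]     # k best target patches per voxel
    cand = np.array([[tgt_train[tgt_centers[j]] for j in row] for row in idx])
    return centers, cand

def select_smooth(src_shape, centers, cand, iters=5):
    """Stage 2 (sketch): pick one candidate per voxel, greedily favoring local
    agreement with the 4-neighborhood -- a simple stand-in for the paper's
    joint global-MI + local spatial-consistency objective."""
    out = np.zeros(src_shape)
    choice = {c: vals[0] for c, vals in zip(centers, cand)}   # start from top match
    for _ in range(iters):
        for (i, j), vals in zip(centers, cand):
            nb = [choice[n] for n in [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
                  if n in choice]
            if nb:
                # re-pick the candidate closest to the current neighborhood mean
                choice[(i, j)] = vals[np.argmin(np.abs(vals - np.mean(nb)))]
    for (i, j), v in choice.items():
        out[i, j] = v
    return out
```

Note the design point the abstract emphasizes: candidates are generated per voxel independently, but selection is a joint decision over all voxels, which is why a per-voxel best match alone is not enough.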

Related Material

@InProceedings{Vemulapalli_2015_ICCV,
author = {Vemulapalli, Raviteja and Van Nguyen, Hien and Zhou, Shaohua Kevin},
title = {Unsupervised Cross-Modal Synthesis of Subject-Specific Scans},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015},
pages = {630-638}
}