Blind Output Matching for Domain Adaptation in Segmentation Networks
We study the domain adaptation problem for pixel-wise semantic segmentation of perfectly aligned images. The literature on adapting segmentation networks is currently dominated by adversarial models. We propose a simpler approach, based on a class of loss functions that encourage direct, blind matching in the network-output space. Because it requires no intermediate domain discriminator, our approach is directly applicable to any segmentation network and simplifies the adaptation procedure by avoiding many of the pitfalls associated with adversarial training. The resulting gains in quality, stability, and efficiency of training stem from exploiting the perfect alignment of the source and target images. We compare our approach against state-of-the-art adaptation via adversarial training in the network-output space on the challenging task of adapting brain segmentation across different magnetic resonance imaging (MRI) modalities. Our approach achieves significantly better results in terms of both accuracy and stability.
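To make the idea concrete, the following is a minimal NumPy sketch of one possible instance of such an output-matching loss: a pixel-wise L2 distance between the class-probability maps produced for a source image and its perfectly aligned target-domain counterpart. The function names and the choice of L2 are illustrative assumptions, not the paper's exact formulation; the abstract only specifies a class of losses that blindly match outputs without a discriminator.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def blind_output_matching_loss(logits_src, logits_tgt):
    # Illustrative matching loss (an assumption, not the paper's exact loss):
    # mean squared difference between the per-pixel class-probability maps
    # of two perfectly aligned inputs from different domains.
    p_src = softmax(logits_src)
    p_tgt = softmax(logits_tgt)
    return float(np.mean((p_src - p_tgt) ** 2))

# Toy example: 4x4 "images" with 3 classes, laid out as (H, W, C).
rng = np.random.default_rng(0)
logits_src = rng.normal(size=(4, 4, 3))
logits_tgt = rng.normal(size=(4, 4, 3))

loss_same = blind_output_matching_loss(logits_src, logits_src)
loss_diff = blind_output_matching_loss(logits_src, logits_tgt)
print(loss_same, loss_diff)
```

Because the loss is an ordinary differentiable function of the two output maps, it can be minimized jointly with the supervised segmentation loss by standard gradient descent, with no discriminator network or min-max game.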