The conditional generative adversarial network (cGAN) is widely used for image-to-image translation on natural images by adding a supervised loss to the GAN framework, and it has also been extended to biomedical image segmentation. However, during training it lacks proper guidance for smooth convergence, and the model requires a large dataset, which is often unavailable for biomedical images. To address this limitation, we integrate a multi-task learning technique into a cGAN-based adversarial framework for lesion segmentation in ultrasound (US) breast images. Our approach defines an auxiliary task of predicting the mask-similarity score between the input US image and the ground-truth segmentation mask, trained on features extracted from the data already available for the segmentation task. This auxiliary learning is imposed within the cGAN framework to boost the generator's original segmentation task and guide its training. The proposed method shows improved convergence and stability compared to the core adversarial network, and its performance has been assessed on different datasets and model architectures, demonstrating significant improvement over the baseline.
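To make the auxiliary-task idea concrete, the following is a minimal sketch of how a mask-similarity score (here a soft Dice coefficient, a common choice, though the paper does not specify the metric) could serve as the auxiliary regression target alongside the adversarial and segmentation losses. The function names, the squared-error form of the auxiliary term, and the loss weights are illustrative assumptions, not values from the article.

```python
def dice_score(pred, target, eps=1e-7):
    """Soft Dice similarity between a predicted mask and the ground truth.

    pred, target: flat sequences of per-pixel values in [0, 1].
    Used here as one plausible 'mask similarity score' for the auxiliary task.
    """
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * inter + eps) / (total + eps)


def multitask_generator_loss(adv_loss, seg_loss, sim_pred, sim_target,
                             lambda_seg=100.0, lambda_aux=10.0):
    """Combined generator objective: adversarial term + supervised
    segmentation term + auxiliary similarity-regression term.

    sim_pred is the similarity score predicted by the auxiliary head;
    sim_target is the score computed from the ground-truth mask.
    The lambda weights are assumed for illustration.
    """
    aux_loss = (sim_pred - sim_target) ** 2
    return adv_loss + lambda_seg * seg_loss + lambda_aux * aux_loss
```

In this sketch the auxiliary head shares the generator's features, so its gradient provides the extra training signal that guides the segmentation task when data is scarce.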