Attention in Attention for PET-CT Modality Consensus Lung Tumor Segmentation

Y Zhao, X Zhou, H Guo, Q Guo, Y Zuo, S Song, S Gao, W Zhang
2024 IEEE International Conference on Multimedia and Expo (ICME), 2024. ieeexplore.ieee.org
Combining multi-modal PET-CT imaging for lung tumor segmentation is important for clinical treatment. Existing methods have not fully considered the impact of noise in PET-CT on multi-modal interaction. To address this, we propose a novel Attention in Attention Network (AiANet). AiANet mutually learns multi-modal characteristics for segmentation through its cross-learning modules. Within each cross-learning module, we introduce two nested-attention blocks, namely Attention in Self-Attention (AiSA) and Attention in Cross-Attention (AiCA), for multi-scale feature enhancement and multi-modal feature interaction. Importantly, since traditional attention weights calculated unilaterally from either PET or CT alone are vulnerable to inevitable noise, we embed a novel Attention in Attention (AiA) module into AiSA and AiCA. The AiA module seeks cross-modal consensus for the attention weights to alleviate their noise. Experimental results on clinical PET-CT data of lung cancer demonstrate the superiority of our method.
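The abstract does not specify the implementation, but the core idea of seeking cross-modal consensus for attention weights can be sketched as follows. The `ConsensusAttention` module, its convolutional layers, and all tensor shapes below are illustrative assumptions for how PET- and CT-derived attention maps might be fused by an inner attention; they are not the authors' exact AiA design.

```python
import torch
import torch.nn as nn


class ConsensusAttention(nn.Module):
    """Hypothetical sketch of an attention-in-attention style block.

    Two modality-specific spatial attention maps (from PET and CT feature
    maps) are refined by a small inner attention that looks at both maps
    jointly, so the final weights reflect cross-modal agreement rather than
    a single, possibly noisy, modality. Shapes and layers are illustrative.
    """

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions producing a raw attention map per modality
        self.pet_attn = nn.Conv2d(channels, 1, kernel_size=1)
        self.ct_attn = nn.Conv2d(channels, 1, kernel_size=1)
        # Inner "attention over attention weights": fuses the two raw maps
        # into a single consensus map (an assumed instantiation)
        self.inner = nn.Sequential(
            nn.Conv2d(2, 8, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(8, 1, kernel_size=1),
        )

    def forward(self, pet_feat: torch.Tensor, ct_feat: torch.Tensor):
        # Modality-specific attention maps, shape (B, 1, H, W)
        a_pet = torch.sigmoid(self.pet_attn(pet_feat))
        a_ct = torch.sigmoid(self.ct_attn(ct_feat))
        # Inner attention refines the stacked maps into a consensus map
        consensus = torch.sigmoid(self.inner(torch.cat([a_pet, a_ct], dim=1)))
        # Re-weight each modality's features with the shared consensus map
        return pet_feat * consensus, ct_feat * consensus


if __name__ == "__main__":
    pet = torch.randn(2, 64, 32, 32)   # toy PET feature map
    ct = torch.randn(2, 64, 32, 32)    # toy CT feature map
    block = ConsensusAttention(channels=64)
    pet_out, ct_out = block(pet, ct)
    print(pet_out.shape, ct_out.shape)  # both torch.Size([2, 64, 32, 32])
```

In this sketch the consensus map replaces the per-modality weights used in a conventional self- or cross-attention block, which is the role the abstract attributes to the AiA module; the actual AiSA and AiCA blocks in the paper may compute and fuse attention differently.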