Purpose: Deep-brain stimulation via neuro-endoscopic surgery is a challenging procedure that requires accurate targeting of deep-brain structures, which can deform by up to 10 mm during the procedure. Conventional deformable registration methods have the potential to resolve such geometric error between preoperative MR and intraoperative CT, but at the expense of long computation time. Recent advances in deep learning offer improvements in inter-modality registration accuracy and runtime through novel similarity metrics and network architectures.

Method: An unsupervised deformable registration network is reported that first generates a synthetic CT from MR using CycleGAN and then registers the synthetic CT to the intraoperative CT using an inverse-consistent registration network. Diffeomorphism of the registration is maintained using deformation exponentiation ("scaling and squaring") layers. The method was trained and tested on a dataset of CT and T1-weighted MR images with randomly simulated deformations that mimic deep-brain deformation during surgery. The method was compared to a baseline inter-modality deep learning registration method, VoxelMorph.

Results: The methods were tested on 10 pairs of CT/MR images from 5 subjects. The proposed method achieved a Dice score of 0.84±0.04 for the lateral ventricles, 0.72±0.09 for the 3rd ventricle, and 0.63±0.10 for the 4th ventricle, with a target registration error (TRE) of 0.95±0.54 mm. The proposed method showed statistically significant improvement in both Dice score and TRE compared to inter-modality VoxelMorph, while maintaining a fast runtime of less than 3 seconds for a typical pair of MR and CT volumes.

Conclusion: The proposed unsupervised image synthesis and registration network demonstrates the capability for accurate volumetric deformable MR-CT registration with near real-time performance. The method will be further developed for application in intraoperative CT (or cone-beam CT) guided neurosurgery.
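
The Method section states that diffeomorphism of the registration is maintained with deformation exponentiation ("scaling and squaring") layers, which integrate a stationary velocity field into a displacement field by repeated self-composition. The following is a minimal PyTorch sketch of that integration step only, under assumed tensor conventions (batch of 3D velocity fields in voxel units, 7 squaring steps); the module name and all details are illustrative assumptions, not the authors' implementation.

    # Minimal sketch: scaling-and-squaring integration of a stationary velocity field,
    # as used in diffeomorphic registration networks (e.g., VoxelMorph-diff style).
    # All names, shapes, and the step count are assumptions for illustration.
    import torch
    import torch.nn.functional as F

    class ScalingAndSquaring(torch.nn.Module):
        """Approximate phi = exp(v) for a stationary velocity field v."""

        def __init__(self, num_steps: int = 7):
            super().__init__()
            self.num_steps = num_steps  # 7 squarings ~ 2**7 = 128 small substeps

        def forward(self, velocity: torch.Tensor) -> torch.Tensor:
            # velocity: (B, 3, D, H, W) displacement per unit time, in voxel units
            flow = velocity / (2 ** self.num_steps)  # scale so each substep is small
            for _ in range(self.num_steps):
                # compose the deformation with itself: u <- u(x) + u(x + u(x))
                flow = flow + self._warp(flow, flow)
            return flow  # displacement field (phi minus identity)

        @staticmethod
        def _warp(field: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
            # Sample `field` at locations displaced by `flow` (trilinear interpolation).
            b, _, d, h, w = field.shape
            # Identity sampling grid in normalized [-1, 1] coordinates (x, y, z order).
            zz, yy, xx = torch.meshgrid(
                torch.linspace(-1, 1, d, device=field.device),
                torch.linspace(-1, 1, h, device=field.device),
                torch.linspace(-1, 1, w, device=field.device),
                indexing="ij",
            )
            grid = torch.stack((xx, yy, zz), dim=-1).unsqueeze(0).expand(b, -1, -1, -1, -1)
            # Convert voxel-space displacements (dx, dy, dz) to normalized coordinates.
            norm = torch.tensor(
                [2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1), 2.0 / max(d - 1, 1)],
                device=field.device,
            ).view(1, 3, 1, 1, 1)
            offset = (flow * norm).permute(0, 2, 3, 4, 1)  # (B, D, H, W, 3), x-y-z order
            # grid_sample with a 5D input performs trilinear interpolation here.
            return F.grid_sample(field, grid + offset, mode="bilinear",
                                 padding_mode="border", align_corners=True)

Seven squaring steps (128 effective substeps) is a common choice in GPU-based diffeomorphic registration, trading integration accuracy against runtime; the small per-step computation is one reason such networks can keep inference times in the seconds range reported above.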