Author manuscript; available in PMC: 2023 Dec 20.
Published in final edited form as: IEEE Int Conf Robot Autom. 2023 Jul 4;2023:4724–4731. doi: 10.1109/icra48891.2023.10160372

Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing

Shervin Dehghani 1,4, Michael Sommersperger 1, Peiyao Zhang 4, Alejandro Martin-Gomez 4, Benjamin Busam 1, Peter Gehlbach 3, Nassir Navab 5, M Ali Nasseri 1,2, Iulian Iordachita 4
PMCID: PMC10732544  NIHMSID: NIHMS1907055  PMID: 38125032

Abstract

In the last decade, various robotic platforms have been introduced that could support delicate retinal surgeries. Concurrently, to provide semantic understanding of the surgical area, recent advances have enabled microscope-integrated intraoperative Optical Coherence Tomography (iOCT) with high-resolution 3D imaging at near video rate. The combination of robotics and semantic understanding enables task autonomy in robotic retinal surgery, such as for subretinal injection. This procedure requires precise needle insertion for the best treatment outcomes. However, merging robotic systems with iOCT introduces new challenges. These include, but are not limited to, high demands on data processing rates and the dynamic registration of these systems during the procedure. In this work, we propose a framework for autonomous robotic navigation for subretinal injection, based on intelligent real-time processing of iOCT volumes. Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target. We also introduce intelligent virtual B-scans, a volume slicing approach for rapid instrument pose estimation, which is enabled by convolutional neural networks (CNNs). Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method. Finally, we discuss identified challenges in this work and suggest potential solutions to further the development of such systems.

Keywords: Computer Vision for Medical Robotics, Medical Robots and Systems, Vision-Based Navigation

I. Introduction

Over the past decade, various robotic systems have been introduced to enhance ophthalmic surgery [1]-[6]. The utility of such systems is particularly evident in the treatment of complex retinal diseases that require delicate navigation and precise positioning of instruments, exceeding human capabilities in manual surgery. The treatment of age-related macular degeneration (AMD) is one example that motivates the use of precise robotic systems. AMD is a disease that, in its advanced neovascular stage, is characterized by leakage of fluid and blood near the macula. This leakage leads to irreversible damage to the retinal cells and loss of vision. AMD is considered the leading cause of visual impairment in patients over age 65 in developed countries [7] and is predicted to affect 288 million people by 2040 [8]. The current standard of care for AMD is the intravitreal injection of anti-VEGF drugs [9], which requires repeated treatments to delay progression but does not lead to a cure.

Novel advancements include, but are not limited to, stem cell therapy [10], retinal pigment epithelium (RPE) cell transplants [11], and gene editing technology [12]. These may eventually offer an effective treatment of AMD with a single intervention, enabled by precise subretinal delivery of the therapeutic agent to the potential space between the photoreceptors and the RPE-Bruch’s membrane complex [13]. For this reason, robotic approaches are envisioned to precisely guide a microsurgical injection needle into the subretinal space in order to safely deliver a therapeutic payload.

To aid surgical visualization and to reveal retinal layer structures, intraoperative Optical Coherence Tomography (iOCT) has been integrated into the surgical setup over the last decade. This modality enables 2D and 3D micrometer-resolution imaging of the surgical area and can provide video-rate volumetric imaging due to novel advances in scanning technology [14], [15]. Studies have shown that iOCT has the potential to enable targeted delivery of therapeutic agents into the subretinal space [16]. Prior work has combined iOCT imaging with a manually controlled robotic system [17], [18], confirming the advantages of employing a robotic system for subretinal injection, such as reduced tremor, improved bleb formation, and reduced reflux. In this work, we present a system for autonomous robot navigation, tailored to the use case of subretinal injection. The contributions of this work are as follows: 1) We present a framework that merges a robotic system with an iOCT-integrated microscope (Fig. 1) to create a surgeon-in-the-loop workflow (Fig. 2). The robot motion is enabled by real-time processing of volumetric iOCT data in order to navigate a surgical needle to a subretinal injection target chosen by the surgeon. The proposed workflow includes data acquisition, visualization, and selection of a target point within the acquired volume, followed by automatic volume processing and robot navigation. The final trajectory planning and motion are based on an online registration between the robotic instrument and the iOCT volume. 2) We introduce the novel concept of intelligent virtual B-scans as automatically selected instrument-aligned slices, reducing the complexity of the 3D iOCT data. In our framework, this facilitates the integration of convolutional neural networks (CNNs) for instrument pose estimation and trajectory planning in real time. We evaluate the proposed framework by first performing targeted navigation to arbitrary points in the volume and then by performing needle insertions in ex-vivo porcine eyes. The results show high accuracy and repeatability, encouraging further iOCT-guided robotic vitreoretinal applications. Our results also demonstrate the potential of virtual B-scans for real-time volumetric iOCT processing.

Fig. 1.

Overview of the proposed system, consisting of an iOCT microscope (a), which is used for volume acquisition and data transmission to the computation unit (b). The necessary computations are performed and the resulting commands are transmitted to the robot (c) to execute the injection. The B-scan slices acquired by the microscope are shown in (e), and (d) illustrates the microscopic view of the scene.

Fig. 2.

The proposed workflow follows volume processing and visualization (a), planning (b), and execution (c) stages.

II. Related work

Prior work has addressed components of our framework, such as instrument pose estimation, OCT B-scan segmentation, and the calibration of an iOCT system to a robotic setup. Needle pose estimation from iOCT volumes has been proposed in various works [19]-[21]. Such initial works addressing 5-degree-of-freedom (DoF) tool tracking were later extended to a 6-DoF localization approach [22]. While these methods have shown high accuracy in instrument detection and pose estimation, they are limited to tracking the needle above the retina and are based on processing the entire iOCT volume. This, in turn, makes it difficult for the integrated processing algorithms to cope with the high data throughput of high-speed iOCT systems. More advanced detection and segmentation algorithms are still required to improve the robustness and generalizability of such instrument tracking.

With the technical advances in recent years, deep learning approaches for the segmentation of OCT B-scans have shown high success rates [23]. Convolutional neural network (CNN) architectures have shown promising results for the segmentation of retinal layers [24]. Recently, 3D network architectures [25] have been developed and specifically tailored to segment whole OCT volumes in the presence of clinical pathology related to AMD. Most of these works, however, focus on the segmentation of retinal structures in diagnostic OCT, with higher image quality as compared to current iOCT imaging, and do not include surgical instruments and their related artifacts, such as shadows obscuring the retinal anatomy or mirroring artifacts. To our knowledge, there are only a few works addressing advanced instrument segmentation in iOCT imaging, such as the development of a fully convolutional network for the instrument segmentation in iOCT B-scans [26] and instrument segmentation in axial iOCT projection images used for real-time visualization of 4D iOCT [27]. A joint instrument and retinal layer segmentation from iOCT imaging for distance estimation between the instrument and selected retinal layers was proposed only recently [28]. Similar to dedicated iOCT segmentation algorithms, methods on the calibration of a volumetric iOCT and a robotic system are, to date, sparse and limited to expensive 3D instrument segmentation [29].

III. Method

A. Overview

In this work, we address the composition of a system that comprises the necessary components for robotic navigation based on iOCT, including rapid volume processing, instrument pose estimation, registration, and trajectory planning. The setup connects a robotic system with an iOCT-integrated microscope to meet the real-time requirements of intraoperative applications (Fig. 3) while keeping the human in the loop. The designed workflow consists of a sequence of steps: first, an iOCT volume is acquired. The required processing and instrument pose estimation are performed, and the volume is visualized (Fig. 2 (a)). The surgeon then chooses a 3D target point for subretinal injection in the volume. Subsequently, a trajectory for navigating to the specified target location is estimated (Fig. 2 (b)). Upon approval, the command is transmitted to the robot for execution (Fig. 2 (c)). The components of our workflow are specifically tailored to the use case of subretinal injection.

Fig. 3.

Setup overview. (a) Steady-Hand Eye-robot. (b) Leica iOCT-integrated Microscope. (c) Ex-vivo pig eye.

To perform a successful robotic injection, two features need to be extracted from an input volume: 1) the needle pose w.r.t. the iOCT volume and 2) the optimal trajectory to guide the instrument to a selected target point for subretinal injection. While 3D learning-based methods show promising results for high-resolution diagnostic OCT, they are not yet suitable for real-time 3D analysis. Additionally, because of the non-uniform voxel spacing of the acquired iOCT volumes, conventional 3D CNN-based methods are not directly applicable. In our approach, we therefore decompose the problem of needle pose estimation into two consecutive 2D segmentation tasks. We introduce intelligent virtual B-scans to segment only selected regions extracted in a rapid volume-processing step. This is further discussed in Sec. III-B and Sec. III-C. Afterward, in Sec. III-D, the needle pose estimation and the registration between the robot and the volume are described. Finally, the trajectory for executing the injection at the given 3D target point is calculated and transmitted to the robot (Sec. III-E).

B. Axial Projection Image

As proposed in [27], axial projection maps, generated by applying an operation along the iOCT A-scans of the volume, result in feature maps with strong visual cues for locating an instrument in the volume. Leveraging the instrument shadowing generated by attenuation of the emitted iOCT spectrum at the needle surface, we apply an operation on the A-scans to highlight the needle footprint in the generated projection image. Following [27], we name this image the Axial Projection Image, which can be defined as:

\mathcal{P}_{\text{Axial}}[x, y] = \bigoplus_{z} \mathcal{V}[x, y, z] \quad (1)

where 𝒱 is the original volume and ⊕ is an operator applied along the axial (Z) direction of each A-scan. We choose ⊕ to be the mean operator, as it has shown promising results in our experiments. Visual examples of the generated projection images can be seen in Fig. 4.
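
As a concrete illustration of Eq. (1), the following minimal Python/NumPy sketch (not the authors' code; the volume axis layout is an assumption) computes an Axial Projection Image by averaging each A-scan along the axial direction:

```python
import numpy as np

def axial_projection(volume: np.ndarray) -> np.ndarray:
    """volume: iOCT volume shaped (n_bscans, z_depth, n_ascans) -- assumed layout.
    Returns a 2D (n_bscans, n_ascans) projection; the needle appears as a dark
    footprint because the instrument attenuates the A-scans below it."""
    return volume.mean(axis=1)  # the mean operator chosen as the projection operator

# Small stand-in volume (the real volumes are 100 B-scans x 1024 px x 1000 A-scans).
vol = np.random.rand(10, 64, 100).astype(np.float32)
print(axial_projection(vol).shape)  # (10, 100)
```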

Fig. 4.

Applying an operation on the A-scans of the volume to create the Axial Projection Image.

By employing a CNN-based approach to segment the instrument in the 2D projection, all A-scans containing the instrument can be identified in the volume. Similar to [27], we train a U-Net [30] with a ResNet18 [31] backbone for this segmentation, using a binary cross-entropy loss on our custom dataset ds_axial (Sec. IV). Examples of the predicted instrument segmentation and the corresponding ground truth on our test set are shown in Fig. 5.
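
A minimal training sketch for this step is shown below. It assumes the third-party segmentation_models_pytorch package for the U-Net/ResNet18 architecture and uses stand-in tensors in place of ds_axial; data loading, the random B-scan dropout augmentation, and validation are omitted.

```python
import torch
import segmentation_models_pytorch as smp  # third-party package, assumed available

# U-Net with a ResNet18 encoder for single-channel axial projection images.
model = smp.Unet(encoder_name="resnet18", encoder_weights=None,
                 in_channels=1, classes=1)
criterion = torch.nn.BCEWithLogitsLoss()        # binary cross-entropy on needle masks
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-in batch: 4 projection images (128x128) with sparse binary needle masks.
images = torch.rand(4, 1, 128, 128)
masks = (torch.rand(4, 1, 128, 128) > 0.95).float()

model.train()
for _ in range(5):                              # a few illustrative optimization steps
    optimizer.zero_grad()
    loss = criterion(model(images), masks)
    loss.backward()
    optimizer.step()
```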

Fig. 5.

Performance of the Axial Projection Image segmentation model on the test set; the shown ground truths include the applied augmentations.

During inference, we filter the pixels with high confidence and fit a 2D line to them using a Huber regressor [32]. In our experiments, we chose a confidence threshold corresponding to the highest 1% of output probabilities. We denote the rotation of this line θz (Fig. 6 (c)), as this value encodes the needle rotation w.r.t. the volume Z-axis. The needle tip position in this image also provides tx and ty, the needle tip translation along the volume X- and Y-axes, defined as the coordinates of the extremum point on the line.
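
The line-fitting step can be sketched as follows (illustrative only; the top-1% threshold and the extremum-based tip definition follow the text, while the array layout and the assumption that the tip points toward increasing x are ours):

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

def needle_line_from_probmap(prob: np.ndarray):
    """prob: (H, W) needle probability map over the Axial Projection Image."""
    thresh = np.quantile(prob, 0.99)                    # keep the top 1% most confident pixels
    ys, xs = np.nonzero(prob >= thresh)
    reg = HuberRegressor().fit(xs.reshape(-1, 1), ys)   # robust 2D line fit: y = a*x + b
    theta_z = np.arctan(reg.coef_[0])                   # in-plane needle rotation (theta_z)
    tip_x = xs.max()                                    # extremum point along the line
    tip_y = reg.predict([[tip_x]])[0]                   # assumes the tip points toward +x
    return theta_z, tip_x, tip_y                        # (theta_z, t_x, t_y)

# Toy probability map with a synthetic needle footprint.
prob = np.zeros((100, 1000))
prob[48:52, 200:700] = np.linspace(0.5, 1.0, 500)
print(needle_line_from_probmap(prob))
```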

Fig. 6.

Intelligent virtual B-scan generation. (a) Original volume. (b) Axial Projection Image, generated with a mean operation. (c) Line fitted to the segmentation result for estimating the virtual B-scan parameters. (d) Intelligent virtual B-scan, generated by interpolation between adjacent A-scans.

C. Intelligent Virtual B-scan

The location of each B-scan in the volume can be identified by the conventional plane equation in 3D space, (P0,n), where P0 is a 3D point on the plane and n is the plane normal vector. Hence, the plane formulation of each of the B-scans acquired from the microscope can be written as:

(P_0: (0,\; i \cdot \xi,\; 0),\;\; \vec{n}: \hat{j}) \quad (2)

where ξ is the spacing along the Y-axis, i is the index of the specific B-scan, and ĵ is the unit vector along the Y-axis. In contrast, we define a virtual B-scan as an arbitrary plane in the original iOCT volume. Instead of extracting the B-scan cross-sections provided by the iOCT scanning pattern from the volume, a plane equation can define any arbitrary volume slicing. We call a slicing intelligent if the plane is automatically selected based on a semantic understanding of the surgical scene and the application, in order to create an optimized representation of complex data. The pixel intensities of this slice are composed by interpolating the voxel intensities adjacent to the virtual plane. In our case, we use the instrument line parameters θz, tx, and ty from Sec. III-B to define a tool-aligned intelligent virtual B-scan as:

(P_0: (t_x,\; t_y,\; \cdot\,),\;\; \vec{n}: (\sin\theta_z,\; \cos\theta_z,\; 0)) \quad (3)

which provides a plane aligned with the needle. The tool-aligned slice can therefore be generated by interpolating the adjacent iOCT A-scans identified by the line from Sec. III-B (Fig. 6). While, in general, a trilinear or more complex interpolation method could be chosen for the composition of the intelligent virtual B-scan to generate a visually appealing cross-section, we employ a linear interpolation scheme to reduce the computational complexity.
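
A simplified slicing routine under assumed axis conventions could look as follows; it samples the fitted needle line laterally and interpolates the adjacent A-scans (bilinear in the lateral plane, which keeps the interpolation linear and cheap):

```python
import numpy as np

def virtual_bscan(volume: np.ndarray, t_x: float, t_y: float, theta_z: float,
                  length: int = 300) -> np.ndarray:
    """volume: (X, Y, Z) voxel grid (assumed layout). Returns a (length, Z)
    cross-section through (t_x, t_y) along the direction given by theta_z."""
    X, Y, Z = volume.shape
    s = np.arange(length) - length / 2.0
    xs = t_x + s * np.cos(theta_z)
    ys = t_y + s * np.sin(theta_z)
    valid = (xs >= 0) & (xs <= X - 1) & (ys >= 0) & (ys <= Y - 1)
    x0, y0 = np.floor(xs[valid]).astype(int), np.floor(ys[valid]).astype(int)
    x1, y1 = np.minimum(x0 + 1, X - 1), np.minimum(y0 + 1, Y - 1)
    wx, wy = (xs[valid] - x0)[:, None], (ys[valid] - y0)[:, None]
    out = np.zeros((length, Z), dtype=volume.dtype)
    # Linear interpolation of the four neighbouring A-scans at each sample.
    out[valid] = ((1 - wx) * (1 - wy) * volume[x0, y0] + wx * (1 - wy) * volume[x1, y0]
                  + (1 - wx) * wy * volume[x0, y1] + wx * wy * volume[x1, y1])
    return out

vol = np.random.rand(200, 100, 128).astype(np.float32)  # small stand-in volume
print(virtual_bscan(vol, t_x=100, t_y=50, theta_z=0.3).shape)  # (300, 128)
```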

This tool-aligned intelligent virtual B-scan allows the estimation of the axial instrument rotation Ry and the instrument tip coordinate along the volume Z-axis, tz. To identify the needle, the Inner Limiting Membrane (ILM), and the RPE layer, we subsequently employ a second segmentation network. Using our custom dataset ds_bscan (see Sec. IV), we train a U-Net-style network with a ResNet18 backbone, using a combination of cross-entropy loss and focal loss for the joint segmentation of retinal layers and the surgical instrument, as proposed by [28]. The segmentation network was additionally pre-trained on synthetic iOCT data generated by the method described in [33]. Examples of the virtual B-scans and the obtained segmentations are illustrated in Fig. 7.
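
The combined loss can be sketched as below (the mixing weight and focal gamma are illustrative assumptions, not values reported in the paper):

```python
import torch
import torch.nn.functional as F

def ce_focal_loss(logits: torch.Tensor, target: torch.Tensor,
                  gamma: float = 2.0, focal_weight: float = 0.5) -> torch.Tensor:
    """logits: (N, C, H, W) raw outputs; target: (N, H, W) class indices
    (e.g. background, needle, ILM, RPE)."""
    ce = F.cross_entropy(logits, target, reduction="none")  # per-pixel cross-entropy
    pt = torch.exp(-ce)                                      # probability of the true class
    focal = ((1.0 - pt) ** gamma) * ce                       # down-weights easy pixels
    return ce.mean() + focal_weight * focal.mean()

logits = torch.randn(2, 4, 64, 64)
target = torch.randint(0, 4, (2, 64, 64))
print(ce_focal_loss(logits, target).item())
```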

Fig. 7.

Examples of the B-scan segmentation model output on the test data. Yellow represents the needle, red the ILM layer, and blue the RPE layer.

Similar to Sec. III-B, we fit a line to the segmented needle pixels with high confidence. We denote the rotation of this line θy (Fig. 6 (d)), since it provides the needle rotation Ry. Furthermore, tz can be estimated from the innermost segmented point on the line. Having θz, θy, tx, ty, and tz calculated, the 5-DoF needle pose can be formulated as:

R = R_z(\theta_z)\, R_y(\theta_y)\, R_x(\theta_x) \quad (4)

assuming yaw, pitch and roll rotation around the Z, Y and X axes. Hence, we obtain the following rotation matrix:

R = \begin{pmatrix} \cos\theta_z & -\sin\theta_z & 0 \\ \sin\theta_z & \cos\theta_z & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \cos\theta_y & 0 & \sin\theta_y \\ 0 & 1 & 0 \\ -\sin\theta_y & 0 & \cos\theta_y \end{pmatrix} I_3 \quad (5)

The rotation around the X-axis (roll) can be neglected due to the needle symmetry. For the tool tip position, we obtain:

t = [t_x,\; t_y,\; t_z] \quad (6)
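
Eqs. (4)-(6) can be assembled directly; the short sketch below composes the 5-DoF pose from the two estimated angles and the tip translation (roll around X set to identity, as stated above):

```python
import numpy as np

def needle_pose(theta_z: float, theta_y: float, t_x: float, t_y: float, t_z: float):
    Rz = np.array([[np.cos(theta_z), -np.sin(theta_z), 0.0],
                   [np.sin(theta_z),  np.cos(theta_z), 0.0],
                   [0.0,              0.0,             1.0]])
    Ry = np.array([[ np.cos(theta_y), 0.0, np.sin(theta_y)],
                   [ 0.0,             1.0, 0.0            ],
                   [-np.sin(theta_y), 0.0, np.cos(theta_y)]])
    R = Rz @ Ry @ np.eye(3)          # R_x = I_3 due to needle symmetry (Eq. 5)
    t = np.array([t_x, t_y, t_z])    # Eq. (6)
    return R, t

R, t = needle_pose(np.deg2rad(30), np.deg2rad(10), 120.0, 45.0, 300.0)
print(np.round(R, 3), t)
```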

D. Robot to Volume Registration

In iOCT-guided interventions, the surgeon can dynamically adjust the scanning area. A preoperative calibration therefore becomes invalid whenever the scanning region is updated, and a dynamic online registration between the robotic system and the iOCT volume is preferred. In our case, the Z-axis of the robot base and the iOCT Z-direction are aligned, both being parallel to the A-scan direction of the iOCT system. To estimate the transformation between the robot and the volume, it is therefore sufficient to estimate the transformation of one of the other two axes. As the injection needle is rigidly mounted on the robot, the robot axes can be defined as:

v_z = \hat{k}, \quad v_y = (\sin\theta_z,\; \cos\theta_z,\; 0), \quad v_x = v_y \times v_z \quad (7)

Given these vectors, any translation vector, Tv, in the volume coordinate system can be transformed into a translation vector in the robot coordinate system as:

C = [\,v_x \;\; v_y \;\; v_z\,] \quad (8)
T_r = C^{-1} T_v \quad (9)

The matrix C from Eq. 8 is used in Eq. 9 to transform the translation vectors computed in the next step from iOCT volume coordinates to robot coordinates.
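
A direct sketch of Eqs. (7)-(9), under the stated assumption that the robot and volume Z-axes are aligned with the A-scan direction:

```python
import numpy as np

def volume_to_robot(theta_z: float, T_v: np.ndarray) -> np.ndarray:
    """Maps a translation T_v expressed in iOCT volume coordinates into the
    robot coordinate system (axis conventions are assumptions)."""
    v_z = np.array([0.0, 0.0, 1.0])                         # shared A-scan / Z direction
    v_y = np.array([np.sin(theta_z), np.cos(theta_z), 0.0])
    v_x = np.cross(v_y, v_z)
    C = np.column_stack([v_x, v_y, v_z])                    # Eq. (8)
    return np.linalg.inv(C) @ T_v                           # Eq. (9): T_r = C^-1 T_v

T_v = np.array([50.0, -20.0, 100.0])                        # e.g. micrometres in the volume frame
print(volume_to_robot(np.deg2rad(30), T_v))
```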

E. Trajectory Planning

Similar to [34], we define the insertion line as a 3D line through the target point V and parallel to the needle. To execute a successful injection, the needle needs to follow this line from the moment of contact between tool and tissue until the target point is reached. We decouple the trajectory into two parts: a) tA, a translation to align the needle with the insertion line, and b) tB, a translation along the insertion line until the target point V is reached.

There are multiple possibilities to define tA and tB. However, to avoid tissue damage while applying tA, we impose the following constraints on tA and tB:

\begin{cases} t_A + t_B = V - T_{\text{needle}} \\ t_B \parallel r_{\text{needle}} \\ \angle(t_A, \hat{k}) = \pi/2 \end{cases} \quad (10)

We calculate the point J as the intersection of the plane (Tneedle, k̂) and the insertion line. Thereafter, tA = J − Tneedle and tB = V − J, as illustrated in Fig. 8.
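
The decomposition can be written down compactly; the sketch below (with assumed vector conventions) computes J as the intersection of the horizontal plane through the needle tip with the insertion line, and returns tA and tB satisfying the constraints of Eq. (10):

```python
import numpy as np

def split_trajectory(T_needle: np.ndarray, V: np.ndarray, d_needle: np.ndarray):
    """T_needle: current needle tip, V: target point, d_needle: needle direction."""
    d = d_needle / np.linalg.norm(d_needle)
    k = np.array([0.0, 0.0, 1.0])               # A-scan (Z) direction
    # Insertion line p(s) = V + s*d intersected with the plane k . (p - T_needle) = 0;
    # requires the needle not to be orthogonal to k.
    s = np.dot(k, T_needle - V) / np.dot(k, d)
    J = V + s * d
    t_A = J - T_needle                          # lateral alignment, orthogonal to k
    t_B = V - J                                 # advance along the insertion line
    return t_A, t_B

t_A, t_B = split_trajectory(np.array([0.0, 0.0, 0.0]),
                            np.array([200.0, 100.0, 350.0]),
                            np.array([0.3, 0.1, 0.95]))
print(t_A, t_B)
```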

To formulate these equations, only the intelligent virtual B-scan described in Sec. III-C, which contains the needle, and a second virtual B-scan, which contains the needle after applying tA, are required. The second virtual B-scan can be identified as (P0: V, n: (sin θz, cos θz, 0)). Since in our iOCT system the emitted spectrum traverses media with different refractive indices along the A-scans, the acquired points need to be compensated for the resulting displacements along the Z-axis [35]. We leverage the segmented ILM and RPE layer boundaries from the second virtual B-scan to correct the axial spacing by applying the appropriate refractive index to each region; this A-scan correction only applies to the step tB.
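
One plausible form of this correction is sketched below, assuming the OCT reports optical path length so that physical depth in each medium is the nominal depth divided by that medium's refractive index; the paper's exact implementation may differ, and the boundary handling here is illustrative.

```python
def corrected_depth(z_index: int, z_ilm: int, dz_nominal_um: float,
                    n_above: float = 1.0, n_tissue: float = 1.38) -> float:
    """Physical depth (um) of axial pixel z_index, with the medium change at the
    segmented ILM boundary z_ilm. Assumes optical path = physical depth * n."""
    if z_index <= z_ilm:
        return z_index * dz_nominal_um / n_above
    return (z_ilm * dz_nominal_um / n_above
            + (z_index - z_ilm) * dz_nominal_um / n_tissue)

# Example: a target 100 axial pixels below the ILM with a ~3.3 um nominal pitch.
print(corrected_depth(z_index=400, z_ilm=300, dz_nominal_um=3.3))
```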

IV. Materials

For our experiments, we used a setup consisting of the Steady-Hand Eye Robot [6] and a Leica Proveo 8 with EnFocus OCT Imaging System (Fig. 3). Both the robot and the microscope are connected to a workstation via LAN through a network switch, establishing the communication between all components. The microscope was pre-calibrated, aligning the iOCT beam to the surgeon’s view and the video signal to the iOCT scanning area. To acquire the iOCT datasets for our experiments and network training, we collected B-scans, each composed of 1000 A-scans with an axial resolution of 1024 pixels. Each volume was collected over a square scanning area of 2.5 mm × 2.5 mm at a scanning depth of 3.38 mm and contains 100 linearly acquired B-scans. This configuration provides a voxel spacing of (2.5 μm, 25 μm, 3 μm). For ds_axial in Sec. III-B, we created a dataset of Axial Projection Images from 100 iOCT volumes, as shown in Fig. 5. While acquiring the iOCT volumes from the microscope at inference time, some B-scans are not successfully transmitted due to network connection issues. This can decrease the instrument segmentation performance because the missing B-scans generate a similar effect in the Axial Projection Image as the needle shadowing. To account for this, we randomly omit some of the B-scans during training. For ds_bscan described in Sec. III-C, we collected a custom dataset of 300 images, 2.5 mm × 3.8 mm, using a NanoFil NF36BV 36-gauge beveled needle and an ICSI MIC-SI-0 micropipette, above and inserted into ex-vivo pig eye retinas, in the absence and presence of vitreous. We only collected B-scans obtained directly from the microscope while the B-scans were aligned with the needle. We will make both datasets publicly available soon.
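
For reference, the quoted voxel spacing follows directly from these acquisition parameters (the axial value rounds from roughly 3.3 μm to the stated 3 μm):

```python
# Lateral spacing within a B-scan, spacing between B-scans, and axial pixel pitch.
dx_um = 2.5e3 / 1000   # 2.5 mm over 1000 A-scans  -> 2.5 um
dy_um = 2.5e3 / 100    # 2.5 mm over 100 B-scans   -> 25 um
dz_um = 3.38e3 / 1024  # 3.38 mm over 1024 pixels  -> ~3.3 um
print(dx_um, dy_um, dz_um)
```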

All CNNs were trained using the PyTorch 1.10.1 framework on an NVIDIA RTX 2080Ti. The datasets were labeled by two biomedical engineering experts using ImFusionLabels (ImFusion GmbH, Munich, Germany). The acquired iOCT volumes are visualized on the workstation via custom rendering plugins integrated into the ImFusionSuite framework (ImFusion GmbH, Munich, Germany).

To correct the pixel spacing resulting from refraction, as described in Sec. III-E, we used nair = 1, nvitreous = 1.38, and ntissue = 1.38, as suggested by the microscope manufacturer, during the open-sky ex-vivo porcine eye experiments.

V. Experiments and Results

A. Inference Rate

With inference on an NVIDIA GeForce RTX 2080Ti, we measured an average time of 0.232 ± 0.008 seconds for intelligent virtual B-scan composition and segmentation with needle pose estimation (step a in the workflow, Fig. 2), and 0.188 ± 0.009 seconds for trajectory planning (step b). We measured 5.45 ± 0.80 seconds for the robot to perform the needle insertion (step c), averaged over 10 insertions at randomly selected positions. With an average image acquisition time of 7.69 ± 0.27 seconds, volume acquisition proved to be the most time-consuming step, as our microscope system has not been specifically designed for integration into a robotic setup.

B. System Accuracy

We individually evaluated the precision of the robot, the precision of the system in the absence of tissue, and the precision of autonomously navigating to an injection target in the retina, using the needles described in Sec. IV. We do not report the accuracy of the segmentation models individually, since the performance of these networks has been evaluated and discussed for similar use cases in the referenced works. Two biomedical engineering experts independently marked the tool tip locations in all volumes to generate a reliable ground-truth position. To measure the accuracy of the robot’s translational movement, we moved each robot axis by about 500 μm, 10 times per axis, and measured the Euclidean distance of the needle tip in the iOCT volume before and after each movement. The results show an accuracy of 5 ± 2 μm for each robot axis. To measure the precision of autonomous robot navigation, we first performed 10 trials of navigating to a target location without retinal contact. Afterward, we performed 10 autonomous needle insertions into the retinal tissue, each toward a randomly chosen target point in proximity to the RPE. For these experiments, we used ex-vivo porcine eyes in an open-sky setup without removing the vitreous gel. For all trials, we measured the Euclidean distance of the needle tip’s final position to the target point. For the 10 trials without retinal insertion, we measured a mean error of 24 ± 5 μm, while for navigation to a target inside the retina, we measured a mean error of 32 ± 4 μm. The error distribution for both sets of trials is illustrated in Fig. 9.

Fig. 9.

Final performance of the injection process. The chart shows the Euclidean distance between the needle tip and the target point in two different environments. The number of sample points is not sufficient for an accurate distribution estimate; however, it still provides insight into the consistency and accuracy of the method.

VI. Discussion and future work

In this work, we introduced an approach for autonomous robot navigation based on volumetric iOCT tailored for subretinal injection. We have proposed an end-to-end framework that integrates a robotic platform as well as an iOCT system while keeping the surgeon in the loop. Our experiments on ex-vivo porcine eyes show promising results with errors close to the precision that is required for targeting the anatomical area for subretinal injection [13]. The comparison to our experiments in air suggests that tissue and instrument deformations need to be investigated in future works.

In this paper, we introduce intelligent virtual B-scan composition to reduce the complexity of the scene and focus the processing on the relevant structures of the volume. Such representations not only allow rapid estimation of the instrument pose but may also be displayed to the surgeon to verify the processing algorithms, which in turn could accelerate the acceptance of autonomous systems. During our experiments, we used a straight needle. However, when using a bent needle or other micro-surgical tools, where the instrument cannot be captured by a single 2D plane, multiple cross-sections could be extracted and a similar method with modifications to Sec. III-B and Sec. III-C could be employed.

To understand the feasibility of such an end-to-end system, we restricted the robot control for our open-sky experiments and only applied translational movements. In a realistic scenario, rotations around the remote center of motion (RCM) need to be applied to minimize the force on the sclera. However, our method can be adapted to integrate RCM-based rotations by adjusting tA in Sec. III-E with a series of translational and rotational movements. In a surgical scenario, patient breathing and instrument navigation can lead to significant retinal motion; a higher volume acquisition rate in state-of-the-art swept-source iOCT systems could therefore facilitate the integration of temporal registration, which in turn can be used to track the target point and update the trajectory accordingly. Another possible extension is dynamically aligning the iOCT scanning area with the generated tool-aligned B-scans and hence adjusting the iOCT scanning pattern to acquire only selected B-scans, which would increase the overall inference speed of our end-to-end system.

Currently, we have not yet optimized the inference times of our networks, since the current bottleneck of our system is the acquisition and transmission of the iOCT volumes. However, previous works [27], [28], [36] have shown that network inference times for iOCT segmentation can be drastically improved using kernel optimization and tensor fusion strategies. Additionally, to further improve the real-time capability of such systems, the communication between the iOCT and robotic system may be optimized using faster communication protocols. In future works, we will consider the segmented ILM and RPE to safely navigate the instrument, using real-time processing as proposed by [28], to avoid harming the RPE and the cells of the neuroretina. We chose a fixed speed for the robot during trajectory execution. However, the navigation speed could be optimized by adopting a higher speed for movement without tissue contact during tA and a dynamic speed for tB to ensure effective tissue insertion.

VII. Conclusion

In this paper, we proposed an end-to-end framework for robotic subretinal injections and designed a workflow with the surgeon in the loop, which is also adaptable to future extensions. We proposed a fast processing method for volumetric iOCT data to automate the navigation of the robot. Despite current hardware limitations for real-time scan acquisition, we have shown that our method is capable of real-time processing, solving the 3D instrument pose estimation problem by relaxing it to two separate 2D segmentation tasks. This step is mainly enabled by the composition of intelligent virtual B-scans. Our method forms a basis for many future works based on volumetric iOCT processing and can provide a real-time approach for similar tasks that involve 3D pose estimation and trajectory planning for robotic retinal surgery.

Fig. 8.

Steps of the trajectory calculation. (a) Original volume with the target point, needle, insertion line, tA, and tB. (b) Visualization of the volume, needle, and target point after applying tA. (c) Side view of the previous step. (d) Side view of the volume, needle, and target point after applying tB.

Acknowledgment

We thank Leica Microsystems GmbH (Wetzlar, Germany) for their assistance with the Leica Proveo 8 with EnFocus OCT, which made real-time access to the microscope data possible.

This work was supported by the U.S. National Institutes of Health under grants 2R01EB023943-04A1 and 1R01EB025883-01A1, and partially by JHU internal funds.

References

[1] Ullrich F, Bergeles C, Pokki J, Ergeneman O, Erni S, Chatzipirpiridis G, Pané S, Framme C, and Nelson BJ, “Mobility experiments with microrobots for minimally invasive intraocular surgery,” Investigative Ophthalmology & Visual Science, vol. 54, no. 4, pp. 2853–2863, 2013.
[2] Rahimy E, Wilson J, Tsao TC, Schwartz S, and Hubschman JP, “Robot-assisted intraocular surgery: development of the IRISS and feasibility studies in an animal model,” Eye, vol. 27, no. 8, pp. 972–978, 2013.
[3] Gijbels A, Vander Poorten E, Gorissen B, Devreker A, Stalmans P, and Reynaerts D, “Experimental validation of a robotic comanipulation and telemanipulation system for retinal surgery,” in 2014 5th IEEE RAS EMBS Int. Conf. Biomed. Robot. Biomechatronics. IEEE, 2014, pp. 144–150.
[4] Nasseri MA, Eder M, Nair S, Dean EC, Maier M, Zapp D, Lohmann CP, and Knoll A, “The introduction of a new robot for assistance in ophthalmic surgery,” in Eng. Med. Biol. Soc. (EMBC), 2013 35th Annu. Int. Conf. IEEE. IEEE, 2013, pp. 5682–5685.
[5] Molaei A, Abedloo E, de Smet MD, Safi S, Khorshidifar M, Ahmadieh H, Khosravi MA, and Daftarian N, “Toward the art of robotic-assisted vitreoretinal surgery,” Journal of Ophthalmic & Vision Research, vol. 12, no. 2, p. 212, 2017.
[6] He X, Roppenecker D, Gierlach D, Balicki M, Olds K, Gehlbach P, Handa J, Taylor R, and Iordachita I, “Toward clinically applicable steady-hand eye robot for vitreoretinal surgery,” in ASME International Mechanical Engineering Congress and Exposition, vol. 45189. American Society of Mechanical Engineers, 2012, pp. 145–153.
[7] Wong WL, Su X, Li X, Cheung CMG, Klein R, Cheng C-Y, and Wong TY, “Global prevalence of age-related macular degeneration and disease burden projection for 2020 and 2040: a systematic review and meta-analysis,” The Lancet Global Health, vol. 2, no. 2, pp. e106–e116, 2014. doi: 10.1016/S2214-109X(13)70145-1
[8] Casten R, Rovner BW, and Fontenot JL, “Targeted Vision Function Goals and Use of Vision Resources in Ophthalmology Patients with Age-Related Macular Degeneration and Comorbid Depressive Symptoms,” Journal of Visual Impairment & Blindness, vol. 110, no. 6, pp. 413–424, 2016. doi: 10.1177/0145482X1611000604
[9] Finger RP, Daien V, Eldem BM, Talks JS, Korobelnik J-F, Mitchell P, Sakamoto T, Wong TY, Pantiri K, and Carrasco J, “Anti-vascular endothelial growth factor in neovascular age-related macular degeneration – a systematic review of the impact of anti-VEGF on patient outcomes and healthcare systems,” BMC Ophthalmology, vol. 20, no. 1, p. 294, 2020. doi: 10.1186/s12886-020-01554-2
[10] Wang Y, Tang Z, and Gu P, “Stem/progenitor cell-based transplantation for retinal degeneration: a review of clinical trials,” Cell Death & Disease, vol. 11, no. 9, p. 793, 2020. doi: 10.1038/s41419-020-02955-3
[11] Zarbin M, Sugino I, and Townes-Anderson E, “Concise Review: Update on Retinal Pigment Epithelium Transplantation for Age-Related Macular Degeneration,” Stem Cells Translational Medicine, vol. 8, no. 5, pp. 466–477, May 2019. Available: https://pubmed.ncbi.nlm.nih.gov/30748126
[12] Chung SH, Mollhoff IN, Nguyen U, Nguyen A, Stucka N, Tieu E, Manna S, Meleppat RK, Zhang P, Nguyen EL, Fong J, Zawadzki R, and Yiu G, “Factors Impacting Efficacy of AAV-Mediated CRISPR-Based Genome Editing for Treatment of Choroidal Neovascularization,” Molecular Therapy. Methods & Clinical Development, vol. 17, pp. 409–417, January 2020. Available: https://pubmed.ncbi.nlm.nih.gov/32128346
[13] Karampelas M, Sim DA, Keane PA, Papastefanou VP, Sadda SR, Tufail A, and Dowler J, “Evaluation of retinal pigment epithelium–Bruch’s membrane complex thickness in dry age-related macular degeneration using optical coherence tomography,” British Journal of Ophthalmology, vol. 97, no. 10, pp. 1256–1261, 2013. Available: https://bjo.bmj.com/content/97/10/1256
[14] Carrasco-Zevallos O, Keller B, Viehland C, Hahn P, Kuo AN, DeSouza PJ, Toth CA, and Izatt JA, “Real-time 4D visualization of surgical maneuvers with 100kHz swept-source Microscope Integrated Optical Coherence Tomography (MIOCT) in model eyes,” Investigative Ophthalmology & Visual Science, 2014.
[15] Carrasco-Zevallos OM, Viehland C, Keller B, McNabb RP, Kuo AN, and Izatt JA, “Constant linear velocity spiral scanning for near video rate 4D OCT ophthalmic and surgical imaging with isotropic transverse sampling,” Biomedical Optics Express, vol. 9, no. 10, pp. 5052–5070, September 2018. Available: https://pubmed.ncbi.nlm.nih.gov/30319921
[16] Xue K, Groppe M, Salvetti AP, and MacLaren RE, “Technique of retinal gene therapy: delivery of viral vector into the subretinal space,” Eye, vol. 31, no. 9, pp. 1308–1316, 2017. doi: 10.1038/eye.2017.158
[17] Ladha R, Meenink T, Smit J, and de Smet MD, “Advantages of robotic assistance over a manual approach in simulated subretinal injections and its relevance for gene therapy,” Gene Therapy, 2021. doi: 10.1038/s41434-021-00262-w
[18] Nasseri MA, Maier M, and Lohmann CP, “A targeted drug delivery platform for assisting retinal surgeons for treating age-related macular degeneration (AMD),” in 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2017, pp. 4333–4338.
[19] Zhou M, Huang K, Eslami A, Roodaki H, Lin H, Lohmann CP, Knoll A, and Nasseri MA, “Beveled needle position and pose estimation based on optical coherence tomography in ophthalmic microsurgery,” in 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2017, pp. 308–313.
[20] Zhou M, Huang K, Eslami A, Roodaki H, Zapp D, Maier M, Lohmann CP, Knoll A, and Nasseri MA, “Precision needle tip localization using optical coherence tomography images for subretinal injection,” in 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 4033–4040.
[21] Weiss J, Rieke N, Nasseri MA, Maier M, Eslami A, and Navab N, “Fast 5DoF needle tracking in iOCT,” International Journal of Computer Assisted Radiology and Surgery, vol. 13, no. 6, pp. 787–796, 2018.
[22] Zhou M, Hao X, Eslami A, Huang K, Cai C, Lohmann CP, Navab N, Knoll A, and Nasseri MA, “6DoF needle pose estimation for robot-assisted vitreoretinal surgery,” IEEE Access, vol. 7, pp. 63113–63122, 2019.
[23] Viedma IA, Alonso-Caneiro D, Read SA, and Collins MJ, “Deep learning in retinal optical coherence tomography (OCT): A comprehensive survey,” Neurocomputing, vol. 507, pp. 247–264, 2022. Available: https://www.sciencedirect.com/science/article/pii/S092523122200995X
[24] Roy AG, Conjeti S, Karri SPK, Sheet D, Katouzian A, Wachinger C, and Navab N, “ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks,” Biomed. Opt. Express, vol. 8, no. 8, pp. 3627–3642, Aug 2017. Available: http://opg.optica.org/boe/abstract.cfm?URI=boe-8-8-3627
[25] Mukherjee S, Silva TD, Grisso P, Wiley H, Tiarnan DLK, Thavikulwat AT, Chew E, and Cukras C, “Retinal layer segmentation in optical coherence tomography (OCT) using a 3D deep-convolutional regression network for patients with age-related macular degeneration,” Biomed. Opt. Express, vol. 13, no. 6, pp. 3195–3210, Jun 2022. Available: http://opg.optica.org/boe/abstract.cfm?URI=boe-13-6-3195
[26] Zhou M, Roodaki H, Eslami A, Chen G, Huang K, Maier M, Lohmann CP, Knoll A, and Nasseri MA, “Needle segmentation in volumetric optical coherence tomography images for ophthalmic microsurgery,” Applied Sciences, vol. 7, no. 8, 2017. Available: https://www.mdpi.com/2076-3417/7/8/748
[27] Weiss J, Sommersperger M, Nasseri A, Eslami A, Eck U, and Navab N, “Processing-aware real-time rendering for optimized tissue visualization in intraoperative 4D OCT,” in Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, Martel AL, Abolmaesumi P, Stoyanov D, Mateus D, Zuluaga MA, Zhou SK, Racoceanu A, and Joskowicz L, Eds. Cham: Springer International Publishing, 2020, pp. 267–276.
[28] Sommersperger M, Weiss J, Nasseri MA, Gehlbach P, Iordachita I, and Navab N, “Real-time tool to layer distance estimation for robotic subretinal injection using intraoperative 4D OCT,” Biomed. Opt. Express, vol. 12, no. 2, pp. 1085–1104, Feb 2021. Available: http://opg.optica.org/boe/abstract.cfm?URI=boe-12-2-1085
[29] Zhou M, Hamad M, Weiss J, Eslami A, Huang K, Maier M, Lohmann CP, Navab N, Knoll A, and Nasseri MA, “Towards robotic eye surgery: Marker-free, online hand-eye calibration using optical coherence tomography images,” IEEE Robotics and Automation Letters, vol. 3, no. 4, pp. 3944–3951, 2018.
[30] Ronneberger O, Fischer P, and Brox T, “U-Net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Navab N, Hornegger J, Wells WM, and Frangi AF, Eds. Cham: Springer International Publishing, 2015, pp. 234–241.
[31] He K, Zhang X, Ren S, and Sun J, “Deep residual learning for image recognition,” CoRR, abs/1512.03385, 2015.
[32] Huber PJ, “Robust methods of estimation of regression coefficients,” Statistics: A Journal of Theoretical and Applied Statistics, vol. 8, no. 1, pp. 41–53, 1977.
[33] Sommersperger M, Martin-Gomez A, Mach K, Gehlbach PL, Nasseri MA, Iordachita I, and Navab N, “Surgical scene generation and adversarial networks for physics-based iOCT synthesis,” Biomed. Opt. Express, vol. 13, no. 4, pp. 2414–2430, Apr 2022. Available: http://opg.optica.org/boe/abstract.cfm?URI=boe-13-4-2414
[34] Dehghani S, Sommersperger M, Yang J, Salehi M, Busam B, Huang K, Gehlbach P, Iordachita I, Navab N, and Nasseri MA, “ColibriDoc: An eye-in-hand autonomous trocar docking system,” in 2022 International Conference on Robotics and Automation (ICRA). IEEE, 2022, pp. 7717–7723.
[35] Turani Z, Fatemizadeh E, Xu Q, Daveluy S, Mehregan D, and Avanaki MRN, “Refractive index correction in optical coherence tomography images of multilayer tissues,” Journal of Biomedical Optics, vol. 23, no. 7, p. 070501, 2018.
[36] Borkovkina S, Camino A, Janpongsri W, Sarunic MV, and Jian Y, “Real-time retinal layer segmentation of OCT volumes with GPU accelerated inferencing using a compressed, low-latency neural network,” Biomed. Opt. Express, vol. 11, no. 7, pp. 3968–3984, Jul 2020.
