Australian Safety
and Efficacy
Register of New
Interventional
Procedures - Surgical
Systematic Review
July 2007
ISBN 0909844 83 6
Published July 2007
Table of Contents
Executive Summary
The ASERNIP-S Classification System
The ASERNIP-S Review Group
1. Introduction
   Objective
   Context
      Surgical training
      Surgical simulation
      Types of surgical simulation
         Synthetic (inanimate) models and box trainers
         Live animal models
         Cadaveric models
         Ex vivo animal tissue models
         Virtual reality (computer-based) models
         Hybrid simulators
      Skills transfer to the operating theatre
   Summary
2. Methodology
   Literature search protocol
      Inclusion criteria
   Literature search strategies
      Databases searched and search terms used
         Search terms
      Literature database & exclusions
   Data extraction and assessment of study quality
   Data analysis
      Ongoing and Unpublished Trials
3. Studies included in the review
   Literature Search Results
   Designation of Levels of Evidence and Critical Appraisal
   Description of studies
4. Results
   Performance on simulators
   Skills transfer outcomes
   Overall performance of patient-based procedure
   Performance time
   Success rate of patient-based assessment
- ASERNIP-S REVIEW OF SURGICAL SIMULATION FOR TRAINING: TRANSFER TO THE OR. JULY 2007 -
List of Tables
Table 1 Databases searched
Table 2 Summary of included studies
Table 3 Description of training in included studies
Table 4 Description of statistical analyses used in included studies
Table 5 Patient-based assessments: overall performance results
Table 6 Patient-based assessments: performance time results
Table 7 Patient-based assessments: success rate results
Table 8 Patient-based assessments: ability to complete the assessment operation
Table 9 Patient-based assessments: supervising surgeon takeover and need for assistance
Table 10 Patient-based assessments: the use of assistants
Table 11 Patient-based assessments: errors made during assessment operations
Table 12 Patient-based assessments: error scores – combined outcomes
Table 13 Patient-based assessments: flow of operation and economy of movement
Table 14 Patient-based assessments: time and motion outcomes
Table 15 Patient-based assessments: economy of movement – combined outcomes
Table 16 Patient-based assessments: knowledge of procedure
Table 17 Patient-based assessments: knowledge of instruments*
Table 18 Patient-based assessments: instrument handling*
Table 19 Patient-based assessments: respect for tissue
Table 20 Patient-based assessments: ability to identify landmarks
Table 21 Patient-based assessments: ability to insert scope safely
Table 22 Patient-based assessments: visualisation of the mucosa on withdrawal
Table 23 Patient-based assessments: patient discomfort
Table 24 Patient-based assessments: surgical confidence
List of Figures
Figure 1 Process for selection of studies retrieved from the literature database
Executive Summary
Objective
To assess whether skills acquired via simulation-based training transfer to the
operative setting.
Methods
Search strategy – Studies were identified by searching MEDLINE, EMBASE,
CINAHL, The Cochrane Library and Current Contents from inception to December
2006. The Clinical Trials Database (US), NHS Centre for Research and
Dissemination Databases (UK), National Research Register (UK), Meta Register of
Controlled Trials, and the Australian Clinical Trials Registry were also searched in
December 2006.
Study selection – Only studies that reported the use of simulation for surgical skills
training and the transferability of these skills to the patient care setting were
included for review. The included articles must have contained training and/or
measures of performance in the simulated setting and measures of performance in
the operative setting. Measures of surgical task performance included accuracy of
skills, time to complete technique, efficiency of movement, error rates and
achievement of performance to criterion levels.
Data collection and analysis – Data from the included studies were extracted by an
ASERNIP-S researcher using standardised data extraction tables developed a priori
and checked by a second researcher. Statistical pooling was not appropriate due to
the heterogeneity of the included studies.
Results
A total of 12 randomised controlled trials and two non-randomised comparative
studies were included in this review. The review looked at simulation as a concept,
and as such included studies with various training techniques in the surgical setting.
There were differences in indications, simulation-based training methods, training
times, and the amount of guidance and feedback provided to trainees. In most cases,
simulation-based training was an add-on to normal surgical training programs. Only
one study compared simulation-based training with current training methods
(patient-based training).
For laparoscopic cholecystectomy, participants who received simulation-based
training prior to conducting patient-based assessment generally performed better
than their counterparts who did not have this training. This improvement was not
universal for all the parameters measured, but the untrained group never
outperformed the trained group. Trained groups generally made fewer errors, and
had fewer instances of supervising surgeon takeover than participants who did not
have the training.
For colonoscopy/sigmoidoscopy, simulation-based training prior to patient-based
assessment generally appeared to provide participants with some advantage over their
untrained controls, particularly during the initial stages of learning.
For catheter-based intervention for occlusive vascular disease and TEP hernia repair,
simulation-based training appeared to show benefits for participants when later
conducting patient-based assessment.
There were no differences in performance between endoscopic sinus surgery
simulator-trained residents compared with controls when performing endoscopic
sinus surgery.
The study that compared patient-based training with simulation-based training for
colonoscopy/sigmoidoscopy found that participants who received training in the
assessment procedure exhibited better performance than those who had trained
exclusively on a simulator without any mentoring or supervision.
Classifications
Evidence rating
The evidence base in this review is rated as average. The studies included were of
variable quality, and did not have comparable simulation-based methods for the same
indications, resulting in an inability to draw solid conclusions.
Important note
The information contained in this report is a distillation of the best available evidence
located at the time the searches were completed as stated in the protocol.
Evidence Rating
The evidence for ASERNIP-S systematic reviews is classified as Good, Average or
Poor, based on the quality and availability of this evidence. High quality evidence is
defined here as having a low risk of bias and no other significant flaws. While high
quality randomised controlled trials are regarded as the best kind of evidence for
comparing interventions, it may not be practical or ethical to undertake them for
some surgical procedures, or the relevant randomised controlled trials may not yet
have been carried out. This means that it may not be possible for the evidence on
some procedures to be classified as good.
Good
Most of the evidence is from a high quality systematic review of all relevant
randomised trials or from at least one high quality randomised controlled trial of
sufficient power. The component studies should show consistent results, the
differences between the interventions being compared should be large enough to be
important, and the results should be precise with minimal uncertainty.
Average
Most of the evidence is from high quality quasi-randomised controlled trials, or from
non-randomised comparative studies without significant flaws, such as large losses to
follow-up and obvious baseline differences between the comparison groups. There is
a greater risk of bias, confounding and chance relationships compared to high-quality
randomised controlled trials, but there is still a moderate probability that the
relationships are causal.
An inconclusive systematic review based on small randomised controlled trials that
lack the power to detect a difference between interventions and randomised
controlled trials of moderate or uncertain quality may attract a rating of average.
Poor
Most of the evidence is from case series, or studies of the above designs with
significant flaws or a high risk of bias. A poor rating may also be given if there is
insufficient evidence.
Research Recommendations
It may be recommended that an audit or a controlled (ideally randomised) clinical
trial be undertaken in order to strengthen the evidence base.
Clinical Recommendations
Additional recommendations for use of the training techniques in clinical practice
may be provided to ensure appropriate use by sufficiently qualified/experienced
centres and/or individuals.
ASERNIP-S Director
Professor Guy Maddern
ASERNIP-S
Royal Australasian College of Surgeons
PO Box 553
Stepney SA 5069
Protocol Surgeon
Professor John Windsor
Department of Surgery
University of Auckland
PO Box 92019
Auckland New Zealand
Advisory Surgeon
Mr Patrick Cregan
PO Box 1124
Penrith NSW 2751
Advisory Surgeon
Mr Peter Hewett
142 Ward Street
North Adelaide SA 5006
Advisory Surgeon
Mr Peter Cosman
PO Box 196
Brighton Le Sands NSW 2216
ASERNIP-S Researcher
Lana Sturm
ASERNIP-S
Royal Australasian College of Surgeons
PO Box 553
Stepney SA 5069
Conflicts of Interest
• Professor John Windsor
Director and Educational Advisor, Go Virtual Medical Ltd.
Founding Director, Advanced Clinical Skills Centre, University of Auckland.
• Mr Patrick Cregan
Medical Director, Medic Vision Pty. Ltd.
1. Introduction
Objective
The objective of this systematic review is to assess whether skills acquired via
simulation-based training transfer to the surgical setting.
Context
Surgical training consists of developing cognitive, clinical and technical skills, with
the latter traditionally acquired through mentoring in the operating theatre. Current
surgical skills training programs are under pressure due to increased service
requirements, new and emerging techniques, a greater focus on surgeons’
competence, and concerns regarding patient safety. In order to face these challenges
and to improve surgical education, many surgical colleges are re-engineering their
training curricula with an increased interest in the use of rapidly developing and
emerging educational strategies. Surgical simulation offers the opportunity for
surgical trainees to practise surgical skills (mental practice and reinforcement) before
entering the operating room and allows for detailed feedback (proximal and
technical), and objective assessment of performance. To establish whether there is
benefit in using simulated environments to teach surgical skills, it must be shown that
the skills acquired through simulation-based training can positively transfer to clinical
practice.
Surgical training
Surgical skills training to date has largely been conducted via the mentored or
‘apprenticeship’ approach. It involves surgical trainees learning surgical skills in the
operating theatre by first observing, then assisting mentors, before being permitted
to operate under supervision. As the trainee’s experience grows, he or she is able to
take on a more active role, until eventually he or she is able to work unsupervised. In
this environment, unskilled trainees learn to perform skills on actual patients.
Competency is subjectively assessed by supervising surgeons.
The movement toward increased specialisation in academic teaching hospitals has
resulted in more complex and challenging surgical problems (Grober et al. 2004),
greater volumes of cases, increased service responsibilities, and a need for surgeons
to work at maximum efficiency with minimal interruption. The mentoring approach
is also dependent on the flow of patients through a hospital and can result in trainees
having random ad hoc exposure to less common procedures.
The mentored approach is dependent on skilled surgeons having the time and
resources to train and supervise trainees. A survey in 2005 found that 91% of the
current Australian Fellowship are involved in training supervision, providing on
average, 10.2 hours of training supervision per week (Royal Australasian College of
Surgeons 2005). This survey also found that the current workforce is ageing and
within the next five years, one third of the current active Fellowship is expected to
retire from emergency call work and two thirds of this group are also intending to
retire from operative practice. This will reduce the number of Fellows available to
provide training, supervision and mentoring. In addition to this, legislated
restrictions on surgeons’ working hours have reduced the number of hours during
which experienced surgeons are available to observe and assist trainees. It has also
reduced the number of hours trainees can expect to be supervised (Woodrow et al.
2006). In the Australian system, as in many other countries, working shifts of 36
hours or more are now largely outlawed and have been replaced by maximum
working periods of 13 to 14 hours with mandated rest periods (Maddern 2003). These
changes have resulted in more consultant-led supervision of care, an
increase in the number of surgical trainees per surgical unit, and competition for the
surgical cases available from which to gain surgical experience during the course of
training (Maddern 2003).
The move away from open surgery to less invasive techniques has meant that trainee
surgeons now have less opportunity to learn an open procedure prior to learning the
minimally invasive technique. For example, cholecystectomy, anti-reflux and
bariatric surgery are now being done using a laparoscope (Aggarwal et al. 2006)
instead of the open surgical approaches used previously. Minimally invasive
procedures using a laparoscope as well as other endoscopic techniques differ from
open surgery in terms of direct tactile contact and visual feedback, and in an increased
need for hand-eye coordination (Gallagher et al. 1998). Although the skills needed
for these surgical modalities, like those needed for open surgery, can be taught in the
operating theatre, simulation allows trainees to practise these skills before entering
the operating theatre environment.
Surgical simulation
Simulation is an instructional strategy used to teach technical skills, procedures, and
operations, by presenting learners with situations that resemble reality (Krummel
1998). Surgical simulation involves the use of objects, devices, electronic and/or
mechanical surgical simulators, cadavers, animals and animal organs to reproduce or
represent, under test conditions, situations that are likely to occur in actual
performance (Krummel 1998).
Surgical competence encompasses a combination of requisite knowledge, technical
skills, cognitive skills, decision-making ability, communication skills, leadership skills,
and professional ethics (Moorthy et al. 2003). Of these, technical skills make up the
majority of the objective data on surgical training and assessment, although cognitive
skills are likely to play a larger part (Satava et al. 2003). Simulated training allows
trainees to practise the cognitive and technical skills of a procedure under various
conditions without the pressures of the operating room, and allows for the teaching
of rare or unusual cases. The trainee's actions can be analysed, errors identified and
corrected, and performance scored under standardised, though not real, conditions.
Simulation-based skill training allows an individual to acquire skills to the point
where many psychomotor skills and spatial judgments have become automated
(Gallagher et al. 2005). This allows the trainee to focus more on learning the steps of
the operation and how to handle intraoperative complications, than on the
refinement of technical skills (Gallagher et al. 2005). Simulation-based training using
flight simulators has been mandatory in the United States aviation industry since
1955 (Kaufmann 2001). All commercial and military pilots must train and be
certified on a simulator before actual flight (Cohen et al. 2006a). Anaesthesiology has
applied principles similar to those used in pilot training and now has over 30 years of
history in simulation-based training (Reznek et al. 2002).
Although some trainees using these types of simulators have found the training
unrealistic and boring due to their low fidelity (the extent to which the model
imitates reality) (Hyltander 2003), studies show that they are valid and reliable
instruments for training specific surgical skills (Grantcharov et al. 2004; Seymour et al.
2002). Their relatively low acquisition cost, high availability and easy portability,
make this type of simulator the most widely available and most validated surgical
training system (Roberts et al. 2006).
Cadaveric models
Human cadavers allow surgical trainees to develop a detailed
understanding of human anatomy and are a valuable tool for the teaching of whole-
body anatomy and the interaction between different body parts when affected by
disease processes (Parker 2002). In addition to anatomical dissection courses,
surgical trainees have used human cadavers to practise many procedures, including
laparoscopy (Levine et al. 2006), endoscopy (Rivron and Maran 1991), and saphenous
vein cutdown (Wong and Stewart 2004). Preserved human cadavers (including
cadavers preserved by plastination) do not retain the same tissue elasticity as living
tissue or recently deceased cadavers, thereby losing some fidelity as a surgical instructional
model (Herbella and Del Grande 2001). In addition to this, considerable expense is
involved in the provision of cadaveric specimens, which are single use and not
portable (Wong and Stewart 2004). The limited supply of cadavers in Australia, the
decline of anatomical dissection courses in Australian medical schools (Parker 2002),
concerns regarding disease transmission from human tissues and fluids, and ethical
and cultural issues limit this mode of training.
train some surgical skills, particularly to junior level trainees (Anastakis et al. 1999;
Grober et al. 2004). Studies comparing VR simulators have demonstrated that they
are useful in training students in such procedures as upper endoscopy (Cisler and
Martin 2006), haemostasis (Maiss et al. 2006), flexible sigmoidoscopy (Datta et al.
2002; Tuggy 1998) and colonoscopy (Cohen et al. 2006b; Sedlack and Kolars 2003).
Hybrid simulators
Hybrid simulators are a combination of physical simulators and VR simulators. They
consist of a physical object (frequently a mannequin) being linked to a sophisticated
computer program that provides visual images and/or feedback (Satava 2001). The
computer program can simulate patient responses, both physically on the mannequin
and physiologically in response to a procedure (Satava 2001). These simulators go
beyond basic skills training and are designed to recreate specific anatomy and
physiology, and allow trainees to practise all the skills necessary to perform a
particular operation (Roberts et al. 2006). They allow the production of realistic
clinical environments where teams work within simulated scenarios to practise crisis
management, team response, communication, and other complex tasks. These
simulators will not replace basic skills training, but may help to bridge the gap to the
operating room (Roberts et al. 2006). Hybrid simulators are expensive and are limited
in use due to high demands of time and effort needed to prepare and run them
(Sewell et al. 2004). In anaesthesia and critical care medicine, there is extensive
research supporting the use of hybrid simulator models for training (Cooper and
Taqueti 2004).
Srivastava et al. 2004). In live animal models, studies have demonstrated improved
operative performance in anaesthetised animals after simulation-based training
(Hyltander et al. 2002; Korndorffer, Jr. et al. 2005; Van Sickle et al. 2006).
To date, less attention has been focused on correlating simulator performance with
operative performance on live human patients. If the positive relationship between
technical skills measured in a simulated environment and the technical skills
measured in the operating room is robust, simulation-based training could not only
predict a trainee’s future performance in the operative setting, but could also go
some way towards justifying the cost of these training devices. Without evidence that
transfer is occurring, simulation-based training will struggle to find relevance and
acceptance in surgical skills training programs.
Summary
Increasing demands on current surgical training programs have resulted in other
approaches to training being investigated and employed. The ability to learn specific
technical surgical skills requires deliberate practice, and evidence suggests that many
of the skills required for surgery can be acquired away from the operating theatre
(Hamdorf and Hall 2000). Surgical simulation-based training is attractive in the field
of surgical training because it does not require the use of patients for skills practice,
and is less reliant on supervising surgeons’ time. Simulation-based training also
ensures that trainees have opportunities to practise under various conditions,
facilitates the teaching of rare or unusual cases and provides opportunity for
objective standard assessment. In order for surgical simulation-based training to gain
significance in training environments, it is essential to demonstrate that these skills
can be transferred to real patients.
2. Methodology
Measures of surgical task performance in the simulated setting and the clinical
setting, which could include, but not be limited to:
• accuracy of skill/technique
• time to complete skill/technique
• efficiency of movement
• error rates
• achievement of performance to criterion level.
Language restriction
Searches were conducted without language restriction. Foreign language papers were
subsequently excluded unless the findings provided additional information over that
reported in well designed studies published in the English language.
Search terms
In the Cochrane Library the search term used was:
surgical simulation
For MEDLINE, EMBASE, CINAHL and Current Contents Connect the following
search terms were used:
surg* AND simulat* AND (skill* OR train*)
The NHS CRD databases were searched using the above terms. The National Research
Register, Clinicaltrials.gov, Meta-Register and the Australian Clinical Trials Registry were
also searched using the above search terms for RCTs in progress.
Note: * is a truncation character that retrieves all possible suffix variations of the root
word e.g. surg* retrieves surgery, surgical, surgeon, etc. In Cochrane the truncation
character is *; in Current Contents, EMBASE, CINAHL and MEDLINE (Ovid) it is $.
# is a wildcard symbol that substitutes for one required character in Current Contents,
EMBASE, CINAHL and MEDLINE (Ovid).
the review discussion. The bibliographies of all publications retrieved were manually
searched for relevant references that may have been missed in the database search
(pearling).
Data analysis
The included studies were categorised initially by the comparator training
method (i.e. simulation-based training vs. no training, and simulation-based training vs.
patient-based training). Studies were then categorised by intervention, and then by the
level of evidence. It was judged that no data were suitable for statistical pooling due to
the variability in simulation devices and training methods. Where data could not be
grouped, the main outcomes have been reported narratively.
Figure 1 Process for selection of studies retrieved from the literature database
[Flow diagram; key figures: duplicates removed, n = 2372; references excluded on title or abstract, n = 1999; general background information, n = 146]
Some authors and/or centres have published more than one report on the transfer of
skills acquired via surgical simulation to the patient-based setting. As a result, some
studies published by the same group may have common pools of patients. These studies
have been identified.
Description of studies
A total of 12 randomised controlled trials and two non-randomised comparative studies
were included in this review (Table 2). There were a total of 287 participants in the
included studies. Table 3 summarises the training program used with the simulator or
simulation in each study, and Table 4 describes the statistical analyses used within each
study. The simulator device specifications and a description of patient-based training are
given in Appendix E. A summary of the critical appraisal is given in Appendix F.
Only one out of the five studies reported the study period (Sedlack et al. 2004).
It was stated that face and construct validity of the AccuTouch® endoscopy simulator
used by Ahlberg et al. (2005) was demonstrated by Datta et al. (2002) but it was not stated
whether the methods used to assess performance had been validated. Sedlack and Kolars
(2004) stated that the AccuTouch® colonoscopy device used within the simulation
curriculum had been previously validated by Sedlack and Kolars (2003) and Sedlack and
Kolars (2002) but it was not reported whether the methods used for assessment had been
validated. It was also not stated by Sedlack et al. (2004) whether the assessment methods
had been validated. Cohen et al. (2006b) reported that validation data for the
AccuTouch® simulator was presented in Sedlack and Kolars (2004) and that for patient-
based assessment, an evaluation form previously used by Cass et al. (1996) was used (this
study was published as an abstract and does not elucidate the assessment tool). Tuggy
(1998) did not report whether the simulator or assessment methods had been previously
validated.
The reported inclusion criteria for the studies included the training level of participants
(Ahlberg et al. 2005; Sedlack and Kolars 2004; Sedlack et al. 2004), no prior experience in
flexible sigmoidoscopy (Tuggy 1998) and the training directors’ adherence to the study
protocol (Cohen et al. 2006b). Exclusion criteria were listed in one study to be prior
performance of more than 10 colonoscopies and an inability to adhere to the study
protocol (Cohen et al. 2006b). None of the other studies listed exclusion criteria
(Ahlberg et al. 2005; Sedlack and Kolars 2004; Sedlack et al. 2004; Tuggy 1998).
Baseline characteristics of participants were reported as gender and previous experience
in colonoscopy for Ahlberg et al. (2005) and Sedlack and Kolars (2004), and pre-
colonoscopy experience for Cohen et al. (2006b). Sedlack et al. (2004) stated that no
participant had prior endoscopy training. No baseline characteristics were reported for
the participants of Tuggy (1998).
Cohen et al. (2006b) reported that there were no significant differences between the
groups prior to training. Ahlberg et al. (2005) reported that participants came from eight
different institutions in Sweden and were at different levels of experience (from the 2nd to
5th postgraduate year), but had the common feature that they had no previous experience
in colonoscopy and were designated to start colonoscopy training. Sedlack and Kolars
(2004) and Sedlack et al. (2004) did not state whether there were any significant
differences between the two groups prior to training. Tuggy (1998) stated that matched
pairs of residents performed examinations sequentially on the same patient to reduce the
risk of encountering different colon structures which could affect their performance.
Catheter-based intervention for occlusive vascular disease
training on the simulator but it was difficult to ascertain if there were baseline differences
between the two groups.
| Study | Design | Level | Simulation-based training | Control | Assessment procedure | N (trained vs control) |
|---|---|---|---|---|---|---|
| Schijven et al. 2005 (Netherlands) | comparative study | III-2 | 4 day laparoscopic cholecystectomy training course | no training course | laparoscopic cholecystectomy | 12 vs 12 |
| Chaer et al. 2006 (USA) | RCT | II | Procedicus VIST™ simulator | no simulator training | catheter-based intervention for occlusive disease | 10 vs 10 |
| Hamilton et al. 2001 (USA) | RCT | II | TEP hernia repair rubber model simulator, instructional video and interactive CD ROM | no simulation training | TEP hernia repair | 10 vs 11 |
| Edmond 2002 (USA) | comparative study | III-2 | ESS simulator | no simulator training | endoscopic sinus surgery | 2 vs 2 |
| Study | Simulator/simulation | Training program | n trained/total |
|---|---|---|---|
| Seymour et al. 2002, RCT Level II | MIST-VR simulator | Training until criterion levels reached (3 – 8 hours). | 8/16 |
| Scott et al. 2000, RCT Level II | SCMIS GEM trainer | Separate 30 minute sessions for 10 days. Tasks practised an average of 138 times (range 94 – 171 times). | 13/27 |
| Scott et al. 1999, RCT Level II | Video trainer | Separate 30 minute sessions for 10 days. | 9/22 |
| Schijven et al. 2005, comparative study Level III-2 | Laparoscopic cholecystectomy training course | 4 day course including videos, oral presentations, table sessions, instrument displays and repetitive sessions of VR software simulations using the Xitact LS500 laparoscopy simulator platform. Both psychomotor VR simulation (MIST-VR) and procedural laparoscopic cholecystectomy simulation, including the clip-and-cut, navigation and dissection modules (Xitact), were featured. | 12/24 |
| Ahlberg et al. 2005, RCT Level II | AccuTouch® endoscopy simulator | Median total training time 20 hours (range 15 – 25) over at least 4 days. | 6/12 |
| Sedlack et al. 2004, RCT Level II | AccuTouch® flexible sigmoidoscopy simulator | Independent, supervised 3 hour simulator-based training curriculum involving a brief multimedia tutorial followed by 8 – 10 simulated scenarios. | 19/38 |
| Sedlack and Kolars 2004, RCT Level II | AccuTouch® colonoscopy simulator | 6 hours of simulator training over a 2 day period. Performance of 20 – 25 simulated colonoscopies. | 4/8 |
| Cohen et al. 2006b, RCT Level II | Simbionix GI Mentor™ | 10 hours over an 8 week period. | 22/49 |
| Tuggy 1998, RCT Level II | Gastro-Sim® flexible sigmoidoscopy simulator | Initially 5 hours, then an additional 5 hours. | 5/10 |
| Chaer et al. 2006, RCT Level II | Procedicus VIST™ | A single session of not more than 2 hours duration (mean 90 ± 21 minutes). | 10/20 |
| Hamilton et al. 2001, RCT Level II | TEP hernia repair rubber model simulator, instructional video and interactive CD ROM | 10 separate 30 minute sessions over a 2 week period. Participants asked to alternate daily between the simulator and CD ROM. | 10/21 |
Grantcharov et al. 2004, RCT Level II (MIST-VR simulator n = 10; no simulator training n = 10): Normal distribution of the data was confirmed using Q-Q plots. The primary outcome measure was the difference in performance scores during laparoscopic cholecystectomy in the operating theatre between the first and second procedures. An independent samples t-test was used to examine the difference in improvements demonstrated by the two groups. Results were considered statistically significant at P ≤ 0.05.

Seymour et al. 2002, RCT Level II (MIST-VR simulator n = 8; no simulator training n = 8): Statistical comparisons were performed by chi-square analysis, analysis of variance (ANOVA), and the Mann-Whitney test. Results were significant at P < 0.05.

Scott et al. 2000, RCT Level II, n = 27 (SCMIS GEM trainer n = 13; no simulator training n = 14): A two-tailed Wilcoxon rank-sum test was used to determine whether there were differences between the control and trained groups at baseline testing. To determine if training was beneficial, within-person changes in performance were compared for the control and trained groups. The amount of improvement varied with baseline performance, so a linear covariance adjustment was used to compensate for differences in baseline scores. The covariance-adjusted improvements for residents in the control and trained groups were compared using a Wilcoxon rank-sum test. To test the hypothesis that the trained group achieved greater adjusted improvement than the control group, a one-tailed test was used. Questionnaire data regarding comfort with laparoscopic surgery were analysed using Fisher’s exact test. Tests were considered significant at P ≤ 0.05.

Scott et al. 1999, RCT Level II, n = 22 (video trainer n = 9; no simulator training n = 13): An exact Wilcoxon test was used to determine differences between the control and trained groups after simulator training, as well as after patient-based assessment. P was considered significant at P < 0.05.

Schijven et al. 2005, comparative study Level III-2, n = 24 (laparoscopic cholecystectomy training course n = 12; no training course n = 12): Normal distribution of the primary outcome parameter (judgment) and the secondary outcome parameters (fluency and carefulness) was confirmed using Q-Q plots. A Mann-Whitney U test was used to determine differences in performance status between the two groups. P was considered significant at P ≤ 0.05.

Ahlberg et al. 2005, RCT Level II, n = 12 (AccuTouch® endoscopy simulator n = 6; no simulator training n = 6): The number of successful colonoscopies for the two groups was evaluated using binary logistic regression, controlling for patient gender, order of operation and student background. Confidence intervals were calculated using profile likelihood estimation. The nine colon segments were categorised into three groups (1 – 4, 5 – 8 and 9), and an ordinal logistic regression analysis was carried out, controlled for the same factors as above. To evaluate the training effect on the mean performance time per section, a stratified Mann-Whitney U test was performed; the time was ranked within each segment, and gender, gastroscopic experience and procedure order were used as strata. Multiple regression analysis, controlling for the set of confounders, evaluated the association between training and the total procedure time for trainees who completed the colonoscopy. The patient’s pain score was categorised into three groups, and ordinal logistic regression, controlling for different confounders, was used to assess the effect of training. A P < 0.05 was used as the criterion for inclusion in the statistical package used for the regression models.
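The binary logistic regression described for Ahlberg et al. (2005) can be sketched as follows. The data, effect sizes and variable names are entirely invented for illustration; the fit is done by maximising the logistic log-likelihood directly rather than with the (unstated) package used in the study:

```python
# Illustrative sketch (synthetic data) of a binary logistic regression of
# colonoscopy success on simulator training, controlling for covariates,
# in the spirit of the Ahlberg et al. (2005) analysis described above.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 400

trained = rng.integers(0, 2, n)          # 1 = simulator-trained participant
patient_female = rng.integers(0, 2, n)   # patient gender covariate
order = rng.uniform(0, 1, n)             # order of operation (rescaled)

# Assumed true model: training shifts the log-odds of success by +1.5.
lin = -0.5 + 1.5 * trained + 0.2 * patient_female + 0.3 * order
success = rng.binomial(1, 1 / (1 + np.exp(-lin)))

X = np.column_stack([np.ones(n), trained, patient_female, order])

def neg_log_likelihood(beta):
    z = X @ beta
    # Negative log-likelihood of the logistic model, numerically stable form:
    # sum over cases of log(1 + exp(z)) - y * z.
    return np.sum(np.logaddexp(0, z) - success * z)

fit = minimize(neg_log_likelihood, np.zeros(4), method="BFGS")
print(fit.x)  # fitted coefficients; the one on `trained` should be roughly +1.5
```

A positive fitted coefficient on `trained` corresponds to training increasing the odds of a successful colonoscopy, which is the quantity the study's regression was controlling covariates to isolate.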
Sedlack et al. 2004, RCT Level II, n = 38 (AccuTouch® flexible sigmoidoscopy simulator n = 19; no simulator training n = 19): Median scores for each parameter graded by staff, residents and patients were analysed using the Wilcoxon rank-sum test. Median differences between paired staff and resident evaluation scores were compared using a Wilcoxon signed rank test.

Sedlack and Kolars 2004, RCT Level II, n = 8 (AccuTouch® colonoscopy simulator n = 4; no simulator training n = 4): Staff evaluations and patient surveys were compared between the two groups of Fellows. Comparisons were made of all colonoscopies performed, as well as by examining procedures in chronological groups of 15 based on the order of performance (ie colonoscopies 1 – 15, 16 – 30, 31 – 45 etc). Rates of independent procedure completion were analysed using a t-test. All other results were analysed using the Wilcoxon rank-sum test. P was considered significant at P < 0.05.

Cohen et al. 2006b, RCT Level II, n = 49 (Simbionix GI Mentor™ n = 22; no simulator training n = 23): A 2-sample t-test was used to compare the difference in objective competence, subjective competence and observed patient discomfort between the simulator and no-simulator groups at every group of 20 cases (each group of 20 cases called a block). All of the blocks' data were then combined, and a mixed-effects model was applied to compare the difference between groups at every block simultaneously: in the mixed-effects model, a random effect was used to take into consideration the correlations between observations from the same Fellow over time, while fixed effects included each block as a categorical variable, a group indicator (simulator and no-simulator), and the interaction between them. A log-rank test was performed to compare the two groups. A Bonferroni correction was made for multiple comparisons, and a comparison was considered to be statistically significant if the P value was below 0.005.
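The Bonferroni-corrected threshold reported by Cohen et al. (2006b) can be reproduced arithmetically. The number of block-wise comparisons is an assumption here, not stated in the review; ten comparisons would yield exactly the 0.005 threshold:

```python
# Bonferroni correction: divide the family-wise alpha by the number of
# comparisons. Ten comparisons is an assumption (not stated in the review),
# chosen because 0.05 / 10 reproduces the reported 0.005 threshold.
alpha = 0.05
n_comparisons = 10

corrected_alpha = alpha / n_comparisons
print(round(corrected_alpha, 6))  # 0.005

def significant(p_value, alpha=0.05, n_comparisons=1):
    """True if p_value survives a Bonferroni correction for n_comparisons."""
    return p_value < alpha / n_comparisons

print(significant(0.003, n_comparisons=10))  # True  (0.003 < 0.005)
print(significant(0.02, n_comparisons=10))   # False (0.02 >= 0.005)
```

The correction controls the family-wise error rate at the cost of power, which is why individual block comparisons must clear a much stricter threshold than the usual 0.05.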
Tuggy 1998, RCT Level II, n = 10 (Gastro-Sim® flexible sigmoidoscopy simulator n = 5; no simulator training n = 5): The paired t-test was used to compare the differences between the mean scores of the two groups at the designated points in the study protocol. The Mann-Whitney U test was used to analyse the qualitative assessment of colon viewing. It was not stated at which value P was considered significant.

Chaer et al. 2006, RCT Level II, n = 20 (Procedicus VIST™ n = 10; no simulator training n = 10): The Wilcoxon 2-sample test and Fisher’s exact test were used to evaluate statistically significant changes in performance pre- and post-training. Mean differences were considered significant at P < 0.05.

Hamilton et al. 2001, RCT Level II, n = 21 (TEP hernia repair rubber model simulator, instructional video and interactive CD ROM n = 10; no simulation training n = 11): Within-group comparisons of individual and composite global assessment scores were performed using Wilcoxon signed rank tests. Between-group comparisons were performed using the Wilcoxon rank-sum test to accomplish an analysis of covariance (ANCOVA) with pre-test scores serving as the covariate. Questionnaire data were analysed using Fisher’s exact test. Statistical significance was set at a threshold of P < 0.05.
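The baseline covariance adjustment described for Scott et al. (2000) can be sketched as follows. The data are synthetic and the exact regression used in the study is not specified, so an ordinary-least-squares residual adjustment is assumed here:

```python
# Sketch (synthetic data) of a linear covariance adjustment followed by a
# one-tailed Wilcoxon rank-sum test, as described for Scott et al. (2000):
# improvement is regressed on baseline score, and the residuals
# (baseline-adjusted improvements) are compared between groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

base_t = rng.normal(50, 10, 13)   # baseline scores, trained group
base_c = rng.normal(50, 10, 14)   # baseline scores, control group
# Invented effect: trained residents improve more at any given baseline.
imp_t = 20 - 0.2 * base_t + rng.normal(0, 3, 13)
imp_c = 10 - 0.2 * base_c + rng.normal(0, 3, 14)

baseline = np.concatenate([base_t, base_c])
improvement = np.concatenate([imp_t, imp_c])

# Linear covariance adjustment: take residuals of improvement on baseline.
fit = stats.linregress(baseline, improvement)
adjusted = improvement - (fit.intercept + fit.slope * baseline)

# One-tailed rank-sum test of whether the trained group achieved greater
# adjusted improvement than the control group.
result = stats.ranksums(adjusted[:13], adjusted[13:], alternative="greater")
print(result.pvalue)
```

Adjusting for baseline in this way removes the component of improvement that is predictable from starting skill, so the between-group comparison reflects the training effect rather than regression to the mean.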
4. Results
The results have been presented in two main sections. Initially, the results of
participants training on the simulators are presented (ie participant simulator
performance results), after which results for participant’s performance on patients in
the operating theatre are presented (ie the evidence pertaining to transfer of skills).
The different parameters measured during operating room performance have been
grouped together as closely as possible into tables, but it should be noted that there
were many differences in assessment tools and techniques, which have been
described in footnotes.
Studies were categorised initially by the non-simulation-based training method (ie simulation-based training vs. no simulation-based training, and simulation-based training vs. patient-based training), then by intervention, and then by level of evidence.
The terms trainees, Fellows, residents and participants all refer to people included in
the studies, who were at the beginner-level of the surgical interventions studied.
Performance on simulators
There was considerable variation in the reporting of performance data (metrics)
between studies.
Colonoscopy/sigmoidoscopy
on the simulator compared with the last 3 cases on the simulator (data not shown).
None of the parameters significantly improved over time.
Performance time
Performance time was reported as the time taken, in minutes or seconds, to conduct
the patient-based assessment procedures.
Colonoscopy/sigmoidoscopy
Laparoscopic cholecystectomy
0.05). There was no significant difference in composite scores between the two
groups after the training period. When conducting an internal group comparison,
both the trained and untrained groups demonstrated significant improvements in overall performance and composite scores after the training period (p < 0.05), but the post-training scores for the control group were significantly lower than those of the trained group (p value not reported).
Table 9 Patient-based assessments: supervising surgeon takeover and need for assistance
Use of assistants
Simulation-based training vs. no simulation-based training
Three studies reported on the appropriate use of assistants during the assessment
operation after simulation-based training or no simulation-based training (Scott et al.
2000; Scott et al. 1999; Hamilton et al. 2001) (Table 10).
Laparoscopic cholecystectomy
Performance errors
Performance errors were described as movements or events outside the normal
procedure.
Laparoscopic cholecystectomy
differences for liver injury, incorrect plane of dissection, tearing tissue, instrument
out of view, and non-contact cautery injury.
Colonoscopy/sigmoidoscopy
Flow of operation
Flow of operation referred to the ability of a participant to move continuously and
fluently through the procedure, confident of each step.
¶ The training course consisted of a variety of teaching elements, including videos, oral presentations, table sessions, instrument displays and
repetitive sessions of VR software simulations using the Xitact LS500 laparoscopy simulator platform. Both psychomotor VR simulation (MIST-VR)
and procedural laparoscopic cholecystectomy simulation, including the clip-and-cut, navigation and dissection modules (Xitact) were featured.
** Measured on a scale of 0 (frequently stopped; seemed unaware of next move) to 4 (obviously planned course; effortless flow).
†† Including an instructional video and interactive CD ROM.
NS not significant
NR not reported
SD standard deviation
Laparoscopic cholecystectomy
NR not reported
NS not significant
SD standard deviation
Laparoscopic cholecystectomy
simulator training. There was a significant difference in time and motion for the
second intervention only (p = 0.01).
TEP hernia repair
Laparoscopic cholecystectomy
Simulation-based training vs. no simulation-based training
Grantcharov et al. 2004, Level II, N = 20*: economy of movement†, median rating (range)

| Group | Baseline | Final |
|---|---|---|
| MIST-VR (n = 8) | 5.8 (4.5 – 6) | 3.3 (2 – 6) |
| No training (n = 8) | 6 (6 – 8) | 6 (4.5 – 9) |
| P-value | NS | 0.003 |

* 4 lost to assessment: trained group n = 2, control group n = 2.
† Economy of movement was a combination of two parameters, unnecessary movements and confidence of movements (referred to as time and motion, and instrument handling, in the original global rating scale developed by Reznick et al. (1997)). Each was measured on a scale of 1 (clear economy of movement and maximum efficiency; fluent moves with instruments and no awkwardness) to 5 (many unnecessary moves; repeated tentative, awkward or inappropriate moves with instruments).
NS not significant
MIST-VR training compared with controls who had not had this training (p =
0.003).
Procedural knowledge
The parameter procedural knowledge was reported in studies that used the global
assessment form developed and validated by Reznick et al. (1997) (Scott et al. 2000;
Scott et al. 1999; Chaer et al. 2006; Hamilton et al. 2001) and focussed on a
participant’s knowledge and familiarity with the patient-based assessment procedure.
NS not significant
NR not reported
SD standard deviation
Knowledge of instruments
Knowledge of instruments was described as the ability of a participant to use the
correct instrument for the procedure, and was reported in three of the four studies
that used the global assessment form developed and validated by Reznick et al. (1997)
(Scott et al. 2000; Scott et al. 1999; Hamilton et al. 2001).
NS not significant
NR not reported
SD standard deviation
Instrument handling
Instrument handling was described as using the appropriate instrument skilfully and
correctly, and was again reported in three of the four studies that used the global
assessment form developed and validated by Reznick et al. (1997) (Scott et al. 2000;
Scott et al. 1999; Hamilton et al. 2001).
NS not significant
NR not reported
SD standard deviation
NA not applicable
NS not significant
NR not reported
SD standard deviation
Laparoscopic cholecystectomy
Colonoscopy/sigmoidoscopy
participants to insert the scope safely during patient-based colonoscopies. They also
found no significant differences between staff-evaluated scores and participant self-
assessed scores.
Staff productivity
The number of colonoscopies/sigmoidoscopies a staff member was able to perform
whilst trainees trained on the simulator was compared with the number of
procedures performed whilst in the presence of trainees. This was used to determine
whether training on a simulator could impact staff productivity.
Colonoscopy/sigmoidoscopy
Patient morbidity/mortality
Patient morbidity and mortality were regarded as complications or deaths as a result
of procedures being conducted on patients by trained or untrained participants
during each study period. There were no reported deaths in any of the studies.
Studies instead focussed on complications that occurred at the time of the
procedures.
Patient discomfort
Patient discomfort was described as either the pain felt by the patient undergoing the
procedure, or the pain felt by the patient as determined by the assessor.
Colonoscopy/sigmoidoscopy
Sedlack et al. (2004) reported pain scores for patients undergoing sigmoidoscopy by
AccuTouch® sigmoidoscopy simulator-trained participants or participants that had
no exposure to the simulator. Median patient-reported discomfort scores were
significantly lower for simulator-trained residents (p < 0.01). There were no statistical
differences between the trained and untrained groups and there were no significant
differences between supervisor-assessed and self-assessed scores.
Cohen et al. (2006b) reported no significant differences in proctor-assessed patient
discomfort between the Simbionix GI Mentor™-trained group or the untrained
controls during patient colonoscopies at any time during the study.
Tuggy (1998) stated that there were no significant differences between the Gastro-
Sim® flexible sigmoidoscopy simulator-trained residents compared with non-trained
controls in patient pain scores after five hours of simulator training (data not
reported).
Cohen et al. (2006b) administered a survey and found that respondents rated the
overall satisfaction with the Simbionix GI Mentor™ simulator training as moderately
useful to useful, with a mean score of 3.5 (range, 1 [no use] to 5 [very useful]).
Tuggy (1998) stated that the residents' survey responses indicated they strongly
agreed that the Gastro-Sim® flexible sigmoidoscopy simulator training was valuable
and would enhance the likelihood of their mastering the skill later in practice. No
statistical analysis was done on the survey data. Tuggy (1998) further reported that
participants showed a willingness to commit to the necessary hours of using the
simulator during the study and that all residents voluntarily trained on the simulator
on their own time despite normal work schedules. They also reported that many
trainees commented that some colons in the simulator were more challenging than
the live examinations.
Laparoscopic cholecystectomy
Surgical confidence
To determine the level of surgical confidence of participants included in the studies,
investigators either administered a survey to participants after the training period or
evaluated them during the patient-based assessment operation.
NA not applicable
NR not reported
NS not significant
SD standard deviation
Laparoscopic cholecystectomy
Training costs
The reporting of costs in relation to the purchase of surgical simulators, as well as
cost savings, if any, associated with their use, varied between studies.
5. Discussion
studies and were attributed to the relatively small numbers of surgical trainees
available to participate. This may be an unavoidable limitation in studies of
psychomotor skill training in surgery.
Two studies (Grantcharov et al. 2004; Seymour et al. 2002) modified a previously
validated global rating scale for the assessment of surgical performance, which may
have compromised the validity of the results. One of these studies (Grantcharov et al. 2004) modified the global rating scale substantially by combining performance characteristics.
The length of time participants were trained varied greatly (Table 3), from less than two hours (Chaer et al. 2006) to unlimited access to the simulator (Gerson and van Dam 2003). It can be argued that the short duration of
simulation-based training may have resulted in a positive transfer effect not being
evident, although simulator-trained groups did not always show superiority over
groups who did not have the training. The end-points of training were often ill-
defined and were not consistent between studies, making it difficult to comment on
the skill level at the end of the training.
Statistical comparisons between studies were made difficult because of other factors
that were not consistent between the studies. Variables in the operating room such
as differences in the severity of patient disease, the degree of independence granted
by clinicians and various staff assistants, the mentoring given to residents during the
training period, and the complexity of the assessment operations, differed between
studies. The different parameters measured during the assessment operations were
often ill-defined, making direct comparisons between studies difficult. In addition to
this, there may have been pre-existing faculty assessor bias about individual residents' abilities that may have influenced their evaluations and the results of the studies
(Hamilton et al. 2001). Inherent differences between participants used for these
studies will always exist, and include variations in hand-eye skills between trainees
(Ahlberg et al. 2005) and the ability of some trainees to learn and master techniques
faster than others (Chaer et al. 2006).
The adjustments made for any baseline differences were not uniform between
studies, making direct comparisons in changes in performance difficult. Some
studies did not perform baseline testing of participants, while some others used the
device used for training, or the performance of the assessment procedure, or both.
Using the simulator, or a patient for baseline testing in itself allows a participant to
gain familiarity with the procedure or device, and hence can lead to improvements in
clinical performance. This was evident in Hamilton et al. (2001) where untrained
controls also improved their performance over the study period. The authors
attributed the improvement in the control group over time to a progression in skill
and knowledge just by performing resident duties, and because they may have
studied the subject more than they normally would have to compare favourably with
their trained counterparts. Notwithstanding these considerations, the improvements
seen in the simulation-trained group significantly exceeded those observed in the
untrained group for almost all the measured parameters. Similarly, Scott et al. (2000)
stated that improvement in the control group was expected because residents were
exposed to each task three times during the initial testing session and because they
were undergoing ‘on-the-job’ training while performing different operative cases on
their surgical rotation. Likewise, the improvements seen in the simulation-trained
group significantly outweighed those observed in the control group for many of the
parameters measured.
Although training methods and training duration varied, participants who trained
using simulation improved their simulator performance over time, as indicated by
direct improvements in performance parameters, reaching criterion levels, or the
completion of a certain number of simulated cases.
Transfer outcomes
Simulation-based training vs. no simulation-based training
Laparoscopic cholecystectomy
Laparoscopic surgery is particularly suited to technical skills training as it requires a
skill set based on instrumentation, depth perception, and fine motor control
(Valentine and Rege 2004; Villegas et al. 2003). Five studies were included for
laparoscopic cholecystectomy, covering three different modes of simulation (MIST-
VR, video-trainers and training course). The simulators in the studies taught simple
tasks and were of low fidelity, which is sufficient for novice learners such as those included in the studies. There were large variations in assessment methods within
this intervention, with Scott et al. (2000) and Scott et al. (1999) assessing the
performance of the entire laparoscopic cholecystectomy procedure, Schijven et al. (2005) and Grantcharov et al. (2004) assessing the clip-and-cut part of the procedure,
and Seymour et al. (2002) assessing only the gallbladder excision from the liver.
Despite these variations in assessment, participants who underwent simulation-based
training prior to conducting patient-based laparoscopic cholecystectomy performed
better than their counterparts who had no contact with the simulators. This
improvement was not universal across all the parameters measured; however, the untrained group never outperformed the trained group.
Gallagher et al. (2005) suggest that the most valuable metrics that simulation-based
training can provide involve the measurement of errors. Two studies (with small
sample sizes) reported outcomes for errors during laparoscopic cholecystectomy
(Seymour et al. 2002; Grantcharov et al. 2004), and it was found that the trained
groups generally made fewer errors than untrained groups.
Supervising surgeon takeover was measured in one study (Seymour et al. 2002) and
referred to instances where the supervising surgeon had to take control of the
procedure. Such events have considerable clinical significance, as they represent
catastrophic failures in technique and the point at which patient safety is
compromised. Simulation-trained
64 SECTION 5 · DISCUSSION
- ASERNIP-S REVIEW OF SURGICAL SIMULATION FOR TRAINING: TRANSFER TO THE OR. JULY 2007 -
Other considerations
One of the benefits of demonstrating successful skills transfer following simulation-
based training is the reduced need to use patients for training. This is likely to
increase patient safety, to address some risk management concerns, and to improve
operating theatre efficiency.
The development of technical skills is only one part of surgical training, and no single
parameter measured in a simulator can by itself demonstrate that a trainee has
acquired an expert level of proficiency or competence (Ahlberg et al. 2005). A good
example of this is performance time, which was measured by many of the included
studies. Although more rapid task completion is a recognised feature of expert
performance, measurement of this variable alone does not give any indication of the
quality of the task performed, and caution should be taken when interpreting it
without any additional objective quality data.
Gallagher et al. (2005) suggest that simulation-based training allows for the
development of the ‘pre-trained novice’: an individual who has been trained to the
point where many psychomotor skills and spatial judgments have been automated,
allowing them to focus more on learning operative strategy and how to handle
intraoperative complications, rather than wasting valuable operating room time on
the initial refinement of psychomotor skills.
With adequate pre-training, the trainee can gain maximum advantage from the
supervised opportunities for training in the operating room. Many skills (including
the cognitive skills of anatomical recognition, decision making, leadership, and
communication) must be incorporated into the training of a surgeon. The place of
simulation-based training in a rich curriculum incorporating these other skills has yet
to be defined. Such a curriculum will use a range of skill training modalities, didactic
teaching methods and evaluation techniques (Ahlberg et al. 2005). The aim of a
training program should be to integrate training for technical and non-technical skills
with team skills training in order to make the most effective use of available
resources and to maximise patient safety.
At present, simulation-based training programs are often 'add-ons' to
traditional surgical training, and participation is often voluntary; this is reflected in the
included studies. Although data (survey data or subjective evidence) regarding the
participants’ views on training were positive (Schijven et al. 2005; Ahlberg et al. 2005;
Cohen et al. 2000; Tuggy 1998) (data not shown), the participants were volunteers, indicating a
pre-study willingness to participate. A study by Chang et al. (2007) found that when
given the opportunity to train in a simulation laboratory, many residents chose not to
use it for reasons relating to time, location and lack of interest. To increase
participation rates, and for simulation to be an effective part of a training curriculum,
it has been suggested that simulation-based training will need to be made mandatory
and fully integrated within the curriculum (Gaba 2004). Other approaches to
improving participation in simulation-based training include training in a variety of
settings, the use of multi-disciplinary approaches (Dunkin et al. 2007), and increasing
motivation by using defined performance criteria to set specific training goals
(Ahlberg et al. 2005).
The cost of providing simulation-based training is often used as an argument against
its use. It is difficult to comment on the cost of simulator-based training from the
included studies because of the variations in the devices and the cost of current
training methods (Haluck et al. 2005). In addition, no cost estimates were
reported for training curricula that used more than just a simulator as the training
tool; however, most authors agreed that it was important to justify the cost of
simulation-based training. Data are not yet available to demonstrate the savings achieved
through improved operating room efficiency and/or the reduced risk to patients.
The cost associated with the integration of simulation-based training into surgical
training programs will depend on the approach taken by organisations responsible
for surgical training (Gaba 2004). It is expected that the costs of simulation-based
training will decrease as it becomes viewed as an integral part of surgical training
(Villegas et al. 2003; Haluck et al. 2005).
Future research
The challenges to surgical training are substantial and simulation has the potential to
make a significant contribution to the evolution of the surgical curriculum. It is very
important that further studies be undertaken to provide the best evidence to
determine how simulation-based training can be used in the most beneficial way.
It is recommended that further research be done into the transfer of skills acquired
via surgical simulation to the patient-based setting to strengthen the current evidence
base. While this review is critical of the study designs of the included studies, it is
acknowledged that there are challenges in conducting RCTs in this area. One of these
is bringing appropriate statistical and design discipline to bear, as there is a tendency
to use criteria common within educational research, which many in the surgical
community would view as being less stringent than those normally used in clinical
trials. Such challenges however, should not be used as an excuse to undertake
inferior studies, or provide a justification for inadequate reporting of methodological
detail. Similarly, they should not be deemed to justify avoiding a decision on the
utility of simulation-based training strategies.
Future studies will have the opportunity to explore other important dimensions to
the issue of skills transfer. These would include:
• the nature and duration of training required to deliver the greatest transfer effect
• the stage of training at which trainees receive maximum skill transfer benefits
from different forms of simulation
• the effect of different levels of mentoring during the training period on transfer
rates
• changes in staff productivity as a result of surgical simulation-based training.
The aim of this systematic review was to determine whether skills acquired through
simulation-based training are transferable to the operative setting. The studies
included in this review were of variable quality and design, which limited the strength
of the conclusions. Overall, the evidence available demonstrates that simulation-
based training results in skills transfer to the operative setting. It would therefore
appear that simulation-based training provides a safe, effective and ethical way for
trainees to acquire surgical skills before entering the operating room. Higher quality
studies are required to confirm these findings, and will need to examine different
simulation technologies, clinical procedures, training regimens and assessment
techniques, if the place of simulation-based training within surgical training programs
is to be determined.
Acknowledgments
The authors wish to acknowledge Dr Karen Facey and Ms Prema Thavaneswaran for
their assistance during the preparation of this review. The ASERNIP-S project is
funded by the Australian Government Department of Health and Ageing and the
South Australian Department of Health.
References
Aggarwal R, Grantcharov T, Moorthy K, Hance J, Darzi A. A competency-based
virtual reality training curriculum for the acquisition of laparoscopic
psychomotor skill. American Journal of Surgery 2006; 191: 128-133.
Bridges M and Diamond DL. The financial impact of teaching surgical residents in
the operating room. American Journal of Surgery 1999; 177: 28-32.
Carter FJ, Schijven MP, Aggarwal R, Grantcharov T, Francis NK, Hanna GB,
Jakimowicz JJ. Consensus guidelines for validation of virtual reality surgical
simulators. Surgical Endoscopy 2005; 19(12): 1523-1532.
Cass OW, Freeman ML, Cohen J, Zuckerman G, Watkins J, Nord J, Locke GR,
Jensen D, Diehl D, Cerulli M, Lyche K, Fennerty M, Edmundowicz S,
Etzkorn K, Al-Kawas D, Cave D, Lehman G. Acquisition of competency in
endoscopic skills (ACES) during training: a multicentre study [abstract].
Gastrointestinal Endoscopy 1996; 43(3): 308.
Chaer RA, Derubertis BG, Lin SC, Bush HL, Karwowski JK, Birk D, Morrissey NJ,
Faries PL, McKinsey JF, Kent KC. Simulation improves resident
performance in catheter-based intervention: results of a randomised,
controlled study. Annals of Surgery 2006; 244(3): 343-352.
Chang L, Petros J, Hess DT, Rotondi C, Babineau TJ. Integrating simulation into a
surgical residency program. Surgical Endoscopy 2007; 21: 418-421.
Chapman DM, Rhee KJ, Marx JA, Honigman B, Panacek EA, Martinez D, Brofeldt
BT, Cavanaugh SH. Open thoracotomy procedural competency: Validity
study of teaching and assessment modalities. Annals of Emergency Medicine
1996; 28(6): 641-647.
Cohen J, Cohen SA, Vora KC, Xue X, Burdick JS, Bank S, Bini EJ, Bodenheimer H,
Cerulli M, Gerdes H, Greenwald D, Gress F, Grosman I, Hawes R, Mullen
G, Schnoll-Sussman F, Starpoli A, Stevens P, Tenner S, Villanueva G.
Multicentre, randomised, controlled trial of virtual-reality simulator training
in acquisition of competency in colonoscopy. Gastrointestinal Endoscopy 2006b;
64(3): 361-368.
Dent J. Current trends and future implications in the developing role of clinical skills
centres. Medical Teacher 2001; 35: 909-915.
Deziel DJ, Milikan KW, Economou SG, Doolas A, Ko ST, Airan MC.
Complications of laparoscopic cholecystectomy: a national survey of 4,292
hospitals and an analysis of 77,604 cases. American Journal of Surgery 1993; 165:
9-14.
Edmond CV, Jr. Impact of the endoscopic sinus surgical simulator on operating
room performance. Laryngoscope 2002; 112(7 Pt 1): 1148-1158.
Fried GM. Lessons from the Surgical Experience with Simulators: Incorporation into
Training and Utilisation in Determining Competency. Gastrointestinal
Endoscopy Clinics of North America 2006; 16(3): 425-434.
Gaba DM. The future vision of simulation in health care. Quality and Safety in Health
Care 2004; 12: 2-10.
Gallagher AG, Ritter EM, Champion H, Higgins G, Fried MP, Moses G, Smith CD,
Satava RM. Virtual reality simulation for the operating room: proficiency-
based training as a paradigm shift in surgical skills training. Annals of Surgery
2005; 241(2): 364-372.
Grober ED, Hamstra SJ, Wanzel KR, Reznick RK, Matsumoto ED, Sidhu RS, Jarvi
KA. The educational impact of bench model fidelity on the acquisition of
technical skill: the use of clinically relevant outcome measures. Annals of
Surgery 2004; 240(2): 374-381.
Hamdorf JM and Hall JC. Acquiring surgical skills. British Journal of Surgery 2000;
87(1): 28-37.
Hamilton EC, Scott DJ, Fleming JB, Rege RV, Laycock R, Bergen PC, Tesfay ST,
Jones DB. Comparison of video trainer and virtual reality training systems on
acquisition of laparoscopic skills. Surgical Endoscopy 2002; 16(3): 406-411.
Hamilton EC, Scott DJ, Kapoor A, Nwariaku F, Bergen PC, Rege RV, Tesfay ST,
Jones DB. Improving operative performance using a laparoscopic hernia
simulator. American Journal of Surgery 2001; 182(6): 725-728.
Herbella FAM and Del Grande JC. Human cadavers as an experimental model for
esophageal surgery. Diseases of the Esophagus 2001; 14(3-4): 218-222.
Hyltander A. Simulation as a teaching alternative: Utopia or reality? CAL-laborate
2003; June: 9-12.
Hyltander A, Liljegren E, Rhodin PH, Lonroth H. The transfer of basic skills learned
in a laparoscopic simulator to the operating room. Surgical Endoscopy 2002;
16(9): 1324-1328.
Kaufmann CR. Computers in surgical education and the operating room. Annales
Chirurgiae et Gynaecologiae 2001; 90(2): 141-146.
Kirk RM. Teaching the craft of operative surgery. Annals of the Royal College of Surgeons
of England 1996; 78 (Suppl 1): 25-28.
Korndorffer JR, Jr., Dunne JB, Sierra R, Stefanidis D, Touchard CL, Scott DJ.
Simulator training for laparoscopic suturing using performance goals
translates to the operating room. Journal of the American College of Surgeons 2005;
201(1): 23-29.
Krummel TM. Surgical simulation and virtual reality: the coming revolution. Annals of
Surgery 1998; 228(5): 635-637.
McDougall EM, Corica FA, Boker JR, Sala LG, Stoliar G, Borin JF, Chu FT,
Clayman RV. Construct validity testing of a laparoscopic surgical simulator.
Journal of the American College of Surgeons 2006; 202(5): 779-787.
Reznick RK and MacRae H. Teaching surgical skills - changes in the wind. The New
England Journal of Medicine 2006; 355(25): 2664-2669.
Rivron RP and Maran AGD. The Edinburgh FESS trainer: A cadaver-based bench-
top practise system for endoscopic ethmoidal surgery. Clinical Otolaryngology &
Allied Sciences 1991; 16(4): 426-429.
Roberts KE, Bell RL, Duffy AJ. Evolution of surgical skills training. World Journal of
Gastroenterology 2006; 12(20): 3219-3224.
Satava RM, Gallagher AG, Pellegrini CA. Surgical competence and surgical
proficiency: Definitions, taxonomy, and metrics. Journal of the American College
of Surgeons 2003; 196(6): 933-937.
Schijven MP and Jakimowicz J. The learning curve on the Xitact LS 500 laparoscopic
simulator: profiles of performance. Surgical Endoscopy 2003; 18: 121-127.
Schijven MP and Jakimowicz JJ. Validation of virtual reality simulators: Key to the
successful integration of a novel teaching technology into minimal access
surgery. Minimally Invasive Therapy & Allied Technologies 2005; 14(4-5):
244-246.
Schijven MP, Jakimowicz JJ, Broeders IA, Tseng LN. The Eindhoven laparoscopic
cholecystectomy training course - improving operating room performance
using virtual reality training: results from the first E.A.E.S. accredited virtual
reality trainings curriculum. Surgical Endoscopy 2005; 19(9): 1220-1226.
Scott DJ, Bergen PC, Euhus DM, Guo WA, Jeyarajah DR, Laycock R, Rege RV,
Tesfay ST, Thompson WM, Valentine RJ, Jones DB. Intense laparoscopic
skills training improves operative performance of surgery residents. American
College of Surgeons Surgical Forum 1999; 50(L): 670-671.
Scott DJ, Bergen PC, Rege RV, Laycock R, Tesfay ST, Valentine RJ, Euhus DM,
Jeyarajah DR, Thompson WM, Jones DB. Laparoscopic training on bench
models: Better and more cost effective than operating room experience?
Journal of the American College of Surgeons 2000; 191(3): 272-283.
Sedlack RE and Kolars JC. Computer simulator training enhances the competency of
gastroenterology fellows at colonoscopy: results of a pilot study. American
Journal of Gastroenterology 2004; 99(1): 33-37.
Sedlack RE, Kolars JC, Alexander JA. Computer simulation training enhances patient
comfort during endoscopy. Clinical Gastroenterology and Hepatology 2004; 2: 348-
352.
Seymour NE. Virtual reality in general surgical training. European Surgery - Acta
Chirurgica Austriaca Supplement 2005; 37(5): 298-303.
Seymour NE, Gallagher AG, Roman SA, O'Brien MK, Bansal VK, Andersen DK,
Satava RM. Virtual reality training improves operating room performance:
results of a randomised, double-blinded study. Annals of Surgery 2002; 236(4):
458-463.
Srivastava S, Youngblood PL, Rawn C, Hariri S, Heinrichs WL, Ladd AL. Initial
evaluation of a shoulder arthroscopy simulator: establishing construct
validity. Journal of Shoulder and Elbow Surgery 2004; 13(2): 196-205.
Valentine RJ and Rege RV. Integrating technical competency into the surgical
curriculum: doing more with less. Surgical Clinics of North America 2004; 84(6):
1647-1667.
Van Sickle KR, Ritter EM, Smith CD. The pretrained novice: Using simulation-based
training to improve learning in the operating room. Surgical Innovation 2006;
13(3): 198-204.
Villegas L, Schneider BE, Callery MP, Jones DB. Laparoscopic skills training. Surgical
Endoscopy 2003; 17(12): 1879-1888.
Woodrow SI, Segouin C, Armbruster J, Hamstra SJ, Hodges B. Duty hours reforms
in the United States, France, and Canada: Is it time to refocus our attention
on education? Academic Medicine 2006; 81(12): 1045-1051.
Appendix A – Excluded studies
The following articles were excluded from the methodological assessment as outlined
in the methods section of the review.
Chapman DM, Rhee KJ, Marx JA, Honigman B, Panacek EA, Martinez D, Brofeldt BT, Cavanaugh SH. Open thoracotomy procedural competency: Validity study of teaching and assessment modalities. Annals of Emergency Medicine 1996; 28(6): 641-647. [Reason for exclusion: Assessment in animals]

Clark JA, Volchok JA, Hazey JW, Ssadighi PJ, Fanelli RD. Initial experience using an endoscopic simulator to train surgical residents in flexible endoscopy in a community medical centre residency program. Current Surgery 2005; 62(1): 59-63. [Reason for exclusion: Assessed in a simulator/via simulation]

Datta V, Bann S, Beard J, Mandalia M, Darzi A. Comparison of bench test evaluations of surgical skill with live operating performance assessments. Journal of the American College of Surgeons 2004; 199(4): 603-606. [Reason for exclusion: No comparison between groups]

Datta V, Bann S, Mandalia M, Darzi A. The surgical efficiency score: a feasible, reliable, and valid method of skills assessment. American Journal of Surgery 2006; 192(3): 372-378. [Reason for exclusion: Assessed in a simulator/via simulation]

Datta V, Mackay S, Mandalia M, Darzi A. The use of electromagnetic motion tracking analysis to objectively measure open surgical skill in the laboratory-based model. Journal of the American College of Surgeons 2001; 193(5): 479-485. [Reason for exclusion: Assessed in a simulator/via simulation]

Duffy AJ, Hogle NJ, McCarthy H, Lew JI, Egan A, Christos P, Fowler DL. Construct validity for the LAPSIM laparoscopic surgical simulator. Surgical Endoscopy 2005; 19(3): 401-405. [Reason for exclusion: Assessed in a simulator/via simulation]

Fearn SJ, Burke K, Hartley DE, Semmens JB, Lawrence-Brown MMD. A laparoscopic access technique for endovascular procedures: Surgeon training in an animal model. Journal of Endovascular Therapy 2006; 13(3): 350-356. [Reason for exclusion: Assessment in animals]

Ford GS, Mazzone MA, Taylor K. Effect of computer-assisted instruction versus traditional modes of instruction on student learning of musculoskeletal special tests. Journal of Physical Therapy Education 2005; 19(2): 22-30. [Reason for exclusion: Not surgical]

Fried GM, Derossis AM, Bothwell J, Sigman HH. Comparison of laparoscopic performance in vivo with performance measured in a laparoscopic simulator. Surgical Endoscopy 1999; 13(11): 1077-1081. [Reason for exclusion: No comparison between groups]

Friedlich M, MacRae H, Oandasan I, Tannenbaum D, Batty H, Reznick R, Regehr G. Structured assessment of minor surgical skills (SAMSS) for family medicine residents. Academic Medicine 2001; 76(12): 1241-1246. [Reason for exclusion: Assessed in a simulator/via simulation]

Gallagher AG, Smith CD, Bowers SP, Seymour NE, Pearson AMS, Hananel D, Satava RM. Psychomotor skills assessment in practicing surgeons experienced in performing advanced laparoscopic procedures. Journal of the American College of Surgeons 2003; 197: 479-488. [Reason for exclusion: Assessed in a simulator/via simulation]

Goldmann K and Steinfeldt T. Acquisition of basic fiberoptic intubation skills with a virtual reality airway simulator. Journal of Clinical Anaesthesia 2006; 18(3): 173-178. [Reason for exclusion: Case series]

Grantcharov TP, Rosenberg J, Pahle E, Funch-Jensen P. Virtual reality computer simulation: An objective method for the evaluation of laparoscopic surgical skills. Surgical Endoscopy 2001; 15(3): 242-244. [Reason for exclusion: Assessment in animals]

Grober ED, Hamstra SJ, Wanzel KR, Reznick RK, Matsumoto ED, Sidhu RS, Jarvi KA. Laboratory based training in urological microsurgery with bench model simulators: a randomised controlled trial evaluating the durability of technical skill. Journal of Urology 2004; 172(1): 378-381. [Reason for exclusion: Assessment in animals]

Grober ED, Hamstra SJ, Wanzel KR, Reznick RK, Matsumoto ED, Sidhu RS, Jarvi KA. The educational impact of bench model fidelity on the acquisition of technical skill: the use of clinically relevant outcome measures. Annals of Surgery 2004; 240(2): 374-381. [Reason for exclusion: Assessment in animals]

Halvorsen FH, Elle OJ, Dalinin VV, Mork BE, Sorhus V, Rotnes JS, Fosse E. Virtual reality simulator training equals mechanical robotic training in improving robot-assisted basic suturing skills. Surgical Endoscopy 2006; 20(10): 1565-1569. [Reason for exclusion: Assessment in animals]
Hamad MA, Mentges B, Buess G. Laparoscopic sutured anastomosis of the bowel. Surgical Endoscopy 2003; 17(11): 1840-1844. [Reason for exclusion: Assessment in animals]

Hamilton EC, Scott DJ, Fleming JB, Rege RV, Laycock R, Bergen PC, Tesfay ST, Jones DB. Comparison of video trainer and virtual reality training systems on acquisition of laparoscopic skills. Surgical Endoscopy 2002; 16(3): 406-411. [Reason for exclusion: No untrained group]

Hance J, Aggarwal R, Moorthy K, Munz Y, Undre S, Darzi A. Assessment of psychomotor skills acquisition during laparoscopic cholecystectomy courses. American Journal of Surgery 2005; 190(3): 507-511. [Reason for exclusion: Assessment in animals]

Hariri S, Rawn C, Srivastava S, Youngblood P, Ladd A. Evaluation of a surgical simulator for learning clinical anatomy. Medical Education 2004; 38(8): 896-902. [Reason for exclusion: Not skills training]

Hart R, Doherty DA, Karthigasu K, Garry R. The value of virtual reality-simulator training in the development of laparoscopic surgical skills. Journal of Minimally Invasive Gynaecology 2006; 13(2): 126-133. [Reason for exclusion: Assessment in animals]

Heinrich M, Tillo N, Kirlum HJ, Till H. Comparison of different training models for laparoscopic surgery in neonates and small infants. Surgical Endoscopy 2006; 20: 641-644. [Reason for exclusion: Assessment in animals]

Henkel TO, Potempa DM, Rassweiler J, Manegold BC, Alken P. Lap simulator, animal studies, and the Laptent. Bridging the gap between open and laparoscopic surgery. Surgical Endoscopy 1993; 7(6): 539-543. [Reason for exclusion: Focus not on skills training]

Hochberger J, Matthes K, Maiss J, Koebnick C, Hahn EG, Cohen J. Training with the compactEASIE biologic endoscopy simulator significantly improves hemostatic skill of gastroenterology Fellows: a randomised controlled comparison with clinical endoscopy training alone. Gastrointestinal Endoscopy 2005; 61(2): 204-215. [Reason for exclusion: Assessed in a simulator/via simulation]

Hyltander A, Liljegren E, Rhodin PH, Lonroth H. The transfer of basic skills learned in a laparoscopic simulator to the operating room. Surgical Endoscopy 2002; 16(9): 1324-1328. [Reason for exclusion: Assessment in animals]

Jacomides L, Ogan K, Cadeddu JA, Pearle MS. Use of a virtual reality simulator for ureteroscopy training. Journal of Urology 2004; 171(1): 320-323. [Reason for exclusion: Assessed in a simulator/via simulation]

Kimara T, Kawabe A, Suzuki K, Wada H. Usefulness of a virtual reality simulator or training box for endoscopic surgery training. Surgical Endoscopy 2006; 20: 656-659. [Reason for exclusion: Assessment in animals]

Knudson MM and Sisley AC. Training residents using simulation technology: experience with ultrasound for trauma. Journal of Trauma 2000; 48(4): 659-665. [Reason for exclusion: Not surgical]

Korndorffer J, Jr., Dunne JB, Sierra R, Stefanidis D, Touchard CL, Scott DJ. Simulator training for laparoscopic suturing using performance goals translates to the operating room. Journal of the American College of Surgeons 2005; 201(1): 23-29. [Reason for exclusion: Assessment in animals]

Korndorffer JR, Jr., Hayes DJ, Dunne JB, Sierra R, Touchard CL, Markert RJ, Scott DJ. Development and transferability of a cost-effective laparoscopic camera navigation simulator. Surgical Endoscopy 2005; 19(2): 161-167. [Reason for exclusion: Assessment in animals]

Korndorffer JR, Stefanidis D, Scott DJ. Laparoscopic skills laboratories: current assessment and a call for resident training standards. American Journal of Surgery 2006; 191(1): 17-22. [Reason for exclusion: Focus not on skills training]

Kothari SN, Kaplan BJ, DeMaria EJ, Broderick TJ, Merrell RC. Training in laparoscopic suturing skills using a new computer-based virtual reality simulator (MIST-VR) provides results comparable to those with an established pelvic trainer system. Journal of Laparoendoscopic & Advanced Surgical Techniques. Part A 2002; 12(3): 167-173. [Reason for exclusion: Assessed in a simulator/via simulation]

Larsson A. Intracorporeal suturing and knot tying in surgical simulation. Studies in Health Technology Information 2001; 81: 266-271. [Reason for exclusion: Focus not on skills training]

Lee SK. Trauma assessment training with a patient simulator: a prospective, randomised study. Journal of Trauma 2003; 55(4): 651-657. [Reason for exclusion: Not surgical]
Lehmann KS, Ritz JP, Maass H, Cakmak HK, Kuehnapfel UG, Germer CT, Bretthauer G, Buhr HJ. A prospective randomised study to test the transfer of basic psychomotor skills from virtual reality to physical reality in a comparable training setting. Annals of Surgery 2005; 241(3): 442-449. [Reason for exclusion: Assessed in a simulator/via simulation]

Madam AK, Frantzides CT, Shervin N, Tebbit CL. Assessment of individual hand performance in box trainers compared to virtual reality trainers. The American Surgeon 2003; 69: 1112-1114. [Reason for exclusion: Assessed in a simulator/via simulation]

Madan AK, Frantzides CT, Tebbit C, Quiros RM. Participants' opinions of laparoscopic training devices after a basic laparoscopic training course. American Journal of Surgery 2005; 189(6): 758-761. [Reason for exclusion: Assessment in animals]

Maiss J, Dumser C, Zopf Y, Naegel A, Krauss N, Hochberger J, Matthes K, Hahn EG, Schwab D. "Hemodynamic efficacy" of two endoscopic clip devices used in the treatment of bleeding vessels, tested in an experimental setting using the compact Erlangen Active Simulator for Interventional Endoscopy (compactEASIE) training model. Endoscopy 2006; 38(6): 575-580. [Reason for exclusion: Assessed in a simulator/via simulation]

Maiss J, Wiesnet J, Proeschel A, Matthes K, Prat F, Cohen J, Chaussade S, Sautereau D, Naegel A, Krauss N, Peters A, Hahn EG, Hochberger J. Objective benefit of a 1-day training course in endoscopic hemostasis using the "compactEASIE" endoscopy simulator. Endoscopy 2005; 37(6): 552-558. [Reason for exclusion: Assessed in a simulator/via simulation]

Martin JA, Regehr G, Reznick RK, MacRae H, Murnaghan J, Hutchison C, Brown M. Objective structured assessment of technical skills (OSATS) for surgical residents. British Journal of Surgery 1997; 84(2): 273-278. [Reason for exclusion: Focus not on skills training]

Matsumoto ED, Kondraske GV, Ogan K, Jacomides L, Wilhelm DM, Pearle MS, Cadeddu JA. Assessment of basic human performance resources predicts performance of ureteroscopy. American Journal of Surgery 2006; 191(6): 817-820. [Reason for exclusion: Assessment in cadaver]

McClusky DA, III, Ritter EM, Lederman AB, Gallagher AG, Smith CD. Correlation between perceptual, visuo-spatial, and psychomotor aptitude to duration of training required to reach performance goals on the MIST-VR surgical simulator. The American Surgeon 2005; 71(1): 13-20. [Reason for exclusion: Assessed in a simulator/via simulation]

Molinas CR, Binda MM, Mailova K, Koninckx PR. The rabbit nephrectomy model for training in laparoscopic surgery. Human Reproduction 2004; 19(1): 185-190. [Reason for exclusion: Case series]

Moorthy K, Mansoori M, Bello F, Hance J, Undre S, Munz Y, Darzi A. Evaluation of the benefit of VR simulation in a multi-media web-based educational tool. Studies in Health Technology Information 2004; 98: 247-252. [Reason for exclusion: Assessed in a simulator/via simulation]

Moorthy K, Munz Y, Adams S, Pandey V, Darzi A, Imperial C. Self-assessment of performance among surgical trainees during simulated procedures in a simulated operating theatre. American Journal of Surgery 2006; 192(1): 114-118. [Reason for exclusion: Assessed in a simulator/via simulation]

Munz Y, Kumar BD, Moorthy K, Bann S, Darzi A. Laparoscopic virtual reality and box trainers: is one superior to the other? Surgical Endoscopy 2004; 18(3): 485-494. [Reason for exclusion: Assessed in a simulator/via simulation]

Nadu A, Olsson LE, Abbou CC. Simple model for training in the laparoscopic vesicourethral running anastomosis. Journal of Endourology 2003; 17(7): 481-484. [Reason for exclusion: Results for two groups not reported separately]

Ogan K, Jacomides L, Shulman MJ, Roehrborn CG, Cadeddu JA, Pearle MS. Virtual ureteroscopy predicts ureteroscopic proficiency of medical students on a cadaver. Journal of Urology 2004; 172(2): 667-671. [Reason for exclusion: Assessment in cadaver]

Ost D, DeRosiers A, Britt EJ, Fein M, Lesser L, Mehta AC. Assessment of a bronchoscopy simulator. American Journal of Respiratory and Critical Care 2001; 164: 2248-2255. [Reason for exclusion: Not surgical]
O'Toole RV, Playter RR, Krummel TM, Blank WC, Cornelius NH, Roberts WR, Bell WJ, Raibert M. Measuring and developing suturing technique with a virtual reality surgical simulator. Journal of the American College of Surgeons 1999; 189(1): 114-127. [Reason for exclusion: Assessed in a simulator/via simulation]

Paisley AM, Baldwin PJ, Paterson-Brown S. Validity of surgical simulation for the assessment of operative skill. British Journal of Surgery 2001; 88(11): 1525-1532. [Reason for exclusion: Results for two groups not reported separately]

Peugnet F, Dubois P, Rouland JF. Virtual reality versus conventional training in retinal photocoagulation: a first clinical assessment. Computer Aided Surgery 1998; 3(1): 20-26. [Reason for exclusion: Not surgical]

Richards C, Rosen J, Hannaford B, Pellegrini C, Sinanan M. Skills evaluation in minimally invasive surgery using force/torque signatures. Surgical Endoscopy 2000; 14(9): 791-798. [Reason for exclusion: No comparison between groups]

Ritter ME, McClusky DA, Gallagher AG, Enochsson L, Smith D. Perceptual, visuospatial, and psychomotor abilities correlate with duration of training required on a virtual-reality flexible endoscopy simulator. American Journal of Surgery 2006; 192: 379-384. [Reason for exclusion: Assessed in a simulator/via simulation]

Rosenthal R, Gantert WA, Scheidegger D, Oertli D. Can skills assessment on a virtual reality trainer predict a surgical trainee's talent in laparoscopic surgery? Surgical Endoscopy 2006; 20(8): 1286-1290. [Reason for exclusion: Assessed in a simulator/via simulation]

Rossi JV, Verma D, Fujii GY, Lakhanpal RR, Wu SL, Humayun MS, De Juan E Jr. Virtual vitreoretinal surgical simulator as a training tool. Retina 2004; 24(2): 231-236. [Reason for exclusion: Assessed in a simulator/via simulation]

Rulli F, Cina G, Galata G, Cina A, Vincenzoni C, Fiorentino A, Farinon AM. Teaching subfascial perforator veins surgery: Survey on a 2-day hands-on course. ANZ Journal of Surgery 2004; 74(12): 1116-1119. [Reason for exclusion: Not skills training]

Scerbo MW, Schmidt EA, Bliss JP. Comparison of a virtual reality simulator and simulated limbs for phlebotomy training. Journal of Infusion Nursing 2006; 29(4): 214-224. [Reason for exclusion: No untrained group]

Schijven MP and Jakimowicz J. The learning curve on the Xitact LS 500 laparoscopy simulator: profiles of performance. Surgical Endoscopy 2004; 18(1): 121-127. [Reason for exclusion: Assessed in a simulator/via simulation]

Scott DJ, Rege RV, Bergen PC, Guo WDA, Laycock R, Tesfay ST, Valentine RJ, Jones DB. Measuring operative performance after laparoscopic skills training: Edited videotape versus direct observation. Journal of Laparoendoscopic & Advanced Surgical Techniques - Part A 2000; 10(4): 183-190. [Reason for exclusion: Focus not on skills training]

Seymour NE. Integrating simulation into a busy residency program. Minimally Invasive Therapy and Allied Technology 2005; 14(4): 280-286. [Reason for exclusion: Assessment in animals]

Smeak DD, Beck ML, Shaffer CA, Gregg CG. Evaluation of video tape and a simulator for instruction of basic surgical skills. Veterinary Surgery 1991; 20(1): 30-36. [Reason for exclusion: Assessment in animals]

Srivastava S, Youngblood PL, Rawn C, Hariri S, Heinrichs WL, Ladd AL. Initial evaluation of a shoulder arthroscopy simulator: establishing construct validity. Journal of Shoulder and Elbow Surgery 2004; 13(2): 196-205. [Reason for exclusion: Assessed in a simulator/via simulation]

Stefanidis D, Sierra R, Korndorffer JR, Jr., Dunne JB, Markley S, Touchard CL, Scott DJ. Intensive continuing medical education course training on simulators results in proficiency for laparoscopic suturing. American Journal of Surgery 2006; 191(1): 23-27. [Reason for exclusion: Case series]

Strom P, Hedman L, Sarna L, Kjellin A, Wredmark T, Fellander-Tsai L. Early exposure to haptic feedback enhances performance in surgical simulator training: a prospective randomised crossover study in surgical residents. Surgical Endoscopy 2006; 20(9): 1383-1388. [Reason for exclusion: Assessed in a simulator/via simulation]
Suzuki N, Thomas-Gibson S, Vance M, Fraser C, Swain D, Schofield G, Case series
Saunders BP. Efficacy of an accelerated colonoscopy training week: Audit
from one national colonoscopy training centre in the UK. Digestive
Endoscopy. 2006; 18(4): 288-293.
Torkington J, Smith SG, Rees BI, Darzi A. Skill transfer from virtual reality Assessed in a simulator/via simulation
to a real laparoscopic task. Surgical Endoscopy 2001; 15(10): 1076-1079.
Van Sickle KR, Ritter EM, Smith CD. The pre-trained novice: Using Assessment in animals
simulation-based training to improve learning in the operating room.
Surgical Innovation 2006; 13(3): 198-204.
Verdaasdonk EG, Stassen LP, van Wijk RP, Dankelman J. The influence of Assessed in a simulator/via simulation
different training schedules on the learning of psychomotor skills for
endoscopic surgery. Surgical Endoscopy 2006;
Watterson JD, Beiko DT, Kuan JK, Denstedt JD. Randomised prospective Assessed in a simulator/via simulation
blinded study validating acquisition of ureteroscopy skills using computer
based virtual reality endourological simulator. Journal of Urology 2002;
168(5): 1928-1932.
Wong K and Stewart F. Competency-based training of basic surgical Case series
trainees using human cadavers. ANZ Journal of Surgery 2004; 74(8): 639-
642.
Woodrum DT, Andreatta PB, Yellamanchilli RK, Feryus L, Gauger PG, Case series
Minter RM. Construct validity of the LapSim laparoscopic surgical
simulator. American Journal of Surgery 2006; 191(1): 28-32.
Youngblood PL, Srivastava S, Curet M, Heinrichs WL, Dev P, Wren SM. Assessment in animals
Comparison of training on two laparoscopic simulators and assessment of
skills transfer to surgical performance. Journal of the American College of
Surgeons 2005; 200(4): 546-551.
APPENDIX B – HIERARCHY OF EVIDENCE
I      Evidence obtained from a systematic review of all relevant randomised controlled trials.
II     Evidence obtained from at least one properly designed randomised controlled trial.
III-1  Evidence obtained from well-designed pseudo-randomised controlled trials (alternate allocation or some other method).
III-2  Evidence obtained from comparative studies (including systematic reviews of such studies) with concurrent controls and allocation not randomised, cohort studies, case-control studies, or interrupted time-series with a control group.
III-3  Evidence obtained from comparative studies with historical control, two or more single-arm studies, or interrupted time series without a parallel control group.
IV     Evidence obtained from case series, either post-test or pre-test/post-test.
APPENDIX C – METHODOLOGICAL ASSESSMENT AND
STUDY DESIGN TABLES
Appendix C.1 Study design tables – MIST-VR training vs. no simulator training
Grantcharov et al. 2004

Location: Department of Surgical Gastroenterology, Aarhus University, and Department of Surgical Gastroenterology, University of Copenhagen at Hvidovre, Glostrup, and Gentofte Hospitals, Denmark.

Objective: To investigate whether laparoscopic skills acquired on MIST-VR transfer to performance of laparoscopic cholecystectomy.

Pre-test: 16 surgical residents performed laparoscopic cholecystectomy on patients under supervision of an experienced surgeon.

Intervention: Participants then randomised:
• Training: MIST-VR, 10 repetitions of all 6 tasks: 1) grasp virtual sphere and place in virtual box; 2) grasp virtual sphere, transfer instrument, and place in virtual box; 3) grasp alternate segments of virtual pipe; 4) grasp virtual sphere, touch it with the tip of the other instrument, withdraw and reinsert, and once more touch the sphere; 5) grasp sphere, touch virtual plates with the other instrument and remove them with virtual diathermy activated by pedal; 6) combination of tasks 4) and 5).
• Control: no simulator training.

Time to assessment: within 14 days of training.

Assessment: All participants again performed a supervised laparoscopic cholecystectomy on patients. Both operative procedures were recorded on videotape and assessed independently by 2 senior surgeons using predefined rating scales. Only part of the procedure was assessed, starting from the point at which clips were applied to the cystic artery and cystic duct and finishing with dissection of the gallbladder from the liver bed.

Device: MIST-VR (Mentice Medical Simulation, Gothenburg, Sweden).

Study design: Randomised controlled trial (level of evidence: II). Method of randomisation: not stated. Allocation concealment: sealed envelopes. Blinding: supervising surgeon blinded during assessment operation; assessors blinded to training status and performed evaluations independently. Intention to treat: not stated. Power calculation: not stated. Lost to assessment: n = 4 (video recorder malfunction); training n = 2, control n = 2. Study period: August 2000 – August 2002.

Outcome measures: validated rating scale assessing psychomotor skills only:
• Time to complete procedure
• Error score
• Economy of movement

Study population: Sample size: n = 20 (training n = 10; control n = 10).
Baseline characteristics of participants (training vs. control):
• Sex ratio (M:F): 5:3 vs. 5:3
• Age (yrs): 36.5 (31-40) vs. 36.5 (32-44)
• Time since graduation (yrs)*: 7 (4-10) vs. 7 (5-14)
• No. of previous laparoscopic cholecystectomies performed*: 6 (2-7) vs. 4 (0-8)
*Values are median (range).
Inclusion: general surgical residents. Exclusion: not stated. Details of patients for live assessment: not stated.
Appendix C.1 Study design tables – Procedicus VIST™ training vs. no simulator training
Chaer et al. 2006

Location: Columbia University, Weill Cornell Division of Vascular Surgery, New York Presbyterian Hospital, New York, USA.

Objective: To measure the effectiveness of simulator training on the performance of catheter-based interventions by surgical residents.

All participants: All were provided with didactic teaching in the form of reading material and a lecture on catheter-based intervention. An entrance survey was performed to determine demographics as well as previous experience. All participants were administered a visuospatial evaluation consisting of a card rotation and a cube comparison test (Educational Testing Service, Princeton, NJ) to evaluate pre-study differences in residents' ability to perceive 3-dimensional structures.

Intervention: Participants then randomised:
• Training: Procedicus VIST™, a computer-based haptic simulator for standardised iliofemoral angioplasty/stenting, with supervision present. The endpoint of simulator training was independent completion of the procedure by the participant, with proficiency in all basic endovascular techniques. Training was not allowed to exceed 2 hours.
• Control: no simulator training.

Time to assessment: within 14 days of randomisation.

Assessment: All participants performed 2 consecutive catheter-based interventions for lower extremity occlusive disease, supervised by a blinded attending vascular surgeon. Assessment was performed by the attending vascular surgeon using an 18-step checklist of the required steps for a standard catheter-based intervention, as well as a validated global rating scale of the participant's endovascular technique and ability. All participants were verbally guided through the steps of the procedure by the attending surgeon while their technical skills were assessed.

Device: Procedicus VIST™ (Mentice, Inc., Evanston, IL). Desktop PC (Intel Xeon 2.66 GHz, 1 GB RAM, nVIDIA GeForce4 Ti 4200 with AGP 8X) running 3-D software of the human arterial system, coupled to a haptic module utilising a force feedback system. The instructional system is displayed on a touch screen monitor, and a simulated fluoroscopic image is displayed on a second monitor.

Study design: Randomised controlled trial (level of evidence: II). Method of randomisation: not stated. Allocation concealment: sealed envelopes. Blinding: assessing vascular surgeon blinded to training status. Intention to treat: not stated. Power calculation: not stated. Lost to assessment: not stated. Study period: not stated.

Outcome measures: criteria assessed included:
• Time and motion
• Wire and catheter handling
• Awareness of wire position
• Maintenance of wire stability
• Awareness of fluoroscopy usage
• Precision of wire/catheter technique
• Flow of operation
• Knowledge of procedure
• Quality of final product
• Ability to complete the case
• Need for verbal prompts
• Attending takeover

Study population: Sample size: n = 20 (training n = 10; control n = 10). Baseline characteristics of participants: information collected but data not shown. Inclusion: general surgical residents with no prior endovascular experience. Exclusion: not stated. Details of patients for live assessment: not stated.
Appendix C.1 Study design tables – Total extraperitoneal (TEP) hernia repair curriculum training vs. no TEP hernia repair curriculum training
Hamilton et al. 2001

Location: Southwestern Centre for Minimally Invasive Surgery, Department of Surgery, University of Texas Southwestern Medical Centre, Dallas, Texas, USA.

Objective: To evaluate the impact of a laparoscopic hernia repair curriculum on resident surgeons' operative performance and confidence with laparoscopic total extraperitoneal (TEP) hernia repair.

Participants: All participants completed a questionnaire regarding baseline operative experience and familiarity and comfort with TEP hernia repair.

Pre-test: All performed a laparoscopic TEP hernia repair during week 1 of a general surgery rotation; baseline performance was evaluated by 1 of 4 faculty members using a validated global assessment tool. Each operation was performed under the guidance of a faculty surgeon who served as both first assistant and assessor. Participants were encouraged to work as independently as possible.

Intervention: Participants randomised to two groups:
• Training: 10 separate 30-minute sessions over a 2-week period. Training included a detailed instructional video (watched once at the beginning and once at the end of the study); an interactive CD-ROM; and a moulded rubber hernia simulator (alternate daily instruction via CD-ROM and simulator).
• Control: no TEP hernia repair curriculum training.

Time to assessment: after completion of 2 weeks of training.

Assessment: Participants were tested after the training period in an operating room, performing a real TEP hernia repair under conditions similar to the pre-test evaluation, by the same group of assessors.

Device: TEP simulator designed by the Southwestern Centre for Minimally Invasive Surgery in conjunction with GSI. Consists of a rubber model of a human pelvis including all relevant pelvic anatomy. The model accepts a laparoscope and trocars and a replaceable rubber insert that allows participants to practise repeated mesh fixation over indirect, direct, or femoral defects.

Study design: Randomised controlled trial (level of evidence: II). Randomisation method: not stated. Allocation concealment: not stated. Blinding: evaluators blinded to participants' training status. Intention to treat: not stated. Power calculation: not stated. Follow-up: not stated. Lost to assessment: not stated. Study period: January 2000 – March 2001.

Outcome measures: performance measures for laparoscopic TEP hernia repair:
• Respect for tissue
• Time and motion
• Instrument handling
• Knowledge of instruments
• Flow of operation
• Use of assistants
• Knowledge of specific procedure
• Overall performance

Study population: Sample size: n = 21 (training n = 10; control n = 11). Baseline characteristics of participants: not stated. Gender mix: not stated. Inclusion criteria: 3rd and 4th year surgery residents on surgical rotation. Exclusion criteria: not stated. Details of patients for live assessment: not stated.
Appendix C.1 Study design tables – MIST-VR training vs. no simulator training
Seymour et al. 2002

Location: Department of Surgery, Yale University School of Medicine, New Haven, Connecticut, USA, and Department of Psychology, Queen's University, Belfast, Northern Ireland.

Objective: To demonstrate that virtual reality training transfers technical skills to the operating room environment.

Pre-test: All residents completed a series of previously validated tests to assess fundamental abilities: visuospatial (Card Rotation, Cube Comparison and Map Plan tests), perceptual (Pictorial Surface Orientation test), and psychomotor (MIST-VR at medium level of difficulty).

Intervention: Participants randomised:
• MIST-VR training + standard training (ST): criterion levels first established by 4 experienced surgeons on the Manipulate and Diathermy task at difficult level. Participants were then required to perform the same task equally well with both hands on 2 consecutive trials at this criterion level. Training sessions lasted approximately 1 hour and were continued until expert criterion levels were achieved. Training was supervised.
• Control: standard programmatic training; no simulator training.
All participants watched a standard video. This video defined specific deviations from optimal performance that would be considered errors. After the viewing, all residents were given an 8-question multiple-choice exam that tested recognition of these errors.

Time to assessment: not reported.

Assessment: All residents performed laparoscopic cholecystectomy (gallbladder excision after division of the identified cystic structures) using a standardised 2-handed method with 1 of the surgeon-investigators. This part of the procedure was video recorded with voice audio. Procedures with attending takeover were flagged for examination of the audio. Each procedural video was viewed without audio by 2 surgeon-investigators.

Device: MIST-VR (Mentice AB, Gothenburg, Sweden), frameset v1.2, run on a desktop PC (400 MHz Pentium II, 64 MB RAM). The video subsystem employed (Matrox Mystique, * MB SDRAM) delivered a frame rate of approximately 15 frames per second, permitting near-real-time translation of instrument movements to the video screen. The laparoscopic interface input device (Immersion Corporation, San Jose, CA) consisted of 2 laparoscopic instruments, at a comfortable surgical height relative to the operator, mounted in a frame by position-sensing gimbals that provided 6 degrees of freedom, as well as a foot pedal to activate simulated electrosurgery instruments.

Study design: Randomised controlled trial (level of evidence: II); participants stratified by postgraduate year. Randomisation method: not stated. Allocation concealment: not stated. Blinding: assessors blinded to participants' training status. Intention to treat: not stated. Power calculation: not stated. Follow-up: not stated. Lost to assessment: not stated. Study period: not stated.

Outcome measures: from archive footage, 8 events associated with the excisional phase of the procedure were identified as errors and chosen as study measurements:
• Lack of progress
• Gallbladder injury
• Liver injury
• Incorrect plane of dissection
• Burn non-target tissue
• Tearing tissue
• Instrument out of view
• Attending takeover
The gallbladder excision phase was scored on a minute-by-minute basis using a scoring matrix. Errors were recorded using fixed-interval time span sampling.

Study population: Sample size: n = 16 (training n = 8; control n = 8). Baseline characteristics of participants: female n = 5; male n = 11. Inclusion criteria: surgical residents (years 1-4). Exclusion criteria: not stated. Details of patients for live assessment: not stated.
Appendix C.1 Study design tables – Video trainer (SCMIS GEM) training vs. no simulator training
Scott et al. 2000

Location: Department of Surgery, University of Texas Southwestern Medical Centre, Dallas, Texas, USA.

Objective: To develop a model to provide intense laparoscopic skills training to surgical residents and to determine whether improvement of skills on a video trainer translates into improved operative performance.

Participants: All participants randomised. All completed a baseline questionnaire regarding earlier laparoscopic experience and competency in laparoscopic skills.

Pre-test: During week 1, all residents were tested on all tasks on the video trainer (each task performed 3 times). No practice was allowed before testing, except for the suture foam drill so that participants could become familiar with the device. All participants performed a laparoscopic cholecystectomy in the operating theatre. Operations were performed in a one-handed or two-handed fashion according to the supervising surgeon's preference, and participants were allowed to perform as much of the operation as possible. Operations were supervised by 1 of 3 designated (blinded) faculty surgeons who also acted as first assistant during the operation. Participants were asked about key anatomic landmarks and their operative plan. Operations were videotaped for investigations outside the scope of this study (used in a separate study, Scott et al. 2000*; see reference below).

Intervention:
• Training: SCMIS GEM video trainer. During weeks 2 and 3, residents in the training group met for at least 30 minutes a day for 10 days to perform the 5 established laparoscopic drills on a video trainer. Tasks: bean drop, running string, checkerboard, block move, and suture foam.
• Control: no simulator training.

Time to assessment: during week 4.

Assessment: All residents were again tested on the 5 video trainer tasks (each task performed 3 times), and average completion time was recorded. All residents performed another laparoscopic cholecystectomy in the operating theatre with the same first assistant surgeon. The same 3 faculty evaluators performed global assessments based on direct observation. Global assessments were also performed by 3 additional faculty surgeons who did not participate in the operations and were blinded to training status. At the completion of the rotation, all residents were asked to complete a questionnaire regarding their laparoscopic experience. After the rotation, the control group was offered video trainer experience (results not included as part of this study).

Device: Southwestern Centre for Minimally Invasive Surgery Guided Endoscopic Module (SCMIS GEM): six-station video trainer (Karl Storz Endoscopy, Culver City, CA).

Study design: Randomised controlled trial (level of evidence: II). Method of randomisation: random digits table. Allocation concealment: not stated. Blinding: assessing surgeons and faculty evaluators blinded. Intention to treat: not stated. Power calculation: nonparametric power analysis performed to determine the sample size needed to detect an effect of training; 27 or more participants provide a power of at least 0.8 with a type I error of 0.05. Lost to assessment (scheduling difficulties): training n = 4; control n = 1. Study period: August 1998 – January 1999.

Outcome measures: improvement defined as baseline minus post-training score, calculated for each resident and adjusted by linear analysis of covariance for differences in baseline scores:
• Respect for tissue
• Time and motion
• Instrument handling
• Knowledge of instruments
• Flow of operation
• Knowledge of specific procedure
• Overall performance

Study population: Sample size: n = 27 (training n = 13; control n = 14). Baseline characteristics of participants: mean no. of previous laparoscopic cholecystectomies performed (as either the surgeon or first assistant): training 15; control 18 (P = 0.501). Inclusion: 2nd and 3rd year general surgical residents rotating for one-month periods on the general surgery services at Parkland Memorial Hospital. Exclusion: not stated. Details of patients for live assessment: patients with a diagnosis of symptomatic cholelithiasis for whom elective cholecystectomy was indicated were scheduled for the observed cases.

Scott et al. 2000* (not the included study; see reference below) compared the global assessment data from direct observations in this study with global assessments of the videotaped procedures. Global assessment of the videotapes was conducted by the same assessors as the global assessments from direct observation. Correlation coefficients for videotape versus direct observation for five global assessment criteria were <0.33 for both raters (NS for all values). The correlation coefficient for interrater reliability for the overall score was 0.57 (P = 0.01) for direct observation vs. 0.28 (NS) for videotape. The trained group had significantly better overall performance than the control group according to assessment by direct observation (P = 0.02) but not by videotape assessment (NS).

* Scott DJ, Rege RV, Bergen PC, Guo WDA, Laycock R, Tesfay ST, Valentine RJ, Jones DB. Measuring operative performance after laparoscopic skills training: edited videotape versus direct observation. Journal of Laparoendoscopic & Advanced Surgical Techniques - Part A 2000; 10(4): 183-190.
NS = not significant
Appendix C.1 Study design tables – Video trainer training vs. no simulator training
Scott et al. 1999

Location: Southwestern Centre for Minimally Invasive Surgery, University of Texas Southwestern Medical Centre, Dallas, Texas, USA.

Objective: To develop a model for intense laparoscopic skills training and to determine whether improvement of skill level on a video trainer translates into improved operative performance.

Participants: All participants randomised.

Pre-test: All residents underwent a validated global assessment of their ability to perform laparoscopic cholecystectomy in the operating theatre. All residents were tested on 5 standardised video trainer tasks, based on the time needed to complete each task.

Intervention:
• Training: video trainer; practised 5 established laparoscopic tasks for 30 minutes a day for 10 days. Tasks: bean drop, running string, checkerboard, block move, and suture foam.
• Control: no video trainer training.

Time to assessment: after 10 days of training.

Assessment: Laparoscopic cholecystectomies were conducted and scored on the same global assessment scale, rated on a scale of 1-5 in 8 areas by the same 3 raters. All residents repeated the video trainer test.

Device: video trainer (Karl Storz Endoscopy).

Study design: Randomised controlled trial (level of evidence: II). Method of randomisation: not stated. Allocation concealment: not stated. Blinding: raters of operative procedures blinded. Intention to treat: not stated. Power calculation: not stated. Lost to assessment: not stated. Study period: not stated.

Outcome measures:
• Respect for tissue
• Time and motion
• Instrument handling
• Knowledge of instruments
• Flow of operation
• Knowledge of specific procedure
• Overall performance

Study population: Sample size: n = 22 (training n = 9; control n = 13). Baseline characteristics of participants: not stated. Inclusion: 2nd and 3rd year residents. Exclusion: not stated. Details of patients for live assessment: not stated.
Appendix C.1 Study design tables – AccuTouch® flexible endoscopy simulator vs. no simulator training
Ahlberg et al. 2005

Location: Department of Surgery and Centre for Advanced Medical Simulation, Karolinska Hospital, Stockholm, Sweden.

Objective: To investigate whether use of the AccuTouch® flexible endoscopy simulator improves the early part of the learning curve in colonoscopy training.

Participants: All were given the same theoretical study material, containing a booklet on colonoscopy and a free sample instructive CD on colonoscopy.

Intervention: Participants randomised:
• Training: AccuTouch® flexible endoscopy simulator. An expert level of performance was established by assessing 5 expert colonoscopists. Participants were instructed to train under supervision until reaching criterion level. To reach criterion level in the simulator, participants had to be able to intubate the caecum within 7 minutes without the use of sedation, a 'virtual attending', simulation tips, or the external view. More than 97% of procedure time had to be without patient discomfort, and there had to be no period of severe or extreme discomfort. Navigation to the caecum had to be performed with less than 1500 ml of air insufflated and with less than 15% of procedure time in red-out. Training was initially under supervision, with feedback available at any time.
• Control: no simulator training.

Assessment: All participants performed their first 10 colonoscopic procedures in patients under supervision. Assessing supervisors were instructed not to guide the participant. The colon was divided into 9 consecutive segments, and subjects were given a maximum of 15 minutes to pass each segment, with a maximum overall procedure time of 60 minutes. Supervisors were instructed to take over the procedure if the time limits were reached, if the patient experienced excessive discomfort, or if the colon was poorly prepared. Patient gender and diagnosis were also noted. In addition, a patient form was completed in which the maximum discomfort during the procedure was graded on a visual analogue scale.

Study design: Randomised controlled trial (level of evidence: II). Method of randomisation: not stated. Allocation concealment: sealed envelopes. Blinding: patients and assessing supervisors blinded to training status of participants. Intention to treat: not stated. Power calculation: not stated. Lost to assessment: not stated. Study period: not stated.

Outcome measures:
• Total procedure time
• Segment of colon where investigation stopped
• Reason for stopping (if applicable)
• Analgesic drugs given
• Complications

Study population: Sample size: n = 12 (surgical n = 10; gastroenterological n = 2); training n = 6; control n = 6. Baseline characteristics of participants: 10 men, 2 women; no previous experience in colonoscopy; all had experience in gastroscopy, with a minimum of 20 individually performed procedures. Inclusion: not stated. Exclusion: not stated. Details of patients for live assessment: patients designated to undergo diagnostic colonoscopy, on an all-comer basis, without a history of previous abdominal surgery, were asked to participate.
Appendix C.1 Study design tables – AccuTouch® flexible sigmoidoscopy simulator training vs. no simulator training
Sedlack et al. 2004

Location: Division of Gastroenterology and Hepatology, Mayo Clinic, Rochester, Minnesota, USA.

Objective: To determine whether computer-based endoscopy training results in measurable benefit to trainees' endoscopic performance in flexible sigmoidoscopy, on the basis of staff evaluations of procedural skill and patient evaluations of discomfort.

Intervention: Participants randomised:
• Simulator training: independent 3-hour AccuTouch® flexible sigmoidoscopy simulator-based training curriculum under the supervision of a gastroenterology Fellow. The curriculum entailed a brief multimedia tutorial followed by 8-10 hands-on simulated scenarios.
• No training: no simulator training.

Device: AccuTouch® flexible sigmoidoscopy simulator (Immersion Medical, Gaithersburg, MD; simulator engine 1.1.1). This computer-based simulator consists of a specialised endoscope that is inserted into a computer-based module and provides 6 sigmoidoscopy scenarios of varying complexity. The simulator utilises internal mechanics that provide tactile feedback, as well as a computer-generated voice to simulate patient discomfort.

Study design: Randomised controlled trial (level of evidence: II). Method of randomisation: not stated. Allocation concealment: not stated. Blinding: evaluating staff not blinded to participant training status. Intention to treat: not stated. Power calculation: not stated. Follow-up (years): not stated.

Study population: Sample size: n = 38 (simulator training n = 19; no simulator training n = 19). Baseline characteristics of participants: none had prior endoscopy training. Inclusion: 2nd year internal medicine residents. Exclusion: not stated. Details of patients for live assessment: not stated.
Appendix C.1 Study design tables – 4-day VR laparoscopic cholecystectomy training course vs. no training course
Schijven et al. 2005

Location: IJsselland Hospital, 2900 AR Capelle aan den IJssel, The Netherlands.

Objective: To investigate operating room performance of surgical residents after participating in the Eindhoven virtual reality laparoscopic cholecystectomy training course.

Participants: All attended a Basic Surgical Skills Course before participating in this study.

Intervention:
• Training: trainees participated in a 4-day virtual reality laparoscopic cholecystectomy training course incorporating videos, oral presentations, and interactive sessions. Repetitive training featured the open Xitact LS500 laparoscopic simulator platform, VR simulation (MIST-VR), and procedural laparoscopic cholecystectomy simulation including the clip-and-cut, navigation, and dissection modules (Xitact). On days 2 and 3, course participants attended the operating theatre in conjunction with their VR sessions to act as assistant surgeon or camera assistant on a laparoscopic cholecystectomy performed by an expert surgeon.
• Control: no training course.

Time to assessment: on day 4.

Assessment: On day 4, all participants performed a full laparoscopic cholecystectomy under supervision; the procedure was videotaped. Only the clip-and-cut part of the procedure, the clipping and cutting of the cystic artery and cystic duct, was assessed, starting from the moment the laparoscopic clip applier was introduced and ending at the moment the laparoscopic scissors were removed from the operative field. Video fragments from both the trained and control groups were independently evaluated by 2 laparoscopically engaged surgeons from different academic training hospitals. Participants' video fragments were mixed in random order before being copied to the reviewers' videotape.

Devices: Xitact LS500 laparoscopy simulator platform (Xitact SA, Morges, Switzerland); MIST-VR.

Study design: Non-randomised comparative study (level of evidence: III-2). Blinding: videotape assessors blinded to training status; videotape segments presented in random order. Lost to assessment (technical recording failures): training n = 2; control n = 2. Study period: April 2003 – March 2004.

Outcome measures: structured questionnaire using a 5-point Likert rating scale. Parameters measured:
• Fluency
• Carefulness
• Sumscore: focused on the clip-and-cut part of the procedure; performance judged by integration of psychomotor skills, general procedural knowledge of anatomy, and decision making; constructed primarily from metrics of Xitact's clip-and-cut simulation.
• Judgment
• Time to complete the clip-and-cut procedure

Study population: Sample size: n = 24 (training n = 12; control n = 12).
Baseline characteristics of participants (training vs. control):
• Mean age in both groups: 31 years
• Mean years of training in both groups: 1.8 years
• All participants right-handed
• Sex ratio (M:F): 8:4 vs. 10:2
• No. of previous laparoscopic cholecystectomies, mean (range): 0.3 (0-1) vs. 1.8 (0-3)*
• No. of participants training to be general surgeons: 6 vs. 8.4
• No. of participants who have conducted a similar training course†: 0 vs. 3
*P = 0.008 in favour of the control group.
† Either animal or non-animal training courses.
Inclusion: trainee surgeons and novices who had performed fewer than 4 laparoscopic cholecystectomies. Exclusion: not stated. Details of patients for live assessment: patients selected for assessment were American Society of Anesthesiology class 1, with a history of uncomplicated cholelithiasis and no previous abdominal complications or surgery.
Appendix C.1 Study design tables – Endoscopic sinus surgical simulator training vs. no simulator training
Author Intervention Study Design Study population
Edmond 2002

Location: Madigan Army Medical Centre, the Departments of Surgery and Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, and the Department of Surgery, Tripler Army Medical Centre, Honolulu, Hawaii. USA

Objective: To evaluate an endoscopic sinus surgical simulator (ESS) as a training device and to introduce a methodology to assess its impact on actual operating room performance.

Intervention:
• Training: novice, intermediate and advanced models of the ESS simulator. Performance of each trial was recorded by the system. Hazards were created in all 3 models. Subtask scores were calculated as accuracy (navigation, injection and dissection accuracy) and optimal time/completed time. To normalise across models, optimal times were derived from the performance of 2 expert surgeons on each subtask for each model.
• Control: no simulator training.

Time to assessment: not stated

Assessment:
4 subjects performed a routine endoscopic sinus surgery procedure on a patient. Assessors evaluated participants on a 10 point scale. Procedures were videotaped. A blinded panel of 4 experienced sinus surgeons rated 5 videotapes (4 first-time surgeries and 1 operation performed by an experienced staff surgeon). The panel rated each videotape on the same 10 point scale.

Device:
The ESS simulator incorporates 3 computer systems linked by an Ethernet interface. A Silicon Graphics Inc (SGI) Onyx 2 computer serves as the simulation host platform. It contains the virtual patient model and is responsible for the simulation of the endoscopic image, the surgical interface, and the user interface. The Onyx is configured with two R10000 CPUs, IR graphics hardware, and a soundboard. The second computer, a 333 MHz Pentium PC, is dedicated to controlling the electromechanical hardware. A third computer allows voice recognition and provides virtual instruction while training.

Study design: Non-randomised comparative study
Level of evidence: III-2
Lost to assessment: not stated
Study period: not stated
Blinding: videotape assessors blinded to training status
Outcome measures: performance criteria measured on a 10 point scale:
• Navigation
• Injection
• Uncinectomy
• Anterior ethmoidectomy
• Maxillary antrostomy
• Orientation of video image
• Image-to-task alignment
• Proper depth of image for task
• Tool manipulation
• Tool selection
• Tool-tool dexterity
• Tissue respect
• Surgical confidence
• Case difficulty

Study population:
Sample size: 10
• Training: n = 2
• Control: n = 2
Baseline characteristics of participants:
• 2 subjects had trained on the simulator.
• 2 had no prior simulator experience.
• None had prior operating room sinus surgical experience.
Inclusion: first-year (junior) ear-nose-throat residents
Exclusion: not stated
Details of patients for live assessment: not stated
Appendix C.1 Study design tables – Gastro-Sim® flexible sigmoidoscopy simulator training vs. no simulator training
Author Intervention Study Design Study population
Tuggy 1998

Location: Swedish Family Medicine Residency, Seattle. USA

Objective: To determine whether a virtual reality flexible sigmoidoscopy simulator improves the hand-eye coordination and various performance parameters in a live patient.

All participants: Participants randomised.

Intervention:
• Simulator training: 5 hours training on the Gastro-Sim® flexible sigmoidoscopy simulator. Participants were not given any training or guidance on the skills required for sigmoidoscopy other than what was encountered during the simulation.
• Control: no simulator training or preparation before performing the first live patient examinations.

Time to assessment: not stated

Assessment:
Examinations were performed on 2 live patient volunteers. Before each set of sigmoidoscopies, each patient received a brief examination by a supervising physician to ensure the colon was adequately prepared. The air was then removed from the colon before the study participants performed their examinations. Each matched pair of residents then performed examinations sequentially on the same patient to reduce the risk of encountering different colon structures. A sigmoidoscopist monitored the examinations and inserted or retracted the sigmoidoscope. The examinations were videotaped and used for assessment. The patients completed a pain scale, rated the perceived confidence of the examiner, and evaluated the duration of the examination. All participants completed a survey on the effect of the simulator on their perception of their hand-eye skills.
After the first set of live patient examinations, each of the 5 residents in the control group was allowed access to the simulator and completed 5 hours of training. The experimental group continued to train on the simulator for up to 5 additional hours. Once this training was completed, the matched resident pairs again performed the procedure on the volunteer patients. During this second phase of the trial, the paired residents examined the alternate patient.

Device:
Gastro-Sim® flexible sigmoidoscopy simulator (Interact Medical).

Study design: Randomised controlled trial
Method of randomisation: not stated
Allocation concealment: not stated
Level of evidence: II
Blinding: patient blinded to experience and training status of participant
Intention to treat: not stated
Power calculation: not stated
Lost to assessment: not stated
Study period: not stated
Outcome measures:
• Time to reach 30 cm, 40 cm, and maximal insertion
• Total time of examination
• Total time in red-out
• Quality of visualisation of the colon walls
• Estimated percentage of the colon visualised
• Hand-eye skills assessed by the number of directional errors made during the procedure

Study population:
Sample size: n = 10
• Simulator training: n = 5
• Control: n = 5
Baseline characteristics of participants: not stated
Inclusion: residents with no experience in flexible sigmoidoscopy
Exclusion: not stated
Details of patients for live assessment: 2 healthy men aged 25 – 35 who were compensated for their participation in the study.
Appendix C.1 Study design tables – Simbionix GI Mentor™ sigmoidoscopy simulator training vs. no simulator training
Author Intervention Study Design Study population
Cohen et al. 2006b

Location: New York NY, Charleston SC, Dallas Texas. USA

Objective: To determine whether a 10 hour structured training program using the GI Mentor™ simulator provides an objective benefit to novice gastroenterology Fellows learning to perform colonoscopy.

All participants:
All participants completed a questionnaire outlining demographics including age, gender, and year of graduation; gastroenterology training program; and the number of flexible sigmoidoscopies performed before the GI Fellowship. All participants attended general lectures on colonoscopy as part of a didactic endoscopy course.

Intervention:
Participants randomised:
• Simulator training: participants were given a supervised orientation to the Simbionix GI Mentor™ simulator. Each hour of training followed a standard protocol of activities. In all, 10 different cases were used during the simulator training. Each participant completed five 2-hour private sessions on the simulator over 8 weeks. Participants kept a log of attempted simulated procedures performed during each 2-hour session and a log of all actual sigmoidoscopies and gastroscopies performed during the study. During the second year of the study, the simulator was able to record performance variables. Participants did not perform any real colonoscopies until they completed all 10 hours on the simulator. Participants filled out a questionnaire at the end of the training regarding the usefulness of training.
• No simulator training (referred to as traditional training in the study, but does not involve a pre-assessment training element): supervised colonoscopies starting approximately 8 weeks into the Fellowship (so that both groups commenced the actual colonoscopies at the same time).

Assessment:
For each procedure, participants were responsible for getting their proctor to fill out an evaluation form. For those cases where the participant could not reach the caecum without assistance, the proctor was asked to indicate whether or not the procedure was difficult for the proctor to complete. Proctors were also asked to rate patient discomfort. Forms were collected each month until the participant reached 200 procedures or until the study period was over.

Device:
Simbionix GI Mentor™ (Simbionix USA Corporation, Gaithersburg, Md)

Study design: Randomised controlled trial
Method of randomisation: a random number table
Allocation concealment: not stated
Level of evidence: II
Blinding: proctors filling out individual evaluation forms were blinded to residents' training status. Participants' names did not appear on study evaluation forms (only a code number). Only the principal investigator had access to the key to the code numbers.
Intention to treat: not stated
Power calculation: Kaplan-Meier curves were generated to determine the number of blocks of 20 cases needed for each group to reach a median of 90% objective and subjective competency.
Lost to assessment: n = 4 (protocol violations during training phase)
Study period: not stated
Outcome measures:
• Ability to reach the transverse colon and caecum without assistance
• Ability to correctly recognise and identify abnormalities
• Overall subjective rating of competency on a 5 point scale

Study population:
Sample size: n = 49
• Simulator training: n = 22
• No training: n = 23
Baseline characteristics of participants:
Precolonoscopy            Simulator    No          Total
experience*               training     training
Mean gastroscopies        67           80          147
Mean flexible
sigmoidoscopies           4            5           9
Total                     71           85          156
*No significant differences between the 2 groups
Inclusion: the participant's training director had to agree to adhere to the protocol and to delay any first year performance of colonoscopy for the first 8 weeks of the Fellowship
Exclusion: previous formal training in colonoscopy (> 10 cases) and an inability to comply with the training schedule
Details of patients for live assessment: not stated
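The Kaplan-Meier analysis described under 'Power calculation' estimates, block by block, the fraction of Fellows who have not yet reached competency. A minimal product-limit estimator can be sketched as follows; this is a generic illustration with invented data, not the authors' analysis code, and the function name and inputs are hypothetical.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates at each distinct event time.

    times:  time (here, the block of 20 cases) at which each subject
            was last observed
    events: 1 if the endpoint (reaching competency) occurred at that
            time, 0 if the subject was censored
    Returns a list of (time, survival_probability) pairs, where
    'survival' means 'not yet competent'.
    """
    surv = 1.0
    out = []
    for t in sorted(set(times)):
        # d = endpoints at time t; n = subjects still at risk at time t
        d = sum(1 for ti, e in zip(times, events) if ti == t and e)
        n = sum(1 for ti in times if ti >= t)
        if d:
            surv *= 1 - d / n
            out.append((t, surv))
    return out
```

For example, `kaplan_meier([1, 2, 2, 3], [1, 1, 0, 1])` steps the estimate down at blocks 1, 2 and 3, with the censored subject contributing to the risk set at block 2 but not counting as an event.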
Appendix C.2 Study design tables – Virtual reality sigmoidoscopy simulator training vs. patient-based training
Author Intervention Study Design Study population
Gerson and van Dam 2003

Location: Division of Gastroenterology and Hepatology, Stanford University School of Medicine, Stanford, California. USA

Objective: To compare patient-based teaching of sigmoidoscopy with that provided by a virtual reality endoscopy simulator.

Participants:
All participants completed a questionnaire which inquired about age, gender, amount of training in gastroenterology, and prior experience with computers.

Intervention:
• Simulator training: participants were permitted unlimited time and access to the VR sigmoidoscopy simulator. Residents were instructed to review all didactic modules and complete all 6 cases prior to the test cases. Residents were allowed unlimited time on the simulator during the 2 week period prior to the scheduled test cases. Participants were not allowed to view live cases as part of their training prior to using the simulator. Performance on the simulator was not observed and coaching was not provided.
• Patient-based training: participants were required to perform 10 sigmoidoscopic examinations with an attending gastroenterologist. Participants were trained with a video endoscope. The attending physician was instructed to teach the residents using his or her own teaching preferences and techniques. Residents were expected to learn how to advance the colonoscope independently by the end of the 10 sessions. The teaching sessions were scheduled to occur during a consecutive 2 week period.

Time to assessment: two weeks after commencement of training.

Assessment:
All residents performed 5 test examinations under the supervision and evaluation of an attending gastroenterologist. Residents were instructed to complete the examination to the splenic flexure and to notify the attending physician when the flexure was identified. Residents were also required to notify the attending physician if any pathology was encountered during the examination. Residents were expected to perform retroflexion at the completion of the sigmoidoscopy. If the participant encountered difficulties, the attending physician was allowed to take over until the resident could continue. The attending physician used a standardised form for data collection. Patients were interviewed by a medical assistant at the completion of the sigmoidoscopy using a standard questionnaire.

Device:
Virtual reality sigmoidoscopy simulator (Immersion Medical Inc., Gaithersburg, Maryland, USA).

Study design: Randomised controlled trial
Method of randomisation: sequentially allocated by investigator
Allocation concealment: none
Level of evidence: II
Intention to treat: not stated
Power calculation: using a comparison of the means of 2 independent samples, the sample size required to detect a magnitude difference between arms of 30%, with a power of 90% and an alpha of 0.05, was calculated to be 30 examinations in each arm of the study (assuming 5% of examinations would not be completed, and another 5% would be lost due to patient intolerance, 33 patients would need to be recruited per arm).
Blinding: neither the investigators nor the participants were blinded to the group assignment. Participating patients were blinded to residents' training status. Assessing physicians were not blinded.
Lost to assessment: none
Study period: not stated
Outcome measures: standardised form for data collection:
• Examination duration and extent
• Splenic flexure recognition
• Ability to recognise pathology
• Completion of retroflexion

Study population:
Sample size: n = 16
• Simulator training: n = 9
• Patient-based training: n = 7
Baseline characteristics of participants:
                              Simulation    Patient-based
                              training      training
Internal medicine
residents (n)                 9             7
Mean resident age
± SEM (yrs)                   28 ± 0.8      29.4 ± 1.1
10 residents had extensive experience with the use of computers, and the remaining 6 described themselves as 'somewhat experienced'.
Inclusion: internal medicine residents
Exclusion: participants with prior experience with flexible sigmoidoscopy, observation of sigmoidoscopy as part of a rotation, or prior use of an endoscopic simulator.
Details of patients for live assessment: asymptomatic patients referred for routine colorectal cancer screening via flexible sigmoidoscopy were asked to participate.
                              Simulation    Patient-based
                              training      training
Patient mean age
± SEM (yrs)                   54 ± 1.4      56 ± 1.5
Patient gender
(M/F, % male)                 10/34 (29%)   16/32 (50%)
SEM Standard error of the mean
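The sample size reported in the power calculation above can be reproduced with the usual normal-approximation formula for comparing two independent means. The effect size of 0.85 in the usage line is an assumed standardised value chosen for illustration (the study reports the target difference only as 30%), not a figure taken from the paper.

```python
from math import ceil, sqrt, erf

def z_quantile(p):
    """Inverse standard normal CDF via bisection on erf (stdlib only)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * (1 + erf(mid / sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def n_per_arm(effect_size, alpha=0.05, power=0.90):
    """Normal-approximation sample size per arm for a two-sample
    comparison of means (two-sided test)."""
    z_a = z_quantile(1 - alpha / 2)   # critical value for alpha
    z_b = z_quantile(power)           # critical value for power
    return ceil(2 * ((z_a + z_b) / effect_size) ** 2)

n_per_arm(0.85)  # assumed effect size; gives 30 per arm at 90% power
```

The recruited total per arm would then be inflated for the expected 5% incomplete examinations and 5% patient intolerance, as the study describes.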
APPENDIX D – RESULTS TABLES
MIST-VR simulator training vs. no simulator training for laparoscopic cholecystectomy (Grantcharov et al. 2004)
Table D.1. Performance of laparoscopic cholecystectomy between VR trained and non-VR trained participants
Participants who received VR training performed laparoscopic cholecystectomy significantly faster than those in the control group (p = 0.021, t test).
The VR trained group showed significantly greater improvement in their error (p = 0.003) and economy of movement (p = 0.003) scores.
SCMIS GEM video trainer vs. no simulator training for laparoscopic cholecystectomy (Scott et al. 2000)
Table D.2. Baseline video-trainer scores: time (seconds) for task completion
Scott et al. 2000      No training (n = 13)    Simulator training (n = 9)    P-value*
Checker-board 144 (122 - 152) 146 (126 - 187) 0.616
Bean drop 58 (48 - 69) 53 (45 - 66) 0.471
Running string 62 (46 - 74) 74 (67 - 84) 0.096
Block move 50 (38 - 55) 48 (39 - 58) 0.815
Suture foam 58 (49 - 90) 56 (42 - 94) 0.695
Values are medians with 25th – 75th percentiles in parentheses.
*Trained versus control groups, one-tailed Wilcoxon rank-sum test.
Table D.4. Adjusted improvement* in video-trainer scores: time (seconds) for completion of task
Scott et al. 2000      No training (n = 13)    Simulator training (n = 9)    P-value†
Checker-board 31 (-1 – 37) 63 (23 - 75) 0.014
Bean drop 14 (10 - 18) 24 (18 - 26) 0.002
Running string 3 (-13 - 16) 26 (21 - 38) 0.001
Block move 9 (-2 - 14) 22 (11 - 25) 0.015
Suture foam 26 (18 - 38) 48 (44 - 50) 0.001
Values are medians with 25th – 75th percentiles in parentheses.
*Improvement defined as post-training minus baseline scores, calculated individually for each participant, adjusted by linear analysis of covariance for differences in baseline scores.
†Trained versus control groups, one-tailed Wilcoxon rank-sum test.
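The 'trained versus control' comparisons in Tables D.2 and D.4 use a one-tailed Wilcoxon rank-sum (Mann-Whitney) test. A minimal normal-approximation sketch of that test type is below; it is a generic implementation (no tie correction in the variance, unlike production statistical packages), and the data in the usage line are invented for illustration, not taken from the study.

```python
from math import sqrt, erf

def midranks(values):
    """Ranks 1..n, with tied values sharing the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def rank_sum_p(x, y):
    """One-tailed Wilcoxon rank-sum p-value, normal approximation.

    Tests whether values in x tend to exceed values in y.
    """
    n1, n2 = len(x), len(y)
    ranks = midranks(list(x) + list(y))
    u = sum(ranks[:n1]) - n1 * (n1 + 1) / 2   # Mann-Whitney U for x
    mu = n1 * n2 / 2
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))   # upper-tail probability

rank_sum_p([10, 11, 12, 13, 14], [1, 2, 3, 4, 5])  # clearly separated groups: small p
```

Note the paper additionally adjusts the improvement scores by analysis of covariance before ranking; that step is not shown here.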
All 9 residents in the training group completed 10 practice sessions lasting 30 minutes.
On average, residents practised 138 video trainer tasks (range 94 – 171). Each of the 5 tasks was practised 28 times (range 19 – 34).
Laparoscopic experience in the operating room during the study interval was comparable for both groups (p = 0.612).
The trained group had significantly larger median time reductions for all five video trainer tasks compared with the control group.
On global assessment, the trained group had significantly larger median increases in four of eight performance criteria, compared with the control group.
When initially asked if they felt comfortable with their laparoscopic skills, 3 of 13 control residents and 5 of 9 trained residents replied ‘yes’.
On the completion questionnaire, 6 of 13 control residents and 8 of 9 trained residents felt comfortable with their laparoscopic skills at the end of the rotation.
Of those who were not comfortable with their laparoscopic skills at baseline, 3 of 10 in the control group were comfortable at the end of the rotation, in contrast to 3 of 4 in the trained group (p = 0.175). After
training, 9 of 9 residents improved their hand-eye coordination and 8 of 9 felt that the training had improved their skills in the operating room.
Table D.6. Participant self-assessment improvement after simulator training
SCMIS GEM video trainer vs. no video-trainer training for laparoscopic cholecystectomy (Scott et al. 1999)
Table D.7. Video trainer skills: time (seconds) for task completion after training
Scott et al. 1999      No simulator training (mean ± SD) (n = 13)    Simulator training (mean ± SD) (n = 9)    P-value
Checker-board 125 ± 31 94 ± 27 < 0.05
Bean drop 45 ± 11 34 ± 7 < 0.05
Running string 63 ± 23 46 ± 14 < 0.05
Block move 42 ± 15 28 ± 7 < 0.05
Suture foam 42 ± 15 20 ± 6 < 0.05
Values obtained at repeat examination after training.
There were no significant differences in any of the initial battery of assessment tests noted between the groups.
All residents in both groups successfully completed the dissection of the gallbladder from the liver bed.
All residents in the VR group successfully achieved the required criterion levels of performance in 3 – 8 training sessions.
The duration of the dissection for the VR trained group was 29% less than the standard training group (p = NS).
4-day VR laparoscopic cholecystectomy training course vs. no training course for laparoscopic cholecystectomy (Schijven et al. 2005)
The experimental and control groups did not differ in demographic parameters, except for the number of previous laparoscopic cholecystectomies, which favoured the control group (p = 0.008).
Both observers judged that there was a significant improvement for ‘judgment’ (Observer 1, p = 0.004 and Observer 2, p = 0.013).
Observer 1 found a significant improvement for ‘fluency’ (p = 0.0037).
AccuTouch® colonoscopy simulator training vs. no simulator training for colonoscopy (Ahlberg et al. 2005)
Median total training time per resident was 20 hours (range 15 – 25). Training sessions lasted 1 – 2 hours on each occasion, and could be repeated several times per day and continued over at least 4
days. After completion of the training, participants had to do their colonoscopies within 1 week.
Trainees in the control group started after studying the theoretical material.
A successful colonoscopy was defined as intubation of the caecum within given time limits.
The success rate was 52% in the simulator trained group compared with 19% in controls, which was a significant difference (p = 0.0011). See Table.
There was a significant difference (p = 0.008) in procedure time in favour of the simulator trained group. The time to reach the caecum in successful cases was a median of 30 minutes (interquartile range
(IQR) 17 – 38 minutes) in the trained group compared with 40 minutes (IQR 25 – 45 minutes) in controls.
There was a significant learning curve (p = 0.039) for all participants during the study, and the likelihood of a participant reaching the caecum was 2.57 times higher for procedures 6 – 10 compared with
procedures 1 – 5 in both test groups.
Patient gender significantly (p = 0.016) affected success rate. It was 3 times more likely that a colonoscopy would be successful in a male patient than a female patient.
The number of previously performed gastroscopies influenced success rate (p = 0.006) equally in both groups. It was 3.76 times more likely that a participant with a previous experience exceeding 50
gastroscopies would succeed with colonoscopy.
The post graduate year of the participant did not significantly influence the level of success rate.
Significantly less patient discomfort was reported in the simulator trained group (median 4, IQR 2.5 – 6) compared with controls (median 5, IQR 4 – 7) (p = 0.02). In addition it was noted that male patients
reported less pain (p = 0.001) compared with female patients.
AccuTouch® colonoscopy simulator training vs. no simulator training for colonoscopy (Sedlack and Kolars 2004)
Table D.11. Median Performance Scores (25–75% IQR) Rendered by Supervising Faculty are Shown for Each Parameter
Sedlack and Kolars 2004 Colon 1–15 Colon 16–30 Colon 31–45 Colon 46–60
N = Number of faculty evaluations recorded in each group
Simulator 60 58 58 60
Regular 60 59 60 60
Time to reach maximum insertion (minutes)
Simulator 23 (19–30) 21 (19–28) 21 (18–28) 17 (12–25)
Regular 25 (20–30) 22 (15–30) 20 (15–30) 20 (15–27)
P-value 0.155 0.947 0.321 0.090
Depth of unassisted insertion (1 = rectum, 6 = terminal ileum)
Simulator 4.0 (3.0–5.0) 5.0 (4.0–5.0) 4.5 (4.0–5.0) 5.0 (4.0–5.0)
Regular 3.0 (2.0–4.0) 4.0 (3.0–5.0) 4.5 (3.0–5.0) 5.0 (4.0–5.0)
P-value 0.003 0.006 0.905 0.085
Percentage completed independently by the fellow (%)
Simulator 38% (27–50) 59% (46–71) 50% (37–63) 72% (59–84)
Regular 20% (9–31) 34% (21–46) 50% (37–63) 58% (46–71)
P-value 0.027 0.007 1.000 0.128
Identifies landmarks (1 = strongly disagree, 4 = neutral, 7 = strongly agree)
Simulator 6.0 (6.0–7.0) 6.0 (6.0–7.0) 6.0 (6.0–7.0) 7.0 (6.0–7.0)
Regular 6.0 (5.0–7.0) 6.0 (5.0–7.0) 6.0 (6.0–7.0) 7.0 (5.5–7.0)
P-value 0.041 0.044 0.166 0.439
Inserts in a safe manner (1–7)
Simulator 7.0 (6.0–7.0) 6.0 (6.0–7.0) 6.0 (6.0–7.0) 7.0 (6.0–7.0)
Regular 6.0 (6.0–7.0) 6.0 (6.0–7.0) 6.0 (6.0–7.0) 7.0 (6.0–7.0)
P-value 0.008 0.274 0.256 0.559
Adequately visualises mucosa on withdrawal (1–7)
Simulator 6.0 (6.0–7.0) 6.0 (6.0–7.0) 6.0 (6.0–7.0) 7.0 (6.0–7.0)
Regular 6.0 (5.0–7.0) 6.0 (5.0–7.0) 6.0 (5.5–7.0) 6.0 (6.0–7.0)
P-value 0.009 0.396 0.230 0.518
Responds appropriately to patient discomfort (1–7)
Simulator 6.5 (6.0–7.0) 6.0 (5.8–7.0) 6.0 (5.0–7.0) 7.0 (6.0–7.0)
Regular 6.0 (5.3–7.0) 6.0 (6.0–7.0) 6.0 (6.0–7.0) 7.0 (5.0–7.0)
P-value 0.019 0.560 0.137 0.771
Values shown for "Percentage of endoscopies completed independently" are averages (95% CI).
Columns represent the chronological order of procedures being compared in blocks of 15.
Table D.12. Median scores (25–75% IQR) of patient discomfort
Sedlack and Kolars 2004 Colon 1–15 Colon 16–30 Colon 31–45 Colon 46–60
N = Number of patient evaluations recorded in each group
Simulator 60 58 58 60
Regular 60 59 60 60
Patient reported pain score: (1 = no pain, 10 = worst pain of life)
Simulator 2.0 (1.0–4.0) 2.0 (1.0–4.0) 2.0 (1.0–4.0) 1.5 (1.0–4.0)
Regular 4.0 (1.5–5.0) 2.0 (1.0–4.0) 2.5 (1.0–4.3) 2.0 (1.0–3.0)
P-value 0.019 0.343 0.531 0.731
There appeared to be a significant reduction in pain scores during the first 15 colonoscopies performed by each simulator trained fellow. The absence of sedation data, however, makes these results difficult
to interpret.
The 4 simulator trained participants completed an average of 21 simulated cases (range 19 – 26) prior to their patient based experience.
The 4 simulator trained participants performed an average of 117 (range 62 – 195) patient based colonoscopies over an average of 7 weeks (range 4 – 8) of training.
The 4 traditionally trained participants completed an average of 108 (range 76 – 157) patient based colonoscopies over an average of 7 weeks (range 4 – 8) of training.
An analysis of procedures broken down in chronological groups of 15 demonstrates that simulator trained fellows scored significantly better in all parameters during the first 15 colonoscopies (p < 0.05)
performed with the exception of ‘time to reach maximum insertion’. The simulation trained fellows inserted the endoscope further with a median depth score of 4.0 vs 3.0 (p = 0.003) and reached the
caecum independently in 38% of procedures vs 20% (p = 0.027) during this initial training period. Simulator trained fellows continued to have a significantly greater ‘depth of insertion’, higher percentage of
‘independently completed procedures’ and greater ability to ‘identify landmarks’ throughout the first 30 colonoscopies (p < 0.05). Beyond 30 colonoscopies, none of the parameters showed a statistical
difference between groups.
Patient surveys also demonstrated a lower median discomfort score during the first 15 colonoscopies performed by the simulator-trained fellows; with colonoscopies by CBCS-trained fellows achieving a
median discomfort score of 2.0 (range 1–8) vs. 4.0 (range 1–7) for traditionally trained fellows (p= 0.019). The degree of sedation during each procedure was not assessed, therefore, no comparisons of the
adequacy of sedation could be made between the 2 groups nor could a correlation be made of sedation levels and individual patient discomfort scores.
During the 2 half-days of the CBCS training, faculty without an accompanying fellow were able to complete an average of 8 colonoscopies (range 7–9) per half-day while simulator fellows worked with the
CBCS. Faculty of the traditional fellow group completed an average of 3.5 procedures (range 2–4) per half-day during the same initial training interval. This allowed an average of 9 additional colonoscopies
to be performed by the staff endoscopists for each fellow trained with this CBCS curriculum. There was no significant difference between supervising staff volumes once CBCS fellows began patient-based
colonoscopy.
AccuTouch® flexible sigmoidoscopy simulator training vs. no simulator training for sigmoidoscopy (Sedlack et al. 2004)
The 19 simulator trained residents performed an average of 9 simulated procedures (range 6 – 11) followed by an average of 11 patient based procedures each (range 7 – 19), for a total of 212 patient based procedures. Of the simulator trained residents' patients, 150 (71%) agreed to complete surveys.
The residents who had no simulator training performed an average of 12 procedures each (range 7 – 17), for a total of 230 patient based procedures.
Of the non-simulator trained residents' patients, 175 (76%) completed surveys.
During the same period, staff endoscopists completed 780 procedures with 585 (75%) patient surveys returned.
No adverse events were reported with any of the procedures during this study.
Analysis of patient surveys demonstrated that median patient reported discomfort scores were significantly lower for simulator trained residents than for traditionally trained residents, 3 (IQR, 2 – 5) vs 4 (IQR, 2 – 6) (p < 0.01).
An increase in staff productivity was shown as a result of simulator training. During the simulator training interval, non-teaching staff performed an average of 7 procedures per half day while operating
independently compared with an average of less than 3 procedures during direct patient-based teaching. The 3 hour simulator training session allowed an average of 4 additional procedures to be
performed by staff for each participant trained via the simulator.
Simbionix GI Mentor™ simulator training vs. no simulator training for colonoscopy (Cohen et al. 2006b)
Table D.14. Comparison between simulator and no-simulator groups in objective and subjective competence, and patient discomfort
Cohen et al. 2006b Objective* competence Subjective† competence Patient discomfort‡
Simulator No training P-value Simulator No training P-value Simulator No training P-value
(n = 23) (n = 22) (t-test) (n = 23) (n = 22) (t-test) (n = 23) (n = 22) (t-test)
Mean Mean Mean Mean Mean Mean
Session 1 50.4 40.9 0.06 47.6 36.6 0.08 25.7 31.4 0.42
Session 2 64.5 52.0 < 0.0001 68.6 57.4 0.004 23.2 19.1 0.14
Session 3 74.0 62.0 < 0.0001 76.3 68.4 0.005 16.7 19.5 0.22
Session 4 76.7 64.4 < 0.0001 78.0 75.4 0.32 16.0 18.2 0.39
Session 5 76.8 70.2 0.03 81.3 79.4 0.28 16.7 16.5 0.94
Session 6 77.8 77.6 0.91 82.0 82.3 0.88 13.4 13.9 0.85
Session 7 80.8 80.5 0.89 86.1 84.1 0.32 11.9 11.3 0.74
Session 8 89.5 83.7 0.01 88.8 86.4 0.11 10.5 10.4 0.99
Session 9 87.8 85.2 0.02 88.9 86.8 0.32 10.7 11.8 0.55
Session 10 92.7 90.9 0.04 90.8 90.5 0.82 8.9 9.2 0.81
* Objective competency is the ability to reach the transverse colon and the caecum without assistance, and the ability to correctly recognise and identify abnormalities.
† Subjective competency is on a 5-point scale; 1 (totally unskilled) to 5 (competent and expedient).
‡ Patient discomfort rated on a scale of 1 (very comfortable) to 5 (severe pain to the patient).
There were no significant differences in colonoscopy experience between the two groups.
The respondents rated the overall satisfaction with the simulator training as moderately useful to useful, with a mean score of 3.5 (range, 1 [no use] to 5 [very useful]).
Gastro-Sim® flexible sigmoidoscopy simulator training vs. no simulator training for sigmoidoscopy (Tuggy 1998)
Table D.15. Performance comparisons and quality of 360-degree visualisation technique between control and experimental groups
Tuggy 1998                       No training    5 hrs simulator      P-value    No training    10 hrs simulator     P-value
                                 (n = 5)        training (n = 5)                (n = 5)        training (n = 5)
Time to 30 cm (sec) 357 286 0.52 357 119 0.03
Time to 40 cm (sec) 518 341 0.27 518 211 0.03
Total examination time (sec) 654 530 0.31 654 323 0.01
Directional errors (n) 8.6 2.8 0.01 8.6 1.6 < 0.01
Time in red-out (sec) 70 27 0.16 70 14 0.07
Percentage of colon visualised 45 55 0.60 45 79 0.02
Quality of viewing 360° *        2.4            1.3                  0.05       2.4            1.4                  0.03
* Based on a rating scale of 1 – organised, 2 – adequate, 3 – haphazard.
Table D.16. Resident survey responses to using virtual reality sigmoidoscopy simulator
Tuggy 1998 | Strongly agree (%) | Agree (%) | Neutral (%) | Disagree (%) | Strongly disagree (%)
Simulator resembles live sigmoidoscopy 0 89 11 0 0
Graphics resembled actual colon 22 67 11 0 0
Tactile feedback was similar to colon 0 44 44 11 0
Learned hand-eye skills on simulator 45 55 0 0 0
Learned more with more practice 22 55 11 11 0
Gained confidence for live patient examination 55 45 0 0 0
Likely to perform in practice if simulator available in training 55 33 11 0 0
Tutorial component was helpful 0 33 33 11 22
Enhanced features on simulator would make me skilled in flexible sigmoidoscopy 44 44 11 0 0
Training on the virtual reality simulator produced substantial improvements in examination times and hand-eye skill measures.
After 6 to 10 hours of training on the simulator, the experimental group achieved significantly faster insertion times to 30 cm (p = 0.03), 40 cm (p = 0.03), and a shorter mean length of examination (p = 0.01).
There was also significant improvement of hand-eye skill measures of the experimental group in directional errors (p < 0.01), percentage of colon visualised (p = 0.02), and viewing quality of examination
when compared with the control group's initial performance on live patients.
Resident survey findings after the study confirmed the trainees' perception of the benefit of the simulator training.
There were no observed significant differences in procedure-assessed patient discomfort between the 2 groups at any time during the training.
Procedicus VIST™ simulator training vs. no simulator training for catheter-based intervention for occlusive vascular disease (Chaer et al. 2006)
Table D.17. Mean checklist scores on individual measures of performance for simulator-trained and non-simulator-trained residents
Chaer et al. 2006 | Intervention 1: Simulator training (n = 10), No training (n = 10), P-value | Intervention 2: Simulator training (n = 10), No training (n = 10), P-value
Advance femoral wire 2.4 1.4 NS 2.6 2.0 NS
Advance wire atraumatically 2.6 1.8 0.05 2.8 2.0 0.03
Constantly visualise wire tip 2.9 1.4 0.005 3.1 1.9 0.001
Mount and advance catheter wire 2.9 2.0 0.01 3.1 2.9 NS
Position imaging catheter 2.1 1.2 0.04 2.4 1.8 0.03
Knowledge of anatomy 2.4 1.3 NS 2.5 1.6 0.04
Walk catheter back over wire 2.9 2.0 NS 3.4 2.7 0.05
Advance balloon over wire 3.1 2.2 0.006 3.4 2.6 0.02
Centre balloon over stenosis 3.0 2.0 0.009 2.9 1.9 0.003
Balloon inflation 3.0 2.0 0.003 3.0 1.7 0.003
Balloon pressure 2.6 1.3 0.003 2.3 1.1 0.002
Walk balloon back over wire 3.0 2.2 NS 3.3 2.0 0.006
Image after PTA 2.5 1.8 NS 2.6 1.9 NS
Advance stent over wire 3.0 2.3 NS 3.4 2.2 0.01
Centre stent over stenosis 2.6 2.1 NS 2.9 1.8 0.01
Accurately deploy stent 2.6 1.4 NS 3.0 1.7 0.01
Walk stent shaft out over wire 3.0 2.4 NS 3.3 2.0 0.006
Completion angiogram 2.2 1.9 NS 2.7 1.7 0.04
Overall assessment 50 ± 6 33 ± 9 0.0015 53 ± 6 36 ± 7 0.0006
NS not significant
PTA percutaneous transluminal angioplasty
Table D.18. Mean endovascular global rating scale scores on individual measures of performance for simulator-trained and non-simulator-trained residents
Chaer et al. 2006 | Intervention 1: Simulator training (n = 10), No training (n = 10), P-value | Intervention 2: Simulator training (n = 10), No training (n = 10), P-value
Time and motion 2.3 1.4 NS 2.6 1.7 0.01
Wire and catheter handling 2.8 1.6 0.002 3.0 1.9 0.009
Awareness of wire position 2.6 1.7 0.005 3.0 1.8 0.01
Wire stability 2.6 1.9 NS 3.0 2.1 0.04
Fluoroscopy usage 1.5 1.1 NS 2.0 1.1 0.003
Precision of wire/catheter technique 2.8 1.7 0.03 2.8 1.7 0.005
Flow of operation 2.4 1.4 NS 2.8 1.2 0.002
Knowledge of procedure 2.0 1.4 NS 2.4 1.1 0.005
Quality of final product 3.0 3.0 0.03 3.3 3.2 NS
Ability to complete the case 2.4 1.4 0.03 2.6 1.4 0.01
Need for verbal prompts 2.3 1.0 0.03 2.4 1.4 0.01
Attending takeover 2.6 1.4 0.003 2.9 1.7 0.006
Overall assessment 30 ± 7 19 ± 5 0.0052 33 ± 6 21 ± 6 0.0015
NS not significant
There were no pre-study differences between the training and control groups. Both groups were comparable in terms of their age and gender as well as previous experiences that might be relevant to a
resident’s ability to assimilate catheter techniques. Performance on the visuospatial test was not different between the 2 groups (p = not significant).
All residents in the training group were able to complete the simulator training session in less than 2 hours (mean of 90 ± 21 minutes).
There were no intraoperative complications that developed as a result of resident participation in the study. There were no differences in peri-operative morbidity or mortality between patients treated by the
2 groups.
The trained group scored higher than controls during the first (50 ± 6 vs 33 ± 9, p = 0.0015) and second (53 ± 6 vs 36 ± 7, p = 0.0006) endovascular intervention. For the first intervention, training led to a
numerical enhancement of each of the individual measures of performance (not all statistically significant). For the second intervention, the trained group was significantly better in all but 3 variables. There
were 2 procedural steps where a significant difference was not found with either the first or the second intervention (advance femoral wire and image after percutaneous angioplasty). The first is not well
taught by a simulator. The second measures the ability of a resident to remember to perform a completion angiogram after angioplasty. Resident performance did not improve from the first to the second
intervention.
A more subjective evaluation of resident performance was also conducted by attending surgeons. Trained participants scored higher overall on the global rating scale of endovascular performance for the first (30 ± 7 vs 19 ± 5, p = 0.0052) and second (33 ± 6 vs 21 ± 6, p = 0.0015) intervention. For both procedures, training led to numerical enhancement in all the individual measures of performance. For the first intervention, 4 of these were not statistically significant and 1 was not statistically significant for the second intervention. Resident performance did not improve from the first to the second intervention.
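The headline comparisons above are reported only as mean ± SD per group. As an illustrative sanity check (this is a sketch, not the study's own analysis, and the paper's exact test for these scores is not stated here), a pooled two-sample t statistic can be recomputed from the Table D.17 summary statistics:

```python
import math

def pooled_t_from_stats(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample pooled-variance t statistic computed from summary statistics."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se

# Table D.17 overall checklist scores, first intervention:
# simulator-trained 50 +/- 6 vs untrained 33 +/- 9, n = 10 per group
t = pooled_t_from_stats(50, 6, 10, 33, 9, 10)
# |t| ~ 4.97 on 18 degrees of freedom, well beyond the two-tailed
# critical value of ~2.88 for p = 0.01, so the direction and magnitude
# are consistent with the reported p = 0.0015
print(round(t, 2))  # 4.97
```

The recomputed statistic comfortably clears conventional significance thresholds, which is consistent with (though not identical to) the reported p-value, since the authors may have used a different test.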
Table D.19. Composite score and individual categories comprising composite score of global assessment tool
Hamilton et al. 2001
Individual assessment parameter | Simulator training (n = 10): Baseline mean ± SD, Final mean ± SD | No training (n = 11): Baseline mean ± SD, Final mean ± SD
Respect for tissue 3.1 ± 1.1 3.5 ± 0.9 2.7 ± 0.9 3.1 ± 1.0
Time and motion 2.7 ± 1.1‡ 3.5 ± 0.7*§ 1.8 ± 0.6 2.4 ± 0.9*
Instrument handling 2.9 ± 1.0 3.7 ± 0.7†║ 2.1 ± 0.7 3.1 ± 0.9*
Knowledge of instrument 3.0 ± 1.3 3.7 ± 0.8║ 2.9 ± 1.5 2.9 ± 0.4
Flow of operation 2.8 ± 1.0 3.7 ± 0.7*§ 2.1 ± 0.8 2.6 ± 1.1
Use of assistants 2.5 ± 1.3 3.5 ± 1.1§ 1.6 ± 0.7 2.1 ± 1.1
Knowledge of procedures 2.5 ± 0.9 3.8 ± 0.9*§ 2.0 ± 0.8 2.6 ± 0.9*
Overall performance 2.4 ± 0.8 3.6 ± 0.7*§ 1.9 ± 0.7 2.4 ± 0.9*
Composite score (%) 44.6 ± 24.6 65.7 ± 17.5* 29.6 ± 15.7 41.0 ± 23.5*
Between group comparisons made using Wilcoxon rank sum test. Within group comparisons made using Wilcoxon signed rank test.
* Significant difference between pre-training and post training scores at p < 0.05.
† Significant difference between pre-training and post training scores at p = 0.05.
‡ Significant difference between trained and untrained groups prior to training at p < 0.05.
§ Significant difference between trained and untrained groups after training at p < 0.05.
║ Significant difference between trained and untrained groups after training at p = 0.05.
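The between-group comparisons in Table D.19 use the Wilcoxon rank sum test, which is equivalent to the Mann-Whitney U statistic: a count, over all cross-group pairs, of how often one group's score exceeds the other's. A minimal sketch of that computation, on hypothetical scores (illustrative only, not data from Hamilton et al. 2001):

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic (equivalent to the Wilcoxon rank-sum test):
    counts, over all pairs, how often an observation from xs exceeds one
    from ys; ties contribute one half."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical post-training overall-performance scores (not study data)
trained = [3.6, 3.4, 3.8, 3.1, 3.5]
untrained = [2.4, 2.9, 2.1, 2.6, 3.2]
u = mann_whitney_u(trained, untrained)
print(u)  # 24.0 of a maximum 25: near-complete separation of the groups
```

In practice a library routine such as scipy.stats.mannwhitneyu (or scipy.stats.wilcoxon for the paired, within-group comparisons) would convert U into the p-values reported in the table.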
Questionnaire data responses were similar for the control and trained groups at baseline (p = NS). After training, 10 of 10 residents in the trained group felt their ability to perform a laparoscopic TEP hernia repair improved over the study period compared with 5 of 11 in the control group (100% vs. 45.5%, p = 0.01). Additionally, all the residents in the trained group (100%) reported that their understanding of the operation improved over the month compared with 5 of 11 residents (45.5%) in the control group (p = 0.01). Compared with controls, residents in the trained group also expressed an increased willingness to offer laparoscopic TEP hernia repairs to patients with concurrent, unilateral hernias (p = 0.02).
Endoscopic sinus surgery simulator training vs. no simulator training for endoscopic sinus surgery (Edmond 2002)
Table D.20. Mean ratings for residents with and without prior simulator training across videotape rating criteria for their first operating room procedure
Edmond 2002 | No training (n = 2), mean ± SD | Simulator training (n = 2), mean ± SD | P-value
Navigation 7.7 ± 0 6.5 ± 1.2 0.40
Injection 2.8 ± 1.6 6.8 ± 1.6 0.14
Uncinectomy 2.83 ± 1.2 6.7 ± 0.5 0.15
Anterior ethmoidectomy 3.3 ± 0.9 7 ± 0.9 0.06
Maxillary antrostomy 3.7 ± 0.5 6.3 ± 0.9 0.17
Orientation of image 4.8 ± 0.2 6.8 ± 1.6 0.34
Image-task alignment 4.8 ± 0.2 7 ± 1.9 0.35
Proper depth of image 5.3 ± 0.5 6.8 ± 1.2 0.34
Tool manipulation 3 ± 0 7 ± 0.9 0.11
Tool selection 3.8 ± 0.2 6.8 ± 1.6 0.24
Tool-tool dexterity 3.5 ± 1.2 6.5 ± 2.1 0.33
Tissue respect 2.8 ± 0.2 5.5 ± 3.1 0.44
Surgical confidence 2.8 ± 0.2 6.5 ± 0.7 0.09
Case difficulty 5 ± 0 7.2 ± 0.7 0.14
Overall mean rating 4.0 ± 0.067 6.7 ± 1.1 0.19
Four participants took part: 2 had extensive simulator experience and 2 had none. None had prior operating room sinus surgery experience.
While the 2 simulation-trained residents were rated consistently better than the other 2 residents across all measures, these differences approached statistical significance (at the p < 0.05 criterion) for only two items, most likely as a result of the small number of subjects: anterior ethmoidectomy (p = 0.06) and surgical confidence (p = 0.09).
Simulator experience could be a strong predictor of first-time operating room performance as determined by rating videos. This result approaches but does not achieve significance (r = 0.911, p < 0.1).
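The significance of that correlation can be checked analytically. With n = 4 subjects there are only 2 degrees of freedom, for which the Student-t CDF has a closed form; the sketch below assumes the standard t-test for Pearson's r, which the study does not spell out:

```python
import math

def corr_pvalue_df2(r, n):
    """Two-sided p-value for Pearson's r when n = 4 (df = n - 2 = 2),
    using the closed-form Student-t CDF for 2 degrees of freedom."""
    assert n == 4  # the closed form below is specific to df = 2
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r * r)
    # CDF of the t-distribution with 2 df:
    # F(t) = 1/2 + t / (2*sqrt(2)*sqrt(1 + t^2/2))
    cdf = 0.5 + t / (2 * math.sqrt(2) * math.sqrt(1 + t * t / 2))
    return 2 * (1 - cdf)

p = corr_pvalue_df2(0.911, 4)
print(round(p, 3))  # 0.089
```

For r = 0.911 this gives p of roughly 0.089, consistent with the text's "approaches but does not achieve significance (p < 0.1)": even a very strong correlation cannot reach p < 0.05 with only four subjects.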
Appendix D – Results tables
Simulation training vs. patient-based training
Sigmoidoscopy simulator training vs. patient-based training for sigmoidoscopy (Gerson and van Dam 2003)
An analysis of the results according to the patient's gender was carried out. There was no significant difference between the mean score when the examination was performed in female patients (3.3 ± 0.8) compared to male patients (3.4 ± 1.8; p = 0.7). The mean scores for each group for examinations performed in female patients did not differ significantly from the overall scores.
No significant differences in patient discomfort were found between the two arms of the study.
Residents in the simulator trained group were asked to rate the simulator and to provide feedback about its performance after they performed the test cases. On average, residents spent over 2 hours on the simulator performing the practice cases (excluding the time spent watching the educational videos, which was not assessed). The average distance obtained with the simulator was 42 cm, demonstrating that most residents were unable to complete the simulated cases successfully to the level of the splenic flexure.
In order to determine whether residents’ performance improved on the simulator over time, data regarding procedure time, insertion length and retroflexion ability was calculated for each resident during the
first 3 cases performed on the simulator compared with the last 3 cases on the simulator. None of the parameters significantly improved over time. No association was found between time spent on the
simulator and performance during the test cases.
At the end of the test cases, the simulator trained residents were asked to rate their training experience. In general, the residents trained on the simulator felt that the educational videos and teaching cases were useful. However, all the residents stated that the test cases were more difficult than the simulated examinations. Specific critiques of the simulator included the fact that the rectum appeared insufflated upon insertion of the endoscope, and that the negotiation of the rectosigmoid area was not challenging enough during the simulated cases. In addition, the simulator was unable to simulate a realistic sensation of resistance to passage of the sigmoidoscope. Both attending physicians noted considerable anxiety when the residents in the simulator trained group attempted the test cases, although this impression was not quantified.
Appendix E – Summary of simulators and patient-based training
Study Simulator overview
Grantcharov et al. 2004, Schijven et al. 2005, Seymour et al. 2002: The Minimally Invasive Surgical Trainer-Virtual Reality (MIST-VR) system is a low-fidelity virtual reality simulator with combined metrics systems to provide feedback to novice learners during practice. The system trains basic dexterity skills as the student uses real laparoscopic handles that activate virtual instrument tips within the computer. The acquisition of skills on this device relates to generalisation and automation of skills in laparoscopic navigation (Ahlberg et al. 2005; Gallagher et al. 2005).
Scott et al. 2000: The Southwestern Centre for Minimally Invasive Surgery Guided Endoscopic Module (SCMIS GEM) video-trainer is composed of frames supporting traditional laparoscopic video monitors, light sources, and camera systems (Hamilton et al. 2002). The frame forms a box inside which pre-manufactured tasks are performed. Speed is measured by the trainee and is the measure of performance (Hamilton et al. 2002). Video trainers 'shape' performance by training skills of progressive difficulty (Valentine and Rege 2004; Gallagher et al. 2005).
Ahlberg et al. 2005, Sedlack and Kolars 2004, Sedlack et al. 2004: The AccuTouch® simulator system is a full-procedure simulator and includes training for flexible sigmoidoscopy, colonoscopy and bronchoscopy (bronchoscopy was not part of this review). It incorporates a mannequin, force feedback and measurement of performance data. Both diagnostic and therapeutic scenarios are provided, and a number of aids are available (Dunkin et al. 2007).
Chaer et al. 2006: The Procedicus VIST™ simulator is a multimedia device designed to simulate endovascular techniques in a variety of clinical settings. It consists of a three-dimensional representation of the human arterial system, coupled to a haptic module that uses a force feedback system that provides tactile sensory information when the user inserts and manipulates standard angiographic catheters and guide wires (Dayal et al. 2004). Separate devices are attached that simulate the injection of contrast dye, performance of angioplasty, deployment of stents, and performance of fluoroscopy with digital subtraction angiography (Dayal et al. 2004). It is one of the most sophisticated VR simulators in the world (Gallagher et al. 2005).
Cohen et al. 2006b: The Simbionix GI Mentor™ is a real-time interactive computer simulator that replicates both diagnostic and therapeutic procedures (Bar-Meir 2000). It includes a life-sized plastic head and torso with apertures for upper and lower endoscopy (Valentine and Rege 2004). Real-time three-dimensional pictures are generated as an endoscope is passed through the torso (Valentine and Rege 2004). Program software generates force feedback to simulate resistance from touching the bowel wall as the endoscope is passed. A monitor depicts typical findings seen at endoscopy as well as adverse events that must be treated (Valentine and Rege 2004). The simulated procedures look and feel similar to the actual procedures and train tasks that will directly transfer to the performed procedures (Gallagher et al. 2005). The simulator uses a 'fading' training strategy in which major guides and cues are given at the beginning of training and are gradually faded out until the trainee can perform the task without support (Gallagher et al. 2005).
Tuggy 1998: The Gastro-Sim® flexible sigmoidoscopy simulator is no longer commercially available (Gerson 2006) and widespread information does not exist on this simulator (United States Patent 4907973).
Hamilton et al. 2001: The TEP hernia repair simulator is constructed from rubber and depicts a human pelvis. Ports are available for placement of laparoscopes and trocars that permit residents to practise mesh fixations over indirect, direct, or femoral defects (Valentine and Rege 2004).
Edmond 2002: The endoscopic sinus surgery simulator contains a virtual patient that is responsible for the simulation of the endoscopic image, the surgical interface, and the user interface. The system also contains a haptic system, allows voice recognition, and provides virtual instruction while training (Edmond 2002). The simulated procedures look and feel similar to the actual procedures and train tasks that will directly transfer to the performed procedures (Gallagher et al. 2005). The simulator uses a 'fading' training strategy in which major guides and cues are given at the beginning of training and are gradually faded out until the trainee can perform the task without support (Gallagher et al. 2005).
Schijven et al. 2005: The Xitact LS500 is a hybrid simulator that combines a physical object with computer software simulation providing visual images and tactile feedback. It is a modular VR training platform that trains a variety of laparoscopic skills (Schijven and Jakimowicz 2003).
APPENDIX F – SUMMARY OF CRITICAL APPRAISAL
Randomisation | Allocation concealment | Blinding of assessors | Intention to treat | Power calculation | Losses to assessment | Study period | Validated assessment tools | Inclusion criteria | Exclusion criteria | Baseline characteristics
Randomised controlled trials (Level II)
Grantcharov et al. 2004 2 3 3 2 2 3 3 3 3 2 3
Tuggy 1998 2 2 2 2 2 2 2 2 3 2 2