University of Central Florida
STARS
Graduate Thesis and Dissertation 2023-2024
2024
Examining the Impact of a Video Review Guide on Robotic Surgical Skill Improvement
Mary Mansour Soliman
University of Central Florida
Find similar works at: https://stars.library.ucf.edu/etd2023
University of Central Florida Libraries http://library.ucf.edu
This Doctoral Dissertation (Open Access) is brought to you for free and open access by STARS. It has been accepted for inclusion in Graduate Thesis and Dissertation 2023-2024 by an authorized administrator of STARS. For more information, please contact [email protected].
STARS Citation
Soliman, Mary Mansour, "Examining the Impact of a Video Review Guide on Robotic Surgical Skill Improvement" (2024). Graduate Thesis and Dissertation 2023-2024. 299.
https://stars.library.ucf.edu/etd2023/299
EXAMINING THE IMPACT OF A VIDEO REVIEW GUIDE ON ROBOTIC SURGICAL SKILL IMPROVEMENT
by
MARY MANSOUR SOLIMAN
B.A.E. Arizona State University, 2007
M.S.E. Samford University, 2010
A dissertaon submied in paral fulllment of the requirements
for the degree of Doctor of Educaon
in the Department of Learning Sciences and Educaonal Research
in the College of Community Innovaon and Educaon
at the University of Central Florida
Orlando, Florida
Summer Term
2024
Major Professor: Glenda A. Gunter
© 2024 Mary M. Soliman
ABSTRACT
Surgical educaon has the arduous task of providing eecve and ecient methods of surgical skill
acquision and clinical judgment while staying abreast with the latest surgical technologies within an
ever-changing eld. Roboc surgery is one such technology. Many surgeons in pracce today were either
never taught or were not eecvely taught roboc surgery during training, leaving them to navigate the
roboc learning curve and reach mastery independently. This dissertaon examines the impact of a
video review guide on improving roboc surgical skills. Using Kolb’s Experienal Learning Theory as a
framework, the literature review argues that video review can be used as a catalyst for reecon, which
can deepen learning and improve self-assessment. Reecon, however, is not an innate skill but must be
explicitly taught or guided. The researcher argues that a wrien video review guide can help novice
surgeons develop reecve pracce, resulng in improved surgical skills and a shorter roboc learning
curve. A between-group quasi-random experiment was conducted to test this theory. The parcipants
performed a pre-test technical simulaon, conducted an independent video review, and then repeated
the same simulaon as a post-test. The intervenon group received a surgical video review guide created
by the researcher using Gibb’s Reecve Cycle and addional evidence-based strategies during the video
review. The parcipants also completed an exit survey measuring the perceived usefulness of video
review guides. Data analysis found that overall, both groups signicantly improved their surgical skills;
however, there was no stascal dierence between the two groups. The parcipants perceived both
the surgical video review guide and video review guides in general as useful. Implicaons for pracce
and recommendaons for future research were discussed. This research underscores the potenal of
reecve guides as a low-cost and independent method to develop reecve praconers further and
improve surgical pracce.
Trust in the Lord with all your heart and lean not on your own understanding; in all your ways
acknowledge Him, and He shall direct your paths. Proverbs 3:5-6
To my parents, Drs. Marcus and Mervat Mansour, Ph.Ds. It is an honor to follow in your footsteps.
To my sister, Nermine, who taught me the meaning of resilience.
To my kids, Lydia and Silas. No learning curve is longer or more rewarding than that of parenthood.
Thank you for your paence and love as Mommy worked on her big paper.
To my husband, Mark, whose obsession with video review and growth mindset inspired this topic. You
are the greatest blessing in my life, and I thank God every day for you. 143.
ACKNOWLEDGMENTS
First and foremost, I oer praise and thanksgiving to God. He is my source of joy, comfort, and
hope. Without Him, I am nothing; with Him, all things are possible.
I extend my deepest thanks to my chair and advisor, Dr. Glenda Gunter, for her unwavering
support and invaluable guidance. I am also grateful to my committee members for their diverse insights.
Dr. David Boote, thank you for inspiring the initial topic during your class and for your encouragement.
Dr. Joshua Guillemette, your expertise in data analysis demystified complex data for me, for which I am
grateful. Dr. Richard Hartshorne, your prompt and constructive feedback kept me on track. Dr. Gillian
Duncan, your industry knowledge has profoundly shaped my understanding of surgical needs. Thank you
to Intuitive Surgical for providing the resources necessary for data collection.
A special mention goes to my EdD support group: Amanda Rescheke, CJ Roberts, Tori Taylor,
and Vicki Lavendol. Your camaraderie, emotional support, and shared insights were vital to my personal
and academic growth. Thank you for making this journey less daunting and more joyous.
I must also express my gratitude to my extended family: my seesters, my clan, my et al. Thank
you for being my biggest cheerleaders, for your understanding when my research demanded my focus,
and for caring for the kids at a moment’s notice. Your support made this pursuit possible. I am also
indebted to my Coptic family for their genuine love and fellowship. Your prayers and endless
encouragement have been my anchor and refuge. I am forever humbled and blessed to be part of an
incredible community.
Lastly, none of this would have been possible without my subject matter expert, sounding
board, editor-in-chief, best friend, and husband, Mark. Working with you has been the highlight of this
journey. Thank you.
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF TABLES
LIST OF ABBREVIATIONS
CHAPTER ONE: INTRODUCTION
    Background
        Robotic Surgery
            Benefits
            Robotic Learning Curve
            Robotic Skill Acquisition
    Problem Statement
    Significance
    Purpose of the Study
    Research Questions
    Theoretical Framework
    Definitions of Key Terms
CHAPTER TWO: LITERATURE REVIEW
    Reflective Practice
        Background
        Reflection in Healthcare
        Self-Assessment
        Debriefing
        Self-Debriefing
    Video in Surgery
        Instructional Videos
        Video-Based Assessment
        Video Review
            Expert Video Review
            Benchmark Videos
            Video Review Guides
    Perceived Usefulness
    Summary
CHAPTER THREE: METHODOLOGY
    Research Questions
    Research Design
    Sample and Recruitment
        Site One
        Site Two
    Intervention
    Instrumentation
        Demographic Survey
        Robotic Simulator
        Exit Surveys
    Data Collection Procedures
    Data Analysis
    Threats to Validity
    Summary
CHAPTER FOUR: FINDINGS
    Introduction
    Research Questions
    Participants
        Data Cleaning
        Demographics
    Sub-Question One
        Descriptive Statistics
        Tests of Assumptions
        Inferential Statistics
        Additional Insights
        Power Analysis
    Sub-Question Two
        Data Analysis
        Instrument Reliability
    Central Research Question
    Summary
CHAPTER FIVE: DISCUSSION
    Introduction
    Summary of the Study
    Discussion of Sub-Question One
    Discussion of Sub-Question Two
    Discussion of Central Research Question
    Limitations
    Implications for Practice
    Recommendations for Future Research
    Conclusion
APPENDIX A: PERMISSION TO REPRINT SHARP DEBRIEFING TOOL
APPENDIX B: SURGICAL VIDEO REVIEW GUIDE
APPENDIX C: INFORMED CONSENT & DEMOGRAPHIC SURVEY
APPENDIX D: PERMISSION TO PRINT SIMULATOR IMAGES
APPENDIX E: SAMPLE SIMULATION REPORT
APPENDIX F: EXIT SURVEYS
APPENDIX G: IRB APPROVAL
LIST OF REFERENCES
LIST OF FIGURES
Figure 1 Example of a Multi-Phasic Robotic Learning Curve
Figure 2 Example of a Surgeon's Progress Through Kolb's Experiential Learning Cycle
Figure 3 Gibbs' Reflective Cycle Overlaid on Kolb's Experiential Learning Cycle
Figure 4 SHARP Debriefing Tool for Surgery (Ahmed et al., 2013)
Figure 5 The Role of Intraoperative Videos in Surgical Education
Figure 6 Three-Phase Development Process
Figure 7 Images from the SimNow Combo Exercise
Figure 8 Flowchart of Study Procedures
Figure 9 Count of Participants Operating in Each State
Figure 10 Scatterplot of the Score Difference Between the Baseline and Post Simulation Test
Figure 11 Scatterplot of the Relationship Between the Baseline and Post Simulation Scores
Figure 12 Box and Whisker Plot Comparison of Baseline Simulation Performance
Figure 13 Box and Whisker Plot Comparison of Post-Simulation Performance
LIST OF TABLES
Table 1 Stratified Random Sampling of Participants in Each Group
Table 2 Threats to Validity
Table 3 Demographic Characteristics of Participants
Table 4 Robotic & Video Review Experience
Table 5 Descriptive Statistics for Intervention Group (n = 20) and Control Group (n = 21)
Table 6 Homogeneity of Variance Test
Table 7 Shapiro-Wilk Test for Normality
Table 8 Repeated Measures ANOVA
Table 9 Correlation Matrix of the Score Difference and Demographic Characteristics
Table 10 Linear Regression for Intervention Group
Table 11 Linear Regression for Control Group
Table 12 Descriptive Statistics of Perceived Usefulness of VRGs
Table 13 Descriptive Statistics of Perceived Usefulness of the SVRG (n = 21)
Table 14 Internal Reliability of Perceived Usefulness Scales
LIST OF ABBREVIATIONS
ABS American Board of Surgery
ACGME Accreditation Council for Graduate Medical Education
APDCRS Association of Program Directors for Colon and Rectal Surgery
CAT Competency Assessment Tool
CRQ Central Research Question
C-SATS Crowd-Sourced Assessment of Technical Skills
CUSUM Cumulative Sum
EPAs Entrustable Professional Activities
FDA Food and Drug Administration
GEARS Global Evaluative Assessment of Robotic Skills
GOALS Global Operative Assessment of Laparoscopic Skills
M&M Morbidity and Mortality Conference
MSQC Michigan Surgical Quality Collaborative
OCC Orlando Colorectal Congress
OSATS Objective Structured Assessment of Technical Skill
SHARP Feedback and Debriefing Tool for Surgeons
SQ1 Sub-Question One
SQ2 Sub-Question Two
SVRG Surgical Video Review Guide
VAD Video-Assisted Debriefing
VBA Video-Based Assessment
VRG Video Review Guides
VRO Video Review Only
CHAPTER ONE: INTRODUCTION
Background
Roboc Surgery
The growth of robotic surgery has been exponential over the past fifteen years. The first robotic-
assisted device created by Intuitive Surgical Inc. received FDA clearance in the year 2000. Seventeen
years later, the leading surgical robotic system on the market reached a milestone of five million
procedures performed. Only four years later, in 2021, this number doubled, with 10 million procedures
performed and 55,000 trained robotic surgeons across the globe (Intuitive Surgical Inc., 2021). The latest
published data found that robotic procedures increased from 1.8% of total operations to 15.1% from
2012 to 2018 within the Michigan Surgical Quality Collaborative (MSQC), which documents 90% of all
surgical procedures in Michigan (Sheetz et al., 2020).
Robotic-assisted devices, otherwise known as robotic surgery, are the latest minimally invasive
technology in the field of surgery (Marino et al., 2018). A robotic surgical system consists of multiple
parts, including a surgeon console, a patient cart (robotic arms), a camera system, and computer
software. A surgeon sits at the surgeon console and uses their hands and feet to control multiple robotic
arms, which are inserted into the patient through 8–12 mm incisions. The camera system offers the
surgeon high-definition three-dimensional (3D) video with magnification, while the sophisticated
computer software translates the surgeon’s movements into precise robotic actions. In addition, the
computer software provides data analytic feedback such as operative time and economy of motion to
inform the surgeons of their robotic performance.
Benets
The benets of roboc surgery are numerous. When compared to open surgery, roboc
procedures require signicantly smaller incisions, which results in less pain, lower instances of infecon,
2
and quicker recovery me. While the monetary cost of performing a roboc procedure is inially higher
than an open procedure, shorter hospital stays and reduced complicaon rates can translate into overall
cost savings (Chiu et al., 2019). When compared to convenonal minimally invasive laparoscopic surgical
instruments, roboc instrumentaon oer a 180-degree range of moon, thereby not being limited by
the human wrist. Roboc plaorms oer three-dimensional (3D) imaging to assist in depth percepon,
and the ability for a single surgeon to control mulple arms, advanced tremor ltraon, and the ability
for the surgeon to x instruments in space, all thereby reducing the need for a surgical assistant.
Addionally, performance and kinemac moon feedback through the robot’s onboard computer
soware allows the surgeon to track their surgical skill improvement and plan for future operaons
(Rivero-Moreno et al., 2023). Finally, the surgeon console oers improved ergonomics over tradional
surgical modalies that require the surgeon to stand for mulple hours, oen hunched over, resulng in
poor posture and long-term adverse health eects (Wee et al., 2020).
Roboc Learning Curve
Though the benefits are numerous, robotic surgery presents significant challenges. Since the
surgical instruments are controlled remotely and do not provide haptic feedback to the surgeon, the
entire procedure relies on visual cues to gauge the required force for achieving the desired tissue effect.
This can be particularly difficult, especially for novice surgeons (Patel et al., 2022). In addition, robotic
surgery requires more equipment and more time to set up the patient in the operating room correctly
compared to other modalities, and the actual operations typically take longer for surgeons to complete
while they are in their learning curve compared to laparoscopic surgery, often with similar outcomes
(Solaini et al., 2022). For these reasons, it is common for novice robotic surgeons to abandon robotic
surgery after a few initial attempts.
The robotic learning curve is measured using a cumulative sum (CUSUM) score, the measure of
variance in operative time over time (Pernar et al., 2017). The learning curve for robotic surgeons can be
precarious to measure as there are several confounding variables, including surgeon-related, patient-
related, procedure-related, and institution-related factors such as the frequency and complexity of
operations and the presence and support of expert robotic surgeons for proctoring and guidance
(Kassite et al., 2019; Wong & Crowe, 2022). However, surgeons who perform complex robotic
abdominal cases generally have been found to progress through a multi-phasic learning curve (Figure 1).
Surgeons typically choose simple operative procedures during the initial learning curve while gaining
comfort with the robot's mechanics. Their operative times decrease as they progress and then plateau
between cases 25–33. The challenging phase is accompanied by an initial increase in operative time due
to the surgeon attempting more complex operative cases before once again decreasing and plateauing
around case 75. While most surgeons plateau here, some robotic surgeons enter an expert phase,
characterized by an additional bell curve, plateauing after approximately 128 cases.
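To make the CUSUM measure described above concrete, the following is a minimal sketch of its standard formulation, in which each case's deviation from the surgeon's mean operative time is accumulated case by case. This is illustrative only and not drawn from this dissertation or from Pernar et al. (2017); the function name and sample operative times are hypothetical.

```python
# Illustrative sketch of a CUSUM learning curve: accumulate each case's
# deviation from the surgeon's mean operative time. The curve rises while
# early cases run slower than average, then peaks and falls once the surgeon
# outpaces their own mean; the peak approximates the learning-curve inflection.

def cusum_learning_curve(operative_times):
    """Return the cumulative sum of deviations from the mean operative time."""
    mean_time = sum(operative_times) / len(operative_times)
    cusum, running_total = [], 0.0
    for case_time in operative_times:
        running_total += case_time - mean_time  # slower-than-average cases push the curve up
        cusum.append(running_total)
    return cusum

# Hypothetical operative times (minutes) that fall as experience grows.
times = [240, 235, 228, 220, 210, 198, 186, 176, 170, 166]
print(cusum_learning_curve(times))
```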
Figure 1
Example of a Mul-Phasic Roboc Learning Curve
Surgical educaon is primarily focused on shortening the learning curve for surgeons. A mul-
surgeon, single-instuon study by Guend et al. (2017) found that it took the rst roboc surgeon 75
operave cases to overcome their learning curve and only 25 cases for subsequent surgeons who later
joined the pracce. This nding highlights the signicance of having an expert roboc surgeon as a
support and guide for novices.
Roboc Skill Acquision
A licensed surgeon in the United States completes an Accreditation Council for Graduate Medical
Education (ACGME) approved general surgical residency program (five to seven years) and then has the
option to complete a surgical specialty fellowship (one to three years). Surgical residents and fellows
primarily acquire surgical skills following Fitts and Posner’s Three-Stage Theory of Skills Acquisition (Fitts
& Posner, 1973). That is, residencies and fellowships typically follow a master-apprenticeship model in
which surgical trainees observe, imitate, and practice surgical skills under the guidance of more
experienced surgeons until they have reached a level of competence that no longer requires monitoring.
Nearly all surgical residency programs today offer some form of robotic training, though the quality and
consistency of training vary widely across the country (Madion et al., 2022; Zhao, Lam, et al., 2020).
Robotic surgery is still a relatively new modality; therefore, most surgeons who completed
residency programs more than five years ago never received sufficient robotics training (Madion et al.,
2022). Surgeons already in practice must become trained through a robotics company, such as Intuitive
Surgical Inc., or by attending trainings hosted by surgical societies or healthcare institutions. The hands-
on tissue training is typically completed in one to three days; therefore, surgeons must rely on additional
methods of skill reinforcement such as simulator practice, surgical coaching, and instructional videos to
reach mastery.
Simulators. The advent of dry-box simulations and virtual reality simulators has proven to be a safe and effective way for novices to improve their surgical skills outside of the operating theatre, especially for minimally invasive modalities, such as robotic surgery (Yang et al., 2017). Simulations allow surgeons to exercise deliberate practice, a purposeful and systematic form of practice that generally involves feedback and targeted strategies to improve a specific skill within a domain (Ericsson, 2011). Compared to an unstructured approach, deliberate practice on simulators allows novices to achieve expertise in a shorter period by focusing on specific robotic skills or steps of a procedure and allowing them to practice as many times as necessary without having actual patients.
Surgical Coaching. After residency/fellowship, surgeons operate independently, often with little
or no support. Some surgeons, however, may determine that they would benefit from additional
surgical coaching or mentorship post-residency, especially if they are learning a new surgical procedure,
technique, or modality, such as robotics. A systematic review found that surgical coaching is a valuable
means of continuing education by improving technical, leadership, and communication skills. In
addition, virtual coaching is found to be as effective as in-person coaching (El-Gabri et al., 2020). Surgical
coaching can be conducted through formal programs such as The Academy for Surgical Coaching, which
connects surgeons with experts for a fee (https://surgicalcoaching.org/). Alternatively, surgeons can
seek informal coaching by relying on partners in their surgical practice or by reaching out for surgical
advice on social media such as private Facebook groups or SurgeOn, a networking app exclusively for
surgeons. Additionally, many medical device companies maintain surgical networks to allow surgeons to
connect with mentors and proctors.
Instructional Videos. Video-based learning is a popular and effective method of acquiring both
procedural learning and operative skills (Larkins et al., 2023; Takagi et al., 2023). Watching surgical
videos to prepare for upcoming surgeries is becoming a ubiquitous method of continuing education,
with up to 98% of surgeons reporting watching surgical videos preoperatively (Mota et al., 2018).
Robotic surgery, in particular, has a robust library of surgical videos due to the ease of recording through
the robotic console, allowing the viewer to watch everything the surgeon sees. By watching surgical
videos ahead of time, surgeons can better anticipate and plan for challenges they may encounter during
their operations.
The methods mentioned above provide surgical trainees pathways to learn, practice, and
reinforce their robotic skills and shorten the length of their learning curve. One method that is often
overlooked is the role of reflection in improving learning and skills. This dissertation will explore the role
of video review as a catalyst for self-reflection and a mechanism for improving robotic surgical skills.
Problem Statement
Robotic-assisted devices are the latest minimally invasive technology in surgery (Marino et al.,
2018). These devices provide surgeons with 3D visualization, 360-degree range of motion, data analytic
feedback, and improved ergonomics over their predecessor, laparoscopic devices. Robotic surgery as a
modality continues to experience exponential growth, rapidly changing the landscape of minimally
invasive surgery since the first robotic-assisted device gained FDA clearance in the year 2000. With this
growth, surgical training programs face the challenge of expeditiously integrating robotic instruction,
most of them only adopting robotic training within the past five years (Madion et al., 2022). As a result,
many surgical trainees are trained by attendings who themselves are new to robotic surgery and are still
in their robotic learning curve rather than by expert robotic surgeons (Zhao, Hollandworth, et al., 2020).
Weis et al. (2023) found that the average number of robotic cases performed by surgical fellows in
training per year increased from 3.6 cases in 2010 to 49.5 cases in 2019. While this is a significant
increase, it is still less than one robotic case per week and remains within the initial robotic learning
curve as most of these cases are performed under the guidance of a surgical attending.
The learning curve for robotic surgeons can vary significantly as several factors influence its
length and slope, including case complexity and level of mentorship (Kassite et al., 2019; Wong & Crowe,
2022). Proven adjuncts to support surgeons during their robotic learning curve include surgical coaching
(Esposito et al., 2022), simulator practice (Schmidt et al., 2021), and instructional videos (Reck-Burneo et
al., 2018). Though effective, surgical coaching and simulator practice can be cost-prohibitive, are not
widely available, and may not be convenient to use due to limited access and the time constraints of
surgeons (Esposito et al., 2022; MacCraith et al., 2019). In addition, the current surgical culture can be a
barrier to coaching as many surgeons perceive surgical coaching as a sign of incompetence, creating a
juxtaposition with the image of the perfect, confident, and all-knowing surgeon (Mutabdzic et al., 2015).
While instructional videos are convenient and accessible to watch via social media, they still rely on
experienced surgeons taking the time to edit, provide commentary, and share their videos for
educational purposes. In addition, there is no screening process before surgeons upload their videos to
sites such as YouTube, resulting in many videos with inadequate and insufficient educational quality,
leaving novice surgeons to distinguish between high-quality and low-quality videos on their own (Gorgy
et al., 2022).
Video review is used across multiple fields, including teacher education (Baecher et al., 2018),
sports (Walker et al., 2020), and throughout various healthcare fields (Zhang et al., 2019), and has been
found to improve self-reflection and performance. In surgery, video review is the practice of recording
and playing back surgical cases and comparing them to other surgeons’ videos. Unlike surgical coaching
and simulator practice, video review is accessible on personal computers and mobile devices, and
therefore, it can be conducted independently anywhere and at any time.
Studies have found video review to be an effective method of improving self-assessment and,
consequently, surgical skills, though most often, video review is conducted with a more experienced
surgeon (Van Der Leun et al., 2022; Zhang et al., 2019). The results of independent video review studies
for surgeons are more variable as several factors impact its effectiveness, including the availability of
benchmark videos and video review guides (Scaffidi et al., 2019; Wang et al., 2020). Experience also
plays a role in effective video review as expert robotic surgeons implicitly organize their reviews in a way
that allows them to reflect on their surgical performance efficiently, a skill that novice robotic surgeons
lack (Soliman & Soliman, 2023). Only one study to date has explored using a video review guide to
improve surgical skills: Wang et al. (2020) found that independent video review with a video review
guide was as effective as expert guidance in improving surgical knot tying. There has yet to be an
established method of proper video review for novice surgeons, nor are there video review guides for
robotic surgeons.
As with any new technology, there will be a limited number of expert robotic surgeons until
robotic surgery becomes a ubiquitous modality and enough surgeons complete the robotic surgical
learning curve and are readily available to support their peers and trainees. Due to the current limited
available support from expert robotic surgeons (Zhao, Hollandworth, et al., 2020), the problem that this
dissertation will address is the need for proper independent video review guidance for novice robotic
surgeons to improve their surgical skills and accelerate their robotic learning curve.
Signicance
The importance of surgical technical skill cannot be over-emphasized when it comes to paent
safety, as recent literature reviews have found that surgeon technical skills can predict clinical outcomes,
including 30-day complicaon and reoperaon rates (Balvardi et al., 2022; Woods et al., 2023). Video
review allows professionals to analyze and reect on their pracce to improve and rene their skills
(Isreb et al., 2021; Schön, 1987; Tripp & Rich, 2012). Reecon through video review improves surgeons’
self-assessment accuracy (Scadi et al., 2019), which can inform them of areas requiring performance
improvement. Despite this, video review as a method of reecve pracce is not yet widely used as, by
some esmates, only ve percent of residency programs regularly use video recording in their operang
rooms (Esposito et al., 2022). The current lack of video review guidance in roboc surgery is a signicant
problem because a surgeon’s self-reecon and self-assessment ability is essenal for improving their
9
operave performance and, ulmately, it is essenal for the health and safety of surgical paents. Video
review can be a powerful tool to improve surgical skills when ulized correctly; however, without
guidance, novice roboc surgeons are le with no clear method for eecvely reviewing their surgical
performance. This decit in reecve pracce may result in a longer learning curve and, ulmately, a
failure to adopt robocs into their surgical pracce.
Purpose of the Study
This study aims to analyze and describe the impact of video review guide utilization on novice
robotic surgeons. A video review guide with evidence-based strategies was created for this study with
the aim of prompting critical self-reflection in surgeons. The use of an evidence-based surgical video
review guide by surgeons aims to assist in improving robotic surgical technical skills, thereby
accelerating the robotic learning curve of novices who lack expert robotic surgical support.
The findings of this study contribute to the field of surgical education by providing an
independent, efficient, and cost-effective method of video review, which can accelerate the learning
curve for robotic surgeons. The surgical video review guide is designed for reflection on robotic technical
skills on a simulator. The guide provides a template for the creation of future video review guides to
enhance both technical and non-technical surgical skills, including clinical judgment, communication,
and leadership. This study highlights the value of video review and provides a structured method of
independent video review that may also increase the currently underutilized practice of video review for
reflective practice.
Research Quesons
To evaluate the eecveness of an intervenon designed to improve roboc surgical technical skills, this
study sought to answer the following central research queson and sub-quesons:
10
CRQ: What is the impact of ulizing a wrien video review guide during independent video
review on the surgical skills of novice roboc surgeons?
SQ1: Is there a stascally signicant dierence in the improvement of roboc surgical
technical skills using a simulator between novice roboc surgeons who conduct an
independent video review using a wrien surgical video review guide compared to those
who do not use a guide?
SQ2: To what extent do novice roboc surgeons perceive wrien surgical video review
guides as useful?
Theorecal Framework
This study applied Kolb’s (1984) Experienal Theory to examine the eect of video review on
roboc surgical skills. Kolb’s framework explains that learning results from a cycle of doing and thinking.
The four-part experienal learning cycle posits that learning begins with a concrete experience (doing)
followed by reecve observaon (what happened?). Next is abstract conceptualizaon, where the
learner applies theory to their experience (thinking), then nally, acve experimentaon, which is the
learner planning how to proceed in future experiences (what now?). Figure 2 illustrates applying Kolb’s
model to surgical video review.
Figure 2
Example of a Surgeon's Progress Through Kolb's Experiential Learning Cycle
Kolb notes that while the stages are sequential, a learner may enter the experiential learning
cycle during any stage. In this study, the experiment will begin with the participants watching a
benchmark video of a surgeon completing the simulation (abstract conceptualization stage) so they may
preview what is expected of them and mentally create an initial plan of how they will complete the
exercise (active experimentation stage). Initiating the experiment with a benchmark video follows the
findings that providing benchmark videos leads to more accurate self-assessment than video review
alone (Hawkins et al., 2012; Scaffidi et al., 2019). The participants will then complete the simulation
exercise (concrete experience stage) and then watch their recorded performance (reflective observation
stage). The cycle will repeat once again, with a surgical video review guide to assist them in processing
their learning (abstract conceptualization stage) and determining what they will do differently (active
experimentation stage) during the subsequent simulation (concrete experience stage).
Gibbs (1988) expanded on Kolb's work by creating a Reflective Cycle that can be applied to the
Experiential Learning Cycle and serves as a structured debriefing to reflect more critically on the
experience, thus deepening learning. Gibbs' (1988) cycle is as follows:
Description of the experience
Feelings and thoughts about the experience
Evaluation of the experience, both good and bad
Analysis to make sense of the situation
Conclusion about what you learned and what you could have done differently
Action plan for how you would deal with similar situations in the future or general changes
you might find appropriate (Gibbs, 1988, pp. 49–50).
Figure 3 was created by the researcher to demonstrate how Gibbs' Reflective Cycle fits within Kolb's Experiential Learning Cycle. Rather than reflection being isolated to a single step of the learning cycle, it is interwoven throughout the entire process. Numerous studies within healthcare have tested Gibbs' Reflective Cycle as a method of facilitating reflection, and the National Health Service (NHS) in the United Kingdom has integrated it into the mandatory reflective portfolios for annual appraisals (Holder et al., 2019).
Figure 3
Gibbs' Reflective Cycle Overlaid on Kolb's Experiential Learning Cycle
Experienal Learning Theory is an appropriate framework for this study because the literature
review argues that surgical video review serves as a vehicle for crical self-reecon, producing more
meaningful learning and ulmately improved surgical skills. The research design was structured to allow
the parcipants to progress through all four stages of Kolb’s learning cycle while ulizing a video review
guide developed according to Gibb’s Reecve Cycle.
Denions of Key Terms
ABS: The American Board of Surgery is responsible for board-cerfying surgical trainees who have
successfully completed an ACGME-accredited residency or fellowship (www.absurgery.org).
ACGME: The Accreditaon Council for Graduate Medical Educaon accredits all graduate medical
training programs (www.acgme.org).
Instruconal Surgical Videos: Surgical videos used to teach how to perform an operaon. Videos may or
may not include step-by-step instrucons (audio).
FDA: The Food and Drug Administraon is responsible for protecng public health by ensuring the
safety, ecacy, and security of human and veterinary drugs, biological products, and medical devices”
(U.S. Food and Drug Administraon, 2023).
Laparoscopic / Laparoscopy: A minimally invasive surgical modality involving a surgeon using
laparoscopes (tubes) to perform surgery inside the human body to minimize the need for large incisions.
Roboc Surgery or Roboc Assisted Devices: A minimally invasive surgical modality involving the
surgeon controlling mulple roboc arms bearing surgical instruments and a camera to minimize the
need for large incisions.
Roboc Surgical Benchmark Videos: These may be the same as instruconal videos but are used to
compare one’s surgical performance to exemplary performances. They are used to improve self-
assessment and determine areas for improvement.
Roboc Surgical Learning Curve: In surgery, the learning curve is a correlaon between the length of
operave me and the number of operave cases
15
Roboc Surgical Simulator: A roboc console programmed with surgical exercises to allow surgeons to
pracce technical skills outside of the operang theater. Similar to video game consoles, simulators
provide metrics to inform surgeons of their performance.
Surgeon Console: The control center of a roboc surgical system. The surgeon controls the roboc arms
using hand controls, foot pedals, and a display screen aached to the roboc console.
Surgical Fellowship: A period of specialized surgical training following surgical residency to gain experse
in a specic surgical subspecialty. Fellowships are typically one to three years in length.
Surgical Residency: A period of general surgical training following medical school that typically lasts ve
to seven years.
Surgical Outcomes: The results of surgical procedures are typically measured in paent recovery, survival
rates, postoperave complicaons, and the eecveness of the intervenon.
Surgical Video Review: Recording one’s surgical performance and playing it back for analysis.
Surgical Video Review Guide (SVRG): The wrien video review guide designed specically as the
intervenon for this study.
Video Review Guide: A wrien guide with prompts and quesons to be used during video review to
elicit reecon in a structured manner.
CHAPTER TWO: LITERATURE REVIEW
This literature review examines and critiques the research and scholarship on surgical video review. First, an overview of the current state of reflective practice in surgery is necessary to situate the importance of video review as a catalyst for reflection, learning, and skill improvement. Next, how videos are most frequently used in surgical education is outlined, highlighting the shortcomings of their current use. Although studies on the effectiveness of video review have been mixed, the researcher argues that this is due to a lack of consensus on effective video review design. As such, this literature review provides additional insight into how video review guides may deepen reflection and improve self-assessment, resulting in improved surgical skills without the need for expert surgeons as a means of support. The final section of this chapter briefly examines the research on perceived usefulness, which provides a framework for understanding if, when, and why surgeons may adopt a reflective tool. A comprehensive review of reflective practice and video review is necessary to establish the relevance of this study.
Reecve Pracce
Background
Dewey (1933) introduced reflection as a crucial component of the cognitive thinking process and
paramount for building knowledge. He argued that experience alone does not necessarily lead to
learning; instead, it is critical reflection on the experience that fosters understanding. Therefore,
reflection must be a deliberate act. Schön (1983) expanded on Dewey’s work and defined reflective
practice as the ability to reflect on one’s actions to engage in continuous learning. He describes two
types of reflection: reflection-on-action and reflection-in-action. Reflection-on-action involves comparing
one’s performance to existing knowledge and understanding to develop new schemas. Reflection-in-
action is otherwise known as “thinking on your feet.” Schön explains that expert professionals are able
to make proper decisions quickly in the moment because they have built schemas through their ongoing
reflection-on-action. Thompson and Pascal (2012) then added reflection-for-action, which involves
planning for future action based on past performance and understanding new concepts. These three
forms of reflection create the reflective practitioner and closely align with Gibbs' Reflective Cycle
overlaid on Kolb’s Experiential Learning Cycle (as explained in the Theoretical Framework section in
Chapter 1). Reflective practice combines theory, practice, active learning, questioning, analysis, and
understanding with an open mind (Thompson & Pascal, 2012).
Reflective practice has iterative and vertical dimensions. Iterative reflections are experiences
that trigger deeper thinking, providing new understanding and changing future behavior (Boud et al.,
1985). Vertical reflection refers to the level of depth of one’s reflective thinking. Kim (1999) defined
three levels of reflection in her Critical Reflection Inquiry Model: descriptive is the shallowest level,
which is a thorough description of an event; reflective is the intermediate level and provides analysis of
the situation, including feelings, attitudes, values, intentions, and practice standards. The deepest level
of vertical reflection is critical, which involves critiquing, correcting, and changing ineffective practices.
Effective reflective practice has been found to bridge the theory-practice gap, highlight poor practices
resulting in improved patient care, enhance self-awareness, empower practitioners towards change, and
stimulate critical thinking (Patel & Metersky, 2022).
Experience is critical for any form of medical education, whether during medical school
rotations, residency, or continuing medical education for practicing physicians. In Kolb’s Experiential
Learning Theory, reflection plays a mediating role between experience and learning. Through reflection,
the learner constructs knowledge and identifies missing knowledge, leading to deeper learning (Bui &
Yarsi, 2023). Medical education is structured to provide countless experience opportunities; however,
without reflection and abstract conceptualization, which day-to-day clinical practice often lacks, the
experiential learning cycle is incomplete, resulting in limited learning (Sheng et al., 2018).
Reecon in Healthcare
Many surgeons unfortunately fail to take the time to reflect on their medical practice, thus
restricting their ability to build knowledge and improve their surgical skills (Soleimani-Nouri et al., 2023).
A literature review by Mann et al. (2009) found that healthcare professionals generally only reflect on
challenging or novel situations. In addition, novices struggle to engage in critical reflection, limiting their
reflection to descriptions of events rather than understanding processes, learning, and self-assessment
(Kim, 2018). Davies (2012) found that physicians resist reflective practice because of a reluctance to
challenge and evaluate their decision-making process. Additionally, she found that many doctors do not
understand the reflective process, are unsure which experiences to reflect on, and believe that
reflection is time-consuming. This is not to say that medical professionals do not believe reflection is
important. An action research study by Naumeri (2023) found that pediatric surgery residents
recognized the importance of reflective practice, acknowledging that it improves patient outcomes and
helps with self-monitoring and critical appraisal. The participants believed that a lack of guided
reflection, timely feedback, and time to reflect were the most significant barriers to reflective practice.
Struggles with reflective practice may stem from physicians' experiences with reflection during
their post-graduate medical education. Gathu’s (2022) narrative review of facilitators and barriers to
reflective learning points out that there is evidence to support reflection as an essential aspect of
graduate student learning; however, how it is conducted will influence whether students adopt it into
their medical practice. Reflection is often part of summative assessments in which the students are
externally motivated to earn a grade rather than intrinsically motivated to engage in reflective behavior
genuinely, leading to students ‘gaming the system’ to meet assessment criteria (Truykov, 2023). The
sheer number of reflective assignments in medical school, often filled with ambiguity and a lack of
formative feedback, can lead to “reflection fatigue,” resulting in students viewing reflection as merely
checking a box or busy work (Trumbo, 2017). At worst, poorly taught reflective practice can lead
students to hate reflection; as one nursing student said, “I am sure Gibbs was put on this earth to make
student nurses a living hell!!!” (Timmins et al., 2013, p. 1373). While recognizing the value of reflection,
medical students and residents across medical specialties strongly resisted written reflections
(Shaughnessy & Duggan, 2013; Tonni et al., 2016; Tuykov, 2023). Furthermore, reflective practice is
often not role-modeled outside the classroom, which leads students to believe it is a task to be
performed in training but unnecessary once in medical practice. Additional challenges to developing
reflective practice include a lack of guiding tools, the perception that it is time-consuming, and that it
can lead to feelings of vulnerability (Holder et al., 2019). Reflection requires time, thus, if viewed as a
low-value skill, it will likely be overlooked as an essential part of professional medical practice (Gathu,
2022).
Self-Assessment
While self-reflection involves asking what happened and what I would change, self-assessment asks how did I do, forming a dynamic relationship between self-reflection and self-assessment (Mann et al., 2009). Kruger and Dunning's (1999) work on self-assessment found that the less knowledge or skill one has on a particular topic, the more confident they are in their knowledge or ability and the less accurate they are in self-assessment. Nayar et al. (2020) validated this phenomenon in a literature review, finding that surgeons' ability to accurately self-assess their surgical skills improved with age and experience. A study by Varban et al. (2022) comparing self-rated versus peer-rated surgical skills found that surgeons who over-rated their skills had higher leak rates for complex bariatric procedures.
Gordon et al. (1991) concluded that inaccuracies in self-assessment were often the result of inconsistencies between the criteria used by the self-assessor and the evaluator. If learners are not provided with explicit benchmarks, they will assess themselves based on subjective criteria that may not align with objective standards. Additional facilitators for accurate self-assessment include performance feedback and the review of the performance data by the learner, such as video reflection (Lu et al., 2021). These findings shed light on the risks associated with novice surgeons evaluating themselves in the absence of more experienced surgeons or standards. If novice surgeons do not know how to engage in deep reflection and cannot accurately assess their surgical performance, they may struggle to improve their surgical skills and adopt new surgical modalities. More importantly, they may put the health of their patients at risk.
Debrieng
Research reveals that reflective practice cannot be assumed, nor is it innate, but should be
explicitly taught. Indeed, there is literature to suggest that effective reflective practices can be taught to
novices across fields and result in positive outcomes, such as improved decision-making skills (Baecher
et al., 2018; Gray & Coombs, 2018; Kim, 1999; Nagro et al., 2017; Tripp & Rich, 2012). Kirschner et al.
(2006) expound that direct instruction, which entails fully explaining the concepts and procedures
required to learn and providing strategy support, is an efficient way to alter long-term memory, which
results in learning. In contrast, minimally guided instruction can overload a learner’s working memory,
resulting in little learning.
Graduang from an accredited medical training program is required to become a board-cered
physician in the United States. The governing board responsible for accreding all graduate medical
training programs is the Accreditaon Council for Graduate Medical Educaon (ACGME). The ACGME has
outlined milestones specic to each specialty that residents and fellows must be evaluated on by their
program each year (ACGME, 2019). Each of the 18 surgical milestones is divided into ve levels, and
tracking through the levels is synonymous with moving from novice to expert. One of these milestones is
Pracce-Based Learning and Improvement 2: Reecve Pracce and Commitment to Personal Growth.
The ve levels for this milestone are as follows:
Level 1: Establishes goals for personal and professional development
21
Level 2: Idenes opportunies for performance improvement; designs a learning plan
Level 3: Integrates performance feedback and pracce data to develop and implement a
learning plan
Level 4: Revises learning plan based on performance data
Level 5: Coaches others in the design and implementaon of learning plans
This milestone is evidence that the ACGME acknowledges that reecve pracce is a skill that takes
years to develop and recognizes that it is an essenal component of surgical prociency.
One way surgical training programs help their residents meet this milestone is through morbidity and mortality (M&M) conferences. The ACGME mandates this weekly meeting for surgical training programs to maintain accreditation and Medicare funding for graduate medical education. The purpose of this weekly conference is to review patient deaths and complications and discuss whether they were preventable. This method of group debriefing often follows Kolb's Experiential Learning Cycle and Gibbs' Reflective Cycle by providing an opportunity for reflection (what happened?), learning (why did it happen?), and planning (what will we do differently next time?) for future patient care. Naumeri (2023) interviewed pediatric surgery residents after a 12-month period of weekly M&M meetings that followed Gibbs' Reflective Cycle and found that the participants actively engaged in reflective practice. A survey distributed to 129 surgery departments across the United States and Canada found that 98% of the departments require mandatory M&M conference attendance by residents; however, only 49% of faculty attended these conferences (Anderson et al., 2020). Furthermore, M&M conferences are typically only found in academic institutions, as hospitals without medical training programs are not required to hold them. This suggests that physicians already in practice perceive limited value in group debriefing. While debriefing and reflection are emphasized in educational settings, they often do not continue in professional practice.
M&M conference is one formal method of debriefing specific to complications and death, but surgical debriefing can take several other forms as well. Debriefings can occur after simulations, clinics, and surgical operations and during didactic meetings. A meta-analysis by Keiser and Arthur (2021) found that debriefing leads to improved performance, especially when coupled with objective review media (e.g., videos). In addition, structured debriefing is more effective than unstructured debriefing. A literature search found 22 debriefing tools used in healthcare, though only one was designed specifically for surgery. The SHARP clinical debriefing tool for surgery (Figure 4), shown to objectively improve the quality of debriefing, is a five-step feedback tool for surgical attendings to use for structured debriefing with their trainees (Ahmed et al., 2013). This tool aligns closely with Gibbs' Reflective Cycle except for its lack of reference to description and feelings. The written surgical video review guide designed for this study utilized aspects of the SHARP debriefing tool, described in further detail in Chapter 3 of this dissertation. Despite the evidence supporting debriefing as an effective tool in surgical education, a literature review by McKendy et al. (2017) found that most surgical debriefings are unstructured and are performed inconsistently or inadequately. Therefore, novice robotic surgeons cannot always rely on their attendings, surgical partners, or peers for guided or collaborative reflection. Rather, they must be provided with the tools necessary to effectively self-reflect. Indeed, a study by Fama et al. (2020) found that surgical residents who were asked to respond to a structured written self-reflection worksheet following a surgical skills lesson significantly improved their surgical skills on a post-test compared to peers who did not engage in structured self-reflection.
Figure 4
SHARP Debriefing Tool for Surgery (Ahmed et al., 2013)
Note. From "Operation Debrief: A SHARP Improvement in Performance Feedback in the Operating Room," by M. Ahmed, S. Arora, S. Russ, A. Darzi, C. Vincent, and N. Sevdalis, 2013, Annals of Surgery, 258(6), 958-963 (https://doi.org/10.1097/SLA.0b013e31828c88fc). Copyright 2013 by Lippincott Williams & Wilkins. Reprinted with permission (Appendix A).
Self-Debrieng
While debrieng in a training seng is typically led by a facilitator to teach and guide novices, an
integrave review by MacKenna et al. (2021) found that self-debrieng can be as eecve as facilitator-
led debrieng with addional resource-saving and psychological benets (Keiser & Arthur, 2021). Self-
debrieng reduces the demand for surgical trainers' me; Isaranuwatchai et al. (2016) found that guided
self-debrieng was as eecve as an instructor-led brieng with addional cost-savings when the
willingness-to-pay for eect is less than ≤Can$200. Self-debrieng can also provide psychological safety
for learners by reducing the pressure to respond correctly or promptly. Self-debrieng provides learners
with privacy and the ability to think and reect on their own schedules (Verkuyl et al., 2018). However,
feedback, reecon, and user experience must be considered for self-debrieng to be as eecve.
Access to a video recording of one’s performance in addion to benchmark data (video, checklist,
scoresheet, etc.) can suciently replace live expert or peer feedback. Reecon can be elicited through
a reecve guide, such as wrien prompts or quesons. User experience involves providing clear
instrucons and suggesons for self-debrieng, including me, seng, and length. Reecon outside
training programs is most frequently conducted alone; therefore, teaching novice surgeons how to self-
debrief is crical in allowing for connuous reecve pracce aer formal surgical training is completed.
Video in Surgery
Video is an integral part of surgical educaon, and it is ulized in mulple ways and for various
purposes in training and professional pracce. The following secon outlines the three primary forms of
video in surgery: instruconal videos, video-based assessment, and video review. It discusses the
benets and drawbacks of each method and how video review can be used as a catalyst for reecve
pracce.
25
Instruconal Videos
The most frequent use of intraoperave video in surgical educaon is for the purpose of
instrucon before performing a procedure. Surgical training is based on an apprenceship model with
trainees supervised by faculty and given more responsibility over operave cases as they progress
through the program. Instruconal videos supplement this model by exposing surgeons to addional
procedures, techniques, anatomical variaons, and complicaons they may not otherwise encounter
due to duty-hour restricons or case types their aendings accept (Green et al., 2019). Video oers a
visual guide for surgeons to observe intricate techniques and gain insight into the nuances of procedural
steps. Furthermore, video enables surgeons to revisit specic segments of an operaon as frequently as
needed to reinforce their understanding. In addion, videos can be viewed in any place and at any me
according to the surgeons schedule. A systemac review by Youssef et al. (2023) found that video-based
surgical educaon is eecve for learning surgical skills, though the studies included in the review failed
to indicate if they have a long-term impact on paent outcomes due to their limited duraons.
Not all instruconal videos are eecve. A systemac review by Green et al. (2019) found that
including schemacs, diagrams/labels, and audio of procedure narraon had >75% associaon with
improved training. These ndings are in line with Mayer’s (2002) Cognive Theory of Mulmedia
Learning and several of his 12 Principles of Mulmedia Learning, including the mulmedia principle: a
combinaon of words and pictures, the signaling principle: highlighng key points with labels, and
temporal conguity principle: voiceovers. Surveys by Rapp et al. (2016) found that YouTube is the most
frequently used source for surgical videos; however, a simple YouTube search reveals that not all surgical
videos are made with mulmedia principles in mind. Ninety-six percent of arcles in a systemac review
found that surgical videos on YouTube lacked educaonal quality (Gorgy et al., 2022). Furthermore,
Halim et al. (2021) found no correlaon between engagement metrics (views and likes) and content
quality. These authors recommend direcng learners to surgical journals and sociees that oer peer-
26
reviewed surgical videos; however, the barriers to publicaon, including me, cost, and loss of
ownership, make it challenging to compete with social media.
Video-Based Assessment
Birkmeyer et al. (2013) published a landmark study using video assessment to establish a
correlation between surgical technical skill and patient outcomes in bariatric surgery. Since then, there
has been increasing interest in using video to evaluate surgeons’ performance, and this study has been
replicated numerous times across surgical specialties with similar findings (Brajcich et al., 2021; Fecso et
al., 2019; Hogg et al., 2016; Jung et al., 2018; Stulberg et al., 2020).
In July 2023, the American Board of Surgery (ABS) moved from time-based to competency-based
assessment for general surgery residents by introducing Entrustable Professional Activities (EPAs)
(Entrustable Professional Activities (EPAs) for Surgeons, n.d.). EPAs are observable units of work
performed by residents and evaluated by faculty for feedback and assessment. In conjunction with this
change, ABS introduced a pilot program exploring the use of video-based assessment as part of the
board certification process (Pryor et al., 2023). Video-based assessment alleviates the time and resource
limitations of requiring expert surgeons to directly observe trainees in the operating room (Mcqueen et
al., 2019). Videos can be reviewed at increased playback speed, and assessors can focus on only
pertinent parts of an operation to reduce assessment time by 50% to 80%. Videos can be submitted
anonymously to prevent bias in the evaluation process and can be rated by multiple evaluators to
increase reliability, which is impossible in a live operation. As video-recording capabilities in the
operating room become more ubiquitous, more medical institutions are requiring surgeons to submit
operative videos before approving hospital credentials or offering employment.
There are several different valid and reliable scales and metrics that can be used to evaluate
surgical skills. The most commonly used scale is the Objective Structured Assessment of Technical Skill
(OSATS) (Martin et al., 1997). Its seven domains, which include respect for tissue, time and motion,
instrument handling, knowledge of instruments, use of assistants, flow of operation and forward
planning, and knowledge of specific procedure, can be evaluated using a Likert scale. The advantage of
this scale is that it can be used to assess any surgical modality (e.g., open, laparoscopic, robotic) and any
surgical specialty (e.g., general, colorectal, urology, etc.). The Global Operative Assessment of
Laparoscopic Skills (GOALS) is used to evaluate surgical skills in laparoscopic surgery. Its four domains
include depth perception, which measures target accuracy; bimanual dexterity, which measures the
utilization of both hands; efficiency, which measures speed and movement; and tissue handling
(Vassiliou et al., 2005). Likewise, in robotic surgery, the Global Evaluative Assessment of Robotic Skills
(GEARS) scale is used. This assessment tool measures the same four domains as GOALS and includes a
fifth domain for robotic control, which measures camera and arm control (Goh et al., 2012). Like OSATS,
the GOALS and GEARS scales can be applied to any surgical specialty using laparoscopic or robotic
devices.
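To make the structure of these global rating scales concrete, the sketch below represents a single GEARS-style assessment as a mapping from the five domains described above to Likert ratings and sums them into a total. This is an illustrative sketch only: the 1-to-5 rating range is assumed for demonstration, and the function is not part of the published instrument.

```python
# Minimal sketch: one rater's GEARS-style scores as a domain-to-rating map.
# Domain names follow the description above; the 1-5 range is an assumption.
GEARS_DOMAINS = (
    "depth_perception",
    "bimanual_dexterity",
    "efficiency",
    "tissue_handling",
    "robotic_control",
)

def gears_total(ratings: dict) -> int:
    """Validate that every domain was rated, then sum the Likert ratings."""
    missing = set(GEARS_DOMAINS) - set(ratings)
    if missing:
        raise ValueError(f"Unrated domains: {sorted(missing)}")
    if any(not 1 <= ratings[d] <= 5 for d in GEARS_DOMAINS):
        raise ValueError("Each domain is assumed to be rated 1-5 here.")
    return sum(ratings[d] for d in GEARS_DOMAINS)

# Hypothetical novice assessment: totals 12 out of a possible 25
print(gears_total({
    "depth_perception": 3,
    "bimanual_dexterity": 2,
    "efficiency": 2,
    "tissue_handling": 3,
    "robotic_control": 2,
}))
```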
While these scales focus mainly on technical skills, other more detailed and encompassing scales
have been developed for specific surgical specialties and procedures. For example, the valid and reliable
Competency Assessment Tool (CAT) was designed to assess the laparoscopic skills of colorectal surgeons
(Miskovic et al., 2013). Similar to OSATS, it assesses instrument use, tissue handling, errors, and end-
product, but these four domains are evaluated for each of the four main tasks of a colorectal procedure:
exposure, pedicle control, mobilization, and resection/anastomosis. The result is 16 competencies to be
assessed rather than four, five, or seven as evaluated by the previously mentioned scales. The CAT offers
a more comprehensive evaluation of a surgeon’s performance. However, it also requires more time for
assessment and an evaluator who is an expert in the specific procedure. The other, more commonly
used scales have been found to be reliable even when utilized in crowdsourced assessment.
Crowdsourced assessment relies on a large group of untrained individuals (crowd workers) to evaluate
intraoperative videos, all using the same scale. A literature review by Olsen et al. (2022) found a strong
correlation between crowd workers and expert surgeon evaluators, concluding that crowdsourced
assessment can provide accurate, timely, and cost-effective feedback to surgeons.
Video-based assessment is not limited to formal settings for summative purposes. Using video
for formative assessment is a learning tool that supports novice surgeons by providing constructive
feedback to improve their surgical skills (Esposito et al., 2022). Crowd-Sourced Assessment of Technical
Skills (C-SATS) is a subscription-based post-operative surgical insights platform that allows surgeons to
anonymously upload their operative videos for formative assessment and maintain a personal video
library (Ross et al., 2023). C-SATS uses crowd workers to objectively evaluate a surgeon’s robotic video
using GEARS or GOALS for robotic or laparoscopic cases. A surgical expert can also provide qualitative
feedback, noting strengths and weaknesses, commenting on specific procedural steps, and
recommending instructional videos for the surgeon to watch to improve particular skills. The advantage
of a platform such as C-SATS is that it can efficiently offer formative feedback when mentorship and
expert feedback are unavailable, as often is the case after the completion of surgical training
(Tommaselli et al., 2022). Surgeons who choose to participate in a surgical coaching program find that
they often begin with an expert first evaluating the mentee’s video for formative assessment and then
meeting with them for a guided video review and evaluation while providing strategies,
recommendations, and video resources for surgical skill improvement (Fainberg et al., 2022). C-SATS and
coaching programs offer feedback for a monetary fee, which is a barrier to access for many surgeons.
Alternatively, surgeons can post their operative videos on social media, such as private surgical groups
on Facebook (www.facebook.com) and specialty communities on the SurgeOn app
(www.surgeonapp.com), to solicit formative feedback and operative advice free of charge.
Video-based assessment is not without its limitations. Though video recording technology is
continually improving, recording open surgeries is not very common as it requires a room camera or
GoPro headset, and it limits the privacy of the operating room team (Brennan & Kirby, 2023).
Standardizing video-based assessment for board certification or hiring practices would require a
significant monetary investment by hospitals to purchase available technology (Ross et al., 2023).
Several ethical and legal considerations surround video recording in the operating room (Quach et al.,
2023). The question of intellectual property and data ownership arises, as does privacy and the risk of
surgical videos being used as evidence in malpractice lawsuits. A survey on video recording in the
operating room found that 63% of gynecologists, urologists, and residents surveyed preferred video
recording only without audio (Van De Graaf et al., 2021). In the case of formative feedback, there is little
quality control when seeking advice from online crowdsourced and social media platforms (Schlick et al.,
2020). While video-based assessment can offer valuable feedback, it is not a reflective process. The use
of scales and evaluators determines if specific criteria are being met (Cook & Hatala, 2016), whereas
reflection is an introspective process that promotes a deeper understanding of performance and critical
thinking (Kim, 1999). Despite these reservations, it is clear that the use of video-based assessment is
increasing and will continue to grow and be implemented throughout the field of surgery, from training
to certification, credentialing, and even employment (Prebay et al., 2016).
Video Review
Video review involves recording and playing back one's professional practice for analysis (Tripp & Rich, 2012). Video-based assessment differs from video review in that the former is intended to be viewed by evaluators, while the performer conducts the latter as a means of self-reflection and self-assessment. Unfortunately, video review appears to be an underutilized use of video in the field of surgery, as a literature search failed to retrieve any surveys or reviews examining how common video review practice is for surgeons. Additionally, only five percent of residency programs reported regularly using video recording in their operating rooms; it can be inferred from this statistic that video review is not commonly practiced in training (Esposito et al., 2022). A longitudinal study on the perceived usefulness of surgical residents recording their simulation performances for a video portfolio found that only 36% of the participants accessed their videos over the course of one academic year (McKinley et al., 2019). Though 95% of the residents expressed interest in access to a video library of their attendings' surgical procedures, only 59% were interested in recordings of their own performances, and 45% desired to review their videos with a senior resident or faculty member.
Only one arcle, wrien by the researcher, explored best pracces and recommendaons for
video review; however, the study was limited to interviewing eight expert roboc colorectal surgeons
(Soliman & Soliman, 2023). Despite this, there is evidence that video review is an eecve way to
improve professional skills across mulple elds, from teaching to sports, aviaon, and surgery (Ali &
Miller, 2018; Baecher et al., 2018; Walker et al., 2020; Zhang et al., 2019). In the surgical literature, video
review for the purpose of reecon is not explicitly evident; instead, it is used as a means of self-
debrieng with the goal of improving self-assessment and surgical skills (Nayar et al., 2020; Van der Leun
et al., 2022), which is a direct consequence of reecve pracce.
The rst study to report the eecve use of video review was by Goldman et al. (1970), who
found that surgical trainees permied to watch their recorded performance of an open inguinal hernia
repair, either with expert guidance or independently, signicantly decreased the number of
inappropriate surgical movements in a subsequent operaon compared to trainees who did not watch
their recorded performance. Since this seminal work, numerous studies have replicated the posive
eects of video review, including improved self-assessment, improved surgical skill quality and speed,
and reduced skill degradaon.
Jamshidi et al. (2009) found that residents who reviewed their videos of videoscopic suturing twice significantly improved in quality and time compared to the no-video control group. Likewise, Van der Leun et al. (2022) found significantly greater improvements in surgical simulation scores for medical students who were provided with their video performance and an expert benchmark video during practice sessions compared to medical students who were permitted to practice on a simulator without video. Vyasa et al. (2017) found that residents who watched their videos of a colonoscopy simulation improved their post-test performance over those who only practiced on a simulator. Kun et al. (2019) found that video review following a 72-hour delay from simulation performance reduced skill degradation compared to no video review. This is significant because it highlights the benefit of objectively watching one's performance and noticing aspects of the performance that may otherwise have been missed if relying on one's memory. Phillips et al. (2017) found that providing medical students with their video performance and an expert video is more effective than providing direct expert feedback without video. Independent video review has also improved the nontechnical skills of resident anesthesiologists to the same degree as residents who received expert debriefing without video assistance (Boet et al., 2011). A grounded theory study on how expert surgeons conduct robotic video reviews revealed several benefits of video review, including the ability to examine critical incidents objectively outside the pressure of the operating theater and the opportunity to track one's progress along the learning curve, resulting in perseverance and a growth mindset (Soliman & Soliman, 2023).
Not all studies support the conclusion that video review alone provides marked surgical improvement over other intervention methods. Halim et al. (2021) found no statistical difference between residents who self-assessed their performance of laparoscopic intracorporeal suturing using video review and residents who received expert verbal feedback; however, they found that residents who received expert video feedback outperformed the other two groups. Likewise, both Hawkins et al. (2012) and Scaffidi et al. (2019) found that video review alone did not improve the self-assessment skills of surgeons. Aldinc et al. (2022) found significant improvement in the cricothyroidotomy performance of medical students who received expert video feedback compared to those who conducted video reviews alone.
These studies claim that independent video review is an ineffective technique for improving surgical skills; however, a more accurate statement is that unguided independent video review is inadequate. The participants in these studies were not explicitly taught how to conduct a video review; therefore, they failed to effectively reflect on their practice, resulting in limited learning or skill improvement. Asking novice surgeons to simply review their videos generates minimal learning because they are attempting to navigate a domain of which they have limited prior knowledge; in other words, they do not know what they do not know (Kruger & Dunning, 1999). Ideally, all novice robotic surgeons should have access to guidance and support; however, providing experts to guide them through their video review is not a readily available solution. Fortunately, the literature points to alternative strategies novices can use in place of expert guidance.
Expert Video Review
Understanding expert surgeons' tacit knowledge and how they conduct video reviews can provide insight into what strategies novices can employ when instructors are not readily available to support them directly (Soliman & Soliman, 2023). The Implicit Theory of Intelligence is a motivational theory that posits that those with an entity (fixed) mindset believe that their ability and intelligence are static; thus, success is the result of talent. Conversely, those with an incremental (growth) mindset believe that ability and intelligence can be developed; thus, success is the result of effort (Dweck & Dweck, 2000). Mindset determines goal orientation: those with a fixed mindset are performance-oriented, meaning they are concerned with performing better than others, whereas those with a growth mindset are learning-oriented, meaning they are concerned with developing new skills and value learning in and of itself (Wolcott et al., 2021). Many expert surgeons exhibit a growth mindset, which is precisely one of the attributes that, in turn, makes them experts. They continuously reflect on their surgical videos because they believe there is always room for improvement (Soliman & Soliman, 2023). Without a learning-oriented mindset, video review will produce little benefit.
There is a direct correlaon between years of experience and nocing abilies; in reviewing
videos, experts have developed the ability to noce relevant informaon that novices are sll developing
(Yang et al., 2021). The ability to improve nocing can be eecvely taught through video-based learning
(Qi et al., 2022). Asking surgeons what they pay aenon to and what they noce while reviewing
roboc videos provides a framework for what novices should be taught. Likewise, expert surgeons ask
themselves numerous quesons within several categories when reviewing videos, including safety
concerns, eciency, procedural steps, crical incidents, future planning, and general reecon (Soliman
& Soliman, 2023). Many of these quesons follow Gibb’s Reecve Cycle described in chapter one,
including how do I feel, how did I do, what could have I done dierently, and what is my goal for the next
operaon? Providing novices with a list of quesons they can ask themselves during video review can
improve their nocing abilies and help develop a growth mindset, resulng in eecve reecve
pracce.
Expert surgeons have greater situaonal awareness and can ancipate and avoid problems.
Situaonal awareness is the ability to perceive the elements of an environment, comprehend what they
mean, and ancipate future states of the environment (Endsley, 1988). When experts review videos of
crical incidents, they not only examine what went wrong but also assess what circumstances, decisions,
and techniques led to the incident to begin with, and they determine what they will do dierently next
me (Soliman & Soliman, 2023). They are reorganizing their mental schemas through reecve pracce
and abstract conceptualizaon, which in turn leads to improved surgical skills and beer paent
outcomes (Kolb, 1984; Schön, 1983). Situaonal awareness requires prior knowledge, which is culvated
through experience and experse – qualies that novices inherently lack. The absence of experience can
be supplemented with other methods, such as intraoperave videos, as a way for novices to build up
their prior knowledge.
34
Benchmark Videos
Instruconal surgical videos are increasing in numbers across social media plaorms such as
YouTube, Facebook, and SurgeOn, as well as in published journals and surgical society websites (Lima et
al., 2022). Not only are surgeons using these instruconal videos to learn or solidify their procedural
knowledge preoperavely, but they are also using them as benchmark videos to compare their operave
performance to exemplary videos postoperavely. While previously menoned studies found that video
review alone did not improve self-assessment, having surgeons conduct independent video review with
a benchmark video improved their ability to self-assess (Hawkins et al., 2012; Scadi et al., 2019). In a
qualitave study, expert roboc surgeons insisted that watching other surgeons’ videos is equally
essenal as watching one’s own videos because “novices don’t know what good looks like” (Soliman &
Soliman, 2023, p. 7). Providing explicit benchmarks can prevent learners from employing subjecve
criteria to assess themselves or comparing themselves to others instead of objecve standards.
Benchmark videos are eecve because they allow one to compare and reect on performance,
resulng in more accurate self-assessment (Kruger & Dunning, 1999).
Video Review Guides
Video review can be further supported with the use of video review guides. Kirschner et al. (2006) acknowledged that an instructor is not always available to provide direct instruction and determined that process worksheets can be utilized as an equally effective form of direct instruction. A literature review by Tripp and Rich (2012) found that teachers prefer to analyze their videos using an observation guide. Kong et al. (2009) created a guiding framework for student-teachers to use during video review to scaffold their self-reflection because they are not yet discerning enough to identify pertinent aspects independently. Medical students who conducted structured self-assessment through the use of a checklist required fewer repetitions to master a mastoidectomy simulation compared to those who did not self-assess (Andersen et al., 2019). Finally, Wang et al. (2020) argued that guided video reflection is a novel tool that combines the concepts of video review with structured reflective practice, which may improve self-assessment accuracy and circumvent the need for an external expert or coach. Their study found that providing a group of medical students with a video of their knot-tying performance, a video review guide, and a benchmark video resulted in performance comparable to that of a group provided with expert feedback and one hour of expert guidance. The video review guide group achieved competency with fewer resources, saving time and money. Providing novice surgeons with a written video review guide encompassing the expert techniques described in the previous section, such as reflective questions and prompts to guide what novices should pay attention to, may enhance their self-debriefing skills.
It is important to note that a video review guide is different from an assessment scale such as OSATS, GEARS, etc. Assessment scales serve as prescriptive checklists and outcome measures to ensure specific criteria are met and tasks are completed consistently with minimal error (Martin et al., 1997). Reflective guides are more introspective and aim to facilitate critical thinking and thoughtful analysis of one's performance, what led to the quality of the performance, and changes that can be made to future performances (Nagro et al., 2017). While objective scales focus on performance and can improve self-assessment, they may not always promote a cycle of continuous learning and development in the same way a reflective guide can. Guides can help develop in novices the reflective skills that experts exercise tacitly. They are a self-help tool that provides a methodology for a complex metacognitive process. Reflective guides offer flexibility in the amount of attention each step receives and serve as a bridge for novices to learn reliably until they can examine more subtle aspects independently (Leise & Beyerlein, 2007).
Figure 5
The Role of Intraoperative Videos in Surgical Education
Perceived Usefulness
The best tools have lile worth if they are not ulized. The second sub-queson of this study
seeks to understand to what extent novice roboc surgeons nd wrien surgical video review guides
useful. Perceived usefulness is dened as the extent to which an individual believes ulizing an object or
system will improve their job performance (Davies, 1989). This construct is most frequently used and
measured as part of the Technology Acceptance Model (TAM), which is used to determine behavior
intenon and, ulmately, the likelihood of a product being adopted. The factors that inuence perceived
usefulness include ease of use, how compable the tool is with the user’s exisng pracces, beliefs, and
values, the perceived benets of the tool, and the opinions, recommendaons, and experiences of
others (Venkatesh & Davis, 2000). A systemac review of the adopon of mobile health applicaons
validated the importance of perceived usefulness by healthcare professionals in choosing to ulize a new
technology (Gagnon et al., 2016).
Summary
Reecve pracce is not an innate skill but is most eecve when explicitly taught (Gray &
Coombs, 2018). A common form of reecve pracce during surgical training is debrieng with a surgical
aending post-operavely; however, this pracce is not always completed eecvely and oen ceases
once surgeons operate independently. The use of video in surgical educaon takes three forms:
instruconal videos, video-based assessment, and video review. All three forms are benecial and should
37
be ulized, but each serves a dierent purpose; thus, they are eecve in various ways. While the
literature on the rst two forms is robust, video review is currently being ulized in a limited manner.
Independent video review can serve as a catalyst for reecve pracce when conducted properly. In
place of conducng video reviews with expert surgeons, novices may be provided with benchmark
videos and video review guides to achieve equivalent levels of accurate self-assessment and surgical skill
improvement; however, to date, there are no video review guides for roboc surgeons. This dissertaon
contributes to the eld of surgical educaon by tesng the ecacy of a video review guide on roboc
surgical skill improvement.
CHAPTER THREE: METHODOLOGY
This research study ulized the Experienal Learning Cycle (Kolb, 1984) as a framework to
examine the eect of guided but independent video review on roboc surgical skill improvement.
Constructs from Gibb’s Reecve Cycle (1988) were woven through a wrien surgical video review guide
and provided to novice roboc surgeons to explicitly teach them how to reect on their surgical
performance. This chapter explains the study design, parcipants, instrumentaon, data collecon, and
analysis.
Research Quesons
To evaluate the eecveness of an intervenon designed to improve roboc surgical technical
skills, this study sought to answer the following central research queson and sub-quesons:
CRQ: What is the impact of ulizing a wrien video review guide during independent video
review on the surgical skills of novice roboc surgeons?
SQ1: Is there a stascally signicant dierence in the improvement of roboc surgical
technical skills using a simulator between novice roboc surgeons who conduct an
independent video review using a wrien surgical video review guide compared to those
who do not use a guide?
SQ2: To what extent do novice roboc surgeons perceive wrien surgical video review
guides as useful?
Research Design
A quantave study with a between-group quasi-random experimental design was conducted to
determine how eecve a surgical video review guide is in improving the surgical technical skills of
novice roboc surgeons. An experiment is a suitable research design to determine whether an
39
intervenon inuences an outcome (Creswell, 2019). This experiment sought to determine whether a
surgical video review guide (independent variable) accelerates surgical technical skill improvement
(dependent variable), thereby reducing the length of the learning curve. The acceleraon of roboc
technical skill improvement was determined by comparing the outcomes of two groups: a video review-
only group (VRO) served as the control, and a surgical video review guide (SVRG) group served as the
intervenon. Improvement in technical skill from the rst to the second simulaon was expected across
all parcipants due to repeon and purposeful pracce – an intenonal eort to improve performance
(Ericsson & Pool, 2016). The roboc simulator provided immediate performance feedback in the form of
an overall score. This feedback is a characterisc of purposeful pracce and may have inuenced their
second simulaon performance. Therefore, conducng the experiment with two groups was necessary
to determine whether using a video review guide signicantly improves roboc surgical skills while
controlling for the confounding variable of purposeful simulator pracce. To ensure that the eect of the
SVRG was the only variable being measured, a comparison group was also necessary to control for the
confounding eect of video review.
Sample and Recruitment
The study's target population was surgeons learning how to perform robotic abdominal surgery. The surgeons may be in training, such as a surgical residency or specialty fellowship, or in practice, choosing to adopt robotics into their surgical career. Acquiring a new skill takes time and effort, and the relationship between time and skill improvement can be graphically represented as a learning curve (Yelle, 1979). Learning curve theory posits that the more a task is performed, the less time and fewer resources will be required to complete the task. The learning curve length of individuals acquiring a specific skill can vary greatly based on how much expert support they receive (Rice et al., 2020). Providing sufficient support while a learner is acquiring a new skill is critical to prevent them from abandoning the task and to maximize skill transfer (Ritchie et al., 2021). Video review guides are meant to be scaffolding tools to assist learners in gaining and retaining a new skill (McVee, 2018). This study explores the impact of a video review guide as a means of support for novice surgeons in lieu of expert support. Pernar et al. (2017) found that overcoming the robotic learning curve ranges widely from 8 to 128 cases depending on surgical specialty and case complexity, with an average of 25–44 cases to overcome the initial learning curve for colorectal cases. Therefore, for this study, a novice robotic surgeon was defined as one who has completed fewer than 41 robotic cases independently.
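Learning curve theory is commonly formalized as a power law in which per-case time falls by a constant fraction each time cumulative case volume doubles (the classic form covered in reviews such as Yelle's). The sketch below illustrates this form with entirely hypothetical numbers; the 240-minute first case and the 90% learning rate are assumptions for demonstration, not figures from the cited studies.

```python
import math

def time_for_case(t1: float, n: int, learning_rate: float = 0.9) -> float:
    """Classic power-law learning curve: the time required for the n-th
    repetition drops by a fixed percentage each time cumulative repetitions
    double. learning_rate=0.9 means each doubling cuts time by 10%."""
    b = math.log(learning_rate) / math.log(2)  # negative exponent
    return t1 * n ** b

# Hypothetical 90% curve starting from a 240-minute first case
for n in (1, 2, 4, 8, 16, 32):
    print(f"case {n:2d}: {time_for_case(240.0, n):.0f} min")
```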
The sample for this study was gathered from two different sites, as described below. The first site was a conference held in Orlando, Florida, in November 2023. The second site was a robotics training course in Peachtree Corners, Georgia, in February 2024. The participants at both locations engaged in the study in the same manner, enabling the utilization of a single sample.
Site One
The rst group of novice roboc abdominal surgeons was recruited during the Orlando
Colorectal Congress (OCC) from November 15–17, 2023. The OCC is an annual meeng for colorectal and
general surgeons, residents and fellows, gastroenterologists, advanced pracce providers (e.g., physician
assistants, nurse praconers), and hospital administrators. The conference’s objecve is to teach new
surgical techniques and procedures, discuss the surgical educaon of colorectal diseases, and review the
management of clinical scenarios. This site was chosen due to the presence of a da Vinci Skills Simulator
at the conference and the aendance of approximately 100 surgeons from across the United States,
which allowed the sample to be more generalizable to the study populaon (Salkind, 2010).
A conference organizer announced the details of the study to the conference aendees at the
start of each day, informing them of the study locaon and inclusion criteria. The study took place in the
vendor exhibit hall where Intuive Inc. displayed a da Vinci Xi robot, vision cart, and surgeon console.
Though the study was available from 9:00 am to 4:00 pm each day, the parcipants were recruited
41
during the conference breaks, which limited the number of aendees able to parcipate. In total, 15
parcipants were recruited from site one, with seven in the control group and eight in the intervenon
group.
Site Two
To increase the sample size, more participants were recruited during a Robotic Training Advanced Course hosted by the Association of Program Directors for Colon and Rectal Surgery (APDCRS) at Intuitive Surgical, Inc. in Peachtree Corners, Georgia, from February 28–March 1, 2024. This one-day course is for colorectal fellows in the United States who have completed basic robotic training and would like to learn more advanced techniques before they enter practice. The three co-instructors were expert colorectal surgeons from across the United States. The course was repeated for three days with different fellows each day to maximize the number of attendees. This site was chosen due to the availability of two da Vinci Skills Simulators and the attendance of 42 colorectal fellows. During surgical training, attendings guide fellows through the vast majority of surgical procedures; therefore, none of the fellows had completed more than 40 robotic cases independently, allowing them all to qualify for this study.
The attendees received didactic instruction in a conference room each morning and hands-on robotic training in a lab. Didactics were delivered in a lecture format, introducing robotic principles and the steps of the robotic operation the trainees would perform during the lab. The researcher introduced the study to each cohort in the conference room and then individually invited fellows to participate in the lab. During the lab portion of the course, every two fellows were assigned to one robotic station, and the fellows took turns operating on the surgeon console. The fellows who were not operating were invited to participate in the study. Once a fellow completed the study, they switched with their partner. In total, 28 participants were recruited from site two, with 14 in the control group and 14 in the intervention group.
A total of 43 parcipants were recruited across both sites – 22 in the intervenon group and 21
in the control group. The parcipants were assigned to either the intervenon or control groups using
straed quasi-random sampling. This method of assignment ensured that both groups were equal at
baseline. Table 1 provides the distribuon of the parcipants by roboc experience level and gender in
each group.
Table 1
Stratified Random Sampling of Participants in Each Group

                        Control Group (n=21)    Intervention Group (n=22)
# Robotic Cases
  0–10                           10                        11
  11–20                           4                         3
  21–30                           5                         6
  31–40                           2                         2
Gender
  Male                           11                        11
  Female                         10                        11
Intervenon
This study examined the ecacy of a wrien surgical video review guide in improving roboc
technical skills. The parcipants in the studys intervenon group received a physical copy of the Surgical
Video Review Guide (SVRG) (Appendix B) to use during the video review poron of the study. The
researcher created the guide to instruct novice roboc surgeons on eecvely reviewing their roboc
simulaon video. The guide provides quesons and prompts for novices as they watch their surgical
43
videos to enable deeper self-reecon and accurate self-assessment. The intent was that by the end of
the guided but independent video review, the parcipants would develop a clear, aconable plan to
implement during the second simulaon, resulng in improved simulator performance. The parcipants
were given a pen and the opon to take notes directly on the guide.
The researcher constructed the SVRG through a three-phase process. Phase one consisted of a
literature search on video review guides, methods, strategies, and tools to prompt reecon on
performance. Phase two involved consolidang, analyzing, and synthesizing these reecve resources,
followed by an inial design of the SVRG. In phase three, the researcher consulted with a subject maer
expert on roboc surgery and video review to revise and nalize the SVRG.
Figure 6
Three-Phase Development Process
The SVRG was constructed from a collection of evidence-based tools and strategies from previous research by Ahmed et al. (2013), Gibbs (1988), and Soliman and Soliman (2023). Following Gibbs' Reflective Cycle (1988), the four main reflective questions prompt the participants to evaluate, analyze, and draw conclusions about their performance and then determine at least one change they will make during the next simulation. The SVRG follows a similar structure to the validated SHARP debriefing tool (Ahmed et al., 2013); however, it differs from SHARP in two ways. SHARP is designed to be used by attending surgeons when they debrief operative cases with their surgical trainees, and it begins with setting a goal with the trainee before the case begins. In contrast, the SVRG is designed to be used independently, and since it is constructed specifically for video review, it begins with determining the purpose of the review. The latter change was made because Soliman and Soliman (2023) found that expert surgeons first categorize their video review based on its purpose, which then determines what they pay attention to while they conduct their review. This structured approach allows surgeons to notice things they may have otherwise missed. Below each main reflective question are sub-questions to prompt further reflection, retrieved from the same study. Though expert surgeons collectively identified 65 questions they ask themselves during video review, only 10 were included, and the overall guide was limited to half a page to avoid cognitive overload (Sweller, 1988). Finally, the surgical video review guide was reviewed by a subject matter expert (a colorectal surgeon who has completed over 900 robotic cases and actively contributes to the field of surgical education) for content validation.
To implement the intervention, the participants' robotic simulation baseline performances were first video recorded. The participants in the intervention group were then provided with the SVRG and instructed to watch the recording of their baseline simulation independently while reflecting on their performance.
Instrumentaon
In this study, instrumentaon refers to the objecve tools used to collect the parcipants’
background characteriscs, measure their technical skills, and measure their perceived usefulness of
video review guides. Instrumentaon consisted of a demographic survey, a roboc simulator, and two
exit surveys—one for each group. To avoid collecng any idenfying data, each parcipant was assigned
a number, which they documented on both the demographic and exit surveys. The videos of their
simulator performances were also saved with their assigned number, so all the parcipant data
remained linked while maintaining anonymity. This secon describes each of the instruments in detail.
45
Demographic Survey
Informed consent and demographics were collected through a single survey (Appendix C) hosted by Qualtrics (https://qualtrics.com). The first two questions following informed consent verified that the participants qualified for the study: 1. Have you been trained to use a da Vinci robot? (e.g., simulator exercises, basic robotic training, etc.) 2. Approximately how many robotic cases have you completed all or key portions of the operation independently? If a participant answered no to question one or 41+ to question two, the survey ended, and they could not participate in the study. Demographic information included gender, age, surgical position, surgical specialty, years of total surgical experience, where they live, and video review frequency. Stratified quasi-random sampling based on the reported number of independent robotic cases completed determined whether each participant was placed in the intervention or control group. This form of sampling was chosen because the most significant indicator of surgical skill is experience (Azari et al., 2020; Ericsson, 2004). Ensuring that the participants' experience level was similar across groups increased the likelihood that the baseline simulator data would be similar, which was necessary to effectively compare surgical skill improvement between groups after the post-test.
Roboc Simulator
A da Vinci roboc simulator created by Intuive Surgical Inc. was used to objecvely measure
the roboc technical skills of the parcipants at baseline and aer video review. The parcipants
completed the three-part Combo Exercise from the da Vinci SimNow Library, which is built-in soware on
the simulator (Figure 7). The exercise consists of a three-arm relay, which tests the surgeons' ability to
use and control the roboc arms; needle driving, which simulates surgical suturing; and energy usage,
which allows surgeons to pracce cauterizing and coagulang ssue. This simulaon was chosen
because it highlights common technical skills required for abdominal surgery and takes less than 10
minutes to complete. In addion, it is considered a more advanced simulaon, allowing for more room
46
for improvement between pre and post-tests compared to a more basic simulaon in which most
parcipants would likely score high on their rst try.
Figure 7
Images from the SimNow Combo Exercise
Note. From le to right: needle driving, energy usage, three-arm relay, score report. Images printed with
permission (Appendix D).
The benets of using a roboc simulator are that it ensures every parcipant receives idencal
tests and it can video record each parcipants performance for subsequent review. The simulator
collects validated objecve data necessary for analysis, including me to compleon – how long it takes
the parcipant to complete the simulaon exercise; economy of moon – the distance the roboc arms
moved to complete the exercise; and penales – how many errors such as roboc arm collisions,
excessive force, etc. were commied. The simulator then provides a single overall score out of 100 based
on these metrics (Tellez et al., 2024) (see Appendix E for a sample simulaon report). The reports from
the simulaon exercise were used to determine the ecacy of the SVRG.
Exit Surveys
An exit survey was provided at the end of the study to gain insight into the participants' perceived usefulness of video review guides. Perceived usefulness is the extent to which a person believes a system or tool will improve their job performance (Davis, 1989). Understanding surgeons' perceived usefulness of video review guides is essential because it determines the likelihood of them adopting the tool in their video review practice.
The participants completed one of two exit surveys depending on their group assignment (Appendix F). The exit survey for the intervention group included eight statements regarding their perception of the SVRG and six statements regarding their perception of video review guides in general. In addition, one qualitative question requested suggestions to improve the video review guide. The exit survey for the control group included the same six statements regarding their perception of video review guides in general, one statement on whether they thought a video review guide would have helped them in this study, and two qualitative questions seeking to understand their video review process. The participants responded to the statements using a 7-point Likert scale, with one meaning strongly disagree and seven meaning strongly agree.
A literature search located one existing exit survey on a video review guide, designed to improve video comprehension for students learning Russian (Iskold, 2008). Three statements were adapted from Iskold's study, with the wording altered to be specific to the SVRG and to video review guides in general: the video review guide allowed me to notice things in my performance I may not have noticed otherwise; the video review guide was distracting; and I would use a surgical video review guide when reviewing my operative videos in the future. The remaining statements were created in collaboration with an expert robotic surgeon serving as a subject matter expert for this study and were guided by the perceived usefulness and perceived ease of use scales from the Technology Acceptance Model questionnaire (Davis, 1989).
Data Collecon Procedures
This research study received Instuonal Review Board approval from the University of Central
Florida in October 2023 (Appendix G). Aer data collecon at Site One, a modicaon was submied to
IRB for approval of Site Two in an eort to increase the sample size (see Sample and Recruitment
secon). The modicaon was approved in January 2024. The following is the list of steps taken to
collect data from the parcipants at both sites. The me it took the parcipants to complete the study
ranged from 25 to 50 minutes.
1. The researcher approached each potenal subject to parcipate in the study and
presented them with an index card with a number wrien on it (the index cards were
distributed in sequenal order). They were asked to keep the index card for the duraon
of the study.
2. A sheet of paper with a Quick Response (QR) code was then presented to each
parcipant for them to scan with their personal mobile device to access the Qualtrics
survey, which contained the informed consent form and demographic quesons. The
parcipants entered their assigned number on the demographic survey and indicated
how many roboc cases they had completed independently. This step was necessary to
verify each parcipants eligibility for the study and to strafy the parcipants so that
the control and intervenon groups had equal levels of roboc experience (see Sample
and Recruitment above).
3. The parcipants were then presented with another QR code to access an exemplary
video of the Combo Exercise simulaon without audio on YouTube
49
(hps://youtu.be/hQuDMT9wI8k). Allowing the parcipants to preview the simulaon
exercise before the pretest is consistent with ndings that over 90–98% of surgeons
watch surgical videos to prepare for surgery (Mota et al., 2018; Rapp et al., 2016). The
parcipants were informed that they could increase the playback speed of the video and
that they should not aempt to memorize or study the video but rather simply become
familiar with what would be expected of them during the simulaon.
4. While the parcipants reviewed the exemplary video, the researcher determined their
group assignment using straed quasi-random sampling to ensure that both groups
were equal regarding roboc experience, as indicated on the demographic survey.
5. The parcipants then proceeded to the roboc simulator and performed the simulaon
Combo Exercise. At Site One, the da Vinci simulator was connected to a cloud-based
system called Intuive Hub, which allowed for direct video recording. This service was
unavailable at Site Two, so the two simulators used were connected to personal laptops
for video recording. The recordings were labeled with the parcipant number, group,
and pre or post-test (e.g., 1CPre, 2IPost, etc.). The screens were turned o or turned
around during the simulaon to maintain the privacy of the parcipants during the
study. The score sheet of the simulaon performance automacally appears on the
screen upon compleon of the exercise and thus was included in each recording.
6. Upon completion of the pretest simulation, the video recording was stopped, and the participants were instructed to watch their recorded performance in its entirety. They were provided the option to pause, rewind, and adjust the playback speed as they deemed fit.
   Participants in the control group were only instructed to review their video.
   Participants in the intervention group were given an SVRG and pen. They were told that the purpose of the guide was to help deepen their reflection, and they were instructed to refer to the SVRG while playing back the video recording of their performance. They were also given the option to write notes on the guide.
7. After completing their video review, the participants in both groups repeated the same simulation as a post-test. The second simulation performances were recorded to keep a backup copy of the results; however, the participants did not review the post-test videos. During the second simulation, the researcher recorded on a spreadsheet which group the participant was assigned to and how many robotic cases they reported performing, to determine the placement of subsequent participants.
8. Finally, the participants scanned a third QR code using their personal mobile devices to access and complete the exit survey assigned to their group. The participants input their assigned number on the exit survey to link their exit survey data to the demographic survey and simulation scores.
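For illustration, the balancing described in steps 4 and 7 can be sketched in code. The following minimal Python sketch assumes that, within each robotic-experience stratum, each new participant is placed in whichever group currently has fewer members; this is one plausible reading of the procedure, not the researcher's actual spreadsheet logic.

from collections import defaultdict

# Strata match the robotic case ranges collected on the demographic survey.
def stratum_for(cases):
    """Map a reported robotic case count to its experience stratum."""
    if cases <= 10:
        return "0-10"
    if cases <= 20:
        return "11-20"
    if cases <= 30:
        return "21-30"
    return "31-40"

class StratifiedAssigner:
    """Keeps per-stratum tallies so the two groups stay balanced."""
    def __init__(self):
        self.counts = defaultdict(lambda: {"control": 0, "intervention": 0})

    def assign(self, cases):
        tally = self.counts[stratum_for(cases)]
        # Place the participant in the smaller group; min() breaks ties
        # toward "control", so assignments alternate within a stratum.
        group = min(tally, key=tally.get)
        tally[group] += 1
        return group

assigner = StratifiedAssigner()
print(assigner.assign(5))   # control (first participant in the 0-10 stratum)
print(assigner.assign(8))   # intervention (balances the same stratum)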
Figure 8
Flowchart of Study Procedures
Data Analysis
Data analysis for this research study consisted of quantitative analysis techniques performed using Jamovi statistical software. Descriptive statistics were calculated on both groups' pretest and post-test simulator data. A Shapiro-Wilk test determined whether the two groups followed a normal distribution, and Levene's test was calculated to ensure the variance between the groups did not differ significantly. Next, a repeated measures ANOVA with between-subjects and within-subjects factors was calculated to determine what factors had an effect on the post-score. This test determines whether the SVRG had a significant effect in improving the robotic technical skills of the participants. All tests were two-tailed with statistical significance set to p-values < 0.05. Box-and-whisker plots were generated to determine whether there were outliers in the data and to examine the variability within each group. Two scatterplots were generated: one to determine how well the pre-scores predicted the post-scores of each group, and one to determine how well the pre-scores predicted the difference between pre- and post-scores. Finally, a correlation matrix examined whether any demographic categories correlated with the pre- or post-scores of the participants.
Descriptive statistics were calculated on the exit survey data to determine whether novice robotic surgeons perceived video review guides as useful and whether they believed the SVRG in this study helped improve their performance on the post-test. The reliability of each measure was calculated using Cronbach's alpha. Only a few participants answered the qualitative questions at the end of the survey, so those questions were not analyzed due to insufficient data.
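For readers who wish to reproduce these screening steps outside Jamovi, the following minimal Python (pandas/scipy) sketch illustrates them; the file name and the 'group', 'pre', and 'post' column names are hypothetical placeholders rather than the study's actual data layout.

import pandas as pd
from scipy import stats

df = pd.read_csv("simulator_scores.csv")   # hypothetical data file
ctrl = df[df["group"] == "control"]
intv = df[df["group"] == "intervention"]

# Descriptive statistics for each group's pretest and post-test scores
print(df.groupby("group")[["pre", "post"]].describe())

# Shapiro-Wilk test for normality, per group and time point
for name, g in (("control", ctrl), ("intervention", intv)):
    for col in ("pre", "post"):
        w, p = stats.shapiro(g[col])
        print(f"{name} {col}: W = {w:.3f}, p = {p:.3f}")

# Levene's test for homogeneity of variance between the groups
for col in ("pre", "post"):
    f, p = stats.levene(ctrl[col], intv[col])
    print(f"Levene {col}: F = {f:.4f}, p = {p:.3f}")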
Threats to Validity
Table 2 lists the internal and external threats to validity and how they were addressed for this
study.
Table 2
Threats to Validity
Internal Threats

Threat: Confounding variables
Status: Mostly addressed
Explanation: Participants in both groups were given the same procedures and amount of time. The baseline scores of both groups were the same. Having a comparison group and conducting pre/post-tests controlled for simulator practice and video review. Stratified quasi-random sampling controlled for prior robotic case experience. The study did not consider the amount of simulator experience each participant had prior to their participation; some participants were familiar with the Combo Exercise, while others had never seen it before.

Threat: History
Status: Mostly addressed
Explanation: Most of the participants completed the study without interruption; however, because of the public locations of the robotic simulators, some of the participants were briefly distracted by people speaking to them or by their phones. The researcher conducted the study for all 43 participants; therefore, there was no "instructor effect."

Threat: Maturation
Status: Addressed
Explanation: The participants completed the study in less than one hour.

Threat: Testing
Status: Mostly addressed
Explanation: Designing the study with two groups controls for the confounding variable of simulator practice. Performing the simulation twice within a short time span may contribute to fatigue; however, the average time to complete the pretest was eight minutes, and the post-test seven minutes. The total study duration of up to 50 minutes may have caused some participants to rush through the post-test, resulting in a lower score than their pretest.

Threat: Instrumentation
Status: Addressed
Explanation: The simulator provides a validated, objective, and consistent measurement. The Combo Exercise glitched 6 times out of the 86 performances, requiring a participant to repeat the first portion of the simulation. Despite this, these participants' scores were within the normal range of the group.

Threat: Statistical regression
Status: Partially addressed
Explanation: Data analysis found statistical regression towards the mean for outlier participants who scored extremely high on the pretest and worse on the post-test, and for those who scored extremely low on the pretest and significantly higher on the post-test. However, there was no statistical difference between the two groups on the pretest; therefore, any differences between the groups on the post-test cannot be attributed to statistical regression.

Threat: Selection
Status: Addressed
Explanation: The researcher did not personally know any of the participants and used stratified quasi-random sampling to assign the participants to the groups based on robotic experience.

Threat: Mortality
Status: Addressed
Explanation: All participants completed the study.

Threat: Placebo/nocebo effect
Status: Mostly addressed
Explanation: The participants were not explicitly informed of which group they were assigned to; many believed video review alone was the intervention. The participants saw their pretest scores prior to the video review, and most were motivated to improve their post-test scores. Both groups perceived video review guides as useful.

Threat: Contamination effect
Status: Addressed
Explanation: The participants completed the study in less than one hour.

Threat: Hawthorne effect
Status: Mostly addressed
Explanation: There is no evidence that the researcher can influence simulator performance. The participants may have perceived video review guides as more useful because they knew this was the subject of the study.

Threat: Experimenter bias
Status: Addressed
Explanation: The simulator is an objective measure of robotic technical skill that the experimenter cannot impact.

Threat: Interaction effects
Status: Addressed
Explanation: Confounding variables such as simulator practice and video review were controlled by using a comparison group.

External Threats

Threat: Sample bias
Status: Mostly addressed
Explanation: The majority of the participants were colorectal fellows, who are included in the target population. The study did not have many participants who were practicing surgeons in their robotic learning curve.

Threat: Reactive & interaction effects of testing
Status: Not addressed
Explanation: Surgeons would not perform an operation, review their video, and then repeat the operation in the span of one hour. Due to the limited time and access to participants, a longitudinal study was not possible, and the participants had to complete the study in one sitting.

Threat: Reactive effects of arrangements
Status: Partially addressed
Explanation: There is literature supporting the transfer of robotic technical skills from a simulator to the operating room (Schmidt et al., 2021). The ability to video record robotic operations is becoming more readily available, and the SVRG is a simple intervention that can be accessed for free on any personal device. There are currently no high-quality surgical procedure simulations, so this study was limited to examining robotic technical skills using inanimate objects (e.g., pegs, doors). Surgeons review surgical videos to analyze safety, procedural steps, critical incidents, anatomy, and technical skills. The pre- and post-test in this study were identical, but no two surgical operations are identical in the same way.

Threat: Multiple treatment interference
Status: Addressed
Explanation: A single intervention was addressed in this study, the SVRG.
Summary
This study used a between-group quasi-random experimental design with a control and intervention group to determine the effect of a written video review guide on novice surgeons' robotic surgical skill improvement. A secondary research question sought to determine to what extent novice robotic surgeons find video review guides useful. The SVRG was constructed utilizing Gibbs' Reflective Cycle (1988) as a framework and evidence-based debriefing and reflective strategies for surgeons (Ahmed et al., 2013; Soliman & Soliman, 2023). Data analysis involved measuring the difference in robotic skill improvement on a da Vinci Skills Simulator between novice robotic surgeons (<41 robotic cases) who used the SVRG during the video review portion of the study and those who did not use the guide. Participants responded to Likert-scale statements on exit surveys to measure their perceived usefulness of video review guides. The following chapter will discuss the findings of this study.
CHAPTER FOUR: FINDINGS
Introducon
This study aimed to describe and analyze the effect of a video review guide on the robotic
surgical technical skill improvement of novice robotic surgeons. Kolb’s Experiential Learning Theory
(1984) and Gibb’s Reflective Cycle (1988) provided a framework for how reflection can lead to deeper
learning. The use of an evidence-based surgical video review guide by surgeons aimed to enhance
reflective practice. Reflective practice would result in improved robotic surgical technical skills, thereby
accelerating the robotic learning curve of novices who lack expert robotic surgical support. To achieve
this objective, a quantitative study with a between-group quasi-random experimental design was conducted. This chapter presents the data analysis and findings of the study.
Research Questions
The central research question and sub-questions that guided this study are listed below. The following sections present the demographic characteristics of the participants, followed by the statistical analyses and results for each sub-question. All data analyses were completed using Jamovi statistical software. A p-value < .05 was considered statistically significant. This chapter concludes with a summary of the results to answer the central research question.
CRQ: What is the impact of utilizing a written video review guide during independent video review on the surgical skills of novice robotic surgeons?
SQ1: Is there a statistically significant difference in the improvement of robotic surgical technical skills using a simulator between novice robotic surgeons who conduct an independent video review using a written surgical video review guide compared to those who do not use a guide?
SQ2: To what extent do novice robotic surgeons perceive written surgical video review guides as useful?
Parcipants
Data Cleaning
Data for this study was collected from two sites. To qualify for the study, the parcipants must be
surgeons who have completed roboc training and have independently performed less than 41 roboc
surgical operaons. Fieen parcipants were recruited from the Orlando Colorectal Congress in Orlando,
Florida, and 28 from the APDCRS Roboc Course in Peachtree Corners, Georgia, for a total of 43
parcipants: 22 in the intervenon group and 21 in the control group.
The data of two parcipants in the intervenon group were enrely or parally removed prior to
data analysis. One parcipant completed the Combo Exercise simulaon pretest in 29.8 minutes and the
post-test in 15.5 minutes, while the average me was 8.9 minutes and 7.1 minutes, respecvely,
resulng in a score of zero on both tests. Their data was enrely removed aer concluding that they
were not adequately robocally trained and, therefore, did not qualify for the study. The simulaon data
of a second parcipant was removed due to the simulator failing to provide a post-test score report upon
compleon. The exit survey data of this parcipant was included in the data analysis since they qualied
and completed the enre study. One parcipant in the control group did not complete the exit survey;
therefore, that person’s simulator data was included in the data analysis of SQ1, but their exit survey
data was not included in the analysis of SQ2.
Box-and-whisker plots revealed several outliers in the simulator portion of the data set; however, removing the outlier participants did not significantly affect the results. Therefore, their data remained in the final analysis to retain a larger sample size. The results of the boxplots will be discussed in further detail in a subsequent section of this chapter. Forty-two participants were included in the final data analysis, with 41 out of the 42 included in each sub-question.
Demographics
At the time of data collection, the 42 participants lived in 16 different states across the United States (see Figure 9). Table 3 outlines additional demographic information that was collected from the participants. The sample was made up of 50% males and 50% females. Since both data collection sites catered to colorectal surgeons, only two participants were general surgeons; no other surgical specialty was reported. Thirty-seven (88%) participants were in colorectal fellowship at the time of data collection and had similar years of surgical experience.
Figure 9
Count of Participants Operating in Each State
Table 3
Demographic Characteristics of Participants

                        Total Sample (n=42)   Intervention (n=21)   Control (n=21)
Characteristic               n       %             n       %            n      %
Gender
  Male                      21      50            10      48           11     52
  Female                    21      50            11      52           10     48
Specialty
  Colorectal                40      95            20      95           20     95
  General                    2       5             1       5            1      5
Position
  Resident                   2       5             1       5            1      5
  Fellow                    37      88            19      90           18     85
  In Practice                3       7             1       5            2     10
Experience
  0-5 years                  5      12             3      14            2     10
  6-10 years                34      81            17      81           17     81
  11-20 years                2       5             1       5            1      5
  21+ years                  1       2             0       0            1      5
Age
  25-34                     23      55            12      57           11     52
  35-44                     18      43             9      43            9     43
  45-54                      1       2             0       0            1      5
Table 4 displays additional information collected from the participants at the beginning of the study. All of the participants at the APDCRS site were colorectal fellows. Several of these participants requested clarification when answering the question: Approximately how many robotic cases have you completed independently? The researcher instructed them to report how many robotic cases they had completed without any assistance from an attending, which may be less than the total number of robotic operations they had participated in. This question was included to ensure that both groups had equivalent levels of robotic experience at baseline. Eighty-six percent of the participants reported watching their own operative videos less than half of the time, with 41% stating they never watch their videos and 45% watching them sometimes.
Table 4
Robotic & Video Review Experience

                          Total Sample (n=42)   Intervention (n=21)   Control (n=21)
Baseline Characteristic        n       %             n       %            n      %
Robotic Cases
  0-10                        20      48            10      47           10     47
  11-20                        7      16             3      14            4     19
  21-30                       11      26             6      29            5     24
  31-40                        4      10             2      10            2     10
Video Review Freq.
  Never                       17      41            10      47            7     33
  Sometimes                   19      45             8      38           11     52
  About half the time          3       7             2      10            1      5
  Most of the time             3       7             1       5            2     10
  Always                       0       0             0       0            0      0
Site
  OCC                         14      33             7      33            7     33
  APDCRS                      28      67            14      67           14     67
Sub-Question One
Is there a statistically significant difference in improvement of robotic surgical technical skill using a simulator between novice robotic surgeons who conduct independent video review using a written surgical video review guide compared to those who do not use a guide?
Da Vinci Xi robotic simulators were used to measure the robotic technical skills of the participants. The participants performed a three-part exercise from the SimNow Library named Combo Exercise for both the pretest and post-test. The simulator recorded the time, measured in seconds, and the economy of motion (path length), measured in centimeters, required to complete the exercise. The simulator also counted penalties such as instrument collisions, excessive force, and improper energy use. Based on these three metrics, the simulator provided a single overall score out of 100. The simulators were used as an objective measure of technical skill to determine the effectiveness of the surgical video review guide (SVRG) intervention.
Descriptive Statistics
Data analysis began with calculating the descriptive statistics for the intervention and control groups (see Table 5). Since time, economy of motion, and penalties are the components of the overall simulator score, statistical analysis focused only on the pre- and post-scores of the intervention and control groups. The mean of the control group's baseline scores was slightly higher (M = 66.52, SD = 25.42) than the intervention group's (M = 60.8, SD = 23.4). The post-test scores of the intervention group (M = 80.35, SD = 15.31) were nearly identical to the control group's (M = 80.33, SD = 15.74).
Table 5
Descriptive Statistics for Intervention Group (n = 20) and Control Group (n = 21)

         Group   Pre Score   Post Score   Pre Time   Post Time   Pre EoM   Post EoM   Pre Pen.   Post Pen.
M          I       60.8        80.35       509.67      429.04     767.18     690.75      33.1       17.05
           C       66.52       80.33       495.56      429        754.51     691.07      26.71      16.86
SD         I       23.4        15.31       101.77       81.56     171.55     132.77      20.82      13.89
           C       25.42       15.74       150.74       81.85     185.42     137.51      18.53      13.58
Median     I       64          82.5        499.4       406.65     715.1      691.15      30         15.5
           C       74          88          472.5       418.8      717.8      633.3       23         12
Min        I        0          33          366.3       323.2      462.3      497.6        7          2
           C        6          35          290.4       281.8      537.6      524.4        2          5
Max        I       93          98          723.7       625.6     1082.3      974.2      100         65
           C       98          95          876.2       642       1252.2      961.3       77         62

Note. Score calculated out of 100. Time measured in seconds. EoM = economy of motion, measured in cm. Pen. = penalties, measured as a count. I = intervention group; C = control group.
Tests of Assumptions
Tests of assumptions were conducted to ensure that the intervention and control groups were equal at baseline and that the data met specific assumptions required to conduct a repeated measures analysis of variance (RM-ANOVA). Table 6 presents Levene's homogeneity of variances test, which determined that both groups had equal levels of variance with a p-value > 0.05.
Table 6
Homogeneity of Variance Test

              Levene (F)   df1   df2      p
Pre Score       0.0604      1     39    0.807
Post Score      0.1518      1     39    0.698
A Shapiro-Wilk test for normality was calculated for the pre-scores and post-scores of the intervention and control groups. Apart from the intervention group's pre-scores, the data did not follow a normal distribution, with p-values < 0.05 (see Table 7). However, since the sphericity assumption was met, an RM-ANOVA could still be calculated without risking a Type I error (Blanca et al., 2023).
Table 7
Shapiro-Wilk Test for Normality

Group           Variable      Shapiro-Wilk W      p
Intervention    Pre-Score         0.936         0.198
Control         Pre-Score         0.905         0.044
Intervention    Post-Score        0.821         0.002
Control         Post-Score        0.834         0.002
Inferential Statistics
An RM-ANOVA (Table 8) found a significant difference between the pre- and post-simulator scores of the participants within each group (p < 0.001), but there was no statistically significant difference in the scores between the intervention and control groups (p = 0.587). Therefore, the answer to SQ1 was that there is no statistically significant difference in the improvement of robotic surgical technical skills using a simulator between novice robotic surgeons who conduct independent video review using a written surgical video review guide compared to those who do not use a guide.
Table 8
Repeated Measures ANOVA

Within Subjects Effects
                  Sum of Squares   df   Mean Square      F         p
Score                  5700         1       5700       20.176   <0.001
Score * Group           169         1        169        0.597    0.444
Residual              11018        39        283

Between Subjects Effects
                  Sum of Squares   df   Mean Square      F         p
Group                   167         1        167        0.300    0.587
Residual              21720        39        557
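For illustration, a mixed between-within ANOVA of this form can be reproduced outside Jamovi; the minimal sketch below uses the pingouin library, and the long-format reshaping and column names ('id', 'group', 'pre', 'post') are illustrative assumptions, not the study's actual data layout.

import pandas as pd
import pingouin as pg

df = pd.read_csv("simulator_scores.csv")   # hypothetical wide-format file
long = df.melt(id_vars=["id", "group"], value_vars=["pre", "post"],
               var_name="time", value_name="score")

# Mixed ANOVA: 'time' is the within-subjects factor (pre vs. post),
# 'group' is the between-subjects factor (intervention vs. control).
aov = pg.mixed_anova(data=long, dv="score", within="time",
                     subject="id", between="group")
print(aov)   # one row each for group, time, and the time x group interaction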
The percentage change between the pre-scores and post-scores of the participants was calculated, and this difference was compared to their pre-scores. Figure 10 demonstrates an inverse relationship between the pre-score and the percentage change from pre-score to post-score. The lower the pre-score, the greater the percentage change, suggesting this study had a ceiling effect, where there was less room for improvement for the participants who initially scored high on the pretest. In addition, there appears to be some regression towards the mean, where a few participants who scored high at baseline scored lower on the post-test.
Figure 10
Scatterplot of the Score Difference Between the Baseline and Post Simulation Test
Finally, a correlation matrix was calculated to determine whether any demographic characteristics influenced the participants' robotic surgical skill improvement. Technical skill improvement was measured by the percentage difference between the participants' pre-scores and post-scores. This data was compared to each demographic category using Spearman's rho to determine if a correlation existed. No correlation was found between demographic characteristics and skill improvement, as shown in Table 9.
Table 9
Correlation Matrix of the Score Difference and Demographic Characteristics
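For illustration, the following minimal Python sketch shows how a correlation screen of this kind could be computed; the demographic column names and their ordinal codings are assumptions, not the study's actual coding scheme.

import pandas as pd
from scipy import stats

df = pd.read_csv("simulator_scores.csv")   # hypothetical file
df = df[df["pre"] > 0]                     # guard against a zero baseline
df["pct_change"] = (df["post"] - df["pre"]) / df["pre"] * 100

# Each demographic column is assumed to be numerically coded (e.g., age bands
# and experience bands mapped to integers) so that Spearman's rho applies.
for col in ["gender", "position", "experience", "age", "robotic_cases"]:
    rho, p = stats.spearmanr(df[col], df["pct_change"])
    print(f"{col}: rho = {rho:.2f}, p = {p:.3f}")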
Addional Insights
Though this study did not nd a stascally signicant dierence in the technical surgical skills of
novice roboc surgeons in the intervenon group compared to the control group, some potenal trends
were revealed through addional data analysis. Figure 11 is a scaerplot of the relaonship between the
pre-scores and post-scores of the intervenon and control groups. The steeper posive slope of the
67
intervenon group trendline indicates more consistent performance and pronounced improvement
compared to the control group.
Figure 11
Scatterplot of the Relationship Between the Baseline and Post Simulation Scores
Simple linear regression analysis was conducted to evaluate the extent to which the pre-scores could predict the post-scores for each group (Tables 10 and 11). A significant regression was found (F(1, 18) = 11.58, p = 0.003) for the intervention group. The R² was 0.39, indicating that the pre-scores explained approximately 39% of the variance in post-scores. In contrast, a significant regression was not found (F(1, 19) = 0.36, p = 0.56) for the control group. The R² was 0.02, indicating that the pre-scores explained only 2% of the variance in post-scores. Therefore, the intervention had a significant impact on the relationship between the pre-scores and post-scores, whereas in the control group, the pre-scores did not significantly predict the post-scores.
Table 10
Linear Regression for Intervention Group

Model Fit Measures
Model      R      R²
1         0.63   0.39

Model Coefficients – Intervention Post-scores
Predictor    Estimate     SE       t        p
Intercept     55.46      7.81    7.10    <.001
Prescore       0.41      0.12    3.40    0.003
Table 11
Linear Regression for Control Group

Model Fit Measures
Model      R      R²
1         0.14   0.02

Model Coefficients – Control Post-scores
Predictor    Estimate     SE       t        p
Intercept     74.72      9.99    7.48    <.001
Prescore       0.08      0.14    0.60    0.556
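The per-group fits in Tables 10 and 11 correspond to ordinary least squares regressions of post-score on pre-score. A minimal scipy sketch, again with hypothetical file and column names, follows.

import pandas as pd
from scipy import stats

df = pd.read_csv("simulator_scores.csv")   # hypothetical file
for label, g in df.groupby("group"):
    res = stats.linregress(g["pre"], g["post"])
    # Reported above for the intervention group: post = 55.46 + 0.41 * pre,
    # R^2 = 0.39; for the control group: post = 74.72 + 0.08 * pre, R^2 = 0.02.
    print(f"{label}: post = {res.intercept:.2f} + {res.slope:.2f} * pre, "
          f"R^2 = {res.rvalue ** 2:.2f}, p = {res.pvalue:.3f}")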
A graphic representation of the baseline scores and post-scores using boxplots provided additional insights relevant to the efficacy of the intervention. Figures 12 and 13 reveal that the intervention group performed slightly worse on the baseline simulation (though not statistically significantly so) while performing equal to the control group in the post-simulation. The boxplots of the post-scores (Figure 13) indicate that the intervention group had less variability compared to the control group, suggesting more consistent performance among the participants after the intervention.
Figure 12
Box and Whisker Plot Comparison of Baseline Simulation Performance
Figure 13
Box and Whisker Plot Comparison of Post-Simulation Performance
Power Analysis
With a total of 41 participants divided between two groups, a post-hoc power analysis was conducted to determine the statistical power of this study. The effect size, as measured by Cohen's d, was 0.001, and the power of the test was 0.05, meaning there was only a 5% chance of detecting a significant difference between the groups. To increase the power of this study to 80%, thus reducing the risk of a Type II error, the sample size must be increased to approximately 128 participants, with 64 participants in each group, for a medium effect size (d = 0.5) at a significance level of p < 0.05.
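The sample size figure above follows from a standard power calculation; the minimal sketch below reproduces it with statsmodels (the calculation itself, not the tooling used in this study).

from statsmodels.stats.power import TTestIndPower

# Solve for the per-group n needed to detect a medium effect (d = 0.5)
# with 80% power at alpha = .05 in a two-tailed independent samples test.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, power=0.80,
                                          alpha=0.05)
print(round(n_per_group))   # ~64 per group, i.e., ~128 participants total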
Sub-Question Two
To what extent do novice robotic surgeons perceive written surgical video review guides as useful?
To answer the second sub-question, the participants completed one of two exit surveys at the end of the study, depending on their group assignment. Both exit surveys included six statements regarding their perception of video review guides in general (VRG statements). The exit survey for the intervention group included an additional eight statements regarding their perception of the Surgical Video Review Guide (SVRG statements). The exit survey for the control group included one statement on whether the participants believed a video review guide would have helped them on their second simulation performance. The participants responded to all of the statements using a 7-point Likert scale, with one meaning strongly disagree and seven meaning strongly agree.
Data Analysis
Descriptive statistics were calculated to determine the extent to which the participants perceived video review guides as useful (Table 12). First, the scores from statement four of the VRG statements and statements five through eight of the SVRG statements were reversed to obtain accurate means. The overall mean for the perceived usefulness of VRGs was M = 5.70, SD = 0.92. Therefore, overall, the participants perceived video review guides in general as "somewhat useful" to "useful." The mean response for the intervention group (M = 5.74, SD = 0.91) regarding VRGs in general was slightly higher than the control group's mean (M = 5.65, SD = 1.52); however, an independent samples t-test indicated that the difference was not significant (t(39) = 0.54, p = 0.59). Likewise, there was no significant difference in the perceived usefulness of VRGs for any other covariates.
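For illustration, the reverse-scoring and group comparison described above can be expressed in a few lines of Python; the survey file and column names are hypothetical. On a 7-point scale, a reversed item is recoded as 8 minus the response.

import pandas as pd
from scipy import stats

survey = pd.read_csv("exit_survey.csv")    # hypothetical file
survey["vrg4"] = 8 - survey["vrg4"]        # reverse-score VRG statement 4

vrg_items = ["vrg1", "vrg2", "vrg3", "vrg4", "vrg5", "vrg6"]
survey["vrg_mean"] = survey[vrg_items].mean(axis=1)

# Independent samples t-test on perceived usefulness between the groups
intv = survey.loc[survey["group"] == "intervention", "vrg_mean"]
ctrl = survey.loc[survey["group"] == "control", "vrg_mean"]
t, p = stats.ttest_ind(intv, ctrl)
print(f"t = {t:.2f}, p = {p:.2f}")   # reported above as t(39) = 0.54, p = 0.59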
Table 12
Descriptive Statistics of Perceived Usefulness of VRGs

              Total Sample (n=41)   Intervention (n=21)    Control (n=20)
Statement         M       SD            M       SD            M      SD
Item 1          6.00     1.05          6.00    0.63          6.00   1.38
Item 2          6.02     1.06          6.01    0.70          5.95   1.37
Item 3          5.71     1.29          5.57    1.08          5.85   1.50
Item 4          5.12     1.45          5.19    1.25          5.05   1.67
Item 5          5.61     1.43          5.81    0.98          5.40   1.79
Item 6          5.71     1.17          5.76    0.83          5.65   1.46
Total           5.70     0.92          5.74    0.91          5.65   1.52
Only the intervention group responded to statements regarding the SVRG (Table 13). The overall mean for the perceived usefulness of the SVRG was M = 5.43, SD = 1.05. Therefore, the participants in the intervention group perceived the surgical video review guide used in this study as "somewhat useful" to "useful." The participants in the control group agreed that they would have performed better on the second simulation if they had had a surgical video review guide (M = 5.25, SD = 1.55).
Table 13
Descriptive Statistics of Perceived Usefulness of the SVRG (n=21)

Statement      M      SD
Item 1        5.29   1.59
Item 2        5.71   1.31
Item 3        5.29   1.38
Item 4        5.29   1.62
Item 5        4.48   1.69
Item 6        6.00   1.10
Item 7        5.95   1.02
Item 8        5.43   1.25
Total         5.43   1.05
Instrument Reliability
Cronbach's alpha was used to determine the reliability of the VRG and SVRG statements (see Table 14). The VRG statements had a Cronbach's α of 0.83. Removing statement four, "video review guides are more time-consuming than helpful," which was the only reverse-scored statement among the VRG statements, increased Cronbach's α to 0.92. The SVRG statements had a Cronbach's α of 0.89. Removing statement eight, "the video review guide was time-consuming," only slightly increased Cronbach's α to 0.90, though it is of note that both statements inquired about the guide's usefulness relative to time consumption.
Table 14
Internal Reliability of Perceived Usefulness Scales

Scale    N of Items   Cronbach's α   If One Item Dropped
VRG          6            0.83              0.92
SVRG         8            0.89              0.90
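Cronbach's alpha is a simple function of the item variances; the following sketch implements the standard formula and is a generic computation rather than a Jamovi-specific routine.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Standard Cronbach's alpha for an n_respondents x n_items frame of
    already reverse-scored Likert responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars / total_var)

# e.g., cronbach_alpha(survey[vrg_items])   # reported as 0.83 for the VRG scale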
Central Research Question
What is the impact of utilizing a written video review guide during independent video review on the surgical skills of novice robotic surgeons?
To answer the central research question, this study sought to explore both the efficacy of a surgical video review guide and the perceived usefulness of video review guides by novice robotic surgeons. The results of the first sub-question found that the SVRG was not a quantifiably effective method of improving surgical technical skills. The results of the second sub-question found that surgeons perceive video review guides as useful in that they believe that guides can help surgeons improve their surgical skills. Therefore, the answer to the central research question, that is, determining the impact of a written video review guide on novice robotic surgeons, remains inconclusive.
Summary
This chapter presented the results of the statistical analyses calculated to answer the study's two sub-questions. The answer to the first sub-question is that no statistically significant difference was found between surgeons who used the SVRG during video review and surgeons who did not use the guide. The intervention group had less post-score variability and a stronger trendline between their baseline and post-simulation scores. In response to the second sub-question, this study found that surgeons perceived both the SVRG and video review guides in general as useful. Both statistical significance and surgeon perception should be considered when determining the overall impact of utilizing written video review guides. Therefore, due to the weak power of this study, the findings remain inconclusive.
CHAPTER FIVE: DISCUSSION
Introducon
This study sought to address the problem of limited expert support and guided video review for
novice roboc surgeons. The value of reecve pracce and the use of video to guide reecon is well
documented in the literature; however, novice surgeons struggle with these skills when tasked to
conduct video review independently. The inability to properly reect on their performance may result in
a longer learning curve, poorer paent outcomes, and failure to adopt robocs into surgical pracce. The
present study invesgated the impact of a surgical video review guide on the technical skills of novice
roboc surgeons. The nal chapter of this dissertaon discusses the study's results in comparison to
exisng research. It examines the limitaons of the study design and oers recommendaons for future
research. The chapter concludes with the study's overall signicance and its implicaons for roboc
surgical training.
Summary of the Study
This quantitative study with a between-group quasi-random experimental design examined the effect of a written surgical video review guide on novices' robotic technical skills. The purpose of the guide was to explicitly teach novice surgeons how to reflect on their technical surgical simulation performance. The Surgical Video Review Guide (SVRG) was created using Kolb's Experiential Learning Cycle (Kolb, 1984) as a framework, with questions from Gibbs' Reflective Cycle (Gibbs, 1988) integrated throughout the guide. Few studies have been conducted on the use of guided but independent video reviews to improve robotic surgical skills. This dissertation study was guided by the following central research question:
CRQ: What is the impact of utilizing a written video review guide during independent video review on the surgical skills of novice robotic surgeons?
To answer this central question, the study was designed around the following two sub-questions.
SQ1: Is there a statistically significant difference in the improvement of robotic surgical technical skills using a simulator between novice robotic surgeons who conduct an independent video review using a written surgical video review guide compared to those who do not use a guide?
SQ2: To what extent do novice robotic surgeons perceive written surgical video review guides as useful?
Forty-two participants were individually recruited from two different sites in November 2023 and February 2024. The participants were divided into two groups using stratification based on robotic case experience to ensure that both groups were homogenous. All of the participants watched a benchmark video of the simulation, performed the same baseline exercise on the simulator, and watched their performance. The intervention group was provided with the SVRG to guide their reflective process, while the control group was only asked to review their video independently. Both groups then repeated the simulation exercise and completed an exit survey based on the group to which they were assigned.
Discussion of Sub-Question One
Is there a statistically significant difference in improvement of robotic surgical technical skill using a simulator between novice robotic surgeons who conduct independent video review using a written surgical video review guide compared to those who do not use a guide?
Data analysis found that, on average, both the intervention and control groups significantly improved their robotic technical skills scores from the pretest to the post-test. However, no statistically significant difference was found between the two groups. The findings of this study are similar to those of another study that explored the use of a written video review guide to improve surgical skills. Wang et al. (2020) conducted a two-group experimental study with 31 participants measuring the knot-tying skills of a video guide reflection group compared to a self-regulated learning group. The self-regulated learning group received one hour of supervised practice with expert feedback, an OSATS evaluation tool, and an instructional video. The video reflection group was provided with a reflective guide, an instructional video, and their video performance. Both groups significantly improved their knot-tying abilities, but no difference was found between the groups. The authors concluded that a written video review guide was as effective in improving knot-tying skills as expert support, with the added benefit of cost savings. The present study sought to extend Wang et al.'s research by determining whether there is a difference between independent video review with a written guide and unguided video review.
The study presented in this dissertation differs from Wang et al. (2020) in several respects. The study design by Wang et al. introduced several variables between the two groups, and the data analysis failed to control for the covariates. Both groups received a knot-tying board and an instructional video, but the video reflection group was provided with two additional variables (reflective guide and video performance) that the self-regulated group did not receive. Likewise, the self-regulated group received three additional variables (supervised practice, expert feedback, and an OSATS evaluation tool) that the video reflection group did not receive. Without controlling for these five additional variables, it is difficult to determine which variable (or combination of variables) led to the improved knot-tying performance for each group. This study addressed these issues by ensuring that both the control and intervention groups received the same treatment apart from one independent variable: the SVRG. Examining the results of both the present study and Wang et al.'s together, it is evident that all the participants improved technical surgical skills irrespective of the utilization of a written video review guide or expert feedback. Skill improvement in these studies may have been the result of another variable or combination of variables, including repeated testing and the presence of a benchmark video.
This dissertation's theoretical underpinnings were Kolb's Experiential Learning Theory (1984) and Gibbs' Reflective Cycle (1988). Continuous learning and growth within one's surgical practice require a cycle of planning for and performing surgery, followed by reflecting on and learning from the experience. Reflection and self-assessment are often used interchangeably in the literature, and while they are inextricably linked, they are also two distinct concepts. The purpose of reflection is to learn from experiences by deriving meaning from them and gaining deeper insights (Desjarlais & Smith, 2011). Self-assessment is a product of self-reflection but focuses more on improving one's future efforts and skills. Video review in this study may have lacked deep self-reflection because the simulation did not offer a meaningful experience for the participants. In contrast to performing a high-risk surgical procedure on a human being, the robotic simulation required the participants to move pegs, open doors, and suture a sponge in a low-risk environment. Despite the low-risk environment, self-assessment may still have taken place since the simulation focuses on improving one's technical skills.
Certain aspects of this dissertation's study design were deliberately chosen based on prior research findings that examined video's influence on self-assessment and skill improvement without expert support. Specifically, the designs of four similar studies were examined. The study previously mentioned by Wang et al. (2020) had the participants self-assess themselves, and their scores were compared to the scores of external evaluators. The study found that though both groups improved their knot-tying skills equally, the self-assessment scores of the video reflection group demonstrated higher post-test reliability (0.69) with the expert evaluators compared to the self-regulated learning group (0.36). Hawkins et al. (2012) found that video review with a benchmark video improved the self-assessment scores of medical students performing a suturing task compared to video review alone.
Scadi et al. (2019) found that a benchmark video alone improved the self-assessment scores of novice
endoscopists in the short term, while a benchmark video coupled with a video review improved self-
assessment scores in the long term. Finally, Vyasa et al. (2017) found that video review and benchmark
videos separately improved surgical skills, but only benchmark videos improved the self-assessment
scores of novice endoscopists. These four studies highlight the importance of providing benchmark
videos to novices to support their self-reecon. These studies concluded that video review alone may
not be sucient for improving the self-assessment of novice surgeons. Expert surgeons have a wealth of
prior experience they can compare their performances to, whereas novices have limited prior knowledge
of what to expect from a high-quality performance. When video review is coupled with a benchmark
video, novices’ ability to accurately judge their performance improves because they have a standard to
compare themselves to (Hawkins et al., 2012).
Accurate self-assessment was a necessary skill for the participants in this study to possess in order to improve their technical skills. Since all of the participants were novice robotic surgeons, it was deemed necessary to provide a benchmark video based on the findings of the studies mentioned above. The present study built upon previous findings by determining what type of effect video review with a written video reflection guide would have on surgical technical skills rather than the effect on self-assessment skills. Though the effect of an SVRG is not statistically evident in this study, the improvement of surgical skills across both groups supports the positive effect of video review coupled with a benchmark video on novice surgeons.
Ali and Miller (2018) reviewed video-assisted debriefing (VAD) in healthcare and found the literature inconclusive on its effectiveness. One of the challenges the literature review faced was the lack of description of how the VAD was performed, specifically, whether it was led by a facilitator or self-led. Several studies support the use of expert-led VAD compared to self-led VAD: a study by Aldnic et al. (2022) found that medical students who received expert video feedback outperformed those who conducted video review alone in cricothyroidotomies. Likewise, Halim et al. (2021) found that surgical residents who received expert video feedback improved their laparoscopic intracorporeal suturing skills compared to those who conducted video review alone or received expert verbal feedback with no video. It is possible that the mixed findings of Ali and Miller's (2018) literature review resulted from ineffective or less effective self-led VAD. This dissertation sought to determine whether participants who conducted video review with an SVRG would outperform those who conducted video review alone, thus providing an explanation for Ali and Miller's findings. While this study did not compare expert support to independent video review, it does provide evidence that surgical skills can be significantly improved without expert support.
The ndings reported in Chapter Four revealed two potenal trends when comparing the
intervenon group to the control group. The pre-scores of the intervenon group more accurately
predicted the parcipants’ post-scores compared to the control group, as indicated by the steeper
trendline and linear regression. In addion, there was less variability between the post-scores of the
intervenon group compared to the control group, suggesng that the parcipants in the intervenon
group had a more consistent performance. While stascal signicance is crucial for assessing
intervenon eecveness, consistency of improvement oers valuable insights into the reliability and
predictability of the intervenon on the parcipants. The only dierence between the two groups was
using the SVRG since all other variables, including demographics, were controlled in this study. Wang et
al.s (2020) study found no signicant dierence in skill improvement between the video review guide
group and self-regulated learning group; but did nd a signicant dierence in their ability to self-assess.
Likewise, it is possible that underlying learning or improvement of an unmeasured skill, such as self-
assessment, occurred in the present study, accounng for the variability and predictability dierence.
Examining the dierence in variability between groups can determine whether a larger-scale study is
warranted, parcularly when considering internal and external validity issues with this study (discussed
81
in the limitaons secon of this chapter) (Beets et al., 2020). A stascally signicant dierence may not
have been detected in this study due to the sample size, but it is also possible that the reduced
variability was caused by chance. A larger study is required to determine if this dierence is replicable.
Discussion of Sub-Question Two
To what extent do novice robotic surgeons perceive written surgical video review guides as useful?
Perceived usefulness is defined as the extent to which an individual believes utilizing an object or system will improve their job performance (Davis, 1989). The participants in this study found both the SVRG and VRGs in general to be "somewhat useful" to "useful." These findings are supported by previous research that found that trainees prefer to analyze their videos using an observation guide (Kong et al., 2009; Tripp & Rich, 2012). The factors that influence perceived usefulness include ease of use, how compatible the tool is with the user's existing practices, beliefs, and values, the perceived benefits of the tool, and the opinions, recommendations, and experiences of others (Venkatesh et al., 2003). The following is a breakdown of each of these factors as they relate to the perceived usefulness of video review guides.
Effective reflective practice can enhance self-awareness and stimulate critical thinking (Patel &
Metersky, 2022). As a result, reflective practice improves patient care by highlighting poor practices and
empowering practitioners towards change. Despite the importance of reflective practice, many
healthcare workers fail to engage in reflection and attribute this failure to the amount of time required for reflection (Davies, 2012). Ease of use refers to the extent to which one believes a tool will be free of effort (Venkatesh & Davis, 2000). Therefore, statements regarding how time-consuming video review guides are and whether the participants found them distracting were included in the survey to determine the participants' beliefs about the tool's ease of use. Many studies have found that learners are interested in reflection but strongly resist written reflections. Providing the participants with the option of taking notes on the SVRG without making writing a requirement allowed them to be more receptive to reflection and offered psychological safety (Holmes et al., 2018; Shaughnessy & Duggan, 2013; Tonni et al., 2016; Tuykov, 2023). Only two of the 22 participants in the intervention group chose to write on their copy of the SVRG. While a video review guide does not decrease the amount of time reflective practice requires, the participants did not believe that it would increase the required time or feel it to be an additional barrier, thus concluding that the tool is easy to use. The benefit of independent video review is that it can be performed at any time and in any place, as videos and a reflection guide can be accessed on a personal phone, which may ease the time barrier.
The perceived compatibility of a video review guide with novice surgeons' existing practices, beliefs, and values was measured by inquiring about their existing video review practices and whether they believed a video review guide would be helpful. Eighty-six percent of the participants reported engaging in video review less than half the time, with 41% stating they never watch their videos. This is consistent with reports that only five percent of residency programs regularly video-record surgical operations (Esposito et al., 2022). Video review is not the only vehicle for reflective practice; debriefing can occur post-operatively with an attending and weekly at Morbidity and Mortality conferences (Anderson et al., 2020; Keiser & Arthur, 2021). However, the lack of video review practice among surgical fellows raises the question of how common any form of self-reflection is in training and whether this skill is being transferred to surgical practice after training. The participants in this study may have perceived a guide as useful because, like the pediatric surgery residents in a study by Naumeri (2023), they recognize the value of reflective practice but do not engage in it because of a lack of guidance.
The present study did not explore the social influences on the participants' considerations of utilizing a surgical video review guide, that is, what their peers may think of video review guides. However, the participants responded positively to statements regarding whether they would recommend a video review guide to other surgeons and whether they themselves would use a guide if provided one in the future. An advantage of a video review guide is that it can be utilized independently and afford privacy to the user. There is no requirement to submit a written reflection for assessment, nor is there pressure to offer a correct and timely answer, as can be the case during a group debriefing (Verkuyl et al., 2018). When implementing video review, it is essential that it is viewed as a way to build intrinsic motivation rather than an exercise to fulfill a mandatory requirement (Truykov, 2023).
Determining the perceived usefulness of a tool is as important as measuring the efficacy of the tool. If surgeons have no interest in a product or do not perceive it as valuable, then it will not help them, even if it is effective. A systematic review of the adoption of mobile health applications validated the importance of perceived usefulness by healthcare professionals in choosing to utilize new technology (Gagnon et al., 2016). The participants in this study likewise recognized the value of video review guides for developing reflective skills and shortening the robotic learning curve.
The parcipants' perceived usefulness of video guides also sheds light on trainees' desire for
addional support during video review. A systemac review by Lim et al. (2022) reported that one of the
most common barriers to deep reecon is the lack of guidance on the know-how for learners to carry
out eecve reecon. The present studys ndings support the conclusions of an acon research
project that states that a reecon guiding tool helps learners think about aspects of their performance
they would not have otherwise considered (Holder et al., 2019).
Discussion of Central Research Question
What is the impact of utilizing a written video review guide during independent video review on the surgical skills of novice robotic surgeons?
The impact of video review guides on novice robotic surgeons was explored by seeking to understand two underlying processes: the efficacy of a surgical video review guide on robotic technical skills and the perceived usefulness of video review guides by novice robotic surgeons. The results of the first sub-question found that the SVRG did not make a quantifiable difference in improving surgical technical skills over video review alone. The results of the second sub-question found that surgeons perceive video review guides as useful and as a way to help surgeons improve their surgical skills. Together, the findings of the two sub-questions provide a more well-rounded understanding of the impact of a written video review guide on surgeons. Based on the mixed results of the two sub-questions, the answer to the central research question remains inconclusive.
The ndings suggest a nuanced understanding of the role of independent but guided video
review in surgical training. Though the immediate impact of a guide on technical skill improvement may
not be stascally evident in this study, the posive percepon of video review guides among novice
roboc surgeons suggests potenal value and praccal relevance in supporng the learning process of
novice roboc surgeons. Even if video review guides do not provide immediate signicant improvement
in surgical skill over the use of video in general, guides can assist in building greater mental schemas by
having the learner explore mulple aspects of their performance, more so than they may have examined
when conducng video review without any support. Larger mental schemas will expand and solidify
their prior knowledge and build their experse (North et al., 2011). This, in turn, will improve their ability
to reect-in-acon during future surgical cases and improve their surgical skills long term (Schön, 1983).
Video review guides are a scaold, a temporary tool to develop reecve skills unl they become second
nature (Kong et al., 2009). Guides tailored for specic procedures help novices by poinng out crucial
areas they might overlook due to their limited prior knowledge and lack of situaonal awareness
(Endsley, 1988; Yang et al., 2021). As these markers of experse increase, the need for a video guide
decreases. The parcipants may have been more inclined to perceive video guides as useful because
they understood that they are temporary tools that do not require addional me or eort.
Furthermore, interactions between interventions need to be considered as well. A meta-analysis by Keiser and Arthur (2021) found that various combinations of different factors, including objective media, facilitation type, goal type, and duration, influence the degree of statistical significance. We know that benchmark videos coupled with video review have a positive effect on surgical skills (Vyasa et al., 2017; Wang et al., 2020), but it remains unclear whether combining these two tools with a video review guide can have an additional positive effect on surgical skills. Regardless, this study found that surgeons perceive guides as useful and validated findings that surgical skills can be improved without expert support. No negative impact was detected from the use of an SVRG, and it is possible that the intervention group experienced decreased variability in their post-scores and a stronger pre-score to post-score trendline due to the surgical video review guide.
Limitaons
Considering the limitaons of any study is imperave when reporng and discussing the
ndings. This study’s limitaons include sample size, simulator experience as a confounding variable, and
generalizability. Prior to data collecon, it was determined that a minimum sample size of 30, with 15
parcipants in each group, would be sucient based on prior studies examining the eects of video on
surgical skills and self-assessment (Halim et al., 2021; Scadi et al., 2019; Takagi et al., 2023; Vyasa et al.,
2017; Wang et al., 2020). In total, the data of 41 parcipants was analyzed for sub-queson one;
however, a post hoc power analysis determined that a minimum of 128 parcipants would be required
to detect a medium eect size.
The survey the participants took at the beginning of the study recorded the amount of robotic experience they each had. Robotic experience was essential to collect in order to conduct stratified quasi-random sampling and ensure that both the control and intervention groups were similar at baseline. The focus on robotic experience was due to previous research that found that surgical experience has one of the greatest impacts on surgical skills (Azari et al., 2020; Ericsson, 2004). Data analysis, however, did not find a significant difference in either simulator performance or skill improvement between the four levels of robotic experience (0-10, 11-20, 21-30, 31-40 cases) measured in this study. Unfortunately, the present study did not consider prior simulator experience. It became evident throughout the course of data collection that some participants were quite familiar with the SimNow Combo Exercise used in the study while others were not, though they had similar levels of robotic case experience. Although prior simulator experience was not measured, it did not create a significant difference between the two groups. Conversely, high levels of prior simulator experience may have created a ceiling effect for some participants and may have shortened their robotic learning curve. Currently, novice robotic surgeons are defined by the number of actual robotic cases they have performed. The variable of simulator practice is currently not being considered, though research supports the use of simulators as an effective method of improving surgical skills (Yang et al., 2017). This raises the question of whether and how simulator experience can be accounted for when determining the learning curve for novice robotic surgeons.
Likewise, the survey the participants took at the beginning of the study inquired about their video review frequency but did not explicitly investigate their current reflective practice. Though the majority of the participants reported rarely reviewing their surgical videos, it is possible that they regularly engaged in other means of reflective practice and did not need a guide to explicitly teach them effective reflective strategies. Furthermore, the simulator automatically and immediately provided an objective score report to the participants upon task completion. This information may have supplemented the video review by assisting them in determining what areas they needed to focus on during the subsequent simulation. After actual operations, no such scoring system is provided to surgeons. They must determine for themselves how well they did and what areas need improvement; guided video review can support them during this self-assessment.
The current study design offered some challenges to the external validity of the results. A typical surgeon would not watch an operation, perform the operation, immediately watch the recording of their performance, and then repeat the identical operation in the span of less than one hour, as was required of the participants in this study. The quick succession of the pretest followed by the post-test may have caused a reactive effect of testing, in which performing a pretest greatly influenced the results of the post-test (Willson & Putnam, 1982). Evidence from previous research shows that a video review performed 72 hours after a simulation can reduce surgical skill decay (Kun et al., 2019). Due to the lack of follow-up opportunity, the participants in this dissertation had to complete the study in a single setting; therefore, there was no time for potential skill decay or for a video review to improve their memory after a prolonged period of time.
The reacve eect of the experimental arrangement refers to parcipants behaving dierently
during a study compared to how they would behave in the real world. While there is evidence that
surgical skills on a simulator transfer to the operang room (Schmidt et al., 2021), reecve pracce
more oen occurs on complex experiences (Mann et al., 2009), which the Combo Exercise simulaon
lacked. Examples of complex experiences would be an operaon where a complicaon occurred,
imperave decisions had to be made, paent safety was in jeopardy, and/or mulple personnel had to
be managed. Surgical skill is a combinaon of technical experse, cognive abilies, clinical and
procedural knowledge, decision-making skills, situaonal awareness, and interpersonal skills (Azari et al.,
2019). Due to the current lack of high-delity complex surgical procedure simulaons, a technical skills
maintenance exercise was chosen for this study. While the selected simulaon oered a well-rounded
exercise of mulple technical surgical skills, it oered very lile in the way of surgical decision-making as
it was a guided exercise. It provided a low-risk environment, which is atypical in the operang theater,
and the exercise involved doors, pegs, and sponges rather than human ssue, nerves, and vessels, which
reduced the concern of harming a paent. These simulator limitaons may have resulted in reduced
opportunity for reecon, which in turn limited the amount of skill improvement.
Implicaons for Pracce
Reducing the learning curve for roboc surgeons entails implemenng mulple strategies, each
with its own benet. Instruconal videos can provide procedural knowledge, and simulaons oer
opportunies for deliberate pracce. Expert support through surgical coaching and video-based
assessment are ideal standards in surgical educaon, but unfortunately, they are not always available to
novices. The ndings of this study add to the exisng literature by oering an addional alternave
method of improving the surgical skills of novice roboc surgeons in lieu of expert support. Providing
surgeons with benchmark videos and their own recorded performance can signicantly improve surgical
roboc technical skills. A video review guide is an addional tool that can help surgeons develop
reecve skills and assist them in nocing areas in need of improvement that may otherwise be
overlooked.
This study further adds to the exisng literature by providing evidence that novice surgeons
need to be supported during video review and that there is an interest among novice surgeons in video
review guides. Reecve pracce is crucial for connuous learning and professional development, but it
is oen overlooked by surgeons who believe they lack the skills and/or me for reecon (Davies, 2012;
Naumeri, 2023). Designing video review guides for surgery would not be limited to technical skills but
would encompass all surgical skills, including procedural and clinical knowledge, technical skills, safety,
and interpersonal skills when interacng with the operang room sta, which can provide surgeons with
an opportunity for well-rounded reecon. Assuming that surgeons already have access to video
recordings of their operaons, video review guides are a low-cost soluon that can be ulized in any
locaon at any me, does not require addional equipment, and can be conducted independently.
89
Recommendations for Future Research
Based on the results of this dissertation study and the review of current literature, the following is a list of recommendations for future research.
1. The majority of the participants in this study reported rarely reviewing their surgical videos; these findings support previous studies indicating that recording one's operations for video review is currently not a common practice. Future research should further survey and explore the current context and frequency of surgical video utilization as well as surgeons' perceptions of video review. Educators and practitioners can develop clearer pathways to integrating video review into surgical training and continuous medical education by understanding the barriers and facilitators to video review.
2. Future research should analyze and compare the differences between expert and novice surgeons in how they review videos. Understanding the gaps between these two groups can help surgical educators develop effective interventions for novices to develop reflective practice skills more efficiently.
3. The surgical video review guide utilized by the participants was designed specifically for the robotic simulation exercise used in this study. As previously discussed, it was limited to robotic technical skills; therefore, future research should focus on measuring the efficacy of video review guides designed for actual surgical procedures that encompass multiple surgical skills, including procedural and clinical knowledge, interpersonal skills, safety, and technical skills. High-fidelity exercises simulating human tissue and surgical steps should be further developed and used to measure and test surgical skills in a more realistic environment.
4. This study utilized a quantitative exit survey to determine that the participants perceived video review guides as useful. Qualitative research exploring surgeons' needs and desires while learning a new modality, technique, or procedure can offer meaningful insights and practical implications to help design and refine interventions.
5. A longitudinal study should be conducted to examine the effects of video review on surgical skills after six months and one year compared to surgeons who do not conduct video reviews. This research may provide a better understanding of the long-term effects of video review and how it contributes to reflective practice.
6. Wang et al. (2020) found that those in the reflection group significantly improved their self-assessment scores compared to those in the self-regulated group, even though their knot-tying skills improved to the same degree. Future studies may explore the role of reflection in improving the self-assessment skills of robotic surgeons, as self-assessment ability and skill improvement may vary independently.
7. There has been increasing discussion regarding artificial intelligence's (AI) role in surgical education. AI can now analyze videos and offer feedback in multiple fields. While AI does not replace human reflective practice, it may be able to enhance and guide it (Abdel-Karim et al., 2023). Future research should explore training AI programs on evidence-based reflective techniques, which novices can use as a tool to guide their video review instead of expert surgeons (a minimal sketch of this idea follows the list below).
8. This study failed to take into account the participants' simulator experience when determining their robotic surgery experience. Future studies should measure correlations between simulator experience and the learning curve for robotic surgeons.
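By way of illustration, the following minimal Python sketch (a hypothetical illustration, not an artifact of this study) shows how the stages of Gibbs' Reflective Cycle, paired with prompts drawn from the Appendix B review guide, could be encoded as structured data from which an AI-assisted review tool might generate its guiding questions:

# A hypothetical sketch: encoding Gibbs' Reflective Cycle stages with
# guiding questions (several taken from the Appendix B review guide) so
# that a review tool could present them during independent video review.
GIBBS_PROMPTS = {
    "Description": "What happened during the recorded performance?",
    "Feelings": "What were you thinking and feeling at key moments?",
    "Evaluation": "What went well? Where can I improve?",
    "Analysis": "How was my performance different from the exemplary video, and why?",
    "Conclusion": "What did I learn?",
    "Action plan": "What will I do differently next time?",
}

def reflective_prompts(prompts=GIBBS_PROMPTS):
    """Yield one guided question per reflective stage, in cycle order."""
    for stage, question in prompts.items():
        yield f"{stage}: {question}"

if __name__ == "__main__":
    for prompt in reflective_prompts():
        print(prompt)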
Conclusion
Surgical education faces the challenge of adequately training surgeons in the face of ever-changing technology. This dissertation addressed the lack of expert support for novice robotic surgeons. Video review allows professionals to analyze and reflect on their practice to improve and refine their skills. However, reflection is not an innate skill; it should be explicitly taught. Using Kolb's Experiential Learning Theory as the conceptual foundation and Gibbs' Reflective Cycle as a framework, a surgical video review guide was created to guide novice robotic surgeons through the video review process. The study aimed to analyze and describe the impact of video review guide utilization on novice robotic surgeons.
The participants who utilized the guide were compared to a control group to determine what effect the guide had on their robotic skills. In addition, the study measured the participants' perceived usefulness of video review guides to determine the likelihood of novice surgeons adopting them during video reviews. Overall, the robotic technical skills of both groups significantly improved, and video review guides were found to be useful. Though there was no significant difference in skill improvement between the two groups, the intervention group exhibited less variability in their post-test scores, with a stronger linear regression between their pre-test and post-test scores.
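To make this comparison concrete, the following minimal sketch (using placeholder numbers, not the study's data) shows how post-test variability and the pre-test/post-test linear relationship for each group might be computed:

import numpy as np

# Placeholder pre-/post-test simulator scores (illustrative only).
pre_intervention = np.array([62.0, 70.0, 55.0, 68.0, 74.0])
post_intervention = np.array([75.0, 80.0, 70.0, 78.0, 83.0])
pre_control = np.array([60.0, 72.0, 50.0, 66.0, 76.0])
post_control = np.array([68.0, 85.0, 58.0, 80.0, 90.0])

def summarize(pre, post, label):
    """Report post-test spread and the least-squares pre/post fit."""
    r = np.corrcoef(pre, post)[0, 1]             # Pearson correlation
    slope, intercept = np.polyfit(pre, post, 1)  # linear regression line
    print(f"{label}: post-test SD = {post.std(ddof=1):.2f}, "
          f"r = {r:.2f}, post = {slope:.2f} * pre {intercept:+.2f}")

summarize(pre_intervention, post_intervention, "Intervention")
summarize(pre_control, post_control, "Control")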
Though the study has a few limitations, the findings are relevant to the field of surgical
education in several ways. The improvement across all participants validates previous findings that
combining video review with a benchmark video can improve surgical skills. The perceived usefulness of
video review guides provides a more nuanced understanding of the role of guidance during reflection.
The desire for guidance by novice surgeons indicates that alternative, independent methods need to be
considered, especially when expert guidance is not available. The guide designed for this study focused
only on surgical technical skills; however, video review guides can be adjusted to fit the needs of various
procedures and focus on a wide range of surgical skills, including clinical knowledge and interpersonal
skills. Video review guides are a low-cost and accessible tool that surgeons can use anywhere on their own time. They are a simple solution that may assist in creating life-long reflective practitioners.
APPENDIX A: PERMISSION TO REPRINT SHARP DEBRIEFING TOOL
Subject: Re: SHARP Debriefing Tool
Date: Thursday, September 21, 2023 at 11:15:55 AM Mountain Standard Time
From: Mary Soliman
To: Arora, Sonal
Dr. Arora,
Thank you! Much appreciated!
Mary
From: Arora, Sonal <sonal.arora06@imperial.ac.uk>
Date: Thursday, September 21, 2023 at 2:14 PM
To: Mary Soliman <ma210027@ucf.edu>
Subject: Re: SHARP Debriefing Tool
Hi Mary
Yes please do go ahead.
Many thanks Sonal
Sent from my iPhone
On 21 Sep 2023, at 19:06, Mary Soliman <ma210027@ucf.edu> wrote:
Hello Dr. Arora,
I am a doctoral candidate at the University of Central Florida, USA, and I am conducting my research on the use of video review to improve robotic surgical skill.
I have read your work on surgical debriefing, which you wrote with Dr. Ahmed in 2013, and I think it is fantastic! My dissertation is testing whether a surgical video review guide (inspired by your SHARP Debriefing Tool) can help facilitate effective independent video review and ultimately improve robotic surgical skill.
I emailed Dr. Ahmed as she was the corresponding author; however, I have not heard back from her. I
realize some time has passed since its publication and she may no longer have an active email address at
Imperial College.
I am writing to ask your permission to reprint the SHARP Debriefing Tool image you created in my
dissertation. I believe this image will greatly benefit the reader in understanding what elements of your
research were used to help create the surgical video review guide as well as offer insight into what
previous research has been conducted on reflective practice for surgeons.
Thank you in advance for your consideration,
Mary Soliman
Mary M. Soliman, M.Ed.
Doctoral Candidate
Curriculum & Instruction Ed.D.
Department of Learning Science and Educational Research College of Community Innovation and
Education
University of Central Florida
APPENDIX B: SURGICAL VIDEO REVIEW GUIDE
Surgical Video Review Guide
Directions: Please watch your video in its entirety. Reflect on the following questions and
prompts below as you review your simulation performance. You may pause, rewind, and adjust
the playback speed as desired. Feel free to write any notes on this sheet.
What is the purpose of this review?
E.g., review an error, improve technique, monitor progress, share with others, etc.
How did I do?
What went well? Where can I improve?
What did I learn?
How was my performance different from the exemplary video, and why?
What will I do differently next time?
Choose one area of focus to improve your robotic skill
Additional questions to think about…
Did I articulate my wrists? | Did I have wasted movements? | Did I have good visualization? |
How was my bimanual dexterity? | Did I optimally position/reposition? (needle, camera, arms)
APPENDIX C: INFORMED CONSENT & DEMOGRAPHIC SURVEY
Video Review Study Demographic Survey
Start of Block: Default Question Block
Title of Study: Examining the Efficacy of a Video Review Guide to Facilitate Robotic Surgical Skill Improvement
Principal Investigator: Mary M. Soliman, Doctoral Candidate
Key Information: The following is a short summary of this study to help you decide whether or not to be a part of this study. More detailed information is listed later on in this form.
Why am I being invited to take part in a research study?
We invite you to take part in a research study because you are a surgeon with prior experience using a da Vinci robotic simulator and have completed fewer than 41 robotic operations.
Why is this research being done?
The purpose of this study is to investigate the effectiveness of a video review guide in enhancing the improvement of robotic surgical skills. A potential benefit of this study is reducing the learning curve for novice robotic surgeons.
How long will the research last and what will I need to do?
We expect that you will be in this research study for 35-45 minutes.
You will be asked to:
1. Watch a video of a robotic surgical exercise
2. Perform the same robotic surgical exercise on a simulator
3. Watch your recorded performance with or without a surgical video review guide
4. Repeat the same robotic surgical exercise
5. Complete an exit survey.
More detailed information about the study procedures can be found under "What happens if I say yes, I want to be in this research?"
Is there any way being in this study could be bad for me?
Participating in this study involves minimal risks. Though rare, viewing 3D images on a monitor may cause temporary motion sickness, perceptual after-effects, or eye strain. The simulation exercises are designed to mimic robotic surgical tasks and might induce mild frustration or fatigue.
Will being in this study help me in any way?
We cannot promise any benefits to you or others from your taking part in this research. However, possible benefits include gaining insights into your robotic surgical skills, improved reflective and analytic abilities, and contributing to the advancement of surgical education and training methods.
What happens if I do not want to be in this research?
Participation in research is completely voluntary. You can decide to participate or not to participate.
Detailed Information:
The following is more detailed information about this study in addition to the information listed above.
What should I know about a research study?
Someone will explain this research study to you.
Whether or not you take part is up to you.
You can choose not to take part.
You can agree to take part and later change your mind.
Your decision will not be held against you.
You can ask all the questions you want before you decide.
Who can I talk to?
If you have questions, concerns, or complaints, or think the research has hurt you, talk to the research team: Mary M. Soliman, Doctoral Candidate, EdD in Curriculum & Instruction Program, College of Community Innovation and Education, UCF, at (480) 206-4563 or by email at ma210027@ucf.edu. Dr. Glenda Gunter, Faculty Supervisor, Department of Learning Sciences & Educational Research, at (407) 823-2428 or by email at glenda.gunter@ucf.edu.
This research has been reviewed and approved by an Institutional Review Board ("IRB"). You may talk to them at 407-823-2901 or irb@ucf.edu if:
Your questions, concerns, or complaints are not being answered by the research team.
You cannot reach the research team.
You want to talk to someone besides the research team.
You have questions about your rights as a research subject.
You want to get information or provide input about this research.
How many people will be studied?
We expect 60 people will be in this research study.
What happens if I say yes, I want to be in this research?
You will be randomly assigned to either the intervention or control group. Both groups will watch an exemplary video of a simulation exercise and then engage in the same simulation exercise using a da Vinci robotic simulator. Subsequently, you will watch a video recording of your performance. The key difference between the two groups lies in the video review process.
Intervention Group: If assigned to the intervention group, you will receive a written video review guide. This guide is designed to help you independently reflect on your simulation performance and effectively analyze the video recording for areas of surgical skill improvement.
Control Group: If assigned to the control group, you will review the video recording of your simulation performance without any additional guidance.
Following the video review, you will repeat the simulation exercise to assess any potential skill improvement. You will then complete an exit survey on your perceptions of surgical video review guides.
The estimated time it will take for you to complete the study is 35-45 minutes. It will take place in the exhibit hall of a surgical conference.
Your simulation performance will be video recorded. No sound, personal images, or other identifying information will be recorded; therefore, the recording will remain anonymous. If you do not want to be recorded, you will not be able to be in the study. Discuss this with the researcher or a research team member. If you are recorded as part of this study, the recording will be kept in a locked, secure place. The recording will be erased or destroyed five years after study closure.
The group you get will be chosen randomly; you will not choose which group you will be assigned to. You will have a 50% chance of being placed in the intervention group.
What happens if I say yes, but I change my mind later?
You can leave the research at any time; it will not be held against you.
What happens to the information collected for the research?
No personal or identifiable information will be collected during this study. You will be assigned a random number at the beginning of the study to track your simulation videos, performance reports, and surveys in order to compare and report performance differences within and between groups. Organizations that may inspect and copy your anonymous information include the IRB and other representatives of this organization.
Do you provide your consent to participate in this research study?
o Yes (1)
o No (2)
Q2 Have you been trained to use a da Vinci robot? (e.g., simulator exercises, basic robotic training, etc.)
o Yes (1)
o No (2)
Skip To: End of Survey If Have you been trained to use a da Vinci robot? (e.g. simulator exercises, basic robotic training,... = No
Q12 What type of robotic training have you received? Check all that apply.
Online training modules (1)
Simulator training (2)
Hands-on training (3)
Case experience (4)
Video review (5)
Q3 Approximately how many robotic cases have you completed independently?
o 0-10 (1)
o 11-20 (2)
o 21-30 (3)
o 31-40 (4)
o 41+ (5)
Skip To: End of Survey If Approximately how many robotic cases have you completed independently? = 41+
Q4 Please enter the number assigned to you for this study.
________________________________________________________________
Q5 What is your current surgical position?
o Resident (1)
o Fellow (2)
o In surgical practice (3)
Q6 Years of surgical experience (including training).
o 0-5 (1)
o 6-10 (2)
o 11-15 (3)
o 16-20 (4)
o 21+ (5)
Q7 Surgical specialty
o Acute care/Trauma (1)
o Cardiothoracic (2)
o Colon & Rectal (3)
o General (4)
o Gynecology/Obstetrics (5)
o Otorhinolaryngology (6)
o Pediatric (7)
o Urology (8)
o Other (9) __________________________________________________
Q13 What state do you currently live in? (Please list country if you live outside the US)
________________________________________________________________
Q8 Age
o 24 or under (1)
o 25-34 (2)
o 35-44 (3)
o 45-54 (4)
o 55-64 (5)
o 65-74 (6)
o 75 or above (7)
Q9 Gender
o Male (1)
o Female (2)
o Non-binary / Third gender (3)
o Prefer not to say (4)
Q10 How often do you review your own operative videos?
o Never (1)
o Sometimes (2)
o About half the time (3)
o Most of the time (4)
o Always (5)
End of Block: Default Question Block
APPENDIX D: PERMISSION TO PRINT SIMULATOR IMAGES
From: Gillian Duncan <Gillian.Duncan@intusurg.com>
Sent: Tuesday, May 14, 2024 5:54:28 PM
To: Mary Soliman <marymsoliman@ucf.edu>
Subject: RE: [EXTERNAL] Simulator Images
Hi Mary. You have my approval to use these images in your dissertation. I look forward to reading it!
Best Wishes,
Gillian
Dr. Gillian S Duncan
Senior Vice President
Professional Education & Program Services – Worldwide
Mobile: 1 408 373 7492
Direct: 1 408 523 2356
INTUITIVE
1020 Kifer Rd
Sunnyvale, CA 94086 USA
intuitive.com
From: Mary Soliman <marymsoliman@ucf.edu>
Sent: Tuesday, May 14, 2024 12:15 PM
To: Gillian Duncan <Gillian.Duncan@intusurg.com>
Subject: [EXTERNAL] Simulator Images
Hi Gillian,
Thank you for meeting with me today!
Attached are the images I would like to include in my dissertation with Intuitive’s permission.
Regards,
Mary
APPENDIX E: SAMPLE SIMULATION REPORT
APPENDIX F: EXIT SURVEYS
Video Review Guide Exit Survey - INTERVENTION
Start of Block: Default Question Block
Q1 Please enter the number assigned to you for this study.
________________________________________________________________
Q2 Did you use the video review guide provided to you during this study?
o Yes (1)
o No (2)
Q3 Thinking about the video review guide provided during this study, please rate the following statements.
(Scale: Strongly disagree (1), Disagree (2), Somewhat disagree (3), Neither agree nor disagree (4), Somewhat agree (5), Agree (6), Strongly agree (7))
1. The video review guide was helpful.
2. The video review guide improved my reflection.
3. The video review guide allowed me to notice things in my performance I may not have noticed otherwise.
4. The video review guide helped improve my simulation performance.
5. I would have performed the same on my second attempt with or without the video review guide.
6. I would have performed better WITHOUT the video review guide.
7. The video review guide was distracting.
8. The video review guide was time-consuming.
Q4 Thinking about the concept of video review guides in general, please rate the following statements.
(Scale: Strongly disagree (1), Disagree (2), Somewhat disagree (3), Neither agree nor disagree (4), Somewhat agree (5), Agree (6), Strongly agree (7))
1. Video review guides would help surgeons reflect on their practice.
2. Video review guides would help surgeons improve their robotic surgical skills.
3. Video review guides would help shorten the robotic learning curve for surgeons.
4. Video review guides are more time-consuming than helpful.
5. If provided, I would use a surgical video review guide when reviewing my operative videos in the future.
6. I would recommend the use of a surgical video review guide to other surgeons.
Q5 Please provide any comments or suggestions to improve the video review guide.
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
End of Block: Default Question Block
Video Review Guide Exit Survey - CONTROL
Start of Block: Default Question Block
Q1 Please enter the number assigned to you for this study.
________________________________________________________________
Q2 A video review guide is a written guide with prompts and questions to help guide your reflection while you review your surgical video.
Thinking about the concept of video review guides, please rate the following statement.
(Scale: Strongly disagree (1), Disagree (2), Somewhat disagree (3), Neither agree nor disagree (4), Somewhat agree (5), Agree (6), Strongly agree (7))
1. A video review guide would have helped me on my second simulation performance in this study.
Q3 Thinking about the concept of video review guides in general, please rate the following statements.
(Scale: Strongly disagree (1), Disagree (2), Somewhat disagree (3), Neither agree nor disagree (4), Somewhat agree (5), Agree (6), Strongly agree (7))
1. Video review guides would help surgeons reflect on their practice.
2. Video review guides would help surgeons improve their robotic surgical skills.
3. Video review guides would help shorten the robotic learning curve for surgeons.
4. Video review guides are more time-consuming than helpful.
5. If provided, I would use a surgical video review guide when reviewing my operative videos in the future.
6. I would recommend the use of a surgical video review guide to other surgeons.
Q4 How did watching your video change the way you approached the second simulation, if at all?
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
Q5 Reflect on what you paid attention to while watching the video of your simulation performance.
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
End of Block: Default Question Block
APPENDIX G: IRB APPROVAL
Institutional Review Board
FWA00000351
IRB00001138, IRB00012110
Office of Research
12201 Research Parkway
Orlando, FL 32826-3246
APPROVAL
October 23, 2023
Dear Mary Soliman:
On 10/23/2023, the IRB reviewed the following submission:
Type of Review: Initial Study, Categories 6, 7a, 7b
Title: Examining the Efficacy of a Video Review Guide to Facilitate Robotic Surgical Skill Improvement
Investigator: Mary Soliman
IRB ID: STUDY00006024
Funding: None, None
IND, IDE, or HDE: None
Documents Reviewed:
• Demographic Survey.pdf, Category: Survey / Questionnaire;
• Exit Survey Control.pdf, Category: Survey / Questionnaire;
• Exit Survey Intervention.pdf, Category: Survey / Questionnaire;
• HRP 502 Consent VR-UPDATED.pdf, Category: Consent Form;
• HRP 503 VR-UPDATED.docx, Category: IRB Protocol;
• Simulation Video Low.mp4, Category: Test Instruments;
• Simulator-Brochure.pdf, Category: Other;
• Study Announcement.docx, Category: Recruitment Materials;
• Surgical Video Review Guide.pdf, Category: Debriefing Form;
• VR Demographic Survey UPDATE.pdf, Category: Survey / Questionnaire;
• VR Screening Survey.pdf, Category: Survey / Questionnaire
The IRB approved the protocol on 10/23/2023. Continuing review is not required.
This approval includes approval of the request for waiver of documentation of
consent.
In conducting this protocol, you are required to follow the requirements listed in
the Investigator Manual (HRP-103), which can be found by navigating to the IRB
Library within the IRB system. Guidance on submitting Modifications and a
Continuing Review or Administrative Check-in is detailed in the manual. If
continuing review is required and approval is not granted before the expiration
date, approval of this protocol expires on that date.
If this protocol includes a consent process, use of the time-stamped version of
the consent form is required. You can find the time-stamped version of the
consent form in the "Documents" tab under the "Final" column.
To document consent, use the consent documents that were approved and
stamped by the IRB. Go to the Documents tab to download them.
When you have completed your research, please submit a Study Closure request
so that IRB records will be accurate.
If you have any questions, please contact the UCF IRB at 407-823-2901 or
irb@ucf.edu. Please include your project title and IRB number in all
correspondence with this office.
Sincerely,
Harry Wingfield
Designated Reviewer
Institutional Review Board
FWA00000351
IRB00001138, IRB00012110
Office of Research
12201 Research Parkway
Orlando, FL 32826-3246
APPROVAL
January 11, 2024
Dear Mary Soliman:
On 1/11/2024, the IRB reviewed the following submission:
Type of Review: Modification / Update, Categories 6, 7a, 7b
Title: Examining the Efficacy of a Video Review Guide to Facilitate Robotic Surgical Skill Improvement
Investigator: Mary Soliman
IRB ID: MOD00004968
Funding: None, None
IND, IDE, or HDE: None
Documents Reviewed:
• HRP 503 VR-Modification.docx, Category: IRB Protocol;
• QR Codes.pdf, Category: Survey / Questionnaire;
• Study Announcement Modification.docx, Category: Recruitment Materials
The IRB approved this modification on 1/11/2024.
In conducting this protocol, you are required to follow the requirements listed in the
Investigator Manual (HRP-103), which can be found by navigating to the IRB Library
within the IRB system. Guidance on submitting Modifications and a Continuing Review
or Administrative Check-in is detailed in the manual. If continuing review is required and
approval is not granted before the expiration date, approval of this protocol expires on
that date.
If this protocol includes a consent process, use of the time-stamped version of the
consent form is required. You can find the time-stamped version of the consent form in
the "Documents" tab under the "Final" column.
When you have completed your research, please submit a Study Closure request so
that IRB records will be accurate.
If you have any questions, please contact the UCF IRB at 407-823-2901 or irb@ucf.edu.
Please include your project title and IRB number in all correspondence with this office.
Sincerely,
Harry Wingfield
Designated Reviewer
LIST OF REFERENCES
Abdel-Karim, B., Pfeuffer, N., Carl, K. V., & Hinz, O. (2023). How AI-based systems can induce reflections: The case of AI-augmented diagnostic work. MIS Quarterly, 47(4), 1395–1424. https://doi.org/10.25300/MISQ/2022/16773
ACGME. (2019, January). Surgery milestones. https://www.acgme.org/globalassets/pdfs/milestones/surgerymilestones.pdf
Ahmed, M., Arora, S., Russ, S., Darzi, A., Vincent, C., & Sevdalis, N. (2013). Operation debrief: A SHARP improvement in performance feedback in the operating room. Annals of Surgery, 258(6), 958–963. https://doi.org/10.1097/SLA.0b013e31828c88fc
Aldinc, H., Gun, C., Yaylaci, S., Senuren, C. O., Guven, F., Sahiner, M., Kayayurt, K., & Turkmen, S. (2022). Comparison of self versus expert-assisted feedback for cricothyroidotomy training: A randomized trial. BMC Medical Education, 22(1), 455. https://doi.org/10.1186/s12909-022-03519-z
Ali, A. A., & Miller, E. T. (2018). Effectiveness of video-assisted debriefing in health education: An integrative review. Journal of Nursing Education, 57(1), 14–20. https://doi.org/10.3928/01484834-20180102-04
Andersen, S. A. W., Guldager, M., Mikkelsen, P. T., & Sørensen, M. S. (2019). The effect of structured self-assessment in virtual reality simulation training of mastoidectomy. European Archives of Oto-Rhino-Laryngology, 276(12), 3345–3352. https://doi.org/10.1007/s00405-019-05648-6
Anderson, J. E., Jurkovich, G. J., Galante, J. M., & Farmer, D. L. (2021). A survey of the surgical morbidity and mortality conference in the United States and Canada: A dying tradition or the key to modern quality improvement? Journal of Surgical Education, 78(3), 927–933. https://doi.org/10.1016/j.jsurg.2020.10.008
Azari, D., Greenberg, C., Pugh, C., Wiegmann, D., & Radwin, R. (2019). In search of characterizing surgical skill. Journal of Surgical Education, 76(5), 1348–1363. https://doi.org/10.1016/j.jsurg.2019.02.010
Azari, D. P., Miller, B. L., Le, B. V., Greenberg, C. C., & Radwin, R. G. (2020). Quantifying surgeon maneuvers across experience levels through marker-less hand motion kinematics of simulated surgical tasks. Applied Ergonomics, 87, 103136. https://doi.org/10.1016/j.apergo.2020.103136
Baecher, L., Kung, S. C., Ward, S. L., & Kern, K. (2018). Facilitating video analysis for teacher development: A systematic review of the research. Journal of Technology and Teacher Education, 26(2), 185–216.
Balvardi, S., Kammili, A., Hanson, M., Mueller, C., Vassiliou, M., Lee, L., Schwartzman, K., Fiore, J. F., & Feldman, L. S. (2022). The association between video-based assessment of intraoperative technical performance and patient outcomes: A systematic review. Surgical Endoscopy, 36(11), 7938–7948. https://doi.org/10.1007/s00464-022-09296-6
Beets, M. W., Weaver, R. G., Ioannidis, J. P. A., Geraci, M., Brazendale, K., Decker, L., Okely, A. D., Lubans, D., Van Sluijs, E., Jago, R., Turner-McGrievy, G., Thrasher, J., Li, X., & Milat, A. J. (2020). Identification and evaluation of risk of generalizability biases in pilot versus efficacy/effectiveness trials: A systematic review and meta-analysis. International Journal of Behavioral Nutrition and Physical Activity, 17(1), 19. https://doi.org/10.1186/s12966-020-0918-y
Birkmeyer, J. D., Finks, J. F., O'Reilly, A., Oerline, M., Carlin, A. M., Nunn, A. R., Dimick, J., Banerjee, M., & Birkmeyer, N. J. O. (2013). Surgical skill and complication rates after bariatric surgery. New England Journal of Medicine, 369(15), 1434–1442. https://doi.org/10.1056/NEJMsa1300625
Blanca, M. J., Arnau, J., García-Castro, F. J., Alarcón, R., & Bono, R. (2023). Non-normal data in repeated measures ANOVA: Impact on Type I error and power. Psicothema, 35(1), 21–29. https://doi.org/10.7334/psicothema2022.292
Boet, S., Bould, M. D., Bruppacher, H. R., Desjardins, F., Chandra, D. B., & Naik, V. N. (2011). Looking in the mirror: Self-debriefing versus instructor debriefing for simulated crises. Critical Care Medicine, 39(6), 1377–1381. https://doi.org/10.1097/CCM.0b013e31820eb8be
Boud, D., Keogh, R., & Walker, D. (1985). Reflection, turning experience into learning. Kogan Page; Nichols Pub.
Brajcich, B. C., Stulberg, J. J., Palis, B. E., Chung, J. W., Huang, R., Nelson, H., & Bilimoria, K. Y. (2021). Association between surgical technical skill and long-term survival for colon cancer. JAMA Oncology, 7(1), 127. https://doi.org/10.1001/jamaoncol.2020.5462
Brennan, H. L., & Kirby, S. D. (2023). Intraoperative video recording in otolaryngology for surgical education: Evolution and considerations. Journal of Otolaryngology - Head & Neck Surgery, 52(1), 2. https://doi.org/10.1186/s40463-023-00620-1
Bui, N. T.-N., & Yarsi, P. (2023). Go-deep: A potential reflection model for experiential learning. International Journal of Learning, Teaching and Educational Research, 22(7), 240–257. https://doi.org/10.26803/ijlter.22.7.13
Chiu, C.-C., Hsu, W.-T., Choi, J. J., Galm, B., Lee, M. G., Chang, C.-N., Liu, C.-Y. C., & Lee, C.-C. (2019). Comparison of outcome and cost between the open, laparoscopic, and robotic surgical treatments for colon cancer: A propensity score-matched analysis using nationwide hospital record database. Surgical Endoscopy, 33(11), 3757–3765. https://doi.org/10.1007/s00464-019-06672-7
Cook, D. A., & Hatala, R. (2016). Validation of educational assessments: A primer for simulation and beyond. Advances in Simulation, 1(1), 31. https://doi.org/10.1186/s41077-016-0033-y
Creswell, J. W. (2019). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (Sixth edition). Pearson.
Davies, S. (2012). Embracing reflective practice. Education for Primary Care, 23(1), 9–12. https://doi.org/10.1080/14739879.2012.11494064
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319. https://doi.org/10.2307/249008
Desjarlais, M., & Smith, P. (2011). A comparative analysis of reflection and self-assessment. International Journal of Process Education, 3(1), 3–18.
Dewey, J. (1933). How we think: A restatement of the relation of reflective thinking to the educative process. D.C. Heath.
Dweck, C. S. (2000). Self-theories: Their role in motivation, personality, and development. Psychology Press.
El-Gabri, D., McDow, A. D., Quamme, S. P., Hooper-Lane, C., Greenberg, C. C., & Long, K. L. (2020). Surgical coaching for advancement of global surgical skills and capacity: A systematic review. Journal of Surgical Research, 246, 499–505. https://doi.org/10.1016/j.jss.2019.09.039
Entrustable professional activities (EPAs) for surgeons. (n.d.). American Board of Surgery. Retrieved April 4, 2024, from https://www.absurgery.org/get-certified/epas/
Ericsson, K. A. (2004). Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Academic Medicine, 79(Supplement), S70–S81. https://doi.org/10.1097/00001888-200410001-00022
Ericsson, K. A. (2011). The surgeon's expertise. In H. Fry & R. Kneebone (Eds.), Surgical education (Vol. 2, pp. 107–121). Springer Netherlands. https://doi.org/10.1007/978-94-007-1682-7_7
Ericsson, K. A., & Pool, R. (2016). Peak: Secrets from the new science of expertise. Eamon Dolan/Houghton Mifflin Harcourt.
Esposito, A. C., Coppersmith, N. A., White, E. M., & Yoo, P. S. (2022). Video coaching in surgical education: Utility, opportunities, and barriers to implementation. Journal of Surgical Education, 79(3), 717–724. https://doi.org/10.1016/j.jsurg.2021.12.004
Fainberg, J., Vanden Berg, R. N. W., Chesnut, G., Coleman, J. A., Donahue, T., Ehdaie, B., Goh, A. C., Laudone, V. P., Lee, T., Pyon, J., Scardino, P. T., & Smith, R. C. (2022). A novel expert coaching model in urology, aimed at accelerating the learning curve in robotic prostatectomy. Journal of Surgical Education, 79(6), 1480–1488. https://doi.org/10.1016/j.jsurg.2022.06.006
Fatima, T., Khan, R. A., Azher, F., & Mahboob, U. (2020). Thoughtful surgical practice for therapeutic self: A randomized control trial. Pakistan Journal of Medical Sciences, 36(7). https://doi.org/10.12669/pjms.36.7.3038
Fecso, A. B., Bhatti, J. A., Stotland, P. K., Quereshy, F. A., & Grantcharov, T. P. (2019). Technical performance as a predictor of clinical outcomes in laparoscopic gastric cancer surgery. Annals of Surgery, 270(1), 115–120. https://doi.org/10.1097/SLA.0000000000002741
Fitts, P. M., & Posner, M. I. (Eds.). (1973). Human performance. Prentice Hall.
Gagnon, M.-P., Ngangue, P., Payne-Gagnon, J., & Desmartis, M. (2016). M-health adoption by healthcare professionals: A systematic review. Journal of the American Medical Informatics Association, 23(1), 212–220. https://doi.org/10.1093/jamia/ocv052
Gathu, C. (2022). Facilitators and barriers of reflective learning in postgraduate medical education: A narrative review. Journal of Medical Education and Curricular Development, 9, 23821205221096106. https://doi.org/10.1177/23821205221096106
Gibbs, G. (1988). Learning by doing: A guide to teaching and learning methods. FEU.
Goh, A. C., Goldfarb, D. W., Sander, J. C., Miles, B. J., & Dunkin, B. J. (2012). Global evaluative assessment of robotic skills: Validation of a clinical assessment tool to measure robotic surgical skills. Journal of Urology, 187(1), 247–252. https://doi.org/10.1016/j.juro.2011.09.032
Goldman, L. I., Maier, W. P., Rosemond, G. P., Saltzman, S. W., & Cramer, L. M. (1970). Teaching surgical technique by the critical review of videotaped performance: The surgical instant replay. Obstetrical & Gynecological Survey, 25(2), 144–145. https://doi.org/10.1097/00006254-197002000-00013
Gordon, M. J. (1991). A review of the validity and accuracy of self-assessments in health professions training. Academic Medicine, 66(12), 762–769. https://doi.org/10.1097/00001888-199112000-00012
Gorgy, A., Hawary, H. E., Galli, R., MacDonald, M., Barone, N., & Thibaudeau, S. (2022). Evaluating the educational quality of surgical YouTube® videos: A systematic review. Health Sciences Review, 5, 100067. https://doi.org/10.1016/j.hsr.2022.100067
Gray, T. W., & Coombs, C. J. (2018). Developing professional judgement in surgical trainees: The role of critical reflection. Australasian Journal of Plastic Surgery, 1(1), 95–100. https://doi.org/10.34239/ajops.v1n1.34
Green, J. L., Suresh, V., Bittar, P., Ledbetter, L., Mithani, S. K., & Allori, A. (2019). The utilization of video technology in surgical education: A systematic review. Journal of Surgical Research, 235, 171–180. https://doi.org/10.1016/j.jss.2018.09.015
Guend, H., Widmar, M., Patel, S., Nash, G. M., Paty, P. B., Guillem, J. G., Temple, L. K., Garcia-Aguilar, J., & Weiser, M. R. (2017). Developing a robotic colorectal cancer surgery program: Understanding institutional and individual learning curves. Surgical Endoscopy, 31(7), 2820–2828. https://doi.org/10.1007/s00464-016-5292-0
Halim, J., Jelley, J., Zhang, N., Ornstein, M., & Patel, B. (2021). The effect of verbal feedback, video feedback, and self-assessment on laparoscopic intracorporeal suturing skills in novices: A randomized trial. Surgical Endoscopy, 35(7), 3787–3795. https://doi.org/10.1007/s00464-020-07871-3
Hawkins, S. C., Osborne, A., Schofield, S. J., Pournaras, D. J., & Chester, J. F. (2012). Improving the accuracy of self-assessment of practical clinical skills using video feedback: The importance of including benchmarks. Medical Teacher, 34(4), 279–284. https://doi.org/10.3109/0142159X.2012.658897
Hogg, M. E., Zenati, M., Novak, S., Chen, Y., Jun, Y., Steve, J., Kowalsky, S. J., Bartlett, D. L., Zureikat, A. H., & Zeh, H. J. (2016). Grading of surgeon technical performance predicts postoperative pancreatic fistula for pancreaticoduodenectomy independent of patient-related variables. Annals of Surgery, 264(3), 482–491. https://doi.org/10.1097/SLA.0000000000001862
Holder, N. A. K. A., Sim, Z. L., Foong, C. C., & Pallath, V. (2019). Developing a reflection guiding tool for underperforming medical students: An action research project. Tuning Journal for Higher Education, 7(1), 115–163. https://doi.org/10.18543/tjhe-7(1)-2019pp115-163
Holmes, C. L., Hubinette, M. M., Maclure, M., Miller, H., Ting, D., Costello, G., Reed, M., & Regehr, G. (2018). Reflecting on what? The difficulty of noticing formative experiences in the moment. Perspectives on Medical Education, 7(6), 379–385. https://doi.org/10.1007/S40037-018-0486-X
Intuitive reaches 10 million procedures performed using da Vinci Surgical Systems. (2021, December 14). Intuitive Surgical. https://isrg.intuitive.com/news-releases/news-release-details/intuitive-reaches-10-million-procedures-performed-using-da-vinci
Isaranuwatchai, W., Alam, F., Hoch, J., & Boet, S. (2016). A cost-effectiveness analysis of self-debriefing versus instructor debriefing for simulated crises in perioperative medicine in Canada. Journal of Educational Evaluation for Health Professions, 13, 44. https://doi.org/10.3352/jeehp.2016.13.44
Iskold, L. (2008). Research-based listening tasks for video comprehension. In F. Zhang & B. Barber (Eds.), Handbook of research on computer enhanced language acquisition and learning (pp. 116–135). Information Science Reference.
Isreb, S., Attwood, S., Hesselgreaves, H., McLachlan, J., & Illing, J. (2021). Synchronized video-review as a tool to enhance reflection and feedback: A design-based feasibility study. Journal of Surgical Education, 78(1), 1–8. https://doi.org/10.1016/j.jsurg.2020.07.014
Jamshidi, R., LaMasters, T., Eisenberg, D., Duh, Q.-Y., & Curet, M. (2009). Video self-assessment augments development of videoscopic suturing skill. Journal of the American College of Surgeons, 209(5), 622–625. https://doi.org/10.1016/j.jamcollsurg.2009.07.024
Jung, J. P., Zenati, M. S., Dhir, M., Zureikat, A. H., Zeh, H. J., Simmons, R. L., & Hogg, M. E. (2018). Use of video review to investigate technical factors that may be associated with delayed gastric emptying after pancreaticoduodenectomy. JAMA Surgery, 153(10), 918. https://doi.org/10.1001/jamasurg.2018.2089
Kassite, I., Bejan-Angoulvant, T., Lardy, H., & Binet, A. (2019). A systematic review of the learning curve in robotic surgery: Range and heterogeneity. Surgical Endoscopy, 33(2), 353–365. https://doi.org/10.1007/s00464-018-6473-9
Keiser, N. L., & Arthur, W. (2021). A meta-analysis of the effectiveness of the after-action review (or debrief) and factors that influence its effectiveness. Journal of Applied Psychology, 106(7), 1007–1032. https://doi.org/10.1037/apl0000821
Kim, H. S. (1999). Critical reflective inquiry for knowledge development in nursing practice. Journal of Advanced Nursing, 29(5), 1205–1212. https://doi.org/10.1046/j.1365-2648.1999.01005.x
Kim, Y. H., Min, J., Kim, S. H., & Shin, S. (2018). Effects of a work-based critical reflection program for novice nurses. BMC Medical Education, 18(1), 30. https://doi.org/10.1186/s12909-018-1135-0
Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86. https://doi.org/10.1207/s15326985ep4102_1
Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Prentice-Hall.
Kong, S. C., Shroff, R. H., & Hung, H. K. (2009). A web enabled video system for self-reflection by student teachers using a guiding framework. Australasian Journal of Educational Technology, 25(4). https://doi.org/10.14742/ajet.1128
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134. https://doi.org/10.1037/0022-3514.77.6.1121
Kun, Y., Hubert, J., Bin, L., & Huan, W. X. (2019). Self-debriefing model based on an integrated video-capture system: An efficient solution to skill degradation. Journal of Surgical Education, 76(2), 362–369. https://doi.org/10.1016/j.jsurg.2018.08.017
Larkins, K., Khan, M., Mohan, H., Warrier, S., & Heriot, A. (2023). A systematic review of video-based educational interventions in robotic surgical training. Journal of Robotic Surgery, 17(4), 1329–1339. https://doi.org/10.1007/s11701-023-01605-y
Leise, C., & Beyerlein, C. (2007). Learning processes through the use of methodologies. In S. W. Beyerlein, C. Holmes, & D. K. Apple (Eds.), Faculty guidebook: A comprehensive tool for improving faculty performance (Fourth edition, pp. 217–220). Pacific Crest.
Lim, R. B. T., Hoe, K. W. B., & Zheng, H. (2022). A systematic review of the outcomes, level, facilitators, and barriers to deep self-reflection in public health higher education: Meta-analysis and meta-synthesis. Frontiers in Education, 7, 938224. https://doi.org/10.3389/feduc.2022.938224
Lima, D. L., Viscarret, V., Velasco, J., Lima, R. N. C. L., & Malcher, F. (2022). Social media as a tool for surgical education: A qualitative systematic review. Surgical Endoscopy, 36(7), 4674–4684. https://doi.org/10.1007/s00464-022-09150-9
Lu, F.-I., Takahashi, S. G., & Kerr, C. (2021). Myth or reality: Self-assessment is central to effective curriculum in anatomical pathology graduate medical education. Academic Pathology, 8, 23742895211013528. https://doi.org/10.1177/23742895211013528
MacCraith, E., Forde, J. C., & Davis, N. F. (2019). Robotic simulation training for urological trainees: A comprehensive review on cost, merits and challenges. Journal of Robotic Surgery, 13(3), 371–377. https://doi.org/10.1007/s11701-019-00934-1
MacKenna, V., Díaz, D. A., Chase, S. K., Boden, C. J., & Loerzel, V. (2021). Self-debriefing in healthcare simulation: An integrative literature review. Nurse Education Today, 102, 104907. https://doi.org/10.1016/j.nedt.2021.104907
Madion, M. P., Kastenmeier, A., Goldblatt, M. I., & Higgins, R. M. (2022). Robotic surgery training curricula: Prevalence, perceptions, and educational experiences in general surgery residency programs. Surgical Endoscopy, 36(9), 6638–6646. https://doi.org/10.1007/s00464-021-08930-z
Mann, K., Gordon, J., & MacLeod, A. (2009). Reflection and reflective practice in health professions education: A systematic review. Advances in Health Sciences Education, 14(4), 595–621. https://doi.org/10.1007/s10459-007-9090-2
Marino, M. V., Shabat, G., Gulotta, G., & Komorowski, A. L. (2018). From illusion to reality: A brief history of robotic surgery. Surgical Innovation, 25(3), 291–296. https://doi.org/10.1177/1553350618771417
Martin, J. A., Regehr, G., Reznick, R., Macrae, H., Murnaghan, J., Hutchison, C., & Brown, M. (1997). Objective structured assessment of technical skill (OSATS) for surgical residents. British Journal of Surgery, 84(2), 273–278. https://doi.org/10.1002/bjs.1800840237
Mayer, R. E. (2002). Multimedia learning. In Psychology of learning and motivation (Vol. 41, pp. 85–139). Elsevier. https://doi.org/10.1016/S0079-7421(02)80005-6
McKendy, K. M., Watanabe, Y., Lee, L., Bilgic, E., Enani, G., Feldman, L. S., Fried, G. M., & Vassiliou, M. C. (2017). Perioperative feedback in surgical training: A systematic review. The American Journal of Surgery, 214(1), 117–126. https://doi.org/10.1016/j.amjsurg.2016.12.014
McKinley, S. K., Hashimoto, D. A., Mansur, A., Cassidy, D., Petrusa, E., Mullen, J. T., Phitayakorn, R., & Gee, D. W. (2019). Feasibility and perceived usefulness of using head-mounted cameras for resident video portfolios. Journal of Surgical Research, 239, 233–241. https://doi.org/10.1016/j.jss.2019.01.041
McQueen, S., McKinnon, V., VanderBeek, L., McCarthy, C., & Sonnadara, R. (2019). Video-based assessment in surgical education: A scoping review. Journal of Surgical Education, 76(6), 1645–1654. https://doi.org/10.1016/j.jsurg.2019.05.013
McVee, M. B. (2018). Video pedagogy in action: Critical reflective inquiry using the gradual release of responsibility model. Routledge.
Miskovic, D., Ni, M., Wyles, S. M., Kennedy, R. H., Francis, N. K., Parvaiz, A., Cunningham, C., Rockall, T. A., Gudgeon, A. M., Coleman, M. G., & Hanna, G. B. (2013). Is competency assessment at the specialist level achievable? A study for the national training programme in laparoscopic colorectal surgery in England. Annals of Surgery, 257(3), 476–482. https://doi.org/10.1097/SLA.0b013e318275b72a
Mota, P., Carvalho, N., Carvalho-Dias, E., João Costa, M., Correia-Pinto, J., & Lima, E. (2018). Video-based surgical learning: Improving trainee education and preparation for surgery. Journal of Surgical Education, 75(3), 828–835. https://doi.org/10.1016/j.jsurg.2017.09.027
Mutabdzic, D., Mylopoulos, M., Murnaghan, M. L., Patel, P., Zilbert, N., Seemann, N., Regehr, G., & Moulton, C.-A. (2015). Coaching surgeons: Is culture limiting our ability to improve? Annals of Surgery, 262(2), 213–216. https://doi.org/10.1097/SLA.0000000000001247
Nagro, S. A., deBettencourt, L. U., Rosenberg, M. S., Carran, D. T., & Weiss, M. P. (2017). The effects of guided video analysis on teacher candidates' reflective ability and instructional skills. Teacher Education and Special Education: The Journal of the Teacher Education Division of the Council for Exceptional Children, 40(1), 7–25. https://doi.org/10.1177/0888406416680469
Naumeri, F. (2023). Reflective practice and factors affecting it: Perceptions of pediatric surgery residents. Annals of King Edward Medical University, 28(4), 417–422. https://doi.org/10.21649/akemu.v28i4.5250
Nayar, S. K., Musto, L., Baruah, G., Fernandes, R., & Bharathan, R. (2020). Self-assessment of surgical skills: A systematic review. Journal of Surgical Education, 77(2), 348–361. https://doi.org/10.1016/j.jsurg.2019.09.016
North, J. S., Ward, P., Ericsson, A., & Williams, A. M. (2011). Mechanisms underlying skilled anticipation and recognition in a dynamic and temporally constrained domain. Memory, 19(2), 155–168. https://doi.org/10.1080/09658211.2010.541466
Olsen, R. G., Genét, M. F., Konge, L., & Bjerrum, F. (2022). Crowdsourced assessment of surgical skills: A systematic review. The American Journal of Surgery, 224(5), 1229–1237. https://doi.org/10.1016/j.amjsurg.2022.07.008
Patel, R. V., Atashzar, S. F., & Tavakoli, M. (2022). Haptic feedback and force-based teleoperation in surgical robotics. Proceedings of the IEEE, 110(7), 1012–1027. https://doi.org/10.1109/JPROC.2022.3180052
Patel, K. M., & Metersky, K. (2022). Reflective practice in nursing: A concept analysis. International Journal of Nursing Knowledge, 33(3), 180–187. https://doi.org/10.1111/2047-3095.12350
Pernar, L. I. M., Robertson, F. C., Tavakkoli, A., Sheu, E. G., Brooks, D. C., & Smink, D. S. (2017). An appraisal of the learning curve in robotic general surgery. Surgical Endoscopy, 31(11), 4583–4596. https://doi.org/10.1007/s00464-017-5520-2
Phillips, A. W., Matthan, J., Bookless, L. R., Whitehead, I. J., Madhavan, A., Rodham, P., Porter, A. L. R., Nesbitt, C. I., & Stansby, G. (2017). Individualised expert feedback is not essential for improving basic clinical skills performance in novice learners: A randomized trial. Journal of Surgical Education, 74(4), 612–620. https://doi.org/10.1016/j.jsurg.2016.12.003
Prebay, Z. J., Peabody, J. O., Miller, D. C., & Ghani, K. R. (2019). Video review for measuring and improving skill in urological surgery. Nature Reviews Urology, 16(4), 261–267. https://doi.org/10.1038/s41585-018-0138-2
Pryor, A. D., Lendvay, T., Jones, A., Ibáñez, B., & Pugh, C. (2023). An American Board of Surgery pilot of video assessment of surgeon technical performance in surgery. Annals of Surgery, 277(4), 591–595. https://doi.org/10.1097/SLA.0000000000005804
Qi, C., Liu, L., & Zuo, S. (2022). Fostering noticing in prospective teachers through a video-based course: Results of an intervention study from China. Asian Journal for Mathematics Education, 1(2), 204–220. https://doi.org/10.1177/27527263221107718
Quach, W. T., Vittetoe, K. L., & Langerman, A. (2023). Ethical and legal considerations for recording in the operating room: A systematic review. Journal of Surgical Research, 288, 118–133. https://doi.org/10.1016/j.jss.2023.02.017
Rapp, A. K., Healy, M. G., Charlton, M. E., Keith, J. N., Rosenbaum, M. E., & Kapadia, M. R. (2016). YouTube is the most frequently used educational video source for surgical preparation. Journal of Surgical Education, 73(6), 1072–1076. https://doi.org/10.1016/j.jsurg.2016.04.024
Reck-Burneo, C. A., Dingemans, A. J. M., Lane, V. A., Cooper, J., Levitt, M. A., & Wood, R. J. (2018). The impact of manuscript learning vs. video learning on a surgeon's confidence in performing a difficult procedure. Frontiers in Surgery, 5, 67. https://doi.org/10.3389/fsurg.2018.00067
Rice, M., Hodges, J., Bellon, J., Borrebach, J., Abbas, A., Hamad, A., … & Hogg, M. (2020). Association of mentorship and a formal robotic proficiency skills curriculum with subsequent generations' learning curve and safety for robotic pancreaticoduodenectomy. JAMA Surgery, 155(7), 607. https://doi.org/10.1001/jamasurg.2020.1040
Ritchie, M. J., Parker, L. E., & Kirchner, J. E. (2021). From novice to expert: Methods for transferring implementation facilitation skills to improve healthcare delivery. Implementation Science Communications, 2(1), 39. https://doi.org/10.1186/s43058-021-00138-5
Rivero-Moreno, Y., Echevarria, S., Vidal-Valderrama, C., Stefano-Pianetti, L., Cordova-Guilarte, J., Navarro-Gonzalez, J., Acevedo-Rodriguez, J., Dorado-Avila, G., Osorio-Romero, L., Chavez-Campos, C., & Acero-Alvarracin, K. (2023). Robotic surgery: A comprehensive review of the literature and current trends. Cureus. https://doi.org/10.7759/cureus.42379
Ross, S. B., Modasi, A., Christodoulou, M., Sucandy, I., Mehran, A., Lobe, T. E., Witkowski, E., & Satava, R. (2023). New generation evaluations: Video-based surgical assessments: A technology update. Surgical Endoscopy, 37(10), 7401–7411. https://doi.org/10.1007/s00464-023-10311-7
Salkind, N. (2010). Encyclopedia of research design. SAGE Publications, Inc. https://doi.org/10.4135/9781412961288
Scaffidi, M., Walsh, C., Khan, R., Parker, C., Al-Mazroui, A., Abunassar, M., Grindal, A., Lin, P., Wang, C., Bechara, R., & Grover, S. (2019). Influence of video-based feedback on self-assessment accuracy of endoscopic skills: A randomized controlled trial. Endoscopy International Open, 7(5), E678–E684. https://doi.org/10.1055/a-0867-9626
Schlick, C. J. R., Bilimoria, K. Y., & Stulberg, J. J. (2020). Video-based feedback for the improvement of surgical technique: A platform for remote review and improvement of surgical technique. JAMA Surgery, 155(11), 1078. https://doi.org/10.1001/jamasurg.2020.3286
Schmidt, M. W., Köppinger, K. F., Fan, C., Kowalewski, K.-F., Schmidt, L. P., Vey, J., Proctor, T., Probst, P., Bintintan, V. V., Müller-Stich, B.-P., & Nickel, F. (2021). Virtual reality simulation in robot-assisted surgery: Meta-analysis of skill transfer and predictability of skill. BJS Open, 5(2). https://doi.org/10.1093/bjsopen/zraa066
Schön, D. A. (1983). The reflective practitioner: How professionals think in action. Basic Books.
Shaughnessy, A., & Duggan, A. (2013). Family medicine residents' reactions to introducing a reflective exercise into training. Education for Health, 26(3), 141. https://doi.org/10.4103/1357-6283.125987
Sheetz, K. H., Claflin, J., & Dimick, J. B. (2020). Trends in the adoption of robotic surgery for common surgical procedures. JAMA Network Open, 3(1), e1918911. https://doi.org/10.1001/jamanetworkopen.2019.18911
Sheng, A. Y., Chu, A., Biancarelli, D., Drainoni, M.-L., Sullivan, R., & Schneider, J. I. (2018). A novel web-based experiential learning platform for medical students (Learning Moment): Qualitative study.
JMIR Medical Education, 4(2), e10657. https://doi.org/10.2196/10657
Solaini, L., Cavaliere, D., Avanzolini, A., Rocco, G., & Ercolani, G. (2022). Robotic versus
laparoscopic inguinal hernia repair: An updated systematic review and meta-analysis. Journal of
Robotic Surgery, 16(4), 775-781. https://doi.org/10.1007/s11701-021-01312-6
Soleimani-Nouri, P., Holtermann Entwistle, O., Fehervari, M., & Spalding, D. (2023). A novel
framework for surgical reflection. Annals of Laparoscopic and Endoscopic Surgery, 8, 20.
https://doi.org/10.21037/ales-23-8
Soliman, M. M., & Soliman, M. K. (2023). How expert surgeons review robotic videos: A
grounded theory study. The American Journal of Surgery. Advance online publication.
https://doi.org/10.1016/j.amjsurg.2023.07.043
Stulberg, J. J., Huang, R., Kreutzer, L., Ban, K., Champagne, B. J., Steele, S. R., Johnson, J. K.,
Holl, J. L., Greenberg, C. C., & Bilimoria, K. Y. (2020). Association between surgeon technical skills
and patient outcomes. JAMA Surgery, 155(10), 960.
https://doi.org/10.1001/jamasurg.2020.3007
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive
Science, 12(2), 257-285. https://doi.org/10.1207/s15516709cog1202_4
Takagi, K., Hata, N., Kimura, J., Kikuchi, S., Noma, K., Yasui, K., Fuji, T., Yoshida, R., Umeda,
Y., Yagi, T., & Fujiwara, T. (2023). Impact of educational video on performance in robotic
simulation training (TAKUMI-1): A randomized controlled trial. Journal of Robotic Surgery, 17(4),
1547-1553. https://doi.org/10.1007/s11701-023-01556-4
Tellez, J. C., Radi, I., Alterio, R. E., Nagaraj, M. B., Baker, H. B., Scott, D. J., Zeh, H. J., &
Polanco, P. M. (2024). Proficiency levels and validity evidence for scoring metrics for a virtual
reality and inanimate robotic surgery simulation curriculum. Journal of Surgical Education, 81(4),
589-596. https://doi.org/10.1016/j.jsurg.2024.01.004
Thompson, N., & Pascal, J. (2012). Developing critically reflective practice. Reflective Practice,
13(2), 311-325. https://doi.org/10.1080/14623943.2012.657795
Timmins, F., Murphy, M., Howe, R., & Dennehy, C. (2013). “I hate Gibb’s reflective cycle 1998”
(Facebook©2009): Registered nurses’ experiences of supporting nursing students’ reflective
practice in the context of student’s public commentary. Procedia - Social and Behavioral
Sciences, 93, 1371-1375. https://doi.org/10.1016/j.sbspro.2013.10.046
Tommaselli, G. A., Sehat, A. J., Ricketts, C. D., Clymer, J. W., & Grange, P. (2022). Value of the crowd-sourced assessment of technical skills (C-SATS) platform in surgical procedures: A systematic
review of evidence. Surgical Research, 4(2). https://doi.org/10.33425/2689-1093.1047
Tonni, I., Mora, L., & Oliver, R. G. (2016). Postgraduate orthodontics students’ and mentors’
perceptions of portfolios and discussion as tools for development of reflection. Journal of Dental
Education, 80(9), 1098-1108. https://doi.org/10.1002/j.0022-0337.2016.80.9.tb06192.x
Tripp, T., & Rich, P. (2012). Using video to analyze one’s own teaching: Video self-analysis.
British Journal of Educational Technology, 43(4), 678-704. https://doi.org/10.1111/j.1467-
8535.2011.01234.x
Trumbo, S. P. (2017). Reflection fatigue among medical students. Academic Medicine, 92(4),
433-434. https://doi.org/10.1097/ACM.0000000000001609
Truykov, L. (2023). Medical students’ honesty in summative reflective writing: A rapid review.
The Clinical Teacher, e13649. https://doi.org/10.1111/tct.13649
U.S. Food & Drug Administration. (2023, November 15). What we do. FDA. https://www.fda.gov/about-fda/what-we-do
Van De Graaf, F. W., Eryigit, Ö., & Lange, J. F. (2021). Current perspectives on video and audio
recording inside the surgical operating room: Results of a cross-disciplinary survey. Updates in
Surgery, 73(5), 2001-2007. https://doi.org/10.1007/s13304-020-00902-7
Van Der Leun, J. A., Siem, G., Meijer, R. P., & Brinkman, W. M. (2022). Improving robotic
skills by video review. Journal of Endourology, 36(8), 1126-1135.
https://doi.org/10.1089/end.2021.0740
Varban, O. A., Thumma, J. R., Carlin, A. M., Ghaferi, A. A., Dimick, J. B., & Finks, J. F. (2022).
Evaluating the impact of surgeon self-awareness by comparing self versus peer ratings of
surgical skill and outcomes for bariatric surgery. Annals of Surgery, 276(1), 128-132.
https://doi.org/10.1097/SLA.0000000000004450
Vassiliou, M. C., Feldman, L. S., Andrew, C. G., Bergman, S., Leffondré, K., Stanbridge, D., & Fried, G. M.
(2005). A global assessment tool for evaluation of intraoperative laparoscopic skills. The
American Journal of Surgery, 190(1), 107-113. https://doi.org/10.1016/j.amjsurg.2005.04.004
Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four
longitudinal field studies. Management Science, 46(2), 186-204.
https://doi.org/10.1287/mnsc.46.2.186.11926
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478. https://doi.org/10.2307/30036540
Verkuyl, M., Lapum, J. L., Hughes, M., McCulloch, T., Liu, L., Mastrilli, P., Romaniuk, D., &
Betts, L. (2018). Virtual gaming simulation: Exploring self-debriefing, virtual debriefing, and in-
person debriefing. Clinical Simulation in Nursing, 20, 7-14.
https://doi.org/10.1016/j.ecns.2018.04.006
Vyasa, P., Willis, R. E., Dunkin, B. J., & Gardner, A. K. (2017). Are general surgery residents
accurate assessors of their own flexible endoscopy skills? Journal of Surgical Education, 74(1),
23-29. https://doi.org/10.1016/j.jsurg.2016.06.018
Walker, S. G., Mattson, S. L., & Sellers, T. P. (2020). Increasing accuracy of rock-climbing
techniques in novice athletes using expert modeling and video feedback. Journal of Applied
Behavior Analysis, 53(4), 2260-2270. https://doi.org/10.1002/jaba.694
Wang, P. Z. T., Xie, W. Y., Nair, S., Dave, S., Shatzer, J., & Chahine, S. (2020). A comparison
of guided video reflection versus self-regulated learning to teach knot tying to medical students:
A pilot randomized controlled trial. Journal of Surgical Education, 77(4), 805-816.
https://doi.org/10.1016/j.jsurg.2020.02.014
Wee, I. J. Y., Kuo, L., & Ngu, J. C. (2020). A systematic review of the true benefit of robotic
surgery: Ergonomics. The International Journal of Medical Robotics and Computer Assisted
Surgery, 16(4). https://doi.org/10.1002/rcs.2113
Weis, J. J., Wilson, E., Tellez, J., & Scott, D. (2023). Diffusion of innovation: A 10 year review of
the adoption of robotics in fellowship training [Preprint]. In Review.
https://doi.org/10.21203/rs.3.rs-3064131/v1
Willson, V. L., & Putnam, R. R. (1982). A meta-analysis of pretest sensitization effects in experimental
design. American Educational Research Journal, 19(2), 249-258.
https://doi.org/10.3102/00028312019002249
Wolcott, M. D., McLaughlin, J. E., Hann, A., Miklavec, A., Beck Dallaghan, G. L., Rhoney, D.
H., & Zomorodi, M. (2021). A review to characterise and map the growth mindset theory in
health professions education. Medical Education, 55(4), 430-440.
https://doi.org/10.1111/medu.14381
Wong, S. W., & Crowe, P. (2022). Factors affecting the learning curve in robotic colorectal
surgery. Journal of Robotic Surgery, 16(6), 1249-1256. https://doi.org/10.1007/s11701-022-01373-1
Woods, M. S., Liberman, J. N., Rui, P., Wiggins, E., White, J., Ramshaw, B., & Stulberg, J. J.
(2023). Association between surgical technical skills and clinical outcomes: A systematic
literature review and meta-analysis. JSLS: Journal of the Society of Laparoscopic & Robotic
Surgeons, 27(1), e2022.00076. https://doi.org/10.4293/JSLS.2022.00076
Yang, K., Perez, M., Hubert, N., Hossu, G., Perrenot, C., & Hubert, J. (2017). Effectiveness of
an integrated video recording and replaying system in robotic surgical training. Annals of
Surgery, 265(3), 521-526. https://doi.org/10.1097/SLA.0000000000001699
Yang, X., König, J., & Kaiser, G. (2021). Growth of professional noticing of mathematics
teachers: A comparative study of Chinese teachers noticing with different teaching experiences.
ZDM Mathematics Education, 53(1), 29-42. https://doi.org/10.1007/s11858-020-01217-y
Yelle, L. E. (1979). The learning curve: Historical review and comprehensive survey. Decision Sciences,
10(2), 302-328. https://doi.org/10.1111/j.1540-5915.1979.tb00026.x
Youssef, S. C., Aydin, A., Canning, A., Khan, N., Ahmed, K., & Dasgupta, P. (2023). Learning
surgical skills through video-based education: A systematic review. Surgical Innovation, 30(2),
220-238. https://doi.org/10.1177/15533506221120146
Zhang, H., Mörelius, E., Goh, S. H. L., & Wang, W. (2019). Effectiveness of video-assisted
debriefing in simulation-based health professions education: A systematic review of quantitative
evidence. Nurse Educator, 44(3), E1-E6. https://doi.org/10.1097/NNE.0000000000000562
Zhao, B., Hollandsworth, H. M., Lee, A. M., Lam, J., Lopez, N. E., Abbadessa, B., Eisenstein,
S., Cosman, B. C., Ramamoorthy, S. L., & Parry, L. A. (2020). Making the jump: A qualitative
analysis on the transition from bedside assistant to console surgeon in robotic surgery training.
Journal of Surgical Education, 77(2), 461-471. https://doi.org/10.1016/j.jsurg.2019.09.015
Zhao, B., Lam, J., Hollandsworth, H. M., Lee, A. M., Lopez, N. E., Abbadessa, B., Eisenstein, S., Cosman, B.
C., Ramamoorthy, S. L., & Parry, L. A. (2020). General surgery training in the era of robotic
surgery: A qualitative analysis of perceptions from resident and attending surgeons. Surgical
Endoscopy, 34(4), 1712-1721. https://doi.org/10.1007/s00464-019-06954-0