Handbook of Robotic and Image-Guided Surgery [1st ed.] 9780128142462

Handbook of Robotic and Image-Guided Surgery provides state-of-the-art systems and methods for robotic and computer-assisted surgeries.


English · 724 pages · 2019


Table of contents:
Cover......Page 1
Handbook of Robotic and Image-Guided Surgery......Page 3
Copyright......Page 4
Dedication......Page 5
About the Book......Page 6
Visual-Info......Page 7
Foreword......Page 8
References......Page 9
Foreword......Page 11
About the Editor......Page 13
Acknowledgments......Page 14
Organs Directory......Page 15
List of Contributors......Page 16
1 Senhance Surgical System: Robotic-Assisted Digital Laparoscopy for Abdominal, Pelvic, and Thoracoscopic Procedures......Page 20
1.2 Robotic-assisted digital laparoscopy......Page 21
1.2.1 System components......Page 22
1.2.1.1 Patient positioning......Page 23
1.2.1.3 Eye sensing......Page 24
1.2.2 Indications......Page 25
1.3.1 Training......Page 27
1.3.2 Procedure planning......Page 28
1.4.1.1 Monolateral ovarian cyst removal......Page 29
1.4.1.4 Hysterectomy in obese patients......Page 30
1.5 Cost considerations......Page 31
References......Page 32
2 A Technical Overview of the CyberKnife System......Page 34
2.1 Introduction......Page 35
2.2 System overview......Page 36
2.3.1.1 Treatment manipulator......Page 39
2.3.1.2 Coordinate systems and treatment workspace calibration......Page 40
2.3.1.3 Treatment paths and node properties......Page 41
2.3.1.5 Xchange table and tool mounting calibration......Page 42
2.3.1.6 RoboCouch......Page 43
2.3.2 Treatment head......Page 44
6D skull tracking......Page 46
Xsight lung tracking system......Page 47
2.3.4.2 Real-time respiratory motion tracking......Page 48
2.3.5.2 Automated image segmentation......Page 50
2.3.5.3 Retreatment......Page 51
2.3.6.1 Dose calculation algorithms......Page 52
2.3.6.2 Dose optimization algorithms......Page 53
2.3.7 Data management and connectivity systems......Page 54
2.4 Summary......Page 55
References......Page 56
3 The da Vinci Surgical System......Page 58
3.2 The intuitive surgical timeline......Page 59
3.3 Basic principles and design of the da Vinci Surgical System......Page 60
3.4.1 Fluorescence imaging......Page 64
3.4.2 Tomographic imaging......Page 65
3.5.1 Stapler......Page 66
3.5.3 Integrated table motion......Page 67
3.6.1 da Vinci SP system......Page 69
3.7 Technology training......Page 70
3.8 Clinical adoption......Page 71
3.8.2 Publications......Page 72
References......Page 73
4 The FreeHand System......Page 75
4.3 Development and iterations of FreeHand......Page 76
4.6 Operative use......Page 80
4.7 Experience with FreeHand......Page 85
4.7.1 Advantages of FreeHand......Page 90
4.7.2 Disadvantages of FreeHand......Page 91
4.8 Discussion......Page 93
References......Page 96
5 Solo Surgery With VIKY: Safe, Simple, and Low-Cost Robotic Surgery......Page 97
5.2 System overview......Page 98
5.3.3 Driver (ring and motor set)......Page 99
5.4 Advantages and disadvantages of VIKY-assisted surgery......Page 101
5.5 Current clinical applications and data......Page 104
References......Page 105
6 Clinical Application of Soloassist, a Joystick-Guided Robotic Scope Holder, in General Surgery......Page 107
6.3.1 Structure......Page 108
6.3.2 Joystick......Page 109
6.4 Installation—specifics about each operation......Page 113
6.4.1 Laparoscopic appendectomy (Fig. 6.11A and B) (Video, see online for video)......Page 114
6.4.2 Laparoscopic inguinal hernia repair (right side) (Fig. 6.12A–C) (Video, see online for video)......Page 115
6.4.3 Laparoscopic cholecystectomy (multiport) (Fig. 6.13A–C) (Video, see online for video)......Page 116
6.4.5 Laparoscopic distal gastrectomy (Fig. 6.15A and B) (Video, see online for video)......Page 117
6.4.6 Laparoscopic colectomy (right-side colon) (Fig. 6.16A and B) (Video, see online for video)......Page 118
6.4.8 Laparoscopic rectal resection and five-port left-side colectomy (Fig. 6.18A and B) (Video, see online for video)......Page 119
6.4.10 Thoracoscopic esophageal resection (Fig. 6.19A–C) (Video, see online for video)......Page 120
6.5 Clinical experience and discussion......Page 121
References......Page 122
7 The Sina Robotic Telesurgery System......Page 124
7.1 Background......Page 125
7.2 System overview......Page 126
7.2.1 Sinastraight......Page 127
7.2.2 Sinaflex......Page 130
7.3 Challenges and future directions......Page 136
References......Page 137
8 STRAS: A Modular and Flexible Telemanipulated Robotic Device for Intraluminal Surgery......Page 139
8.2 Recent technical advances in intraluminal surgery......Page 140
8.3.1.1 The Anubiscope platform......Page 142
8.3.2.1 Rationale for robotization......Page 143
8.3.2.3 Modules......Page 144
8.3.3 Features of the slave system......Page 147
8.3.4 Control of the robot by the users......Page 149
8.3.4.3 Control of the main endoscope......Page 150
8.3.5 Control and software architecture......Page 152
8.3.6 Robot calibration and working modes......Page 153
8.4.1 Workflow of single port and transluminal robotic assistant for surgeons use for intraluminal surgery......Page 154
8.4.1.1 Change of instruments......Page 155
8.4.2 Feasibility and interest......Page 156
8.5 Current developments and future work......Page 157
8.6 Conclusion......Page 159
References......Page 160
9 Implementation of Novel Robotic Systems in Colorectal Surgery......Page 163
9.1.2 Introduction of robotics......Page 164
9.2.1 Visualization......Page 165
9.3.1 Preoperative course......Page 169
9.3.3 Total mesorectal excision......Page 170
9.4.1 Bending of the scope......Page 171
9.5.1 Single-port designs......Page 172
References......Page 173
10 The Use of Robotics in Colorectal Surgery......Page 175
10.2 Challenges with open and laparoscopic surgery......Page 176
10.3 Robotic surgery experience......Page 177
10.4 Patient selection and evaluation......Page 178
10.6 Operative setup......Page 179
10.7 Surgical technique......Page 181
10.8 Discussion......Page 182
References......Page 184
11 Robotic Radical Prostatectomy for Prostate Cancer: Natural Evolution of Surgery for Prostate Cancer?......Page 187
11.1 Robotic surgical anatomy of the prostate......Page 188
11.2.1 Preoperative imaging modality for prostate cancer......Page 189
11.2.2 Preoperative clinical assessment......Page 190
11.2.3 Anesthesiological considerations......Page 191
11.2.4 Da Vinci robot and its docking......Page 192
11.2.4.1 The Da Vinci robot Xi......Page 193
11.3.1 Extraperitoneal approach......Page 195
11.3.3 Retzius-sparing approach......Page 199
11.4.3 Anterograde intrafascial dissection......Page 200
11.5 Complications......Page 204
References......Page 207
12 Robotic Liver Surgery: Shortcomings of the Status Quo......Page 209
12.1 Introduction: the development of robotic-assisted minimally invasive liver surgery......Page 210
12.2.2 Disadvantages of robotic liver surgery......Page 212
12.3 Patient selection and preoperative preparation......Page 213
12.4.1 Operative setup......Page 214
12.4.2 Surgical technique......Page 215
12.5.1 Operative setup......Page 216
12.5.2.3 Transection of the liver......Page 217
12.6.2 Surgical technique......Page 218
12.7 Extreme robotic liver surgery: robotic surgery and liver transplantation......Page 219
12.8 Cybernetic surgery: augmented reality in robotic liver surgery......Page 220
12.9 The financial impact of the robotic system in liver surgery: is the robot cost prohibitive?......Page 222
References......Page 223
Further reading......Page 226
13 Clinical Applications of Robotics in General Surgery......Page 227
13.1 Utilization of robotics in general surgery......Page 228
13.2.1 Procedure background......Page 229
13.2.3 Robotic sleeve gastrectomy......Page 230
13.3.2 Robotic ventral hernia repair......Page 231
13.3.4 Robotic inguinal hernia repair......Page 232
13.4.2 Robotic Nissen fundoplication......Page 233
13.5.2 Robotic colon surgery......Page 234
13.7 Conclusion......Page 235
References......Page 236
14 Enhanced Vision to Improve Safety in Robotic Surgery......Page 238
14.1 Introduction......Page 239
14.3.1 Semiautomatic preoperative identification......Page 240
14.3.1.1 Registration......Page 241
14.4.1 Semantic segmentation......Page 242
14.4.2 Surgical scene reconstruction......Page 243
14.4.3 Tissue tracking......Page 245
14.5.1 Augmented reality visualization......Page 246
14.6 Application in abdominal surgery: Enhanced Vision System for Robotic Surgery system......Page 247
References......Page 249
15 Haptics in Surgical Robots......Page 253
15.1.1 Fundamentals of haptics......Page 254
15.1.2 Surgery and haptics......Page 255
15.1.3 Tele-operated surgical robot systems......Page 256
15.2.1 The surgical robotics landscape......Page 257
15.2.2 Commercial surgical robot systems......Page 259
15.2.2.1 General surgery: Senhance......Page 260
15.2.2.3 General surgery: Medtronic MiroSurge......Page 261
15.2.2.5 Endovascular: sensei......Page 262
15.2.3.3 Neurosurgery......Page 263
15.2.4 Emerging surgical needs......Page 264
15.3.1 Sensing systems......Page 265
15.3.2 Haptic feedback systems......Page 267
15.3.3 Human interaction......Page 269
15.4 Future perspectives......Page 270
References......Page 272
16 S-Surge: A Portable Surgical Robot Based on a Novel Mechanism With Force-Sensing Capability for Robotic Surgery......Page 278
16.1 Introduction......Page 279
16.2 Overview of the surgical robot......Page 280
16.3.1 Kinematic analysis......Page 281
16.3.2 Workspace optimization......Page 282
16.3.2.1 Jacobian analysis......Page 284
16.4 Sensorized surgical instrument......Page 286
16.5.1 Surgical manipulator......Page 287
16.5.2 Sensorized surgical instrument......Page 289
16.5.3 Entire surgical robot: S-surge......Page 290
16.6.1 Experimental environment......Page 291
16.6.2 Experimental results......Page 292
References......Page 295
17 Center for Advanced Surgical and Interventional Technology Multimodal Haptic Feedback for Robotic Surgery......Page 297
17.2 Feedback modalities......Page 298
17.3.1 Sensing technology......Page 299
17.3.2 Actuation and feedback technology......Page 300
17.4.2 Sensory unit......Page 301
17.4.3 Signal processing unit......Page 304
17.4.5.1 Reduction in grip forces......Page 306
17.4.5.2 Visual–perceptual mismatch......Page 309
17.4.5.4 Knot tying......Page 310
References......Page 312
18 Applications of Flexible Robots in Endoscopic Surgery......Page 314
18.2 Technical challenges in current endoscopic surgery using manual tools......Page 315
18.3.1 Purely mechanical endoscopic robots......Page 316
18.3.2 Motorized endoscopic robots......Page 318
18.4 Advantages of flexible robots in the application of endoscopic surgery......Page 320
18.5 Basic coordinate system and kinematic mapping of a continuum manipulator......Page 321
18.5.1 Manipulator-specific mapping......Page 322
18.5.2 Manipulator-independent mapping......Page 325
18.5.3 Drawback of a typical coordinate system......Page 327
18.6 Experimental results from several successfully developed endoscopic surgical robots......Page 329
References......Page 332
19 Smart Composites and Hybrid Soft-Foldable Technologies for Minimally Invasive Surgical Robots......Page 334
19.1.1 Urology......Page 335
19.1.2 Gastroenterology......Page 336
19.1.3 Proposed robotic platforms......Page 337
19.2.2 Robotic catheter: design, materials, and manufacturing......Page 338
19.3.1 Clinical motivation......Page 341
19.3.2 Endoscopic arm: design, materials, and manufacturing......Page 342
19.4 Conclusion and future work......Page 347
References......Page 349
20 Robotic-Assisted Percutaneous Coronary Intervention......Page 352
20.1.1 Percutaneous coronary intervention......Page 353
20.1.2 Robotic-assisted percutaneous coronary intervention......Page 354
20.2.2 Articulated arm......Page 355
20.2.3 Robotic drive and cassette......Page 356
20.2.4 Control console......Page 358
20.3.2 Robotic procedure......Page 359
20.3.3 Safety considerations......Page 360
20.4.1.1 Denavit–Hartenberg method......Page 361
20.4.1.2 Forward kinematics formulation of the arm......Page 363
20.4.2.2 Inverse kinematics formulation......Page 365
20.5.1 Permanent magnet synchronous motor model......Page 368
20.5.2 Direct quadrature control architecture for permanent magnet synchronous motors......Page 369
20.5.3 Quadrature current control of brushless linear DC motors......Page 371
References......Page 372
21 Image-Guided Motion Compensation for Robotic-Assisted Beating Heart Surgery......Page 374
21.2 Background......Page 375
21.3 Image stabilization......Page 376
21.4 Strip-wise affine map......Page 377
21.5 Shared control......Page 378
21.5.3 Haptic assistance......Page 379
21.6.1 Robotic system description......Page 380
21.6.2 Graphics system description......Page 381
21.7 Simulation experiments......Page 382
21.8 Conclusion......Page 384
References......Page 385
22 Sunram 5: A Magnetic Resonance-Safe Robotic System for Breast Biopsy, Driven by Pneumatic Stepper Motors......Page 386
22.1.3 Actuation methods for magnetic resonance-safe/conditional robots......Page 387
22.1.4.1 Pneumatic magnetic resonance imaging robots by Stoianovici, Bomers, and Sajima......Page 388
22.1.4.2 Stormram 1–4 and Sunram 5......Page 390
22.2.1 Rectangular cross-sectional shape......Page 391
22.2.3 Design of the single-acting cylinder......Page 392
22.3 Stepper motors......Page 393
22.3.1 Design of the two-cylinder stepper motor......Page 394
22.3.3 Dual-speed stepper motor......Page 395
22.4.1 Kinematic configuration......Page 397
22.4.2 Mechanical design of Sunram 5......Page 398
22.5 Control of pneumatic devices......Page 400
22.6.1 Stepper motor force......Page 402
22.6.2 Stepping frequency......Page 403
22.6.4 Stormram 4 evaluation......Page 404
References......Page 406
23 New Advances in Robotic Surgery in Hip and Knee Replacement......Page 408
23.2.2 Patellofemoral arthroplasty......Page 409
23.3.1 Unicompartmental knee arthroplasty......Page 410
23.3.2 Total knee arthroplasty......Page 411
23.5 Preoperative preparation......Page 412
23.6 Operative setup......Page 414
23.7 Surgical technique......Page 415
23.8 Future directions......Page 416
References......Page 419
Further reading......Page 421
24 Intellijoint HIP: A 3D Minioptical, Patient-Mounted, Sterile Field Localization System for Orthopedic Procedures......Page 422
24.1 Background......Page 423
24.2.1 System overview......Page 425
24.2.2 Camera......Page 426
24.2.3 Software framework......Page 428
24.3 Minioptical system calibration......Page 429
24.4.2 Other applications......Page 431
24.5 Accuracy performance......Page 432
References......Page 434
25 More Than 20 Years Navigation of Knee Surgery With the Orthopilot Device......Page 435
25.1 Introduction......Page 436
25.2 The Orthopilot device......Page 437
25.3 Operative procedures: total knee arthroplasty......Page 438
25.3.2 Navigation of the bone cuts......Page 439
25.3.4 Rotation of the femoral implant......Page 440
25.3.6 Implanting the final prosthesis......Page 441
25.4.1 High tibial opening wedge osteotomy......Page 442
25.5 Osteotomy for genu valgum deformity......Page 443
25.6 Uni knee arthroplasty......Page 444
25.7 Uni knee arthroplasty to total knee arthroplasty revision......Page 446
25.8.2 Uni knee arthroplasty and revision to total knee arthroplasty......Page 448
25.9 Discussion......Page 449
25.10 Conclusion......Page 450
References......Page 451
26 NAVIO Surgical System—Handheld Robotics......Page 452
26.2 The NAVIO surgical workflow......Page 453
26.2.1.1 Bone tracking hardware......Page 454
26.2.2 Registration—image-free technology......Page 455
26.2.3 Prosthesis planning......Page 457
26.2.4 Robotic-assisted bone cutting......Page 461
26.2.5 Trial reduction......Page 463
26.2.6 Cement and close......Page 465
Further reading......Page 466
27 Development of an Active Soft-Tissue Balancing System for Robotic-Assisted Total Knee Arthroplasty......Page 467
27.2.1 System overview......Page 468
27.2.2 BoneMorphing/shape modeling......Page 469
27.2.3 OMNIBot miniature robotic cutting guide......Page 472
27.2.5 Initial prototype design requirements......Page 473
27.2.6 Proof of concept......Page 474
27.2.8 Verification, validation, and regulatory clearances......Page 475
27.2.9 Surgical workflow......Page 478
27.2.10 Cadaver labs and clinical results......Page 479
References......Page 480
28 Unicompartmental Knee Replacement Utilizing Robotics......Page 482
28.2.1 Limb alignment/component positioning......Page 483
28.3 Robotic surgery experience......Page 484
28.4.1 Indications for use—RESTORIS partial knee application......Page 485
28.4.2.2 Securing the leg and IMP De Mayo knee positioner......Page 486
28.4.2.5 Array assembly (femur and tibia)......Page 488
28.4.2.8 Patient time out page......Page 489
28.4.3.4 Registration verification—Mako product specialist/surgeon......Page 490
Main window......Page 492
28.4.4.3 Visualization and stereotactic boundaries......Page 493
28.4.4.5 CT view......Page 494
28.5 Discussion......Page 495
References......Page 498
Further reading......Page 499
29 Robotic and Image-Guided Knee Arthroscopy......Page 500
29.1 Introduction......Page 501
29.2.1 Why steerable robotic tools are necessary for arthroscopy......Page 502
29.2.2 Mechanical design......Page 503
29.2.5 Sensing......Page 505
29.2.6 Evaluation......Page 506
29.3.1 Leg manipulation systems......Page 507
29.4.1 Complementary metal-oxide semiconductor sensors for knee arthroscopy......Page 508
29.4.2 Emerging sensor technology for medical robotics......Page 510
29.4.3.1 Validation of stereo imaging in knee arthroscopy......Page 511
29.5.2.1 Automatic and semiautomatic segmentation and tracking......Page 513
29.5.2.3 Ultrasound-guided robotic procedures......Page 514
29.5.3 Ultrasound guidance and tissue characterization for knee arthroscopy......Page 515
29.6 Toward a fully autonomous robotic and image-guided system for intraarticular arthroscopy......Page 516
29.6.3 Vision-guided operation with steerable robotic tools......Page 517
29.8 Conclusion......Page 518
References......Page 519
30 Robossis: Orthopedic Surgical Robot......Page 522
30.2 Robot structure......Page 523
30.3.3 Singularity effects on actuator forces and torques......Page 526
30.4.1 Trajectory tracking......Page 530
30.4.2 Surgical workspace......Page 531
30.4.3 Force testing......Page 532
References......Page 534
31 EOS Imaging: Low-Dose Imaging and Three-Dimensional Value Along the Entire Patient Care Pathway for Spine and Lower Limb.........Page 536
31.2.1 System description......Page 537
31.2.2 Benefits of slot-scanning weight-bearing technology......Page 538
31.3.1 Modeling technology......Page 540
31.3.3 Lower limbs......Page 542
31.4.1 spineEOS......Page 545
31.4.2 hipEOS......Page 549
31.4.3 kneeEOS......Page 552
31.5 Conclusion......Page 554
References......Page 555
32 Machine-Vision Image-Guided Surgery for Spinal and Cranial Procedures......Page 557
32.1.2 Evolution of image-guided surgery system......Page 558
32.1.3.2 Intraoperative three-dimensional image-guided surgery systems......Page 559
32.1.4 Preoperative image-guided surgery systems......Page 560
32.2 Motivation and benefits of the Machine-vision Image-Guided Surgery system......Page 561
32.2.2 Extended surgical time due to workflow disruptions......Page 562
32.2.4 Requiring nonsterile user assistance......Page 563
32.2.6 Large device footprint......Page 564
32.3.2 The Machine-vision Image-Guided Surgery system workflow......Page 565
32.3.3 Flash Registration......Page 567
32.4.1 Revision instrumented posterior lumbar fusion L3–L5......Page 568
32.4.2 Revision instrumented posterior lumbar fusion L4–S1......Page 569
32.4.4 Cervical fusion......Page 572
32.4.5 Left temporal open biopsy......Page 575
32.5.1 Multilevel registration for spine deformity procedures......Page 577
32.6 Conclusion......Page 578
References......Page 579
33 Three-Dimensional Image-Guided Techniques for Minimally Invasive Surgery......Page 581
33.2.1.1 Three-dimensional image acquisition......Page 582
33.2.2.1 Augmented reality–based three-dimensional image-guided techniques......Page 583
33.2.2.2 Three-dimensional integral videography image overlay for image guidance......Page 584
33.3.1 Intraoperative patient–three-dimensional image registration......Page 585
33.4.1 Three-dimensional image–guided planning and operation......Page 586
33.4.2 Robot-assisted operation......Page 587
33.4.3 Integration of diagnosis and treatment in minimally invasive surgery......Page 588
References......Page 589
34 Prospective Techniques for Magnetic Resonance Imaging–Guided Robot-Assisted Stereotactic Neurosurgery......Page 591
34.2 Clinical motivations for magnetic resonance imaging–guided robotic stereotaxy......Page 592
34.3 Significant platforms for magnetic resonance imaging–guided stereotactic neurosurgery......Page 593
34.4 Key enabling technologies for magnetic resonance imaging–guided robotic systems......Page 595
34.4.1 Nonrigid image registration......Page 597
34.4.2 Magnetic resonance–based tracking......Page 599
34.4.3 Magnetic resonance imaging–compatible actuation......Page 600
References......Page 601
35 RONNA G4—Robotic Neuronavigation: A Novel Robotic Navigation Device for Stereotactic Neurosurgery......Page 605
35.2.1 Historical development of the RONNA system......Page 606
35.2.2 RONNA G4 system—the fourth generation......Page 607
35.3 RONNA surgical workflow......Page 609
35.4 Automatic patient localization and registration......Page 610
35.4.1 Robotic navigation and point-pair correspondence......Page 612
35.4.2.1 Automatic localization in image space......Page 614
35.5 Optimal robot positioning with respect to the patient......Page 615
35.5.1 Dexterity evaluation......Page 616
35.5.2 RONNA reachability maps......Page 617
35.5.3 Single robot position planning algorithm......Page 618
35.5.6 Robot localization strategies......Page 619
35.6 Autonomous robotic bone drilling......Page 620
35.6.2 Force controller......Page 621
35.7 Error analysis of a neurosurgical robotic system......Page 622
35.7.1.1 Kinematic model......Page 624
35.7.1.2 Measurement setup......Page 626
35.7.1.4 Validation......Page 627
References......Page 628
36 Robotic Retinal Surgery......Page 632
36.1.1 Human factors and technical challenges......Page 633
36.1.2 Motivation for robotic technology......Page 634
36.1.4 Models used for replicating the anatomy......Page 636
36.2.1.1 Stereo microscope......Page 637
36.2.1.3 Light sources......Page 638
36.2.2 Real-time optical coherence tomography for retinal surgery......Page 639
36.2.3 Principle of Fourier domain optical coherence tomography......Page 641
36.2.3.1 Axial resolution of spectral-domain optical coherence tomography......Page 642
36.2.3.3 Imaging depth of spectral-domain optical coherence tomography......Page 643
36.2.4 High-speed optical coherence tomography using graphics processing units processing......Page 644
36.3 Advanced instrumentation......Page 645
36.3.1.1 Retinal interaction forces......Page 646
36.3.1.3 Force gradients......Page 648
36.3.3 Impedance sensing......Page 649
36.3.4 Dexterous instruments......Page 650
36.4.1 Mosaicing......Page 651
36.4.2 Subsurface imaging......Page 652
36.4.5 Tool tracking......Page 653
36.4.6 Auditory augmentation......Page 654
36.5.1.1 Electric-motor actuation: impedance-type versus admittance-type......Page 655
36.5.1.2 Piezoelectric actuation......Page 656
36.5.1.3 Remote-center-of-motion mechanisms......Page 658
36.5.3 Cooperative-control systems......Page 659
36.5.4 Teleoperated systems......Page 660
36.5.7 General considerations with respect to safety and usability......Page 661
36.6.1 Closed-loop control for handheld systems......Page 662
36.6.2.1 Robot control algorithms based on tool-tip force information......Page 663
36.6.3 Closed-loop control for teleoperated systems......Page 664
36.7.1 Image-guidance based on video......Page 665
36.8 Conclusion and future work......Page 667
36.8.3 Novel therapy delivery methods......Page 668
References......Page 669
37 Ventilation Tube Applicator: A Revolutionary Office-Based Solution for the Treatment of Otitis Media With Effusion......Page 678
37.1.1 Objectives......Page 679
37.1.2.2 Operation time......Page 680
37.2.1 Mechanical structure......Page 681
37.2.2.1 Tool set......Page 682
Stress and deformation analysis......Page 683
37.2.2.2 Mechanism for cutter retraction......Page 684
37.3.1 Working process......Page 685
37.3.2 Force-based supervisory controller......Page 686
37.4 Motion control system......Page 687
37.4.1.1 System description of the ultrasonic motor stage......Page 688
Nonlinear term......Page 689
37.4.2.1 LQR-assisted PID controller......Page 690
37.4.2.2 Nonlinear compensation......Page 692
37.5.1 Experimental setup......Page 694
37.6 Conclusion......Page 695
References......Page 696
38 ACTORS: Adaptive and Compliant Transoral Robotic Surgery With Flexible Manipulators and Intelligent Guidance......Page 698
38.2.1 Clinical requirements......Page 699
38.2.3 Flexible parallel manipulators......Page 700
38.2.3.2 Parallel mechanism......Page 701
38.2.3.3 Motion transmission......Page 702
38.3.1 Performance of the manipulators......Page 703
38.3.2 Cadaveric trial with the manipulators......Page 704
38.4 Conclusion......Page 705
References......Page 706
Index......Page 707
Back Cover......Page 724


Handbook of Robotic and Image-Guided Surgery

Edited by
Mohammad H. Abedin-Nasab
Rowan University, Glassboro, New Jersey, United States
Robossis, Glassboro, New Jersey, United States

Elsevier
Radarweg 29, PO Box 211, 1000 AE Amsterdam, Netherlands
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States

Copyright © 2020 Elsevier Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions. This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility. To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress

ISBN: 978-0-12-814245-5

For information on all Elsevier publications visit our website at https://www.elsevier.com/books-and-journals

Publisher: Mara Conner
Acquisition Editor: Fiona Geraghty
Editorial Project Manager: Joshua Mearns
Production Project Manager: Sruthi Satheesh
Cover Designer: Mark Rogers

Typeset by MPS Limited, Chennai, India

Dedication

To my parents, my wife, and my children.

About the Book

Handbook of Robotic and Image-Guided Surgery provides state-of-the-art systems and methods for robotic and computer-assisted surgeries. In this masterpiece, contributions from 169 researchers from 19 countries have been gathered to provide 38 chapters. This handbook consists of over 744 pages, including 659 figures and 61 videos. It also provides basic medical knowledge for engineers and basic engineering principles for surgeons. A key strength of this text is the fusion of engineering, radiology, and surgical principles into one book, as follows:

- a thorough and in-depth handbook on surgical robotics and image-guided surgery which includes both fundamentals and advances in the field;
- a comprehensive reference on robot-assisted laparoscopic, orthopedic, and head-and-neck surgeries;
- chapters contributed by worldwide experts from both engineering and surgical backgrounds.


Foreword

Russell H. Taylor
Johns Hopkins University, Baltimore, MD, United States

Computer-integrated systems are beginning to have an impact on surgical care comparable to that of computer-integrated systems in other sectors of our society. By combining human judgment with the capabilities of robotic and imaging technology and computer information processing, these systems can transcend human limitations to improve the precision, safety, and consistency of surgical interventions, while reducing their invasiveness. In addition to enabling better care for each individual patient, these systems enable the use of statistical methods combining information from many interventions to improve treatment processes for future patients. The chapters of this book provide an excellent sampling of the current state of the art in this rapidly developing field.

When I was asked to write this foreword, I found myself wondering what to say. Upon reflection, I have decided to discuss a few general themes that have emerged over the 30 or so years that I have been working in this area and to speculate a little about what a similar collection produced some years from now might look like.

I first became aware of surgical robotics in 1985, at about the time of the first use of a robot in a stereotactic brain procedure [1], when IBM Research was approached by surgeons from the University of California at Davis to see if we could develop a robot for joint replacement surgery. This led to the development of the prototype of what was eventually commercialized by Integrated Surgical Systems (now Think Surgical) as the ROBODOC system [2–4]. Concurrently, we also developed a prototype computer-integrated planning and navigation system for craniofacial osteotomies with Court Cutting at NYU Medical Center [5,6].
One key attribute of these systems was that they combined steps of image-based modeling of the patient, use of the model to plan the intervention, registration of the model and plan to the patient, and the use of technology (robots, navigational trackers, video displays) to assist the surgeon in carrying out the procedure. Since much of my previous work had been in robotics for manufacturing, it seemed natural to refer to this process as surgical Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM). Of course, there has been much progress over the past 30 years in these areas, and many of the systems reported in this book can reasonably be called surgical CAD/CAM systems.

In early 1991, I gave a talk at a SAGES meeting in Monterey, CA. This was about the time that laparoscopic surgery was rapidly displacing conventional open surgery, and I began to wonder whether robotic systems could help surgeons address some of the associated technical problems. Several other researchers (notably, Satava [7,8], Green [9], and Wang [10]) had the same insight, and this work led to the development of several early commercial telerobotic laparoscopic surgery systems [10–13]. Work at IBM focused on a system we called the “LARS” [14,15], which combined teleoperation with image guidance and stereotactic applications. Although the LARS was never commercialized, many of its features (notably, the “remote center-of-motion” design and user interface concepts) have been widely adopted. These systems emphasize the interactive nature of surgical decision-making, and it is natural to think of them as surgical assistant systems. Again, many examples of such systems may be found in the chapters of this book.


I should emphasize that the distinction between surgical CAD/CAM and surgical assistant systems is both fuzzy and arbitrary. We are really dealing with systems that enable a three-way partnership between physicians, technology, and computers to improve surgical care. Accordingly, it is perhaps better to refer to the field as computer-integrated surgery or computer-integrated interventional medicine. One very interesting aspect of this volume is an "organ directory" illustrating the rather broad scope of clinical application that the systems reported here address.

As I mentioned earlier, the chapters in this collection provide an excellent sampling of current work in this field. It is necessarily a sampling. One could easily double (or even triple) the number of chapters in attempting to provide truly complete coverage of current work. Given the very rapid progress in the field, truly comprehensive coverage will become even harder in the future. However, I believe that some trends are emerging, and it may be useful to speculate about trends that may be found in some future Encyclopedia of Computer-Integrated Surgery.

The organ directory will expand to cover essentially every part of the body, with structures at every scale, down to cellular levels. Technology will continue to advance to improve the dexterity and precision of robotic devices, which will be available at smaller and smaller scales to facilitate access inside the body. Imaging and other sensors will become increasingly integrated with tools, in order to provide direct feedback on tool-to-tissue interactions. Human–machine interfaces will continue to advance to improve ergonomics and to provide effective two-way communication between the physician and the system, creating truly immersive environments for intraoperative decision-making and shared control. However, I think that the biggest changes will be in the use of information at all phases of the treatment cycle.
Advances in this area are, of course, highly synergistic with advances in technology and human–machine interfaces. One theme will be real-time information fusion to provide comprehensive, intraoperative modeling of the patient state. Other themes may be found in the emerging discipline of surgical data science [16], including the use of statistical modeling and machine learning to relate clinical outcomes to surgical technical outcomes, together with the use of these methods for treatment planning, decision support, and training.

Another theme will be increasing levels of "autonomy" exhibited by surgical robots. In many ways, robotic systems have always exhibited some level of autonomy, in the sense that computers convert surgeon intentions into motor control signals. Further, radiation therapy machines are arguably the first "surgical robots" and are highly autonomous surgical CAD/CAM systems, and systems like ROBODOC execute preplanned tool path trajectories to prepare bones to receive implants. There have recently been several attempts (e.g., Ref. [17]) to categorize levels of autonomy for surgical robots. These can provide useful insights, and there will certainly be systems exhibiting autonomy to varying degrees. As with the distinction between surgical CAD/CAM and surgical assistance, one need not get too hung up on drawing very fine distinctions. The key issues are: (1) unambiguously expressing the surgeon's intention for what the robot is to do; and (2) executing the intended commands with great reliability and safety.

To conclude, this book reflects something of a waypoint in the evolution of surgical technology from hand tools whose manipulation relies almost exclusively on a surgeon's own senses, memory, and appreciation of the patient state to a three-way partnership between surgeons, technology, and information to enhance clinical care.
In exploring it, you may find it interesting and useful to consider how the systems reported reflect these broader themes and where future systems may build upon them.

References

[1] Kwoh YS, Hou J, Jonckheere EA, et al. A robot with improved absolute positioning accuracy for CT guided stereotactic brain surgery. IEEE Trans Biomed Eng 1988;35(2):153–61.
[2] Taylor RH, Paul HA, Kazanzides P, Mittelstadt BD, Hanson W, Zuhars JF, et al. An image-directed robotic system for precise orthopaedic surgery. IEEE Trans Robot Autom 1994;10(3):261–75.
[3] Bargar W, DiGioia A, Turner R, Taylor J, McCarthy J, Mears D. Robodoc multi-center trial: an interim report. In: Proc. 2nd Int. Symp. on medical robotics and computer assisted surgery, Baltimore, MD, November 4–7, 1995. p. 208–14.
[4] Think Surgical, <https://thinksurgical.com>; 2018.
[5] Taylor RH, Paul HA, Cutting CB, Mittelstadt B, Hanson W, Kazanzides P, et al. Augmentation of human precision in computer-integrated surgery. Innov Technol Biol Med 1992;13(4):450–9 (special issue on computer assisted surgery).
[6] Cutting CB, Bookstein FL, Taylor RH. Applications of simulation, morphometrics and robotics in craniofacial surgery. In: Taylor RH, Lavallee S, Burdea G, Mosges R, editors. Computer-integrated surgery. Cambridge, MA: MIT Press; 1996. p. 641–62.
[7] Satava R. Robotics, telepresence, and virtual reality: a critical analysis of the future of surgery. Minim Invasive Ther 1992;1:357–63.
[8] Satava RM. Surgical robotics: the early chronicles, a personal historical perspective. Surg Laparosc Endosc Percutan Tech 2002;12(1):6–16.
[9] Green P, Satava R, Hill J, Simon I. Telepresence: advanced teleoperator technology for minimally invasive surgery (abstract). Surg Endosc 1992;6:91.


[10] Sackier JM, Wang Y. Robotically assisted laparoscopic surgery. From concept to development. Surg Endosc 1994;8(1):63–6. Available from: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=8153867.
[11] Reichenspurner H, Demaino R, Mack M, Boehm D, Gulbins H, Detter C, et al. Use of the voice controlled and computer-assisted surgical system Zeus for endoscopic coronary artery surgery bypass grafting. J Thorac Cardiovasc Surg 1999;118(1):11–16.
[12] Yoshino I, Hashizume M, Shimada M, Tomikawa M, Tomiyasu M, Suemitsu R, et al. Thoracoscopic thymomectomy with the da Vinci computer-enhanced surgical system. J Thorac Cardiovasc Surg 2001;122(4):783–5.
[13] Tewari A, Peabody J, Sarle R, Balakrishnan G, Hemal A, Shrivastava A, et al. Technique of da Vinci robot-assisted anatomic radical prostatectomy. Urology 2002;60(4):569–72.
[14] Taylor R, Funda J, LaRose D, Treat M. A telerobotic system for augmentation of endoscopic surgery. In: IEEE conference on engineering in medicine and biology, Paris, October, 1992. p. 1054–6.
[15] Taylor RH, Funda J, Eldridge B, Gruben K, LaRose D, Gomory S, et al. A telerobotic assistant for laparoscopic surgery. IEEE Eng Med Biol Mag 1995;14:279–87.
[16] Maier-Hein L, Vedula SS, et al. Surgical data science for next-generation interventions. Nat Biomed Eng 2017;1:691–6.
[17] Yang G-Z, Cambias J, Cleary K, Daimler E, Drake J, Dupont PE, et al. Medical robotics—regulatory, ethical, and legal considerations for increasing levels of autonomy (editorial). Sci Robot 2017;2(4):eaam8638. doi:10.1126/scirobotics.aam8638.

Foreword

Jacques Marescaux 1,2,3

1 University of Strasbourg, Strasbourg, France
2 IHU Strasbourg, Strasbourg, France
3 IRCAD, Strasbourg, France

I am Professor of Surgery at the University of Strasbourg, Chairman of the Institute of Image-Guided Surgery (IHU Strasbourg), and President and Founder of the IRCAD (1994), a uniquely structured institute dedicated to research and training, advancing the field of surgery into the information era. Over the last 24 years, this center has gained international acclaim by training more than 40,000 surgeons from 124 countries. In 2000, I developed WeBSurg, a virtual online surgical university resulting from the need to maintain the link between the training center and surgeons. In 2001, I performed the first transcontinental laparoscopic operation, on a patient located in Strasbourg (France) while I was in New York; this is known as "Operation Lindbergh" (Nature 2001). On April 2, 2007, my team and I were the first in the world to operate on a patient without leaving any scar (Archives of Surgery 2007).

Current telemanipulators share a common architecture: a command console, where the surgeon is seated, equipped with haptic systems to remotely control the electromechanical instruments docked at the patient's side. End effectors replicate human movements. In this configuration, the procedure is properly described as computer-assisted surgery. The advantages of such an operating tool are multiple and immediately perceived: enhanced ergonomics for the operating surgeon; enhanced stability of the camera system with, in most cases, a 3D view; and precise movements, thanks to filtering that eliminates natural physiological tremor. Collectively, these assets have generated a frankly undeniable marketing argument: the telemanipulator enhances the skills of surgeons and radically shortens the learning curve of the minimally invasive surgery (MIS) approach.
As a result, this layer of intelligence placed between the surgeon's bare hand and the instruments has the potential to improve the uptake of MIS, for the benefit of a larger number of patients. This is an apt argument. However, there are drawbacks in real-life application: current systems are expensive and entail extra costs associated with consumables, maintenance, increased operating room time for docking and undocking, and increased staffing needs. While robotics will without a doubt allow the use of MIS for increasingly complex operations, there are so far no high-level evidence-based studies demonstrating a real benefit of "robotically" assisted MIS vs. standard MIS, as far as surgical outcomes and cost benefits are concerned.


This failure to demonstrate compelling advantages is easy to explain: the literature has compared robotic results with those of laparoscopic surgery performed in expert centers, whereas the main advantage of robotic surgery is to allow surgeons who do not necessarily have the required skills in conventional laparoscopic surgery to perform MIS. Moreover, many new telemanipulators have been released, including flexible endoscopic platforms. While still in a master–slave configuration, the flexible endoscopic telemanipulator can outperform the standard systems in complex endoluminal surgery, including endoscopic submucosal dissection for early-stage cancer and per-oral endoscopic myotomy. A similar consideration applies to percutaneous surgery, where energy-based tumor ablations can be greatly improved by electromechanical assistance and, in selected cases, potentially reach outcomes comparable to surgical removal.

The future of real robotic surgery clearly lies in the development of completely autonomous and intelligent devices with context awareness from multiple sensors, artificial intelligence selecting the best theranostic algorithms, and automatic task execution complemented with multimodal advanced imaging guidance, including virtual and augmented reality, fluorescence, and hyperspectral imaging. This groundbreaking product change, coupled with a think-out-of-the-box attitude in conceiving new processes and therapeutic approaches, will reveal the real sense and potential of robotic technologies. Companies that have developed commercially available telemanipulator systems should be credited for having opened the OR to mechatronics and for broadening surgeons' minds with these new and stimulating perspectives. Within the next 5 years, many new "robots" will be released onto the market.
This will facilitate healthy competition and truly demonstrates that the change from conventional surgery to robotics is inevitable.

Dr. Abedin-Nasab should be congratulated for the great amount of work involved in bringing together 169 experts from 19 countries to create the Handbook of Robotic and Image-Guided Surgery. The handbook is an up-to-date account of the state of the art of mechatronics applied to surgery, with great iconography and supplementary video material. The original chapter division based on engineering and clinical challenges facilitates the reading of the handbook. Future perspectives on the alternative use of telemanipulators are also nicely described. It is a fantastic achievement of a very demanding task.

About the Editor

Mohammad H. Abedin-Nasab specializes in surgical robotics, robotics, biomechanics, and nonlinear modeling. He has focused on both basic and applied research endeavors, ensuring that his research is consistently relevant to the scientific community as well as the health-care system and medical robotics industry. He has a passion for the development of novel technologies for application in critical areas of biomedicine, including clinical research and surgery. Email: [email protected]


Acknowledgments

I thank God for all the blessings in my life. I am grateful to Marziyeh Saeedi-Hosseiny for her great help and tireless assistance on this project. I am sincerely thankful to Dr. Ali A. Houshmand, President of Rowan University, for his outstanding leadership and support for healthcare research. I am indebted to Dr. Arye Rosen, a member of the National Academy of Engineering and former Associate Vice President of Rowan University, for his unique support and guidance. Special thanks are due to Dr. Mark Byrne, Founding Head and Professor of the Department of Biomedical Engineering, Rowan University, for his continuous help and invaluable advice. I would like to express my sincere appreciation to Fiona Geraghty, Senior Acquisitions Editor at Elsevier, who was the key person at the publisher helping in the creation of this handbook. I am grateful to Kattie Washington, Serena Castelnovo, and Joshua Mearns, who served as Editorial Project Managers of this project during a 2-year period. I would like to thank Sruthi Satheesh, Project Manager at Elsevier, and her team for their revisions and final touches to the handbook. My gratitude also goes to Mark Rogers, who designed the very beautiful cover, and Matthew Limbert, who designed impressively unique title pages for the chapters. The chapters of this handbook have been thoroughly reviewed. I wish to express my heartfelt thanks for the care and thoughts of the reviewers; they are Riddhi Gandhi, Matthew Talarico, Manpreet Singh, Nicholas Silva, Matthew Goldner, Caroline Smith, Daniel Infusino, and Mason Weems.


List of Contributors

Jake J. Abbott Department of Mechanical Engineering, University of Utah, Salt Lake City, UT, United States

Mohammad H. Abedin-Nasab Robossis, Glassboro, NJ, United States; Rowan University, Glassboro, NJ, United States

Ahmad Abiri University of California Los Angeles, Los Angeles, CA, United States

Elnaz Afshari Sina Robotics and Medical Innovators Co., Ltd., Tehran, Iran

Alireza Alamdar Sina Robotics and Medical Innovators Co., Ltd., Tehran, Iran; Sharif University of Technology, Tehran, Iran

Philip Wai Yan Chiu The Chinese University of Hong Kong, Shatin, Hong Kong

Hyouk Ryeol Choi Sungkyunkwan University, Suwon, South Korea

Darko Chudy Department of Neurosurgery, University Hospital Dubrava, Zagreb, Croatia; School of Medicine, Croatian Institute for Brain Research, University of Zagreb, Zagreb, Croatia

Giovanni Cochetti University of Perugia, Perugia, Italy

Ross Crawford Queensland University of Technology, Brisbane, QLD, Australia; Australian Centre for Robotic Vision, Brisbane, QLD, Australia; Prince Charles Hospital, Brisbane, QLD, Australia

Ali Alazmani University of Leeds, Leeds, United Kingdom

Oliver Anderson Colchester General Hospital, Colchester, United Kingdom

Axel Andres University of Geneva, Geneva, Switzerland

Maria Antico Queensland University of Technology, Brisbane, QLD, Australia

Tan Arulampalam Colchester General Hospital, Colchester, United Kingdom

Mahdi Azizian Intuitive Surgical, Sunnyvale, CA, United States

Christos Bergeles School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom

Per Bergman Corindus Vascular Robotics, Waltham, MA, United States

James Bisley University of California Los Angeles, Los Angeles, CA, United States

Steven J. Blacker Corindus Vascular Robotics, Waltham, MA, United States

Andrea Boni University of Perugia, Perugia, Italy

Nicolas Christian Buchs University of Geneva, Geneva, Switzerland

Turgut Bora Cengiz Cleveland Clinic, Cleveland, OH, United States

Danny Tat-Ming Chan The Chinese University of Hong Kong, Hong Kong

William Cross St James’s University Hospital, Leeds, United Kingdom

Peter Culmer University of Leeds, Leeds, United Kingdom

Simon DiMaio Intuitive Surgical, Sunnyvale, CA, United States

Michel De Mathelin ICube Laboratory, Strasbourg, France

Elena De Momi Nearlab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy

Jacopo Adolfo Rossi De Vermandois University of Perugia, Perugia, Italy

Domagoj Dlaka Department of Neurosurgery, University Hospital Dubrava, Zagreb, Croatia

John R. Dooley Accuray, Sunnyvale, CA, United States

Luka Drobilo Faculty of Mechanical Engineering and Naval Architecture, UNIZAG FAMENA, University of Zagreb, Zagreb, Croatia

Erik Dutson University of California Los Angeles, Los Angeles, CA, United States

Thomas Erchinger Geisinger, Wilkes-Barre, Pennsylvania, United States

Zhencheng Fan Tsinghua University, Beijing, China

Richard Fanson Intellijoint Surgical, Waterloo, ON, Canada


Farzam Farahmand Sina Robotics and Medical Innovators Co., Ltd., Tehran, Iran; Sharif University of Technology, Tehran, Iran

Zahra Faraji-Dana 7D Surgical Inc., North York, ON, Canada

Kelly R. Johnson Geisinger, Wilkes-Barre, Pennsylvania, United States

Yaqub Jonmohamadi Queensland University of Technology, Brisbane, QLD, Australia

Koorosh Faridpooya Eye Hospital Rotterdam, Rotterdam, Netherlands

Yen-Yi Juo University of California Los Angeles, Los Angeles, CA, United States

Anthony Fernando TransEnterix, Morrisville, NC, United States

Davide Fontanarosa Queensland University of Technology, Brisbane, QLD, Australia

Chee Wee Gan Department of Otolaryngology, National University of Singapore, Singapore, Singapore

Marin Kajtazi Faculty of Mechanical Engineering and Naval Architecture, UNIZAG FAMENA, University of Zagreb, Zagreb, Croatia

Jin U. Kang Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, United States

John M. Keggi Orthopaedics New England, Middlebury, CT, United States; Connecticut Joint Replacement Institute, Hartford, CT, United States

Mathieu Garayt EOS Imaging, Paris, France

Gianluca Gaudio University of Perugia, Perugia, Italy

Emre Gorgun Cleveland Clinic, Cleveland, OH, United States

Jon C. Gould Medical College of Wisconsin, Milwaukee, WI, United States

Iman Khalaji Intuitive Surgical, Sunnyvale, CA, United States

Vincent Groenhuis Robotics and Mechatronics, University of Twente, Enschede, The Netherlands

Warren Grundfest University of California Los Angeles, Los Angeles, CA, United States

Warren Kilby Accuray, Sunnyvale, CA, United States

Uikyum Kim Korea Institute of Machinery & Materials, Daejeon, South Korea

Yong Bum Kim Sungkyunkwan University, Suwon, South Korea

Ziyan Guo The University of Hong Kong, Hong Kong

Anjuli M. Gupta Geisinger, Wilkes-Barre, Pennsylvania, United States

Sujith Konan University College London Hospitals, London, United Kingdom

Nicholas Kottenstette Corindus Vascular Robotics, Waltham, MA, United States

Monika Hagen University of Geneva, Geneva, Switzerland

Ka-Wai Kwok The University of Hong Kong, Hong Kong

Rana M. Higgins Medical College of Wisconsin, Milwaukee, WI, United States

Ka Chun Lau The Chinese University of Hong Kong, Shatin, Hong Kong

Andre Hladio Intellijoint Surgical, Waterloo, ON, Canada

Jeffrey M. Lawrence Gundersen Health System, Viroqua, WI, United States

Joe Hobeika EOS Imaging, Paris, France

Iulian Iordachita Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, United States

Anjali Jaiprakash Queensland University of Technology, Brisbane, QLD, Australia; Australian Centre for Robotic Vision, Brisbane, QLD, Australia

Branislav Jaramaz Smith & Nephew, Pittsburgh, Pennsylvania, United States

David Jayne University of Leeds, Leeds, United Kingdom

Bojan Jerbić Faculty of Mechanical Engineering and Naval Architecture, UNIZAG FAMENA, University of Zagreb, Zagreb, Croatia

Martin Chun-Wing Leong The University of Hong Kong, Hong Kong

Michael K.K. Leung 7D Surgical Inc., North York, ON, Canada

Yun Yee Leung The Chinese University of Hong Kong, Shatin, Hong Kong

Changsheng Li National University of Singapore, Singapore, Singapore

Wenyu Liang Department of Electrical and Computer Engineering, National University of Singapore, Singapore

Hongen Liao Tsinghua University, Beijing, China

Alexander H. Jinnah Wake Forest School of Medicine, Winston-Salem, NC, United States

Zhuxiu Liao Tsinghua University, Beijing, China

Riyaz H. Jinnah Southeastern Regional Medical Center, Lumberton, NC, United States; Wake Forest School of Medicine, Winston-Salem, NC, United States

Chwee Ming Lim National University Hospital, Singapore, Singapore

Hsueh Yee Lim Department of Otolaryngology, National University of Singapore, Singapore, Singapore


May Liu Intuitive Surgical, Sunnyvale, CA, United States

Longfei Ma Tsinghua University, Beijing, China

Carla Maden University College London, London, United Kingdom

Michael J. Maggitti Southeastern Regional Medical Center, Lumberton, NC, United States

Adrian L.D. Mariampillai 7D Surgical Inc., North York, ON, Canada

Leonardo S. Mattos Biomedical Robotics Lab, Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy

Calvin R. Maurer, Jr. Accuray, Sunnyvale, CA, United States

Ettore Mearini University of Perugia, Perugia, Italy

Jamie Milas EOS Imaging, Paris, France

Alireza Mirbagheri Tehran University of Medical Sciences, Tehran, Iran; Sina Robotics and Medical Innovators Co., Ltd., Tehran, Iran

Riddhit Mitra Smith & Nephew, Pittsburgh, Pennsylvania, United States

Sara Moccia Biomedical Robotics Lab, Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy; Department of Information Engineering, Università Politecnica delle Marche, Ancona, Italy

Mehdi Moradi Sina Robotics and Medical Innovators Co., Ltd., Tehran, Iran

Philippe Morel University of Geneva, Geneva, Switzerland

George Moustris National Technical University of Athens, Athens, Greece

Jeffrey Muir Intellijoint Surgical, Waterloo, ON, Canada

Faisal Mushtaq University of Leeds, Leeds, United Kingdom

Florent Nageotte ICube Laboratory, Strasbourg, France

M. Ali Nasseri Ophthalmology Department, Technical University of Munich, Munich, Germany

Mohan Nathan TransEnterix, Morrisville, NC, United States

Michael Naylor Accuray, Sunnyvale, CA, United States

Gordian U. Ndubizu Geisinger, Wilkes-Barre, Pennsylvania, United States

Cailin Ng NUS Graduate School for Integrative Sciences and Engineering, Singapore, Singapore

Daniel Oh Intuitive Surgical, Sunnyvale, CA, United States

Yasushi Ohmura Department of Surgery, Okayama City Hospital, Okayama, Japan

Elena Oriot EOS Imaging, Paris, France

Ajay K. Pandey Queensland University of Technology, Brisbane, QLD, Australia; Australian Centre for Robotic Vision, Brisbane, QLD, Australia

Theodore Pappas Duke University School of Medicine, Durham, NC, United States

Andrea Peloso University of Geneva, Geneva, Switzerland

Jake Pensa University of California Los Angeles, Los Angeles, CA, United States

Veronica Penza Biomedical Robotics Lab, Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy

Christopher Plaskos OMNI, Raynham, MA, United States

Wai-Sang Poon The Chinese University of Hong Kong, Hong Kong

Bogdan Protyniak Geisinger, Wilkes-Barre, Pennsylvania, United States

Liang Qiu National University of Singapore, Singapore, Singapore

Andrew Razjigaev Queensland University of Technology, Brisbane, QLD, Australia; Australian Centre for Robotic Vision, Brisbane, QLD, Australia

Hongliang Ren National University of Singapore, Singapore, Singapore

Cameron N. Riviere Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, United States

Jonathan Roberts Queensland University of Technology, Brisbane, QLD, Australia; Australian Centre for Robotic Vision, Brisbane, QLD, Australia

Sheila Russo Boston University, Boston, MA, United States

Omid Saber Corindus Vascular Robotics, Waltham, MA, United States

Marzieh S. Saeedi-Hosseiny Robossis, Glassboro, NJ, United States; Rowan University, Glassboro, NJ, United States

Dominique Saragaglia CHU Grenoble-Alpes, South Teaching Hospital, Grenoble, France

Saeed Sarkar Tehran University of Medical Sciences, Tehran, Iran; Sina Robotics and Medical Innovators Co., Ltd., Tehran, Iran

Fumio Sasazawa Queensland University of Technology, Brisbane, QLD, Australia

Sohail Sayeh Accuray, Sunnyvale, CA, United States

Bojan Šekoranja Faculty of Mechanical Engineering and Naval Architecture, UNIZAG FAMENA, University of Zagreb, Zagreb, Croatia

William J. Sellers Geisinger, Wilkes-Barre, Pennsylvania, United States

Dong-Yeop Seok Sungkyunkwan University, Suwon, South Korea

Sami Shalhoub OMNI, Raynham, MA, United States


Françoise J. Siepel Robotics and Mechatronics, University of Twente, Enschede, The Netherlands

Saeed Sokhanvar Corindus Vascular Robotics, Waltham, MA, United States

Jonathan Sorger Intuitive Surgical, Sunnyvale, CA, United States

Beau A. Standish 7D Surgical Inc., North York, ON, Canada

Scott R. Steele Cleveland Clinic, Cleveland, OH, United States

Ivan Stiperski Faculty of Mechanical Engineering and Naval Architecture, UNIZAG FAMENA, University of Zagreb, Zagreb, Croatia

Stefano Stramigioli Robotics and Mechatronics, University of Twente, Enschede, The Netherlands; ITMO University, Saint Petersburg, Russia

Mario Strydom Queensland University of Technology, Brisbane, QLD, Australia

Hao Su City University of New York, New York City, NY, United States

Filip Šuligoj Faculty of Mechanical Engineering and Naval Architecture, UNIZAG FAMENA, University of Zagreb, Zagreb, Croatia

Songping Sun University of California Los Angeles, Los Angeles, CA, United States

Marko Švaco Faculty of Mechanical Engineering and Naval Architecture, UNIZAG FAMENA, University of Zagreb, Zagreb, Croatia

Raphael Sznitman ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland

Masahiro Takahashi Takahashi Surgery Clinic, Yamagata, Yamagata, Japan

Kok Kiong Tan Department of Electrical and Computer Engineering, National University of Singapore, Singapore

Anna Tao University of California Los Angeles, Los Angeles, CA, United States

Alex Todorov Cobot, United States

Christian Toso University of Geneva, Geneva, Switzerland

Morena Turco University of Perugia, Perugia, Italy

Marija Turković Faculty of Mechanical Engineering and Naval Architecture, UNIZAG FAMENA, University of Zagreb, Zagreb, Croatia

Costas Tzafestas National Technical University of Athens, Athens, Greece

Emmanuel Vander Poorten Department of Mechanical Engineering, KU Leuven, Heverlee, Belgium

Josip Vidaković Faculty of Mechanical Engineering and Naval Architecture, UNIZAG FAMENA, University of Zagreb, Zagreb, Croatia

Nikola Vitez Faculty of Mechanical Engineering and Naval Architecture, UNIZAG FAMENA, University of Zagreb, Zagreb, Croatia

Andrea Volpin Royal Derby Hospital, Derby, United Kingdom

Liao Wu Queensland University of Technology, Brisbane, QLD, Australia; Australian Centre for Robotic Vision, Brisbane, QLD, Australia

Yeung Yam The Chinese University of Hong Kong, Shatin, Hong Kong

Victor X.D. Yang Sunnybrook Health Sciences Centre, Toronto, ON, Canada

Philippe Zanne ICube Laboratory, Strasbourg, France

Adrian Žgaljić Faculty of Mechanical Engineering and Naval Architecture, UNIZAG FAMENA, University of Zagreb, Zagreb, Croatia

Xinran Zhang Tsinghua University, Beijing, China

Lucile Zorn ICube Laboratory, Strasbourg, France

Ivan Župančić Faculty of Mechanical Engineering and Naval Architecture, UNIZAG FAMENA, University of Zagreb, Zagreb, Croatia



Senhance Surgical System: Robotic-Assisted Digital Laparoscopy for Abdominal, Pelvic, and Thoracoscopic Procedures

Theodore Pappas 1, Anthony Fernando 2 and Mohan Nathan 2

1 Duke University School of Medicine, Durham, NC, United States
2 TransEnterix, Morrisville, NC, United States

[Chapter opener graphic — Focus: Engineering & Clinical; Technology: Robotic & Image-Guided; link to video]

ABSTRACT

Robotic-assisted digital laparoscopy (Senhance Surgical System) provides a digitized interface between the surgeon and the patient designed to increase surgeon control and reduce surgical variability. The physical open architecture of the system is composed of independent robotic-assisted manipulator arms that are compatible with conventional trocars and familiar laparoscopic instruments, thus taking advantage of the existing operating room and surgical suite ecosystem. Because the Senhance instruments are fully reusable, there is no preset limit on the number of times they can be used. Instrument movement is precise, scaled, and tremor-free, with haptic feedback and familiar laparoscopic motions and technique. Digital 3D imaging with eye tracking provides continuous camera control by the surgical operator. Surgeon and support staff training includes learning the operation of the system and leverages existing laparoscopic technique and equipment. Postoperative evaluation of the system is by postmarket registry. Clinical and cost outcomes are described and summarized. Digital laparoscopy leverages the surgical expertise of general and gynecologic laparoscopic surgeons and is designed to increase surgeon control, reduce variability, improve efficiencies, and eliminate waste.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00001-3 © 2020 Elsevier Inc. All rights reserved.

1.1 Challenges of general surgery and the need for value-driven solutions

Traditional laparoscopy is the gold standard of minimally invasive surgery (MIS): it provides patients with improved perioperative outcomes, improved postoperative recovery with earlier return to normal activity and work, and minimal incisions and scarring compared to open surgery. Patients commonly are discharged the same day they undergo laparoscopic procedures—unlike the longer and more intensive care required for inpatients following open surgery. Although these patient-outcome benefits of laparoscopic surgery are not debated, less has been reported on the physical and mental toll of laparoscopy on the general surgeon [1,2]. Park and colleagues revealed that 86.9% of laparoscopic surgeons suffer from performance-related symptoms, with the principal predictor being high case volume [1]. Burnout—a syndrome characterized by emotional exhaustion, depersonalization, and a decreased sense of personal accomplishment caused by work-related stress—is particularly prevalent in surgical specialties (with a range of 32%–55%) and has a 49% prevalence among general surgeons [3,4].

Considering the ergonomic, vision, and control limitations of laparoscopy, the high rate of general surgeon burnout, and our growing elderly patient population, healthcare systems need to adapt to the shifting technological environment and update technology to help surgeons do their jobs efficiently and effectively. General surgeons operate across a broad range of surgical indications, including those among a heterogeneous and growing elderly patient population, and the high burnout rate among general surgeons indicates they may not be able to keep up with patient demand. These factors feed surgical variability, which often leads to disparate outcomes for patients and higher resource utilization, costs, and waste, such as time, inventory, motion, waiting, and skills.
Value-based healthcare requires hospitals to find new ways to deliver the best clinical outcome at the optimal cost of care, within an environment that fosters the right patient experience delivered by engaged and satisfied surgeons. Until recently, innovations have not been driven to benefit all stakeholders—patients, surgeons, hospitals, and government and private payers—and have not addressed operating room inefficiencies, cost containment, and surgical variability. Current technology does not leverage existing laparoscopic experience and training and, as a result, imposes a high hurdle to learning new techniques. Robotic-assisted digital laparoscopy with the Senhance Surgical System (TransEnterix, Inc., Morrisville, NC, United States) (formerly Telelap ALF-X; SOFAR, Milan, Italy) is designed to be used in the majority of laparoscopic procedures, with operating room times and per-procedure costs comparable to standard laparoscopy. The Senhance instruments are fully reusable, with no preset limit on the number of reuses; the only truly disposable component of the system is the required sterile draping. The 3D digitized interface between the surgeon and the patient affords surgeon control and reduces surgical variability. The open-architecture system comprises independent robotic-assisted manipulator arms that are compatible with conventional trocars and familiar instruments—thereby leveraging the existing operating room and surgical suite environment. Senhance builds on the foundation of laparoscopy and is powered by robotic features, such as haptic feedback and eye tracking, that facilitate the transition by laparoscopic surgeons to robotic-assisted digital laparoscopic procedures.
In the following pages, we describe the engineering and technological premises of robotic-assisted digital laparoscopy, detail the flow of surgical training and procedure planning, define the benefits and value to patients, providers, hospitals, and payers, and describe associated clinical and costs outcomes.

1.2 Robotic-assisted digital laparoscopy

Robotic-assisted digital laparoscopy with the Senhance Surgical System is best described as a digitized interface between the surgeon and the patient during a laparoscopic procedure that enhances surgeon control of the instruments. From an ergonomically supportive open console, the surgeon controls visualization of the surgical field and senses haptic forces at the instrument–tissue interface. Eye-tracking visualization gives the surgeon continuous control of the camera without interrupting the procedure to reposition the camera or relying on support staff to adjust the laparoscope's field of view. The sensation of touch and force was designed into the system to minimize tissue trauma: the force sensing not only provides feedback to the surgeon but also alerts the surgeon if excessive force is detected at either the instrument or the abdominal wall. Regardless of their laparoscopic experience or their engagement with the system's force feedback, surgeons rapidly adapt to the controls of Senhance [5]. The robotic component allows minimally invasive access to difficult-to-reach anatomy; precise, scaled, and tremor-free instrument control; and the ability to visualize within anatomically tight spaces using 3D cameras. The latter confers clarity of detail regarding delicate tissues and depth and spatial relationships in the surgical field.

1.2.1 System components

Senhance is an open-architecture, console-based, multiarm surgical system that enables a surgeon to control surgical instrumentation remotely during MIS in the lower abdomen, pelvis, and thoracic cavity. The capital equipment comprises three main subsystems (Fig. 1.1):

- Console: the station where the surgeon inputs information through hand and eye movements to direct the motion of the arms in the surgical field.
- Manipulator arms: independent mechanized support arms that interface with the endoscope and surgical instruments. The manipulator arms produce output movements based on the instructions from the surgeon at the console. The system is configurable with up to three arms in the United States and up to four arms in the European Union.
- Node: a relay unit that connects the console inputs to the manipulator arms and that converts and transmits the video signals to the 2D/3D monitor on the console.

The open architecture benefits the surgeon, patient, hospital, and payer. The system's compatibility with conventional surgical tools (Table 1.1) not only reduces the hospital's capital investment in system-specific equipment, it also leverages the surgeon's and surgical team's familiarity with conventional laparoscopic equipment and makes possible rapid, as-needed conversion to standard laparoscopic or hybrid procedures. With the open console, the surgeon can maintain a comfortable seated posture, view the entire operating room, and use the eye-tracking software to move the laparoscope for constant imaging and assessment of the surgical field while manipulating the instruments. This smooth and continuous camera control reduces the surgeon's reliance on additional support staff to manually move the laparoscope. The optimal turning and pivot point of each trocar is calculated within each manipulator arm so as to minimize bruising and trauma to the port-site tissue. Each arm is linked electronically to the node, which processes data regarding the positioning of each arm and the connected instrument and its degrees of movement. The output is transmitted to the console, the console monitor, and an operating room monitor; the operating room monitor presents the same operating field view to the surgical assistant and nurse. Laparoscopic techniques and approaches are the foundation of the system. As indicated in Table 1.1, Senhance procedures integrate standard trocars, which allow the surgical assistant to intervene laparoscopically if needed and to use additional standard instruments through other ports. In addition, the console, manipulator arms, and node are in close proximity to the patient, which permits quick transition from the console to the patient should the surgeon decide to convert the procedure to standard laparoscopy or open surgery.
The proximity of the surgeon console to the patient also facilitates verbal communication within the team. Lastly, the entire system can be installed in a standard operating theater and, consequently, does not require creation of a dedicated space.

FIGURE 1.1 Typical operating room setup with the Senhance Surgical System. The console is in the foreground, with the patient table in the middle of the image and the manipulator arms, laparoscope, and instruments in place. A secondary monitor, which shows the same view of the operative field that the surgeon sees, is available to the assistant and nurse.

1. Senhance Surgical System

Senhance Surgical System Chapter | 1


TABLE 1.1 Surgical equipment compatible with the Senhance Surgical System.

Camera and vision: CONMED 3DHD; NOVADAQ PINPOINT; Stryker 1588 AIM; Richard Wolf 3DHD(a)
Electrosurgical: CONMED System 5000; Covidien/Valleylab ForceTriad; Covidien/Valleylab Force FX; Erbe VIO 300D; BOWA ARC 400(a)
Patient table: any operating table used for laparoscopic surgery
Insufflator: any insufflator used for laparoscopic surgery
Trocar: any trocar used for laparoscopic surgery
Suction irrigation: any suction irrigation unit used for laparoscopic surgery

(a) Available only in CE-marked countries.

FIGURE 1.2 Typical Senhance setup that includes three manipulator arms for a supine patient. Credit: © 2018 TransEnterix, Inc.

1.2.1.1 Patient positioning

The surgeon can position the patient according to the requirements of the procedure and the patient's medical condition. Most laparoscopic procedures are performed with the patient supine and often in a Trendelenburg or reverse Trendelenburg position, depending on the anatomical location of the surgery (lower pelvis or foregut, respectively). Because the robotic manipulator arms are independent of one another, the surgeon can position the patient prior to surgery, and change the patient's position during the case, without disconnecting the arms from the patient. The independence and mobility of each arm confer positioning advantages over stationary, single-unit robotic-assisted devices. Three-arm setups of the system with the patient supine or lateral are presented in Figs. 1.2 and 1.3, respectively. Certain procedures, such as partial or total nephrectomy, adrenalectomy, pulmonary lobectomy, and thymectomy, are best performed with the patient on his or her side to facilitate access to the targeted anatomy. The system does not impose restrictions on optimal patient positioning. A four-arm setup for a supine patient is presented in Fig. 1.4.


FIGURE 1.3 Typical Senhance three-arm setup for a patient in the lateral position. Credit: © 2018 TransEnterix, Inc.

FIGURE 1.4 Senhance four-arm setup for a supine patient. Credit: © 2018 TransEnterix, Inc.

1.2.1.2 Docking

The docking time is the period required to adjust all settings and to determine and set the optimal positions of the robotic arms and the intraabdominal placement of the instruments prior to starting the procedure. Stephan et al. recently reported their early experience with Senhance as applied to visceral surgery [6]. The majority of the procedures these experienced laparoscopic surgeons performed were unilateral and bilateral transabdominal preperitoneal (TAPP) inguinal hernia repairs. As the surgeons became more adept with the system, they advanced to more complicated procedures, including cholecystectomy and sigmoid resection. During the first 5 months, they achieved an average docking time of <9 minutes and, for the last 20 procedures during that 5-month period, an average docking time of 7 minutes. Their docking times were similar to those reported by others who used the surgical system in gynecologic and colorectal surgery [7–15]. Stephan's average console time for the 29 cases of inguinal hernia repair was 37 minutes.

1.2.1.3 Eye sensing

Eye-sensing technology has been used for years to assist individuals who have cerebral palsy, quadriparesis, or other disabling physical and cognitive conditions that prevent use of a traditional human–machine interface, such as a computer mouse or trackpad. Eye sensing, or eye tracking, determines where on a screen the user is looking by measuring reflections from the corneas; this process of controlling the computer by gazing at the screen is called gaze interaction. The system emits near-infrared light at specific duty cycles, and the system computes the


gaze position and distance of the user after an initial calibration. The system's digital imaging and eye tracking are intended for use with current visualization technologies (Table 1.1) and take advantage of 2D and 3D high-definition laparoscopic imaging systems based on white light and 2D indocyanine green fluorescence imaging. Senhance's eye-sensing system assists the surgeon in positioning the endoscope and the attached manipulator arm and in navigating the functions area of the console monitor. The surgeon can also choose to use the trackpad and handles at the console to move the endoscope instead of relying on eye sensing. The system first completes a calibration sequence, which includes verification that the surgeon's eye position can be read. Without successful calibration, the system turns off the eye tracker, and the eye tracker cannot be overridden by the user. Several safety mitigations are built into the eye-sensing system. The endoscope can be repositioned only under the surgeon's control: the surgeon depresses the select buttons on both the right and left handles while gazing at a position on the screen, and the system then moves the endoscope in the direction of the surgeon's gaze. Motion stops when either of the two select buttons is released or when the gaze is lost—thus preventing unintended movement of a manipulator arm or surgical instrument.
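The button-and-gaze safety gating described above can be sketched as a small control function. This is an illustrative sketch only; the type names, proportional gain, and screen-coordinate convention are assumptions, not TransEnterix's implementation:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ConsoleState:
    left_select: bool                     # left handle select button held
    right_select: bool                    # right handle select button held
    gaze: Optional[Tuple[float, float]]   # gaze point on screen (0..1), None if lost

def endoscope_command(state: ConsoleState,
                      screen_center: Tuple[float, float] = (0.5, 0.5),
                      gain: float = 0.1) -> Tuple[float, float]:
    """Return a (dx, dy) endoscope-motion command, or (0, 0) when gated off.

    Motion is permitted only while BOTH select buttons are held AND a valid
    gaze point is available; releasing either button or losing the gaze
    stops the endoscope, preventing unintended camera movement.
    """
    if not (state.left_select and state.right_select) or state.gaze is None:
        return (0.0, 0.0)  # safety gate engaged
    gx, gy = state.gaze
    cx, cy = screen_center
    # steer the camera toward the gazed-at point (proportional control)
    return (gain * (gx - cx), gain * (gy - cy))
```

Holding both select buttons while gazing away from screen center yields a small pan toward the gaze point; dropping either condition immediately returns a zero command.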

1.2.1.4 Fulcrum

The console of the Senhance Surgical System includes two handles styled similarly to traditional laparoscopic handles; these console handles can command up to three or four manipulator arms. The manipulator arms connect to modified laparoscopic instruments (graspers, needle holders, scissors, etc.), which are used through trocars to grasp, dissect, mobilize, suture, and retract tissue in an insufflated abdominal space in the same manner as laparoscopy. The end of each robotic manipulator arm is instrumented with force and torque sensors. These sensors determine the fulcrum point, that is, the point of intended rotation of an inserted trocar. The sensors also prevent application of excessive force to surrounding tissue. During setup, a modified laparoscopic instrument is magnetically coupled with an adaptor to the manipulator arm and is inserted into the abdomen through a trocar. The arm makes very small movements to locate the center of rotation of the trocar, that is, the center of the incision point. Once this point has been captured in 3D space, the arm moves the instrument around and through the fulcrum point of the trocar to minimize stress to the surrounding tissue and to the incision site. As with standard laparoscopy, as the surgeon moves the instrument handle, the working tip or effector moves in the opposite direction; this paradoxical motion is conveyed on the console and operating room monitors.
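The exploratory-movement step above amounts to a classic least-squares line-intersection problem: each sample gives a line along the instrument shaft, and the fulcrum is the point closest to all of those lines. The sketch below illustrates that generic technique under assumed conventions; it is not the system's actual algorithm:

```python
def _det3(M):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

def estimate_fulcrum(observations):
    """Least-squares estimate of the trocar pivot (fulcrum) point.

    observations: (p, d) pairs, where p is a 3D point on the instrument
    shaft and d a unit vector along the shaft, sampled as the arm makes
    small exploratory movements. The fulcrum x minimizes the summed
    squared distance to all shaft lines; the normal equations are
    A x = b with A = sum(I - d d^T) and b = sum((I - d d^T) p).
    """
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for p, d in observations:
        for i in range(3):
            for j in range(3):
                # projector onto the plane orthogonal to the shaft: I - d d^T
                m = (1.0 if i == j else 0.0) - d[i] * d[j]
                A[i][j] += m
                b[i] += m * p[j]
    D = _det3(A)  # nonzero as long as the shaft directions are not all parallel
    x = []
    for c in range(3):        # solve the 3x3 system by Cramer's rule
        Ac = [row[:] for row in A]
        for r in range(3):
            Ac[r][c] = b[r]
        x.append(_det3(Ac) / D)
    return tuple(x)
```

With observations generated from lines passing through a common point, the estimate recovers that point up to floating-point error; with noisy real measurements it returns the best-fit pivot.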

1.2.1.5 Force feedback (haptics)

The scaled 1:1 force feedback feature of the Senhance System measures the forces experienced by the instruments along the X, Y, and Z Cartesian axes and transfers these forces electronically to the handles at the console. Accordingly, the surgeon perceives, with a high degree of sensitivity (35 g), the same forces he or she experiences when using standard laparoscopic instrumentation for the same surgical maneuvers. Force feedback helps to ensure that the surgeon has seamless, real-time access to relevant information about instrument tip–tissue contact. The surgeon perceives tissue presence, contact, and relative stiffness via force feedback from contact of the instrument tips and/or shaft with tissue during surgical maneuvers. When combined with the depth of palpation and the instrument tip geometry information provided by the vision system, the user can perceive the physical characteristics of the tissue.
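A minimal sketch of such a 1:1 force mapping with a 35-g sensitivity floor is given below; the excessive-force alert threshold (`MAX_FORCE_N`) is a hypothetical value for illustration, not a Senhance specification:

```python
GRAMS_TO_NEWTONS = 0.00981   # 1 g-force in newtons
SENSITIVITY_G = 35.0         # sensitivity floor stated in the text, in grams
MAX_FORCE_N = 5.0            # hypothetical excessive-force alert threshold

def haptic_output(tip_force_n, scale=1.0):
    """Map measured tip forces (X, Y, Z, in newtons) to handle forces.

    Components below the sensitivity floor are treated as no contact;
    any component above MAX_FORCE_N additionally raises an operator
    alert. A scale of 1.0 reproduces laparoscopic feel at the handles.
    """
    floor = SENSITIVITY_G * GRAMS_TO_NEWTONS
    handle_forces = []
    alert = False
    for f in tip_force_n:
        if abs(f) < floor:
            handle_forces.append(0.0)      # below detectable contact
        else:
            handle_forces.append(scale * f)
            if abs(f) > MAX_FORCE_N:
                alert = True               # excessive force detected
    return tuple(handle_forces), alert
```

The same mapping with `scale` other than 1.0 would attenuate or amplify the reflected forces; the chapter describes a 1:1 transfer.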

1.2.2 Indications

In the United States, the system is indicated for adult use and is intended to assist in the accurate control of laparoscopic instruments for visualization and endoscopic manipulation of tissue (including grasping, cutting, blunt and sharp dissection, approximation, ligation, electrocautery, suturing, mobilization, and retraction) in laparoscopic gynecological surgery, colorectal surgery, cholecystectomy, and inguinal hernia repair. In the European Union, Senhance has received the CE mark and is intended for use in adults for laparoscopic surgery in the abdomen and pelvis and for limited uses in the thoracic cavity, excluding the heart and great vessels. Currently labeled US and EU procedures are listed in Table 1.2. The system is contraindicated for use in surgeries where laparoscopic approaches and techniques are contraindicated. Currently available, fully reusable surgical instruments that are system-specific are listed in Table 1.3.

TABLE 1.2 Labeled procedures for the Senhance Surgical System in the United States and the European Union.

Colorectal surgery (United States and European Union): Lower anterior resection including total mesorectal excision; Colectomy (right, left, total, transverse, hemicolectomy, sigmoidectomy); Rectopexy; Small bowel resection; Abdominoperineal resection

Gynecological surgery (United States and European Union): Radical hysterectomy; Total hysterectomy; Ovarian cystectomy; Oophorectomy; Myomectomy; Lymphadenectomy; Endometriosis resection; Salpingectomy; Adnexectomy; Omentectomy; Parametrectomy; Adhesiolysis

General surgery (United States and European Union): Appendectomy; Cholecystectomy; Inguinal hernia repair

General surgery (European Union only): Ventral hernia repair; Nissen fundoplication; EndoStim implantation; Decompression of celiac axis (treatment of Dunbar syndrome); Sleeve gastrectomy

Urological surgery (European Union only): Adrenalectomy; Prostatectomy; Partial nephrectomy; Renal cyst decortication

Thoracic surgery (European Union only): Pulmonary lobectomy; Pleural biopsy


TABLE 1.3 Fully reusable surgical instruments manufactured for the Senhance Surgical System.

3.0 mm: Atraumatic single-action grasper; Cobra grasper; DeBakey grasper; Needle holder; Monopolar Maryland dissector; Monopolar curved Metzenbaum scissors

5.0 mm: Allis grasper; Johan grasper; Kocher grasper; Strong grasper; Mixter dissector; Babcock forceps; Needle holder right; Needle holder left; Fundus grasper; Monopolar Maryland dissector; Monopolar curved Metzenbaum scissors; Monopolar L-Hook electrode; Bipolar large grasping forceps; Bipolar curved grasping forceps; Bipolar Maryland dissector insert; Bipolar curved scissors insert; Weck Hem-o-lock ML clip applier

10.0 mm: Right angle dissector; Weck Hem-o-lock ML clip applier

Of note is the addition of fully reusable 3-mm instruments, which augment and redefine MIS for robotic-assisted digital laparoscopy. In a report of one laparoscopic surgeon's experience with these smaller instruments and tips for hysterectomy (n = 4), the surgeon felt that the haptic feedback of the Senhance System allowed him to feel the flexibility of the instrument, which is especially important during careful minimally invasive traction and dissection [16]. The median operative time was 97.5 minutes (range, 80–120 minutes) and estimated blood loss (EBL) was <50 mL. Patients were discharged on day 1, and no 30-day postoperative complications occurred.

1.3 User training and procedure planning

1.3.1 Training

The surgical team consists of the surgeon, the surgical assistant, and the surgical nurse, and training of the team is standardized in compliance with validated protocols accepted under the EU CE-marking process and by the US FDA. The team is


FIGURE 1.5 Senhance Surgical System training paradigm. Credit: © 2018 TransEnterix, Inc.

trained according to a general model (Fig. 1.5), which was developed based upon recommendations of the Society of American Gastrointestinal and Endoscopic Surgeons to combine the fundamental elements of laparoscopic and robotic-assisted surgeries [17]. The purpose of the training is to facilitate surgeon, surgical assistant, and nurse training on the Senhance Surgical System and to provide trainers with all materials necessary to train a new user. All users of the Senhance Surgical System receive training specific to their roles. The objective is to give users the information and the opportunity to learn and practice the skills required for safe and effective use and to enable interaction with the surgical system and its unique features before, during, and after a laparoscopic procedure. Users receive 2 days of training defined by sequential modules, the last of which is a proficiency assessment based on mock case support. Users practice the skills learned throughout the first day by performing a surgical procedure on a live porcine model on the second day. Each training session is attended by the three members of the surgical team. Users are advised to perform their first case within 1 week after training completion. Table 1.4 provides an outline of the training session; times are approximate, and breaks should be built into the days at trainer and trainee discretion. A postmarket registry of all Senhance procedures and outcomes enables up-to-date evaluation of the system.

1.3.2 Procedure planning

The surgical team follows the same general procedural and operating room setup as for a traditional laparoscopic case. Specific recommended considerations are operating room table height, trocar positions, and manipulator arm placement. The operating room table height should be neither so low nor so high as to limit the motion of any of the manipulator arms during the course of the procedure. A visual guide—the "sweet spot"—is located on both the horizontal and vertical joints of each manipulator arm, and surgical teams are encouraged to use this guide to avoid placing a manipulator arm too close to the end of its physical range of motion. Trocars should be placed no less than 8 cm apart to limit the possibility of arm collisions during surgery. In nearly all cases and patient populations, standard traditional port placement has been used without increased collision risk requiring special modification of the port placement. Manipulator arm placement is planned before the procedure around the primary location of the surgical assistant, to provide the assistant adequate room to access the perioperative space. The surgical team can consult recommended, procedure-specific manipulator arm placement diagrams to ensure adequate spacing for the intended location of the surgical assistant in the sterile field.


TABLE 1.4 Training session overview for the Senhance Surgical System. (Columns: Module | Method | Trainee role(s) | Time | Learning objectives.)

Prior to arrival at the training facility
- Prelearning | Presentation with narrated videos | Surgeon, surgical assistant, nurse | 1.5 h | Gain initial familiarity with the system. Receive system overview, including indications for use, features, and functionality.

Day 1
- Goals and expectations | Presentation (PowerPoint) | Surgeon, surgical assistant, nurse | 0.5 h | Review the learning objectives for the subsequent modules.
- Dry lab practicum | System hands-on/FLS tasks using trainer checklists | Surgeon, surgical assistant | 2 h | Learn and practice the system skills required for effective use of the Senhance Surgical System. Focus on endoscope and instrument insertion, instrument exchange, cockpit orientation, and FLS tasks.
- Dry lab practicum | System hands-on using trainer checklists | Surgical assistant, nurse | — | Learn and practice the system skills required for effective use of the Senhance Surgical System. Focus on system setup, manipulator arm calibration and positioning, draping, instrument assembly, and instrument exchange.
- Dry lab practicum | Advanced Instrument(s) hands-on using trainer checklists | Surgeon, surgical assistant, nurse | 2–5 h | Learn and practice the system skills required for effective use of the Senhance Surgical System Advanced Instrument(s). Focus on case preparation with sterile and nonsterile components, instrument exchange, cockpit orientation, and FLS tasks.
- Team training | — | Surgeon, surgical assistant, nurse | 1 h | Practice team use of the system, including specific troubleshooting and communication among team members.

Day 2
- Pre-wet lab overview | Presentation (PowerPoint) | Surgeon, surgical assistant, nurse | 1 h | Review differences between porcine and human anatomy to prepare for wet lab tasks.
- Task-based wet lab and proficiency assessment | Porcine lab | Surgeon, surgical assistant, nurse | 6 h | Practice the system skills required for effective use of the Senhance Surgical System and demonstrate proficiency by correctly completing tasks (no use errors) without trainer assistance. Nurse sets up the system and circulates according to role checklist. Surgical assistant supports patient-side tasks according to role checklist. Surgeon operates system according to role checklist.

FLS, Fundamentals of laparoscopic surgery.

1.4 Clinical findings

1.4.1 Gynecologic procedures

Many of the published clinical studies of robotic-assisted digital laparoscopy describe gynecologic procedures for the treatment of benign and malignant disorders.

1.4.1.1 Monolateral ovarian cyst removal

Gueli Alletti et al.'s first experience with Senhance was in a small, homogeneous adult population of women (n = 10; BMI < 30 kg/m2) who required monolateral ovarian cyst enucleation [9]. One experienced laparoscopic surgeon performed the procedures. There were no conversions to standard laparoscopy or to laparotomy and no intraoperative


complications. Median docking time was 6 minutes (range, 3–8 minutes), median operative time was 46.3 minutes (range, 22–80 minutes), and median EBL was 50 mL (range, 0–200 mL).

1.4.1.2 Heterogeneous series of gynecologic procedures

In a phase II single-center study of the safety and feasibility of Senhance in the hands of experienced laparoscopic surgeons for a heterogeneous series (n = 146) of gynecologic procedures (Group A: mono- or bilateral salpingo-oophorectomy or cyst enucleation, n = 62; Group B: myomectomy, n = 4; Group C: total hysterectomy, n = 46; and Group D: staging of endometrial cancer, n = 34), median docking time was 7 minutes (range, 3–36 minutes) [13]. Patients ranged in age from 19 to 79 years, and the median BMI was 17.3 kg/m2 (range, 17.3–34.0 kg/m2). Over the course of the study, operative time lessened significantly for hysterectomy (P < .001) and adnexal procedures (P < .002). In both Group A and Group C, there were two conversions to standard laparoscopy. In Group D, one patient underwent laparoscopic conversion and two patients had conversion to laparotomy. This study was the first reported series of the novel robotic-assisted digital laparoscopic approach for the treatment of various gynecologic disorders. The procedures were successfully completed in 95.2% of the cases, with two reported complications (Group C: intraoperative, n = 1; postoperative, n = 1), for an overall complication rate of 1.4%. Intraoperative and early postoperative complication rates were similar to those reported for other minimally invasive gynecologic surgeries [18].

1.4.1.3 Senhance and standard laparoscopy for benign and malignant disease

A more recent, single-institution, case-control study provided a retrospective comparison of the safety and feasibility of Senhance and standard laparoscopy (control group) for total hysterectomy in women with benign (Fig. 1.6) and early malignant gynecologic disease (Fig. 1.7) [13]. A total of 203 women were enrolled (Senhance, n = 88; standard laparoscopy, n = 115). The median operative time was longer for the women in the Senhance group than for those in the standard laparoscopy group (147 minutes [range, 58–320] vs 80 minutes [range, 22–300], respectively). Noteworthy, however, was the comparability of the Senhance operative time to times reported for other robotic-assisted laparoscopic hysterectomy procedures. The longer operative times may have been attributable to learning a new surgical approach [19,20].

1.4.1.4 Hysterectomy in obese patients

Outcomes from a pilot study of the safety and feasibility of robotic-assisted digital laparoscopy for elective total extrafascial hysterectomy with bilateral salpingo-oophorectomy in obese (BMI ≥ 30 and < 40 kg/m2) patients were recently reported [11]. Ten patients were enrolled (median BMI, 33.3 kg/m2; range, 30.4–38.3 kg/m2); the indication for each patient's procedure was early-stage (FIGO IA) endometrial cancer. Median docking time was 10.5 minutes (range, 5–25 minutes); median operative time was 110 minutes (range, 70–200 minutes); and median EBL was 100 mL (range, 50–200 mL). No conversions to either standard laparoscopy or laparotomy occurred. The surgeons were experienced in laparoscopic techniques and felt that the tactile feedback was especially important for safety in this potentially challenging patient population.

FIGURE 1.6 Mobilization of the uterine artery with advanced dissection during total hysterectomy using Senhance. Credit: © 2018 TransEnterix, Inc.


FIGURE 1.7 Robotic-assisted digital laparoscopic pelvic lymph node dissection aided by indocyanine green fluorescence imaging (Novadaq, Toronto, ON, Canada). Credit: © 2018 TransEnterix, Inc.

1.4.2 Colorectal disease

The first clinical experience with Senhance in the surgical treatment of patients (n = 45) with colorectal disease was recently reported by Spinelli et al. [15]. The heterogeneous surgical indications included colorectal cancer (66%), complicated inflammatory bowel disease (18%), diverticular disease (11%), and endoscopically unresectable adenoma (4.4%). Median operating time for all of the procedures was 256 minutes, and median docking time was 10.7 minutes. Three procedures required conversion to standard laparoscopy; none was converted to laparotomy. Median EBL for the entire cohort was <50 mL. Patients with malignant disease received R0 resections with disease-free margins, and the average number of nodes removed was 24.8. No device-related perioperative complications were reported. The authors noted that the system's haptic feedback may have been instrumental in preventing intraoperative complications.

1.4.3 Inguinal hernia repair

Stephan et al. described in detail their first experiences with Senhance in a consecutive adult population (n = 116) requiring primarily inguinal or ventral hernia repair, with some patients requiring cholecystectomy and a few requiring sigmoid resection for diverticular disease [6]. After approximately 30 procedures, the console time of inguinal hernia repair via the TAPP technique was comparable to the incision-to-suture time of a standard laparoscopic TAPP approach.

1.5 Cost considerations

The cost of ownership of robotic systems (acquisition, maintenance, and instruments) has been cited as a drawback to robotic-assisted surgery [21]. However, Senhance may provide a cost-effective robotic surgery program in light of its fully reusable instruments, which can be used with conventional 5-mm trocars. In 2016, Rossitto et al. reported the first one-way sensitivity analysis of costs associated with the use of Senhance in a consecutive series of procedures for the treatment of early-stage endometrial cancer or benign uterine disease in low-risk patients [22]. The procedures, which were performed by three experienced laparoscopic surgeons, were hysterectomy and bilateral salpingectomy (n = 13), radical hysterectomy and bilateral salpingo-oophorectomy (n = 50), and radical hysterectomy, bilateral salpingo-oophorectomy, and pelvic lymphadenectomy (n = 18). The average cost per patient for all of the procedures was €3391.82 and consisted of an average surgical staff cost of €1493.06, operating


room cost of €1225.66, and equipment cost of €673.09. Equipment, which did not include depreciation costs, represented 19.8% of the average procedure cost. Rossitto's paper provides an improved understanding of healthcare resource allocation with the Senhance Surgical System in commonly performed gynecologic surgeries: the costs and outcomes of a robotics program based on Senhance may be similar to those of many programs involving traditional laparoscopic procedures. Future economic studies might include case-controlled and comparative analyses describing direct and indirect costs associated with perioperative outcomes for hernia repair, colorectal procedures, upper gastrointestinal procedures, prostatectomy, and thoracoscopic surgeries.
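The per-patient breakdown reported by Rossitto et al. can be cross-checked with a few lines of arithmetic. Note that the three components sum to €3391.81, one cent below the reported €3391.82 average, presumably a rounding artifact in the source:

```python
# Average per-patient cost components from Rossitto et al. [22], in EUR
costs = {
    "surgical staff": 1493.06,
    "operating room": 1225.66,
    "equipment": 673.09,
}
total = round(sum(costs.values()), 2)                         # EUR 3391.81
equipment_share = round(100 * costs["equipment"] / total, 1)  # 19.8 (%)
print(f"total per patient: EUR {total:.2f}")
print(f"equipment share of procedure cost: {equipment_share}%")
```

The computed equipment share matches the 19.8% figure stated in the text.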

1.6 Conclusion

Published clinical data support the clinical safety, efficacy, and economic feasibility of digital laparoscopy with the Senhance Surgical System in the treatment of a variety of benign and malignant disorders. The system provides novel robotic-assisted benefits to the surgical team with rapid docking and a short learning curve as well as cost of ownership that is comparable to standard laparoscopy.

Acknowledgment

The authors thank Conor Carlsen (TransEnterix, Inc.) for facilitating the development of this chapter.

References

[1] Park A, Lee G, Seagull FJ, Meeneghan N, Dexter D. Patients benefit while surgeons suffer: an impending epidemic. J Am Coll Surg 2010;210(3):306-13.
[2] Miller K, Benden M, Pickens A, Shipp E, Zheng Q. Ergonomics principles associated with laparoscopic surgeon injury/illness. Hum Factors 2012;54(6):1087-92.
[3] Dimou FM, Eckelbarger BS, Riall TS. Surgeon burnout: a systematic review. J Am Coll Surg 2016;222(6):1230-9.
[4] Grisham S. Medscape general surgeon lifestyle report 2018: personal happiness vs work burnout, https://www.medscape.com/slideshow/2018lifestyle-surgeon-6009226#4; 2018 [accessed 23.06.18].
[5] Hutchins AR, Manson RJ, Lerebours R, Farjat AE, Cox ML, Mann BP, et al. Objective assessment of the early stages of the learning curve for the Senhance surgical robotic system. J Surg Educ 2018;76:201-14. Available from: https://doi.org/10.1016/j.jsurg.2018.06.026 [Epub ahead of print].
[6] Stephan D, Sälzer H, Willeke F. First experiences with the new Senhance telerobotic system in visceral surgery. Visc Med 2018;34:31-6. Available from: https://doi.org/10.1159/000486111.
[7] Stark M, Pomati S, D'Ambrosio A, Giraudi F, Gidaro S. A new telesurgical platform—preliminary clinical results. Minim Invasive Ther Allied Technol 2015;24(1):31-6. Available from: https://doi.org/10.3109/13645706.2014.1003945.
[8] Gueli Alletti S, Rossitto C, Cianci S, Restaino S, Costantini B, Fanfani F, et al. Telelap ALF-X vs standard laparoscopy for the treatment of early-stage endometrial cancer: a single-institution retrospective cohort study. J Minim Invasive Gynecol 2016;23(3):378-83. Available from: https://doi.org/10.1016/j.jmig.2015.11.006.
[9] Gueli Alletti S, Rossitto C, Fanfani F, Fagotti A, Costantini B, Gidaro S, et al. Telelap ALF-X assisted laparoscopy for ovarian cyst enucleation: report of the first 10 cases. J Minim Invasive Gynecol 2015;22(6):1079-83.
[10] Gueli Alletti S, Rossitto C, Cianci S, Scambia G. Telelap ALF-X total hysterectomy for early stage endometrial cancer: new frontier of robotic gynecological surgery. Gynecol Oncol 2016;140(3):575-6. Available from: https://doi.org/10.1016/j.ygyno.2016.01.018.
[11] Gueli Alletti S, Rossitto C, Cianci S, Perrone E, Pizzacalla S, Monterossi G, et al. The Senhance surgical robotic system ("Senhance") for total hysterectomy in obese patients: a pilot study. J Robotic Surg 2018;12(2):229-34. Available from: https://doi.org/10.1007/s11701-017-0718-9.
[12] Fanfani F, Restaino S, Gueli Alletti S, Fagotti A, Monterossi G, Rossitto C, et al. TELELAP ALF-X robotic-assisted laparoscopic hysterectomy: feasibility and perioperative outcomes. J Minim Invasive Gynecol 2015;22(6):1011-17. Available from: https://doi.org/10.1016/j.jmig.2015.05.004.
[13] Fanfani F, Monterossi G, Fagotti A, Rossitto C, Gueli Alletti S, Costantini B, et al. The new robotic TELELAP ALF-X in gynecological surgery: single-center experience. Surg Endosc 2016;30(1):215-21. Available from: https://doi.org/10.1007/s00464-015-4187-9.
[14] Fanfani F, Restaino S, Rossitto C, Gueli Alletti S, Costantini B, Monterossi G, et al. Total laparoscopic (S-LPS) versus TELELAP ALF-X robotic-assisted hysterectomy: a case-control study. J Minim Invasive Gynecol 2016;23(6):933-8. Available from: https://doi.org/10.1016/j.jmig.2016.05.008.
[15] Spinelli A, David G, Gidaro S, Carvello M, Sacchi M, Montorsi M, et al. First experience in colorectal surgery with a new robotic platform with haptic feedback. Colorectal Dis 2017. Available from: https://doi.org/10.1111/codi.13882 [Epub ahead of print].
[16] Gueli Alletti S, Perrone E, Cianci S, Rossitto C, Monterossi G, Bernardini F, et al. 3 mm Senhance robotic hysterectomy: a step towards future perspectives. J Robotic Surg 2018;12(3):575-7. Available from: https://doi.org/10.1007/s11701-018-0778-5.


[17] Brunt M. Bulletin No. 11 Fundamentals of laparoscopic surgery: celebrating a decade of innovation in surgical education. American College of Surgeons; 2014. p. 99.
[18] Fanfani F, Fagotti A, Rossitto C, Gagliardi ML, Ercoli A, Gallotta V, et al. Laparoscopic, minilaparoscopic and single-port hysterectomy: perioperative outcomes. Surg Endosc 2012;26:3592-6.
[19] Rosero EB, Kho KA, Joshi GP, Giesecke M, Schaffer JL. Comparison of robotic and laparoscopic hysterectomy for benign gynecologic disease. Obstet Gynecol 2013;122:778-86.
[20] Rossitto C, Gueli Alletti S, Fanfani F, Fagotti A, Costantini B, Gallotta V, et al. Learning a new robotic surgical device: Telelap Alf X in gynaecological surgery. Int J Med Robotics Comput Assist Surg 2016;12(3):490-5.
[21] Lotan Y. Is robotic surgery cost-effective: no. Curr Opin Urol 2012;22:66-9.
[22] Rossitto C, Gueli Alletti S, Romano F, Fiore A, Coretti S, Oradei M, et al. Use of robot-specific resources and operating room times: the case of Telelap Alf-X robotic hysterectomy. Int J Med Robotics Comput Assist Surg 2016;12:613-19. Available from: https://doi.org/10.1002/rcs.1724.



2 A Technical Overview of the CyberKnife System

Warren Kilby, Michael Naylor, John R. Dooley, Calvin R. Maurer, Jr. and Sohail Sayeh

Accuray, Sunnyvale, CA, United States


ABSTRACT

The CyberKnife System is a frameless, image-guided robotic technology used to deliver stereotactic radiosurgery and radiotherapy anywhere in the body where it is clinically indicated. The treatment procedure is automated and delivered under user supervision. Throughout treatment the radiosurgical target is continually sensed using a combination of X-ray and optical imaging. The target pose is localized into a common reference frame using a combination of image registration algorithms and precisely calibrated coordinate transformations. A robotic couch, on which the patient is positioned, and a robotic treatment manipulator, on which a medical linear accelerator is mounted, are aligned using this localization. Manipulation is achieved by delivering ionizing radiation to the target using a high-energy X-ray beam generated by the linear accelerator. Treatment involves delivery of a large number of nonoverlapping treatment beams, from a noncoplanar workspace, that allows sufficient radiation dose to be delivered to the target while respecting the dose tolerances of surrounding healthy tissues. The radiation dose delivered by each beam is determined by a 2D modulated fluence pattern controlled by variable beam collimation, linac dose control, and robot pointing. Pretreatment planning involves segmenting the target and healthy organ volumes using multimodality medical imaging, and using this to optimize the set of beam directions and the modulated fluence pattern for each beam. This chapter describes the CyberKnife System technology, and its major subsystems, as current in 2019.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00002-5 © 2020 Elsevier Inc. All rights reserved.


2.1 Introduction

Stereotactic radiosurgery (SRS) is a noninvasive alternative to conventional surgery that uses precisely targeted beams of ionizing radiation, directed from outside the patient, to replace the surgical resection of solid tumors, other lesions, or functional targets. SRS was originally developed for intracranial applications and required the use of a stereotactic frame mechanically attached to the patient's skull to achieve the required beam alignment precision. Treatment was delivered in a single session (or treatment fraction) using multiple beams distributed over a large solid angle [1]. Subsequently, the same general principles have been applied with new technologies to treat extracranial targets, either as an alternative to, or in combination with, conventional surgery and radiation therapy. Such extracranial treatment is commonly referred to as stereotactic body radiation therapy (SBRT) [2] or stereotactic ablative radiotherapy (SABR) [3], the subtle distinctions between which are beyond the scope of this chapter. Current SRS/SBRT/SABR (hereafter referred to collectively as radiosurgery) techniques use either mechanical frame-based or imaging-based stereotactic alignment, and are typically delivered in one to five treatment fractions. Common clinical indications include intracranial targets [malignant and benign tumors, arteriovenous malformations (AVMs), and functional diseases], spinal tumors and AVMs, and malignancies in the lung, prostate, liver, head and neck, and other sites, including both primary and metastatic diseases [4,5]. A fundamental difference between radiosurgery and conventional radiation therapy is that the former employs more aggressive dose-fractionation schemes. Since all radiation treatment is limited by toxicity to healthy tissues, this requires that radiosurgery achieves higher levels of geometric accuracy and dose conformality than conventional radiation therapy, as outlined below.

1. Geometric accuracy: The geometric uncertainties associated with aligning external treatment beams to a clinical target volume (CTV) within the body are usually managed by adding margins to the CTV during the treatment planning phase to form a planning target volume (PTV) [6]. Delivering the prescription dose to the entire PTV ensures that the CTV receives the intended dose. The CTV-to-PTV margin is a combination of setup and internal margins [7]. The setup margin includes the uncertainties in localizing the CTV at the start of each treatment in the same frame of reference as the treatment device, and of aligning the treatment beams to the CTV in this frame. The internal margin accounts for intratreatment motion of the CTV after initial localization. The most common example is respiratory motion, which affects targets within the lung, liver, pancreas, and kidneys. However, clinically relevant internal motion also affects the gastrointestinal and genitourinary systems (e.g., changes in bladder filling or rectal content during treatment can significantly alter the pose of a prostate cancer CTV [8,9]), and, when gross patient movements are considered, essentially all target locations. The latter are usually limited by patient immobilization devices, but movements of a few millimeters are observed over typical treatment periods even for immobilized patients with intracranial or spinal CTVs [10]. Accommodating geometric uncertainties using margins is problematic when the CTV is adjacent to radiosensitive healthy tissues (e.g., lung, liver, or brain tissue surrounding a tumor, or the bladder and rectum adjacent to a prostate cancer CTV), since parts of these healthy tissues overlap the PTV and receive the full prescribed dose. These overlaps limit the dose-fractionation that can be employed, especially when complications are associated with relatively small volumes of healthy tissue receiving high doses.

2. Dose conformality: The physical nature of radiation transport means that dose deposition cannot be limited to the PTV only, but also spills out into the surrounding healthy tissues. In the standard nomenclature, the volume of tissue receiving the prescription dose is the treated volume (TV), and the volume receiving a dose that is significant in terms of normal tissue tolerance is the irradiated volume (IV) [6]. A perfect treatment would limit the TV to be identical to the PTV and have an infinite dose gradient in all directions beyond the TV, such that the TV-to-IV expansion is zero. Neither of these is practically achievable. The PTV-to-TV expansion is generally determined by the way in which the incident radiation fluence from each treatment beam is modulated (e.g., by collimating the beam to the precise projection of the CTV using a collimation system that achieves very narrow beam penumbra and very low transmitted fluence outside of the collimated area). The TV-to-IV expansion is determined by the radiation modality and by the number of treatment beams used and their spatial arrangement. Dose gradients are maximized when many nonoverlapping beams are used, which makes noncoplanar beam arrangements distributed across a large solid angle beneficial.

The CyberKnife System provides a method for frameless radiosurgery, enabling treatment to be delivered anywhere in the body where it is clinically indicated. Radiosurgery delivered by CyberKnife is autonomous but delivered under human operator supervision. The treatment paradigm involves a combination of image-based alignment, robotic manipulation of the treatment beam and patient, continual image-based tracking of the target throughout treatment, and use of


a large noncoplanar workspace. This paradigm can be employed anywhere in the body and is used for every treatment delivered by the system. CyberKnife was conceived in 1992 [11] and first fully described in 1997 [12]. The system received FDA approval for intracranial treatment in 1999, which was extended to include extracranial treatment in 2001. To date, it is estimated that over 400,000 patients have been treated worldwide using CyberKnife, covering the full range of clinical indications described above. In the 25 years since its inception the technology has undergone continual development such that almost no component remains unchanged from the original design. A complete technical description of the system was published most recently in 2010 [13]. There have since been major changes to several key subsystems and the introduction of a major new system version. This chapter will therefore provide the first technical overview of the CyberKnife M6 System, as current in 2018.
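The PTV/TV nomenclature above lends itself to simple numerical checks; the sketch below computes a treated volume and a Paddick-style conformity index on a synthetic dose grid (the spherical PTV, the toy dose model, and all numbers are invented for illustration and are unrelated to any vendor calculation):

```python
import numpy as np

# Toy 40 mm cube of 1 mm voxels with a spherical PTV (radius 10 mm) at the center
# and a dose that falls off steeply with distance from the center.
grid = np.indices((40, 40, 40)).astype(float)
r = np.sqrt(((grid - 19.5) ** 2).sum(axis=0))   # distance from grid center, mm
ptv = r <= 10.0                                  # planning target volume mask
dose = 60.0 * np.exp(-((r / 12.0) ** 4))         # toy dose in Gy, steep radial fall-off
prescription = 48.0                              # prescribe to the 80% isodose line
tv = dose >= prescription                        # treated volume (TV) mask
overlap = (tv & ptv).sum()
paddick_ci = overlap ** 2 / (tv.sum() * ptv.sum())  # 1.0 would be perfect conformality
```

With these toy numbers the TV sits entirely inside the PTV, so the index simply reports the degree of underdosing of the PTV periphery; in a real plan both over- and undercoverage reduce the index.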

2.2 System overview

The treatment room shown in Fig. 2.1 illustrates the layout of the major treatment delivery subsystems. The patient lies on a flat couch top and except in some pediatric procedures is fully conscious throughout treatment. The couch is supported by a 6 degrees of freedom (6DOF) robotic manipulator that enables the patient pose to be adjusted along all six translational and rotational axes. At the head of the treatment couch is another 6DOF robotic manipulator supporting a medical linear accelerator (linac) that generates the treatment beam. This treatment manipulator allows beams to be directed without the isocentric pointing or coplanar workspace constraints of a traditional C-arm linac gantry. Next to the treatment manipulator is a pedestal in which exchangeable secondary collimator assemblies are stored when not in use. Above the couch are two ceiling-mounted X-ray tubes. These produce square imaging beams directed toward the floor, the central axes of which are each at 45 degrees to the vertical. Image detection is accomplished using two flat panel X-ray detectors mounted flush to the floor. The point in space where the central axes of the two imaging beams intersect is the machine center. The treatment manipulator is positioned such that beams can be directed from many noncoplanar directions toward points within a treatment volume, which is a region of space surrounding the machine center. Treatment accuracy relies on the position and orientation of the imaging system with respect to the treatment manipulator being known with high precision, which is ensured by mechanical alignment during installation and calibration procedures performed during system commissioning. To supplement the X-ray imaging system, an optical imaging system is used for treatments in which respiratory motion is tracked in real time. An array of three optical cameras is installed in a ceiling-mounted boom-arm which can be swung out of the way when not in use. 
During respiratory motion tracking, the camera is used to continuously measure the position of three optical markers attached to a vest worn by the patient. These major subsystems are described in more detail in Section 2.3. Because of the radiation hazard of the treatment beam, the system is installed within a shielded bunker. During treatment delivery, the operators (usually radiation therapists) monitor the system from a control room situated outside the treatment room. This control area, as shown in Fig. 2.1, contains the treatment delivery computer. This displays the live X-ray images acquired during treatment and pregenerated digitally reconstructed radiographs (DRRs), target location tracking results, and information about each treatment beam including the dose delivered and beam collimation settings. Once commenced, treatment proceeds autonomously unless errors are detected or the operator pauses delivery. Because the radiation shielding makes it impossible for the operators to view the patient directly the treatment room is fitted with CCTV, and an intercom enables the patient and operators to communicate during treatment. Additional system hardware, including controllers for the two robotic manipulators, linac, and imaging systems, power distribution equipment, linac cooling and gas systems, and a patient database computer, is installed in a separate equipment room close to the treatment room. The first step in the treatment process is treatment planning, as illustrated in Fig. 2.2. Vendor-provided treatment planning software (TPS) is installed on one or more computers, usually situated in a separate planning office. Each TPS allows a simulation of the treatment delivery in which the optimum geometric arrangement of treatment beams and radiation fluence per beam is determined. The planning process starts with acquisition of a three-dimensional (3D) CT, which is transferred to the TPS via a dedicated patient database. 
A 3D patient model is built from the planning CT, in which a patient coordinate system is defined and the tissue density at each voxel is calculated. Target volumes and relevant healthy tissues are segmented within this model using a combination of automatic and manual methods. To aid tissue segmentation, multimodality secondary image sets can be registered to the primary CT scan. A virtual alignment of the treatment imaging system to the patient model is then performed, such that the treatment target is close to the virtual machine center. This defines the transformation from the patient coordinate system to the virtual imaging system, and since the imaging system to treatment manipulator transformation is already known, this enables a set of


FIGURE 2.1 The CyberKnife treatment suite. The left image shows the patient lying on a flat couch mounted on a robotic manipulator, with the linear accelerator mounted on a second robotic manipulator at the head of the couch. In this case the multileaf collimator assembly is shown attached to the treatment robot. Behind the patient and treatment robot in this image is the pedestal used to store the three alternative secondary collimator assemblies when not attached to the treatment robot. The two ceiling-mounted X-ray tubes are shown, with the floor-mounted X-ray image detectors housed under the rectangular green panel beneath the couch. The optical camera array in its ceiling-mounted boom-arm is shown near the left side of the image. The right image shows the control area. The treatment delivery computer to the left displays live X-ray images, DRRs, tracking results, and treatment-related data during treatment. To the right, the CCTV and intercom systems monitor and communicate with the patient during treatment. The small white console next to the keyboard contains low-level controls to initiate and interrupt treatment. DRR, Digitally reconstructed radiograph. Control area image courtesy Dr. J.A. Gersh, Gibbs Cancer Center and Research Institute—Pelham, Greer, SC, United States.


FIGURE 2.2 An example treatment plan for a lesion in the right lung. Top-left shows an anterior view of the 3D patient model (for simplicity the tissue density has been windowed to show only bone, and the only segmented structure is the PTV, shown in red). The blue lines show 100 noncoplanar beam directions that are available to treat this target. Each line corresponds to a position and orientation of the treatment robot relative to the patient that does not violate robot joint, cable management, or collision constraints. Top-right shows the subset of 24 beams which in this case were selected during the treatment plan optimization. The bottom panels show axial and coronal sections of the CT scan. The dose distribution resulting from the optimized set of beam directions and beam fluence is shown by the colored isodose lines (connecting points of constant dose). In this case the conformality of the treated volume (within the green line) to the PTV is high, as is the dose fall-off in all directions (shown by the closely spaced isodose lines outside the PTV). These are among the dose representations that the treating physician will typically review before approving the plan for treatment. 3D, Three-dimensional; PTV, planning target volume.

virtual treatment beams to be defined onto the patient model corresponding to achievable treatment manipulator orientations. Typically, more than 100 of these feasible beam orientations are found. Using this simulation, the operator selects the optimal set of beam directions and radiation fluence per beam. This complex task is usually performed with the aid of automated optimization algorithms implemented within the TPS. The methods used for image registration, segmentation, dose calculation, and plan optimization are described in more detail in Section 2.3. The resulting dose distribution is reviewed and approved by the treating clinician (usually a radiation oncologist or neurosurgeon) prior to treatment, and the approved plan is stored in the patient database. From the approved plan a set of machine instructions needed to deliver the treatment is automatically generated. In addition, a set of DRRs is generated by ray-casting through the 3D CT model using the simulated orientation of the X-ray imaging system relative to the patient. Prior to treatment, the plan is transferred to the treatment delivery computer. Treatment alignment is based on registering live stereoscopic X-ray images to the precalculated DRRs generated from the treatment plan. Each 2D image registration is calculated automatically using skeletal anatomy (either skull or spinal vertebrae), lung tumor, or implanted fiducial markers (the details of these registration algorithms are provided in Section 2.3). The results of the 2D registrations are combined by geometric back projection to give a 3D transformation of the target anatomy in the live images to the corresponding anatomy in the planning 3D model. The registration result contains the change in the target pose (3D position and 3D rotation) within the treatment room with respect to the simulated geometry in the treatment plan. 
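The back-projection step can be illustrated with a toy triangulation: each 2D registration defines a ray from an X-ray source through the detected target position, and the 3D location is the least-squares point closest to both rays. This is only a geometric sketch with invented coordinates, not the clinical algorithm:

```python
import numpy as np

def triangulate(sources, directions):
    # Least-squares point closest to all rays: minimize the sum over rays of the
    # squared distance from p to the line source + t * direction.
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for s, d in zip(sources, directions):
        d = np.asarray(d, float) / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to d
        A += P
        b += P @ np.asarray(s, float)
    return np.linalg.solve(A, b)

# Two X-ray sources above the couch, imaging axes roughly 45 degrees either side
# of vertical (all coordinates in mm, invented for the sketch).
src_a = np.array([-1000.0, 0.0, 1500.0])
src_b = np.array([1000.0, 0.0, 1500.0])
true_target = np.array([12.0, -3.0, 7.0])   # ground truth for the demo
estimate = triangulate([src_a, src_b], [true_target - src_a, true_target - src_b])
```

Because the demo rays intersect exactly, the estimate recovers the target; with noisy registrations the same formulation returns the closest-fit point.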
Initially the couch manipulator is adjusted to grossly align the patient, such that these offsets are relatively small (typically a few millimeters and degrees). Once this is accomplished the couch remains static during treatment, and all fine alignment corrections are achieved by adjusting the treatment manipulator position and orientation from the settings stored in the treatment plan, based on the target pose deviation and the known transformation from the imaging system to the treatment manipulator system. Using the treatment manipulator rather than the couch manipulator to perform the fine alignment is important, as it removes the need for the patient to be considered as a rigid object statically attached to the couch. Intratreatment motion is tracked continually by repeating the cycle of X-ray acquisition, image registration, pose deviation calculation, and treatment manipulator correction throughout treatment, typically every 30-60 seconds. The exception to this alignment strategy is for targets affected by respiratory motion, which move too rapidly to be managed at this imaging frequency. In these cases, the optical camera system is used to monitor the position of markers placed on the patient surface in real time (approximately every 10 ms). Prior to treatment, the correlation of external marker positions to internal target positions is calculated using a series of X-ray images acquired at multiple phases of the breathing cycle. During treatment the real-time optical signal is combined with this correlation model to sense the target position in real time, which is used to determine the treatment manipulator corrections needed to track the target and maintain the same static beam-target orientation that was simulated in the treatment plan. Additional intratreatment X-ray images are used to verify and adapt the correlation model throughout treatment. More detail of this respiratory tracking technology is provided in Section 2.3.
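A minimal sketch of such a correlation model, assuming a simple polynomial relationship between one external marker amplitude and the superior-inferior target position (the clinical model is considerably richer: multiple markers, 3D motion, and breathing-cycle effects; all numbers below are invented):

```python
import numpy as np

# Sparse X-ray samples acquired at several breathing phases (invented, mm):
marker_amp = np.array([0.0, 2.5, 5.0, 7.5, 10.0])   # external optical marker amplitude
target_si = np.array([0.0, 3.9, 8.1, 11.8, 16.2])   # internal target position (sup-inf)

# Fit a low-order polynomial correlation model to the sparse samples.
coeffs = np.polyfit(marker_amp, target_si, deg=2)

def predict_target(amplitude):
    # Evaluate the model against the live optical signal (~every 10 ms).
    return np.polyval(coeffs, amplitude)
```

Intratreatment X-ray images would then be compared against `predict_target` outputs, and the fit refreshed when the residuals grow.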
The most meaningful definition of geometric accuracy with any radiosurgery system is the total system error (TSE) of the entire treatment planning and delivery sequence (a measurement originally developed for frame-based radiosurgery and termed "total clinically relevant error" [14]). With CyberKnife this is most commonly tested using a phantom containing a hidden spherical target object in which two radiochromic films are mounted orthogonally. The phantom is CT scanned using the standard patient protocol, a treatment plan is developed that encloses the target with a conformal spherical dose distribution, and this plan is then delivered to the phantom. The vendor specification for the radial offset between the dose centroid measured on the films and the intended position (center of the spherical target) is ≤0.95 mm. Versions of this phantom enable the test to be performed for all anatomical tracking methods (skull, spine, lung tumor, and fiducial marker tracking), and with real-time phantom motion to simulate respiratory motion. The TSE combines uncertainties in the full treatment process, including CT acquisition, image segmentation, dose calculation and plan optimization, X-ray to DRR registration, treatment beam alignment, and dose delivery. TSE is measured during acceptance testing and periodic quality assurance tests with every CyberKnife System to demonstrate that this accuracy specification is maintained. A summary of user-published TSE results is provided in tables II and IV of Kilby et al. [13].
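The film analysis at the heart of the TSE test reduces to locating a dose centroid and comparing it with the intended target center. A toy version, with an invented Gaussian dose blob standing in for the digitized film:

```python
import numpy as np

# Toy digitized film: 0.5 mm pixels, delivered dose modeled as a Gaussian blob
# whose center is slightly offset from the intended target center at (50, 50).
yy, xx = np.mgrid[0:101, 0:101].astype(float)
film = np.exp(-(((xx - 51.2) ** 2 + (yy - 49.4) ** 2) / 80.0))

total = film.sum()
cx = (xx * film).sum() / total   # dose centroid, pixel coordinates
cy = (yy * film).sum() / total
pixel_mm = 0.5
radial_offset = pixel_mm * np.hypot(cx - 50.0, cy - 50.0)  # vs intended center, mm
```

For this invented offset of (1.2, -0.6) pixels the radial error is about 0.67 mm, inside the ≤0.95 mm specification quoted above; a real measurement combines the two orthogonal films into a 3D offset.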

2.3 Major subsystems

2.3.1 Robotic manipulation

2.3.1.1 Treatment manipulator

The treatment manipulator for the CyberKnife M6 System is the KUKA QUANTEC KR300 R2500 Ultra robot. This is a 6DOF robot with a maximum payload of 300 kg, 2496 mm reach, and position repeatability of ±0.06 mm. A primary reason for selecting this robot was the higher payload required for the addition of the multileaf collimator (MLC) (described later): the KR300 offers a 60 kg maximum payload increase over the treatment manipulator used in the CyberKnife VSI system [13]. The increased payload comes with a trade-off in overall reach of the robotic arm, approximately a 200 mm reduction in maximum reach. This required changing the treatment room layout in two ways. First, the robot was moved from 45 degrees superior-lateral to the patient to in-line superior to the patient. Second, a custom pedestal was designed to raise the robot 412 mm off the floor, allowing the system to reach over the patient and maximize the available workspace for delivery. Cable management, predominantly to support the operation of the linac and other devices in the treatment head, as well as industrial covers, are added to the KUKA robot (Fig. 2.3). The conduit for the cable management is 95 mm in diameter and carries dozens of cables, including sensors for beam collimation systems, power and signal cables for the monitor chamber and dosimetry electronics, a pneumatic air hose, and high-energy cabling for the linac. The cable management is designed to support the full range of robot motion throughout the workspace; this results in a 750 mm extendable range of the conduit when the robot is stretched out, but also requires that the cabling retracts when the robot head is superior to the patient. A tensioning mechanism at the elbow joint ensures that the cabling is retracted as needed while maintaining minimum bend radii in all cases.


FIGURE 2.3 The treatment robot showing the KUKA robot, treatment head, cabling and tensioning mechanism, and pedestal (left), and in the treatment room with industrial covers fitted, together with the treatment couch and Xchange table (right).

FIGURE 2.4 Lateral view of robot system coordinate frames. Red is +X, blue is +Z, and green is +Y.

2.3.1.2 Coordinate systems and treatment workspace calibration

There are three primary coordinate frames used when describing the robot workspace within the treatment room (Fig. 2.4). The robot world frame has its origin at the base of the robot, centered on the first axis of rotation where the robot is mounted on top of the pedestal. The robot tool frame is defined by a laser mounted inside the linac, such that the laser is coincident with the radiation beam, and the origin is the treatment X-ray beam source (center of the linac target). The robot user frame is defined with its origin at the machine center, with rotations aligned to the robot world frame. The treatment volume for the CyberKnife is centered at the machine center. This point is nominally +2175 mm in X and +508 mm in Z in the robot world frame; when combined with the pedestal, this gives a height of 920 mm off the floor. To calibrate the exact position of this point relative to the robot, a calibration post is inserted into a floor frame which provides a reference point. The calibration of the robot relative to the machine center is performed using the linac laser to scan across a point-like photodetector located at the tip of the calibration post. Scans of this point are performed from positions throughout the robot workspace, and a least-squares minimization provides the calibrated robot tool frame and the origin of the user frame at the calibration point. This provides a baseline calibration for the robot. However, to reach submillimeter accuracy throughout the entire clinical workspace, additional calibrations are required. Section 2.3.1.3 discusses the discretization of the workspace into "paths" and "nodes," as well as the next steps in system calibration.
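The frame chain described above can be sketched with homogeneous transforms. The offsets are the nominal values quoted in the text (412 mm pedestal, machine center at +2175 mm X and +508 mm Z in the robot world frame); the frame names and the composition order are my own illustration:

```python
import numpy as np

def translation(x, y, z):
    # 4x4 homogeneous transform for a pure translation.
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

floor_T_world = translation(0.0, 0.0, 412.0)     # pedestal raises the robot base 412 mm
world_T_user = translation(2175.0, 0.0, 508.0)   # machine center in the robot world frame
floor_T_user = floor_T_world @ world_T_user      # machine center relative to the floor

machine_center_height = floor_T_user[2, 3]       # 412 + 508 = 920 mm, as quoted
```

Rotational calibration corrections would enter as full rotation blocks in the same 4x4 matrices; here only the translational chain is shown.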

2.3.1.3 Treatment paths and node properties

The available workspace that the robot can reach for clinical use is discretized into specific points, referred to as nodes, which are grouped into larger sets, referred to as paths. The robot can be oriented such that the treatment beam source (center of the linac X-ray target) is coincident with each node, and therefore nodes represent the source positions of treatment beams. Nodes are selected from a set of points located on concentric spheres about the machine center, with radii [referred to as source-to-axis distance (SAD)] ranging from 650 to 1200 mm. The node positions are selected to maximize robot reachability during real-time tracking of patient motion, as well as to provide flexibility for robot traversals to other nodes while avoiding collisions in the room and ensuring that the cable management is not stretched or compacted too much. Separate paths are defined for different target anatomy and collimator types. The "head" path is designed such that the head portion of the couch top is located at the machine center. This involves a smaller volume of the couch and leaves the section of the room between the couch top and the robot base open for treatment nodes. Fig. 2.5A shows the position of nodes in this path. This path has shorter SAD nodes (650-900 mm) to maximize the effective dose rate. The extracranial "body" path is designed for the couch extended fully superior, to support placement of targets in the thorax or abdomen at the machine center. This path has longer SAD nodes (800-1200 mm) to allow for larger patient clearance due to respiratory motion tracking, prostate pitch tracking, and the larger range of possible alignment positions throughout the entire body (rather than just the center of the skull for the head path).
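A toy generator of candidate node positions on concentric spheres about the machine center conveys the geometry (real node sets are hand-engineered for reachability, collision avoidance, and cable limits, all of which this sketch ignores; the machine-center coordinates and counts are invented):

```python
import math
import random

random.seed(0)
machine_center = (0.0, 0.0, 920.0)   # mm, in an invented room frame

def candidate_nodes(sad_values, per_sphere):
    # Sample beam-source positions on concentric spheres of radius SAD about
    # the machine center, restricted to the upper hemisphere (beams from above).
    nodes = []
    for sad in sad_values:
        for _ in range(per_sphere):
            z = random.uniform(0.0, 1.0)              # upper hemisphere only
            phi = random.uniform(0.0, 2.0 * math.pi)
            s = math.sqrt(1.0 - z * z)
            d = (s * math.cos(phi), s * math.sin(phi), z)
            nodes.append(tuple(c + sad * u for c, u in zip(machine_center, d)))
    return nodes

# "Body" path SAD range quoted in the text: 800-1200 mm.
body_nodes = candidate_nodes(sad_values=[800.0, 1000.0, 1200.0], per_sphere=40)
```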

FIGURE 2.5 (A) Head path, seen from above (left) and from the side (right). Each red dot is a node. (B) Body path seen from above (left) and from the side (right). Each red dot is a node.


Although the fixed collimator housing and Iris collimator have the same external geometry, the MLC is very different, requiring different paths to handle this distinction (see Section 2.3.2 for a description of these secondary collimators). This results in four primary clinical paths, excluding those used for quality assurance. Each of these paths is calibrated independently, including separate calibrations for the fixed and Iris collimators due to mass differences. Path calibration consists of moving to each node in each path and performing a scan of the calibration post with the linac laser. This process generates a list of offsets which are applied at delivery time. A final correction offset is measured using an arrangement of treatment X-ray beams directed at the machine center onto a phantom containing orthogonal X-ray-sensitive films (the TSE measurement described in Section 2.2). This combination of calibrations allows the system to achieve submillimeter accuracy in treatment beam delivery. The path sets for the CyberKnife M6 System include 117 clinical nodes for the "body" Iris and fixed collimator path, 179 for the "head" Iris and fixed path, 102 for the "body" MLC path, and 171 for the "head" MLC path. These nodes, used as source points for delivering radiation, are referred to as "dose nodes." Each path also contains "dummy nodes," which are used only as traversal waypoints. Each node has associated data to inform the system how that node may be used from treatment planning through treatment delivery. This information includes maximum rotational and translational tracking corrections, X-ray imaging status (i.e., whether placing the treatment head at the node causes the treatment robot to obstruct one of the X-ray imaging systems), and a list of nodes in the path to which the robot can safely move, along with the associated traversal times.
The maximum tracking corrections guide the treatment planning system (TPS) to utilize the proper set of nodes given the parameters of the plan being created. For example, nodes used in prostate treatments require additional robot reachability to support ±5 degrees of pitch correction, above the standard ±1.5 degrees, while nodes used with real-time respiratory motion tracking must support ±25 mm of translation correction rather than the standard ±10 mm. The X-ray imaging status of nodes is used to order the nodes appropriately so that there are guaranteed imaging opportunities where the robot is not blocking either of the X-ray imaging devices. Once the dose nodes are selected during treatment planning, they provide the information required to run a traveling salesman optimization which minimizes the time spent moving between them. The traveling salesman algorithm is based on the Lin-Kernighan heuristic [15], but simplified and customized to work with the constraints and parameters of the CyberKnife, primarily ensuring that imaging nodes are visited at a time interval defined by the operator.
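The interaction between tour length and the imaging-interval constraint can be illustrated with a deliberately simple heuristic. The real system uses a customized Lin-Kernighan optimization; the sketch below substitutes a greedy nearest-neighbor tour, forcing an "imaging-clear" node (one at which the robot does not block either imager) into every window of `interval` nodes. All names and the constraint encoding are illustrative assumptions.

```python
# Toy node-ordering sketch (the product uses a customized Lin-Kernighan
# heuristic): greedy nearest-neighbor tour over dose nodes, constrained so
# that an imaging-clear node appears at least once every `interval` nodes.
def order_nodes(coords, imaging_ok, interval, start=0):
    n = len(coords)
    unvisited = set(range(n)) - {start}
    tour = [start]
    since_imaging = 0 if imaging_ok[start] else 1

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(coords[a], coords[b])) ** 0.5

    while unvisited:
        # When the imaging deadline is due, restrict candidates to
        # imaging-clear nodes (falling back if none remain).
        if since_imaging >= interval - 1:
            pool = [i for i in unvisited if imaging_ok[i]] or list(unvisited)
        else:
            pool = list(unvisited)
        nxt = min(pool, key=lambda i: dist(tour[-1], i))
        tour.append(nxt)
        unvisited.remove(nxt)
        since_imaging = 0 if imaging_ok[nxt] else since_imaging + 1
    return tour
```

On a small line of nodes where only the endpoints are imaging-clear, the constraint visibly pulls the tour away from the pure nearest-neighbor order.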

2.3.1.4 Collision avoidance and proximity detection

Although the paths were designed to avoid collisions with other components of the system, as well as a large bound on the patient volume, the system continuously monitors the position of each robot (treatment robot and patient couch) and calculates distances between these moving objects and the other moving components and static obstacles in the room. All objects are modeled as adjoining convex polytopes for the purposes of the proximity calculations. Fig. 2.6 shows an example of the polytopes when the robot is at a superior head path node; the colors serve only to differentiate between separate models. It is important to note that the couch position can only act as a guide for the patient position; depending on setup factors (pillows, blankets, pads, etc.), the patient may not be in the exact expected position. The user can therefore define one of a set of standard avoidance volumes around the couch based on the patient size and setup. The patient-facing portion of the collimator housing is also encased in a touch sensor, so that if the patient reaches out to touch the robot, or there is a collision, an interlock is tripped that stops motion.
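A common way to make such continuous polytope-distance monitoring cheap is a conservative broad-phase test before any exact narrow-phase computation. The sketch below, an assumption for illustration rather than the product's algorithm, summarizes each convex body by a bounding sphere; if the spheres are farther apart than the alert threshold, the exact polytope distance cannot be below it either.

```python
import math

# Hedged broad-phase proximity sketch (the real system computes exact
# distances between convex polytopes).  Each body is summarized by a bounding
# sphere: centroid of its vertices plus the maximum vertex distance.
def bounding_sphere(vertices):
    c = tuple(sum(v[i] for v in vertices) / len(vertices) for i in range(3))
    r = max(math.dist(v, c) for v in vertices)
    return c, r

def may_be_too_close(verts_a, verts_b, threshold_mm):
    """Conservative test: False means the true gap is certainly >= threshold."""
    (ca, ra), (cb, rb) = bounding_sphere(verts_a), bounding_sphere(verts_b)
    gap_lower_bound = math.dist(ca, cb) - ra - rb  # never exceeds the true gap
    return gap_lower_bound < threshold_mm
```

A `True` result only means the exact (e.g., GJK-style) distance computation is still needed; the broad phase can never miss a genuinely close pair.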

2.3.1.5 Xchange table and tool mounting calibration

One feature of the CyberKnife System is exchangeable secondary collimators via a pneumatic tool-changing mechanism. This enables rigid mounting of the linac and accurate calibration of the robot, while allowing the use of up to three different secondary collimation devices. To enable automated secondary collimator exchange, the storage table must be located and calibrated with respect to the robot. The tool-changing mechanics require submillimeter accuracy to operate, which requires a full 6DOF calibration of the table coordinate frame and sensor positions. The table has positions for three calibration posts, which are smaller versions of the one used for primary system calibration, and the calibration process moves the robot through a vertical and a horizontal scan of each post. This provides three points, giving a full 6DOF representation of the table-top plane relative to the robot. At the center of the storage well for each secondary collimator is another sensor. Each of these is scanned with the linac laser to calibrate the center of the well. Finally, the robot picks up and drops off each housing to verify that the calibration scans successfully found the center positions. The bottoms of the wells are spring loaded, with overtravel sensors to detect if the robot pushes down too far, while the tool mounting face of the linac has sensors to determine when a physical connection is made.
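The step from three scanned points to a full 6DOF frame is a standard three-point frame construction, sketched below. This is the generic geometric technique, not necessarily Accuray's exact procedure: the first point is taken as the origin, the first-to-second direction as the x axis, and the plane normal (from a cross product with the first-to-third direction) as the z axis.

```python
import math

# Sketch: build a 6DOF table-top frame from three calibration-post points.
def frame_from_points(p0, p1, p2):
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    def unit(v):
        n = math.sqrt(sum(x * x for x in v))
        return tuple(x / n for x in v)
    x = unit(sub(p1, p0))          # in-plane axis along p0 -> p1
    z = unit(cross(x, sub(p2, p0)))  # plane normal
    y = cross(z, x)                # completes the right-handed orthonormal triad
    return p0, (x, y, z)
```

The returned origin and axis triple fully determine the table-top plane's position and orientation relative to the robot's coordinate system.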

2. The CyberKnife System

A Technical Overview of the CyberKnife System Chapter | 2


Handbook of Robotic and Image-Guided Surgery

FIGURE 2.6 Example view of proximity detection modeling, with static and dynamic modeled components. The patient avoidance area around the couch is not shown.

The combination of the calibration procedure and these sensors allows the system to safely and quickly exchange between the different secondary collimators. In addition to enabling the use of multiple collimator types, the table also enables the system to perform a laser alignment check prior to each treatment delivery. This occurs either through a successful exchange of collimators or through a single check of the laser value over the center of one of the housing wells. Because the linac laser is aligned and calibrated with the treatment beam, this simple check ensures that the treatment beam is aligned consistently.

2.3.1.6 RoboCouch

The patient couch (RoboCouch) is also a serial-link manipulator which brings additional functionality to the CyberKnife System. RoboCouch is a custom-designed robot, based on the selective compliance articulated robotic arm (SCARA) configuration, which utilizes the same KUKA controller and KUKA wrist as the treatment robot. The first axis is a vertical axis providing Z motion, the two subsequent axes provide planar X/Y motion, and a three-axis intersecting wrist enables rotational positioning (Fig. 2.7). This design enables a therapist to fully align a patient in all 6DOF without having to physically adjust the patient inside the treatment room. The couch can support patients up to 500 lb while maintaining submillimeter accuracy. The RoboCouch workspace allows 100 cm of travel in the inferior/superior direction, ±18 cm in the patient left/right direction, and a minimum load height of 55 cm or less off the floor, which equates to 37 cm of travel posterior to the machine center. This workspace also includes nominal rotation ranges of ±5 degrees about each axis, although the rotation limits are reduced near the extremes of the translation range. The combination of couch limits with the treatment robot tracking limits gives the CyberKnife flexibility in initial patient positioning on the couch, and allows it to handle large patient movements without frequent manual adjustments of the patient. The RoboCouch is calibrated to the rest of the CyberKnife System by performing a sequence of couch moves and tracking calibration targets using the X-ray imaging system. By correlating native couch coordinates with imaging system coordinates, the system can accurately adjust the patient position based on imaging results to maintain patient alignment about the machine center. Once this calibration is performed, the CyberKnife uses the couch to adjust the patient to nominal alignment with the DRRs, and then allows the treatment robot to adjust on-the-fly as the patient moves during treatment delivery.

FIGURE 2.7 RoboCouch high-level diagram with axis labels.
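The couch-to-imaging correlation can be sketched in its simplest form: given paired couch positions and imaged target positions from a sequence of calibration moves, estimate the fixed offset between the two coordinate systems by least squares. Restricting the model to a pure translation is a deliberate simplification for illustration; a full calibration would also solve for rotation.

```python
# Translation-only coordinate-correlation sketch (illustrative simplification;
# the real calibration relates full couch poses to imaging-system results).
def fit_offset(couch_positions, imaged_positions):
    """Least-squares offset = mean of (imaged - couch) over all moves."""
    n = len(couch_positions)
    return tuple(
        sum(img[i] - c[i] for c, img in zip(couch_positions, imaged_positions)) / n
        for i in range(3)
    )

def couch_move_for(desired_imaging_pos, offset):
    """Couch coordinates that place the target at the desired imaging position."""
    return tuple(d - o for d, o in zip(desired_imaging_pos, offset))
```

Once the offset is fitted, alignment requests expressed in imaging coordinates can be converted directly into couch moves.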

2.3.2 Treatment head

The treatment head produces and controls the X-ray treatment beam. In the following description “up” is the direction along the beam central axis away from the patient, and “down” is toward the patient. The treatment head is mounted to the treatment manipulator and is divided into two parts (Fig. 2.8). The upper part is permanently attached, and the lower part is mechanically exchangeable for one of three alternatives. These lower head assemblies are stored in a pedestal next to the treatment robot base (Fig. 2.1) and are exchanged automatically by the treatment manipulator using a pneumatic tool-changing mechanism as described in Section 2.3.1.5. The fixed (upper) part of the treatment head contains the linac which generates the treatment beam by accelerating electrons along an evacuated accelerator structure using microwave power, and colliding these accelerated particles with a thin metal target to generate X-rays principally through bremsstrahlung interaction. The linac is powered by an X-band cavity magnetron, also mounted in the treatment head, and delivers a 6 MV X-ray beam with a dose rate of 1000 cGy/min at the reference treatment distance of 800 mm from the beam source. The X-ray target is situated within the primary collimator, which is a large tungsten enclosure designed to minimize radiation leakage in all directions except along a fixed rectangular aperture defining the maximum possible treatment field size. Downstream of the target is the monitor chamber, which is a sealed, gas-filled ionization chamber used to control the dose delivered to the patient. Charge measured in this chamber is proportional to the X-ray fluence emitted by the linac. The relationship between this charge measurement and dose delivered to the patient is carefully characterized when the system is commissioned. During treatment, this signal is used to terminate each treatment beam when the radiation dose specified in the treatment instructions has been delivered. 
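The dose-termination logic of the monitor chamber can be sketched as a simple accumulator: a commissioned calibration factor converts integrated chamber charge into delivered dose, and the beam is terminated once the planned dose for the beam is reached. The class, units, and calibration factor below are illustrative assumptions, not vendor firmware.

```python
# Toy sketch of monitor-chamber-based beam termination (not vendor firmware).
class MonitorChamber:
    def __init__(self, cgy_per_nc, planned_cgy):
        self.cgy_per_nc = cgy_per_nc    # commissioning calibration (assumed units)
        self.planned_cgy = planned_cgy  # planned dose for this beam
        self.delivered_cgy = 0.0
        self.beam_on = True

    def accumulate(self, charge_nc):
        """Integrate one charge reading; return False once the beam must stop."""
        if self.beam_on:
            self.delivered_cgy += charge_nc * self.cgy_per_nc
            if self.delivered_cgy >= self.planned_cgy:
                self.beam_on = False  # planned dose reached: terminate beam
        return self.beam_on
```

A real chamber controller would additionally watch dose rate, uniformity, and symmetry, as the text notes, and trip an interlock on any out-of-tolerance reading.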
In addition, the chamber monitors the treatment dose rate, beam uniformity, and beam symmetry, and terminates treatment if these deviate outside a tolerable range. The laser mirror assembly directs the beam from a low-power optical laser, mounted at right angles to the treatment beam, along an axis coincident with the center of the radiation beam. The laser is not used therapeutically, but is essential for various quality assurance and calibration procedures and as part of the assembly exchange mechanism described previously. The detachable part of the treatment head contains the secondary collimation system that defines the shape of the treatment beam incident on the patient. At the top of each of these is an intermediate collimator. This is fixed tungsten

FIGURE 2.8 A section of the treatment head showing the major components responsible for shaping and controlling the treatment beam. The linear accelerator (not shown, to the left of this figure) directs a beam of electrons accelerated to approximately 6 MeV onto the X-ray target, where they interact to generate the X-ray beam shown in purple. This beam is collimated by the combination of primary, intermediate, and secondary collimators to shape the beam that is incident on the patient (not shown, to the right of this figure). The electric charge created by the beam as it traverses the gas-filled monitor chamber is used to control the dose delivered to the patient. The 45-degree mirror directs a low-power laser beam (not shown) that is coincident with the beam central axis. This is used for various quality assurance and other functions, but is not involved in treatment delivery. Automated pneumatic connection and disconnection of the master tool plate to the tool plate allows alternate secondary collimator assemblies to be attached for each treatment. The fixed collimator assembly shown here is the simplest of these assemblies. The alternate secondary collimator assemblies (not shown) are an Iris variable circular aperture collimator and a multileaf collimator.

shielding designed to reduce the field size from the maximum defined by the primary collimator down to the maximum supported by the variable secondary collimation sitting below, and to provide some additional shielding to the primary collimation. The three secondary collimator assemblies are described next.

1. Fixed conical collimators. These are 12 static tungsten cylinders, each with a circular aperture. The apertures define beam diameters (at 800 mm from the source) of 5-60 mm. The two smallest sizes have straight apertures. The others are focused to the beam source, which is important to achieve a sharp beam edge (i.e., to minimize the beam penumbra). The individual collimators are manually fitted within the fixed collimator assembly. The collimator size is sensed by the delivery system, and an interlock prevents treatment delivery unless it matches the size contained in the treatment plan for each beam.
2. Iris variable aperture collimator. This collimator can replicate the same set of 12 circular field sizes as the fixed collimators without the need for any manual exchange of parts. It is formed by 12 triangular tungsten segments divided into two banks of six, mounted one above the other. Each bank defines a hexagonal beam aperture, and the two banks are rotated by 30 degrees with respect to each other to achieve a dodecagonal beam that closely approximates a circle. The rotational offset between the two banks also minimizes the radiation leakage between the segments, since any gap between segments in one bank is shielded by the body of a segment in the other. The upper bank forms a smaller aperture than the lower one to approximate the focusing of the fixed collimators. All 12 segments are driven using a single motor. This collimator assembly is described in greater detail elsewhere [16].
3. InCise MLC. This collimator can form irregular beam shapes within a maximum aperture of 115 mm x 100 mm (projected at 800 mm from the source). The beam collimation is provided by 52 tungsten leaves, divided into two opposing banks of 26, which move along linear trajectories orthogonal to the beam central axis. The leaves are driven independently and are capable of unlimited interdigitation and overtravel. The edges of the treatment beam are collimated by either the tips or the sides of the leaves. To minimize beam penumbra, the leaf tips have a three-sided design such that they are focused at the target when the leaf is fully open, fully closed, and at the beam center, with a compromise made at other positions. The leaves are flat-sided and thicker at the bottom than at the top, so that the sides are focused at the target. However, to minimize interleaf radiation leakage the entire leaf assembly is rotated by 0.5 degrees, which achieves a compromise between penumbra and leakage. A detailed description is provided elsewhere [17].

The secondary collimator selection is made during treatment planning. Generally, fixed collimators are preferred for treatments of very small targets such as trigeminal neuralgia, because they deliver the sharpest beam penumbra and avoid any aperture size uncertainty. The Iris collimator is generally preferred for treating larger target volumes with circular apertures since it allows multiple beam diameters to be combined without the need to manually exchange fixed collimators during treatment. The combination of multiple aperture diameters confers both a treatment time and a dosimetric advantage over fixed collimators in many cases [16]. The MLC is the most flexible of the collimator options, allowing arbitrary noncircular apertures and larger apertures than are possible with the other options. This flexibility makes MLC treatments generally the fastest to deliver and can also yield dosimetric advantages, although the radiation transmission and mechanical positioning tolerances are larger than with the other collimators. Treatment plans can be generated for multiple secondary collimator options and compared during the planning phase to inform this decision, or the decision can be based on prior clinical experience of similar cases.
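The claim that the Iris's dodecagonal aperture closely approximates a circle can be quantified with a small geometry check. Modeling the aperture as a regular 12-gon whose inscribed (apothem) radius equals the nominal field radius is an assumption for illustration, not the exact Iris geometry, but it shows the area excess over the ideal circle is only a few percent.

```python
import math

# Area of a regular n-gon with apothem a is n * a^2 * tan(pi / n).
def ngon_area(n, apothem):
    return n * apothem ** 2 * math.tan(math.pi / n)

def dodecagon_vs_circle(radius_mm):
    """Fractional area excess of the dodecagonal aperture over the ideal circle."""
    return ngon_area(12, radius_mm) / (math.pi * radius_mm ** 2) - 1.0
```

Under this model the excess is about 2.3% regardless of field size, consistent with the description of the dodecagon as a close circular approximation.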

2.3.3 Imaging systems for treatment delivery

CyberKnife includes both X-ray and optical imaging systems. The former uses two ceiling-mounted X-ray tubes with beams tilted at 45 degrees to the vertical. This provides an orthogonal stereo image pair from which the 3D location of image features can be obtained. X-ray techniques from 40 to 150 kV are available. The X-ray beams project a fixed field size of approximately 15 cm x 15 cm at the machine center. The nominal distance from the X-ray source to the machine center is 2.2 m, as is the distance from the machine center to the image detector, giving a magnification factor of two. The image detectors are mounted at floor level under load-bearing covers. Each detector is a CsI scintillator deposited onto an amorphous silicon photodiode with a detection area of 40 cm x 40 cm divided into 1024 x 1024 pixels. The entrance surface dose [18] at the machine center for a typical chest exposure (120 kV, 11.5 mAs) is 0.21 mGy. After images are acquired for initial patient alignment, the user specifies the interval between subsequent image acquisitions, which are used to detect and correct for intrafraction target motion. This interval is typically set at 30-60 seconds. For a lung radiosurgery treatment delivered in three fractions, 138 image pairs might be acquired in total. Using the entrance surface dose per image, the total effective dose delivered by the imaging system during this treatment, estimated using the methods in Ref. [18], is 2.6 mSv, which is lower than that from a diagnostic chest CT [19-22]. The optical imaging system, which is used in combination with the X-ray system during respiratory motion tracking, consists of three cameras in a ceiling-mounted retractable boom-arm. This camera array detects the position of three optical markers attached to the patient surface. Each marker is the tip of an optical fiber, with the other end connected to a red LED.
The LEDs are pulsed sequentially (i.e., marker 1, then 2, then 3) so that they can be differentiated by the camera, which reads their positions at about 100 Hz. The camera controller calculates each marker position in a 3D camera frame, which is later reduced to a scalar measurement along the marker principal axis of motion for use in the internal external correlation model that is described later. This system operates in normal room lighting conditions.
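A couple of the imaging-geometry numbers quoted above follow directly from the stated distances and detector format; the short calculation below reproduces them (the derived pixel size at the machine center is an inference from those figures, not a value stated in the text).

```python
# Back-of-envelope figures implied by the imaging geometry described above.
SOURCE_TO_CENTER_M = 2.2      # X-ray source to machine center
CENTER_TO_DETECTOR_M = 2.2    # machine center to detector
DETECTOR_WIDTH_CM = 40.0      # detector active width
PIXELS = 1024                 # pixels across the detector

# Geometric magnification: (source-to-detector) / (source-to-center) = 2.
magnification = (SOURCE_TO_CENTER_M + CENTER_TO_DETECTOR_M) / SOURCE_TO_CENTER_M

# Pixel pitch at the detector, and the size it projects back to at the
# machine center (i.e., in the patient plane).
pixel_mm_detector = DETECTOR_WIDTH_CM * 10.0 / PIXELS
pixel_mm_at_center = pixel_mm_detector / magnification
```

So each detector pixel corresponds to roughly 0.2 mm in the patient plane, consistent with the system's submillimeter localization goals.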

2.3.4 Target localization and tracking methods for treatment delivery

2.3.4.1 Registration of live X-ray images and digitally reconstructed radiographs

As described in Section 2.2, the basis for target localization and tracking during every CyberKnife treatment is automatic registration of live X-ray images, acquired continually throughout treatment (typically every 30-60 seconds), to DRRs generated from the treatment planning CT image, in which the position and orientation of the treatment beams are known. Four image registration methods are available.

6D skull tracking

This method involves tracking the skeletal features of the skull, and is used for intracranial and some upper cervical spine and head and neck targets. Prior to treatment a library of DRRs is calculated, simulating different patient roll angles (i.e., rotation of the planning CT image about the patient superior-inferior axis), which is the out-of-plane rotation for the X-ray images. First, each live image is rigidly registered to the corresponding zero-roll-angle DRR using an intensity-based image similarity measure and a multiresolution search (from coarse to fine spatial resolution), from which the 2D transformation (two translations and in-plane rotation) is estimated. Next this transformation is used to align the live image with the set of DRRs with varying roll angle, sampled every 1 degree, and this set is searched using a different image similarity measure to estimate the roll angle. The in-plane transformation and out-of-plane roll angle are fine-tuned in a similar two-step iterative process, with each estimate providing the starting point for the next iteration, but using more accurate and computationally expensive methods and finer roll angle resolution. The final rigid 3D transformation is calculated by geometric back projection of the two 2D results, with the roll angle averaged between the two 2D results. A complete description is given by Fu and Kuduvalli [23].

Xsight spine tracking system

This method is used for targets located within the spine or those fixed relative to it. During treatment planning a tracking volume is segmented which typically contains the three spinal vertebrae closest to the target volume, and registration with the live images is performed using only the tracking region corresponding to this volume of interest (VOI) projected in each DRR. While superficially a similar problem to skull tracking, spine tracking is complicated by two main factors: (1) overlying high-contrast anatomical structures within the CT and live X-ray field of view (e.g., ribs, clavicle) may deform relative to the vertebrae (e.g., due to a change in arm position, or breathing), changing the image intensity within the tracking region observed in the live images from that simulated in the DRR, and (2) the vertebrae within the tracking volume can deform relative to each other.
The principal changes in the spine registration method relative to skull tracking are that (1) the tracking DRR is calculated by ray-tracing only through a spine volume segmented in the planning CT, so it contains only information related to the spine and not overlying tissues, and (2) rather than one global 2D-2D registration of the tracking region within the tracking DRR to the live image, this registration is performed using a grid of "nodes" distributed within the tracking region. The displacement of each node between the live image and the tracking DRR is calculated independently using an intensity-based similarity measure within a small image subregion surrounding the node, and the node displacements are combined with smoothness constraints to estimate a rigid transformation of the target in each projection. The 2D registration results are combined by back projection to estimate the 3D transformation, and the out-of-plane roll angle is estimated by comparison of the 2D images to a library of precalculated DRRs with varying simulated angles. Details are provided in Refs. [24-26].

Fiducial marker tracking

This method requires implantation of radiopaque fiducial markers (usually metal seeds, coils, or clips) within or adjacent to the target volume. Typically, this is used for soft-tissue targets not fixed relative to the skull or spine, such as prostate, liver, pancreas, and breast. Marker implantation is usually performed percutaneously under image guidance, although lung fiducials can also be implanted bronchoscopically [27,28]. A minimum of three fiducials is needed to calculate translations and rotations, and typically three to five are implanted (one study has shown there is little benefit to implanting more than five [29]). During planning, the markers are localized within the CT image, and tracking is performed by registering these known features in the DRRs with corresponding features in the live images (algorithm details are provided in Refs. [30-32]).
There is a risk of fiducial migration between planning CT and treatment delivery, leading to treatment inaccuracy. To mitigate this, the planning CT is usually acquired at least 1 week after implantation to allow the markers to fixate. In addition, migration changes the intermarker distances between the live image and DRR, and the system provides tools to identify these changes and omit individual markers from the registration calculation if needed.

Xsight lung tracking system

While fiducial marker tracking can be used for lung tumors, this is often undesirable due to the risk of pneumothorax during percutaneous marker implantation. Fortunately, the radiographic contrast between the tumor and the surrounding low-density lung tissue is large enough to enable fiducial-less tracking in a proportion of patients. Xsight lung tracking is performed in two stages. The first is alignment of the patient using Xsight spine tracking of the spinal vertebrae closest to the lung tumor in the planning CT. This is used to adjust the treatment couch to place the estimated lung tumor location near the center of the imaging field of view, and is also used to rotationally align the patient. This step is performed only once, at the start of treatment. The second step is to register the live images to the corresponding DRRs using the image intensity associated with the tumor itself, and this step is repeated for every live image acquired during treatment. The algorithm used to solve this soft-tissue image registration problem has evolved through three major generations. In all cases a tracking volume that encapsulates the lung tumor is defined in the planning CT, and the projection of this VOI onto each DRR, the tumor template, is used to calculate the registration with the live image.


The original Xsight lung algorithm was released in 2006 [33]. The second-generation algorithm, released in 2009, contained incremental improvements on this method, which extended the algorithm success rate (the proportion of the patient population for which the algorithm successfully tracked the tumor) [34]. The third-generation algorithm, released in 2011, uses a fundamentally different approach from the two earlier versions. As in the second version, the tumor template DRR is generated by ray-tracing through a subvolume of the planning CT that contains only the immediate region around the tumor. However, rather than considering this template as a single rigid structure to localize in the live image, the template is divided into overlapping patches, each only a few millimeters square. Patches are registered to the live image independently using a localized intensity similarity measure (normalized cross-correlation). Only patches which have a strong correlation with their locations in the full CT DRR (i.e., those expected to give a strong correlation between features in the tumor and corresponding features in the live X-ray image) are used. Furthermore, the intensity standard deviation within each patch is used to weight the registration result when all of the patch registrations are combined (i.e., more importance is given to patches that include clearly identifiable features). Finally, this weighted sum of individual 2D patch registrations is used to calculate the 2D registration for the whole tumor template, and the two image projection results are back-projected to give the 3D transformation.
Dividing the registration problem into a set of smaller patch registration problems makes the method more robust in the presence of tumor rotation and deformation during breathing. It also makes the method less sensitive to differences between DRR and live image intensity (e.g., due to variations in X-ray technique or image detector nonuniformity), since a local image similarity measure can be used that is normalized to the intensity variation within each patch, and it allows the registration to be calculated using only those parts of the tumor template that contain identifiable features that correlate well with the tumor as projected in the full DRR. In some cases, it is possible to localize a lung tumor in one X-ray projection but not the other (usually because in one projection the tumor is superimposed on the mediastinum or spine). Therefore the system provides a 1-view tracking mode in which Xsight lung is used to track the tumor in only one projection, and the positional uncertainty normal to this projection is accommodated using a larger PTV margin in that direction only. This additional margin can be informed using a 4D-CT study. It is important to note that the X-ray system geometry means that the superior-inferior tumor position, which tends to represent the largest component of respiratory motion, is detected in both X-ray projections. In other cases, particularly if the tumor is very small, it might be impossible to localize it in either projection. Therefore a 0-view tracking option is available, in which case the initial spine alignment is used to localize the tumor, and the positional uncertainty associated with untracked respiratory motion is accommodated in a larger PTV (Fig. 2.9). For these patients, fiducial tracking is also an option. The CyberKnife System provides a lung treatment simulation mode which allows the patient to be imaged and the feasibility of 2-view, 1-view, or 0-view tracking to be established before the treatment plan is generated.
Retrospective analysis (previously unpublished) of image data from 100 lung tumors treated using fiducial marker tracking in 94 patients shows that the third-generation Xsight lung algorithm could be used in 2-view mode for 64% of cases (increased from 45% and approximately 20% with the second- and first-generation algorithms, respectively); 1-view tracking was possible in 81% of cases using the third-generation algorithm.
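The variance-weighted combination step described for the third-generation algorithm can be sketched compactly: each patch contributes its estimated 2D shift, weighted by the intensity standard deviation inside the patch, so patches with clear features dominate the template-level result. This is a minimal sketch of that one step, not the product algorithm; the patch-selection and back-projection stages are omitted.

```python
import math

# Sketch of variance-weighted combination of per-patch 2D registrations.
def combine_patch_shifts(patch_shifts, patch_pixels):
    """patch_shifts: list of (dx, dy); patch_pixels: list of intensity lists."""
    weights = []
    for pix in patch_pixels:
        mean = sum(pix) / len(pix)
        # Intensity std-dev as the feature-content weight for this patch.
        weights.append(math.sqrt(sum((p - mean) ** 2 for p in pix) / len(pix)))
    total = sum(weights)
    if total == 0:
        return (0.0, 0.0)  # no patch contains any contrast
    dx = sum(w * s[0] for w, s in zip(weights, patch_shifts)) / total
    dy = sum(w * s[1] for w, s in zip(weights, patch_shifts)) / total
    return (dx, dy)
```

A flat (featureless) patch receives zero weight, so an unreliable shift estimate from it cannot pull the combined tumor-template registration.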

2.3.4.2 Real-time respiratory motion tracking

Section 2.3.4.1 describes methods for localizing the tumor (or a tumor surrogate) in live X-ray images acquired typically every 30-60 seconds. These methods are useful for tracking quasistatic tumor motion (i.e., tumors that move infrequently, or with a period that is much longer than 1 minute), but are not sufficient for tracking tumors that move with respiration. Continually acquiring X-ray images at the frequency required to track respiratory motion throughout treatment is undesirable because of the patient dose. Therefore the Synchrony Respiratory Motion Tracking System is a hybrid method which relies on the correlation between external marker positions on the patient surface (monitored optically in real time) and tumor position (monitored by X-ray imaging every 30-60 seconds). Unlike gating or breath-hold methods, this tracking approach maintains a 100% duty cycle (i.e., treatment is delivered continuously during all phases of breathing) and allows the patient to breathe normally throughout. The robotic treatment manipulator provides an ideal platform for respiratory motion tracking, unique to the CyberKnife System, because it allows the entire treatment head to follow the 3D tumor trajectory in real time. Alternative experimental approaches for real-time tracking suffer disadvantages in comparison, namely (1) tracking by real-time beam reshaping is possible with an MLC but not with circular collimators, has limited tracking resolution in the direction normal to MLC leaf motion, and has no ability to track motion parallel to the beam central axis, and (2) tracking by real-time couch motion requires the assumption that the patient acts as a rigid body statically attached to the couch during relatively high accelerations and sometimes erratic motion.

2. The CyberKnife System

A Technical Overview of the CyberKnife System Chapter | 2


Handbook of Robotic and Image-Guided Surgery

FIGURE 2.9 An example CTV (purple contour) to PTV (yellow contour) expansion for 2-view (left), 1-view (middle), and 0-view (right) Xsight lung tracking. In the 2-view case the positional uncertainty, and therefore the CTV–PTV margin, is minimized. In the 1-view case shown in this example, the tumor can be localized using the X-ray source located at the top-right of the image but not the one at the top-left. Therefore a larger margin is needed normal to the projection of the top-right X-ray source image to account for the positional uncertainty in this direction. Note that this margin does not represent the entire range of motion in this direction, since the uncertainty is constrained by tracking the motion in the other projection. In the 0-view case the margin accounts for the full range of respiratory motion and intrafraction uncertainty in the position of the tumor relative to the spine, since only the spine is aligned during treatment. CTV, Clinical target volume; PTV, planning target volume.

Of the image registration methods in Section 2.3.4.1, fiducial marker tracking, Xsight lung, and Xsight spine (the latter for spinal targets treated in the prone position) can be combined with Synchrony. Immediately prior to treatment, X-ray images are acquired at multiple phases of the breathing cycle and the external marker (scalar) positions are captured at the same instants. For each optical marker, a correlation model is fitted to these external marker positions versus tumor position data, with a separate model generated for superior–inferior, anterior–posterior, and left–right tumor motion components. During treatment delivery, the markers are monitored continuously, and tumor position is estimated in real time using the average of the correlation model outputs associated with each visible marker. The 3D offset of this position from the static tumor position in the treatment plan is applied by the robotic treatment manipulator in real time. The main complications of tracking respiratory motion using an internal–external correlation model are (1) the correlation between internal and external motion may be nonlinear due to hysteresis in lung inflation/deflation and nonzero phase difference between external and internal chest motion [35,36], (2) the correlation may not be stable between, or during, treatment sessions [35–37], and (3) the treatment manipulator cannot respond instantaneously to a requested change in position. The first of these problems is addressed by fitting multiple correlation models, including linear and nonlinear functions to the internal–external position data, and automatically selecting between them using a measure of the fit quality.
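The fitting and model-selection step can be illustrated with a small self-contained sketch. This is not Accuray's implementation: the function names are hypothetical, only linear and quadratic candidate models are tried, and RMS residual is assumed as the fit-quality measure.

```python
import math

def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations (toy implementation)."""
    n = degree + 1
    a = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        coeffs[r] = (b[r] - sum(a[r][c] * coeffs[c] for c in range(r + 1, n))) / a[r][r]
    return coeffs  # coeffs[i] multiplies x**i

def evaluate(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

def build_correlation_model(marker_amp, tumor_pos):
    """Fit linear and quadratic candidate models of tumor position versus
    external marker amplitude; keep the one with the lower RMS residual."""
    best_coeffs, best_rms = None, float("inf")
    for degree in (1, 2):
        coeffs = polyfit(marker_amp, tumor_pos, degree)
        rms = math.sqrt(sum((evaluate(coeffs, x) - y) ** 2
                            for x, y in zip(marker_amp, tumor_pos)) / len(marker_amp))
        if rms < best_rms:
            best_coeffs, best_rms = coeffs, rms
    return best_coeffs
```

A separate model of this kind would be maintained for each motion component and each optical marker, and refitted as new X-ray data arrive.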
The second problem is greatly reduced because the correlation model is built immediately prior to each treatment session using data acquired at that time, so there is no assumption that any prior motion data (say, from a planning 4D-CT or from previous treatment sessions) are representative of the motion pattern on the day of treatment. The problem of intratreatment model stability is mitigated by acquiring new images throughout treatment, and using these new internal–external position data to continually adapt the correlation model. Fifteen model points are used in a first-in first-out method to construct each model. When a new model point is acquired, the external marker signal is used to synchronize the X-ray acquisition such that the new point corresponds to the same phase of the breathing cycle as the data point about to be ejected. Finally, the third complication is minimized by reducing the latency period as much as possible, to 115 ms in the current system. The tumor may still move as much as 2 mm during this period, and so a prediction algorithm is required to convert the output of the correlation model to an instant 115 ms in the future. This algorithm uses a combination of pattern matching (searching the history of correlation model outputs for the pattern found just before each instant, and using this to predict the output 115 ms later) and a least mean square prediction, with preference usually given to the former. A more detailed description is given by Sayeh et al. [38].

2.3.5 Image registration and segmentation algorithms for treatment planning

During treatment planning, a 3D CT image of the patient must be imported to the TPS. This CT, referred to as the primary image, is mandatory because it allows construction of the DRRs required for tracking during treatment delivery (see Section 2.3.4.1) and calculation of tissue properties needed for radiation dose calculation (see Section 2.3.6.1). An important additional role of this CT image is to enable segmentation of the target volume(s) and relevant organs at risk (OARs), which are needed to construct and evaluate the treatment plan (see Section 2.3.6.2). CT is not the optimal imaging modality for segmenting all structures, and the amount of manual work in image segmentation can form a large part of the treatment planning process. Therefore the TPS enables the user to import multiple secondary images and coregister them to the primary CT image to assist with segmentation, and provides tools for automated segmentation of some anatomies.

2.3.5.1 Multimodality image import and registration

Up to five secondary 3D image sets (or 15 if multiple phases of 4D-CT are used) can be imported into any treatment plan. The most common secondary images include MRI, CT with contrast enhancement, and multiple phases from a 4D-CT series. Tools are provided to rigidly or deformably register these secondary images to the primary CT image. The simplest option is to indicate that the images are already coregistered, which may, for example, be the case if the primary CT is acquired using a PET–CT scanner and the secondary image is a PET scan acquired at the same time. This option might also be used if third-party software has already been used to register the two images. Rigid registration is performed using an intensity-based (normalized mutual information) similarity measure. The user has the option of guiding the registration result by manually defining seed points in the two images, or of omitting the intensity-based registration entirely and registering only on the seed points. For intensity-based registration the user may limit the calculation to a subvolume of the images (e.g., to a region around the tumor if the secondary image will only be used to guide the tumor segmentation). Deformable image registration uses a proprietary nonrigid algorithm. The algorithm assumes no specific parameterization of the deformation field but instead estimates it independently at discrete points, subject to smoothness regularization. The algorithm uses an intensity-based similarity measure (normalized cross-correlation) calculated over small patches, and the deformation field is optimized iteratively in a coarse-to-fine resolution sequence with smoothness regularization applied during each iteration (typically three or four resolution levels and up to 500 iterations per level). The algorithm is implemented on a graphics processing unit (GPU) and typically takes 20–30 seconds to register two 512 × 512 × 300 images.
The algorithm and its validation are described elsewhere [39,40]. The registration result from any secondary image can be copied to any other. For example, the same registration can be applied to all phases of a 4D-CT, or if the CT component of a PET–CT study is registered to the primary image, then the same registration can be applied to the PET component.
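The normalized mutual information measure used for rigid registration can be illustrated for two equal-length intensity lists. This is a toy 1D stand-in: the TPS evaluates the measure over 3D volumes while searching rigid transform parameters, and the exact normalization it uses is not specified here.

```python
import math
from collections import Counter

def entropy(counts, total):
    """Shannon entropy (in nats) from a Counter of occurrence counts."""
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def normalized_mutual_information(img_a, img_b, bins=8):
    """NMI = (H(A) + H(B)) / H(A, B) on quantized intensities; equals 2.0 for
    identical images and falls toward 1.0 as the images become independent."""
    def quantize(img):
        lo, hi = min(img), max(img)
        return [min(int((v - lo) / (hi - lo + 1e-12) * bins), bins - 1) for v in img]
    qa, qb = quantize(img_a), quantize(img_b)
    n = len(qa)
    h_a = entropy(Counter(qa), n)
    h_b = entropy(Counter(qb), n)
    h_ab = entropy(Counter(zip(qa, qb)), n)
    return (h_a + h_b) / h_ab
```

A rigid registration loop would evaluate this score for candidate transforms of the secondary image and keep the transform that maximizes it.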

2.3.5.2 Automated image segmentation

The deformable image registration method is also used to perform autosegmentation of healthy anatomy within the primary image by registering it to multiple presegmented atlases. Specifically, autosegmentation tools are provided for brain (which requires a T1-weighted patient MR registered to the primary CT image) and head and neck anatomies [39]. Brain autosegmentation uses a library of 50 segmented MR scans, each containing 157 structures manually defined by a neuroanatomist (including cortical parcellations). The 20 atlases most similar to any patient image are automatically selected and deformably registered to the patient image, the process typically taking about 3 minutes (Fig. 2.10). Majority voting is used to allocate each voxel in the patient image to one of the atlas structures. The head and neck autosegmentation tool uses a similar approach to segment 15 structures. A different autosegmentation approach is provided for the male pelvic anatomy, which combines aspects of model-based and atlas-based methods to segment seven structures, including prostate, seminal vesicles, urethra, bladder, and rectum, using the primary CT image. All automatically segmented VOIs can be reviewed and manually edited.
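The majority-voting fusion step can be sketched as follows (hypothetical function; in the real pipeline each atlas segmentation is first warped onto the patient grid by deformable registration):

```python
from collections import Counter

def majority_vote(atlas_labels):
    """Fuse per-voxel structure labels from multiple registered atlases.
    atlas_labels: one equal-length label list per atlas (flattened volumes);
    each output voxel takes the most frequent label across atlases."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*atlas_labels)]
```

With 20 registered atlases, each voxel's label is simply the structure most of the warped atlases agree on at that location.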


FIGURE 2.10 Autosegmented brain structures (top) versus manually segmented structures (bottom). This leave-one-out comparison was performed by taking one of 20 expert atlases as an estimate of the ground truth, and comparing against autosegmentation of this image using the deformable registration method applied to the other 19 atlases. Auto- and manually segmented structures were compared using mean surface distance, with results varying from 0.3 mm (Brainstem, Optic Chiasm, Putamen, Pallidum, Ventral Diencephalon) to 1.1 mm (Postcentral Gyrus).
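The mean surface distance metric used in this comparison can be computed for point-sampled surfaces with a brute-force sketch (toy illustration; production tools use dense surface meshes and spatial indexing for the nearest-neighbor search):

```python
import math

def mean_surface_distance(surf_a, surf_b):
    """Symmetric mean surface distance between two point-sampled surfaces:
    the average nearest-neighbor distance from A to B and from B to A."""
    def one_way(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (one_way(surf_a, surf_b) + one_way(surf_b, surf_a))
```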

2.3.5.3 Retreatment

Another use of deformable registration is for situations where a patient requires treatment using CyberKnife having already received radiation therapy (delivered using CyberKnife or some other treatment delivery system) to the same, or an adjacent, anatomical region. In these cases, it can be important to visualize the dose distribution of the new treatment in combination with that delivered previously in order to properly assess the complication risks and optimize the new treatment plan. The retreatment functionality provided by the TPS allows the new primary CT to be deformably registered to the planning CT image from the previous treatment. This registration is then used to overlay the isodose line representations of the previous treatment onto the new primary image and convert these into VOIs that can be used to guide the dose optimization in the new plan. For example, it is possible to identify the volume of an OAR in the new plan which received more than a certain dose previously (this volume is the intersection of the OAR in the new CT with the corresponding isodose deformably registered from the prior treatment CT), and ensure that the dose in this region is constrained to a safe level in the new plan.
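Per voxel, the intersection described above reduces to a logical AND of the OAR mask with a thresholded prior-dose map. A minimal sketch with hypothetical names, assuming the prior dose has already been deformably resampled onto the new planning CT grid:

```python
def overdosed_oar_region(oar_mask, prior_dose, threshold):
    """Voxels of an OAR that received more than `threshold` dose in the prior
    course. Inputs are flattened, voxel-aligned lists on the new CT grid."""
    return [inside and dose > threshold
            for inside, dose in zip(oar_mask, prior_dose)]
```

The resulting mask can then be treated as a VOI carrying a tight maximum-dose constraint in the new plan.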

2.3.6 Radiation dose calculation and optimization algorithms for treatment planning

2.3.6.1 Dose calculation algorithms

The dose calculation problem involves modeling the interaction of primary X-ray photons within the patient, and the energy deposition of secondary electrons generated by these interactions. Dose calculation algorithms are often differentiated by whether the secondary electron transport is considered explicitly (type B algorithms) or energy deposition is considered to be a constant proportion of photon energy lost in each voxel (type A). For many clinical applications, these two methods have equivalent accuracy. The major exception is for small beams inside low-density organs (such as lung) or at the interfaces between soft tissues and low-density materials (such as tumor–lung or tumor–air interfaces). In these cases, type A algorithms can result in dose calculation errors of 10% or more (Fig. 2.11).

FIGURE 2.11 A comparison of dose measurements (×) and calculations using the type A FSPB algorithm without lateral kernel scaling (dotted line) and the type B Monte Carlo algorithm (solid line) along the beam central axis of a 15.4 mm × 15.4 mm MLC beam. Relative dose (normalized at 20 mm depth) is plotted as a function of depth along the central axis. As shown in the figure, the beam (indicated by the red arrow) is normally incident onto a phantom containing 5 cm of water-equivalent plastic, followed by 8 cm of lung-equivalent plastic, and then another 15 cm of water-equivalent plastic. Within the water-equivalent material, which has similar radiation absorption and scattering properties to most soft tissues, both calculations agree well with measurement. However, within the lung material, and just downstream of the lung–water interface, the type A algorithm overestimates the dose while the type B algorithm maintains good agreement with the measurement. FSPB, Finite-size pencil beam.


TABLE 2.1 Type A and B dose calculation algorithms provided for CyberKnife circular (fixed and Iris collimators) and irregular (multileaf collimator) apertures.

                    Fixed and Iris collimators    Multileaf collimator
Type A algorithm    Ray-trace                     Finite-size pencil beam
Type B algorithm    Monte Carlo                   Monte Carlo

The CyberKnife TPS includes type A and B algorithms (Table 2.1). The ray-tracing algorithm is an example of a type A tabulated beam data algorithm [41], and relies on a large set of stored data obtained from dose measurements made in a uniform water phantom to reconstruct the dose at any point within a patient. These data are measured for each of the 12 possible field sizes during system commissioning. This approach is not applicable to the MLC because of the infinite number of possible apertures that can be formed. Instead, a pencil beam algorithm (also type A) generates the dose distribution by the weighted summation of kernels distributed across an arbitrary 2D beam aperture, each of which represents the dose distribution delivered by a narrow pencil beam in an infinite water phantom [41]. The specific finite-size pencil beam (FSPB) algorithm employed is described elsewhere [42]. For both algorithms, the effect of density variations (i.e., to compensate for the fact that the patient is not uniformly water equivalent) on photon energy loss is modeled using equivalent path length, where the effective depth along a ray-line linking the X-ray source to the dose calculation point is calculated as the integral of relative electron density along the ray (electron density at each voxel divided by the electron density of water). The FSPB algorithm accuracy can be further improved in low-density materials by scaling the kernels laterally based on local density to simulate the extended electron range [43], which is implemented as an option.

The type B algorithm provided is Monte Carlo dose calculation [44–48]. In this approach the interactions of individual primary photons, and of secondary electrons and photons generated in these initial interactions, are explicitly simulated. This is a stochastic process since only the probabilities of individual interactions can be known, and therefore the dose calculation involves simulating many thousands of incident photons to achieve a result with acceptable random uncertainty at each voxel. Ray-trace and FSPB dose calculations are faster (requiring just a few seconds) than Monte Carlo (which calculates a typical MLC lung treatment plan with 2% uncertainty in 1–2 minutes [48]). Most commonly, type A algorithms are used for clinical applications other than lung cancer and some head and neck cancers, where type B algorithms are preferred. The type B algorithms can be used for final dose calculation to minimize the dose prescription error, and for preoptimization dose calculation to minimize the optimization convergence error [49]. The most appropriate use of type B algorithms in this context is somewhat contentious [50,51], and so it is important that the user retains control over the algorithm selection at each stage of treatment planning.

2.3.6.2 Dose optimization algorithms

During radiation treatment plan optimization, the planner determines how to apply the delivery system to treat a specific patient. The inputs are the segmented patient anatomy, possible beam orientations, dose calculations for those beams, and the objectives for the patient treatment. The output is the geometry and fluence distribution of radiation beams that most closely achieve those objectives. The CyberKnife TPS includes two optimization methods, sequential optimization (SO) and VOLO. For both methods, the user can set objectives on the dose to segmented anatomical regions, such as tumors and OARs, the dose delivered to artificial structures such as rings around the tumor or other target volumes used to control dose conformality, and the number of monitor units (MU) delivered by individual beams, summed over beams delivered from the same node, or total MU (summed over all beams).

For SO [52], the planner defines objectives which can be either constraints or goals. A constraint cannot be violated during optimization. Constraints can only be set on maximum values, which guarantees that all constraints can be achieved. The maximum MU per beam and per node, if specified, are always constraints. Maximum point dose within any segmented VOI, maximum dose volume within any VOI (e.g., no more than X% of the VOI may receive more than Y dose), and maximum total MU objectives can be treated as constraints and/or goals. Goals are objectives that can be violated. Goals may also include minimum point dose or minimum dose volume for any tumor or other VOI. In SO, the user defines the goals in a prioritized sequence and each goal is optimized individually. The first goal to be optimized must be a minimum dose goal to a target VOI. This can be either the minimum dose to the whole volume or the minimum volume of the tumor that receives the specified dose. Once the goal, or the closest to it that is possible without violating the existing constraints, is achieved, this result is converted into an additional constraint to be applied for all subsequent steps during the optimization. A small user-defined relaxation of the constraint may be applied during this conversion. The following steps in the optimization are usually ordered based on the priority of the clinical objective, with higher priority objectives optimized before lower priority objectives. Like the first goal, each subsequent goal is optimized individually within the existing constraints, and its relaxed result is set as a constraint for subsequent steps. For fixed and Iris collimators, the user selects the volumes to be targeted and the collimator diameters to use. Before optimization begins, a solution space of beams is randomly generated. For the MLC, the planner selects the number of nodes to include in the optimization, the volumes to be targeted, and parameters for shape heuristics used to create individual beams. Before optimization begins, the first node is randomly selected and each additional node is chosen to maximize the spatial diversity of the current set of nodes. Shape heuristics are applied at each node to create the beams that comprise the solution space. This predetermined solution space permits formulation of the optimization as a linear programming problem, because the solution vector is the weight or MU of each beam, and MU is linearly proportional to the dose deposited in the patient, which are the values constrained or optimized. The simplex algorithm is used to solve for the optimum at each step [53].
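The goal-then-constraint bookkeeping of SO can be illustrated with a toy two-beam problem, using a brute-force integer grid search in place of the linear-programming solve (all names and numbers are hypothetical):

```python
def sequential_optimize(beams, oar_max, relax, wmax=200):
    """Toy sequential optimization over two beam weights (integer grid search).
    beams: [(target_dose_per_MU, oar_dose_per_MU), ...] for exactly two beams.
    Step 1 maximizes target dose under a hard maximum-OAR-dose constraint; the
    achieved value, minus a relaxation, then becomes an additional constraint
    while step 2 minimizes total MU."""
    def doses(w):
        target = sum(wi * b[0] for wi, b in zip(w, beams))
        oar = sum(wi * b[1] for wi, b in zip(w, beams))
        return target, oar
    # Step 1: first goal -- maximize target dose within the OAR constraint.
    best_t, best_w = -1.0, (0, 0)
    for w1 in range(wmax + 1):
        for w2 in range(wmax + 1):
            t, o = doses((w1, w2))
            if o <= oar_max and t > best_t:
                best_t, best_w = t, (w1, w2)
    # Convert the relaxed step-1 result into a constraint for the next goal.
    t_floor = best_t - relax
    # Step 2: second goal -- minimize total MU under both constraints.
    best_mu, chosen = float("inf"), best_w
    for w1 in range(wmax + 1):
        for w2 in range(wmax + 1):
            t, o = doses((w1, w2))
            if o <= oar_max and t >= t_floor and w1 + w2 < best_mu:
                best_mu, chosen = w1 + w2, (w1, w2)
    return chosen, t_floor
```

Relaxing the first goal slightly gives the second step room to reduce MU while target coverage remains protected by an explicit constraint.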
VOLO only allows the planner to specify objectives as goals. For all collimators, the planner selects the number of nodes. As with SO, the first node is randomly determined, and subsequent nodes are calculated to maximize the spatial diversity of the current set of nodes. Also as with SO, circular beams are randomly chosen to target the volume. For the MLC, however, the projection of the MLC is divided into rectangular pixels, called beamlets, that create a fluence map projecting from each node to each target volume. The planner manually applies a weighting factor to each clinical goal; these weighted goals are summed into a single cost function and are simultaneously optimized. The cost is formulated as a quadratic function and a quasi-Newton gradient search algorithm finds the optimal beam weights [54–60]. For circular collimators, the weights of the beams are optimized directly. For the MLC, the fluence map is initially optimized (i.e., individual beamlet weights are optimized independently). This optimization is generally performed in seconds, which allows the planner to quickly explore the solution space by varying goals and weights. Each optimal fluence map is then sequenced into deliverable apertures and the dose is recalculated using these apertures [61]. To increase the similarity between the fluence map and aperture dose distributions, the fluence optimization includes a regularization component [62] that smooths the intensities of the fluence map. After sequencing, the weights of the apertures are optimized using the same goals, weights, and search algorithm as used during fluence optimization. Both optimizers provide dose evaluation tools such as dose–volume histograms, 2D isodose lines, 3D isodose surfaces, and calculated dose statistics that allow the planner and other clinicians to determine whether the result can be approved for treatment or requires further modification.
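VOLO's single weighted cost can be sketched with a toy projected-gradient optimizer over nonnegative beamlet weights (hypothetical names; the production optimizer is an L-BFGS-type quasi-Newton method and adds fluence-map smoothing regularization):

```python
def optimize_weights(influence, targets, goal_weights, iters=2000, lr=0.01):
    """Minimize sum_v goal_weights[v] * (dose_v - targets[v])**2 by projected
    gradient descent, where dose_v = sum_b influence[v][b] * w[b] and the
    beamlet weights w are constrained to be nonnegative."""
    n_b = len(influence[0])
    w = [0.0] * n_b
    for _ in range(iters):
        dose = [sum(row[b] * w[b] for b in range(n_b)) for row in influence]
        grad = [0.0] * n_b
        for v, row in enumerate(influence):
            err = 2.0 * goal_weights[v] * (dose[v] - targets[v])
            for b in range(n_b):
                grad[b] += err * row[b]
        # Gradient step, then project back onto the feasible set w >= 0.
        w = [max(0.0, w[b] - lr * grad[b]) for b in range(n_b)]
    return w
```

Increasing a goal's weight pulls the compromise toward that goal, mirroring how the planner steers VOLO by adjusting goal weights and rerunning the fast fluence optimization.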

2.3.7 Data management and connectivity systems

The CyberKnife treatment delivery system consists of multiple computers connected via a dedicated local area network (LAN). The system requires connection to at least one CT scanner but more typically to multiple imaging systems, information systems, and other systems distributed within the hospital and beyond. Interconnectivity is achieved using an integrated data management system (iDMS) located on the LAN. As illustrated in Fig. 2.12, iDMS acts as a central patient database capable of supporting multiple Accuray-supplied treatment delivery systems (CyberKnife, TomoTherapy, and Radixact), and the Accuray-supplied Precision TPS. In addition, it allows connection via a firewall across the hospital network to imaging systems (e.g., CT and MR scanners), hospital information systems, and picture archive and communication systems. Connectivity with third-party radiation therapy systems is enabled by interfacing with oncology information systems provided by other vendors. Connection to the hospital network also allows for additional Accuray TPS and treatment review software to be installed across the hospital. Connection of iDMS to systems beyond the hospital network is also supported via firewall, which enables remote data backup, import of imaging data from remote centers, and remote system diagnostics and service.


FIGURE 2.12 The iDMS acts as a central hub for patient and treatment data used across multiple Accuray-supplied treatment systems and treatment planning systems (dark blue shaded lines) and third-party systems (light green shaded lines), including those connected to the main hospital network or external to it. iDMS, Integrated data management system.

2.4 Summary

The CyberKnife System is a radiation treatment system optimized for frameless radiosurgery anywhere in the body that it is clinically indicated. It provides a unique combination of robotic and image-guided technologies, and is to date the only commercial system custom designed for this application rather than representing a modification of an older existing treatment platform. In our previous technical review of the system in 2010 [13] we noted that while the CyberKnife System had undergone significant technical development since its initial design in the late 1990s, the basic principles of that design remained sound and were not changed. This comment can be reasserted today. As described in this chapter, significant technical developments have been made since 2010. These include: integration of a new robotic treatment manipulator and room layout to improve the available treatment workspace; development of more time-efficient methods for treatment path traversal; introduction of an MLC and associated dose calculation and optimization algorithms, which improve treatment delivery and planning speed and dosimetric quality in many cases; improved methods for fiducial-less lung tumor tracking, which significantly increase the proportion of patients who can benefit from this tracking method; new deformable image registration and autosegmentation methods, which enable more consistent image segmentation and decreased manual workload; and improved network integration.
Nevertheless, these developments have been achieved without modifying the core technical principles of the system which remain (1) minimize the CTV to PTV margin, achieved by continual image guidance, robotic treatment beam manipulation, and motion tracking (including real-time tracking of respiratory motion), and (2) minimize the treated and irradiated volumes, achieved by the large noncoplanar workspace and nonisocentric beam-targeting capabilities that are enabled by the robotic treatment manipulator, together with variable beam collimation and automated beam position/weight/shape optimization algorithms.


Acknowledgments

The authors would like to thank their colleagues Russ Farnell, Ravi Pendse, Beth Kaplan, and Ying Xiong for their assistance with the figures.

References

[1] Leksell L. The stereotactic method and radiosurgery of the brain. Acta Chir Scand 1951;102:316–19.
[2] Benedict SH, Yenice KM, Followill D, Galvin JM, Hinson W, Kavanagh B, et al. Stereotactic body radiation therapy: the report of AAPM Task Group 101. Med Phys 2010;37(8):4078–101.
[3] Loo BW, Chang JY, Dawson LA, Kavanagh BD, Koong AC, Senan S, et al. Stereotactic ablative radiotherapy: what's in a name? Pract Radiat Oncol 2011;1(1):38–9.
[4] Timmerman RD, Herman J, Cho LC. Emergence of stereotactic body radiation therapy and its impact on current and future clinical practice. J Clin Oncol 2014;32(26):2847.
[5] Chin LS, Hahn SS, Patel S, Mattingly T, Kwok Y. Trigeminal neuralgia. Principles and practice of stereotactic radiosurgery. New York: Springer; 2015. p. 649–57.
[6] Jones D. ICRU report 50—prescribing, recording and reporting photon beam therapy. Med Phys 1994;21(6):833–4.
[7] Landberg T, Chavaudra J, Dobbs J, Gerard JP, Hanks G, Horiot JC, et al. Report 62. J Int Comm Radiat Units Meas 1999;32(1):1–52.
[8] Kupelian P, Willoughby T, Mahadevan A, Djemil T, Weinstein G, Jani S, et al. Multi-institutional clinical experience with the Calypso System in localization and continuous, real-time monitoring of the prostate gland during external radiotherapy. Int J Radiat Oncol Biol Phys 2007;67(4):1088–98.
[9] Xie Y, Djajaputra D, King CR, Hossain S, Ma L, Xing L. Intrafractional motion of the prostate during hypofractionated radiotherapy. Int J Radiat Oncol Biol Phys 2008;72(1):236–46.
[10] Hoogeman MS, Nuyttens JJ, Levendag PC, Heijmen BJ. Time dependence of intrafraction patient motion assessed by repeat stereoscopic imaging. Int J Radiat Oncol Biol Phys 2008;70(2):609–18.
[11] Guthrie BL, Adler JJ. Computer-assisted preoperative planning, interactive surgery, and frameless stereotaxy. Clin Neurosurg 1992;38:112–31.
[12] Adler Jr JR, Chang SD, Murphy MJ, Doty J, Geis P, Hancock SL. The Cyberknife: a frameless robotic system for radiosurgery. Stereotact Funct Neurosurg 1997;69(1–4):124–8.
[13] Kilby W, Dooley JR, Kuduvalli G, Sayeh S, Maurer Jr CR. The CyberKnife robotic radiosurgery system in 2010. Technol Cancer Res Treat 2010;5:433–52.
[14] Maciunas RJ, Galloway Jr RL, Latimer JW. The application accuracy of stereotactic frames. Neurosurgery 1994;35(4):682–95.
[15] Lin S, Kernighan BW. An effective heuristic algorithm for the traveling-salesman problem. Oper Res 1973;21(2):498–516.
[16] Echner GG, Kilby W, Lee M, Earnst E, Sayeh S, Schlaefer A, et al. The design, physical properties and clinical utility of an Iris collimator for robotic radiosurgery. Phys Med Biol 2009;54(18):5359.
[17] Asmerom G, Bourne D, Chappelow J, Goggin LM, Heitz R, Jordan P, et al. The design and physical characterization of a multileaf collimator for robotic radiosurgery. Biomed Phys Eng Express 2016;2(1):017003.
[18] Murphy MJ, Balter J, Balter S, BenComo Jr JA, Das IJ, Jiang SB, et al. The management of imaging dose during image-guided radiotherapy: report of the AAPM Task Group 75. Med Phys 2007;34(10):4041–63.
[19] Galanski M, Nagel HD, Stamm G. Expositionsdosis bei CT-Untersuchungen: Ergebnisse einer bundesweiten Umfrage. RöFo 2000;172:M164–8.
[20] Fiberg EG. Norwegian radiation protection authority. Østerås, Norway: Department of Radiation Protection and Nuclear Safety; 2000. p. 193–6.
[21] Smith-Bindman R, Lipson J, Marcus R, Kim KP, Mahesh M, Gould R, et al. Radiation dose associated with common computed tomography examinations and the associated lifetime attributable risk of cancer. Arch Intern Med 2009;169(22):2078–86.
[22] de González AB, Mahesh M, Kim KP, Bhargavan M, Lewis R, Mettler F, et al. Projected cancer risks from computed tomographic scans performed in the United States in 2007. Arch Intern Med 2009;169(22):2071–7.
[23] Fu D, Kuduvalli G. A fast, accurate, and automatic 2D–3D image registration for image-guided cranial radiosurgery. Med Phys 2008;35(5):2180–94.
[24] Fu D, Kuduvalli G. Enhancing skeletal features in digitally reconstructed radiographs. Medical imaging 2006: image processing, vol. 6144. International Society for Optics and Photonics; 2006. p. 61442M.
[25] Fu D, Kuduvalli G, Maurer CR, Allision JW, Adler JR. 3D target localization using 2D local displacements of skeletal structures in orthogonal X-ray images for image-guided spinal radiosurgery. Int J Comput Assist Radiol Surg 2006;1:198–200.
[26] Ho AK, Fu D, Cotrutz C, Hancock SL, Chang SD, Gibbs IC, et al. A study of the accuracy of Cyberknife spinal radiosurgery using skeletal structure tracking. Oper Neurosurg 2007;60(Suppl. 2):ONS-147.
[27] Anantham D, Feller-Kopman D, Shanmugham LN, Berman SM, DeCamp MM, Gangadharan SP, et al. Electromagnetic navigation bronchoscopy-guided fiducial placement for robotic stereotactic radiosurgery of lung tumors: a feasibility study. Chest 2007;132(3):930–5.
[28] Reichner CA, Collins BT, Gagnon GJ, Malik S, Jamis-Dow C, Anderson ED. The placement of gold fiducials for CyberKnife stereotactic radiosurgery using a modified transbronchial needle aspiration technique. J Bronchol Interv Pulmonol 2005;12(4):193–5.
[29] Murphy MJ. Fiducial-based targeting accuracy for external-beam radiotherapy. Med Phys 2002;29(3):334–44.
[30] Mu Z, Fu D, Kuduvalli G. Multiple fiducial identification using the hidden Markov model in image guided radiosurgery. In: Computer Vision and Pattern Recognition Workshop (CVPRW'06). IEEE; 2006. p. 92.
[31] Hatipoglu S, Mu Z, Fu D, Kuduvalli G. Evaluation of a robust fiducial tracking algorithm for image-guided radiosurgery. Medical imaging 2007: visualization and image-guided procedures, vol. 6509. International Society for Optics and Photonics; 2007. p. 65090A.
[32] Mu Z, Fu D, Kuduvalli G. A probabilistic framework based on hidden Markov model for fiducial identification in image-guided radiation treatments. IEEE Trans Med Imaging 2008;27(9):1288–300.
[33] Fu D, Kahn R, Wang B, Wang H, Mu Z, Park J, et al. Xsight lung tracking system: a fiducial-less method for respiratory motion tracking. Treating tumors that move with respiration. Berlin, Heidelberg: Springer; 2007. p. 265–82.
[34] Jordan P, West J, Sharda A, Maurer C. SU-GG-J-24: retrospective clinical data analysis of fiducial-free lung tracking. Med Phys 2010;37(6 Part 9):3150.
[35] Seppenwoolde Y, Shirato H, Kitamura K, Shimizu S, Van Herk M, Lebesque JV, et al. Precise and real-time measurement of 3D tumor motion in lung due to breathing and heartbeat, measured during radiotherapy. Int J Radiat Oncol Biol Phys 2002;53(4):822–34.
[36] Shirato H, Seppenwoolde Y, Kitamura K, Onimura R, Shimizu S. Intrafractional tumor motion: lung and liver. Semin Radiat Oncol 2004;14(1):10–18.
[37] Keall PJ, Mageras GS, Balter JM, Emery RS, Forster KM, Jiang SB, et al. The management of respiratory motion in radiation oncology: report of AAPM Task Group 76. Med Phys 2006;33(10):3874–900.
[38] Sayeh S, Wang J, Main WT, Kilby W, Maurer CR. Respiratory motion tracking for robotic radiosurgery. Treating tumors that move with respiration. Berlin, Heidelberg: Springer; 2007. p. 15–29.
[39] Jordan P, Myronenko A, Gorczowski K, Foskey M, Holloway R, Maurer Jr CR. Accuray deformable image registration: evaluation and description (white paper). Sunnyvale, CA: Accuray Inc.; 2017.
[40] Gupta V, Wang Y, Romero AM, Myronenko A, Jordan P, Maurer C, et al. Fast and robust adaptation of organs-at-risk delineations from planning scans to match daily anatomy in pre-treatment scans for online-adaptive radiotherapy of abdominal tumors. Radiother Oncol 2018;127(2):332–8.
[41] Rosenwald JC, Rosenberg I, Shentall G. Patient dose computation for photon beams. Handbook of radiotherapy physics—theory and practice. Taylor & Francis; 2007. p. 559–85.
[42] Jeleń U, Söhn M, Alber M. A finite size pencil beam for IMRT dose optimization. Phys Med Biol 2005;50(8):1747.
[43] Jeleń U, Alber M. A finite size pencil beam algorithm for IMRT dose optimization: density corrections. Phys Med Biol 2007;52(3):617.
[44] Ma CM, Li JS, Deng J, Fan J. Implementation of Monte Carlo dose calculation for CyberKnife treatment planning. J Phys Conf Ser 2008;102(1):012016.
[45] Wilcox EE, Daskalov GM. Accuracy of dose measurements and calculations within and beyond heterogeneous tissues for 6 MV photon fields smaller than 4 cm produced by Cyberknife. Med Phys 2008;35(6 Part 1):2259–66.
[46] Muniruzzaman M, Dooley J, Kilby W, Lee M, Maurer C, Sims C. WE-E-AUD B-02: validation tests for CyberKnife Monte Carlo dose calculations using heterogeneous phantoms. Med Phys 2008;35(6 Part 24):2953.
[47] Marijnissen H, Hol M, van der Baan P, Heijmen B. Verification of Monte Carlo dose calculations in an anthropomorphic thorax phantom for Cyberknife treatment of small lung tumors. Radiother Oncol 2008;88:S114–5.
[48] Dooley JR, Noll JM, Kilby W, Fong W, Yeung T, Goggin LM, et al. Abstract ID: 145 Monte Carlo for CyberKnife radiosurgery with the InCise multileaf collimator. Phys Med 2017;42:31.
[49] Jeraj R, Keall PJ, Siebers JV. The effect of dose calculation accuracy on inverse treatment planning. Phys Med Biol 2002;47(3):391.
[50] Hoogeman MS, van de Water S, Levendag PC, van der Holt B, Heijmen BJ, Nuyttens JJ. Clinical introduction of Monte Carlo treatment planning: a different prescription dose for non-small cell lung cancer according to tumor location and size. Radiother Oncol 2010;96(1):55–60.
[51] Lacornerie T, Lisbona A, Mirabel X, Lartigau E, Reynaert N. GTV-based prescription in SBRT for lung lesions using advanced dose calculation algorithms. Radiat Oncol 2014;9(1):223.
[52] Schlaefer A, Schweikard A. Stepwise multi-criteria optimization for robotic radiosurgery. Med Phys 2008;35(5):2094–103.
[53] Dantzig GB. Linear programming and extensions. Princeton, NJ: Princeton University Press; 1963. ISBN-13: 978-0691059136.
[54] Broyden CG. The convergence of a class of double-rank minimization algorithms 1. General considerations. IMA J Appl Math 1970;6(1):76–90.
[55] Fletcher R. A new approach to variable metric algorithms. Comput J 1970;13(3):317–22.
[56] Goldfarb D. A family of variable-metric methods derived by variational means. Math Comput 1970;24(109):23–6.
[57] Shanno DF. Conditioning of quasi-Newton methods for function minimization. Math Comput 1970;24(111):647–56.
[58] Byrd RH, Lu P, Nocedal J, Zhu C. A limited memory algorithm for bound constrained optimization. SIAM J Sci Comput 1995;16(5):1190–208.
[59] Zhu C, Byrd RH, Lu P, Nocedal J. Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization. ACM Trans Math Softw 1997;23(4):550–60.
[60] Morales JL, Nocedal J. Remark on "Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound constrained optimization". ACM Trans Math Softw 2011;38(1):7.
[61] Xia P, Verhey LJ.
Multileaf collimator leaf sequencing algorithm for intensity modulated beams with multiple static segments. Med Phys 1998;25(8):1424 34. [62] Carlsson F, Forsgren A. Iterative regularization in intensity-modulated radiation therapy optimization. Med Phys 2006;33(1):225 34.

3 The da Vinci Surgical System

Mahdi Azizian, May Liu, Iman Khalaji, Jonathan Sorger, Daniel Oh and Simon DiMaio
Intuitive Surgical, Sunnyvale, CA, United States

ABSTRACT
The da Vinci Surgical System is a platform for robot-assisted minimally invasive surgery. This chapter provides an overview of the design of the da Vinci System, as well as several of its key subsystems for vision, tissue manipulation, anatomical access, and operator technology training. Clinical adoption of the system to date is described briefly, and we leave the reader with some thoughts on possible future development directions.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00003-7
© 2020 Elsevier Inc. All rights reserved.

3.1 Introduction

Surgery is fundamentally a balance of extirpating diseased tissue while simultaneously preserving or reconstructing physiological function. The greatest obstacle to this goal is that disease often occurs deep within the body cavity. To access these deep recesses in traditional surgery, a large incision through the abdominal or chest wall was historically required, which meant disrupting multiple layers of muscle, fascia, and bone. For deeper areas of interest, even larger incisions were needed to avoid operating through deep, dark holes with poor visualization. These large incisions resulted in complex operations that, while effective, came at a significant cost to the patient: extensive tissue trauma delayed functional recovery, increased pain, and lengthened rehabilitation. Surgical technology has since evolved toward reaching these spaces with less damage to the surrounding healthy tissue. The development of laparoscopic technology in the 1970s allowed surgeons for the first time to visualize deep within the abdomen and pelvis. With further advances in video camera technology, true minimally invasive surgery evolved, first with Semm's novel approaches to gynecologic procedures in the 1970s and a subsequent endoscopic appendectomy in the early 1980s. After this, the first laparoscopic cholecystectomies were performed by Erich Mühe and Philippe Mouret in the mid-1980s [1]. The ability to use laparoscopic technology while insufflating CO2 into the abdomen dramatically improved visualization and exposure. In the chest, this same technology was used for video-assisted thoracic surgery, but CO2 was not necessary as the lung could be isolated and collapsed out of the way by the anesthesiologist. Laparoscopic surgery was a true revolution, but after decades of adoption the limitations of the technology hindered its use in complex cases.
One of the primary limitations of laparoscopic surgery has been the combination of straight instruments with the fixed fulcrum of the trocar inserted through the body wall. This limits the surgeon's ability to work at certain angles and makes suturing extremely difficult. Moreover, the video remained two-dimensional (2D), and the flat view with limited angles of entry made laparoscopic surgical approaches and visualization very different from those of traditional open surgery. Nevertheless, the original premise of balancing extirpation of diseased tissue while preserving function could be better realized with smaller incisions and less tissue trauma. The advent of the da Vinci robotic platform provided a novel, alternative minimally invasive platform that addressed several of the shortcomings of laparoscopic surgery. The EndoWrist design of the instrumentation allowed for improved manipulation of the end effectors, such that the surgeon could reproduce the movements of open surgery, and eliminated the use of straight instruments working through a fixed fulcrum. The use of high-definition three-dimensional (3D) video allowed surgeons to see tissue much as in an open operation, with added magnification. The ability of the da Vinci system to scale motion and filter physiologic tremor greatly enhanced the surgeon's precision during fine dissection or suturing. All of these advances brought about a third revolution in surgery and ushered in a new world of minimally invasive surgery.
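The fulcrum effect described above can be made concrete with a small geometric sketch. Under a small-angle approximation, a lateral hand motion applied outside the body pivots about the trocar and emerges at the instrument tip inverted and scaled by the lever ratio. The function below is an illustrative toy model (the function name and parameters are invented for this sketch), not a description of any particular instrument's kinematics.

```python
def tip_displacement(hand_dx: float, outside_len: float, inside_len: float) -> float:
    """Small-angle approximation of the fulcrum effect at a trocar.

    hand_dx      -- lateral motion of the surgeon's hand (mm)
    outside_len  -- instrument length from hand to trocar pivot (mm)
    inside_len   -- instrument length from pivot to instrument tip (mm)

    The pivot inverts the motion and scales it by inside_len / outside_len.
    """
    return -hand_dx * (inside_len / outside_len)

# With 20 cm of shaft inside the body and 10 cm outside, a 10 mm hand
# motion is inverted and doubled at the tip:
print(tip_displacement(10.0, 100.0, 200.0))  # -> -20.0
```

This inversion, and its dependence on insertion depth, is one reason laparoscopic suturing feels unintuitive; the teleoperated approach described in the following sections removes it by computing instrument motion in the surgeon's own visual frame.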

3.2 The Intuitive Surgical timeline

The founding of Intuitive was the convergence of two evolutionary trends. The first was laparoscopic surgery, which finally allowed minimally invasive procedures to be performed reliably and safely. However, the limitations of this technology became apparent by the 1990s and stifled further innovation and adoption, as laparoscopic surgery was particularly difficult in confined spaces such as the pelvis or thoracic cavity, where the restricted angulation of the instruments was problematic. The second trend was the development of telepresence surgery, whereby the surgeon could control master manipulators to effect actions at the end of an instrument. This technology was incubating at Stanford Research Institute (SRI; Menlo Park, CA, USA) in the 1990s, and was the culmination of over a decade of research in this area. Much of the research had been sponsored by the Defense Advanced Research Projects Agency, which originally envisioned that this technology would allow military surgeons to remotely operate on injured soldiers on the front lines of a battlefield. The vision of Intuitive's founders, namely to combine these two concepts (minimally invasive surgery and telepresence technology), allowed for the next revolution in surgery. Intuitive Surgical was founded in 1995 by John Freund, Frederick Moll, and Rob Younge. The company licensed telepresence surgical technology from SRI as well as key technologies from IBM and MIT and began developing what would ultimately become the da Vinci Surgical System. The distinguishing features of this system included the reproduction of open surgery in a minimally invasive platform using 3D video, 7 degrees of freedom from wristed instruments that fit through small trocars identical to laparoscopy, natural eye-hand alignment, smooth movements with tremor filtration, and motion scaling. The first da Vinci System was set up for clinical trials in Belgium in 1997.
Early marketing and sales of the system were focused outside of the United States, until the company received US Food and


Drug Administration (FDA) clearance for the da Vinci System in 2000. This first FDA clearance was for applications in general surgery; additional indications for thoracoscopic (chest) and radical prostatectomy procedures followed 1 year later. During this early period, Intuitive Surgical faced competition from a company called Computer Motion, Inc., which made the Zeus surgical system. The Zeus system was based on an earlier product called AESOP, a voice-controlled endoscope manipulator that was the first robotic device used to assist surgery to receive FDA approval. Zeus was launched in 1997, the same year that da Vinci was launched in Europe. Initially, the Zeus was preferred by general laparoscopic surgeons, while the da Vinci was adopted by open surgeons who did not perform laparoscopic surgery. Zeus was smaller and had a lower price point, but was less capable. At first, Computer Motion and Intuitive targeted different procedures, but by 1999 Computer Motion began to move toward the same applications as the da Vinci. Competition and the filing of several lawsuits between the two companies ultimately led to a merger between Intuitive Surgical and Computer Motion in 2003. Shortly afterward, the Zeus system was phased out in favor of the da Vinci System. Some of the key events of Intuitive Surgical's early history are indicated in a company timeline in Fig. 3.1. Further details on the founding of the company are retold in Ref. [2]. In its second decade, Intuitive Surgical developed and launched a series of products to extend and evolve the da Vinci platform. New clinical indications were added as the technology was refined and as surgeons of different specialties adopted da Vinci surgery. Six models of the system have been launched globally to date, as illustrated in Fig. 3.2.
The da Vinci system was the first robot-assisted technology that allowed surgeons to go one step further than laparoscopy, through the integration of teleoperation technology that placed a computerized control system between the surgeon and the surgical field. Section 3.3 describes the basic principles of operation and key design refinements in recent embodiments of the system. Subsequent sections elaborate on the platform subsystems and how these take advantage of the computerized control system that is central to the architecture of da Vinci systems.

3.3 Basic principles and design of the da Vinci Surgical System

The da Vinci system comprises three distinct subsystems: (1) the patient-side cart; (2) the surgeon console; and (3) the vision cart. This three-part design has remained consistent across every generation of da Vinci system manufactured to date, as shown in Fig. 3.2. In the operating room, the surgeon is seated at the surgeon console, whose high-resolution stereo viewer provides 3D vision. The console is typically positioned a few feet away from the operating table where the patient is located, and from there the surgeon controls the movement of the surgical instruments and the camera. The patient-side cart comprises four patient-side manipulators, or arms, each of which is docked to the trocars placed in the abdominal or thoracic wall of the patient's body. Unlike the laparoscopic approach, which connects the surgeon to the surgical field mechanically, the da Vinci operates using the principle of teleoperation. The da Vinci trocars have a remote center feature that keeps the fulcrum of the trocar centered in the body wall to minimize torque on, and excessive trauma to, the surrounding tissue. This feature is especially important in the chest, where the remote center keeps the trocar from rubbing on the surrounding ribs. The idea of "teleoperation" or "telemanipulation" has been described in science fiction writing since the 1940s [3] and it has since been deployed in space exploration, deep-sea exploration, hazardous material handling, ordnance


FIGURE 3.1 Timeline of selected company milestones.


FIGURE 3.2 Six models of the da Vinci Surgical System. © 2018 Intuitive Surgical, Inc. Used with permission.

disposal, and a variety of other applications where the operator must be located separately from where the end effectors are working. A brief history of the evolution of telerobotics in surgery is provided in Ref. [2]. In the context of the da Vinci system, this approach relies on an electronic connection between the surgeon's "master interface" and the surgical instruments that are driven by "slave manipulators." A computerized control system acts as an intermediary in this master-slave architecture, and is a key component of the system. The master-slave architecture of the da Vinci system is illustrated in Fig. 3.3. The patient-side manipulators, or arms, are mounted to the patient-side cart via a setup structure, which is discussed in detail later. Each manipulator may support a stereo endoscopic camera or a surgical instrument, such as a grasper, scissors, or a needle driver. The patient manipulators are covered with a sterile drape, and all of the trocars, camera, and instruments are sterile. The surgeon sits at the console in a nonsterile environment. The computerized control system extends the surgeon's "presence" (their sensory awareness and control) into the surgical field by transmitting video images from the endoscopic camera to the stereo viewer of the console, and by transmitting the surgeon's hand motions, measured by the master interfaces, to the slave manipulators. Since this is an electronic link, the software of the control system can modify the signals, so as to filter out the surgeon's normal physiological tremor, or to scale down their motions for enhanced precision (Fig. 3.4). The control system can incorporate additional imaging technology that may augment the surgeon's view of the anatomy. In the future, this could provide navigation and guidance information, or it may help the surgeon to better anticipate critical task steps. This ability to enhance the surgeon's capabilities is a key advantage of this type of system and


FIGURE 3.4 The surgeon console blends visualization and instrument control into an intuitive and immersive user interface. © 2018 Intuitive Surgical, Inc. Used with permission.

is an aspect that is explored later in this chapter. Currently, the surgeon may stream a second video source or imaging feed into the console's TilePro feature so that it may be viewed adjacent to the endoscopic view of the surgical field. Future developments may allow different types of imaging to be fully integrated into the endoscopic video view. The design of the da Vinci system rested on four original product pillars. First and foremost, the system had to be reliable and failsafe in order to be feasible as a surgical device used on patients; second, the system was to provide the user with intuitive control of the instruments; third, the instrument tips were to have 6-degree-of-freedom dexterity as well as a functional gripper. The fourth pillar was to provide the surgeon with compelling 3D visualization of the anatomy. By transposing the surgeon's eyes and hands into the patient in a reliable and effective way, these product pillars supported the ultimate goal: to provide the surgeon with several key benefits of open surgery that had been lost in the laparoscopic approach, while maintaining minimal invasiveness. Subsequent generations of the da Vinci system have extended these original product pillars to improve ease of use for the patient care team. Since the system does not function autonomously, a coordinated team is needed in order to perform surgery, including a bedside assistant, a surgical technologist, and a circulating nurse. The roles on this team are no different from those in traditional surgery, but there are new interactions and workflows.
Several members of this team will interact with the da Vinci system and its components during the multiple phases of a surgery, which include: preparing the system for use, sterilely draping the robotic arms, roll-up (positioning the patient cart next to the patient bed), deployment (adjusting the angles of the robotic arms to ensure clearance between the arms and the patient), docking (securing the connection between the robotic arms and the patient), removing and inserting instruments during the


FIGURE 3.3 At the heart of the da Vinci system is a master-slave teleoperation architecture, with the surgeon console containing two master interfaces that are used to control slave manipulators that are part of the patient-side cart. © 2018 Intuitive Surgical, Inc. Used with permission.


FIGURE 3.5 Comparison of the da Vinci Si and Xi setup structures. © 2018 Intuitive Surgical, Inc. Used with permission.

operation, undocking, undraping and cleaning, stowing the robotic arms to minimize the space required for storage, and reprocessing of instruments and accessories. Apart from handling these specific tasks, the number of individuals in the operating room is not necessarily greater than that required for traditional laparoscopic surgery. Distinct from prior generations, the latest da Vinci Xi system uses a gantry system, or boom, to position the instrument manipulators directly over the operating table (Fig. 3.5). This gantry makes the position of the cart base largely independent of the orientation of the surgical workspace, thereby allowing the operating room staff more ease and flexibility when positioning the base of the cart at the bedside, as they now have fine control of the instrument cluster position and orientation overhead. This is in contrast with the older da Vinci Si patient-side cart, where the reachable workspace is highly dependent on the orientation of the cart, since the instrument manipulators are directly connected to the cart base. With the Si system, the team is required to anticipate a good location of the cart with respect to the patient, based on the requirements of the surgery, so as to avoid possible interruptions of the surgery for repositioning. This is just one example of a design solution that has been motivated by the need for ease of use and workflow efficiency. The da Vinci Xi system is the current flagship platform of Intuitive. It enables optimized, focused-quadrant surgery for procedures such as colorectal resection, pulmonary lobectomy, ventral hernia repair, and partial nephrectomy, among others. It features more flexible port placement and state-of-the-art 3D digital optics with a fully integrated endoscope, and it increases operational efficiency by means of setup technology that uses voice and laser guidance and a sterile drape design that simplifies surgery prep.
In 2017, Intuitive Surgical released the da Vinci X Surgical System to provide a lower-cost solution for global customers who want a choice in price points, while offering access to some of the key innovations developed for the da Vinci Xi system. With the da Vinci X, the instrument manipulators are similar to those of the Xi but are fixed to the patient cart as on the Si, without a gantry. The da Vinci X system uses the same vision cart and surgeon console found on the da Vinci Xi system, thus giving customers the option of adding advanced capabilities, and providing an upgrade pathway, should they choose to do so as their practice and needs grow. While the mechatronic arms are the most iconic part of the da Vinci System, building and running a robot-assisted surgery program requires an ecosystem of products and services. This ecosystem starts with a range of robotic systems that address different clinical needs and price points, as well as a family of dozens of different instruments and accessories. It is important that an integrated ecosystem allows for a seamless user experience for both the surgeon and the hospital, as opposed to having different manufacturers responsible for different components of an operation. These include advanced instruments such as staplers and vessel sealers, as well as endoscopic stereo-imaging systems that include Firefly near-infrared imaging technology. The majority of da Vinci systems are connected to a network infrastructure that allows Intuitive Surgical to perform predictive maintenance, minimize downtime, and share analytic insights with customers. A global team of field service specialists provides rapid round-the-clock support for customer systems. Experienced surgeons teach dozens of different advanced courses to their peers in the use of da Vinci technology.
Well over a thousand da Vinci Skills Simulators are in use, along with hundreds of real-time training consoles that support intraoperative, collaborative learning (dual console configuration). We discuss several of these aspects of the ecosystem in subsequent sections (Fig. 3.6).
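As a rough illustration of the motion scaling and tremor filtering performed by the computerized control system described earlier in this section, the toy loop below scales each increment of master motion and passes it through a first-order low-pass filter. The class name, scale factor, and filter constant are invented for this sketch and are not da Vinci specifications.

```python
class MasterSlaveFilter:
    """Toy sketch of motion scaling and tremor filtering in a
    teleoperation loop.

    `scale` < 1 shrinks hand motion for precision; the first-order
    low-pass filter (an exponential moving average) attenuates
    high-frequency tremor.  All parameter values are illustrative.
    """

    def __init__(self, scale: float = 0.33, alpha: float = 0.2):
        self.scale = scale    # motion scaling factor applied to hand motion
        self.alpha = alpha    # smoothing coefficient in (0, 1]; lower = smoother
        self._state = 0.0     # current filtered slave command

    def step(self, hand_delta: float) -> float:
        """Map one increment of master (hand) motion to slave motion."""
        target = hand_delta * self.scale
        self._state += self.alpha * (target - self._state)
        return self._state
```

Feeding a noisy hand trajectory through `step` yields a smoothed, scaled slave trajectory; a real system does this for all degrees of freedom with far more sophisticated filtering and safety checking.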


FIGURE 3.6 The Intuitive Surgical ecosystem. © 2018 Intuitive Surgical, Inc. Used with permission.

3.4 Visualization

The da Vinci surgical system was among the first commercial products to use stereoscopic endoscopes to guide soft-tissue surgery. Until recently, such endoscopes have captured and displayed white-light images that show only the visible surfaces of organs. This section introduces several advanced imaging technologies that can provide information that may not be directly visible in white-light images. For more detailed information see Ref. [4].

3.4.1 Fluorescence imaging

Near-infrared fluorescence imaging (Firefly) is a recent innovation that augments visualization during da Vinci surgical procedures. The components of this system include a fluorescent agent, a corresponding excitation light source, and a detector as shown in Fig. 3.7. Regulatory clearance is required for agents with demonstrated clinical utility and safety. The first agent to be widely used intraoperatively during robot-assisted surgical procedures is indocyanine green (ICG). Following injection into the bloodstream, ICG rapidly binds to plasma proteins in the blood. The near-infrared signal that is detected by the imaging sensor in the endoscope is used to highlight the white-light image with false color that provides the surgeon with an augmented view of the tissue, thereby giving the surgeon the ability to see vasculature and tissue perfusion (Fig. 3.8). This property has been valuable for assessing bowel or stomach viability during reconstruction, so that the connection or anastomosis is created at a well-perfused level [5]. Another application of ICG is for defining segmental planes during lung resection. After the arteries feeding the segment are divided, ICG is injected into the circulation, and the segment of lung to be resected will remain dark due to a perfusion deficit. The surgeon can then divide the lung along this defined border. After binding to blood proteins, ICG is metabolized by the liver and subsequently secreted into bile. This pathway for ICG excretion accounts for the rapid decrease in apparent brightness of ICG fluorescence in the vasculature after administration in patients with healthy liver function. However, the accumulation of ICG in bile enables surgeons to use ICG imaging to visualize the bile duct structures as bile is excreted from the liver [6]. In biliary surgery, such as cholecystectomy (removal of the gallbladder), inadvertent injury to the common bile duct is a major concern. 
The use of Firefly technology has helped surgeons distinguish the anatomy in difficult, inflamed tissue. In general, the biliary structures become visible about 45 minutes following intravenous administration of ICG, so the timing of injection is important depending on whether the purpose is to examine vascular perfusion or biliary excretion. The excitation and emission wavelengths of ICG are in the near-infrared region of the light spectrum, and as adipose and fascia tissue are somewhat


FIGURE 3.7 Overview of the da Vinci Firefly imaging feature. A laser light source is used to excite the fluorophore (at wavelengths around 800 nm) and the emitted light is captured by the image sensors on the endoscope. © 2018 Intuitive Surgical, Inc. Used with permission.

FIGURE 3.8 Vessel identification in Firefly fluorescence imaging mode (view of the renal hilum) during da Vinci partial nephrectomy. A white-light view is shown on the left and the fluorescent view on the right. © 2018 Intuitive Surgical, Inc. Used with permission.

transparent at these wavelengths, it is possible to see the ICG fluorescence through a modest thickness of intervening tissue. Researchers continue to explore other uses of ICG and its application to conditions suited to robot-assisted surgery. An example of this is ongoing research to explore the use of ICG to image lymphatic system drainage and to localize lymph nodes in order to reduce the invasiveness of node harvesting during cancer surgery [7]. In addition, many academic groups and small companies have brought forward novel imaging agents that fluoresce in the near-infrared wavelength region. A host of such agents are being developed to image cancer margins using a variety of targeting techniques such as novel cell receptors [8], antigens [9], pH differences [10], the enhanced permeability and retention effect [11], and even scorpion venom, whose mechanism of action is currently unknown [12]. Others are being developed to image sensitive structures that surgeons wish to avoid during resections, such as nerves and ureters. For a review of fluorescent imaging agents currently under development, see Ref. [13].
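The false-color highlighting described in this section can be sketched as a simple image-compositing step: pixels whose near-infrared signal exceeds a threshold are tinted in proportion to signal strength. This is an illustrative blending scheme only (the function name, threshold, and green tint are assumptions for the sketch), not Intuitive's display algorithm.

```python
import numpy as np

def firefly_overlay(white_light: np.ndarray, nir: np.ndarray,
                    threshold: float = 0.1) -> np.ndarray:
    """Toy false-color compositing of a near-infrared fluorescence
    channel onto a white-light image.

    white_light -- (H, W, 3) float RGB image in [0, 1]
    nir         -- (H, W) float fluorescence intensity map in [0, 1]

    Pixels above `threshold` are blended toward a pure green tint in
    proportion to signal strength; all other pixels are left unchanged.
    """
    out = white_light.copy()
    mask = nir > threshold
    strength = np.clip(nir, 0.0, 1.0)[..., None]
    green = np.zeros_like(white_light)
    green[..., 1] = 1.0                     # pure green tint
    blended = (1.0 - strength) * white_light + strength * green
    out[mask] = blended[mask]
    return out
```

A real system performs this fusion in the video pipeline at full frame rate, with careful calibration of the fluorescence channel against ambient and excitation light.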

3.4.2 Tomographic imaging

Tomographic imaging takes cross-sectional images of an object using penetrating waves such as X-rays (e.g., computed tomography, CT), gamma-rays (e.g., single-photon emission CT), radio-frequency waves (e.g., magnetic resonance imaging), and mechanical waves (e.g., ultrasound). These tomographic sectional images can provide information from deep beneath the tissue surface, as opposed to the reflective images captured by endoscopes or the naked eye. Viewing a series of tomographic sections allows one to gain 3D information about the anatomy of interest. Robot-assisted surgery for soft tissue has mostly focused on reflective imaging that allows the user to see only the visible surface of organs. Tomographic imaging is routinely used by surgeons prior to the operation to visualize deep


FIGURE 3.9 (Left) Overlay of renal arteries from a rendered CT angiogram over grayscale endoscopic images; (right) overlay of ultrasound images on endoscopic images of the liver. CT, Computed tomography. © 2018 Intuitive Surgical, Inc. Used with permission.

tissue structures such as solid tumors and vasculature, and to help plan the conduct of the operation. The ability to use intraoperative tomographic imaging that is integrated into the robotic platform may increase the efficiency and accuracy of the surgeon [14,15]. It is also possible that such augmented imaging would reduce complications by avoiding critical structures and reduce the risk of positive cancer margins through improved awareness of the tumor borders and any involved lymph nodes [14,15]. Tomographic images need to be acquired preoperatively and aligned (registered) to intraoperative patient coordinates; this is particularly challenging within soft-tissue structures, due to complex tissue deformations [14]. Intraoperative imaging modalities such as ultrasound are often used to acquire live images during surgery. The da Vinci system currently supports feeds from auxiliary video streams into the surgical display (TilePro), thus allowing third-party image and video sources to be viewed adjacent to live endoscopic video. Although the TilePro feature can be used to display tomographic images intraoperatively, the lack of automated alignment between the endoscopic view and the tomographic view makes it challenging to use in real time during surgery. Numerous research efforts have attempted to solve the hand-eye coordination problem in order to make more effective use of tomographic imaging modalities with the da Vinci system [16]. Fig. 3.9 shows examples of augmented reality images that combine endoscopic and tomographic images.
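The registration problem noted above, aligning preoperative tomographic coordinates to intraoperative patient coordinates, can be illustrated in its simplest rigid form with the classic least-squares (Kabsch) alignment of matched fiducial points. The sketch below is a minimal rigid model with invented names; clinical soft-tissue registration must additionally handle deformation, as the text notes.

```python
import numpy as np

def rigid_register(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid alignment (Kabsch) of matched 3-D points.

    Given fiducial points `src` (N x 3) in preoperative (e.g., CT)
    coordinates and the same points `dst` (N x 3) in intraoperative
    coordinates, return rotation R and translation t such that
    dst ~= src @ R.T + t.
    """
    src_c = src - src.mean(axis=0)          # center both point clouds
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Correct for a possible reflection so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

This closed-form solution assumes known point correspondences and rigid anatomy; real systems combine it with fiducial detection, surface matching, or deformable models to cope with soft-tissue motion.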

3.5 Tissue interaction

As part of the Intuitive ecosystem, a variety of instruments has been developed for the da Vinci system in order to facilitate tissue manipulation in various types of surgical procedures. Most of these instruments have an articulated wrist mechanism that allows for dexterous and intuitive tissue interaction, following the surgeon's wrist articulation as motion is commanded from the master interfaces of the surgeon console. EndoWrist is the trade name for these articulated instruments, which include various types of scissors, forceps, needle drivers, retractors, monopolar and bipolar energy instruments, stabilizers, staplers, and vessel sealers. Only Intuitive instruments are compatible with the da Vinci system, in keeping with the goal of maintaining a uniform user experience in an integrated ecosystem. Due to the complexity and stress of reprocessing between operations, the instruments have a defined number of lives before they need to be replaced; some instruments, such as the vessel sealer, are single use only. In this section, we review two advanced instruments: the robotic stapler and the vessel sealer.

3.5.1 Stapler

The EndoWrist Stapler is an articulated surgical stapling device for the da Vinci Si, Xi, and X systems and is used for resection, transection, or creation of anastomoses in general, thoracic, gynecologic, and urologic surgery. These surgical staplers utilize a disposable cartridge that places multiple staggered rows of staples and then transects the tissue between the staple rows with a knife blade. The stapler instrument can be reloaded multiple times with stapler reloads during a procedure.

Handbook of Robotic and Image-Guided Surgery

FIGURE 3.10 The da Vinci Xi EndoWrist Stapler. Reproduced with permission from © 2018 Intuitive Surgical, Inc.

Distinct from traditional hand-held endoscopic staplers, the robotic EndoWrist Stapler allows the surgeon to control the positioning and firing of the stapler on the intended structure, rather than relying on a bedside assistant to do this manually. The EndoWrist Stapler has two opposing jaws and 6 degrees of freedom: roll, pitch, yaw, grip, "clamp," and "fire" (Fig. 3.10). Roll, pitch, yaw, and grip are used to position the upper and lower jaws of the instrument relative to the target tissue and are controlled in the same manner as other EndoWrist instruments, via the master manipulators on the surgeon console. "Clamp" is the same motion as grip, but uses a different mechanism to provide significantly higher grip force. The EndoWrist staplers incorporate software that can detect excessive tissue thickness between the jaws, ensuring that the staple line is fired only when the tissue is the appropriate thickness for the selected load. This technology is called SmartClamp on the older generation of stapler, and SmartFire on the newer generation. "Fire" describes the combined action of staple implantation and transection (translating blade) of the target tissue. Both functions are activated and controlled by the foot pedals at the surgeon console. One jaw of the instrument houses the staple reload, and the other jaw contains features that "form" the staples so that they remain implanted in the tissue. The EndoWrist Stapler comes in three lengths: 30, 45, and 60 mm. The 30 and 45 mm versions are available with a curved-tip anvil, which eases passing the anvil of the stapler around fragile vessels in small spaces (Fig. 3.10). The EndoWrist Stapler cartridges come in different colors that signify different staple heights for different tissue thicknesses: gray (2.0 mm), white (2.5 mm), blue (3.5 mm), green (4.3 mm), and black (4.6 mm).
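The color-to-staple-height mapping lends itself to a simple lookup, and the SmartClamp-style thickness check described above can be sketched alongside it. The function name, thresholds, and selection rule below are illustrative assumptions only, not Intuitive's actual firmware logic:

```python
# Cartridge colors and staple heights as listed in the text (mm).
CARTRIDGE_STAPLE_HEIGHT_MM = {
    "gray": 2.0,
    "white": 2.5,
    "blue": 3.5,
    "green": 4.3,
    "black": 4.6,
}

def thickest_compatible_cartridge(measured_thickness_mm: float) -> str:
    """Return the smallest cartridge whose staple height still accommodates
    the clamped tissue (an illustrative rule of thumb, not a clinical rule)."""
    for color, height in sorted(CARTRIDGE_STAPLE_HEIGHT_MM.items(),
                                key=lambda kv: kv[1]):
        if measured_thickness_mm <= height:
            return color
    # Mirrors the SmartClamp idea: refuse to fire on excessive thickness
    raise ValueError("tissue too thick for any available reload")
```

The real system inverts this check: the surgeon selects a load, and the software verifies the clamped tissue suits it before permitting the fire sequence.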

3.5.2 Vessel sealer

The da Vinci Vessel Sealer Extend is a single-use advanced bipolar cautery instrument with an EndoWrist that can seal and cut vessels up to 7 mm in diameter (Fig. 3.11). By applying precise pressure and controlled energy delivery between the jaws, soft-tissue proteins are denatured within the range of 60°C–90°C, causing the inside walls of the vessel to melt and fuse together. The energy delivery is controlled based on tissue impedance measurements during sealing, to maintain the temperature within a range that results in sealing rather than charring or burning. Once sealed, the vessel can be transected by firing a mechanical knife that moves along the length of the instrument jaws, in a slot through the center of the electrodes. The tip of the Vessel Sealer Extend has a blunt end to allow for atraumatic dissection of tissue planes. The flat jaws can also be used for holding tissue and for driving a needle during simple suturing. The jaws and wrist of the instrument are controlled from the surgeon console, like other EndoWrist instruments; the energy application and cut function are controlled using the foot pedals on the console.
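The impedance-guided energy control described above can be illustrated with a toy control loop: as tissue desiccates its electrical impedance rises, so delivered power is tapered, and the cycle ends when the impedance plateaus. All constants, callback names, and the termination rule below are invented for illustration; this is not Intuitive's control law.

```python
# Illustrative impedance-guided seal cycle. read_impedance_ohms() and
# set_power_watts() stand in for generator hardware interfaces.
def seal_cycle(read_impedance_ohms, set_power_watts,
               z_start=50.0, z_done=350.0, p_max=40.0, p_min=5.0):
    """Run one simplified seal cycle; return True when the rising
    impedance suggests the vessel walls have fused."""
    for _ in range(1000):               # safety cap on cycle length
        z = read_impedance_ohms()
        if z >= z_done:                 # impedance plateau: seal complete
            set_power_watts(0.0)
            return True
        # Linear taper: full power at z_start, minimum power near z_done,
        # keeping the tissue in the sealing regime rather than charring.
        frac = max(0.0, min(1.0, (z_done - z) / (z_done - z_start)))
        set_power_watts(p_min + frac * (p_max - p_min))
    set_power_watts(0.0)
    return False                        # timed out without a plateau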

3.5.3 Integrated table motion

During surgery, it is sometimes necessary to adjust the patient's position to optimize exposure. With the da Vinci system docked to the patient, this would normally require all of the instruments to be removed, the system undocked, and the OR table adjusted.

The da Vinci Surgical System Chapter | 3

FIGURE 3.12 Integrated operating table motion with the da Vinci Xi Surgical System. Reproduced with permission from © 2018 Intuitive Surgical, Inc.

The integrated table motion feature of the da Vinci Xi system allows the system to communicate with TRUMPF Medical's advanced operating table, the TruSystem 7000dV (TRUMPF Medizin Systeme, Saalfeld, Germany) (Fig. 3.12). With this feature, surgical teams can reposition the operating table during a procedure while the robotic arms remain docked and the surgeon maintains control of the instruments. This is accomplished through computer-controlled, coordinated motion between the da Vinci system and the table. The table and the surgical system are synchronized, so that the surgical system adjusts the gantry and instrument arms to maintain the pose of the instruments and endoscope relative to the patient's anatomy while the table moves up and down, slides side to side, or tilts the upper body up or down. The integrated table motion feature improves efficiency in the operating room while offering an additional way of managing surgical access, exposure, and reach, essentially by employing gravity as a fourth "invisible instrument." This is typically useful in surgery on the colon and rectum.

3. The da Vinci Surgical System

FIGURE 3.11 da Vinci Vessel Sealer Extend instrument. Reproduced with permission from © 2018 Intuitive Surgical, Inc.


This feature can be used to:

- Optimally position the table so that gravity exposes anatomy during multiquadrant procedures;
- Maximize reach and access to target anatomy, thus enabling surgeons to interact with tissue at an ideal working angle; and
- Reposition the table during the procedure to enhance the anesthesiologist's care of the patient.

Several key functions make this feature safe and effective:

- Port following: the arms and gantry seamlessly follow the ports to leverage the da Vinci Xi system's full range of motion.
- Center of motion: combines table Trendelenburg and Slide to move the patient around a virtual pivot point; maximizes the range of motion.
- Remote center monitoring: redundant software checks ensure cannulas and arms move in unison for added safety.
- Intuitive instrument control: the surgeon maintains control from the surgeon console while the table is moving.
- Table tracking: actively rotates the da Vinci Xi camera and instruments to maintain a consistent orientation to anatomy during table movement.

3.6 Surgical access

The da Vinci system provides multiple ports of surgical access through small incisions in the patient's body. This reduces the morbidity of the procedure compared to open surgery, where large incisions are required to provide access [17]. However, there is increasing interest in further reducing invasiveness by using only a single incision or by accessing the body via natural orifices (NOTES: natural orifice transluminal endoscopic surgery). NOTES is considered the ultimate example of minimally invasive surgery, since the surgical tools are introduced through the mouth, anus, or vagina and no skin incision is made. As an example, Fig. 3.13 shows the incisions required for open, multiport, and single-port laparoscopic cholecystectomy (gallbladder removal) procedures.

3.6.1 da Vinci SP system

The latest iteration of the da Vinci system is a departure from the previous platforms and is called the da Vinci SP, for single port. With this system, three multijoint articulating instruments and an articulating camera pass through a single 2.5 cm diameter cannula. This design allows all instruments to enter along a single axis, as opposed to the older Single-Site technology on the multiport systems (Fig. 3.14). Distinct from the EndoWrist design of the other da Vinci platforms, SP instruments have two joints to allow the instruments to expand once they have entered the body cavity. In addition, the articulating 3D high-definition camera is a first for the da Vinci platform and allows the camera to be positioned optimally for greater depth of field. These features make the da Vinci SP system particularly suitable for procedures that require natural orifice access, such as transoral, transrectal, and transvaginal procedures, or for operating in small spaces. As of August 2018, it is approved in the United States for transabdominal urological procedures.

FIGURE 3.13 (Left) Open surgery with a 5-8 in. incision, (middle) multiport laparoscopic surgery with four small incisions, and (right) single-site or single-port surgery with only one incision.


FIGURE 3.14 (A) da Vinci Si Single-Site system: (left) patient-side cart setup with three arms and curved cannulas used with two flexible instruments; (right) illustration of master manipulator associations with Single-Site instruments. (B) The da Vinci SP system, with single-port cannula, three articulated instruments, and camera shown. Reproduced with permission from © 2018 Intuitive Surgical, Inc.

The use of the da Vinci SP system is very similar to that of the multiport da Vinci Xi system, but there are some differences. The surgeon console appears very similar to the preexisting console, with the addition of one foot pedal to allow more versatile camera control. The various uses of the camera and the ability to articulate it are the biggest differences between the SP and the Xi. The use of the instrumentation is also similar, but again there are some differences, particularly with regard to using the second joint, which is distinct from the Xi. There is dedicated instrumentation for the SP, much of which is similar to that for the Xi. Given that the da Vinci SP system only received FDA 510(k) clearance in the United States for limited applications in 2018, it will be exciting to follow the applications surgeons develop in the near future with this totally different approach to surgical robotics.

3.7 Technology training

Training and continuing education for the surgeon and surgical staff remain crucial for successful use of the da Vinci system. The computer-assisted nature of the da Vinci system creates unique opportunities to deliver enhanced training experiences. Since the central computing system mediates all of the user commands and system outputs, one application of this information stream [18] is to measure and evaluate how the user is operating the system during training sessions [19-21]. It is possible that such approaches could eventually be used for intraoperative feedback during surgical procedures to augment the decision-making capabilities of the surgeon.

Another application of the information stream is to replace the endoscopic video feed with a virtual reality feed to the surgeon console during training; indeed, the dynamics and controls of the entire patient-side cart can be computationally simulated and fed back to the surgeon console. Intuitive Surgical's da Vinci Skills Simulator (Fig. 3.15) does just that, enabling the surgeon to practice using the console without the accessories and support staff. The opportunities for virtual reality simulation in minimally invasive surgical training in general, and da Vinci training in particular [22-25], have been extensively documented, and include quantitative metrics, unlimited diversity of training and surgical scenarios, and the future potential for patient-specific procedure rehearsal [26-28].
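As a hedged illustration of what such evaluation might compute from the system's kinematic stream, the sketch below derives two motion-efficiency metrics that are standard in the skill-assessment literature from a recorded instrument-tip trajectory. The data format and function names are assumptions for illustration, not the da Vinci research interface:

```python
# Simple objective skill metrics over an instrument-tip trajectory,
# given as a list of (x, y, z) samples in meters.
import math

def path_length(trajectory):
    """Total distance traveled by the instrument tip."""
    return sum(math.dist(a, b) for a, b in zip(trajectory, trajectory[1:]))

def economy_of_motion(trajectory):
    """Ratio of straight-line displacement to actual path length;
    values closer to 1.0 suggest more efficient motion."""
    total = path_length(trajectory)
    if total == 0.0:
        return 1.0
    return math.dist(trajectory[0], trajectory[-1]) / total
```

Published approaches go much further, segmenting gestures and modeling motion statistically [19-21]; these scalar metrics only convey the flavor of the idea.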


FIGURE 3.15 The da Vinci Skills Simulator, shown mounted on the surgeon console (left). Examples of digital training content available on the Skills Simulator (right). The upper right image demonstrates a needle targeting exercise (courtesy of Mimic Technologies, Inc., Seattle, WA, USA). The lower right image demonstrates a simulated hysterectomy procedure (courtesy of 3D Systems, Inc., Rock Hill, SC, USA). Reproduced with permission from © 2018 Intuitive Surgical, Inc.

The newest generation of robotic surgery simulation is known as SimNow. A key new feature of this platform is network connectivity, so that the progress of all the surgeons using the simulator can be tracked for utilization and skills progression. This allows training programs to gain further insight into their residents' and fellows' training on the da Vinci platform, and to customize the curriculum based on the individual needs of the learning surgeon. Another key feature of SimNow is the integration of both skills simulation and guided procedure simulation in a subscription-based platform, such that new content is continuously pushed to the simulators remotely as it becomes available.

da Vinci systems can be accessed by Intuitive Surgical via a secure network connection to facilitate preventative maintenance and customer service technical support. Network connectivity can also be used to support remote mentoring of surgeons, whether mentees or newly trained robotic surgeons. The remote mentoring technology supports two-way audio communication and a 2D endoscopic view for the mentoring surgeon [29,30]. The platform also allows the mentoring surgeon to "draw" or telestrate on the mentee surgeon's console screen. This technology allows a surgeon who has completed training and in-person proctoring to get additional advice or support from a mentor as he or she transitions into fully independent surgical practice on the da Vinci system.

It is important that the development of training tools be driven by educational needs, rather than technological novelty. As robot-assisted surgery gains broader adoption, the types of learners and their needs grow as well. Thoughtful attention to different learners' needs when developing training content and training technologies will ensure that impactful and efficient learning is available to all da Vinci users.

3.8 Clinical adoption

Intuitive Surgical's current product lines focus on five surgical specialties: gynecologic surgery, urologic surgery, general surgery, cardiothoracic surgery, and head and neck surgery. Specific clearances vary across regulatory bodies worldwide and across the various models of da Vinci systems. In the United States, the da Vinci system is classified by the FDA as a Class II device, and as such, clearances are made for a specific set of indications. At the time of writing, these clearances include indications for urologic surgical procedures, general laparoscopic surgical procedures, gynecologic laparoscopic surgical procedures, transoral otolaryngology surgical procedures (for the da Vinci Si system only, restricted to benign tumors and malignant tumors classified as T1 and T2 and to benign base-of-tongue resection procedures), general thoracoscopic surgical procedures, and thoracoscopically assisted cardiotomy procedures. The system can be employed with adjunctive mediastinotomy to perform coronary anastomosis during cardiac revascularization. The system is indicated for both adult and pediatric use, except for transoral otolaryngology surgical procedures.

3.8.1 Procedure trends

In 2017, total US procedure volume was approximately 875,000, of which approximately a third was in urology, a third in gynecology, and a third in general surgery. The remaining procedures were in other specialties such as thoracic surgery or head and neck surgery. Most procedures outside of the United States were in urology. The fastest growing segment of robotic surgery currently is general surgery, with enthusiastic adoption by surgeons for hernia repair and colorectal procedures. Procedure trends between 2000 and 2017 are illustrated in Fig. 3.16.

FIGURE 3.16 Worldwide da Vinci procedure growth from 2000 through 2017.

3.8.2 Publications

As public agencies seek to understand the impact of new technologies on healthcare outcomes and costs, peer-reviewed clinical publications and evidence-based medicine have become increasingly important. There are currently over 14,000 PubMed-indexed publications across multiple surgical specialties related to the clinical uses of the da Vinci system, the vast majority of which were researched and written independently of Intuitive Surgical. Fig. 3.17 shows publications by surgical specialty from 1998 to 2015. Contrary to criticism that there is a lack of clinical evidence for the efficacy of robotic surgery, the peer-reviewed literature is both deep and compelling across many clinical applications of robotics.

FIGURE 3.17 Publications on da Vinci technology from 1998 to 2015 by surgical specialty.

3.9 Conclusion and future opportunities

At Intuitive Surgical, we are keenly focused on continuing to enhance the value of our product and service ecosystem for our customers by striving to make surgery more effective, less invasive, and easier on surgeons, patients, and their families. This chapter has described the da Vinci system as a platform technology that we can leverage to further enhance surgeon perception, tissue manipulation, and minimally invasive access to diseased tissue.

In the area of visualization, two major trends of research and development may shape the future of image guidance in robot-assisted surgery: (1) improvements in visualization techniques, mainly driven by the gaming industry (in a recent survey of robotic urology surgeons, 87% felt that there is a role for augmented reality as a navigational tool in robot-assisted surgery [31]); and (2) advances in molecular imaging, which are likely to find more use in robot-assisted surgery with the advent of molecular markers specific to various tissue types and pathologies [32].

In the area of tissue interaction, mechanical manipulation of soft tissue using instruments such as scissors, forceps, and scalpels has been practiced for centuries and is still in use in operating rooms. The first use of electrosurgery, at the Brigham and Women's Hospital (Boston, Massachusetts), dates back to 1926. Development of advanced energy instruments that use electric currents, ultrasonic vibrations, or lasers for cutting, tissue fusion, and welding has been a major trend in the past few decades, and we anticipate further advancements in energy instruments that are more efficient and precise. Robotic platforms will enable greater dexterity and control of these instruments. Haptic feedback, or force-sensing instrumentation, is another area of focus for the future, and it is anticipated that this will accelerate learning among inexperienced surgeons.
In terms of minimally invasive access to tissue, five major areas of future development in surgical access can be identified: (1) further miniaturization of surgical instruments; (2) increased use of endoluminal or percutaneous access driven by advances in snake robot-assisted technologies; (3) swallowable robots [33]; (4) targeted therapy with magnetic guidance [34]; and (5) noninvasive access via focal therapy [35].

Computer- and robot-assisted systems transform our ability to measure, compare, assess, and inform the performance of surgery. This has never before been possible at the scale now enabled by digital surgery platforms such as da Vinci. This creates tremendous opportunities for applying data science and analytics to model and objectively assess the practice of surgery, so that we may further enhance clinical outcomes, safety, and operational efficiencies, while reducing complexity and variability by shortening surgeon and team learning curves. Surgical task automation is currently the subject of scientific research and is likely to be a topic of debate as regulatory challenges are realized. At Intuitive, the aim of using artificial intelligence or machine learning is to augment the surgeon's capabilities, not replace them. This may include the gradual introduction of guidance and warning features that will require the system to have some knowledge of the surgical task, similar to early aspects of autonomy in automobiles, where the first steps were recognition of road markings, obstacles, cars, and pedestrians.

The future is bright for robot-assisted surgery, and the entry of several robotic surgery companies legitimizes the vision first undertaken by Intuitive over two decades ago. Multidisciplinary cross-fertilization has clearly been a key component of the development of our field to date and will become increasingly important as new capabilities are realized and new applications are explored.
This collaboration between clinical scientists, surgeons, academic researchers, industry engineers, regulatory groups, and many others will help to transition novel ideas into technologies that will ultimately benefit patients and their families in remarkable new ways.

References

[1] Litynski GS. Erich Muhe and the rejection of laparoscopic cholecystectomy (1985): a surgeon ahead of his time. JSLS 1998;2(4):341–6.
[2] Rosen J, Hannaford B, et al. Surgical robotics: systems applications and visions. Springer Science & Business Media; 2011.
[3] Heinlein RA. Waldo & Magic, Inc. Baen Publishing Enterprises; 2014.
[4] Azizian M, McDowall I, et al. Visualization in robotic surgery. The SAGES Atlas in Robotic Surgery; 2016.
[5] Hellan M, Spinoglio G, et al. The influence of fluorescence imaging on the location of bowel transection during robotic left-sided colorectal surgery. Surg Endosc 2014;28(5):1695–702.
[6] Spinoglio G, Priora F, et al. Real-time near-infrared (NIR) fluorescent cholangiography in single-site robotic cholecystectomy (SSRC): a single-institutional prospective study. Surg Endosc 2013;27(6):2156–62.
[7] Holloway RW, Bravo RA, Rakowski JA, James JA, Jeppson CN, Ingersoll SB, et al. Detection of sentinel lymph nodes in patients with endometrial cancer undergoing robotic-assisted staging: a comparison of colorimetric and fluorescence imaging. Gynecol Oncol 2012;126(1):25–9.
[8] van Dam GM, Themelis G, Crane LM, Harlaar NJ, Pleijhuis RG, Kelder W, et al. Intraoperative tumor-specific fluorescence imaging in ovarian cancer by folate receptor-α targeting: first in-human results. Nat Med 2011;17(10):1315–19.
[9] Bao K, Lee JH, Kang H, Park GK, El Fakhri G, Choi HS. PSMA-targeted contrast agents for intraoperative imaging of prostate cancer. Chem Commun (Camb) 2017;53(10):1611–14.
[10] Zhou K, Liu H, Zhang S, Huang X, Wang Y, Huang G, et al. Multicolored pH-tunable and activatable fluorescence nanoplatform response to physiologic pH stimuli. J Am Chem Soc 2012;134(18):7803–11.
[11] Jiang JX, Keating JJ, Jesus EM, Judy RP, Madajewski B, Venegas O, et al. Optimization of the enhanced permeability and retention effect for near-infrared imaging of solid tumors with indocyanine green. Am J Nucl Med Mol Imaging 2015;5(4):390–400.
[12] Veiseh M, Gabikian P, Bahrami SB, Veiseh O, Zhang M, Hackman RC, et al. Tumor paint: a chlorotoxin:Cy5.5 bioconjugate for intraoperative visualization of cancer foci. Cancer Res 2007;67(14):6882–8.
[13] Sorger J. Clinical milestones in optical imaging. In: Imaging and visualization in the modern operating room: a comprehensive guide for physicians. Springer; 2015.
[14] Herrell SD, Kwartowitz DM, et al. Toward image guided robotic surgery: system validation. J Urol 2009;181(2):783–90.
[15] Kim YM, Baek S-E, et al. Clinical application of image-enhanced minimally invasive robotic surgery for gastric cancer: a prospective observational study. J Gastrointest Surg 2012;17(2):304–12.
[16] Leven J, Burschka D, et al. DaVinci canvas: a telerobotic surgical system with integrated, robot-assisted, laparoscopic ultrasound capability. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2005. Springer; 2005. p. 811–8.
[17] Anderson JE, Chang DC, et al. The first national examination of outcomes and trends in robotic surgery in the United States. J Am Coll Surg 2012;215(1):107–14.
[18] DiMaio S, Hasser C. The da Vinci research interface. MICCAI workshop on systems and architectures for computer assisted interventions. MIDAS J 2008. Available from: http://hdl.handle.net/10380/1464.
[19] Lin HC, Shafran I, et al. Automatic detection and segmentation of robot-assisted surgical motions. Med Image Comput Comput Assist Interv 2005;8(Pt 1):802–10.
[20] Lin HC, Shafran I, et al. Towards automatic skill evaluation: detection and segmentation of robot-assisted surgical motions. Comput Aided Surg 2006;11(5):220–30.
[21] Kumar R, Jog A, et al. Objective measures for longitudinal assessment of robotic surgery training. J Thorac Cardiovasc Surg 2012;143(3):528–34.
[22] Abboudi H, Khan MS, et al. Current status of validation for robotic surgery simulators: a systematic review. BJU Int 2013;111(2):194–205.
[23] Bric JD, Lumbard DC, Frelich MJ, Gould JC. Current state of virtual reality simulation in robotic surgery training: a review. Surg Endosc 2016;30(6):2169–78.
[24] Liu M, Curet M. A review of training research and virtual reality simulators for the da Vinci surgical system. Teach Learn Med 2015;27(1):12–26.
[25] Smith R, Truong M, et al. Comparative analysis of the functionality of simulators of the da Vinci surgical robot. Surg Endosc 2015;29(4):972–83.
[26] Coste-Maniere E, Adhami L, et al. Planning, simulation, and augmented reality for robotic cardiac procedures: the STARS system of the ChIR team. Semin Thorac Cardiovasc Surg 2003;14(2):141–56.
[27] Suzuki S, Suzuki N, et al. Tele-surgery simulation with a patient organ model for robotic surgery training. Int J Med Rob Comput Assisted Surg 2005;1(4):80–8.
[28] Willaert WI, Aggarwal R, et al. Simulated procedure rehearsal is more effective than a preoperative generic warm-up for endovascular procedures. Ann Surg 2012;255(6):1184–9.
[29] Lenihan J, Brower M. Web-connected surgery: using the internet for teaching and proctoring of live robotic surgeries. J Rob Surg 2011;6(1):47–52.
[30] Shin DH, Dalag L, et al. A novel interface for the telementoring of robotic surgery. BJU Int 2015;116(2):302–8.
[31] Hughes-Hallett A, Mayer EK, et al. The current and future use of imaging in urological robotic surgery: a survey of the European Association of Robotic Urological Surgeons. Int J Med Rob Comput Assisted Surg 2015;11(1):8–14.
[32] Mitchell CR, Herrell SD. Image-guided surgery and emerging molecular imaging: advances to complement minimally invasive surgery. Urol Clin N Am 2014;41(4):567–80.
[33] Beccani M, Tunc H, et al. Systematic design of medical capsule robots. IEEE Des Test 2015;32(5):98–108.
[34] Qiu F, Fujita S, et al. Magnetic helical microswimmers functionalized with lipoplexes for targeted gene delivery. Adv Funct Mater 2015;25(11):1666–71.
[35] Yiallouras C, Ioannides K, et al. Three-axis MR-conditional robot for high-intensity focused ultrasound for treating prostate diseases transrectally. J Ther Ultrasound 2015;3(1):1–10.

4 The FreeHand System

Oliver Anderson and Tan Arulampalam
Colchester General Hospital, Colchester, United Kingdom

ABSTRACT
The FreeHand system is a robotic laparoscopic camera-holding and moving device. It replaces the human assistant who normally holds the camera. The image FreeHand provides can benefit the operating surgeon for two main reasons. First, it provides a stable image when stationary, as it does not suffer from hand tremor or fatigue. Second, it gives the operating surgeon hands-free control over the movements of the camera, so the surgeon does not rely on an assistant to provide the desired view. It has been used in a wide variety of laparoscopic operations. Research studies have shown it is as safe and effective as conventional techniques, has a short learning curve, and can result in faster overall operating times.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00004-9 © 2020 Elsevier Inc. All rights reserved.

4.1 Introduction

In major open surgery, the scrubbed team is usually composed of the operating surgeon, a nurse, and an assistant to the operating surgeon. The incisions in the patient's skin must be sufficient to expose the relevant anatomy. One of the assistant's jobs is to retract the incised skin to allow the operating surgeon to see the target area. Retracting can be physically demanding, and while retracting, apprentice surgeons cannot practice the operating surgeon's role. Self-retaining retractors are surgical instruments that do not need to be held in position by the assistant and can be used to free up the assistant to participate in other ways.

In laparoscopic surgery the skin incisions are much smaller, since they only need to allow the ports entry to the body, and the skin does not need to be retracted. Typically, at least three ports are used: two for the operating surgeon, using an instrument in each hand, and the third for the video camera. Laparoscopic surgery replaced the need for the surgeon's assistant to retract the skin with the need for a camera-holder. The camera-holder has the same aim as the skin retractor: to allow the operating surgeon to see the target area. Ideally, the assistant maneuvers the camera with sufficient skill to give the operating surgeon a satisfactory image of the desired view at all times. Holding the camera can be physically and technically demanding, and a satisfactory view is not always maintained. This led to the desire for a camera-holding medical device controlled by the operating surgeon. The FreeHand system (FreeHand 2010 Ltd, Guildford, United Kingdom) is a master-slave robot designed to hold and move the laparoscopic camera under the control of the operating surgeon, replacing the human camera-holding assistant.
In this chapter we detail the challenges with manual surgery that FreeHand addresses, the development and iterations of FreeHand, the preoperative preparation required, operative set up, the operative use of FreeHand, and experience of using FreeHand.

4.2 Challenges with manual surgery that FreeHand addresses

Surgical teams are under pressure to meet the needs of patients in several locations at once. New emergency patients need to be assessed to determine a diagnosis and treatment plan; in-patients on the ward need to be reviewed to check for disease-related and postoperative complications; jobs generated by these activities need to be chased; and patients also need to have their results fed back, further management plans made, and their discharge from hospital arranged in a timely manner. In the public sector this work often happens simultaneously with elective and emergency operating lists, and the requirement for assistants to attend theaters means there are fewer people available elsewhere.

Laparoscopic surgery requires an assistant to hold the camera. In operations that can be performed by a single surgeon without an assistant more simply and quickly via the open technique, for example, inguinal hernia repair or appendectomy, this additional personnel requirement might be a barrier to use of the laparoscopic technique.

The availability of expert camera-holders is limited. In the English National Health Service, the surgical senior house officer grade, which used to be composed of doctors with an interest in surgery who were studying for postgraduate surgical exams, has been partly replaced by the Foundation Year 2 grade, whose members might have no interest in a surgical career and may wish to pursue a career in a completely unrelated specialty. Inexperienced assistants might not maneuver the camera optimally to give the operating surgeon a satisfactory view. They may also tire physically and mentally, and their performance may deteriorate, particularly if they lack motivation. This is especially true in long, technically and physically demanding operations when optimal camera views are challenging to obtain, or in repetitive, seemingly mundane operations that some may regard as boring, when the camera needs to be held in the same position without moving for extended periods of time.
Trainee surgeons might not want to hold the camera during the operation and might feel that their training is better served by performing the role of operating surgeon. If the trainee is the operating surgeon, a senior surgeon might be the only member of staff available to hold the camera. While the senior surgeon is scrubbed and holding the camera, the trainee may feel that they are not truly performing the operation independently, even though they are controlling the operating instruments.

4.3

Development and iterations of FreeHand

The simplest device to replace the human laparoscopic camera-holder was a mechanical clamp (Fig. 4.1). This provided the operating surgeon with their desired view, free of hand tremor and other unwanted movements. It was most suitable for operations with few camera position changes, because each new camera position had to be set manually by the operating surgeon.

The FreeHand System Chapter | 4


FIGURE 4.1 Clamp.

Manually adjusting the clamp not only disrupted the flow of the operation but could also pose a risk to patient safety, because the operating surgeon had to release the instruments to change the view. For example, if an instrument was clamping a bleeding vessel when the view was lost, then maintaining control of the vessel while the camera was adjusted could require an assistant who might not be immediately available.

Hands-free control of the laparoscopic camera was the design brief for a robotic system. EndoAssist (Prosurgics Ltd, High Wycombe, United Kingdom) was developed to fulfill this brief and was the predecessor to FreeHand. EndoAssist was a robotic laparoscopic camera-holding device. It was a freestanding unit, mounted on the floor rather than attached to the operating table, and was wheeled into position over the camera port. The laparoscope was attached to EndoAssist using a sterilizable steel arm. The device was controlled by movements of the surgeon's head through a headset that detected tilting movements and sent directional signals to the robot via an infrared receiver on the surgeon's monitor. Pressing a foot pedal carried out the movement selected by the headset. It could pan (left and right), tilt (up and down), and zoom (in and out). EndoAssist had a safety feature that limited the amount of force it could exert, to prevent tissue damage (Fig. 4.2).

Aiono et al. conducted the first randomized controlled trial of EndoAssist [1]. Ninety-three patients requiring laparoscopic cholecystectomy and routine on-table cholangiography at a district general hospital were entered into the study. Six surgeons participated, three of whom were fully trained (two consultants and one associate specialist) and three of whom were trainees (registrars). Each surgeon was included in the trial after one practice operation using EndoAssist.
One additional surgeon could not be included in the trial because they found EndoAssist to be unsuitable after three practice operations, two of which were converted to a human camera-holder. The human camera-holders were at various levels of seniority, but all were trainees with some experience of holding the camera. Seven patients were excluded after randomization because their operations were converted to open: in the EndoAssist group, one for obesity and two for adhesions; in the manual camera-holding group, one for difficult anatomy, one for obesity, and two for adhesions. The EndoAssist robot itself was not felt to be the reason any operation was converted to open. Of the remaining patients, 40 were randomized to the robot and 46 to a human camera-holder. Operating time was measured (from first port insertion to last port removal) and proficiency assessments were made. The mean operating time was less with the robotic camera-holder, even including the extra setup and removal time for the robot: 66 minutes versus 74 minutes, P < .05. This was statistically significant, and 8 minutes saved from an operation lasting just over an hour is a saving of just over 10%. The setup time for the robot was less than 10 minutes and the removal time less than 1 minute. The learning curves showed that three operations were required to achieve proficiency, defined as no difference in operating times between the robotic and human camera-holder. There were no safety issues raised and no instances of harm caused to patients by the robot. The faster overall operating time was hypothesized to be due to the high-quality, stable image provided by the robot in a stationary position. The authors also commented on an unexpected benefit of the robot for surgical trainees.
The surgical trainees could operate using the robot with the senior surgeon unscrubbed, instead of having the senior surgeon scrubbed and holding the camera. This gave the trainees a sense of independence and allowed the senior surgeon to attend to other duties while remaining available to advise or participate in the operation as necessary. This was considered good preparation for becoming a senior surgeon. However, the authors also acknowledged that more junior trainees might spend less time in theater if they were not required to hold the camera, which might impact negatively on their operative experience.

Handbook of Robotic and Image-Guided Surgery

FIGURE 4.2 EndoAssist.

Kommu et al. compared the use of EndoAssist with a human camera-holder in urological operations including simple and radical nephrectomy, pyeloplasty, radical prostatectomy, and radical cystoprostatectomy [2]. The three surgeons participating in the study recorded the extent of their own body discomfort and muscle fatigue, ease of use, need to clean the telescope lens, setup time, surgical performance, and the need to reposition the robot during the operation. There were no major intraoperative complications. All three surgeons felt more comfortable using EndoAssist. There was no difference in muscle fatigue. EndoAssist was associated with a higher number of telescope lens cleans, which was attributed to the large arc of movement of the arm and therefore, we assume, to accidentally bumping the telescope lens into tissues and smudging it. For laparoscopic nephrectomy, EndoAssist had to be relocated when the operating table was moved, while the human camera-holder did not. EndoAssist was not associated with a longer setup time, which was under 8 minutes in all cases, and was considered equivalent in terms of surgical performance, viability, complication rates, and total operative time. There was no neck or shoulder discomfort associated with using the head-mounted movement detector to control EndoAssist. Although EndoAssist was associated with more instances of cleaning the telescope lens, the authors commented that they did not consider this a useful measure of performance, because lens cleaning depends on many other factors, including the human camera-holder, the body fat of the patient, the type of surgery, and the patient's anatomy.

Halin et al. presented data showing that the mean time for a laparoscopic fundoplication was 3 hours and 49 minutes with a human camera-holder and 2 hours and 3 minutes with the EndoAssist robot [3]. This was based on only 2 surgeons performing 16 operations, and no statistical analysis was presented, so these results cannot support firm conclusions, but they do indicate the potential benefits of the robot. Powar et al. presented data from a nonrandomized study of 20 consecutive laparoscopic inguinal hernia repairs showing no difference in overall operating time between the 10 cases performed with EndoAssist and the 10 cases performed with a human camera-holder (73 and 76 minutes, respectively, P = .071). This included the first cases that the surgeon and theater team had performed with the robot, and all cases were completed without complications [4]. Gilbert described the introduction of EndoAssist to the work of a laparoscopic colorectal unit [5]. The robot was successfully used for 77 consecutive colorectal resections across a wide range of colorectal operations and created no problems or concerns (Box 4.1).

BOX 4.1 Advantages and disadvantages of EndoAssist
Advantages of EndoAssist
- Stable stationary image
- Operating surgeon controls movements of the camera
Disadvantages of EndoAssist
- Heavy and bulky
- Setup needs to be adjusted when the operating table is moved or if the camera is moved to a different port

BOX 4.2 Advantages and disadvantages of AESOP
Advantages of AESOP
- Attaches to the operating table and does not need to be adjusted if the patient is tilted during the operation
- Intuitive controls
Disadvantages of AESOP
- Each user needs their own voice card set up in advance to control the robot
- Background noises can interfere with voice recognition
- Not as fast or accurate as EndoAssist

Alternative robotic camera-holding devices have been developed, including SoloAssist (AktorMed, Barbing, Germany), Vicky (EndoControl, Grenoble, France), and LapMan and LapStick (MedSys, Gembloux, Belgium). SoloAssist docked to the operating table and was controlled by a sterile joystick attached to the operating surgeon's laparoscopic instruments. Vicky (vision control for endoscopy) docked to the operating table and was controlled either by voice or by a multidirectional foot pedal. LapMan and LapStick were floor-mounted units that, like SoloAssist, used a joystick attached to the operating surgeon's instruments.

One alternative hands-free robotic laparoscopic camera-holding device is worth describing in more detail, because it used a different control mechanism and a study compared it with EndoAssist. The Automated Endoscopic System for Optimal Positioning (AESOP; Intuitive Surgical Inc, Sunnyvale, CA, United States) was a voice-controlled device. Using voice commands to control a robot might be more intuitive than head movements, since operating surgeons already deliver voice commands to their human camera-holding assistants. However, because background noises can be interpreted by AESOP as movement commands, each user required a voice card to be created beforehand. This improved the accuracy of the robot's response to voice commands, but it entailed additional preparatory work to set up the system and was less convenient for trainee surgeons, who have high turnover rates. AESOP was attached to and moved with the operating table and therefore did not need to be repositioned if the table moved. However, AESOP required a special lift to dock it to the operating table, because it was too heavy to mount by hand. AESOP 3000 had three movement modes: discontinuous, continuous, and preprogrammed. In discontinuous mode, command words ("left," "right," "up," "down," "in," "out") made the robot move in small increments, and the commands were repeated as required. In continuous mode, the command "move" preceded the direction command and the robot moved in that direction until either the command "stop" was given or the robot reached its limit. In preprogrammed mode, up to three positions could be stored and returned to with the commands "return one," "return two," and "return three."

Nebot et al. compared AESOP 3000 with EndoAssist [6]. A single surgeon experienced with both robots performed a series of tasks in a simulated environment. For both simple and complex tasks, EndoAssist was faster than AESOP in discontinuous and continuous modes. This was attributed to a more elliptical path taken by AESOP, incorrect interpretation of voice commands, and the greater accuracy and lesser overshoot of EndoAssist. The AESOP preprogrammed mode was faster than EndoAssist, but was not reproducible without errors that needed to be corrected. Despite being slower at task performance, AESOP was subjectively regarded as more user-friendly due to the greater simplicity of its controls: the voice commands were easier than the simultaneous head movement, visual recognition, and foot pedal actuation required by EndoAssist (Box 4.2).
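The three AESOP modes can be pictured as a small command interpreter. The following toy Python sketch is purely illustrative, assuming a simplified camera model: the class and variable names, the step size, and the "save" command used to store a position are all invented (the study does not describe how positions were stored), and the real device's control logic is proprietary.

```python
# Toy interpreter modeling the three AESOP 3000 voice-command modes
# described above. Illustrative only; names and step size are hypothetical.

DIRECTIONS = {"left", "right", "up", "down", "in", "out"}

class CameraModel:
    def __init__(self):
        self.position = [0.0, 0.0, 0.0]  # pan, tilt, zoom (arbitrary units)
        self.moving = None               # active direction in continuous mode
        self.stored = {}                 # preprogrammed positions

    def _step(self, direction, amount):
        # Map each command word to an axis and a sign.
        axis = {"left": 0, "right": 0, "up": 1, "down": 1, "in": 2, "out": 2}[direction]
        sign = 1.0 if direction in ("right", "up", "in") else -1.0
        self.position[axis] += sign * amount

    def command(self, phrase):
        words = phrase.split()
        if words[0] in DIRECTIONS:
            # Discontinuous mode: each command word moves a small increment.
            self._step(words[0], 0.5)
        elif words[0] == "move" and words[1] in DIRECTIONS:
            # Continuous mode: keep moving until "stop" (modeled as a flag).
            self.moving = words[1]
        elif words[0] == "stop":
            self.moving = None
        elif words[0] == "save":
            # Hypothetical command to store a preprogrammed position.
            self.stored[words[1]] = list(self.position)
        elif words[0] == "return":
            # Preprogrammed mode: recall a stored position.
            self.position = list(self.stored[words[1]])

cam = CameraModel()
cam.command("right")       # discontinuous nudge
cam.command("save one")    # store current view (hypothetical)
cam.command("move in")     # continuous zoom-in starts
cam.command("stop")        # continuous movement ends
cam.command("return one")  # recall the stored view
```

The continuous mode is modeled only as a flag here; a real controller would integrate the motion over time until "stop" is received or a joint limit is reached.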


Advantages of EndoAssist included its control mechanism, its accuracy of response, and its ability to give the operating surgeon control over the desired view without the need for an assistant. Although it was a freestanding unit easily wheeled into position, EndoAssist was considered very bulky, both in its footprint and in the movement of its arm. Having to move EndoAssist when the operating table was moved was also seen as a disadvantage. A new version of EndoAssist that used the same control mechanism but attached to and moved with the operating table, and so did not need to be repositioned, was desirable. This was the design brief for FreeHand (Box 4.3).

4.4

Preoperative preparation

Any patient who is suitable for a conventional laparoscopic operation is suitable for a FreeHand system-assisted operation (Figs. 4.6-4.13). Preoperative preparation includes all the same components as a conventional laparoscopic operation; in addition, the intention to use FreeHand should be raised at the preoperative briefing, a member of the scrub staff trained to set up and use the system should be available, and all components should be checked (Box 4.4).

4.5

Operative setup

First, the indicator unit is attached to the monitor that the operating surgeon will be using to view the operative field. Then the headset is donned by the operating surgeon. The patient is placed on the operating table as they normally would be for a conventional laparoscopic operation. The control box is then attached to one side of the operating table by clamping it to the rail. Usually, the FreeHand system is attached to the side opposite the operating surgeon. The system can be placed in a position that maximizes the room available to the operating surgeon, taking advantage of the articulation of the arm. The foot pedal is attached to the control box (Figs. 4.14-4.22).

4.5.1

Come downstairs!

The correct sequence of activating each component of the FreeHand system can be remembered using the phrase "come downstairs!" That is, the highest component is activated first, then each next component down to the lowest. The indicator unit is switched on first, followed by the headset, and last the control box. This assumes that the indicator unit on the monitor is higher than the surgeon's head. The components pair wirelessly with each other in sequence.

The sleeve cover, camera clip, and zoom module are sterile; they are opened with the other sterile operative instruments and handled by the scrubbed team. The sleeve is attached to the FreeHand arm first, by fitting the rigid plastic component where the zoom module attaches and then rolling the flexible cover over the arm (Figs. 4.23 and 4.24). The camera clip can then be loaded with the laparoscopic camera telescope; 5 and 10 mm telescopes are compatible. Once loaded with the telescope, the camera clip is slotted into the zoom module, and the zoom module is slotted into its attachment on the robotic motion assembly at the rigid part of the plastic sleeve (Fig. 4.25).

The goniometric point is where the camera port passes through the patient's abdominal wall. It is also called the pivot point and is where the least lateral excursion of the laparoscope occurs. It is important to identify it and calibrate the setup so that the robot causes the least tissue trauma to the patient. This is done using the built-in LEDs. The LEDs can be turned on with one hand by gripping the FreeHand unit and, in so doing, simultaneously pressing two activation buttons on either side of the robotic motion assembly. The LEDs shine light on the abdomen, and when they are focused on the skin at the camera port site they indicate that the robot is in the correct position. Once in position, the locking nut on the arm is tightened.
During this calibration, the operating table and motion assembly unit should be flat to ensure the greatest accuracy (Fig. 4.26). Once in position, the FreeHand unit does not need to be repositioned if the operating table is tilted (e.g., Trendelenburg, reverse Trendelenburg, or lateral tilt), because it is attached to and moves with the operating table. If the patient slips, or if the desired camera port changes, the new goniometric point should be established using the LEDs (Box 4.5).

4.6

Operative use

The operative surgical technique is unaltered by the FreeHand system. FreeHand allows the operating surgeon to control the movements of the laparoscopic camera through a hands-free system composed of the headset and the foot pedal. The operating surgeon wears the headset, which detects movements of the head: left tilt, right tilt, up, and down. None of the movements are rotations of the neck. Infrared signals from the headset are detected by a receiver in the indicator unit, which is usually positioned on the monitor that the operating surgeon is looking at, so that they can see it without taking their eyes far from the operative view. The headset and indicator unit must be within line of sight to communicate. A recognized head movement is shown on the indicator unit as an LED arrow pointing in the corresponding direction. When the arrow points in the intended direction, the operator presses the foot pedal and the FreeHand system moves the camera in that direction. In addition to panning left and right and looking up and down, FreeHand can also zoom in and out. Tapping the foot pedal briefly changes the mode from pan to zoom, and up and down head movements then allow the operator to zoom in and out by pressing the foot pedal again. Zoom in and zoom out are represented by + and - signs on the indicator unit. If the telescope lens needs to be cleaned during the operation, the telescope can be removed from the patient by unclipping it from the zoom module with the quick-release clip and sliding it out of the port. After the lens has been cleaned, the telescope can be slid back into the port and clipped back onto the zoom module; it returns to the same position as before, so the desired view does not need to be set up again (Figs. 4.27-4.29).

BOX 4.3 Iterations of FreeHand
FreeHand 1.0 (2009)
- An iterative development of EndoAssist. It was smaller and lighter, weighing 7 kg, and was easily mounted by hand onto the rail on the side of the operating table (Fig. 4.3)
FreeHand 1.2 (2012)
- Included LEDs for faster setup in the correct position and a narrower profile to reduce tool clashes (Fig. 4.4)
FreeHand 2.0 (2018)
- Has an even narrower profile allowing more space for single-port surgery and 360-degree rotation to support a wider range of operations, especially colorectal (Fig. 4.5)

FIGURE 4.3 FreeHand 1.0.

FIGURE 4.4 FreeHand 1.2.

FIGURE 4.5 FreeHand 2.0.

FIGURE 4.6 Control box.

FIGURE 4.7 Robotic motion assembly.

FIGURE 4.8 Hands-free control unit (headset).

FIGURE 4.9 Indicator unit.

FIGURE 4.10 Foot pedal.

FIGURE 4.11 Zoom module and clip (scope attachment).

FIGURE 4.12 Positioning template.

The FreeHand 1.2 had a side-to-side pan range of 180 degrees and an up and down tilt range of 70 degrees. FreeHand 2.0 has a side-to-side pan range of 360 degrees. Movements outside of this range can be achieved by resetting the rotation of the robotic motion assembly head around the goniometric point. FreeHand has the same safety feature as EndoAssist which limits the amount of force it can exert to prevent tissue damage.

4.7

Experience with FreeHand

Stolzenburg et al. compared FreeHand with a human camera-holder in a randomized controlled study of 50 consecutive endoscopic extraperitoneal radical prostatectomies for cancer performed by three surgeons [7]. Twenty-five operations were allocated to each of the experimental and control groups. Each surgeon had experience of over 300 of these operations using a human camera-holder and five with FreeHand; each human camera-holding assistant had experience of approximately five operations. Operating time, number of camera movements and errors, number of lens cleans, blood loss, and surgical resection margin data were collected. Patient characteristics, operating time, effectiveness, and safety were similar in both groups. FreeHand was associated with significantly faster horizontal and zoom camera movements, fewer lens cleans, and fewer movement errors, but also with slower up and down movements. Overall, FreeHand was considered a desirable alternative to a relatively inexperienced human camera-holding assistant.

Tran compared single-port surgery with and without FreeHand in a series of 32 totally extraperitoneal laparoscopic inguinal hernia repairs, all performed by the author. Sixteen right-sided operations were in the experimental group and 16 operations were in the control group; only right-sided inguinal hernia repairs were amenable to robotic repair due to the configuration of the equipment available, although additional equipment would have made left-sided FreeHand repairs possible. Cases were matched for important characteristics, but they were not randomized. The overall operation times were comparable. However, the number of lens cleans and the time spent cleaning the telescope lens differed significantly: the lens was cleaned eight times during conventional surgery and one or two times during FreeHand surgery (P = .01), taking 8.5 minutes for conventional and 1.5 minutes for FreeHand surgery (P = .01). Tran concluded that using the robot was feasible and efficient [8].

FIGURE 4.13 Sleeve cover.

BOX 4.4 FreeHand system components (checklist for operation briefing)
- Control box
- Arm with robotic motion assembly
- Hands-free control unit (headset)
- Indicator unit
- Foot pedal
- Sterile pack 1, comprising disposable:
  - Zoom module
  - Clip (scope attachment)
  - Positioning template
- Sterile pack 2, comprising disposable:
  - Sleeve cover


FIGURE 4.14 Inguinal hernia setup.

FIGURE 4.15 Cholecystectomy setup.

Ali et al. reported an observational study of 43 consultant surgeons performing 105 operations with FreeHand in 30 hospitals, spanning urology, general, colorectal, thoracic, and gynecological operations. The number of telescope lens cleans was collected, but as this was not a comparative study it is not known whether this differed from conventional laparoscopic surgery. Surgeons were asked to rate their satisfaction on a scale of 0-5 and reported good satisfaction in terms of setup (4.29), ergonomics (4.12), usability (4.39), and overall experience (4.34). A conversion from the robotic to a human camera-holder was made in 7.6% of the operations. The authors noted that many of the surgeons had very limited experience with FreeHand and were still inside their learning curve. There were no adverse events related to FreeHand [9].


FIGURE 4.16 Appendectomy setup.

FIGURE 4.17 Rectal surgery setup.

FIGURE 4.18 Bariatric surgery setup.

FIGURE 4.19 Nissen fundoplication setup.

FIGURE 4.20 Rectopexy setup.

FIGURE 4.21 Nephrectomy setup.

Sbaih et al. studied 20 surgical registrars who had never used FreeHand before to determine how long it took them to learn to use it proficiently. Participants were first familiarized with FreeHand and then performed four different exercises of increasing difficulty. Participants repeated the exercises on two further occasions within 2 weeks of the familiarization and first attempts. Each exercise used an activity within a laparoscopic simulation training device. Task one required lateral movements of the camera, task two required up and down movements, task three involved diagonal movements, and task four also required zooming in and out. An observer counted correct and incorrect movements used to control the camera and also made a competency assessment (completed the task with no assistance, with a little verbal assistance, with a lot of verbal assistance, or did not complete the task). Each participant gave feedback by categorizing their experience with FreeHand as: controlled intuitively (could not be better), controlled without difficulty, controlled with a little difficulty, controlled with some difficulty, or unable to control. Computerized tracking software was also used to analyze videos of participants performing the exercises. This measured head movements, the time taken to complete the exercise, and the number of foot pedal activations. Head movements were used to determine a head movement score, calculated by adding up the degrees of head movement angulation used to complete the exercise; the less movement required, the more expert the participant was deemed to be. The results showed that participants used less head movement, less time, and fewer foot pedal activations to complete the tasks with each repetition as they gained proficiency. By the third repetition, 95% of the participants were using the fewest possible number of head movements and foot pedal activations, 80% completed the exercises without assistance, 35% felt they controlled FreeHand intuitively (could not be better), and 50% felt they had effective control without difficulty. The researchers found that a minimum head angulation of 38 degrees was required to register an up or down movement and 43 degrees for a left or right movement [10].
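The head movement score described above is simply the sum of the angulation of each head movement. The following minimal Python sketch illustrates the idea; the function name, data layout, and example values are hypothetical, while the 38- and 43-degree registration thresholds are taken from the study text.

```python
# Illustrative sketch: the "head movement score" sums the angulation
# (in degrees) of every head movement used to complete an exercise.
# Less total movement implies greater proficiency.

UP_DOWN_THRESHOLD = 38     # degrees needed to register an up/down movement
LEFT_RIGHT_THRESHOLD = 43  # degrees needed to register a left/right movement

def head_movement_score(movements):
    """movements: list of (direction, degrees) pairs from video tracking.

    Returns (total angulation, number of movements large enough to
    register as camera commands). Only movements exceeding the threshold
    drive the camera, but all angulation contributes to the score.
    """
    total = 0.0
    registered = 0
    for direction, degrees in movements:
        total += degrees
        threshold = UP_DOWN_THRESHOLD if direction in ("up", "down") else LEFT_RIGHT_THRESHOLD
        if degrees >= threshold:
            registered += 1
    return total, registered

# Hypothetical session: the 20-degree nod is below the registration threshold.
session = [("up", 40), ("left", 45), ("down", 20)]
score, commands = head_movement_score(session)  # score 105.0, 2 registered commands
```

Tracking scores across repetitions in this way would reproduce the study's learning-curve measure: a falling score indicates increasingly economical head control.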

4.7.1

Advantages of FreeHand

With FreeHand, laparoscopic operations can be carried out without a human camera-holding assistant. It is a stable platform for the camera that moves with the operating table and provides a tremor-free stationary image. The operating surgeon can control the movements of the camera via the hands-free system and therefore does not need to release the operating instruments or interrupt the flow of the operation. Removing the camera to clean the lens and replacing it does not alter the position of the robot, so the view is not lost. Studies have shown that use of a robotic camera-holding device is associated with significantly faster overall operating times and is just as safe.

FIGURE 4.22 Video-assisted thoracic surgery setup.

FIGURE 4.23 Applying the plastic sleeve 1.

4.7.2

Disadvantages of FreeHand

Trainee surgeons might not be invited to theater if they are not required to hold the camera and may feel that they are getting less experience. However, during a training case that would normally require the trainer to scrub and hold the camera, the experience acquired by the junior surgeon can be improved, because the trainer does not need to scrub and therefore is less likely to take the instruments from the junior surgeon and may be more inclined to instruct them in how to complete the operation. The scrub staff and surgeons need to be trained to set up and use the FreeHand system, but in our experience this requires only 1-2 hours. The absence of an assistant means that should the procedure have a complication and need conversion to an open procedure, the operating surgeon will not have an assistant immediately available; in our experience this has not raised any patient safety issues (Box 4.6).

FIGURE 4.24 Applying the plastic sleeve 2.

FIGURE 4.25 Clip (for scope attachment).


BOX 4.5 Sequence of setting up FreeHand
1. Check FreeHand components at operation briefing
2. Place the indicator unit on the main monitor
3. Put the headset on the operating surgeon
4. Attach FreeHand to the operating table once the patient has been positioned
5. Place the foot pedal on the floor conveniently for the operating surgeon
6. Turn on the devices in descending order of height ("come downstairs"):
   - Indicator unit
   - Headset
   - Control box
7. At the sterile draping stage, cover FreeHand with the sterile plastic sleeve
8. After the camera port is inserted, load the telescope into the camera clip and zoom module, attach to FreeHand, and position FreeHand over the goniometric point using the LEDs

4.8

Discussion

The development of laparoscopic surgery brought with it the new role of the camera-holder. Assigning this role to a medical device rather than a human being is highly desirable, as it can remove the need for a surgical assistant and bring direct control of the camera movements to the operating surgeon, provided the safety, quality, and speed of the operations are equivalent or better. The development of camera-holding medical devices began with mechanical clamps, which are stable but slow and inconvenient to move. Without an assistant, the operating surgeon must pause the operation, release the instruments, and manually adjust the position of the camera in the clamp, which can be a patient safety issue at a critical moment in the operation.

FIGURE 4.26 Goniometric LEDs.


FIGURE 4.27 Headset in use.

FIGURE 4.28 Foot pedal in use.

EndoAssist is a freestanding robotic camera-holder that gives the operating surgeon hands-free control to change the position of the camera through movements of the head and a foot pedal, without interrupting the flow of the operation. Aiono et al. showed that EndoAssist was safe, effective, and about 10% quicker than a human camera-holder for laparoscopic cholecystectomy [1]. A huge number of laparoscopic cholecystectomies are performed, so this could translate into significant cost and time savings if the effect is maintained at scale.

The FreeHand system is the successor to EndoAssist and improved on its design in a number of ways. FreeHand is smaller and lighter and can easily be attached to the operating table by hand. It moves with the operating table, so its position does not need to be adjusted if the table moves. The control system is the same as EndoAssist's. Further research trials showed that surgeons needed a learning curve of just three operations to become proficient with FreeHand, and cleaning the telescope lens was required significantly less frequently during inguinal hernia operations. The applicability of FreeHand has been demonstrated in a wide variety of surgical specialties including general, colorectal, urology, thoracic, and gynecology. FreeHand fulfills the brief required of a medical device replacement for the human laparoscopic camera-holder. It allows the surgical assistant to be freed up and potentially deployed elsewhere. It reduces the barrier to using the laparoscopic technique. It requires only the operating surgeon to be trained in its use, and it has a short learning curve. It can also benefit training by freeing the senior surgeon from holding the camera.

Alternative robotic camera-holding devices are available, with a variety of control mechanisms including joystick-, voice-, and self-guided systems. More complex camera-holding robots, such as the da Vinci Surgical System (Intuitive Surgical, Inc., Sunnyvale, CA), are not discussed in this chapter: the da Vinci does have a camera held by the robot, but it also has robotic arms that hold and manipulate operative instruments, and it is therefore not equivalent to the devices in this chapter, which function to hold and move the camera during otherwise conventional laparoscopic surgery.

BOX 4.6 Advantages and disadvantages of FreeHand
Advantages of FreeHand
- Replaces a human camera-holding assistant
- Stable stationary image due to no hand tremor or physical or mental fatigue
- Camera movements controlled by the operating surgeon
- Short learning curve
- Equivalent or faster overall operating times
- Improved training experience for advanced surgical trainees
Disadvantages of FreeHand
- Some surgeons find it less satisfactory than a skillful human camera-holder
- Reduced theater experience for trainee surgeons


FIGURE 4.29 Indicator unit in use.

4.9 Conclusion

The robotic FreeHand system has been shown to be a safe and effective replacement for the human laparoscopic camera-holder in a wide variety of operations. It provides a stable stationary image and the operating surgeon can control the camera movements to provide the desired view without interrupting the operation. It has a short learning curve. Some studies have shown that FreeHand is associated with fewer lens cleans and faster overall operating times.


5

Solo Surgery With VIKY: Safe, Simple, and Low-Cost Robotic Surgery

Masahiro Takahashi
Takahashi Surgery Clinic, Yamagata, Yamagata, Japan

ABSTRACT
Given its many features, such as excellent three-dimensional visualization, its use of a wide range of articulating surgical instruments, and advanced ergonomics, robotic surgery is expanding the scope of its adaptability and is now being used in more complicated and difficult operations. However, a robot's strength lies in executing simple operations, offering stylized movement, and providing steady and stable motion. Robot-assisted camera work is considered effective in endoscopic surgery, where it is extremely important to maintain a stable field of view; it also makes solo surgery possible. In this chapter, we introduce the solo surgery procedure using VIKY.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00005-0
© 2020 Elsevier Inc. All rights reserved.


5.1 Background and history of surgical robots

In the 1980s a research team at the Ames Research Center of the National Aeronautics and Space Administration (NASA) undertook the development of telepresence surgery using virtual reality. One of the objectives of this telesurgery research was the development of surgical robots. In parallel, the US Army and the Stanford Research Institute were developing medical systems to allow telesurgery and treatment on the battlefield [1]. Based on this research, many surgery-assisting robots have been developed and many clinical trials performed over the last 30 years.

The first robot approved by the Food and Drug Administration (FDA), in 1994, for clinical use in abdominal surgery was the Automated Endoscopic System for Optimal Positioning (AESOP; Computer Motion, Sunnyvale, California, United States) [2]. This robot arm was developed by Computer Motion as part of a space development program financially supported by NASA. The arm was later modified to hold an endoscope and used as a laparoscope camera-holder. The first model needed a foot switch or hand switch to control the robot arm manually or remotely; subsequent models adopted a voice control system, giving further flexibility to the surgeon. Still later, inheriting the highly flexible robot arm and surgeon-controllable computer system, and undergoing a major redesign, AESOP evolved into master-slave surgical robot systems such as Zeus (Computer Motion Inc., Santa Barbara, California, United States) and da Vinci (Intuitive Surgical Inc., Sunnyvale, California, United States) [1,2].

In addition to AESOP, other camera-holding devices have been available [3-7]. Some have been used in clinical practice, but there have been few reports on solo surgery in laparoscopic surgery over the last 10 years [8-14]. VIKY (Vision Kontrol for endoscopY) is one of the newly developed robotic camera manipulators that has enabled the performance of solo surgery. The first VIKY robot was created by ENDOCONTROL in 2007.
Solo surgery is defined as a surgeon performing surgery alone, without other assistant surgeons, except for a scrub nurse. Minimally invasive laparoscopic surgery was a great benefit for the patient, but it came with a serious drawback: the endoscopic camera, and hence the surgeon's view, was operated indirectly by a surgical assistant. This disturbed the coordination between the surgeon's field of view and the surgical maneuvers, and the assistant's camera handling could introduce movements undesirable to the surgeon. It was therefore argued that it is important for the surgeon to directly control the surgical field of view, along with all surgical instruments, by operating the camera.

Conceptually, VIKY differs from AESOP in that it specializes as a holding device. Its main characteristic is its simple structure and compactness compared with previous systems. Also, since the arm and main body can be autoclaved, the system requires no consumables or drapes and has very low maintenance costs. The setup procedure is so simple that the surgeon can perform it alone within the clean field. After setup, voice control is activated just by pushing a button, and surgery can start immediately. With VIKY, there are no elements complicating robot-assisted surgery, which makes it very appropriate for solo surgery.

5.2 System overview

VIKY is a new-generation surgical assistance robot dedicated to minimally invasive procedures. Designed in collaboration with surgeons, it provides solutions to their specific constraints: limited space in the sterile field, adaptability to different anatomies, and the need for surgeons to stand close to the patient. VIKY optimizes the collaboration between surgeons and robots, combining human capacities (medical knowledge, adaptability, flexibility, etc.) with robotic capacities (precision, stability, reproducibility, miniaturization, etc.).

Surgeons control the position of the endoscope directly, without the help of an assistant. Through a multilingual voice-activated interface connected to a wireless microphone, the surgeon speaks to the system to move the endoscope to the desired positions, thereby optimizing the exposure of the surgical site. The architecture of VIKY, developed by and for clinicians, is well suited to the constraints of surgery. Lighter and more compact than earlier systems, VIKY improves the operator's postural comfort while remaining compatible with conventional instruments and techniques. It is therefore applicable to a wide range of procedures (urology, gynecology, abdominal surgery, and thoracic surgery), for the benefit of patients.

VIKY system key features:
- Provides a stable and reliable view of the operating field.
- Eliminates camera shake and associated eye fatigue.
- Memorizes key positions of the endoscope for easy recall during the procedure.
- Fully compatible with all types of endoscopes and trocars.
- Easily fixed onto the operating table rail.
- Cost-effective (can be autoclaved and has no disposable parts).
- Frees the assistant to perform other tasks.

5.3 The VIKY system

The VIKY system is composed of the following components.

5.3.1 Control unit

The VIKY control unit is specifically designed to control the VIKY driver (ring and motor set). It offers a user-friendly touch-panel interface for fine-tuning settings. The VIKY driver can be controlled by voice (via a wireless microphone), by foot (via a foot pedal), or by both at the same time. The VIKY driver, the foot pedal, and the USB dongle (for the wireless microphone) are plugged directly into the control unit front panel; these are all that need to be connected. The VIKY system software then interprets user commands and controls the motors accordingly.

The characteristics of the surgeon's voice are recognized after about 15 seconds of microphone volume testing, which has to be performed before each use. In this way the computer does not respond to the voices of other people or to surrounding noise. The recognition capability is very high and the response is quick, and many control loops provide a high level of security. It is also possible to complete a nonmandatory 15-minute voice training session and thus create a specific voice profile for the surgeon (only one session needed per surgeon) (Fig. 5.1).

The control unit allows the surgeon to control:
- Application: endoscope positioner or uterus positioner.
- Model: it is possible to choose the model (XS, M+, or XL) at the beginning of the surgery and to change the model during use.
- Recognition sensitivity: there are seven levels to determine the selectivity of the software voice recognition.
- Movement speed: there are five levels of speed for each movement (left/right, up/down, and in/back).
- Microphone volume test: it is possible to calibrate the microphone to a new user's voice during VIKY use.
- Language: it is possible to change the language of VIKY movement control during use.

5.3.2 Arm and clamp

The passive arm is fixed to the rail of the operating table with the rail clamp and holds the ring just above the patient's abdomen. It can be adjusted during surgery to reposition the driver if necessary, thanks to a black screw that releases and fixes three ball joints. It is immersible (IPX7) and steam sterilizable (autoclave) (Fig. 5.2). The clamp attaches the passive arm to the rail of the operating table. The clamp hole can rotate, allowing the arm to be set at any angle needed. The clamp does not need to be sterilized because it is installed under the sterile field (Fig. 5.3).

5.3.3 Driver (ring and motor set)

The ring exists in three sizes: XS (9 cm, small ring diameter), M+ (11.5 cm, medium ring diameter), and XL (18 cm, large ring diameter, for single-access surgery). It is positioned just above the patient's abdomen and is held in place by the passive arm. The ring and arm are easy to assemble and disassemble. The motor set comprises three motors, silicone cables, and a connector allowing the motor set to be connected to the control unit. The three motors are attached to the VIKY ring and actuate three degrees of freedom to move the endoscope: two rotations to visualize the entire abdominal cavity and an axial translation for zooming (Fig. 5.4). The ring and motor set are immersible (IPX7) and steam sterilizable (autoclave).
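The three degrees of freedom described above, two rotations about the insertion point plus an axial translation, follow the standard remote-center-of-motion geometry. The following forward-kinematics sketch is a generic illustration with assumed angle conventions, not VIKY's control code.

```python
import math

def scope_tip(trocar, pan, tilt, insertion):
    """Endoscope tip position for a scope pivoting about a fixed trocar
    point: pan (rotation about the vertical axis), tilt (angle from
    vertical), and insertion (translation along the scope shaft)."""
    x0, y0, z0 = trocar
    # Unit vector of the scope shaft, pointing into the abdomen.
    dx = math.sin(tilt) * math.cos(pan)
    dy = math.sin(tilt) * math.sin(pan)
    dz = -math.cos(tilt)  # straight down when tilt == 0
    return (x0 + insertion * dx, y0 + insertion * dy, z0 + insertion * dz)

# "Zooming" changes only the insertion depth; the two rotations sweep
# the view across the abdominal cavity.
near = scope_tip((0.0, 0.0, 0.0), pan=0.0, tilt=0.0, insertion=0.05)
far = scope_tip((0.0, 0.0, 0.0), pan=0.0, tilt=0.0, insertion=0.10)
```

In this model, recalling one of the memorized key positions mentioned earlier amounts to replaying a stored (pan, tilt, insertion) triple.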


FIGURE 5.1 Control unit with software which analyzes the surgeon’s orders and activates the motors of the driver.

FIGURE 5.2 The clamp and passive arm can be fixed anywhere along the rail of the operating table.

FIGURE 5.3 Three small motors installed in the driver.

FIGURE 5.4 The endoscope can be freely controlled in three directions (up/down, left/right, in/back).


FIGURE 5.6 Control interfaces.

5.3.4 Adapters

Adapters are designed to fix the endoscope to the driver. The adapter is slid along the endoscope's rigid shaft until it reaches the camera head; a small screw then fixes the adapter to the endoscope. Adapters are available for all 5 mm and all 10 mm endoscopes on the market (Fig. 5.5).

5.3.5 Control interfaces: foot pedal and wireless microphone

If the foot pedal is to be used, it is plugged into the VIKY control unit. Once this is done, the foot pedal allows the surgeon to move the VIKY driver easily by foot. The "joypad" system offers excellent ergonomics and precision. Pressing the joypad moves the driver in the "left," "right," "up," and "down" directions for as long as the surgeon presses it. Pressing the "+" or "-" button moves the driver in the "in" or "back" direction, respectively, for as long as the button is held. The driver can be deactivated with one press of the blue foot button, so that the surgeon can take back manual control; another press reenables the driver for normal use.

The wireless microphone is approved for use in the operating room. Combined with VIKY, it allows control of the VIKY driver by voice. The wireless microphone is paired with a USB wireless dongle (Bluetooth connection). Its battery provides about 6 hours of use, and it is delivered with a preinstalled earloop, foam, and headband. Once the surgery is complete, the user can connect the microphone to a charging wire (contained in the control unit) to charge the microphone battery (Fig. 5.6). All voice orders available to control VIKY movements with the wireless microphone are summarized in Fig. 5.7.
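Both interfaces reduce to the same hold-to-move ("jog") behavior: while a command is active, the selected axis moves at a set speed, and releasing it stops the motion. A minimal dispatcher sketch follows; the command names, step size, and limits are hypothetical (Fig. 5.7 lists the real vocabulary).

```python
# Map spoken or pedal commands onto (axis, direction) jog requests.
# Command names are hypothetical; Fig. 5.7 gives the actual vocabulary.
COMMANDS = {
    "left": ("pan", -1), "right": ("pan", +1),
    "up": ("tilt", +1), "down": ("tilt", -1),
    "in": ("zoom", +1), "back": ("zoom", -1),
}

def jog(pose, command, step=0.01, limits=(-1.0, 1.0)):
    """Apply one jog increment per control tick while a command is held;
    'stop' holds the pose, and motion is clamped to soft limits."""
    if command == "stop":
        return dict(pose)  # deactivated: hold the current pose
    axis, sign = COMMANDS[command]
    lo, hi = limits
    new = dict(pose)
    new[axis] = min(hi, max(lo, pose[axis] + sign * step))
    return new

pose = {"pan": 0.0, "tilt": 0.0, "zoom": 0.0}
for _ in range(3):          # "in" command held for three control ticks
    pose = jog(pose, "in")  # pose["zoom"] ends near 0.03
```

The five speed levels of the control unit would correspond to different `step` values per axis in a model like this.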

5.4 Advantages and disadvantages of VIKY-assisted surgery

There are many advantages to the VIKY system. Its main favorable characteristic is its simple structure and compactness compared with previous systems. Compared with earlier robot arms, it occupies minimal surgical space, allowing more room for the surgeon and assistants to access the patient. Also, since the arm and main body can be autoclaved, the system requires no consumables or drapes and has very low maintenance costs. The setup procedure is so simple that the surgeon can perform it within the clean field. The control unit is connected to the main body of VIKY with one cable and no complicated settings; thus no clinical engineer is required. After connection, voice control is activated just by pushing a button, and surgery can be started immediately.

One of the main advantages of the VIKY system is that, while performing surgery, surgeons can effortlessly control their field of view using voice control, which gives them more freedom of movement. Furthermore, because the VIKY system is relatively easy to use, lengthy and expensive training is unnecessary; even resident doctors can easily operate the machine. Finally, VIKY introduces none of the elements that complicate robot-assisted surgery, which makes it especially appropriate for solo surgery (Figs. 5.8-5.10).

FIGURE 5.5 The structure is simple, and the endoscope is hooked to the shaft of the driver with a dedicated small adapter which is screwed on.

FIGURE 5.7 Voice control commands.

FIGURE 5.8 Position of VIKY for solo surgery: surgeon's view.


FIGURE 5.9 Position of VIKY for solo surgery: surgeon's assistant.

FIGURE 5.10 Position of VIKY for solo surgery: insertion point.


VIKY also has disadvantages. The VIKY ring is set above the patient's abdomen to provide stable movement and stable vision, and the ring may interfere with the surgical forceps; therefore care must be taken in positioning the ring and the trocars. Table 5.1 compares the characteristics of VIKY with those of the da Vinci system, the representative robotic surgery platform.
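The interference concern is largely geometric: an instrument trocar placed too close to the ring's footprint risks the forceps striking the ring. A toy port-placement check makes the rule explicit; the 2D simplification and the clearance value are assumptions for illustration only.

```python
import math

def trocar_clears_ring(trocar_xy, ring_center_xy, ring_diameter_cm,
                       clearance_cm=2.0):
    """Rough 2D check: is an instrument trocar far enough outside the
    ring's footprint (plus an assumed safety clearance, both in
    centimeters in the abdominal-wall plane) to limit the risk of
    forceps/ring interference?"""
    dx = trocar_xy[0] - ring_center_xy[0]
    dy = trocar_xy[1] - ring_center_xy[1]
    return math.hypot(dx, dy) >= ring_diameter_cm / 2 + clearance_cm

# M+ ring (11.5 cm diameter) centered over the optical trocar at the origin:
print(trocar_clears_ring((8.0, 0.0), (0.0, 0.0), 11.5))  # True
print(trocar_clears_ring((5.0, 0.0), (0.0, 0.0), 11.5))  # False
```

In practice the surgeon simply keeps instrument trocars clear of the ring's footprint; the sketch only formalizes that rule of thumb.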

5.5 Current clinical applications and data

The FDA has approved several robotic systems for certain surgical procedures. For example, AESOP was the first robotic arm that a surgeon could control using voice commands to manipulate an endoscopic camera. VIKY is an evolution of AESOP, and various medical practitioners use it as an endoscope holder for laparoscopic and thoracic surgeries, including inguinal hernia repair, cholecystectomy, lobectomy, and myomectomy. VIKY's clinical applications are outlined in Table 5.2.

TABLE 5.1 Advantages and disadvantages of da Vinci and VIKY.

| | da Vinci | VIKY |
| Occupied area | Large | Small |
| Setup | Complicated: inspection of the carts and console, multiple cable connections, multiple independent power sources, draping | Simple: only one connection plug, no drape |
| Medical staff | Many: assistant surgeon, surgical and scrub nurses, clinical engineer | One scrub nurse |
| Emergency procedure | Large carts must be moved, and another operating room secured for reoperation | Can be detached to an attachment within the sterile field, and surgery continued as it is |
| Surgical equipment | Dedicated and expensive | Compatible with all types |
| Initial cost | Several hundred million yen | Under 10 million yen |
| Maintenance cost (per year) | About 100 million yen | Little |
| Cost of supplies (per year) | Several tens of millions of yen | Little |
| Labor cost | Expensive (several million yen) | Little |

TABLE 5.2 VIKY's clinical applications.

| Specialty | Procedures |
| Digestive surgery | Inguinal hernia, cholecystectomy, colectomy, gastric bypass, sleeve gastrectomy |
| Thoracic surgery | Lobectomy, segmentectomy, wedge resection |
| Gynecology | Hysterectomy, myomectomy, sacrocolpopexy, endometriosis |
| Urology | Prostatectomy, partial nephrectomy, pyeloplasty |

Clinical evaluations of VIKY's performance in various types of surgery have all been positive. For example, Gumbs et al. [15] showed that, using VIKY, the size, weight, and cost of surgical instruments can be reduced for minimally invasive surgery in animal models. Clinical evaluations also demonstrate that VIKY is useful in a variety of medical practices, especially in gynecology: Swan et al. [16] showed that, during gynecological surgery, doctors can use VIKY not only as a scope holder but also as a uterine positioner, and Maheshwari and Ind [10] reported that gynecological surgery without the use of an assistant is possible using the VIKY system. Hung et al. [17] showed that, during prostate surgery, VIKY allows robotically operated, intraoperative ultrasound monitoring in real time. These evaluations suggest that doctors can use the VIKY system safely across a wide range of patients. We have also evaluated the effectiveness and safety of solo surgery in laparoscopic inguinal hernia repair [8]. Use of the VIKY system will not only ensure safety and quality in medicine, but will also allow doctors to gain experience as they discover the opportunities that solo surgery provides.

Pandalai et al. [18] showed that, with VIKY, a single operator could perform complex laparoscopic procedures without the need for an assistant to guide the laparoscope. A robotic camera system also allows surgical trainees to disconnect from assisting duties, so the trainer can stand behind the trainee and give more hands-on instruction. Therefore, in addition to benefiting nonteaching institutions in terms of cost and reliability, it also has value in teaching hospitals.

5.6 Conclusion

We have described solo surgery using the VIKY voice-controlled endoscope positioning system. Owing to its high capability for voice command recognition, which results in very few errors, surgeries with VIKY can be performed as safely as surgeries performed with assistants. Moreover, because the operational response of the endoscope holder is good and excellent vision can be maintained for many hours, stress on the operating surgeon is reduced. The system is voice-controlled, so camera maneuvers do not occupy the hands or feet of the operating surgeon. Eliminating unnecessary movements has the advantage of not hindering the surgeon's concentration, thus helping to ensure that he or she safely completes the surgery. VIKY allows solo surgery, making it unnecessary to secure personnel such as specialist nurses or clinical engineers. Finally, no additional specialized equipment is required, minimizing the financial burden. Based on these clinical evaluation findings, the VIKY system was a success, and solo surgery using VIKY is considered safe and efficient.

References
[1] Anthony R. Robotic surgery: a current perspective. Ann Surg 2004;239:14-21.
[2] Sackier JM, Wang Y. Robotically assisted laparoscopic surgery. Surg Endosc 1994;8:63-6.
[3] Schurr MO, Arezzo A, Neisius B, Rininsland H, Hilzinger HU, Dorn J, et al. Trocar and instrument positioning system TISKA. An assist device for endoscopic solo surgery. Surg Endosc 1999;13:528-31.
[4] Aiono S, Gilbert JM, Soin B, Finlay PA, Gordan A. Controlled trial of the introduction of a robotic camera assistant (EndoAssist) for laparoscopic cholecystectomy. Surg Endosc 2002;16:1267-70.
[5] Polet R, Donnez J. Using a laparoscope manipulator (LAPMAN) in laparoscopic gynecological surgery. Surg Technol Int 2008;17:187-91.
[6] Yamada K, Kato S. Robot-assisted thoracoscopic lung resection aimed at solo surgery for primary lung cancer. Gen Thorac Cardiovasc Surg 2008;56:292-4.
[7] Gillen S, Pletzer B, Heiligensetzer A, Wolf P, Kleeff J, Feussner H, et al. Solo-surgical laparoscopic cholecystectomy with a joystick-guided camera device: a case-control study. Surg Endosc 2014;28:164-70.
[8] Takahashi M, Takahashi M, Nishinari N, Matsuya H, Tosha T, Minagawa Y, et al. Clinical evaluation of complete solo surgery with the "ViKY" robotic laparoscope manipulator. Surg Endosc 2017;31:981-6.
[9] Iavazzo C, Iavazzo PE, Gkegkes ID. Solo gynaecologic laparoscopic surgery: a future one-man show for the experienced surgeon and a cost-effective approach for the National Health Systems. J Robot Surg 2016;10:283-4.
[10] Maheshwari M, Ind T. Concurrent use of a robotic uterine manipulator and a robotic laparoscope holder to achieve assistant-less solo laparoscopy: the double ViKY. J Robot Surg 2015;9:211-13.
[11] Tuschy B, Berlit S, Brade J, Sütterlin M, Hornemann A. Solo surgery—early results of robot-assisted three-dimensional laparoscopic hysterectomy. Minim Invasive Ther Allied Technol 2014;23:230-4.
[12] Fujii S, Watanabe K, Ota M, Yamagishi S, Kunisaki C, Osada S, et al. Solo surgery in laparoscopic colectomy: a case-matched study comparing robotic and human scopist. Hepatogastroenterology 2011;58:406-10.
[13] Tchartchian G, Dietzel J, Bojahr B, Hackethal A, De Wilde R. Decreasing strain on the surgeon in gynecologic minimally invasive surgery by using semi-active robotics. Int J Gynaecol Obstet 2011;112:72-5.
[14] Mishra R, Martinez AM, Lorias Espinoza D. Initial clinical experience using a novel laparoscopy assistant. Minim Invasive Ther Allied Technol 2011;20:167-73.
[15] Gumbs AA, Crovari F, Vidal C, Henri P, Gayet B. Modified robotic lightweight endoscope (ViKY) validation in vivo in a porcine model. Surg Innov 2007;14:261-4.
[16] Swan K, Kim J, Advincula AP. Advanced uterine manipulation technologies. Surg Technol Int 2010;20:215-20.
[17] Hung AJ, Abreu AL, Shoji S, Goh AC, Berger AK, Desai MM, et al. Robotic transrectal ultrasonography during robot-assisted radical prostatectomy. Eur Urol 2012;62:341-8.
[18] Pandalai S, Kavanagh DO, Neary P. Robotic assisted laparoscopic colectomy. Ir Med J 2010;103:181-2.

6

Clinical Application of Soloassist, a Joystick-Guided Robotic Scope Holder, in General Surgery

Yasushi Ohmura
Department of Surgery, Okayama City Hospital, Okayama, Japan

ABSTRACT
To maximize the benefits of advanced image quality such as 3D and 4K, stable camera operation without trembling is desirable. Recently, robotic scope holders have been developed that allow the operator to control the scope. The Soloassist system is a joystick-guided robotic camera control system with a simple, compact design. It can be easily installed on any part of the side rail of the operating table, and scope movement can be controlled by the surgeon in a straightforward and intuitive fashion via an ergonomic joystick. The simply designed holder arm provides a highly flexible, relaxed working environment for the surgeon. It can be used universally in both elective and emergency surgery. The Soloassist system offers the possibility of saving human resources and shortening operative time, while requiring no additional setup time. Soloassist is an effective robot-assisted surgical instrument for various endoscopic surgeries.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00006-2
© 2020 Elsevier Inc. All rights reserved.


6.1 Introduction

Minimally invasive surgery has become a standard procedure for many operations, and further refinements have contributed to its progress, including the introduction of energy devices, high-resolution imaging, and robotic technology [1-3]. When a human scope assistant controls the laparoscope, shaking of the surgical view owing to fatigue or changing hands cannot be avoided. To maximize the benefits of this imaging progress, there is no doubt that a stable surgical view is desirable. Surgical procedures with a stable view offer advantages for delicate work and lead to an increase in the quality of the operation. A scope holder can play a very important role in acquiring a stable operating field and reduces the stress and fatigue of the operator.

Moreover, it is important to consider the concerns that have been raised regarding an apparent shortage of general surgeons and a decline in residency applications to surgical departments [4-7]. It is therefore necessary to find ways to carry out operations without compromising quality, even with fewer human resources. The most recent scope holders can completely replace the role of the camera assistant: they can hold the scope even in an extreme position, for however long it takes, without fatigue or exhaustion.

The features of scope holders can be divided into several types. First, they can be divided into passive and active scope holders. Passive scope holders are divided into those requiring repositioning with both hands and those that allow single-handed repositioning. Recently, active robotic scope holders have been developed that allow the operator to control the scope intuitively, without removing their hands from the forceps [8-11]. The interfaces of robotic scope holders include voice recognition, a foot switch, infrared-guided head motion signals, and a joystick. With infrared-guided transmission, a foot clutch must be used in combination in order to avoid unintended movement.
Scope holders controlled by voice recognition likewise need a prearranged phrase to safeguard against unintended triggering of movement. There are also two methods of installation: one is a self-standing type installed on the floor, and the other attaches to the rail of the operating table. The self-standing type cannot follow the movement of the operating table; therefore, when changing the height or inclination of the table, the laparoscope must be removed and the reference point recalibrated. The Soloassist system (AKTORmed, Barbing, Germany) is a unique robotic scope holder: a joystick-guided endoscope remote-control system mounted on the operating table rail [11-13]. In this section, we describe the features of the Soloassist system and how to use it in various surgical procedures.

6.2 History of Soloassist

The Soloassist system was developed by AKTORmed in Germany. Soloassist I, the first commercially available model, was hydraulically driven (Fig. 6.1). The current generation, Soloassist II, is a newly evolved model driven by computer-controlled electric motors. Owing to its hydraulics, Soloassist I was significantly heavier (over 20 kg) than Soloassist II (11.5 kg).

In 2001 the companies Delta Entwicklungsgesellschaft and Micro Epsilon, in cooperation with the Research Group for Minimally Invasive Interdisciplinary Therapeutic Intervention at the Faculty of Medicine of the Technical University of Munich, launched a state-sponsored project to develop a robotic camera guidance system for laparoscopy. The aim of the development was a flexible, ergonomic, and easy-to-use system. After the first prototype was completed in 2004, it was decided in 2005 to establish an independent subsidiary company, AKTORmed GmbH, to complete and distribute Soloassist I. In 2007 Soloassist I was introduced to the market. By 2014, Soloassist I had been sold more than 280 times, either directly or as the original equipment manufacturer product Einstein Vision. The system could be controlled by a small joystick attached to the left-hand grasping forceps or by a remote control (Einstein Vision).

With the experience of several thousand operations by a large number of surgeons, AKTORmed GmbH decided to develop the system further. At the end of 2015 Soloassist II was presented, and by mid-2018 more than 120 units had been sold in 28 countries worldwide. Soloassist II can be moved by joystick, by voice control, or manually. Today, the device is successfully and efficiently used for minimally invasive operations in visceral surgery, gynecology, urology, cardiothoracic surgery, and endoscopic transoral surgery.

6.3 Soloassist II

6.3.1 Structure

FIGURE 6.1 Soloassist I.

The main body of the Soloassist system is made of carbon and has six joints, three of which are computer-controlled, one of which can be adjusted manually, and two of which act as a gimbal joint following the movement of the main body (Fig. 6.2). The control panel is installed on the third arm; it is used for setting up and operating the system and also provides information on the driving status (Fig. 6.3). The weight of the Soloassist system is 11.5 kg, and its allowable working load is 1 kg.

Docking to the operating table is extremely simple thanks to the quick-coupling device, and the system can be installed on any part of the side rail of the operating table. Repositioning and registering with reference to the patient are therefore not necessary, even if the operating table is moved during procedures. It is applicable to all commercially available endoscopes and has a wide range of motion (Fig. 6.4A and B). The angles of rotation are determined indirectly using draw-wire sensors, which are installed underneath the operating table; a total of three sensors measure the rotational movements.

Soloassist has a unique arm, named the "universal joint," which is a wide U-shaped arm attached to the tip of the extension arm. It is simply and compactly designed and, by rotating, is useful for avoiding interference with the forceps. It moves 360 degrees freely as a gimbal joint and can be fixed in eight directions by turning the extension arm 45 degrees at a time. The endoscope clamp attached to the tip of the universal joint is a sleeve-type screw, available in two sizes, for 10 and 5 mm laparoscopes (Fig. 6.5). The universal joint, endoscope clamp, and joystick are autoclavable, and the Soloassist itself can be covered by a sterile single-use drape; the only disposable item is the plastic drape that covers the arm. After the surgical procedure is completed, Soloassist can be hung on its trolley for storage. The trolley is fully mobile and allows the Soloassist to be stored within a limited space (Fig. 6.6).
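A draw-wire sensor reports only a wire length; a joint angle can then be recovered when the wire's anchor points on the two links are known, via the law of cosines. The sketch below is a generic illustration of that principle; the geometry and the function are assumptions, not Soloassist's actual sensor arrangement.

```python
import math

def joint_angle_from_wire(length, a, b):
    """Recover a joint angle from a draw-wire reading. The wire is assumed
    to be anchored at distances `a` and `b` from the joint axis on the two
    links, so the law of cosines gives
        length**2 == a**2 + b**2 - 2*a*b*cos(theta)."""
    cos_theta = (a * a + b * b - length * length) / (2 * a * b)
    return math.acos(max(-1.0, min(1.0, cos_theta)))  # clamp for rounding

# Anchors 3 cm and 4 cm from the axis and a 5 cm wire: a right angle.
theta = joint_angle_from_wire(5.0, 3.0, 4.0)  # pi/2
```

With one such sensor per measured joint, the three rotation angles follow directly from the three wire lengths.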
Installation and operation of Soloassist do not require any additional personnel; the system is very easy to install and remove, so it can be deployed promptly not only for elective surgery but also for emergency surgery.

6.3.2 Joystick

Soloassist is controlled by the surgeon in a straightforward and intuitive fashion via an ergonomic joystick positioned on the left-hand forceps (Fig. 6.7). The joystick can be adapted to fit almost all commercially available handpieces using a clamp mount. However, it cannot be attached to an extremely thick forceps ring, and where the joystick interferes with rotation of the forceps, the two must be placed a slight distance apart (Video, see online for video).

FIGURE 6.2 Soloassist II. ① Control panel. ② Extension arm. ③ Unlocking slide. ④ Emergency stop. ⑤ Universal joint. ⑥ Endoscope clamp. ⑦ Tension sleeve. ⑧ Probe tip. ⑨ Attachment device. ⑩ Joystick connector. ⑪ Power supply connector.

FIGURE 6.3 Control panel. ① Set trocar point. ② Joystick. ③ Limits. ④ Service. ⑤ Ready/unlock button.

The wired interface consists of the joystick and IN/OUT buttons. Tipping the joystick moves the scope intuitively through 360 degrees, and two small buttons next to the joystick move the laparoscope forward and backward (Fig. 6.8). Scope movement can be controlled once the "trocar point," which defines the axis of motion, has been set. Fine movements are adjusted with the joystick, whereas dynamic movements are made manually with a single hand while pressing the unlock button; once the button is released, the Soloassist arm locks immediately and remains in the set position. Because the joystick has no fixed axis, the surgeon can move the scope any desired distance in any direction. However, since its motion follows a slight arc rather than a perfectly straight line, the direction of movement may be slightly shifted. A few tips therefore help in acquiring the appropriate operative field in a short amount of time. Soloassist responds without time lag, so combining a large movement with a few small tips of the joystick makes it easy to secure the optimal surgical field quickly. As the user becomes accustomed to handling the joystick, small movements can be made without interrupting the surgical procedure (Video, see online for video).

FIGURE 6.4 Range of movement: (A) lateral view; (B) top view.

FIGURE 6.5 Endoscope clamp and sleeve screw.

FIGURE 6.6 Trolley.

FIGURE 6.7 Joystick attached to the laparoscopic instrument.

FIGURE 6.9 Calibration of trocar point. (A) Bring the probe tip to the trocar for endoscope insertion. (B) Press the trocar point button. The Ready button flashes green when the calibration is successfully finished.
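As a rough illustration of this control scheme, the 360-degree tipping and the IN/OUT buttons can be modeled as a mapping from one joystick sample to an incremental pan/tilt/insertion command about the trocar point. This is a hedged sketch with made-up names and gains, not the vendor's control software:

```python
import math

def scope_step(tilt_x, tilt_y, in_out, pan_gain=0.5, zoom_gain=1.0):
    """Map one joystick sample to an incremental scope command.

    tilt_x, tilt_y: joystick deflection in [-1, 1] (360-degree tipping).
    in_out: +1 while the IN button is held, -1 for OUT, 0 otherwise.
    Returns (pan_deg, tilt_deg, insertion_mm) increments about the trocar
    point. All names and gain values are illustrative assumptions.
    """
    magnitude = min(1.0, math.hypot(tilt_x, tilt_y))
    if magnitude < 0.05:                        # dead zone suppresses drift
        pan = tilt = 0.0
    else:
        heading = math.atan2(tilt_y, tilt_x)    # direction of tipping
        pan = pan_gain * magnitude * math.cos(heading)
        tilt = pan_gain * magnitude * math.sin(heading)
    return pan, tilt, zoom_gain * in_out
```

Because the trocar point is the center of rotation, pan and tilt here are angular increments of the scope shaft about that pivot, while the third component drives the IN/OUT translation along the shaft.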

6.4 Installation—specifics about each operation

Soloassist is attached to the operating table rail on the side opposite the operator's standing position. The power cable is then connected and the robotic arm is straightened in preparation for draping. After draping the patient, Soloassist is covered with a dedicated drape attached at the universal joint. Next, the joystick is mounted on the forceps ring and its cable is connected to the main body. To set the reference point for Soloassist's movement, a bar called the "probe tip" is placed at the trocar for laparoscope insertion and the trocar point button on the console unit is pressed to start calibration (Fig. 6.9A and B). The green lamp blinks within a few seconds and calibration is complete. The endoscopic camera is registered at the trocar point, which serves as the center of rotation. The entrance point, movements, and directions are saved and defined in a coordinate system for the entire procedure, and the Soloassist system automatically calculates the individual axis movements required to produce the commanded overall motion. Even when the laparoscope trocar needs to be changed, a brief recalibration allows the surgical procedure to continue. The scope is inserted through the endoscope clamp and attached to the universal joint, completing the setup (Video, see online for video).

In most cases, because the laparoscope is fixed from above, collisions with the forceps do not occur as long as the operator's hands remain below the scope position relative to the abdominal wall. On the other hand, when forceps manipulation has to be performed at a position higher than the scope, there is a risk of collision with the Soloassist arm. In most cases there is no need to change our customary way of working; however, to eliminate collisions between the holder arm and forceps, we have devised several measures for comfortable surgery. For example, in four-port cholecystectomy, when dissecting the right side of the gallbladder, the grasping forceps in the left hand may hit the universal joint. There are two solutions to this problem: rotating the universal joint clockwise, or shifting the subcostal port for the operator's left hand 5 cm caudally so that the operator's forceps work below the scope (Fig. 6.10A and B). To avoid interference with the forceps more reliably, the latter method is generally preferable [13]. Our setup proposals for typical operative procedures are described below.

FIGURE 6.8 Movement control. 1: up, 2: left, 3: down, 4: right, 5: out, 6: in.

FIGURE 6.10 Location of the subcostal trocar: (A) conventional trocar position; (B) modified trocar position.

FIGURE 6.11 Laparoscopic appendectomy. (A) Trocar placement and the position of the operator. (B) Soloassist attachment. 〇: 12 mm trocar. x: 5 mm trocar. $: Target organ. White arrow: Direction of the laparoscope. Black arrows: Direction of the surgical instruments.
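The trocar-point calibration just described (probe tip at the trocar, button press, pivot registered as the center of rotation, brief recalibration when the trocar changes) can be sketched in a few lines. Class and method names are hypothetical, not the Soloassist software:

```python
import math

class TrocarPoint:
    """Minimal model of trocar-point calibration (illustrative sketch)."""

    def __init__(self):
        self.pivot = None

    def calibrate(self, probe_tip_xyz):
        # Pressing the trocar point button stores the probe-tip position
        # as the fixed pivot about which all scope motion is computed.
        self.pivot = tuple(float(c) for c in probe_tip_xyz)

    def shaft_direction(self, target_xyz):
        # Unit direction the scope shaft must take so that it passes
        # through the pivot while pointing at an intracorporeal target.
        if self.pivot is None:
            raise RuntimeError("trocar point not calibrated")
        v = [t - p for t, p in zip(target_xyz, self.pivot)]
        norm = math.sqrt(sum(c * c for c in v))
        return tuple(c / norm for c in v)
```

Changing the laparoscope trocar then simply means calling `calibrate()` again with the new probe-tip position, mirroring the brief recalibration described above.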

6.4.1 Laparoscopic appendectomy (Fig. 6.11A and B) (Video, see online for video)

● The patient is placed in a supine position with both arms spread.
● Attach Soloassist to the right-side rail of the operating table at the iliac crest level.
● Insert a 12 mm trocar for pneumoperitoneum from the supraumbilical wound.
● After inspection of the abdominal cavity, insert 5 mm trocars from the left iliac region and the hypogastrium (three-port procedure).
● Move the probe tip to the left iliac trocar and carry out calibration of the trocar point as the access port for the laparoscope.
● Tilt the operating table lightly head-down (Trendelenburg position) with left-side rotation.
● The surgeon stands on the left side of the patient, and the entire procedure can be performed as solo-surgery.

6.4.2 Laparoscopic inguinal hernia repair (right side) (Fig. 6.12A–C) (Video, see online for video)

● The patient is placed in a supine position and both arms are fixed to the body, so that we can deal with an occult contralateral lesion.
● Attach Soloassist to the right-side rail of the operating table 10 cm distal from the hip joint.
● Insert a 12 mm trocar for pneumoperitoneum from the supraumbilical wound.
● After inspection of the abdominal cavity, insert a 12 mm trocar from the right lumbar abdomen and a 5 mm trocar at the left lumbar abdomen (three-port procedure).
● Move the probe tip to the umbilical trocar and carry out calibration of the trocar point as the access port for the laparoscope.
● Tilt the operating table lightly head-down (Trendelenburg position) with left-side rotation.
● The surgeon stands on the left side of the patient, and the entire procedure can be performed as solo-surgery.
● In bilateral cases the operator moves to the right side of the patient and the operating table is rotated to the right side. There is no need to change the position of Soloassist.

FIGURE 6.12 Laparoscopic inguinal hernia repair (A, B: right side; C: left side). (A) Trocar placement and the position of the operator. (B) Soloassist attachment. (C) Positions for bilateral case. 〇: 12 mm trocar. x: 5 mm trocar. $: Target organ. White arrow: Direction of the laparoscope. Black arrows: Direction of the surgical instruments.

6.4.3 Laparoscopic cholecystectomy (multiport) (Fig. 6.13A–C) (Video, see online for video)

● The patient is placed in a supine position with the left arm spread.
● Attach Soloassist to the right-side rail of the operating table at the nipple level.
● Insert a 12 mm trocar for pneumoperitoneum from the subumbilical wound.
● After inspection of the abdominal cavity, insert a 12 mm trocar from the epigastric region and 5 mm trocars from the right midclavicular line 7 cm below the costal margin and the right anterior axillary line 2 cm below the costal margin (four-port procedure). (The location of the right hypochondrial trocar is shifted to the caudal side by 5 cm compared to the usual position.)
● Move the probe tip to the umbilical trocar and carry out calibration of the trocar point as the access port for the laparoscope.
● Tilt the operating table lightly head-up (reverse Trendelenburg position) with left-side rotation.
● The surgeon stands on the left side of the patient, and the assistant, sitting on a chair, arranges the surgical field from the right lateral trocar.

FIGURE 6.13 Laparoscopic cholecystectomy (multiport). (A) Shift the subcostal trocar to the foot side by 5 cm. (B) Trocar placement and the positions of the operator and assistant. (C) Soloassist attachment. 〇: 12 mm trocar. x: 5 mm trocar. $: Target organ. White arrow: Direction of the laparoscope. Black arrows: Direction of the surgical instruments.

FIGURE 6.14 Laparoscopic cholecystectomy (single incision). (A) Trocar placement and the positions of the operator and assistant. (B) Soloassist attachment. Rotate universal joint three times clockwise (45 degrees × 3 = 135 degrees). 〇: Platform for single-incision surgery. x: 5 mm trocar. $: Target organ. White arrow: Direction of the laparoscope. Black arrows: Direction of the surgical instruments.

6.4.4 Laparoscopic cholecystectomy (single incision) (Fig. 6.14A and B) (Video, see online for video)

● The patient is placed in a supine position with the right arm spread.
● Attach Soloassist to the left-side rail of the operating table at the axilla level.
● Carry out an approximately 2.5 cm laparotomy involving the umbilicus. Attach the platform for single-incision laparoscopic surgery to the incision.
● After completion of pneumoperitoneum, apply an additional two trocars (three-port procedure).
● After inspection of the abdominal cavity, rotate the universal joint three times clockwise (45 degrees × 3 = 135 degrees).
● Move the probe tip to the umbilical platform and carry out calibration of the trocar point as the access port for the laparoscope.
● Tilt the operating table lightly head-up (reverse Trendelenburg position) with left-side rotation.
● Intracorporeal organ retractors or a needle instrument are used to arrange the operative field in the customary way.
● The surgeon stands on the left side of the patient, from where all procedures can be performed.
● In multiport cholecystectomy, the extracorporeal scope position is higher than the operator's hands. In single-incision laparoscopic cholecystectomy, by contrast, the extracorporeal laparoscope position is lower than the operator's hands. For this reason, if the laparoscope is fixed from above, interference and collision between the forceps and the scope-holder arm cannot be avoided in single-incision laparoscopic cholecystectomy; fixing the laparoscope from below the forceps solves this problem.

6.4.5 Laparoscopic distal gastrectomy (Fig. 6.15A and B) (Video, see online for video)

● We perform four-port laparoscopic distal gastrectomy with a dual stomach-lifting technique [14] and intracorporeal Billroth-I reconstruction by a hemi-hand-sewn technique [15].
● The patient is placed in a supine lithotomy position with the right arm spread.
● Hold the legs of the patient with a semiboot foot holder, the Levitator (Mizuho, Tokyo, Japan). Fix it with the legs lowered as much as possible so as not to cause a collision with the operator's forceps.
● Attach Soloassist to the left-side rail of the operating table at the nipple level.
● Insert a 12 mm trocar for pneumoperitoneum from the subumbilical wound.
● After inspection of the abdominal cavity, insert three 12 mm trocars from the bilateral midclavicular lines 2–3 cm above the costal margin and the right anterior axillary line 2 cm below the costal margin.
● Move the probe tip to the umbilical trocar and carry out calibration of the trocar point as the access port for the laparoscope.
● Tilt the operating table slightly head-up (reverse Trendelenburg position) without rotation.
● Retract the lateral segment of the liver. (We use a Silicon Disk with 2-0 nylon. When using a Nathanson liver retractor, fix it from the right side of the patient.)
● The surgeon stands between the legs of the patient, and the assistant, sitting on a chair, arranges the surgical field from the right lateral trocar. All procedures can be performed without changing position.

FIGURE 6.15 Laparoscopic distal gastrectomy (dual stomach-lifting method). (A) Trocar placement and the positions of the operator and assistant. (B) Soloassist attachment. 〇: 12 mm trocar. $: Target organ. White arrow: Direction of the laparoscope. Black arrows: Direction of the surgical instruments.

6.4.6 Laparoscopic colectomy (right-side colon) (Fig. 6.16A and B) (Video, see online for video)

Four-port procedure
● The patient is placed in a supine position with both arms spread.
● Attach Soloassist to the right-side rail of the operating table at the subcostal level.
● Insert a 12 mm trocar for pneumoperitoneum from the subumbilical wound.
● After inspection of the abdominal cavity, insert a 12 mm trocar from the left midclavicular line 3 cm below the costal margin and 5 mm trocars from the bilateral iliac regions.
● Move the probe tip to the umbilical trocar and carry out calibration of the trocar point as the access port for the laparoscope.
● Tilt the operating table in accordance with the steps of the operative procedure (median or lateral approach).
● The surgeon stands on the left side of the patient, with the assistant sitting on a chair arranging the surgical field from the right lateral trocar.

Five-port procedure
● The patient is placed in a supine position with the left arm spread.
● Attach Soloassist to the left-side rail of the operating table at the axilla level.
● Insert another 5 mm trocar from the right hypochondrium for assistant use.

FIGURE 6.16 Laparoscopic right-side colectomy. (A) Trocar placement and the positions of the operator and assistant. (B) Soloassist attachment. 〇: 12 mm trocar. x: 5 mm trocar. $: Target organ. White arrow: Direction of the laparoscope. Black arrows: Direction of the surgical instruments.

FIGURE 6.17 Laparoscopic left-side colectomy. (A) Trocar placement and the positions of the operator and assistant. (B) Soloassist attachment. 〇: 12 mm trocar. x: 5 mm trocar. $: Target organ. White arrow: Direction of the laparoscope. Black arrows: Direction of the surgical instruments.

6.4.7 Laparoscopic colectomy (left-side colon) (Fig. 6.17A and B)

● The patient is placed in a lithotomy position with the left arm spread.
● Hold the legs of the patient with the Levitator.
● Attach Soloassist to the left-side rail of the operating table at the hip joint level.
● Insert a 12 mm trocar for pneumoperitoneum from the supraumbilical wound.
● After inspection of the abdominal cavity, insert a 12 mm trocar from the right iliac region and a 5 mm trocar from the right hypochondrium. In addition, insert another 12 mm trocar at the left iliac region for assistant use and drain placement (four-port procedure).
● Move the probe tip to the umbilical trocar and carry out calibration of the trocar point as the access port for the laparoscope.
● Tilt the operating table head-down (Trendelenburg position) with right-side rotation.
● The surgeon stands on the right side of the patient, with the assistant sitting on a chair arranging the surgical field from the left iliac trocar.

6.4.8 Laparoscopic rectal resection and five-port left-side colectomy (Fig. 6.18A and B) (Video, see online for video)

● The patient is placed in a lithotomy position and both arms are fixed to the body.
● Hold the legs of the patient with the Levitator.
● Attach Soloassist to the left-side rail of the operating table at the axilla level.
● Insert a 12 mm trocar for pneumoperitoneum from the supraumbilical wound.
● After inspection of the abdominal cavity, insert a 12 mm trocar from the right iliac region and a 5 mm trocar from the right hypochondrium. In addition, insert a 5 mm trocar from the left hypochondrium and a 12 mm trocar from the left iliac region for assistant use and drain placement.
● Move the probe tip to the umbilical trocar and carry out calibration of the trocar point as the access port for the laparoscope.
● Tilt the operating table head-down (Trendelenburg position) with right-side rotation.
● The surgeon stands on the right side of the patient, with the assistant sitting on a chair arranging the surgical field from the left-side trocars.

FIGURE 6.18 Laparoscopic rectal resection. (A) Trocar placement and the positions of the operator and assistant. (B) Soloassist attachment. 〇: 12 mm trocar. x: 5 mm trocar. $: Target organ. White arrow: Direction of the laparoscope. Black arrows: Direction of the surgical instruments.

6.4.9 Laparoscopic distal pancreatectomy and splenectomy (Video, see online for video)

● Carry out the same approach as for distal gastrectomy in Section 6.4.5.
● The patient is placed in a lithotomy position with the right arm spread.
● Hold the legs of the patient with the Levitator.
● Attach Soloassist to the left-side rail of the operating table at the nipple level.
● Insert a 12 mm trocar for pneumoperitoneum from the subumbilical wound.
● After inspection of the abdominal cavity, insert a 12 mm trocar from the left midclavicular line 2 cm above the umbilicus and a 5 mm trocar at the upper right umbilical region. In addition, a 12 mm trocar is placed at the right epigastric region for assistant use. (Trocar placement is determined according to the location of the lesion and the size of the spleen.)
● Move the probe tip to the umbilical trocar and carry out calibration of the trocar point as the access port for the laparoscope.
● Tilt the operating table head-up (reverse Trendelenburg position) with right-side rotation.
● The surgeon stands between the legs of the patient, with the assistant sitting on a chair arranging the surgical field from the right epigastric trocar.
● After dissection of the omentum and short gastric vessels, lift the stomach together with the lateral segment of the liver with the Silicon Disk. These procedures can all be carried out without changing position.

6.4.10 Thoracoscopic esophageal resection (Fig. 6.19A–C) (Video, see online for video)

● The patient is placed in the prone position with the right arm raised above the head.
● Attach Soloassist to the left-side rail of the operating table at the iliac crest level.
● After deflating the right lung, insert a 12 mm balloon trocar from the 9th intercostal space just inferior to the scapula tip.
● After inspection of the thoracic cavity, insert 5, 12, and 5 mm balloon trocars from the 3rd, 5th, and 7th intercostal spaces, respectively.
● Move the probe tip to the trocar placed at the 9th intercostal space and carry out calibration of the trocar point as the access port for the thoracoscope.
● The surgeon sits on a chair on the right side of the patient, with the assistant sitting on a chair arranging the surgical field from the third intercostal trocar.
● During lower mediastinal dissection, scope manipulation from the fifth intercostal trocar sometimes provides a preferable operative field. In that case, readjustment of the trocar point is required.

FIGURE 6.19 Thoracoscopic esophagectomy. (A) Trocar placement. (B) Trocar placement and the positions of the operator and the assistant. (C) Soloassist attachment. 〇: 12 mm trocar. x: 5 mm trocar. $: Target organ. White arrow: Direction of the endoscope. Black arrows: Direction of the surgical instruments.

6.5 Clinical experience and discussion

The first experimental study was reported in nose, nasopharynx, and larynx (ENT) surgery [12]. In 2014, results of the clinical application of Soloassist in gynecology and cholecystectomy were reported [11,16], demonstrating reduced human resources, shortened absolute overall staff time, and comfortable surgical procedures without system-specific complications. A series of 1033 laparoscopic surgery cases has also been reported [17]. In 2018 we reported our experience using Soloassist II in 949 laparoscopic and thoracoscopic surgery cases, including 281 emergency operations [13]. There were no system-specific complications and no conversions to a human scope assistant. Although this was a retrospective analysis with differing patient backgrounds, a clear decrease in the number of participating surgeons and a shortened surgical time were confirmed. Emergency surgery at night could also be carried out with fewer human resources, while no additional setup time was needed. To date, we have performed more than 1200 surgeries, not only in gastrointestinal surgery but also in pulmonary resections and urological surgery.

In principle, the coaxial arrangement, with the scope between the operator's arms, is desirable for complicated and delicate procedures and for troubleshooting. In the coaxial position, however, the arm of a human scope assistant becomes entangled with the operator's arms. The scope holder provides more available space and a more comfortable surgical environment for the operator. The reduced frequency of scope cleaning also lessens surgical stress [18–20]. Although various types of robotic scope holders have been developed, reports evaluating their usefulness are sporadic, and they have not been widely used in clinical practice to date. In addition, some robotic scope-guiding systems were too heavy and too bulky, and augmented the surgical workload rather than reducing it. We recognize that Soloassist has some notable features compared with other robotic scope holders. Its joystick is a reliable interface operated by the fingers, which increases dexterity, and both the direction and the distance of movement can be controlled freely. The Soloassist system is simply and compactly designed, so movement of the forceps is rarely restricted, in contrast to other robotic scope holders with large scope-mounting parts around the scope. In laparoscopic surgery, the inclination of the operating table must be changed to create an appropriate surgical field, particularly in colorectal surgery. Soloassist permits easy position change, as it is attached directly to the operating table. Since it can be installed in a short time, it does not affect time spent in the operating room [13].

6.6 Conclusion

The Soloassist system appears to be the easiest to handle and most advantageous robotic scope holder currently available for securing an optimal operative field straightforwardly and intuitively. However, objective data evaluating its efficacy are not yet sufficient for it to gain wide recognition based on scientific evidence. In the near future, because of various social factors, we must consider how to conduct highly accurate surgery with fewer personnel. Compared with personnel costs, the maintenance cost of Soloassist is very small, which can lead to improved efficiency and cost savings for institutions [21]. Further development of active scope holders could therefore play an important role in laparoscopic surgery in the near future.

References

[1] Ninomiya K, Kitano S, Yoshida T, Bandoh T, Baatar D, Matsumoto T, et al. The efficacy of laparosonic coagulating shears for arterial division and hemostasis in porcine arteries. Surg Endosc 2000;14(2):131–3.
[2] Harada H, Kanaji S, Hasegawa H, Yamamoto M, Matsuda Y, Yamashita K, et al. The effect on surgical skills of expert surgeons using 3D/HD and 2D/4K resolution monitors in laparoscopic phantom tasks. Surg Endosc 2018. Available from: https://doi.org/10.1007/s00464-018-6169-1.
[3] Stephan D, Sälzer H, Willeke F. First experiences with the new Senhance telerobotic system in visceral surgery. Visc Med 2018;34(1):31–6.
[4] Mizuno Y, Narimatsu H, Kodama Y, Matsumura T, Kami M. Mid-career changes in the occupation or specialty among general surgeons, from youth to middle age, have accelerated the shortage of general surgeons in Japan. Surg Today 2014;44(4):601–6.
[5] Chen YC, Shih CL, Wu CH, Chiu CH. Exploring factors that have caused a decrease in surgical manpower in Taiwan. Surg Innov 2014;21(5):520–7.
[6] Deedar-Ali-Khawaja R, Khan SM. Trends of surgical career selection among medical students and graduates: a global perspective. J Surg Educ 2010;67(4):237–48.
[7] Marschall JG, Karimuddin AA. Decline in popularity of general surgery as a career choice in North America: review of postgraduate residency training selection in Canada, 1996–2001. World J Surg 2003;27(3):249–52.
[8] Yavuz Y, Ystgaard B, Skogvoll E, Mårvik R. A comparative experimental study evaluating the performance of surgical robots Aesop and Endosista. Surg Laparosc Endosc Percutan Tech 2000;10(3):163–7.
[9] Aiono S, Gilbert JM, Soin B, Finlay PA, Gordan A. Controlled trial of the introduction of a robotic camera assistant (EndoAssist) for laparoscopic cholecystectomy. Surg Endosc 2002;16(9):1267–70.
[10] Takahashi M, Takahashi M, Nishinari N, Matsuya H, Tosha T, Minagawa Y, et al. Clinical evaluation of complete solo surgery with the "ViKY" robotic laparoscope manipulator. Surg Endosc 2017;31(2):981–6.
[11] Gillen S, Pletzer B, Heiligensetzer A, Wolf P, Kleeff J, Feussner H, et al. Solo-surgical laparoscopic cholecystectomy with a joystick-guided camera device: a case-control study. Surg Endosc 2014;28(1):164–70.
[12] Kristin J, Geiger R, Knapp FB, Schipper J, Klenzner T. Use of a mechatronic robotic camera holding system in head and neck surgery. HNO 2011;59(6):575–81 (in German).
[13] Ohmura Y, Nakagawa M, Suzuki H, Kotani K, Teramoto T. Feasibility and usefulness of a joystick-guided robotic scope holder (Soloassist) in laparoscopic surgery. Visc Med 2018;34(1):37–44.
[14] Ohmura Y, Nishi H, Fukuda K, Mano M. Technical modification of laparoscopy-assisted distal gastrectomy. J Jp Soc Endosc Surg 2004;9(2):191–5 (in Japanese).


[15] Ohmura Y, Suzuki H, Kotani K, Teramoto A. Intracorporeal hemi-hand-sewn technique for Billroth-I gastroduodenostomy after laparoscopic distal gastrectomy: comparative analysis with laparoscopy-assisted distal gastrectomy. Mini-invasive Surg 2019;3:4. Available from: https://doi.org/10.20517/2574-1225.2018.69.
[16] Beckmeier L, Klapdor R, Soergel P, Kundu S, Hillemanns P, Hertel H. Evaluation of active camera control systems in gynecological surgery: construction, handling, surgeries and results. Arch Gynecol Obstet 2014;289(2):341–8.
[17] Holländer SW, Klingen HJ, Fritz M, Djalali P, Birk D. Robotic camera assistance and its benefit in 1033 traditional laparoscopic procedures: prospective clinical trial using a joystick-guided camera holder. Surg Technol Int 2014;25:19–23.
[18] Tran H. Robotic single-port hernia surgery. JSLS 2011;15(3):309–14.
[19] Merola S, Weber P, Wasielewski A, Ballantyne GH. Comparison of laparoscopic colectomy with and without the aid of a robotic camera holder. Surg Laparosc Endosc Percutan Tech 2002;12(1):46–51.
[20] Omote K, Feussner H, Ungeheuer A, Arbter K, Wei GQ, Siewert JR, et al. Self-guided robotic camera control for laparoscopic surgery compared with human camera control. Am J Surg 1999;177(4):321–4.
[21] Dunlap KD, Wanzer L. Is the robotic arm a cost-effective surgical tool? AORN J 1998;68(2):265–72.

7 The Sina Robotic Telesurgery System

Alireza Mirbagheri1,2, Farzam Farahmand2,3, Saeed Sarkar1,2, Alireza Alamdar2,3, Mehdi Moradi2 and Elnaz Afshari2

1Tehran University of Medical Sciences, Tehran, Iran
2Sina Robotics and Medical Innovators Co., Ltd., Tehran, Iran
3Sharif University of Technology, Tehran, Iran

ABSTRACT
Sina is a robotic telesurgery system that can be used for performing general surgeries. The system has a reconfigurable surgery console that can be used in sitting, semisitting, or standing surgeon postures. On the surgeon side are surgery handles of scissor, grasper, hammer, or stylus type, which may be exchanged to perform different tasks during a single surgery. The slave subsystem has a modular, open-architecture design that allows surgical robots to be placed at one or both sides of the surgical bed and integrated with each other. Noninterruptive reorientation of the patient during general surgeries of deformable soft tissues is the most important advantage of such a modular and integrable design. The Sina cameraman robot (RoboLens) can smartly track the surgical instruments, which are 5 mm in diameter and fully articulated. The system also provides tremor reduction, movement scaling, and three degrees of freedom of haptic feedback plus pinch force for each master handle. Any available operating room equipment, such as electrosurgery devices and vision systems, may be integrated into the Sina surgical system. Sina also offers both single-use and reusable instruments to reduce the cost of surgeries, which is one of the main bottlenecks in the generalization of robotic surgery. Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00007-4 © 2020 Elsevier Inc. All rights reserved.


7.1 Background

The move from traditional open surgeries to minimally invasive surgery (MIS) has brought many valuable advantages for patients, such as shorter hospital stays, outpatient treatment, less pain, less trauma, less bleeding, lower infection rates, and faster recovery [1]. MIS refers to any surgical procedure carried out with endoscopic devices through small ports, with indirect observation of the surgical area. In this regard, laparoscopic surgery is a primary form of minimally invasive surgery in the intraabdominal cavity and is now widely considered a preferred choice for various types of operations. During laparoscopic surgeries, the abdominal cavity is insufflated with CO2 gas, and long, narrow surgical instruments and a miniature video camera (laparoscope) are inserted into it through small incisions and trocars. The camera provides images of the interior of the abdomen, enabling the surgeon to explore the internal organs and perform surgical maneuvers through endoscopic vision and instrument handling [2]. Fig. 7.1 demonstrates typical laparoscopic and open surgery methods. In contrast to these valuable advantages for patients, and especially considering the surgeon's role and operating posture when implementing laparoscopic maneuvers through instrument handling, laparoscopic surgery suffers from several serious drawbacks, including the lack of tactile sensing, which deprives the surgeon of a great deal of important information [3]. Tactile sensing, routinely used during open surgery to explore tissues and organs, provides important data such as stiffness, elasticity, and tissue surface texture. Also, because of the straight, long stems of conventional laparoscopic instruments, surgeons often report back and/or neck pain after a few years of laparoscopic surgery experience [4]. Given these drawbacks, robotic surgery systems can offer a suitable ergonomic answer to the problems of laparoscopic surgery.
Although robotics was first applied in industry, robotic technologies are now used across many branches of science. Medicine, and more specifically surgery, has been influenced significantly by robotic systems. Robotic surgery systems have brought valuable enhancements to surgical procedures through improved precision, stability, and dexterity. Robotic surgery refers to computer-assisted "robotic" technologies used to extend the surgeon's ability to carry out various surgical maneuvers. Since robotic surgery systems exist to improve surgical outcomes, they should offer advantages over unassisted humans in the successful completion of the operative task. Robotic surgery systems share the valuable advantages mentioned above for MIS; nevertheless, they are not perfect [5]. They enhance the dexterity of surgeons in several ways: for example, instruments with increased degrees of freedom (DoF) improve the surgeon's ability to manipulate tissues and organs, and surgeon tremor can be compensated in the end-effector movement through appropriate hardware and software filters. Moreover, increased magnification and maneuverability help surgeons maintain a stable visual field. Robotic technologies also offer surgeons a comfortable, ergonomically optimal operating position: surgeons no longer have to stand throughout the surgery and do not tire as quickly as in open surgeries [6].

FIGURE 7.1 Typical laparoscopic (left) and open (right) surgery methods.

The Sina Robotic Telesurgery System Chapter | 7


- Hardware components: Manipulators are the most important hardware components of a robotic surgery system. Manipulators are electromechanical systems, equipped with sensors and actuators, that hold or precisely move the surgical instrument under computer control. The remote center of motion (RCM) is the most common kinematic architecture of surgical manipulators and is a characteristic that distinguishes surgical robots from industrial ones. Surgical manipulators use the RCM to pivot instruments about a fixed point in space, normally located on the instrument itself [8]. Another important hardware component is the image acquisition device, for example, video, infrared, ultrasound, X-ray, or magnetic resonance imaging systems.
- Software components: Computer software is the other essential component of robotic surgery systems. Software provides the link between the "data world" of medical images, sensors, and databases and the physical world of surgical actions. This makes it possible to plan and execute surgical interventions precisely and predictably, using both real-time and presurgical information about the patient [8].
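The RCM constraint described above can be made concrete with a small sketch: for a straight instrument pivoting about a fixed trocar point, the required shaft direction and insertion depth follow directly from the desired tip position. The function name and coordinate conventions below are illustrative only and are not taken from any particular system.

```python
import math

def rcm_pose(rcm, tip_target):
    """For a straight instrument constrained to pivot about a fixed remote
    center of motion (the trocar point), return the unit shaft direction
    and the insertion depth that place the tip at tip_target.
    Coordinate conventions and names are illustrative."""
    v = [t - r for t, r in zip(tip_target, rcm)]
    depth = math.sqrt(sum(c * c for c in v))
    if depth == 0.0:
        raise ValueError("tip target coincides with the RCM point")
    axis = tuple(c / depth for c in v)  # unit vector along the shaft
    return axis, depth

# Trocar at the origin; target 4 cm lateral and 3 cm deep (meters)
axis, depth = rcm_pose((0.0, 0.0, 0.0), (0.04, 0.0, -0.03))
```

Because the pivot point is fixed on the instrument, only the two orientation DoFs and the insertion depth are needed to reach any tip position, which is why RCM manipulators can be built with few actuated joints.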

Telesurgery, also known as remote surgery, is another form of modern surgery and refers to any surgical procedure carried out at a distance between surgeon and patient. This form of surgery is based on robotic surgical technologies and overcomes geographical limitations. Telesurgery systems are robotic surgery systems capable of receiving and transmitting surgical data in real time, and are also called tele-operative robotic surgery systems. Several robotic surgery systems have been developed to date; some are commercialized and others may be in the near future. The da Vinci Surgical System (Intuitive Surgical, Inc., Sunnyvale, CA, United States), with its special EndoWrist instruments, is one of the best-known tele-robotic surgery systems in the world [9]. With this system, the surgeon sits at the surgeon console in an ergonomic position and performs surgery using the master handles. Statistics on robotic-assisted laparoscopic surgery show strong uptake in specific procedures such as robotic-assisted radical prostatectomy, but much less in general surgery, especially on the small intestine and other deformable intraabdominal organs. This may be due to several limitations of robotic surgery, including the difficulty of reorienting the patient during surgery, which is necessary in most general intraabdominal procedures, especially when operating on the small intestine. Another limitation is the lack of tactile sensing when grasping delicate soft tissues, which may be injured during robotic surgery in its absence. The lack of suitable instruments for grasping large, delicate intraabdominal organs such as the bladder and spleen may be a further obstacle to generalizing robotic surgery across all fields of general surgery.
Likewise, the lack of a proper method for safely grasping soft tissues is one of the main drawbacks of robotic instruments when interacting with delicate, soft organs. Over the past decades, the Sina group has focused on these limitations and sought to introduce a new robotic surgery system design that can generalize robotic surgery to more common applications beyond them. Fig. 7.2 shows the milestones in the development of the currently available Sina robotic surgery systems. Work started in 2003 with a simple project to design and fabricate a cameraman robot called "RoboLens," which performed its first human trial in 2005. The second generation of RoboLens was then commercialized and was used in more than 1000 human laparoscopic cholecystectomies in four university hospitals by 2009. The Sina group began prototyping the Sina robotic telesurgery system in 2010, and after two prototypes the first clinical model, Sinastraight, was introduced in 2013. The second and newest model, Sinaflex, was introduced at the end of 2018, with flexible instruments and several patented innovations.

7.2 System overview

Sina is a complex robotic platform that can be used for demanding minimally invasive operations in the pelvic, abdominal, and thoracic regions. Two versions have been presented for the Sina robotic


Surgical robots can be classified in several different ways: by manipulator design (e.g., kinematics, actuation, DoF); by level of autonomy (e.g., preprogrammed, image-guided, tele-operated, synergetic); by targeted anatomy or technique (e.g., cardiac, intravascular, percutaneous, laparoscopic, endoluminal, microsurgical); by intended operating environment (e.g., operating room, imaging scanner, hospital floor); and by their role in computer-integrated surgery systems (e.g., surgical planners, surgical assistants) [7]. Like all computer-controlled systems, robotic surgery systems rely for acceptable performance on the interaction of appropriate hardware and software, the general components of which are described earlier.



FIGURE 7.2 Milestones in the development of the Sina robotic surgery systems.

telesurgery system: Sinastraight and Sinaflex. While the first version, Sinastraight, implements surgical maneuvers with rigid, rod-shaped instruments, the second, Sinaflex, works with flexible laparoscopic instruments. Both systems are introduced separately below.

7.2.1 Sinastraight

Sinastraight is a robotic telesurgery system that can be used both for performing abdominal surgery locally in an ergonomic posture for the surgeon and for implementing surgical maneuvers remotely over the internet or other communication channels. The system has two main subsystems: a master robotic console at the surgeon's side and a slave robotic system at the patient's side, with two or three surgical robotic arms installed on the sides of a dedicated surgery bed. A robotic cameraman, the RoboLens (standalone model), is also integrated into the system to capture intraabdominal images and send them to the surgeon's master console. Fig. 7.3 shows the two main subsystems of Sinastraight: the surgeon console and the slave robotic system at the patient's side. The master robots record the surgeon's hand movements and transmit them to the patient-side slave robots, which mimic those movements in real time. Simultaneously, the slave robots measure the interaction forces/torques between robot and patient, including the pinch forces under the instrument jaws, and transmit them to the surgeon-side master robotic system. As a result, all tool-tissue interaction forces are fed back to the surgeon's hands. In the Sinastraight system, the master interfaces are exactly the same as laparoscopic instrument handles, with which laparoscopic surgeons are already very familiar. The Sinastraight surgery console includes two 5 DoFs force-feedback master robots with a grounded RCM gimbal mechanism that mimics the conventional laparoscopic situation for surgeons familiar with this type of surgery. Other operating room equipment, such as the electrosurgery device and the cameraman robot, may also be remotely controlled from the surgeon-side master console through several footswitches and miniature keypads on the master handles. Fig. 7.4 shows this console in use during a robotic laparoscopic surgery with the Sinastraight system. The slave surgery subsystem of Sinastraight consists of two to three surgical robotic arms plus one camera holder, explained in the following. The surgical robotic arms of the Sinastraight system are 5 DoFs manipulators based on a spherical RCM mechanism and use straight surgical instruments. Fig. 7.5 shows a surgical robotic arm of the Sinastraight system. Each robot is installed on a 5 DoFs passive mechanism (three Cartesian translations plus pan and tilt rotations) to position and orient the RCM point at the patient's abdominal incision, and has one passive linear joint for exchanging instruments.
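The master-slave exchange described above (hand motion forward, tool-tissue and pinch forces back) can be sketched as a single control cycle. The function and its simple sign-flip force reflection are illustrative assumptions, not the published Sina control law.

```python
def teleop_step(master_delta, tip_force, pinch_force, scale=1.0):
    """One control cycle of a simplified position-forward / force-backward
    bilateral teleoperation scheme (the system runs a 1 kHz local loop).
    Illustrative sketch only; not the published Sina control law."""
    # forward path: (optionally scaled) hand motion becomes tool motion
    slave_delta = [scale * d for d in master_delta]
    # backward path: measured tool-tissue and jaw pinch forces are
    # reflected to the surgeon's hand and finger loops
    hand_force = [-f for f in tip_force]
    grip_force = -pinch_force
    return slave_delta, hand_force, grip_force

# e.g., a 1 mm hand motion at 0.5 scaling commands a 0.5 mm tool motion
cmd, hand_f, grip_f = teleop_step([1.0, 0.0, 0.0], [0.0, 0.0, 2.0], 0.5,
                                  scale=0.5)
```

Running both paths in every cycle is what makes the loop bilateral: position flows surgeon to patient while force flows patient to surgeon.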


FIGURE 7.3 Two main subsystems of Sinastraight including the surgeon console (left) and the slave robotic system at the patient’s side (right).

FIGURE 7.4 Surgeon’s console of Sinastraight including two 5 DoFs fully force feedback master robots with laparoscopic handle types.

The Sinastraight system also uses the RoboLens cameraman robot, which may be controlled through foot pedals from the surgeon's side or may automatically track the surgical instruments without any human control command. RoboLens may also be used separately, outside the Sina surgery system, as an assistant robot to hold and maneuver the laparoscopic lens [10]. It employs an effective low-cost mechanism, with a minimum number of actuated DoFs, enabling spherical movement about an RCM positioned at the insertion point of the laparoscope stem. Kinematic analysis shows a high manipulability measure for the system, with left/right movements governed directly by rotation of the first rotary actuator, and zoom and up/down movements by the simultaneous motions of the linear and second rotary actuators. Fig. 7.6 shows the RoboLens (standalone model), which can track the laparoscopic


FIGURE 7.5 Surgical robotic arm of the Sinastraight robotic surgery system with five active DoFs.

FIGURE 7.6 RoboLens (standalone model) cameraman robot with smart instrument tracking feature through a marker-free image processing method.


7.2.2 Sinaflex

The Sinaflex system is a new design of the previous version, the Sinastraight, which can use both straight and flexible surgical instruments. As with the previous version, this system has two main parts, the master robotic surgery console and the slave surgical robotic subsystem. Fig. 7.7 shows the Sinaflex system.

TABLE 7.1 Technical points of the Sinastraight robotic surgery system.

Master robotic surgeon console
- Total dimensions (L × W × H): 180 × 95 × 150 cm³
- Total weight: 210 kg
- No. of total active DoFs: 10 motorized joints (five for each master robot)
- No. of total passive DoFs: six joints plus two 6-DoF articulated arms for holding monitors
- Local communication frequency: 1 kHz
- Main monitor resolution: Full HD (1080 × 1920)

Slave robotic surgery system
- Total dimensions (L × W × H): 200 × 220 × max. 215 cm³
- Total weight: 260 kg
- No. of total active DoFs: 16 motorized joints (five for each surgery robot, three for the cameraman robot, and three for the surgery bed)
- No. of total passive DoFs: 13 joints (six for each surgery robot and one for the cameraman robot)
- Local communication frequency: 1 kHz
- Endoscope resolution: Full HD (1080 × 1920)
- Movement resolution: 1 µm in each direction at no-load operation
- Pinch force sensing resolution: 0.1 N
- Interaction force sensing resolution: 0.5 N


instrument using a marker-free image processing method that detects the laparoscopic instrument in the laparoscopic view against the natural intraabdominal background. In conventional laparoscopic surgery, because both of the surgeon's hands are engaged with the surgical instruments, the endoscope must be manipulated by an assistant, and the degree of coordination between surgeon and assistant is a problem. Holding an endoscope is a static activity: the assistant tires quickly, leading to hand tremor, unwanted or erroneous image motions, and poor surgeon-assistant coordination. The counterintuitive hand movements imposed by the fulcrum effect at the incision site aggravate the problem. Furthermore, the surgical assistant is usually a surgeon who could be engaged in other surgical tasks with his/her other hand. The endoscope lens also frequently touches tissues and gets dirty when the assistant loses concentration, and the resulting poor image quality can affect the surgeon's performance, especially during fine hand motions such as suturing [8]. The workspace can also be cumbersome for the surgeon because of the space occupied by the assistant [11,12]. It has been demonstrated that assistant robots in laparoscopic and robotic surgery give promising results in comparison with conventional laparoscopic surgery. Well-designed robots can assist surgeons without limiting their human capabilities, while the robots' extra benefits can be used to enhance the quality and outcomes of surgery [13,14]. The use of RoboLens as a robotic camera holder, compared with a human camera holder, has been investigated in 40 patients with a single ovarian cyst undergoing laparoscopic ovarian cystectomy [15]. The study was performed as a randomized, single-blind, placebo-controlled, parallel-group trial.
The results demonstrated that the surgeons felt less fatigue and that surgeries concluded sooner in the robotic-assisted group. The quality of images obtained during operations with RoboLens was also superior or equal to that obtained with a human assistant. The authors concluded that RoboLens, as a low-cost robotic camera holder, is a safe, time- and energy-saving system that helps provide improved vision of the surgical site. Table 7.1 summarizes the technical points of the Sinastraight robotic surgery system.
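As a toy illustration of marker-free instrument detection of the kind cited in [10], a grey metallic shaft can be separated from the reddish intraabdominal background by a simple brightness/saturation test, and the resulting centroid used to steer the camera. The thresholds and the detection rule are assumptions for illustration only; the actual RoboLens algorithm is described in [10].

```python
def detect_tool_centroid(image):
    """image: rows of (r, g, b) tuples in 0-255.
    Flags bright, low-saturation pixels (metallic grey shaft) against the
    reddish intraabdominal background and returns the centroid (x, y) of
    the flagged region, or None if no tool-like pixels are found.
    Thresholds are illustrative assumptions."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, (r, g, b) in enumerate(row):
            v = max(r, g, b)
            sat = 0.0 if v == 0 else (v - min(r, g, b)) / v
            if v > 120 and sat < 0.25:  # bright and nearly grey
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def camera_command(centroid, width, height, gain=1.0):
    """Proportional pan/tilt command that drives the detected tool
    centroid toward the image center."""
    cx, cy = centroid
    return (gain * (cx - width / 2), gain * (cy - height / 2))
```

A closed loop of this kind (detect, then re-center) is one way an autonomous camera holder can follow the instrument with no human command.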


FIGURE 7.7 The Sinaflex robotic telesurgery system.

FIGURE 7.8 The Sinaflex master robotic surgery console.

The Sinaflex master robotic surgery console is reconfigurable. Fig. 7.8 shows its different parts, including the main screen, ergonomic handles, arm rest, setting touch panel, setting push buttons, and foot pedals. The surgeon may sit behind the console and adjust it to his or her best ergonomic posture. For long procedures in which the surgeon prefers to stand to reduce fatigue, the console may be preadjusted and reconfigured to a standing posture with ergonomic parameters specific to that surgeon. The surgeon can also use the console in a semisitting posture, which has been shown to reduce fatigue during long periods of manual operation. Surgeon comfort was the main aim in designing this master console, to provide the best ergonomic posture for surgeons. The system can memorize preset configurations and reconfigure from a sitting to a standing posture during surgery in less


FIGURE 7.9 Reconfiguration of the master robotic surgery console of Sinaflex.

TABLE 7.2 Technical points of the Sinaflex master robotic surgery console.

Master robotic console
- Console type: ergonomic, two-postural (sitting and standing)
- Total dimensions (L × W × H): 110 × 95 × 100–170 cm³
- Total weight: 120 kg
- No. of total active DoFs: 11 motorized joints
- No. of total passive DoFs: six encoded joints plus three joints for holding monitors
- Local communication frequency: 10 kHz
- Main monitor type: IPS, eye-care
- Main monitor resolution: 4K (3840 × 2160 pixels)
- Remote setting panel: SD touch panel
- Posture setting panel: push button
- Automatic setting parameters: height (based on tool handle): 75–120 cm; distance between the two master robots: 35–80 cm; arm support: 65–75 cm
- Manual setting parameters: monitor height (based on its base): 0–20 cm; monitor depth (based on its base): 0–20 cm; monitor angulation (based on its base): ±10 degrees
- Left and right master robot type: 7 DoF, fully back-drivable; 4 DoF force feedback
- Master robot DoF types: three force-feedback DoFs to control surgical instrument position and interaction forces; two encoded DoFs to control the surgical tool orientation; one encoded DoF for 360-degree infinite tool rotation; one force-feedback DoF to control tool grasping and pinch force on soft tissues
(Continued)


than 5 seconds. The surgeon may therefore operate in both sitting and standing postures to reduce fatigue during a long operation (Fig. 7.9). Table 7.2 summarizes the technical points of the Sinaflex master robotic surgery console. The Sinaflex slave surgical robotic subsystem includes a 3 DoFs surgical bed (height plus pan and tilt rotations) that may be modularly integrated with up to three surgical robotic arms and one cameraman robot. Fig. 7.10 illustrates the Sinaflex slave surgical robotic subsystem. This modular placement of the surgical robots lets surgeons design their own surgical architecture by reconfiguring the placement of the robots on one side or both


TABLE 7.2 (Continued)
- Handle types (optional): open surgery instrument type; stylus type; ergonomic type
- Workspace of each handle: 20 × 20 × 20 cm³
- Accuracy of position recording: ±0.1 mm
- Accuracy of orientation recording: ±0.1 degree
- Resolution of position recording: 0.01 mm
- Resolution of orientation recording: 0.01 degree
- Repeatability of position recording: 0.1 mm
- Repeatability of orientation recording: 0.1 degree
- Movement indexing (clutch): up to 20 cm in each direction
- Movement scaling: up to 10× scale-down
- Range of force feedback in each direction: 10 N
- Range of pinch force feedback: 5 N
- Accuracy of directional force feedback: ±1 N
- Accuracy of pinch force feedback: ±0.5 N
- Resolution of directional force feedback: 0.5 N
- Resolution of pinch force feedback: 0.25 N
- Repeatability of directional force feedback: ±0.5 N
- Repeatability of pinch force feedback: ±0.25 N
- Foot pedals: controlling the laparoscopic camera; activating the electrocautery; switching the electrocautery instrument; switching between active instruments (two of three)
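The "movement indexing (clutch)" and "movement scaling" entries above can be illustrated with a toy master-to-slave position mapper: while the clutch is released the surgeon can re-center the master handle without moving the slave, and engaged motion is scaled down. Class and method names are illustrative, not taken from the Sina software.

```python
class ScaledClutchMapper:
    """Toy master-to-slave position mapper for the 'movement indexing
    (clutch)' and 'movement scaling' behaviors: releasing the clutch lets
    the surgeon reposition the master handle without moving the slave;
    engaged motion is scaled down (here 5:1; up to 10:1 is listed).
    Names and structure are illustrative assumptions."""

    def __init__(self, scale_down=5.0):
        self.scale = 1.0 / scale_down
        self.slave = [0.0, 0.0, 0.0]
        self.last_master = None
        self.engaged = False

    def clutch(self, engaged, master_pos):
        """Engage or release the clutch, re-indexing at the current pose."""
        self.engaged = engaged
        self.last_master = list(master_pos)

    def update(self, master_pos):
        """Per-cycle update: accumulate scaled master increments when engaged."""
        if self.engaged and self.last_master is not None:
            for i in range(3):
                self.slave[i] += self.scale * (master_pos[i] - self.last_master[i])
        self.last_master = list(master_pos)
        return list(self.slave)
```

Incremental (delta-based) mapping is what makes indexing possible: the slave tracks motion, not absolute handle position, so the handle can be re-centered at will.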

FIGURE 7.10 The Sinaflex slave surgical robotic subsystem.


FIGURE 7.11 Different configurations of the Sinaflex slave surgical robotic subsystem.


FIGURE 7.12 The Sinaflex 7 DoFs surgical robot.

sides of the surgery bed. Indeed, the surgeon can choose his/her own surgical architecture or select one from the Sinaflex offer list. Fig. 7.11 shows four sample configurations of three or four arms of the slave surgery subsystem. Each surgical robot in the Sinaflex slave surgical robotic subsystem is a 7 DoFs, essentially spherical mechanism that can use both straight and flexible instruments. As with Sinastraight, these robots are installed on a 5 DoFs passive mechanism to position their RCM points at the patient incisions. Fig. 7.12 shows a Sinaflex 7 DoFs surgical robot.
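With the spherical RCM architecture just described, the first three active DoFs (two orientation angles plus insertion) determine the tool-tip position about the fixed RCM point. A minimal forward-kinematics sketch follows, under an assumed angle convention; the actual Sinaflex kinematic parameters are not published here.

```python
import math

def spherical_rcm_fk(pan, tilt, insertion, rcm=(0.0, 0.0, 0.0)):
    """Forward kinematics of the first three active DoFs of a spherical
    RCM mechanism: two orientation angles (radians) plus tool insertion
    (meters) past the RCM point. The angle convention is an assumption."""
    # unit shaft direction: pan about z, then tilt away from straight down
    dx = math.sin(tilt) * math.cos(pan)
    dy = math.sin(tilt) * math.sin(pan)
    dz = -math.cos(tilt)  # shaft points into the abdomen at zero tilt
    return tuple(c + insertion * d for c, d in zip(rcm, (dx, dy, dz)))
```

Because the RCM point is fixed by the passive setup joints, only these three active DoFs move the tip; the remaining wrist and grasp DoFs act distally at the instrument tip.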


Table 7.3 summarizes the technical points of the Sinaflex slave surgical robotic subsystem. The subsystem also uses a new model of the Sina group cameraman robot, the RoboLens (bedside model). Like the previously presented version, this robot may be integrated into the Sinaflex slave surgical robotic subsystem or used as an assistant robot to hold and maneuver the laparoscopic lens as a surgeon's third hand. Using the bedside RoboLens, surgeons can perform solo-surgery procedures much faster than with a human assistant. The main difference between the bedside and standalone models is that the bedside model may be attached to the surgical bed, so the surgeon may reorient the bed during surgery without any interruption. Other valuable advantages of the bedside model include smooth six-direction movement, a stable view with no unwanted motion or vibration, reduced surgery time, and a reduction in supernumerary staff. Fig. 7.13 illustrates the bedside RoboLens; its installation and operation on a human body are shown in Fig. 7.14.

TABLE 7.3 Technical points of the Sinaflex slave surgical robotic subsystem.

Slave surgery subsystem
- Surgery bed type: straight, with adjustable head support and longitudinal double rail
- Total dimensions (L × W × H): 200 × 220 × max. 215 cm³
- Total weight: 260 kg
- Surgery bed total active DoFs: three motorized joints
- Surgery bed movement range: height: 77–107 cm; pan angle: −15 to +15 degrees; tilt angle: −15 to +15 degrees
- Surgery bed total manual DoFs: one (head support)
- Quantity of surgical robots: two or three (optional)
- Each surgery robot, total active DoFs: seven motorized joints
- Surgical robot active DoF details: two-DoF spherical mechanism for laparoscopic tool orientation; one DoF for tool insertion; one DoF for tool-tip rolling; two DoFs for tool wrist pitch and yaw motion; one DoF for grasping
- Each surgery robot, total manual DoFs: five manual adjusting DoFs
- Surgery robot manual movement range: longitudinal displacement: 175 cm; vertical displacement: 50 cm; lateral displacement: 36 cm; pan rotation: ±70 degrees; tilt rotation: ±30 degrees
- Cameraman robot total active DoFs: three motorized joints
- Cameraman robot total manual DoFs: two encoded compatible passive joints
- No. of total passive DoFs of the slave surgical robotic subsystem: 13–18 (depending on the quantity of surgery robots)
- Local communication frequency: 10 kHz
- Remote setting panel: SD touch panel
- Workspace of each surgical robot: 20,000 cm³
(Continued)


TABLE 7.3 (Continued)
- Accuracy of surgical robot's position: ±0.1 mm in each direction at no-load operation
- Accuracy of surgical robot's orientation: ±0.1 degree in each direction at no-load operation
- Resolution of surgical robot's position: 0.01 mm
- Resolution of surgical robot's orientation: 0.01 degree
- Repeatability of surgical robot's position: 0.1 mm in each direction at no-load operation
- Repeatability of surgical robot's orientation: 0.1 degree in each direction at no-load operation
- Range of force detection in each direction at instrument tip: 10 N
- Range of pinch force detection: 40 N
- Accuracy of directional force detection: ±1 N
- Accuracy of pinch force detection: ±0.5 N
- Resolution of directional force detection: 1 N
- Resolution of pinch force detection: 1 N
- Repeatability of directional force detection: ±1 N
- Repeatability of pinch force detection: ±1 N
- Instrument types: single/multiuse straight instruments; single-use flexible instruments
- Electrosurgery type: accepts monopolar (not included)

FIGURE 7.13 The Sinaflex slave surgical robotic subsystem cameraman robot: RoboLens (bedside model).

7.3 Challenges and future directions

Robotics has a variety of applications, including industrial production, education, exploration, sport, and entertainment. In this regard, surgical robots have many advantages compared to surgeons, including having better accuracy in operations, not getting tired, being able to process data simultaneously from multiple sensory systems, being easier to




FIGURE 7.14 Installation of Robolensbedside to hold and manipulate a laparoscopic lens.

sterilize, being stable, and having fewer tremors. Nonetheless, robotic surgery systems still have many limitations, the most important of which are listed here:
1. High cost: The cost of robotic surgery systems is very high. With technology improvements and more experience as new companies enter the market, prices may fall.
2. Large size: These systems have fairly large footprints and cumbersome robotic arms. In already crowded operating rooms, it may be hard for both the surgical team and the robot to fit.
3. Precise insertion points: If the incisions are placed incorrectly, the large robot arms can interfere with each other outside the body. Furthermore, incorrect incision placement can lead to collisions of tools or inaccessible regions inside the abdomen.
4. Problems of visualization: Vision-related problems, such as injuries to organs, are not resolved by such surgical robotic systems. Because of the restricted range of motion of the camera, including during the initial insertion of tools and the frequent tool changes, some regions remain invisible.
5. Limited ability to use information from disparate sensors: Although complex 3D imaging information can be preprocessed to allow execution of very accurate tasks, robots have a limited ability to use data from disparate sensors to control their behavior during the course of a procedure.
As in other target areas where robotics is being considered, surgical robots are ideally intelligent devices that can collaborate with or substitute for human operators. Current efforts and future trends in developing medical robotic systems are therefore concerned with endowing them with human senses, including sight, touch, and hearing.

References
[1] Fullum T, Ladapo J, Borah B, Gunnarsson C. Comparison of the clinical and economic outcomes between open and minimally invasive appendectomy and colectomy: evidence from a large commercial payer database. Surg Endosc 2010;24(4):845–53.


[2] Najarian S, Afshari E. Applications of robots in surgery. In: Tiwari R, Shukla A, editors. Intelligent medical technologies and biomedical engineering: tools and applications. New York: IGI Global; 2010. p. 41–59.
[3] Dargahi J, Najarian S, Ramezanifard R. Graphical display of tactile sensing data with application in minimally invasive surgery. Can J Electr Comput Eng 2007;32:151–5.
[4] Avery DMJ, Avery DM, Reed MD, Parton JM, Marsh EE. Back and neck pain in gynecologists. Am J Clin Med 2010;7(1):5–10.
[5] Najarian S, Fallahnezhad M, Afshari E. Advances in medical robotic systems with specific applications in surgery – a review. J Med Eng Technol 2011;35:19–33.
[6] Erguer R, Forkey DL, Smith WD. Ergonomic problems associated with laparoscopic surgery. Surg Endosc 1999;13(5):466–8.
[7] Krebs HI, Hogan N, Aisen ML, Volpe BT. Robot-aided neurorehabilitation. IEEE Trans Rehabil Eng 1998;6(1):75–87.
[8] Krebs HI, Volpe BT, Williams D, Celestino J, Charles SK, Lynch D, et al. Robot-aided neurorehabilitation: a robot for wrist rehabilitation. IEEE Trans Neural Syst Rehabil Eng 2007;15(3):327–35.
[9] Guthart GS, Salisbury Jr KJ. Intuitive telesurgery system: overview and application. In: Proceedings of the IEEE International Conference on Robotics and Automation, ICRA 2000, vol. 1, San Francisco, CA; 2000. p. 618–21.
[10] Mirbagheri A, Farahmand F, Meghdaria A, Karimian F. Design and development of an effective low-cost robotic cameraman for laparoscopic surgery: RoboLens. Sci Iran B 2011;18(1):105–14.
[11] Munoz VF, et al. A medical robotic assistant for minimally invasive surgery. In: Proceedings of the IEEE International Conference on Robotics and Automation, ICRA; 2000.
[12] Munoz VF, et al. On laparoscopic robot design and validation. Integr Comput-Aid Eng 2003;10(3):211–29.
[13] Funda J, et al. Control and evaluation of a 7-axis surgical robot for laparoscopy. In: Proceedings of the IEEE International Conference on Robotics and Automation, ICRA; 1995.
[14] Dowler NJ, Holland SRJ. The evolutionary design of an endoscopic telemanipulator. IEEE Rob Autom Mag 1996;3(4):38–45.
[15] Taslimi SH, Samiee H, Jafari A, Asgari Z, Mirbagheri A, Jafari A, et al. Comparing the operational related outcomes of a robotic camera holder and its human counterpart in laparoscopic ovarian cystectomy: a randomized control trial. Front Biomed Technol 2014;1(1):42–7.

8

STRAS: A Modular and Flexible Telemanipulated Robotic Device for Intraluminal Surgery

Florent Nageotte, Lucile Zorn, Philippe Zanne and Michel De Mathelin
ICube Laboratory, Strasbourg, France

ABSTRACT Intraluminal procedures are very attractive techniques for surgery in the digestive tract, but they are very demanding for surgeons when using manual instruments. STRAS has been developed to allow simple intraluminal surgery. It consists of a telemanipulated motorized system based on a flexible endoscope and flexible instruments. It provides two effectors with three degrees of freedom (DoFs) and a controllable overtube, for a total of 10 DoFs. The modularity of the slave system ensures good adaptation to the surgical workflow. The motorized endoscope can be used separately for navigation before being set up as an overtube. The slave robot is then teleoperated using master interfaces specifically designed to intuitively control all available DoFs. Two versions of STRAS have been extensively used in preclinical trials. The feasibility of intraluminal surgery has been demonstrated, as well as advantages in terms of safety and dissection speed with respect to manual systems. Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00008-6 © 2020 Elsevier Inc. All rights reserved.


8.1 Context of intraluminal surgery in the digestive tract

Colorectal cancer is one of the most common cancers (fourth in incidence, second in prevalence), with 1.9 million new cases and almost 900,000 deaths per year (data source: Globocan [1]). However, survival rates are high when it is diagnosed and treated early. Novel minimally invasive techniques with intraluminal approaches have recently been developed as alternatives to open or laparoscopic access to the colon. Intraluminal surgery requires no external incisions: the surgical instruments are brought to the target through natural orifices (mouth or anus), and the treatment is performed from inside the lumen. Intraluminal access offers important benefits for the patient: no visible scars, reduced risk of infection, and a shorter hospital stay. Moreover, it allows direct visualization and localization of the tumors, which are located in the mucosa and submucosa of the digestive tract. Depending on their stage of development, tumors can be treated using endoscopic mucosal resection or endoscopic submucosal dissection (ESD) [2,3]. Intraluminal surgery in the digestive tract also includes other procedures, such as the treatment and resection of polyps, treatment of the muscular layers of the esophagus with techniques such as per-oral endoscopic myotomy [4], and treatment of obesity. Depending on the patient's location, such intraluminal procedures may be performed by gastroenterologists or endoscopist surgeons. Intraluminal surgery is very challenging for physicians. For these procedures, dual-channel flexible endoscopes (DCE) from gastroenterology are normally used, together with passive instruments introduced through the channels [3,5].
These tools are, however, not well suited for complex surgical tasks: they do not allow separate control of the surgical effectors and the endoscopic camera, the instruments always remain parallel (no triangulation), and they lack degrees of freedom (DoFs). The lack of effective tissue retraction means and of a stable view are other described limitations. These procedures are also affected by high rates of complications, such as perforation of the muscular layers [6]. This limits the use of intraluminal surgery mainly to Eastern countries [7,8], where the prevalence of digestive cancers is very high (especially in Japan [1]) and where surgeons receive intensive training in manipulating flexible endoscopes. In Western countries, only a few skilled endoscopists routinely perform such procedures.

To help physicians, companies providing medical flexible endoscopes have developed more complex platforms, which essentially combine several flexible systems to provide more DoFs [5]. But practice has shown that these devices are difficult to control manually, and they are rarely used in clinical routine [5]. Conventional robotic systems such as the da Vinci (Intuitive Surgical) have been used to perform minimally invasive surgery in the throat via transoral access (transoral robotic surgery, see for instance [9]) and in the rectum via transanal access [10]. However, the use of rigid devices limits their application to operating areas very close to the natural access and makes manipulation in a restricted area very challenging. There is therefore a need for intraluminal surgery instruments that provide sufficient DoFs for surgical tasks and can be easily controlled by endoscopists or surgeons. This need can be fulfilled by telemanipulated flexible robotic systems such as the Single port and Transluminal Robotic Assistant for Surgeons (STRAS), which provides numerous DoFs and simultaneous bimanual control of instruments from comfortable master consoles.

8.2 Recent technical advances in intraluminal surgery

In order to facilitate intraluminal surgery, additional devices such as external arms have been used in addition to conventional endoscopes, particularly for improving traction [11–14]. Additional mobilities of the instruments have also been proposed, for instance in the R-Scope (Olympus) [15]. The development of intraluminal procedures since 2008 has led to the development of endoscopic platforms adapted to surgical tasks inside the digestive tract or to no-scar surgery (NOTES: natural orifice transluminal endoscopic surgery) (see [16–18]). Yeung and Gourlay [5] reviewed these platforms in 2012. The recurrent features of these mechanical, manually actuated systems are:

- instruments with additional DoFs, usually driven by Bowden cables;
- a separate overtube allowing the endoscope to be used independently of the instruments;
- DoFs either on the overtube or on the instruments in order to provide triangulation;
- shape-locking systems to provide better stability to the overtube during surgery.

These systems solve a large part of the problems at the distal side, inside the patient. However, the difficulties of manipulation at the proximal side are not solved, and because of the increased number of DoFs the complexity is generally even worse than for DCE. At least two specialists are hence generally required, which is not widely affordable or available. In this context robotics is an interesting approach, as telemanipulation allows the complexity of the mobilities at the distal side to be decoupled from the complexity of manipulation at the proximal side. Robotization can bring many advantages:

- comfort of use and control by providing telemanipulation;
- the possibility for a single user to control all DoFs, since motorized endoscopes can be held in position by supporting arms and motors;
- interfaces adapted to the users;
- motion scaling.

The development of NOTES, of intraluminal surgery, and of single-port access (SPA) laparoscopic surgery has generated tremendous activity in flexible robotic systems [19]. SPA, in particular, has seen many recent advances, with robotic systems already commercially available (da Vinci SP [20]), CE marked (TransEnterix SPIDER [21]), clinically tested (Virtual Incision [22]), or close to clinical trials (Titan Medical SPORT [23]). It should be noted that the technical challenges of SPA, NOTES, and intraluminal surgery have common roots but also have specificities. Robots for SPA are generally shorter and rely on a rigid shaft [24], and therefore do not allow navigation deep in the gastrointestinal (GI) tract. On the contrary, for intraluminal surgery the most common concept has been to build robotic systems around existing flexible endoscopes. Indeed, endoscopes provide a large set of field-proven features, such as biocompatibility and fluid transmission for insufflation, camera cleaning, and smoke aspiration. In the following we briefly present some of the robotic systems developed specifically for intraluminal surgery.

The first robotic system developed for intraluminal navigation in the digestive tract was probably the ViaCath from EndoVia [25], which combined active catheters (used in vascular surgery) with endoscopic systems and the master console from the Laprotek (a robotic system for laparoscopic surgery). This robot was brought to preclinical trials, but no quantitative results have been published. A second version was proposed based on the observed limitations, in which the flexible instruments were replaced by articulated miniature cable-driven instruments [26]. However, these developments were later discontinued.

The Master And Slave Transluminal Endoscopic Robot (MASTER) was initially developed at Nanyang Technological University in Singapore [27,28]. The slave system is based on a dual-channel endoscope (Olympus), which is fitted with miniature arms with discrete DoFs. This architecture allows high lateral forces (up to 5 N) to be applied with the instruments. A specific master interface with an exoskeleton architecture was developed for controlling the slave robot. The project was transferred to a startup, EndoMaster, in 2013. The robot was successfully tested for ESD in animals in the colon, esophagus, and stomach [29]. This is the first robot of this kind that has been used in clinical trials in humans, for stomach ESD [30]. Despite these successes, MASTER has drawbacks: the endoscope control remains manual, and the distal arms are attached to the endoscope tip. Therefore an additional overtube is needed, and if the instruments have to be changed or cleaned the endoscope must be retrieved.

The University of Twente in the Netherlands developed a platform for intraluminal surgery that makes use of a conventional endoscope fitted with a lateral channel and articulated flexible instruments from the Anubiscope (Karl Storz) [31]. The slave system is telemanipulated by commercial interfaces (SensAble Phantom Omni) [32]. A simplified version aimed at navigation and diagnostics was tested in vivo [33], but the complete surgical platform was not.

K-FLEX is a surgical platform developed at the Korea Advanced Institute of Science and Technology (KAIST) in South Korea [34]. This is a telemanipulated robot with two flexible arms and a specifically developed endoscope serving as an overtube, with a total of 20 DoFs. All flexible subsystems are composed of several consecutive bending sections. The master interfaces allow the motions of the slave instruments to be reproduced. K-FLEX was tested in throat surgery in animals. It has recently been transferred to a startup called EasyEndo [35].

The FLEX robotic system is commercialized by Medrobotics [36]. It is a highly articulated endoscope based on the highly articulated robotic probe (HARP) system originally developed at Carnegie Mellon University [37]. The user controls the insertion of the endoscope and the direction of the distal tip by using a master interface based on the Omega 3 (Force Dimension). The robotic system implements a "follow-the-leader" concept by relying on the advancement of two concentric overtubes. This allows navigating smoothly in tortuous environments without applying constraints on the surrounding tissues. External channels allow conventional flexible instruments to be used. FLEX is available for transoral [38] and colorectal surgery. Currently, these instruments are not motorized and the system is mainly intended for safe and easy navigation, rather than for surgical procedures.


ESD Cyclop is a robotic prototype developed at Imperial College London [39] relying on conventional flexible instruments and an endoscope, but arranged in an original manner. It consists of an expandable mechanical system, called a scaffold, attached at the distal side of a conventional endoscope. The scaffold can be expanded by using cable-driven actuation from the proximal side. The scaffold then allows the lateral displacement of conventional flexible instruments passing through it to be controlled by actuating cables from the proximal side.

There is currently no commercial platform allowing complete robotic intraluminal surgery. The development of the STRAS robotic system presented in this chapter aims at providing a robotic platform adapted to intraluminal surgery, with the ability for a surgeon to easily and intuitively control all DoFs during a complete procedure such as ESD.

8.3 The single port and transluminal robotic assistant for surgeons robotic system

8.3.1 Basic concepts and short history

8.3.1.1 The Anubiscope platform

STRAS is a robotic system based on the manual Anubiscope platform developed by Karl Storz (Tuttlingen, Germany). The Anubiscope platform is a CE-marked, totally flexible system, initially developed for NOTES surgery by the Institut de Recherche sur les Cancers de l'Appareil Digestif (IRCAD) and Karl Storz in the scope of the Anubis project [40,41]. It consists of a main endoscope and a set of instruments, of which two can be used simultaneously. The instruments and endoscope have a flexible passive shaft and are equipped with a steerable distal part, which is controlled from the proximal side using tendons. Each tendon runs inside a sheath inside the shaft of the endoscope/instruments. The sheath is attached at the proximal side of the bendable part of the endoscope/instrument but is free inside the shaft, so as to preserve the good flexibility of the shaft. In the distal bendable part, the tendons exit the sheath and run inside lateral cavities of the vertebrae, which form the external structure of the bendable part. The tendons are attached at the tip of the bending part.

The main endoscope has an overall length of 55 cm and a diameter of 16 mm. It is equipped with a camera at the distal tip providing 760 × 570 pixel images at 25 Hz. The distal part of the endoscope can be deflected along two orthogonal directions and is actuated by two antagonistic pairs of tendons made of braided steel arranged in quadrature. The bendable distal part is 185 mm long. The endoscope acts as a guide (also called an overtube) and provides three working channels for the insertion of surgical instruments. One channel is located at the core of the shaft of the scope (diameter 3.2 mm), while two channels (called lateral channels, diameter 4.3 mm) are located inside, but at the side of, the shaft, and terminate in a mobile distal shell acting as a deviation system.
The shell is closed during the introduction of the endoscope inside the patient, hence facilitating motion inside the lumen while maintaining a limited vision of the environment; the shell is opened once the endoscope has been brought to the operating site, and it then deflects the instruments from the main direction of the endoscope. The shell also prevents tissues and organs from falling onto the instruments and into the field of view of the camera. The endoscope is also equipped with all features of standard gastroscopes and colonoscopes: an internal channel for fluids (air for insufflation, aspiration for removing smoke, water for cleaning the camera) and a lighting system.

The lateral channels can receive specific bending instruments. These instruments have long flexible shafts (length 900 mm) and a bendable distal part (length L = 18 mm, diameter 3.5 mm). A pair of antagonistic tendons made of braided steel allows the distal part to be bent in one direction. These instruments are hollow and can receive inserts equipped with distal tools. Tools can be of a mechanical (grasper, scissors) or electrical type (knife, ball, hook). The insert is attached to the instrument by screwing it on the distal part of the shaft. Therefore the insert must be placed onto the instrument shaft before the insertion of the instrument inside the body of the patient. For mechanical instruments, the effector (grasper, scissors) can be opened and closed by translating a push-pull cable. Overall, the Anubis platform has 10 DoFs (plus the graspers' opening/closing motions).

During the Anubis project [2005–09, funded by the Fonds Unique Interministériel (FUI)], the ICube Laboratory developed a first prototype of a telemanipulation device based on conventional flexible endoscopes [42]. The partners of the project decided to pursue efforts in the development of instruments and robotic tools for minimally invasive surgery by focusing on intraluminal surgery.
During the ISIS project (2009–13, funded by FUI) the ICube Laboratory developed the first version of the STRAS robotic system by using the novel Anubiscope platform and by building on the experience acquired during the Anubis project [43]. STRAS is the acronym of Single port and Transluminal Robotic Assistant for Surgeons. This name notably indicates that the developments were made at the request of surgeons and primarily for use by surgeons. STRAS is also the beginning of Strasbourg, and a familiar and shorter way to refer to the town where the work and the system were developed.


8.3.2 Mechatronic design of single port and transluminal robotic assistant for surgeons

8.3.2.1 Rationale for robotization

The Anubis platform provides multiple advantages over conventional endoscopes for realizing surgical operations in the GI tract: stability, triangulation, multiple channels, etc. [41,44]. However, as for most manual platforms described in Section 8.2, many difficulties remain for surgeons. At least two persons must cooperate to handle all DoFs: the main surgeon manipulates both instruments, while an assistant controls the endoscope (see Fig. 8.2). Good coordination between both operators is thus mandatory for performing precise tasks. As in standard laparoscopic surgery, this coordination is difficult to obtain. It is even more critical than in laparoscopic surgery because, due to the tree-like structure of the system, the motion of the camera

FIGURE 8.1 Distal side of the Anubiscope/STRAS with the main components (blue arrows), DoFs (green arrows), and dimensions (orange arrows). A mechanical instrument is in the right channel and an electrical instrument in the left channel. The distal deviation system for the channels is open. DoFs, Degrees of freedom.

FIGURE 8.2 Manual use of the Anubiscope. A surgeon controls the instruments and has to collaborate with an assistant who holds and controls the endoscope.


STRAS is built onto a modified, shorter version of the Anubiscope adapted for intraluminal use. For this application, tissue retraction is not required, and therefore the part of the shell in front of the channel ends has been removed. In this manner, the instrument deflection from the main endoscope axis is kept, but the overall dimensions of the distal tip are decreased (Fig. 8.1).


also impacts the motion of the instruments. Moreover, the space to be shared by the four hands is much more limited, resulting in uncomfortable and tiring positions.

The conceptual idea for STRAS was to design a teleoperated modular platform that could be easily set up at the operating table side. It was also decided to keep most of the original design of the Anubis platform in order to preserve the air and water sealing of the endoscope, and the fluid management (water for irrigation and camera cleaning, air for insufflation and aspiration). The architecture of the slave robot is patented [45].

8.3.2.2 Overview

The complete slave system consists of a passive mobile cart with two motorized DoFs, which holds a positioning system for the whole setup, called a cradle. The cart carries an electrical cabinet containing all motor controllers, the power supply, and the EtherCAT bus (see Section 8.3.5). The cradle provides two DoFs and supports both the endoscope and the instruments. The endoscope can be attached to the cradle by clamping it at the distal side of its handle. The instruments are mounted into two motorized modules called "T/R modules," which are permanently attached at the proximal side of the cradle.

The kinematic architecture of STRAS is shown in Fig. 8.3. The endoscope has two independent DoFs, each T/R module provides two DoFs for the instruments, and each motorized instrument has an independent DoF of motion (bending) plus grasper opening/closing (for mechanical instruments). Each module and the resulting motions are described in detail in the following paragraphs. Table 8.1 also summarizes the motors and transmission technology used in each module. The global view of the slave system with its main components is shown in Fig. 8.4.

Only the surgical part of the gesture is intended to be robotized. The navigation phase, which only requires the manipulation of the endoscope, is very similar to standard manipulation in the GI tract and takes only a minimal part of the whole operating time (around 2 minutes in the reported tests; see Section 8.4.2). However, the workspace of the instruments alone (see Section 8.3.3) is not sufficient to cover the complete surgical workspace. Moreover, it is useful for the user to be able to change the position of the camera with respect to the tissues during the procedure. In order to avoid the need for manual manipulation of the endoscope during the procedure, the motions of the endoscope therefore have to be motorized. However, only short-range translations are necessary during teleoperation and surgical tasks.
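As a compact cross-check of this DoF distribution (assembled from the description above and Fig. 8.3, not taken from the authors' software), the teleoperated axes can be enumerated and tallied against the total of 10 DoFs stated in the abstract:

```python
# DoF distribution of the STRAS slave system, per the text; grasper
# opening/closing is counted separately, as in the chapter.
STRAS_DOFS = {
    "cradle": ("translation FwBw", "rotation Theta"),
    "endoscope deflection": ("alpha_x", "alpha_y"),
    "left T/R module": ("translation t_z", "rotation theta_z"),
    "right T/R module": ("translation t_z", "rotation theta_z"),
    "left instrument": ("bending beta",),
    "right instrument": ("bending beta",),
}

TOTAL_DOFS = sum(len(axes) for axes in STRAS_DOFS.values())
assert TOTAL_DOFS == 10  # matches the 10 DoFs announced in the abstract
```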

8.3.2.3 Modules

In order to have a modular slave system, the functionalities have been separated into elementary and coherent subsets, called modules. There are four kinds of modules and a cart:

- The cart is a mobile positioning system providing three passive DoFs (positioning and rotation in the horizontal plane) and two motorized DoFs (height and inclination) directly controlled by a pendant. The ranges of positioning (height H ∈ [800, 1400] mm, inclination angle I ∈ [−30°, 30°]) allow the use of the system for different medical accesses, notably intraluminal and single-port surgery.
- The cradle consists of a shallow U-shaped metallic arm mounted onto a translation and rotation mechanism. It holds the handle of the endoscope at its distal side and two T/R modules at its proximal side. The geometry of the cradle

FIGURE 8.3 Scheme of the kinematic architecture of the slave system of STRAS with the distribution of DoFs among modules. DoFs, Degrees of freedom.


TABLE 8.1 Actuation technology, measurement systems, and calibration methods for the motorized teleoperated axes of single port and transluminal robotic assistant for surgeons.

| Subset | Motion | Actuation technology | Calibration (quick calibration) | Measurement |
|---|---|---|---|---|
| Cradle | Translation FwBw | Faulhaber 2642 W024 CXR + brake + Faulhaber BS22-1.5 ball screw linear transmission | Homing capacitive sensor | Incremental encoder IE1024L (1024 lines/revolution) mounted onto motor |
| Cradle | Rotation Θ | Faulhaber 2237 S024 CXR with planetary gearhead 22/7 43:1 + HPC P55-60 angle transmission worm gear | Homing capacitive sensor | IE1024L mounted onto motor |
| Endoscope | Deflections (both identical) αx, αy | Faulhaber 2224 U024 SR with planetary gearhead 20/1 43:1 + bevel gears angle transmission + spur gears | Hard end stop + position error tracking (initial straight position) | IE1024L mounted onto motor |
| T/R modules | Translation tz | Faulhaber 2224 U024 SR with planetary gearhead 20/1 66:1 + pulley-belt linear transmission | Hard end stop + position error tracking (manual positioning at hard stop) | IE1024L mounted onto motor |
| T/R modules | Rotation θz | Faulhaber 2224 U024 SR with planetary gearhead 20/1 23:1 + pulley-belt rotative transmission | Hard end stop + position error tracking (manual positioning at 0°) | IE1024L mounted onto motor |
| Instrument modules | Bending β | Faulhaber 2224 U024 SR with planetary gearhead 20/1 43:1 + HPC BLHB20-2 angle transmission gearbox + pulley-belt linear transmission | Hard end stops on both sides and current monitoring | IE1024L mounted onto motor |
| Instrument modules | Grasper γ | Faulhaber 2224 U024 SR with planetary gearhead 20/1 43:1 + HPC BLHB20-2 angle transmission gearbox + pulley-belt linear transmission | Hard end stops on opening and position error tracking | IE1024L mounted onto motor |

FIGURE 8.4 Global view of the STRAS slave system ready for teleoperation when all modules have been mounted.


TABLE 8.2 Ranges, velocities, and forces for the distal side.

| Subset | DoF | Motion | Range | Velocity | Force (N) |
|---|---|---|---|---|---|
| Cradle | FwBw | Translation | [0, 100] mm | 60 mm/s | N/A |
| Cradle | Θ | Rotation | Infinite | 30°/s | N/A |
| Endoscope | αx, αy | Deflection | [−90°, +90°] | 60°/s | N/A |
| Instruments | tz | Translation | [0, 75] mm | 75 mm/s | 20 |
| Instruments | θz | Rotation | [−200°, +200°] | 250°/s | 4 |
| Instruments | β | Bending | [−90°, +90°] | 360°/s | 0.9 |
| Instruments | γ | Grasper | [0°, 60°] | 120°/s | 3 |

Ranges and velocities for bending correspond to the bending angle. DoF, Degree of freedom. Source: Adapted from data in Zorn L, Nageotte F, Zanne P, Légner A, Dallemagne B, Marescaux J, et al. A novel telemanipulated robotic assistant for surgical endoscopy: preclinical application to ESD. IEEE Trans Biomed Eng 2018;65(4):797–808, © 2016 IEEE, https://doi.org/10.1109/TBME.2017.2720739.

FIGURE 8.5 Close view of the T/R modules, with the right instrument installed.

ensures the correct alignment of the instruments with the entrance of the channels of the endoscope. The cradle provides the rotation of the endoscope and T/R modules assembly around the main axis of the endoscope, and the translation (with limited range, see Table 8.2) of the assembly along the same axis.
- One endoscope module, which comprises the endoscope and the motorization of its two deflections. The motorization is mounted onto the endoscope handle and replaces the conventional manipulation wheels (see Fig. 8.5). Metallic telescopic tubes are mounted at the entrance of the channels to guide the instruments and prevent their shafts from kinking during translation motions. A joystick attached to the motor box allows the user to control the bending of the distal part of the endoscope during insertion and navigation, while standing near the patient.
- Instrument modules, which consist of flexible instruments directly coupled with the motorization for the bending and for grasper opening (for mechanical instruments). The motorization is located inside cylindrical housings with a bull-nose shape. The housings replace the manual handle of the original Anubis instruments. It must be noted that manual manipulation of the robotic instrument is only required during the insertion of the instruments inside the guide, once the system has been brought to the operating site. The shaft is connected to the distal side of the housing box with a cable gland. The proximal extremities of the cables that drive the bending motion are attached to two carriages mounted onto a toothed belt. Two instrument modules can be used simultaneously, one in each lateral channel of the endoscope. Instrument modules are fitted with a metallic ring mounted on ball bearings used for rotation guidance (see the next paragraph on T/R modules). There are two kinds of instrument modules: modules for mechanical instruments, with grasper opening/closing actuation, and modules for electrical instruments, which are equipped with an electrosurgery connection. The push-pull rod of mechanical instruments is attached onto a carriage, similarly to the cables for bending actuation. For both kinds of instruments the effector can be detached and changed. Several kinds of graspers exist (toothed, fenestrated) with different sizes, as well as several electrical effectors (hook, ball). The change of effector is, however, not possible during a medical procedure. Therefore as many instrument modules as effectors necessary for the surgical procedure should be available at the beginning of the procedure. The instrument shaft and the cables can be disconnected from the motor box for maintenance and for the change of instrument shafts after several procedures.
- Two T/R modules are responsible for the translation and rotation of the instrument modules inside the channels of the endoscope. These modules are mounted on the cradle. They are designed to receive the instrument modules and allow easy instrument module change. They have an L-shape, with a hole at their rear side for passing electrical cables toward the electrical cabinet. A quick-closing system allows the instrument modules to be clamped for rotation guidance. Three mistake-proofing pins at the back side ensure that the instrument modules have a known and fixed orientation in the T/R modules. The actuation for translation is located at the bottom part of the module and moves it forward and backward with respect to the cradle. The rotation actuation is located at the rear part and rotates the instrument around its shaft axis.

8.3.3 Features of the slave system

All elements of actuation (motors, gears, end-stops) have been chosen in order to provide a robotic system with capabilities similar to those of the manual Anubiscope platform. The features of the slave robot in terms of workspace, kinematics, and forces have been assessed in the laboratory using an external measurement system composed of stereoscopic cameras and a one-DoF force sensor (load cell MEAS XFTC 300). The features were measured for different configurations of the endoscope: straight, slightly deflected (αx = 30°), and more strongly deflected (αx = 60°). No significant effects were observed on the forces, velocities, or ranges of the instruments, or on the repeatability.

Fig. 8.6 shows the proximal (motor) position to distal DoF relations for one instrument when actuating each DoF separately. Hysteresis is observed for rotation, and nonlinearities for bending. These are due to static friction of the instruments inside the endoscope working channels (for rotation) and of the cables inside their sheaths (for bending). However, these nonlinear effects are less important than in the initial version of STRAS [43]. This improvement can be attributed to the better cable tensioning allowed by the new bending mechanism.

FIGURE 8.6 Typical quasi-static distal position versus proximal position characteristics for separate motor actuations of the instrument (red: bending, green: translation, blue: rotation). Input and output have been normalized so that the expected behaviors are unit slope lines (dotted black).
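The rotation hysteresis visible in Fig. 8.6 is characteristic of friction in tendon–sheath and shaft–channel transmissions. A classic first-order way to reproduce such a curve is a play (backlash) operator; the sketch below is a generic illustration under that assumption, not the model identified by the authors:

```python
def play_operator(inputs, width, y0=0.0):
    """Backlash ("play") operator with dead-band `width`: the output only
    follows the input once the input has moved more than width/2 beyond
    the current output, producing a hysteresis loop on up/down sweeps."""
    y = y0
    out = []
    for u in inputs:
        if u - y > width / 2.0:      # pushing the upper flank
            y = u - width / 2.0
        elif y - u > width / 2.0:    # pushing the lower flank
            y = u + width / 2.0
        # otherwise: inside the dead-band, the output stays put
        out.append(y)
    return out
```

Sweeping the input up and then down, e.g. `play_operator([0, 1, 2, 1, 0], width=1.0)`, yields `[0.0, 0.5, 1.5, 1.5, 0.5]`: the flat segment on the way down is the dead-band seen as hysteresis in the proximal-to-distal characteristic.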

8. STRAS Robotic Device

G

131

132

Handbook of Robotic and Image-Guided Surgery

The repeatability of the robot has been assessed by analyzing the tip position of the instrument for identical motor positions obtained during arbitrary trajectories and for a given endoscope configuration. It is 3.9 ± 1.89 mm, with maximum values of 6.7 mm. For the user, the repeatability represents the possible variability in distal position for a given configuration of the master interface. It is within the range of precision requirements usually stated by surgeons for minimally invasive surgery. For telemanipulation, no particular strategy is used to compensate these nonlinearities. Indeed, the particular mapping used between the master interfaces and the slave system (see Section 8.3.4) drastically limits their impact, and the feedback of the endoscopic view allows the user to easily compensate for the residual effects. Nevertheless, for automatic motions, or if other master interfaces had to be used, compensation would be desirable. Because the nonlinear effects depend on the robot configuration, advanced techniques should be used; several approaches have been developed for this purpose [47].

Joint ranges and velocities expressed at the distal tip of the instrument are reported in Table 8.2, together with the forces that can be applied with the instruments onto tissues. For measuring forces, the grasper of one instrument was used to pull a string knot attached to a one-DoF force sensor. The instrument was placed in different configurations and the joints were individually actuated while the resulting force was measured. It was observed that the forces were mainly limited by the flexibility of the instrument's bendable tip, except for translation, where the grasping force was the limiting factor. Ranzani et al. [48] measured the forces necessary to lift and pull the mucosa by using sensorized laparoscopic instruments in the scope of transanal endoscopic microsurgery in the rectum. They obtained about 1 N for both directions.
When using STRAS, pulling is mainly achieved using the translation of the instrument, while lifting is performed either using rotation when the instrument is bent or using bending when the instrument is straight. Table 8.2 shows that at least 0.9 N can be applied in all directions, without any assistance from the main endoscope, which is very close to the requirements, given measurement uncertainties. As discussed in Section 8.4.2, the forces were sufficient in practice for intraluminal surgery in the rectum and colon of pigs.

Fig. 8.7 shows the theoretical workspace of the instruments: truncated cylinders [45] of radius 32 mm and height 75 mm. The points in the workspace can be reached with at most four discrete orientations [45], except when the instrument is in the straight configuration (β = 0). Then the rotation θz allows the continuous self-rotation of the instrument, thus allowing free orientation of the grasping plane for mechanical instruments. However, this capability comes at the cost of losing the holonomy of the instrument's position. Indeed, the local translation of the tip of the instrument is then constrained to a plane. This singularity has a significant impact on the teleoperation of the instruments [45].

FIGURE 8.7 Theoretical workspace (blue/green) and superimposed actual trajectories of the left (red) and right (black) instrument. The blue triangle shows the field of view of the endoscopic camera. Adapted from Zorn L, Nageotte F, Zanne P, Légner A, Dallemagne B, Marescaux J, et al. A novel telemanipulated robotic assistant for surgical endoscopy: preclinical application to ESD. IEEE Trans Biomed Eng 2018;65(4):797–808, © 2016 IEEE, https://doi.org/10.1109/TBME.2017.2720739.
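As a simple illustration of the geometry in Fig. 8.7, a membership test for the instrument workspace can be sketched as follows. It models only the 32 mm radius cylinder over the 75 mm translation range and ignores the ellipsoidal truncation at the far end, so it is an approximation, not the exact workspace of the system; `z_offset` (the tip depth at zero translation) is an assumed parameter:

```python
import math

def in_cylindrical_workspace(x, y, z, radius=32.0, height=75.0, z_offset=0.0):
    """Approximate workspace test (mm): a cylinder of radius 32 mm whose
    axis z_ch starts at z_offset and extends over the 75 mm translation
    range of the instrument (truncation by the ellipsoid is ignored)."""
    return (math.hypot(x, y) <= radius) and (z_offset <= z <= z_offset + height)
```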


By relying on a standard forward kinematic model [49], the position of the tip of the instrument in a frame attached to the exit of the endoscope channel can be expressed as a function of the actuated DoFs of the instrument $q_{inst} = (\beta, \theta_z, t_z)$ as:

$$
{}^{ch}P = \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}
= \begin{bmatrix}
\left(\dfrac{L}{\beta}\,(1-\cos\beta) + d\sin\beta\right)\cos\theta_z \\[1ex]
\left(\dfrac{L}{\beta}\,(1-\cos\beta) + d\sin\beta\right)\sin\theta_z \\[1ex]
\dfrac{L}{\beta}\,\sin\beta + d\cos\beta + t_z
\end{bmatrix} \text{ for } \beta \neq 0,
\qquad
{}^{ch}P = \begin{bmatrix} 0 \\ 0 \\ L + d + t_z \end{bmatrix} \text{ for } \beta = 0,
$$

where L is the length of the bending section and d is the length of the rigid part at the tip of the instrument. Combining instrument bending and rotation allows spanning an almost ellipsoidal surface, with equal x and y axes. Combined with the translation, this creates a cylinder whose axis is defined by $z_{ch}$, the direction of the channel exit, truncated by an ellipsoid defined by the maximum translation (see Fig. 8.7). The orientation of the tip is given by the rotation matrix ${}^{ch}R_{inst} = R_z(\theta_z)\,R_x(\beta)$, which is generally linked with the instrument position in the workspace. Most of the points in the workspace can be reached with four discrete orientations [50]. When the instrument is in the straight configuration ($\beta = 0$), ${}^{ch}P$ is independent of $\theta_z$, while ${}^{ch}R_{inst}$ is a rotation of axis $z_{ch}$ and angle $\theta_z$. Therefore it is possible to freely orient the instrument around its axis, for example to orient the grasping plane for mechanical instruments or the direction of hooks.

The Jacobian, which relates the joint velocities $\dot{q}_{sl}$ to the linear velocity of the tip of the instrument ${}^{ch}\dot{P}$, can be written as

$$
J = \begin{bmatrix}
\partial X/\partial\beta & -Y & 0 \\
\partial Y/\partial\beta & X & 0 \\
\partial Z/\partial\beta & 0 & 1
\end{bmatrix} \quad \text{for } \beta \neq 0.
$$

For the straight positions ($\beta = 0$) it can be obtained by a second-order development of the previous expression:

$$
J(0, \theta_z, t_z) = \begin{bmatrix}
\cos\theta_z\,(L/2 + d) & 0 & 0 \\
\sin\theta_z\,(L/2 + d) & 0 & 0 \\
0 & 0 & 1
\end{bmatrix}.
$$

The determinant of J is null for $\beta = 0$. The angle $\theta_z$ then has no effect on the position of the tip of the instrument. Hence P can only be moved in the current plane of curvature of the instrument, and the instrument is nonholonomic in this configuration. If $\beta \neq 0$, $\det(J) = \tfrac{1}{2}\,\partial(X^2 + Y^2)/\partial\beta$ is independent of $t_z$ and $\theta_z$. Another singularity appears when the tip of the instrument is at the limit of the cylindrical workspace. Then

$$
J = \begin{bmatrix}
0 & -Y & 0 \\
0 & X & 0 \\
\partial Z/\partial\beta & 0 & 1
\end{bmatrix}
$$

and there is a redundancy of motion along the $z_{ch}$ axis, which can be obtained using $t_z$ or $\beta$. The Cartesian control of the instrument tip requires handling these singularities. This is discussed in Section 8.3.4 for the design of the master interfaces.

8.3.4 Control of the robot by the users

In this section, we present how the different motions of the slave system are controlled by the users. The mobile cart can be freely and manually positioned horizontally. In addition, the vertical position of the cradle and its inclination can be modified using a four-button pendant, which directly controls the actuators of the cart. These displacements are only used at the beginning of the procedure and do not require telemanipulation. All other DoFs can be teleoperated using the master interfaces, as described in the following. Additionally, the bending of the endoscope can be operated using the embedded joystick attached to the motor box of the endoscope module. This control is mainly used during the navigation step of the procedure, when the endoscope is manually handled without the instruments.

134

Handbook of Robotic and Image-Guided Surgery
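The instrument forward kinematics and Jacobian given earlier can be sanity-checked numerically. The sketch below is illustrative: the function names and the values of L and d are assumptions, not taken from the chapter. The numeric Jacobian reproduces the analytic column structure (second column (−Y, X, 0), third column (0, 0, 1)) and the determinant collapses toward zero as β → 0, the straight, nonholonomic configuration.

```python
import math

L = 20.0  # length of the bending section (mm) -- illustrative value
d = 5.0   # length of the rigid tip (mm) -- illustrative value

def tip_position(beta, theta_z, t_z):
    """Constant-curvature forward kinematics of one instrument (channel frame)."""
    if beta == 0.0:
        return (0.0, 0.0, L + d + t_z)
    r = (L / beta) * (1.0 - math.cos(beta)) + d * math.sin(beta)
    return (r * math.cos(theta_z),
            r * math.sin(theta_z),
            (L / beta) * math.sin(beta) + t_z + d * math.cos(beta))

def numeric_jacobian(beta, theta_z, t_z, eps=1e-6):
    """3x3 Jacobian of the tip position w.r.t. (beta, theta_z, t_z),
    estimated by central finite differences."""
    q = (beta, theta_z, t_z)
    cols = []
    for i in range(3):
        qp = [v + (eps if j == i else 0.0) for j, v in enumerate(q)]
        qm = [v - (eps if j == i else 0.0) for j, v in enumerate(q)]
        pp, pm = tip_position(*qp), tip_position(*qm)
        cols.append([(a - b) / (2.0 * eps) for a, b in zip(pp, pm)])
    # Transpose the columns into rows of J.
    return [[cols[j][i] for j in range(3)] for i in range(3)]

def det3(m):
    """Determinant of a 3x3 matrix."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
```

Evaluating `numeric_jacobian` at a bent configuration and at a nearly straight one shows the well-conditioned and near-singular cases side by side.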

The master console of STRAS v2 consists of two dedicated master interfaces and two screens providing the endoscopic view and the graphical user interface (GUI). The activation of coagulation and cutting for the electrosurgery instruments is controlled by a pedal board (part of the electrosurgery generator).

8.3.4.1 Dedicated master interfaces

Master interfaces can have a significant impact on the usability of a medical robotic system, especially because the limited accuracy and repeatability of flexible cable-driven systems require the user to correct positioning based on the endoscopic image. As shown for simple endoscope manipulation tasks, an adequate master interface design can provide good control [33] or, conversely, induce manipulation difficulties [51]. STRAS v1 was telemanipulated using commercial interfaces (Omega.7, Force Dimension). Different master/slave mappings were tested between the available motions of the master interfaces and the slave system, but surgical users pointed out drawbacks for each of them. In particular, the kinematic singularities of the instruments at the center and limit of their workspace proved difficult to handle with a standard mechanical architecture [50]. To solve this problem, we developed specific master interfaces, whose architecture is patent pending [52].

8.3.4.2 Control of the instruments

The master interfaces are mainly dedicated to the control of the instruments, which is the most demanding task; the control of the endoscope is considered a secondary feature. The proposed master console consists of two identical three-DoF control handles, each aimed at controlling one of the instruments of the slave system. Each comprises a handle shaft, designed to be gripped by the operator, mounted onto an L-shaped moving bracket (see Fig. 8.8) 200 mm long and 285 mm high. The bracket can rotate and translate with respect to a support structure along a single horizontal axis. The translation (range = [0, 90] mm) and rotation of the bracket (range = [−160°, +160°]) are used to control, respectively, the translation tz and the rotation θz of the associated instrument. The shaft of the handle is connected to the bracket by a revolute joint. The angle B of the shaft with respect to the plane containing the bracket controls the bending β of the tip of the corresponding instrument. The master-to-slave mapping is detailed in Table 8.3. A scaling factor for the bending actuation can be chosen by adjusting the mechanical end stops of the rotation of the handle with respect to the bracket. The joints of the master interface are passive, but the handle is statically balanced using counterweights attached to the back part of the handle (to prevent unwanted bending) and to the top of the vertical bar of the L-shaped bracket (to prevent unwanted rotation). An offset distance (doff = 70 mm in Fig. 8.8) between the shaft of the control handle and the rotation axis of the shaft with respect to the mounting bracket induces a circular trajectory for the handle displacement, very similar to the trajectory of the tip of the slave instruments during bending.
Furthermore, in order to replicate the kinematic singularity of the instrument in the straight configuration, the hand of the operator on the control handle is, in this configuration, located on the axis of the rotation between the bracket and the support structure. With this architecture, the operator has access to DoFs similar to those at the flexible distal end of the instrument. It is therefore sufficient for the operator to perform with the handle the movements he or she wishes to realize at the tip of the slave instrument. In other words, the operator moves the hand as if holding the end of an instrument, which makes the control of the endoscopic instruments very intuitive. Moreover, since the kinematics of the control handle is realized by an articulated structure composed of successive rigid segments, the resulting device is simple, inexpensive, and robust. Each control handle is also fitted with a trigger, operated with the index finger, which is used in an impulsion mode to control the opening/closing of mechanical instruments.

8.3.4.3 Control of the main endoscope

A secondary feature of the master interfaces is the ability to control the endoscopic camera intuitively without releasing the control of the instruments. For controlling the motions of the endoscope and of the whole platform, each control handle is fitted with a small four-way joystick (north/south, west/east) mounted on top of the handle. Each joystick is operated with the thumb and simultaneously controls two DoFs. The position of the joystick sets the velocity of the associated DoFs: when the joystick is released, it returns to its rest position and the associated DoFs stop moving.


TABLE 8.3 Control of the master interfaces and mapping to the slave DoFs (degrees of freedom).

DoF | Master movement                 | Range                                         | Master/slave mapping
----|---------------------------------|-----------------------------------------------|-------------------------------
T   | Translation                     | [0; Tmax] = [0; 90] mm                        | tz = (tz,max/Tmax) · T
B   | Rotation (for bending)          | [−Blim; Blim], Blim adjustable in [40°; 170°] | β = (βmax/Blim) · B
R   | Rotation around horizontal axis | [−160°; 160°]                                 | θz = R
G   | Trigger for grasper             | {0; 1} (off/on)                               | γ = γmax − γprev if G = 1
WEl | Left joystick—west/east         | {0; 1} (off/on)                               | dαx/dt = Vα · WEl
NSl | Left joystick—north/south       | {0; 1} (off/on)                               | dαy/dt = Vα · NSl
WEr | Right joystick—west/east        | {0; 1} (off/on)                               | dΘ/dt = VΘ · WEr
NSr | Right joystick—north/south      | {0; 1} (off/on)                               | d(FwBw)/dt = VFwBw · NSr

Source: Adapted from data in Zorn L, Nageotte F, Zanne P, Légner A, Dallemagne B, Marescaux J, et al. A novel telemanipulated robotic assistant for surgical endoscopy: preclinical application to ESD. IEEE Trans Biomed Eng 2018;65(4):797–808, © 2016 IEEE, https://doi.org/10.1109/TBME.2017.2720739.
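The position mappings of Table 8.3 are plain linear scalings. A minimal sketch follows; the master-side ranges T_MAX and B_LIM follow Table 8.3, while the slave-side ranges TZ_MAX and BETA_MAX are hypothetical placeholders, since their actual values depend on the instrument hardware:

```python
T_MAX = 90.0      # master translation range (mm), per Table 8.3
B_LIM = 160.0     # adjustable master bending end stop (deg), within [40, 170]
TZ_MAX = 70.0     # slave translation range (mm) -- assumed value
BETA_MAX = 180.0  # maximum instrument bending (deg) -- assumed value

def master_to_slave(T, B, R):
    """Map a master handle configuration (T, B, R) to slave references
    (tz, beta, theta_z) using the scalings of Table 8.3."""
    t_z = (TZ_MAX / T_MAX) * T      # translation: linear scaling
    beta = (BETA_MAX / B_LIM) * B   # bending: scaling set by the mechanical end stops
    theta_z = R                     # rotation: mapped one-to-one
    return t_z, beta, theta_z
```

For example, a mid-range master translation T = 45 mm maps to tz = 35 mm with these placeholder ranges; adjusting the mechanical end stop B_LIM changes only the bending scale factor.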


FIGURE 8.8 Master interfaces developed for STRAS. (Top) Computer-aided design (CAD) view of one master interface showing the DoFs. (Bottom) The master console during telemanipulation. The positions of the user's hands are replicated onto the instruments, visible here on the feedback screen. DoFs, degrees of freedom; STRAS, single port and transluminal robotic assistant for surgeons.


The association between the joysticks and the DoFs of the endoscope can be programmed. Most users prefer a mapping in which the left joystick controls the endoscope deflection (left/right, up/down) and the right joystick controls the endoscope/cradle translation (NS direction) and the endoscope/cradle rotation (WE direction). We have chosen to define fixed velocities for the movements of the endoscope (Vα for deflections, VFwBw for translation, and VΘ for rotation), the movements being triggered in an on/off way. This control mode allows the endoscope and the instruments to be controlled simultaneously, without switching between instrument and endoscope control. It also gives constant control over all DoFs, which is desirable from a safety point of view. The endoscope deflection control provided by the on-board joystick works in the same way as that provided by the joysticks on top of the master interfaces.
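The on/off velocity scheme can be sketched as a single control tick: a pressed joystick direction selects a fixed velocity that is integrated at the loop rate, and releasing the joystick (state 0) immediately stops the motion. The speed and period values below are illustrative, and the signed joystick state is an assumption about the internal representation:

```python
V_ALPHA = 5.0  # fixed deflection speed (deg/s) -- illustrative value
DT = 0.001     # control period (s), matching the 1 kHz low-level loop

def deflection_tick(alpha, joystick_state):
    """One control tick: joystick_state in {-1, 0, +1} selects a fixed
    velocity; releasing the joystick (state 0) stops the deflection."""
    return alpha + V_ALPHA * joystick_state * DT

# Hold the joystick east for 0.2 s (200 ticks), then release it.
alpha = 0.0
for _ in range(200):
    alpha = deflection_tick(alpha, +1)
alpha_released = deflection_tick(alpha, 0)
```

The released tick leaves the deflection unchanged, which reproduces the "release to stop" behavior described above.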

8.3.5 Control and software architecture

All motors are controlled by intelligent drives (Technosoft IBL2403) running velocity control loops with proportional-integral-derivative (PID) compensators and current loops with proportional-integral (PI) compensators. The drives are connected to an EtherCAT bus using Beckhoff modules, which serve as the interface between the drives and a computer running the higher level control loops (Fig. 8.9). The EtherCAT bus provides digital-to-analog converters, encoder reading capability, motor enabling/disabling (digital outputs), and on/off signal and error signal readings (digital inputs). All drives, the bus, and their power supply are located in an electrical cabinet at the rear side of the cart. A software controller runs on a PC under a Linux/Xenomai operating system connected to the EtherCAT bus of the master interfaces and to the EtherCAT bus of the slave system via RJ45 links. The software controller consists of the four components presented in Fig. 8.10:

- EtherCAT Master module: runs in the kernel space of the real-time operating system and ensures communication between the EtherCAT hardware port and the higher level layers.
- Low-level controller: runs in the user space of the real-time operating system at a rate of 1 kHz. It sends control inputs to, and receives motor positions from, the slave system through the EtherCAT Master module.
- GUI: receives commands (through buttons and sliders) from the user and provides information on the state of the system. It does not run in real time.
- Supervisor: runs in parallel with the low-level controller in the user space of the real-time operating system. It listens to the EtherCAT bus for warnings and errors coming from the hardware controllers and communicates with the low-level controller to analyze signals such as tracking errors. In case of abnormality, it can interrupt teleoperation and bring the system to an error mode.

FIGURE 8.9 Hardware electrical architecture of STRAS v2 including slave system, master interfaces, and software controller.


FIGURE 8.10 Architecture of the software controller and its interface with hardware.

Each joint of the slave robot is controlled by a position loop with a proportional compensator running at 1 kHz on the low-level controller. Signals coming from both the master and slave systems are used by the software controller to compute the joint control inputs sent to each drive of the slave system. Namely, a trajectory planning algorithm computes the reference positions for the slave motors from the references obtained from the master interfaces. The mapping between the master interface motions and the slave system is described in Section 8.3.4. The signals from the on-board joystick are transmitted to the software controller and handled similarly to those of the master interfaces.
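The 1 kHz proportional position loop can be sketched for a single joint. This toy model is illustrative only: the gain value and the idealized integrator plant are assumptions, not the actual drive dynamics.

```python
KP = 50.0   # proportional gain (1/s) -- illustrative value
DT = 0.001  # loop period (s), matching the 1 kHz low-level controller

def position_loop_tick(q, q_ref):
    """One tick of a proportional position loop: the position error produces
    a velocity command, integrated here through an idealized plant."""
    v_cmd = KP * (q_ref - q)   # velocity reference sent to the drive
    return q + v_cmd * DT      # idealized plant response over one period

# Step response: drive the joint from 0 toward a reference of 1.0.
q = 0.0
for _ in range(200):
    q = position_loop_tick(q, 1.0)
```

With these values the tracking error shrinks by a factor 0.95 per tick, so the joint settles close to the reference within 0.2 s; the real loop additionally passes through the drives' velocity and current loops.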

8.3.6 Robot calibration and working modes

With the proposed mapping, the master interfaces provide absolute positions to the slave system. In order to map the kinematic singularities correctly between master and slave, the two must be aligned. Master and slave systems use incremental encoders only, and therefore both systems have to be calibrated at the beginning of the procedure. In particular, the bending motor position corresponding to the straight configuration of the instruments, and the rotation motor position for which the bending plane is aligned with the horizontal axis of the endoscopic camera frame, have to be known. Calibration is also needed to monitor the configuration of the joints during the procedure, so as to avoid reaching end stops at high velocity. Rotation and translation of the platform are fitted with capacitive homing sensors: these DoFs are driven toward their limits and stopped when the capacitive sensors are activated, and the corresponding encoder positions are registered with the known homing positions. For the instruments, the two end stops of the deflection are detected by monitoring the position tracking error and/or motor torque while the motors are successively driven in both directions at low velocity. The reference straight configuration is then obtained as the central position between the estimated end stops; indeed, the range of motion depends on the tensioning of the cable and can therefore vary with the number of previous uses. For rotation and translation, the motors are driven toward hard stops while monitoring the tracking error, which allows detecting the encoder values at the limits of the workspace. After calibration, the control software can align the orientation of the master interface with the orientation of the bending plane of the instrument. A quick calibration procedure is also available for rotation and translation, which avoids automatic motions of the T/R modules.
While the motors are disabled (standby mode, see Section 8.3.6.3), the user can push the translation to its maximum position (corresponding to the instruments at their maximum position outside the channels) and align markings on the instrument modules with respect to the T/R modules by manually rotating them. The homing position is then validated on the GUI. After calibration, software end stops are generated inside the mechanical ranges in order to manage trajectory planning and deceleration when the instruments come close to the mechanical end stops. In practice, the calibration steps requiring motor motions are performed automatically at system startup, before bringing the system next to the patient. Instrument modules can be calibrated before being mounted onto the T/R modules.
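The end-stop detection can be sketched as a search that advances slowly until the tracking error passes a threshold, taking the midpoint of the two detected stops as the straight-configuration reference. The simulation below is a minimal sketch: the hard-stop positions, step size, and threshold are all illustrative values.

```python
def find_end_stop(direction, measure_error, step=0.1, threshold=0.5):
    """Drive slowly in one direction until the tracking error exceeds the
    threshold, indicating that a mechanical hard stop was reached."""
    pos = 0.0
    while measure_error(pos) < threshold:
        pos += direction * step
    return pos

def calibrate_straight(measure_error):
    """Straight-configuration reference: midpoint of the two bending end stops."""
    upper = find_end_stop(+1, measure_error)
    lower = find_end_stop(-1, measure_error)
    return 0.5 * (upper + lower)

# Simulated mechanics: hard stops at -3.0 and +5.0 motor units (hypothetical),
# with the tracking error growing once the commanded position passes a stop.
def simulated_error(pos, lower=-3.0, upper=5.0):
    if pos > upper:
        return pos - upper
    if pos < lower:
        return lower - pos
    return 0.0
```

With these simulated stops the estimated straight reference lands near 1.0, the true midpoint; the symmetric detection overshoot on both sides largely cancels out in the midpoint, which is why the scheme tolerates a coarse error threshold.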

8.3.6.1 Master interfaces calibration

The zeros of the three DoFs of the handles of the master interfaces are calibrated by manually bringing each DoF to its end stop. The ranges being known in advance, the calibration is automatically validated when the encoders have spanned the expected ranges.

8.3.6.2 Teleoperation activation

The master handles are also equipped with miniature programmable impulsion buttons at their front side, which are used to activate/deactivate teleoperation. When teleoperation is activated, the slave DoFs are automatically aligned with the positions of the master handles by moving slowly toward them. The master handles can also be moved by the user; the normal master/slave mapping is activated for each DoF separately as soon as the positions of the master and the slave are aligned.
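The per-DoF activation behavior can be sketched as a ramp: while a DoF is not yet engaged, the slave creeps toward the master handle at a bounded speed, then switches to the normal absolute mapping once aligned. The step and tolerance values below are illustrative assumptions.

```python
ALIGN_STEP = 0.02  # maximum approach step per tick -- illustrative value
ALIGN_TOL = 0.01   # alignment tolerance -- illustrative value

def activation_tick(slave_pos, master_pos, engaged):
    """One tick of teleoperation activation for a single DoF: returns the
    new slave position and whether the DoF is now engaged."""
    if engaged:
        return master_pos, True            # normal absolute master/slave mapping
    delta = master_pos - slave_pos
    if abs(delta) <= ALIGN_TOL:
        return master_pos, True            # aligned: engage this DoF
    step = max(-ALIGN_STEP, min(ALIGN_STEP, delta))
    return slave_pos + step, False         # creep slowly toward the handle

# Engage a DoF whose handle sits at 0.5 while the slave starts at 0.0.
pos, engaged, ticks = 0.0, False, 0
while not engaged:
    pos, engaged = activation_tick(pos, 0.5, engaged)
    ticks += 1
```

Bounding the approach step keeps the automatic alignment motion slow, matching the safety intent described above.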

8.3.6.3 Working modes

The software controller implements a state machine with four working modes (Fig. 8.11):

- Standby mode: a passive state in which the motors are disabled. In this state the bending part of the endoscope has lower rigidity, and the instrument deflection and grasping forces are passively limited because high cable tension is impossible. The T/R modules can be moved by hand, which allows instrument modules to be disconnected easily. The cradle rotation (not backdrivable) and the cradle translation (power-off brake) are blocked, which avoids uncontrolled motions of the heaviest parts of the system. The cart can still be moved aside manually if needed.
- Position control mode: each motor is servoed in position. This is useful when the surgeon wants to stabilize the slave system while changing his or her position at the master console. The current motor positions are read and stored by the high-level controller when the mode is entered and are used as the position reference as long as the mode is not changed or the reference is not changed through the GUI. Sliders on the GUI allow the reference position of individual motors to be moved incrementally; this functionality can be used to move the robot in case of failure of the master console hardware.
- Teleoperation mode: the main working mode, in which motion performed on the master interfaces is processed and velocity references are sent to the motor drives (see Section 8.3.3 for more details).
- Error mode: the motors are disabled with at least one error flag activated in the supervisor. The mechanical behavior is the same as in the standby mode, but returning to teleoperation requires specific user intervention, typically to acknowledge the transient appearance of an abnormality observed by the supervisor (for instance, excessive torque or tracking error).
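The four working modes and their transitions can be sketched as a small state machine. The event names and the transition set below are inferred from the description above, not taken from Fig. 8.11, so they are assumptions; the key property reproduced is that leaving the error mode requires an explicit acknowledgment.

```python
from enum import Enum, auto

class Mode(Enum):
    STANDBY = auto()
    POSITION_CONTROL = auto()
    TELEOPERATION = auto()
    ERROR = auto()

# Transition table -- a sketch inferred from the text, not the exact Fig. 8.11.
TRANSITIONS = {
    (Mode.STANDBY, "enable_motors"): Mode.POSITION_CONTROL,
    (Mode.POSITION_CONTROL, "start_teleop"): Mode.TELEOPERATION,
    (Mode.TELEOPERATION, "stop_teleop"): Mode.POSITION_CONTROL,
    (Mode.POSITION_CONTROL, "disable_motors"): Mode.STANDBY,
    # The supervisor can force the error mode from any active mode.
    (Mode.POSITION_CONTROL, "fault"): Mode.ERROR,
    (Mode.TELEOPERATION, "fault"): Mode.ERROR,
    # Leaving the error mode requires an explicit user acknowledgment.
    (Mode.ERROR, "acknowledge"): Mode.STANDBY,
}

def step(mode, event):
    """Apply an event; events with no defined transition leave the mode unchanged."""
    return TRANSITIONS.get((mode, event), mode)
```

Encoding the transitions as a dictionary keyed by (mode, event) makes undefined transitions harmless by construction, which mirrors the safety-oriented behavior of the supervisor.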

8.4 In vivo use of the system

8.4.1 Workflow of single port and transluminal robotic assistant for surgeons use for intraluminal surgery

The typical workflow of use of STRAS v2 for intraluminal surgery is described in the following.

1. Startup and calibration of master and slave systems (typical duration: 5 minutes). At the end of this step the T/R modules are automatically brought to the parking position (maximum back position) in order to allow instrument insertion.


FIGURE 8.11 The different modes of use implemented in the software controller of STRAS and the events generating changes of modes.

2. Insertion/navigation of the endoscope (typical duration: 2 minutes). During this step, the endoscope is held by the surgeon. The endoscope shaft is translated and rotated manually as in conventional colonoscopy, and the motorized deflection of the endoscope is controlled using the on-board joystick. During this step the deviation system of the endoscope is closed.
3. Endoscope clamping (typical duration: 1 minute). The cart is brought close to the patient and the cradle is positioned using the pendant to adapt to the position and orientation of the endoscope handle. The endoscope handle is then clamped onto the cradle.
4. Insertion of instruments (typical duration: 1 minute). The instruments are manually inserted into the channels and the motor boxes are placed in the T/R modules. The presence and type of instrument modules are detected by the software. The distal shell is then opened from the proximal side using a manual multiposition button located on the endoscope handle.
5. Teleoperation. The surgeon controls all DoFs of the slave system (except those of the cart) from the master console, as well as electrosurgery activation from a pedal board.
6. Retrieval of the system (typical duration: 2 minutes). At the end of the surgical procedure the reverse workflow is used: the T/R modules are brought to their back position, teleoperation is stopped, the instruments are retrieved, the endoscope is detached from the cart, and it is extracted manually while using the on-board joystick. Tissue specimens can be extracted together with the endoscope by holding them with a conventional grasper inserted in the central channel of the endoscope.

Fig. 8.12 shows snapshots of some of these steps during preclinical use.

8.4.1.1 Change of instruments

During the procedure, it may be necessary to retrieve the instruments, either to exchange the grasper and electrical knife, to use two graspers, or to thoroughly clean an instrument. On the Anubiscope instruments it is possible to change the effectors while keeping the same flexible shaft and bending tip. This possibility has been kept on STRAS v2. However, since the flexible shaft has to be retrieved from the endoscopic guide anyway, it proved more practical to have several motorized instruments and to change the complete instrument modules rather than the effectors; indeed, unscrewing the insert was assessed as impractical during a procedure, once the instruments have been soiled by fluids and tissues. The modularity allows an easy change of instruments using the following procedure:

- The instrument tip is brought to the straight configuration and the corresponding T/R module is brought to the parking mode in its back position, either by teleoperation or by automatic motions;


FIGURE 8.12 Different steps of the use of STRAS during preclinical trials. From top to bottom and left to right: Navigation of the endoscope, attachment of the endoscope to the cradle, insertion of an instrument module inside a T/R module, and teleoperation.

- The instrument module is then manually detached from the supporting T/R module, and the instrument is retrieved from the channel and unplugged if necessary;
- The replacement instrument is inserted inside the channel of the endoscope, and its instrument module is inserted into the T/R module in the reference orientation and electrically plugged in.

This change of instrument typically takes 2 minutes. Each T/R module is equipped with two switches. The first is located at the base of the clamping ring and indicates whether the T/R module is closed and the instrument module is in position. The second is located at the front of the horizontal bar of the L-shaped T/R module and is activated by electrical instruments only, which have a slightly larger shoulder at the front of the cylindrical part, just behind the metallic ring on the ball bearing. This allows the software to know whether, and which, instruments are present in each T/R module, without the user having to specify it. Moreover, the software also monitors the physical presence of motors by detecting encoder data, which allows the actual configuration of the system to be confirmed. Note that the instruments can be electrically plugged and unplugged during use of the system; the user only has to acknowledge a change of instrument to regain control of the system. Instruments can be retrieved and inserted whatever the position and orientation of the T/R modules, but to reduce mechanical stress this should be performed when the T/R modules are at the maximum back position (parking position). Moreover, for safety reasons, teleoperation is deactivated as long as instrument modules are missing. Importantly, the guide provided by the main endoscope remains in place during instrument change, which allows quick and safe instrument exchange with constant visual feedback of the operating area.
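The instrument-detection logic based on the two switches and on encoder activity can be sketched as follows; the function and state names are illustrative, not from the system's software:

```python
def detect_instrument(ring_closed, electrical_switch, encoder_active):
    """Infer the presence and type of the instrument in a T/R module from
    its two switches and from motor encoder activity."""
    if not ring_closed:
        return "none"                    # clamping ring open: no module mounted
    if not encoder_active:
        return "inconsistent"            # module seen but no motors detected
    return "electrical" if electrical_switch else "mechanical"

def teleoperation_allowed(module_states):
    """Teleoperation stays disabled while any instrument module is missing
    or in an inconsistent state."""
    return all(state in ("electrical", "mechanical") for state in module_states)
```

Cross-checking the switches against encoder data is what lets the software confirm the actual configuration rather than trusting a single sensor.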

8.4.2 Feasibility and interest

STRAS was intensively tested during preclinical trials performed on swine [46,53]. These trials were performed at the IRCAD in Strasbourg and approved by the Institutional Ethical Committee on Animal Experimentation (ICOMETH no. 38). They mainly focused on the feasibility of using STRAS for safely performing colorectal ESD. For this purpose, a novice surgeon was enrolled. Fig. 8.13 shows photos of the use of STRAS during the ESD procedure. The assessment was primarily based on the success and quality of the procedures (completeness of the dissection, absence of perforation, which is the most common complication of ESD, and en bloc resection of the lesion), with quantitative secondary criteria (size of the lesion, total duration of the procedure, duration of the dissection, and velocity of the dissection). For comparison, we referred to the work in Ref. [44], which reported results of similar procedures performed with conventional endoscopes and the manual Anubiscope. Table 8.4 reports the qualitative and quantitative results of the 12 procedures. It must be noted that ESD in the colon of pigs is especially difficult because of the thinness of the colon submucosa. The comparison shows that ESD with STRAS is safe, allows a single user to perform the procedure, and increases the mean dissection speed with respect to manual instruments. A general decrease in procedure durations was observed during these trials (Fig. 8.14). However, it was difficult to extract clear learning curves because many parameters varied during the tests (position of the lesion in depth and orientation, size of the lesion, weight of the animal, particular anatomical conditions such as peristalsis, and number of previous uses of the instruments). Moreover, the medical technique used for the dissection was changed several times during the trials. On the one hand, the conventional method used with manual endoscopes includes an initial circumferential cutting of the lesion, which allows dissecting without the need to refer to the margins. On the other hand, STRAS provides the ability to dissect at a distance from the endoscopic camera (typically between 2 and 7 cm) sufficient to keep a global view of the lesion, making prior circumferential cutting optional. In the complete results reported in Ref. [46], learning curves were visible for each technique considered separately.
Qualitatively, the surgeons who worked with the STRAS system during laboratory experiments and preclinical trials praised the stability of the slave system, the smoothness of motion at the master side, and the ease of control of the endoscope. The experiments also allowed the lifetime of the flexible instruments to be assessed: under normal conditions they can be used safely for five ESDs. STRAS was also tested in SPA surgery in a feasibility study. The endoscope was used through a single incision laparoscopic surgery (SILS) port (Medtronic) (Fig. 8.15). Three types of procedures were performed on a female swine: simulated appendectomy on the uterus, cholecystectomy, and dissection of the gastroesophageal junction. The endoscope was navigated to the three targets from the same entry point, and the feasibility of the procedures was demonstrated.

8.5 Current developments and future work

Following the very positive results obtained with STRAS v2, a novel development was started in 2014 with the goal of obtaining approval for clinical trials. For this purpose, emphasis was put on compatibility with the operating room.


FIGURE 8.13 Snapshots of an ESD realized with STRAS v2. (A) Lesion marking with the right hook. (B) Injection of glycerol with methylene blue using an injection needle in the central channel. (C) Dissection using the left grasper for pulling tissues and the right hook for dissection. (D) End of dissection. ESD, endoscopic submucosal dissection.


TABLE 8.4 Qualitative and quantitative comparison of endoscopic submucosal dissection (ESD) performed with manual systems and with the single port and transluminal robotic assistant for surgeons (STRAS).

Endoscopic system           | Main practician         | #Skilled users | #Perf. | Dissection speed (mm²/min), circumferential cutting excluded | Dissection speed (mm²/min), total procedure
----------------------------|-------------------------|----------------|--------|--------------------------------------------------------------|--------------------------------------------
Conventional endoscope [44] | Endoscopist (expert)    | 2              | 8/16   | 36 ± 19                                                      | NA
Manual Anubiscope [44]      | Surgeon (intermediate)  | 2              | 0/9    | 24 ± 5                                                       | NA
STRAS (v2) [46]             | Surgeon (novice in ESD) | 1              | 1/12   | 64 ± 35                                                      | 30 ± 18
STRAS (v3)                  | Surgeon (novice in ESD) | 1              | 1/20   | NA                                                           | 57 ± 29 (70 ± 22 for last 10)

The fourth column reports the number of complications observed (perforations) over the number of procedures. The first dissection speed column does not take into account the time needed for initial circumferential cutting; it is not available for procedures where no circumferential cutting was realized. The second dissection speed column takes into account the total time of dissection.

FIGURE 8.14 Durations of the procedures for the preclinical trials with STRAS v2 (numbers on the horizontal axis indicate the initial numbering of the procedures). For comparison purposes, the values have been normalized for lesions of size 3 cm × 3 cm. The different colors indicate different parts of the procedures and different dissection techniques. Setting aside procedure #9, where insulation problems appeared on the electrical instrument, a general decrease of the duration is visible, indicating a learning curve. (For equivalent durations sorted by endoscopic techniques.) Adapted from Zorn L, Nageotte F, Zanne P, Légner A, Dallemagne B, Marescaux J, et al. A novel telemanipulated robotic assistant for surgical endoscopy: preclinical application to ESD. IEEE Trans Biomed Eng 2018;65(4):797–808, © 2016 IEEE, https://doi.org/10.1109/TBME.2017.2720739.

FIGURE 8.15 STRAS v2 during preclinical trials. (A) Setup for intraluminal colorectal surgery: the cradle is low and almost horizontal. (B) Setup for single-port access surgery: the cradle is high with a strong downward inclination.


FIGURE 8.16 Telemanipulation of STRAS v3 with the master console during in vivo experiments. The slave system is visible in the background.

Intraluminal surgery only requires intermediate-level disinfection of the instruments. However, sterilization of the system was aimed for, in order to accommodate other future applications, for instance in SPA or NOTES. The proven concepts of STRAS v2 were therefore kept, but an additional separation was added between the motor box and the endoscope and between the motor box and the instruments. Draping will be introduced at these interfaces, thus avoiding the need to sterilize the actuators. At the master side, capacitive sensors were added in the master interfaces in order to detect the presence of the surgeon's hands and deactivate the master/slave control when they are not detected. The triggers for controlling the graspers were replaced by wheels allowing continuous control of the opening and closing. The software was partly redeveloped to comply with regulations. Efforts were also made to improve usability, for instance by providing a touch screen and an improved graphical user interface for interaction with the surgeon. To facilitate regulatory acceptance, the automatic motions aligning the slave system with the master interfaces at teleoperation activation were replaced by manual motions of the master interfaces under visual guidance. This third version of STRAS (v3) was tested in vivo on pigs between January 15, 2018 and March 29, 2018 at the IRCAD in Strasbourg (Fig. 8.16). The trials were approved by ICOMETH no. 38 and involved 20 ESDs. The results obtained are reported in Table 8.4 (bottom row). They are qualitatively in line with those obtained with STRAS v2, with only one perforation (during the first experiment). Following the observations of the trials with STRAS v2, the robotic ESD protocol was fixed (no prior circumferential cutting). Moreover, the surgeon also made use of the full rotation of the endoscope, thus allowing work in almost constant conditions independently of the position of the lesion.
Quantitatively, dissection velocities (reported in Table 8.4, bottom row, for the whole procedure) still showed a significant improvement with respect to STRAS v2. It was moreover possible to observe clear and fast learning curves: the mean dissection velocity was 43 mm²/min for the first 10 procedures and 70 mm²/min for the last 10 (p = 0.04). This confirms that the proposed master/slave control is very intuitive. The novel version complies with safety standard IEC 60601-1 from the International Electrotechnical Commission. Approval from ethical committees for clinical trials should be requested soon.

8.6 Conclusion

STRAS is a telemanipulated robotic system based on flexible systems, which provides 10 DoFs, allowing a single operator to perform complex tasks with two miniature instruments and a flexible endoscope acting as an overtube. The telemanipulation interfaces provide intuitive control of the slave robot. STRAS was successfully tested in preclinical trials for intraluminal colorectal surgery, which showed comfort, safety, and efficiency advantages with respect to manual instruments. STRAS is built on totally flexible instruments and is for now mainly aimed at intraluminal procedures where complete flexibility is an advantage. The current length of the endoscope used as an overtube allows access to the rectum, sigmoid, and descending colon for anal access and to the esophagus and stomach for oral access. The same principles developed in STRAS could be transposed to longer endoscopes, which could expand the possibilities of use for deeper


colon and transluminal surgery. It is expected [54] that nonlinearities linked to cable actuation and friction in the endoscope channel will increase. However, we think that the proposed master/slave control could allow the user to cope with these effects. Moreover, no high-level techniques have been used so far to reduce the nonlinear effects. Improving the behavior of the instruments using feedback from the endoscopic camera [55,56], and learning this behavior using machine learning methods [47], is also currently under investigation. The current efforts aim at bringing the robotic system to clinical trials for ESD procedures in the rectum and colon. The novel version is currently under testing for compliance with clinical trial requirements.
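As an illustration of how such flexible sections are commonly modeled, the sketch below uses the constant-curvature assumption, a standard simplification in the continuum-robot literature reviewed in [49], to compute the in-plane tip position of a single bending section. The 40-mm section length is an illustrative value, not a STRAS dimension, and the model ignores exactly the cable-friction nonlinearities discussed above.

```python
import math

def tip_position(arc_length, bend_angle):
    """In-plane tip position (x, z) of one constant-curvature bending
    section: x is the lateral deflection and z the distance along the
    initial tangent. bend_angle is the total bending angle in radians."""
    if abs(bend_angle) < 1e-9:           # straight section: limit as angle -> 0
        return 0.0, arc_length
    radius = arc_length / bend_angle     # constant curvature => circular arc
    x = radius * (1.0 - math.cos(bend_angle))
    z = radius * math.sin(bend_angle)
    return x, z

# A 40-mm bending section (illustrative dimensions)
print(tip_position(40.0, 0.0))           # straight: (0.0, 40.0)
print(tip_position(40.0, math.pi / 2))   # quarter circle: x equals z
```

In practice, cable backlash and channel friction make the actual bending angle a nonlinear function of the motor input, which is why the feedback and learning approaches cited above are of interest.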

Acknowledgments

The work described in this chapter was partly funded by the FUI (Fonds Unique Interministériel, French Ministry for the Economy) project ISIS and by SATT Conectus Alsace. This work was also supported by French state funds managed by the Agence Nationale de la Recherche (ANR) within the Investissements d'Avenir program under the ANR-11-LABX-0004 (Labex CAMI) and ANR-10-EQPX-44 (Robotex Equipment of Excellence) references. The authors wish to thank the IRCAD (Institut de Recherche sur les Cancers de l'Appareil Digestif, Strasbourg, France) and the IHU Strasbourg (Institut de Chirurgie guidée par l'Image, Strasbourg, France) for medical advice for the realization of the in vivo trials. In particular, we want to thank Dr. B. Dallemagne, Dr. A. Legner, Dr. P. Mascagni, and Prof. J. Marescaux for their support. The authors also wish to thank Karl Storz (Tuttlingen, Germany) for providing endoscopes and instruments, and for support and advice.

References

[1] Globocan. Website of the International Agency for Research on Cancer, <https://gco.iarc.fr/today/home>; 2018 [consulted 24.11.18].
[2] Wang J, Zhang X, Ge J, Yang C, Liu J, Zhao S. Endoscopic submucosal dissection vs endoscopic mucosal resection for colorectal tumors: a meta-analysis. World J Gastroenterol 2014;20(25):8282-7.
[3] ASGE Technology Committee. Endoscopic submucosal dissection: technology status evaluation report. Gastrointest Endosc 2015;81(6):1311-25.
[4] NOSCAR POEM White Paper Committee. Per-oral endoscopic myotomy white paper summary. Gastrointest Endosc 2014;80(1):1-15.
[5] Yeung B, Gourlay T. A technical review of flexible endoscopic multitasking platforms. Int J Surg 2012;10(7):345-54.
[6] Fujiya M, Tanaka K, Dokoshi T, Tominaga M, Ueno N, Inaba Y, et al. Efficacy and adverse events of EMR and endoscopic submucosal dissection for the treatment of colon neoplasms: a meta-analysis of studies comparing EMR and endoscopic submucosal dissection. Gastrointest Endosc 2015;81:583-95.
[7] Goto O, Fujishiro M, Kodashima S, Ono S, Omata M. Outcomes of endoscopic submucosal dissection for early gastric cancer with special reference to validation for curability criteria. Endoscopy 2009;41:118-22.
[8] Isomoto H, Shikuwa S, Yamaguchi N, Fukuda E, Ikeda K, Nishiyama H, et al. Endoscopic submucosal dissection for early gastric cancer: a large-scale feasibility study. Gut 2009;58:331-6.
[9] Rinaldi V, Pagani D, Torretta S, Pignataro L. Transoral robotic surgery in the management of head and neck tumours. Ecancermedicalscience 2013;7:359.
[10] Hompes R, Rauh S, Hagen M, Mortensen N. Preclinical cadaveric study of transanal endoscopic da Vinci surgery. Br J Surg 2012;99(8):1144-8.
[11] Jeon W, You I, Chae H, Park S, Youn S. A new technique for gastric endoscopic submucosal dissection: peroral traction-assisted endoscopic submucosal dissection. Gastrointest Endosc 2009;69(1):29-33.
[12] Okamoto K, Muguruma N, Kitamura S, Kimura T, Takayama T. Endoscopic submucosal dissection for large colorectal tumors using a cross-counter technique and a novel large-diameter balloon overtube. Dig Endosc 2012;24(Suppl. 1):96-9.
[13] Parra-Blanco A, Nicolas D, Arnau M, Gimeno-Garcia A, Rodrigo L, Quintero E. Gastric endoscopic submucosal dissection assisted by a new traction method: the clip-band technique. A feasibility study in a porcine model (with video). Gastrointest Endosc 2011;74(5):1137-41.
[14] Uraoka T, Kato J, Ishikawa S, Harada K, Kuriyama M, Takemoto K, et al. Thin endoscope-assisted endoscopic submucosal dissection for large colorectal tumors (with videos). Gastrointest Endosc 2007;66(4):836-9.
[15] Neuhaus H, Costamagna G, Deviere J, Fockens P, Ponchon T, Rosch T. Endoscopic submucosal dissection (ESD) of early neoplastic gastric lesions using a new double-channel endoscope (the "R-scope"). Endoscopy 2006;38(10):1016-23.
[16] Spaun G, Zheng B, Swanström L. A multitasking platform for natural orifice translumenal endoscopic surgery (NOTES): a benchtop comparison of a new device for flexible endoscopic surgery and a standard dual-channel endoscope. Surg Endosc 2009;23:2720.
[17] Thompson C, Ryou M, Soper N, Hungess E, Rothstein R, Swanström L. Evaluation of a manually driven, multitasking platform for complex endoluminal and natural orifice transluminal endoscopic surgery applications. Gastrointest Endosc 2009;70(1):121-5.
[18] Swanström LL, Kozarek R, Pasricha PJ, Gross S, Birkett D, Park PO, et al. Development of a new access device for transgastric surgery. J Gastrointest Surg 2005;9(8):1129-37.
[19] Vitiello V, Su-Lin L, Cundy T, Yang G-Z. Emerging robotic platforms for minimally invasive surgery. IEEE Rev Biomed Eng 2013;6:111-26.
[20] Intuitive Surgical. Commercial website of Intuitive Surgical, <https://www.intuitivesurgical.com/sp/> [consulted 24.11.18].


[21] Pryor A, Tushar J, DiBernardo L. Single-port cholecystectomy with the TransEnterix SPIDER: simple and safe. Surg Endosc 2010;24(4):917-23.
[22] Virtual Incision. Commercial website of Virtual Incision, <https://www.virtualincision.com/> [consulted 24.10.18].
[23] Titan Medical. Commercial website of Titan Medical, <https://titanmedicalinc.com/technology/> [consulted 24.11.18].
[24] Ding J, Goldman RE, Xu K, Allen PK, Fowler DL, Simaan N. Design and coordination kinematics of an insertable robotic effectors platform for single-port access surgery. IEEE/ASME Trans Mechatron 2013;18:1612-24.
[25] Rothstein R, Ailinger R, Peine W. Computer-assisted endoscopic robot system for advanced therapeutic procedures. Gastrointest Endosc 2004;59(5):113.
[26] Abbott DJ, Becke C, Rothstein RI, Peine WJ. Design of an endoluminal NOTES robotic system. In: IEEE international conference on intelligent robots and systems, San Diego, CA; October 2007.
[27] Phee SJ, Low SC, Huynh VA, Kencana AP, Sun ZL, Yang K. Master and slave transluminal endoscopic robot (MASTER) for natural orifice transluminal endoscopic surgery (NOTES). In: IEEE international conference on engineering in medicine and biology, Minneapolis, MN; 2009. p. 1192-5.
[28] Sun Z, Ang R, Lim E, Wang Z, Ho K, Phee S. Enhancement of a master-slave robotic system for natural orifice transluminal endoscopic surgery. Ann Acad Med Singapore 2011;40(5):223-30.
[29] Ho K, Phee S, Shabbir A, Low S, Huynh V, Kencana A, et al. Endoscopic submucosal dissection of gastric lesions by using a master and slave endoscopic robot (MASTER). Gastrointest Endosc 2010;72(3):593-9.
[30] Phee S, Reddy N, Chiu P, Rebala P, Rao G, Wang Z, et al. Robot-assisted endoscopic submucosal dissection is effective in treating patients with early-stage gastric neoplasia. Clin Gastroenterol Hepatol 2012;10(10):1117-21.
[31] Ruiter J. Robotic flexible endoscope [Ph.D. dissertation]. University of Twente; 2013.
[32] Ruiter J, Bonnema G, van der Voort M, Broeders IAMJ. Robotic control of a traditional flexible endoscope for therapy. J Rob Surg 2013;7(3):227-34.
[33] Rozeboom E, Ruiter J, Franken M, Broeders I. Intuitive user interfaces increase efficiency in endoscope tip control. Surg Endosc 2014;28(9):2600-5.
[34] KAIST Telerobotics and Control Lab. Website of the KAIST Telerobotics and Control Lab, <http://robot.kaist.ac.kr/medical-robots/> [consulted 25.11.18].
[35] EasyEndo. Website of EasyEndo, <http://easyendosurgical.com/> [consulted 25.11.18].
[36] Medrobotics. Commercial website of Medrobotics, <https://medrobotics.com/gateway/flex-robotic-system/?c=INTL> [consulted 25.11.18].
[37] Degani A, Choset H, Wolf A, Ota T, Zenati M. Percutaneous intrapericardial interventions using a highly articulated robotic probe. In: IEEE international conference on biomedical robotics and biomechatronics; 2006. p. 7-12.
[38] Remacle M, Prasad V, Lawson G, Plisson L, Bachy V, Van der Vorst S. Transoral robotic surgery (TORS) with the Medrobotics Flex™ System: first surgical application on humans. Eur Arch Otorhinolaryngol 2015;272(6):1451-5.
[39] Vrielink T, Chao M, Darzi A, Mylonas G. ESD CYCLOPS: a new robotic surgical system for GI surgery. In: IEEE international conference on robotics and automation (ICRA); 2018.
[40] Dallemagne B, Marescaux J. The ANUBIS™ project. Minim Invasive Ther Allied Technol 2010;19(5):257-61.
[41] Perretta S, Dallemagne B, Barry B, Marescaux J. The ANUBISCOPE® flexible platform ready for prime time: description of the first clinical case. Surg Endosc 2013;27(7):2630.
[42] Bardou B, Nageotte F, Zanne P, de Mathelin M. Design of a telemanipulated system for transluminal surgery. In: IEEE engineering in medicine and biology conference (EMBC 2009), Minneapolis, MN; September 2009.
[43] De Donno A, Zorn L, Zanne P, Nageotte F, de Mathelin M. Introducing STRAS: a new flexible robotic system for minimally invasive surgery. In: IEEE international conference on robotics and automation, Karlsruhe; May 2013. p. 1213-20.
[44] Diana M, Chung H, Liu K-H, Dallemagne B, Demartines N, Mutter D, et al. Endoluminal surgical triangulation: overcoming challenges of colonic endoscopic submucosal dissections using a novel flexible endoscopic surgical platform: feasibility study in a porcine model. Surg Endosc 2013;27:4130-5.
[45] Zorn L, Zanne P, Nageotte F, de Mathelin M. Motorised and modular instrumentation device and endoscopy system comprising such a device. European patent EP2822446, WO/2013/132194; 2013.
[46] Zorn L, Nageotte F, Zanne P, Légner A, Dallemagne B, Marescaux J, et al. A novel telemanipulated robotic assistant for surgical endoscopy: preclinical application to ESD. IEEE Trans Biomed Eng 2018;65(4):797-808.
[47] Xu W, et al. Data-driven methods towards learning the highly nonlinear inverse kinematics of tendon-driven surgical manipulators. Int J Med Rob 2017;13(3).
[48] Ranzani T, Ciuti G, Tortora G, Arezzo A, Arolfo S, Morino M, et al. A novel device for measuring forces in endoluminal procedures. Int J Adv Rob Syst 2015;12(116).
[49] Webster R, Jones B. Design and kinematic modeling of constant curvature continuum robots: a review. Int J Rob Res 2010;29(13):1661-83.
[50] De Donno A, Nageotte F, Zanne P, Zorn L, de Mathelin M. Master/slave control of flexible instruments for minimally invasive surgery. In: IEEE/RSJ international conference on intelligent robots and systems, Tokyo; November 2013. p. 483-9.
[51] Allemann P, Ott L, Asakuma M, Masson N, Perretta S, Dallemagne B, et al. Joystick interfaces are not suitable for robotized endoscope applied to NOTES. Surg Innov 2009;16(2):111-16.
[52] de Mathelin M, Le Bastard F, Nageotte F, Zanne P, Zorn L. Master interface device for a motorised endoscopic system and installation comprising such a device. International patent application no. PCT/EP2015/051189, publication no. WO/2015/110495.


[53] Legner A, Diana M, Halvax P, Liu Y, Zorn L, Zanne P, et al. Endoluminal surgical triangulation 2.0: a new flexible surgical robot. Preliminary pre-clinical results with colonic submucosal dissection. Int J Med Rob 2017;13(3).
[54] Bardou B, Zanne P, Nageotte F, de Mathelin M. Control of a multiple sections endoscopic system. In: IEEE international conference on intelligent robots and systems; 2010. p. 2345-50.
[55] Cabras P, Goyard D, Nageotte F, Zanne P, Doignon C. Comparison of methods for estimating the position of actuated instruments in flexible endoscopic surgery. In: International conference on intelligent robots and systems (IROS), Chicago, IL; September 2014.
[56] Cabras P, Nageotte F, Zanne P, Doignon C. An adaptive and fully automatic method for estimating the 3D position of bendable instruments using endoscopic images. Int J Med Rob 2017;13(4).

9 Implementation of Novel Robotic Systems in Colorectal Surgery

Turgut Bora Cengiz, Scott R. Steele and Emre Gorgun
Cleveland Clinic, Cleveland, OH, United States

ABSTRACT

Colorectal surgery practice has changed substantially since the introduction of minimally invasive surgery (MIS), which provides shorter hospital stays and reduced morbidity compared to open surgery. The challenges of MIS in colorectal surgery are magnified when the rigid nature of laparoscopic instruments and assistant-driven camera platforms interfere with the flow of the surgery. Robotic devices have been developed to overcome the drawbacks of laparoscopic surgery and have been further improved with additional haptic feedback and flexible scopes that ease their operation. Colorectal surgery has benefited from the ample opportunities that robotic systems offer; however, the unavailability of the platform in every medical center, prolonged operative time, cost, and the steep learning curve pose obstacles to the widespread adoption of these robotic systems. Debates continue about whether the purported advantages of robotic systems will translate into clinical effectiveness over conventional laparoscopy. This chapter discusses the current role of alternative robotic approaches in colorectal surgery and future directions of robotic surgery.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00009-8 © 2020 Elsevier Inc. All rights reserved.


9.1 Introduction

Robotic surgery first emerged as a technique to perform remote surgery for patients who could not reach immediate health care, but it now appears as an alternative approach offering many opportunities for both surgeons and patients. With concurrent developments in technology, robotic surgery has risen exponentially in the past few years, including in colorectal surgery. Nevertheless, burdensome economics and several technical challenges place substantial barriers to the complete adoption of robotics and its incorporation into daily practice. Thus it is still unknown whether robotic surgery can become the nexus for improved postoperative outcomes that increase the value of care. This chapter discusses the implementation of novel robotic platforms, particularly the Flex Colorectal Drive (Medrobotics Corp., Raynham, Massachusetts, United States), in colorectal surgery settings, addressing the advantages and drawbacks of this robotic device as well as the future of robotic surgery.

9.1.1 Laparoscopic era

Trends in the surgical community have been shifting toward minimizing surgical trauma and maximizing postoperative outcomes, especially as minimally invasive surgery (MIS) has demonstrated revolutionary effects on decreasing length of hospital stay, postoperative complications, and postoperative pain [1,2]. Accordingly, laparoscopy has gained worldwide popularity and become the gold standard treatment for a spectrum of diseases in multiple surgical disciplines, including colorectal surgery. However, several problems remain: the capabilities of laparoscopic instruments are restricted, especially in confined bony spaces where precise dissection is harder than in open surgery due to rigid instrumentation, loss of tactile feedback, and visual limitations. Therefore it is no surprise that adoption of laparoscopy in colorectal surgery has been a gradual process, even though the literature has supported the benefits of MIS. Whether through the noninferiority shown by laparoscopic approaches in the early phases, or the more widely touted Clinical Outcomes of Surgical Therapy (COST) [3], Colon Carcinoma Laparoscopic or Open Resection (COLOR) [4], and COLOR II [5] trials, laparoscopy has demonstrated oncologically sound results in colorectal surgery. Unfortunately, many of these advantages did not carry over to rectal surgery. The American College of Surgeons Oncology Group Z6051 [6] and the Australasian Laparoscopic Cancer of the Rectum [7] randomized trials failed to prove noninferiority of laparoscopy against open surgery in rectal cancer. At this point, some surgeons indicated that a more advanced platform was needed to carry out more precise dissection, especially in the pelvis, to achieve oncologically safe results.

9.1.2 Introduction of robotics

Robotic surgery is an alternative method to overcome the difficulties of laparoscopy and further potentiate the effects of MIS by providing instruments with better articulation and dexterity that address the Achilles' heel of laparoscopy. Following Food and Drug Administration (FDA) approval in 2000, the da Vinci Robot (Intuitive Surgical, Sunnyvale, California) has gained worldwide popularity, mainly in urology, gynecology, and general surgery. Robotic surgery offers internally rotating instruments with high-definition, three-dimensional (3D) visualization in which the operating surgeon controls the camera in a master-slave fashion (Table 9.1). With articulation closer to that of the human wrist, robotic surgery has revolutionized minimally invasive approaches; yet, because of its increased cost and prolonged operative time, there remain substantial barriers to reaching its full potential [8,9].

TABLE 9.1 Distinguishing features of robotic and laparoscopic systems.

Feature                               Robotic   Laparoscopic
High-definition, 3D visualization     Yes       No
Highly articulated equipment          Yes       No
Operating surgeon-controlled camera   Yes       No
Elimination of tremor                 Yes       No
Ergonomics of the surgeon             Sitting   Standing

3D, Three-dimensional.


9.1.3 Further innovations

Despite the technical obstacles of MIS, the surgical community has strived to further improve outcomes. One of the most notable advances of MIS was the introduction of natural orifice transluminal endoscopic surgery (NOTES), in which natural body orifices are used to perform surgery instead of trocar entry through the abdominal wall. Very recently, the transanal total mesorectal excision (TaTME) technique has appeared as an alternative solution for low rectal tumors and other disease processes that require a proctectomy [12]. The TaTME technique is a form of NOTES in which the traditional intra-abdominal dissection is combined with a distal transanal dissection, with much of the approach through a "bottom-up" method starting in the deep pelvis. The full scope of benefits of NOTES is yet to be revealed due to technical difficulties, but it remains a promising route to relatively scarless surgery. To date, surgeons have employed linear instruments in the nonlinear structure of the human body, and these instruments have created substantial challenges at the curves of the visceral organs. Recently, the Flex Robotic System has emerged to address the linearity of MIS platforms by combining flexible tools with NOTES. It was initially designed by otolaryngologists to perform transoral surgery. After careful vetting, in May 2017, the FDA approved the semirobotic Flex Colorectal Surgery Drive for transanal surgery. The Flex Colorectal Drive uses a transanal route to perform a variety of surgeries, including local resection and TaTME, and allows surgeons to control a flexible colonoscope-like camera to reach anatomically constrained areas and to navigate the convolutions of the rectal lumen (Table 9.2). Despite its promise, the clinical repercussions of this platform are yet to be investigated and revealed.

9.2 Features of the Flex Colorectal Drive

The Flex Colorectal Drive is a semirobotic platform designed to perform transanal endoluminal surgery for rectal lesions. The console consists of two units: the Flex Cart, which includes a control knob (Fig. 9.1), and the Flex Robotic Base, which carries the Flex Scope and the surgical instruments (Figs. 9.2-9.4). The Flex Robotic Base is placed at the bedside of the patient and the Flex Scope is then mounted on the Flex Base before transanal docking of the robotic system (Fig. 9.5). Once the Flex Colorectal Drive is fully stationed, two traditional pistol-grip style laparoscopic instruments with flexible necks are introduced through semirigid metal tubes (Figs. 9.6 and 9.7). An AirSeal (Conmed, Largo, Florida, United States) trocar is also placed to regulate the gas in the rectal lumen. Once all segments of the robotic system are ready to use, the platform is secured to the operating table.

9.2.1 Visualization

The Flex Scope has two layers. The outer layer follows the exact movement in 3D space determined by the knob on the control console, manipulated by the operating surgeon, and transmits this movement to the inner layer, which follows the outer layer and provides visualization with the 0-degree camera. The operating surgeon controls the joystick and advances the outer layer first to create a guide path along which the inner mechanism then advances to expose the surgical field. With a master-slave type control system, the robotic scope advances in the lumen with snake-like movements to reach anatomical spaces that are considered inaccessible to rigid laparoscopic scopes. The Flex Scope is designed as a reusable scope with a disposable cap that ensures a leak-free environment when combined with the access channel (Fig. 9.8). Currently, the Flex Scope can reach 17 cm from the anal verge, limiting the use of the Flex Colorectal Drive to patients who have lesions below the 17-cm level, including the anus, rectum, and part of the distal colon. Careful patient selection is therefore crucial with the Flex Colorectal Drive, as is exact localization of the lesion before surgery.
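The outer-then-inner advancement described above resembles a follow-the-leader scheme: the inner mechanism retraces the path already established by the outer layer. The sketch below is a toy illustration of that idea, not the actual Flex control software; the class name, step structure, and waypoints are all invented for the example.

```python
from collections import deque

class FollowTheLeaderScope:
    """Toy model of a two-layer scope: the outer layer is steered to new
    waypoints, and the inner layer advances by replaying the outer layer's
    recorded path, so it always moves along the established guide path."""

    def __init__(self):
        self.path = deque()   # waypoints already traversed by the outer layer
        self.outer_tip = None
        self.inner_tip = None

    def steer_outer(self, waypoint):
        # Operator advances the outer layer to a new 3D waypoint.
        self.path.append(waypoint)
        self.outer_tip = waypoint

    def advance_inner(self):
        # Inner layer moves to the oldest not-yet-visited outer waypoint.
        if self.path:
            self.inner_tip = self.path.popleft()
        return self.inner_tip

scope = FollowTheLeaderScope()
for p in [(0, 0, 10), (2, 0, 20), (5, 1, 28)]:   # arbitrary 3D waypoints
    scope.steer_outer(p)
scope.advance_inner()
print(scope.inner_tip)   # (0, 0, 10): inner layer retraces the outer path
```

The point of the design is that the inner, camera-carrying layer never has to be steered independently; it inherits the shape the operator already validated with the outer layer.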


The outcomes of robotic surgery with the da Vinci platform have been studied widely in the literature. Earlier reports indicated a shorter length of hospital stay, less intraoperative blood loss, and fewer conversions to open surgery compared to laparoscopy [10,11]. The RObotic versus LAparoscopic Resection for Rectal cancer (ROLARR) randomized trial further investigated the impact of the da Vinci Robot in the perioperative setting and concluded that robotic surgery does not confer an advantage over laparoscopy, with the major metric being the conversion rate to open surgery. Utilization of robotic systems is therefore not yet justified economically, since longer operative time and increased cost outweigh the purported merits of the da Vinci platform. Subsequently, to reduce the per-surgery cost of robotic surgery and to increase market competition, novel robotic systems have been and are being developed to broaden the options in the robotic industry and to establish a wider area of effect for robotic surgery.


FIGURE 9.1 The Flex cart with control knob. Courtesy Sam Atallah, MD.

FIGURE 9.2 The Flex robotic base. Courtesy Sam Atallah, MD.

FIGURE 9.3 Flex Scope and end tip. Courtesy MedRobotics.

FIGURE 9.4 The Flex Base mounted with the Flex Scope. Courtesy Sam Atallah, MD.

FIGURE 9.5 Bedside positioning of the Flex Colorectal Drive. Courtesy Sam Atallah, MD.

FIGURE 9.6 Flex Colorectal Drive with laparoscopic-style flexible instruments. Reproduced with permission from MedRobotics.


FIGURE 9.7 Flex Colorectal Drive with laparoscopic-style flexible instruments. Reproduced with permission from MedRobotics. Courtesy Sam Atallah, MD.

TABLE 9.2 Features of the da Vinci and Flex Robotic Systems.

Features                        da Vinci Robotic System   Flex Robotic System
High-definition visualization   Yes                       Yes
Articulating instruments        Yes                       No
Master-slave type control       Yes                       Yes
Flexible camera                 No                        Yes
Flexible instruments            No                        Yes
Diameter of instruments         5 mm                      3.5 mm
Completely robotic              Yes                       No
Used for TaTME                  Yes                       Yes

TaTME, Transanal total mesorectal excision.

FIGURE 9.8 The Flex Scope and additional equipment. Courtesy Sam Atallah, MD.


The lesion should be localized precisely before surgery to prevent technical challenges during the operation. Also, because the operating surgeon controls both the camera and the instruments, no assisting surgeon is needed to hold the camera, unlike in traditional laparoscopy.

9.2.2 Instrumentation

The da Vinci Xi robot is also utilized for transanal approaches to lesions located in the rectum, but the reach of this platform is limited by its nonflexible camera and 5-mm diameter instruments, which restrict maneuverability in the pelvis. The Flex Colorectal Drive accommodates 3.5-mm diameter instruments with curving abilities similar to those of the da Vinci platform. The instruments of the Flex Colorectal Drive are analogous to laparoscopic instruments in nature, with additional flexibility. They are delivered through guides mounted on a bedrail that are initially rigid and then become flexible along with the Flex Scope, so that surgeons can advance the instruments in line with the camera. A wide variety of equipment options are available, such as monopolar cautery, fenestrated grasper, Maryland dissector, scissors, needle driver, and spatula, all of which are common instruments in both laparoscopic and robotic surgery. Unlike in laparoscopy, the distal end of each instrument is flexible, and this flexing action is achieved by bending the handle of the instrument. Together with traditional rotation abilities, the Flex Colorectal Drive increases the range of motion, which becomes useful in anatomically restricted areas. With pistol-grip-like instruments, the dynamics of the Flex Colorectal Drive are very similar to those of laparoscopy, which makes this platform a semirobotic apparatus rather than a fully robotic system. The Flex Robotic System was initially designed for transoral surgery, owing to its increased maneuverability in narrow spaces, and this feature has recently been adapted to colorectal surgery, allowing surgeons to benefit from colonoscope-like visualization in the rectum and distal colon. However, the Flex Robotic System has distinct features beyond colonoscopy that enhance its ability and use in colorectal surgery. In contrast to colonoscopy, the surgical camera is controlled by a knob and advanced delicately instead of being pushed forcefully. The control knob allows surgeons to navigate more consciously and overcomes problems in visualizing difficult-to-reach areas in the rectal lumen. Thus some natural obstacles of colonoscopy, such as looping of the colonoscope, are eliminated with the Flex Scope. Given the familiar configuration of the control knob and the laparoscopic-style instruments, one might expect a short learning curve for the Flex Colorectal Drive.

9.3 Surgery with Flex Colorectal Drive

9.3.1 Preoperative course

The surgical spectrum of the Flex Colorectal Drive ranges from local excisions to more radical resections such as TaTME. Patient preparation and preoperative assessment are similar to those for abdominal approaches or other transanal techniques. However, the limits of every platform differ, as do the operative setup and organization of the operating room for the Flex. Even though subjective experience outweighs some theoretical principles in the operative setup, especially with novel approaches, adequately preparing both the environment and the patient for surgery is crucial. The preoperative workup should include a complete medical and surgical history of the patient. After physical examination, including a digital rectal examination, all patients should undergo a full colonoscopic evaluation to rule out any synchronous lesions. If the underlying etiology is a cancerous or suspicious growth, biopsies should be performed to evaluate the histopathological characteristics of the lesion and the depth of invasion, to decide whether local excision is feasible. It is also vital to consider lesion size, distance from the anal verge, any invasion of the surrounding anal sphincters, and exact localization in the lumen (i.e., anterior/posterior or lateral location). Because of the range limitations of the Flex Scope, surgeons should carefully assess whether the lesion is within reach of the Flex Colorectal Drive. Since robotic procedures tend to take longer than other techniques, patients should be carefully evaluated to determine whether they can tolerate a minimally invasive procedure. Preoperative bowel preparation, thromboembolic prophylaxis, and antibiotics should be applied as indicated in the Surgical Care Improvement Project [13]. The patient is usually placed in a modified lithotomy position, which allows the surgical team to easily access the upper abdomen between the patient's legs when needed. Yellow sponge fins or padded stirrups are used to prevent peroneal nerve injury. A gel pad is placed on the operating table to provide decubitus support. Tape is placed around the patient's chest to help stabilize him or her and reduce pressure on the lower limbs when the operating table is tilted. Finally, both arms are tucked at the patient's sides.


FIGURE 9.9 Intraoperative view of the Flex Colorectal Drive in a cadaveric model.

9.3.2 Local excision

After mechanical bowel preparation, enemas are used to maintain a clean operative field. Insertion of a Foley catheter is optional depending on the anticipated operative time, and is recommended for procedures expected to take more than 2 hours. Anesthesia can be either spinal or general. The patient is generally placed in the modified lithotomy position, as described above. For local excisions, the Flex Colorectal Drive is docked transanally and the Flex Base is positioned at the bedside. Prior to docking, local anesthetics and dilatation can be applied. The pneumorectum is established with a CO2 pressure of 10-15 mmHg. The surgeon initially manipulates the Flex Scope with the control knob to visualize the lesion. Once clear exposure of the lesion is achieved, further movement of the scope is not needed. At this point, the surgeon no longer uses the knob and switches to the flexible instruments to perform the excision (Fig. 9.9). First, the lesion is marked circumferentially with a 5- to 10-mm margin and then excised with monopolar cautery or scissors, staying perpendicular to the rectal mucosa. Following complete excision, the lesion is exteriorized with the help of the robotic system. The defect can be closed using either sutures (continuous or interrupted) or endoscopic clips; at this point, the flexible nature of the platform is highly beneficial. A supplemental video, Reapproximation with Flex Colorectal Drive in a Cadaveric Model, can be found online at https://doi.org/10.1016/B978-0-12-814245-5.00009-8. If a full-thickness rectal wall dissection with peritoneal entry occurs during the excision, the defect should be closed to prevent septic complications following the surgery. On the other hand, rectal wall defects without peritoneal entry can be managed conservatively, as recent evidence suggests a similar incidence of complications whether or not the defect is closed [14].

9.3.3

Total mesorectal excision

For TaTME, preoperative workup depends on the diagnosis. If rectal cancer is the underlying etiology, it is recommended to schedule the operation 8–10 weeks after the completion of neoadjuvant chemoradiation. Bowel preparation is as described above, but the Foley catheter is placed routinely. The patient is again positioned in modified lithotomy and both the perineum and abdomen are prepped. Additionally, rectal irrigation may be needed after the purse-string suture is placed. The Flex Colorectal Drive is docked as described previously. For the TaTME approach, surgeons can employ either a simultaneous technique (transanal and abdominal dissection proceed at the same time with two teams) or a one-team technique (transanal and abdominal dissection are completed sequentially by the same surgical team). One of the most important steps of TaTME is the purse-string suture that secures the operative area. Placement of the purse-string suture depends on the location of the lesion, and 2-0 Prolene or Vicryl sutures are commonly used. The flexible instruments of the Flex provide a superior experience within the lumen compared to rigid systems, since placing a circumferential suture demands high maneuverability. Once the suture is placed, pneumorectum is established with the same pressure (10–15 mmHg) as for local excision and the rectal mucosa is marked similarly. A full-thickness rectal incision is then made meticulously with a monopolar device to proceed with the cephalad dissection. The flexible robotic instruments and camera pose a great advantage in the rectum by providing enhanced control of visualization and manipulation; staying within the fascial planes is critical at this point of the operation, and the increased flexibility is of great benefit. Posteriorly, the dissection is carried out between the presacral fascia and the mesorectum until the sacral promontory, where the transabdominal and transanal approaches usually meet. Anteriorly, the dissection is carried out along the rectovaginal or rectoprostatic fascia up to the peritoneal reflection. Once the rectum and mesorectum are completely mobilized as an envelope, the specimen is extracted through the transanal or transabdominal route, transected, and the coloanal anastomosis is created either with a stapler or with a handsewn technique. For transanal specimen extraction, the Flex robot should be undocked. A diverting loop ileostomy is created in most cases following the anastomosis. Since TaTME comprises a wider field than local excision, multiple adjustments of the surgical camera may be needed in order to continue the dissection circumferentially. One distinctive feature of TaTME with the Flex Colorectal Drive is that surgeons can extend their dissection more proximally compared to other transanal techniques, which are currently limited to the 15-cm range.

Implementation of Novel Robotic Systems in Colorectal Surgery Chapter | 9

9.3.4

Postoperative course

The postoperative period after surgery with the Flex Colorectal Drive is the same as for other transanal surgical techniques. To date, there is no evidence in the literature regarding postoperative complications associated with the Flex; as the data grow, we will be able to assess evidence-based outcomes of the Flex Colorectal Drive. For local excision, patients can be discharged on the same day unless there is an intraoperative complication such as full-thickness perforation, in which case patients should be observed overnight. A soft gastrointestinal diet can be resumed after the surgery, and prophylactic antibiotic therapy is not routinely recommended. For TaTME, standard postoperative care after the Flex follows the suggestions of enhanced recovery pathways, but ultimate management depends on patient-based, personalized care. Removal of the Foley catheter can be deferred beyond postoperative day 1, especially in male patients with prostatic enlargement and after very low and deep dissections [15]. Venous thromboembolism and antibiotic prophylaxis are applied as indicated in standard protocols. Oncologic follow-up is maintained according to the American Society of Colon and Rectal Surgeons and National Comprehensive Cancer Network guidelines for rectal cancer.

9.4

Further considerations for the Flex Colorectal Drive

Though the Flex Colorectal Drive brings new opportunities to colorectal surgery, some drawbacks could still prevent this system from reaching its full potential. So far, there are no reports in the literature on the clinical outcomes of the Flex Colorectal Drive; its safety should therefore be carefully assessed first, followed by the secondary outcomes, namely overall cost of the surgery, operative time, and clinical superiority.

9.4.1

Bending of the scope

The Flex Scope is designed to ease navigation while proceeding through the rectal lumen, but it also presents some range-of-vision challenges, especially when the scope intersects with the hardware of the system. Even though it is easily coordinated with the control knob while advancing, it has some restrictions on coiling. The Flex Scope is, to date, not capable of creating a loop on itself, which creates challenges when the lesion is at the dentate line, right on the margin of the access channel. In such situations, the Flex Scope cannot bend enough to obtain adequate exposure of the lesion, creating a dead space between the access channel and the scope. Moreover, the flexible instruments are bound to advance along with the scope, which forces them to work at a technically strenuous angle for extremely low-lying lesions. When the target lesion is located in proximity to the anus, the access channel can be retracted to create a greater field for the Flex Colorectal Drive, but this also leads to an unstable platform since the access channel may not completely latch onto the anal canal. An unbalanced surgical system may not be optimal when meticulous dissection is needed. Nevertheless, these ergonomic problems can be addressed in the future; in fact, Medrobotics has been working on decreasing the diameter of the scope even further. In conclusion, with careful patient selection, these technical drawbacks can be largely avoided.

9.4.2

Loss of tactile feedback

MIS has brought ample advantages to colorectal surgery, but operating through an external system has also compelled surgeons to work without any tactile sensation. Without direct tactile feedback, surgeons tend to rely on visual cues, which can lead to an increased rate of perforations and inadvertent organ injuries. This problem already exists in conventional laparoscopy and the da Vinci robot, and unfortunately the Flex Colorectal Drive also fails to provide tactile feedback. The repercussions of this lack of haptic feedback are magnified during intracorporeal suturing, where tissue tension cannot be accurately gauged. Next-generation robotic systems may offer better sensory feedback that provokes a cognitive response and subsequently leads to better skill acquisition.

9.4.3

Range

Another limitation of the Flex Colorectal Drive is that its maximum excursion is limited to 17 cm; therefore only lesions in the rectum and part of the distal colon are suitable for resection. However, this working range can be modified and expanded more proximally to reach the entire colon. Atallah indicated that the Flex Colorectal Drive has the potential to replace several colonoscopic techniques, such as endoscopic submucosal dissection and endoscopic mucosal resection, in the future [16]. He also noted that suturing beyond 15 cm is somewhat difficult because needle delivery and retrieval are not easy through the curves of the scope [16]. Nonetheless, for reapproximation of the tissues beyond a certain range, traditional colonoscopic clips can be utilized [17].

9.4.4

Hybrid nature

One of the merits of robotic surgery is diminishing innate human error by providing stable retraction and precise movement. The Flex Colorectal Drive, however, is a hybrid robotic system that utilizes laparoscopic-style equipment; unlike other robotic platforms, it transmits tremor to the tip of the instrument. It is also not possible to manipulate both the camera and the arms at the same time due to the semirobotic nature of the Flex. These distinctive characteristics should be considered, and surgeons should be trained appropriately before practicing in the hospital setting.

9.5

Future directions in robotics

The robotics industry has been growing at a steady rate and has already become a significant participant in the health sector. According to Intuitive Surgical, general surgery comprised the largest share among all disciplines using the da Vinci Robotic System in 2017, with colorectal procedures being the leading frontier. Nevertheless, nearly two-thirds of registered hospitals in the United States cannot afford a robotic device due to its high capital cost, service costs, and “razor-razorblade” style instrumentation that necessitates the continuous purchase of disposable equipment. Therefore new companies with innovative systems that can eliminate the drawbacks of previous robotic platforms are needed to optimize the robotic experience; with competitive pricing, the cost-effectiveness of robotic surgery can be maximized. As new-generation surgeons become familiar with robots during their early practice, we will be able to discover potential advantages that have yet to be revealed. The main drawbacks of robotic surgery, such as longer operative times and increased cost per case, should be carefully addressed.

9.5.1

Single-port designs

Single-port (SP) surgery has gained popularity, especially within the last decade, yet crowding at the port site brings a spectrum of technical difficulties, mainly due to the straight nature of the instruments. In addition to its cosmetic advantages, SP surgery carries the potential to decrease wound-related complications. Transanal surgery shares the same core principle as SP surgery, and as SP systems keep evolving, advances in this field are likely to be reflected in TaTME as well. Recently, the FDA started an evaluation process for the da Vinci SP Robotic System, which utilizes the SP approach to enter the abdominal cavity and flexible instruments to perform various procedures. Even though the current da Vinci Xi system has been used for robotic TaTME, adaptation of the SP system to transanal surgery would enhance the robotic TaTME experience and create a new, riveting facet. Similarly, the SPORT (Single Port Orifice Robotic Technology) robotic system (Titan Medical, Toronto, Ontario, Canada) has been anticipated to appear in operating rooms in 2019, and will potentially provide a new frontier for MIS. The SPORT system also offers a solution to the stationary nature of the other robots through its small footprint and highly mobile setup; most importantly, it is estimated to cost less than the predicate da Vinci robot. In the future, more robotic systems are expected to establish a bridge between NOTES and SP approaches similar to the Flex Colorectal Drive. Another expected entrant to the robotic surgery field is a miniature robot from Virtual Incision (Pleasanton, CA). Virtual Incision also embraced the idea of miniaturization, further shrinking its robot to fit through the umbilicus with a flexible-tip camera and two robotic arms. The most recognizable features of this robot are its vertical design and highly portable nature, weighing less than two pounds. Virtual Incision’s robot can adjust to multiquadrant surgery more easily than other robotic devices due to its increased dexterity and simplified docking system.

9.5.2

Haptic feedback

The Senhance Robotic System (TransEnterix, Morrisville, North Carolina), previously known as Telelap ALF-X, is another robotic device that embraces the foundations of laparoscopy and was approved by the FDA in October 2017. The Senhance Robotic System not only offers the same virtues as its peers, namely high-definition 3D vision and improved ergonomics for the surgeon, but also provides haptic feedback and an eye-tracking system that eases control during surgery. Senhance is a unique robotic system that transmits sensory stimulation to the surgeon and reduces reliance on vision-dependent decision-making. The presence of haptic feedback can decrease inadvertent visceral injuries, especially when the instruments are not directly visualized during their introduction into the abdominal cavity. The system consists of a control cockpit and four separate patient-side arms, including a camera controlled by the surgeon’s gaze that allows focused surgery. The Senhance robot also has reusable endoscopic parts that can ease concerns about maintenance costs. After the system proved its validity in gynecologic surgery, Spinelli et al. reported the first colorectal experience with Senhance, in which the authors stated that the robot is safe and feasible [18,19]. In conclusion, the technological advancements provided by the Senhance robot can potentiate further developments in haptic feedback in the future. Moving forward, surgeon-centered, wearable robotic devices are on the horizon. In 2017 the European Commission funded a research project focusing on a wearable robotic device that covers the surgeon’s hands like an exoskeleton, transmits signals to a surgical device, and delivers haptic feedback. It is not known whether the device will be available for colorectal surgery, but the surgical community eagerly awaits new breakthroughs and robots that can enhance MIS.

9.6

Conclusion

As the surgical community seeks to improve MIS, robotic surgery will continue to evolve and create new facets in this field so that more patients can benefit from the purported advantages of robotics. Transanal surgery is also targeted by new robotic devices and is being utilized increasingly worldwide. The Flex Colorectal Drive appears to be an innovative solution for transanal surgery and pioneers flexible robotic systems in colorectal surgery. The combination of well-known colonoscopic principles, laparoscopic techniques, and the stability of robotic systems makes the Flex Colorectal Drive a high-potential platform for the management of rectal lesions. Nonetheless, it is still unknown whether such advantages of robotic systems will translate into superior clinical outcomes. To that end, new-generation surgeons should become acquainted with robotic systems during their practice to increase their use, and clinical studies are needed to prove their efficacy on both the surgeon and patient frontiers.

Acknowledgment

The authors would like to thank Sam Atallah, MD, for the figures used in this chapter.

References

[1] Guillou PJ, Quirke P, Thorpe H, et al. Short-term endpoints of conventional versus laparoscopic-assisted surgery in patients with colorectal cancer (MRC CLASICC trial): multicentre, randomised controlled trial. Lancet 2005;365(9472):1718–26. Available from: https://doi.org/10.1016/S0140-6736(05)66545-2.
[2] Nelson H. Laparoscopically assisted colectomy is as safe and effective as open colectomy in people with colon cancer. Cancer Treat Rev 2004;30(8):707–9. Available from: https://doi.org/10.1016/j.ctrv.2004.09.001.
[3] Clinical Outcomes of Surgical Therapy Study Group, Nelson H, Sargent DJ, et al. A comparison of laparoscopically assisted and open colectomy for colon cancer. N Engl J Med 2004;350(20):2050–9. Available from: https://doi.org/10.1056/NEJMoa032651.
[4] Veldkamp R, Kuhry E, Hop WCJ, et al. Laparoscopic surgery versus open surgery for colon cancer: short-term outcomes of a randomised trial. Lancet Oncol 2005;6(7):477–84. Available from: https://doi.org/10.1016/S1470-2045(05)70221-7.
[5] Bonjer HJ, Deijen CL, Abis GA, et al. A randomized trial of laparoscopic versus open surgery for rectal cancer. N Engl J Med 2015;372(14):1324–32. Available from: https://doi.org/10.1056/NEJMoa1414882.
[6] Fleshman J, Branda M, Sargent DJ, et al. Effect of laparoscopic-assisted resection vs open resection of stage II or III rectal cancer on pathologic outcomes. JAMA 2015;314(13):1346. Available from: https://doi.org/10.1001/jama.2015.10529.
[7] Stevenson ARL, Solomon MJ, Lumley JW, et al. Effect of laparoscopic-assisted resection vs open resection on pathological outcomes in rectal cancer. JAMA 2015;314(13):1356. Available from: https://doi.org/10.1001/jama.2015.12009.
[8] Weaver A, Steele S. Robotics in colorectal surgery. F1000Res 2016;5:2373. Available from: https://doi.org/10.12688/f1000research.9389.1.
[9] Silva-Velazco J, Dietz DW, Stocchi L, et al. Considering value in rectal cancer surgery. Ann Surg 2017;265(5):960–8. Available from: https://doi.org/10.1097/SLA.0000000000001815.
[10] Memon S, Heriot AG, Murphy DG, Bressel M, Lynch AC. Robotic versus laparoscopic proctectomy for rectal cancer: a meta-analysis. Ann Surg Oncol 2012;19(7):2095–101. Available from: https://doi.org/10.1245/s10434-012-2270-1.
[11] Cui Y, Li C, Xu Z, et al. Robot-assisted versus conventional laparoscopic operation in anus-preserving rectal cancer: a meta-analysis. Ther Clin Risk Manag 2017;13:1247–57. Available from: https://doi.org/10.2147/TCRM.S142758.
[12] Atallah S, Nassif G, Polavarapu H, et al. Robotic-assisted transanal surgery for total mesorectal excision (RATS-TME): a description of a novel surgical approach with video demonstration. Tech Coloproctol 2013;17(4):441–7. Available from: https://doi.org/10.1007/s10151-013-1039-2.
[13] Stulberg JJ, Delaney CP, Neuhauser DV, Aron DC, Fu P, Koroukian SM. Adherence to surgical care improvement project measures and the association with postoperative infections. JAMA 2010;303(24):2479. Available from: https://doi.org/10.1001/jama.2010.841.
[14] Hahnloser D, Cantero R, Salgado G, Dindo D, Rega D, Delrio P. Transanal minimal invasive surgery for rectal lesions: should the defect be closed? Colorectal Dis 2015;17(5):397–402. Available from: https://doi.org/10.1111/codi.12866.
[15] Zhang C, Sylla P. Current endoluminal approaches: transanal endoscopic microsurgery, transanal minimally invasive surgery and transanal total mesorectal excision. In: Lee S, Ross H, Rivadeneira D, Steele S, Feingold D, editors. Advanced colonoscopy and endoluminal surgery. Cham: Springer; 2017. Available from: https://doi.org/10.1007/978-3-319-48370-2_22.
[16] Atallah S. Assessment of a flexible robotic system for endoluminal applications and transanal total mesorectal excision (taTME): could this be the solution we have been searching for? Tech Coloproctol 2017;21(10):809–14. Available from: https://doi.org/10.1007/s10151-017-1697-6.
[17] Nishizawa T, Suzuki H, Goto O, Ogata H, Kanai T, Yahagi N. Effect of prophylactic clipping in colorectal endoscopic resection: a meta-analysis of randomized controlled studies. United Eur Gastroenterol J 2017;5(6):859–67. Available from: https://doi.org/10.1177/2050640616687837.
[18] Fanfani F, Restaino S, Rossitto C, et al. Total laparoscopic (S-LPS) versus TELELAP ALF-X robotic-assisted hysterectomy: a case-control study. J Minim Invasive Gynecol 2016;23(6):933–8. Available from: https://doi.org/10.1016/j.jmig.2016.05.008.
[19] Spinelli A, David G, Gidaro S, et al. First experience in colorectal surgery with a new robotic platform with haptic feedback. Colorectal Dis 2018;20(3):228–35. Available from: https://doi.org/10.1111/codi.13882.

10

The Use of Robotics in Colorectal Surgery

Bogdan Protyniak, Thomas Erchinger, William J. Sellers, Anjuli M. Gupta, Gordian U. Ndubizu and Kelly R. Johnson

Geisinger, Wilkes-Barre, Pennsylvania, United States

ABSTRACT Minimally invasive techniques have gained popularity in colorectal surgery due to their numerous benefits: shorter length of stay, fewer complications, and high-definition visualization. However, increased adoption of laparoscopy in the challenging confines of the narrow pelvis and the need to operate in multiple quadrants were met with steep learning curves and increased conversion rates. Furthermore, recent multicenter randomized controlled trials have cast doubt on the ability of laparoscopic total mesorectal excision to achieve pathologic outcome noninferiority compared to open resection in locally advanced rectal cancers. Robotics mitigates these shortcomings using three-dimensional optics, wrist-like motion, tremor filtering, motion scaling, better ergonomics, and less fatigue. This translates into a lower conversion rate, an easier learning curve, and the ability to operate in constricted spaces and in multiple quadrants. We have found these advantages when performing robotic partial colectomy, low anterior resection, total colectomy, abdominoperineal resection, and rectopexy. Robotic-assisted intracorporeal anastomoses are noticeably easier to perform compared to laparoscopic, limiting the excessive handling of bowel that leads to ileus and possibly improper orientation. Extraction sites no longer need to be in the midline, which decreases incisional hernia occurrence and patient morbidity. Improved visualization, particularly in the narrow pelvis, helps attain completeness of the mesorectal envelope and decreases the conversion rate. Surgeries are performed using either the da Vinci Xi or Si (Intuitive Surgical Inc., Sunnyvale, California, United States) iterations of the robotic system. The advantages of the newly introduced Xi Surgical System overcome the robotic arm restrictions and difficulty operating in multiple quadrants common with its predecessor. Critics of robotics cite cost as the major deterrent to adopting new technology.
However, this can be offset with increased case volume, instrument use optimization, and previously mentioned clinical benefits.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00010-4 © 2020 Elsevier Inc. All rights reserved.


10.1

Introduction

Laparoscopy was a revolutionary departure from open colorectal surgery in the 1990s, heralding the era of minimally invasive techniques. Its advantages of shorter length of stay, lower overall postoperative complications, similar cancer-specific survival, and high-definition visualization were quickly realized [1,2]. Despite this, multiple studies analyzing trends prior to 2010 showed slow adoption of laparoscopic colon resection, with an overall utilization of around 20% across both academic and community practices [3]. Subsequent analysis of the National Cancer Database, which accounts for over 70% of all diagnosed cancers in the United States, revealed that laparoscopy was utilized in less than half of all colon resections [4]. The matter is complicated by a trend toward an increased rate of laparoscopic surgery at mid- and high-volume specialty centers [5]. This disparity can be explained by a steep learning curve that requires a large volume of cases to obtain proficiency and superior outcomes; consequently, dissemination of laparoscopy to low-volume community surgeons is limited. Like laparoscopic colon surgery, laparoscopic rectal resection has plateaued at approximately 19% across the United States, increasing by only 2% from 2009 to 2014. Robotic rectal resection, in contrast, increased dramatically from 1% to 13% in the same period. Robotic surgery has modernized minimally invasive surgery (MIS) with its easier learning curve and decreased conversion rate, allowing more patients to benefit from minimally invasive techniques. This makes it a critical tool in the colorectal surgeon’s minimally invasive armamentarium, particularly for late adopters.

10.2

Challenges with open and laparoscopic surgery

Open colorectal surgery has matured tremendously, with proven oncologic outcomes and a standardized technique. Professor Richard Heald first described total mesorectal excision (TME) in 1982 and went on to demonstrate excellent local control of primary rectal cancer and its locoregional spread [6,7]. Surgeons worldwide quickly adopted the technique due to its remarkably low recurrence rate, recognizing the significance of meticulous dissection in the management of rectal cancer [8]. Following such advancements, surgical technique remained largely unchanged until the advent of laparoscopy in the 1990s. Initial concerns regarding laparoscopic oncologic inferiority and port-site recurrence were disproven by multiple randomized controlled trials [9–12]. Criticism of longer operative times quickly gave way to the obvious benefits of reduced length of stay, surgical site infections, ileus, and pain when compared to conventional open colorectal resection [13,14]. Laparoscopy won favorable approval and subsequent adoption in high-volume academic centers, with the ensuing development of enhanced recovery after surgery (ERAS) pathways to further improve length of stay and decrease morbidity [15]. Despite numerous benefits, adoption of laparoscopy in the challenging confines of the narrow pelvis and the need to operate in multiple quadrants were met with steep learning curves and increased complications among low-volume surgeons [16]. Operating laparoscopically requires a degree of dexterity, exacerbates hand tremor, and sacrifices the tactile feedback of handling the bowel manually. Depth perception is impaired by the added challenge of operating without a three-dimensional (3D) view, making intracorporeal suturing and anastomoses very difficult. Although 3D laparoscopic optics are offered by industry today, they have not gained popularity.
Studies comparing totally robotic versus 3D laparoscopic colectomy have shown easier intracorporeal anastomoses and better outcomes in left colectomies, especially in resections approaching the rectum, when using the robot [17]. Case numbers of 20–70 colectomies have been reported to achieve basic proficiency in laparoscopy, approximately double the number required for robotics [16,18]. Studies comparing right and left colectomy demonstrated a learning curve of 55 cases for right-sided and 62 cases for left-sided colectomy [19]. High-volume surgeons were found to have a lower probability of both intraoperative and postoperative complications [16]. The increased risk of morbidity and the number of cases necessary to build experience in effect preclude low-volume surgeons from adopting laparoscopy. However, even in experienced, high-volume hands, doubt has been raised as to whether laparoscopy is noninferior to open rectal surgery. Both the ALaCaRT and ACOSOG Z6051 multicenter randomized controlled trials have caused skepticism about the ability of laparoscopic TME to achieve pathologic outcome noninferiority compared to open resection in locally advanced rectal cancers [20,21]. One of the main authors conjectured that modification of instruments or a different platform such as robotics would improve the efficacy of minimally invasive techniques [20].

The ergonomic challenges associated with operating laparoscopically were not previously encountered with open procedures and have improved very little over the years due to the inherent limitation of in-line instruments. Laparoscopic colectomy, unlike cholecystectomy, is a more dynamic operation and involves dissection in multiple quadrants of the abdomen. This involves multiple views with changing angles, subjecting the surgeon to increased fatigue and muscular stress. Numerous factors must be well thought out during the procedure to maximize operative safety and efficiency while diminishing mental and physical stress on the surgeon and assistants. These include: surgeon and patient position, port placement, manipulation angles, laparoscopic instruments, height and resolution of the video monitor, and lighting. Oftentimes ports must be placed in ergonomically inefficient positions to facilitate in-line dissection of certain planes due to the lack of endowristed instruments. Proctectomy serves as an example, demanding exceptional skill when working in the deep pelvis with in-line rigid instruments from angles that require complex maneuvers to reach the extremes of the pelvis. Hand-assisted laparoscopic surgery (HALS) sought to overcome the difficulty of in-line instrumentation while touting a minimally invasive technique. However, studies comparing HALS with standard laparoscopic colectomy showed the former to be associated with an increased risk of wound complications, postoperative ileus, and readmissions [22]. Augmented reality simulators comparing the two techniques have also shown that although the hand-assisted approach may be technically easier to perform, it is associated with increased intraoperative errors [23]. Furthermore, the principles of TME call for sharp dissection when dealing with rectal cancer, precluding hand-assisted dissection within the pelvis.

10.3

Robotic surgery experience

Robotics overcomes many of the disadvantages of open surgery as well as those still present with laparoscopy. In a way, it embodies the natural progression in the path to MIS. The advantages include: 3D optics, wrist-like motion, tremor filtering, motion scaling, better ergonomics, and less fatigue. This translates into a lower conversion rate, decreased length of stay, an easier learning curve, and the ability to operate in constricted spaces. Conversion from MIS to open has a deleterious impact on numerous patient factors, including increased transfusion rate (11.5% vs 1.9%), wound infection rate (23% vs 12%), complication rate (44% vs 21%), length of stay (16 days vs base), and 5-year disease-free survival rate (40.2% vs 70.7%) [24–26]. Recent analyses of the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) database comparing thousands of patients who underwent laparoscopic or robotic colorectal surgery found significantly lower conversion rates for robotics and a significantly shorter length of hospital stay for both abdominal and pelvic robotic cases, with no difference in postoperative complications between the two groups [27,28]. Other large database studies comparing the two groups with propensity score matching demonstrated reduced 30-day postoperative septic complications (2.3% vs 4%), hospital stay (mean: 4.8 vs 6.3 days), and discharge to another facility (3.5% vs 5.8%) in favor of robotic colectomy [29]. Analysis of the Michigan Surgical Quality Collaborative database comparing laparoscopic, hand-assisted laparoscopic, and robotic colon and rectal operations found significantly lower conversion rates for robotics in rectal resections (21.2% vs 7.8%), with the difference approaching significance for colon resections (16.9% vs 9%) [30]. Conversion to open resulted in significantly longer length of stay for both robotic (1.3 days) and laparoscopic procedures (1.7 days).

Studies have shown that the learning curve for robotic colorectal surgery ranges from 15 to 25 cases. Achieving a learning curve half that required for laparoscopy requires the surgeon to master three unique concepts of robotic surgery, as outlined by Bokhari et al. [18]: (1) substituting visual cues for tactile feedback when judging tension and manipulating tissues, (2) grasping the spatial orientation of robotic instruments outside the visual field of view to maneuver safely without direct visualization, and (3) envisioning the alignment of the robotic arms and cart while operating remotely at the console, thereby minimizing external collisions [18]. A more recent study examined whether physician factors (including time since graduation, fellowship status, and number of procedures performed) were associated with hospital stay and complications following common robotic surgery procedures among 1670 patients in the State of New York. Hospital-level factors were also analyzed, including urban versus rural setting, teaching status, hospital size, and the presence of a fellowship. After evaluating all factors in multivariable regression models and adjusting for covariates such as patients’ characteristics and comorbidities, neither physician- nor hospital-related factors were significantly related to length of stay or complications [31]. Robotic surgery may eliminate the differences between hospitals and physicians, making outcomes independent of surgeon volume and experience. The benefits of intracorporeal anastomosis and off-midline specimen extraction have already been demonstrated with laparoscopic colorectal surgery. These are made even easier with robotic assistance, limiting the excessive handling of bowel that leads to ileus and improper orientation, and avoiding a midline extraction site.

Past studies comparing laparoscopic right hemicolectomy with intracorporeal versus extracorporeal anastomosis showed decreased postoperative complications (18.7% vs 35%), infection rate (4.4% vs 14%), length of stay (mean: 5.9 vs 6.9 days), and incisional hernia rate (2.2% vs 17%) [32]. A large study examining extraction site location and incisional hernias after laparoscopic colorectal surgery showed twice the rate of incisional hernia with midline extraction compared to off-midline (8.9% vs 2.3%–4.8%) [33]. A recent multicenter retrospective study compared robotic right colectomy with intracorporeal anastomosis (RRCIA) to laparoscopic right colectomy with extracorporeal (LRCEA) and intracorporeal (LRCIA)

162

Handbook of Robotic and Image-Guided Surgery

anastomosis among 236 patients. RRCIA offers significantly better perioperative recovery outcomes compared to LRCEA, with a substantial reduction in the length of stay (4 vs 7 days). Compared with the LRCIA, the RRCIA had a shorter time to first flatus but offered no advantages in terms of the length of stay. Once again, the conversion rate was much lower for RRCIA (3.9%) versus LRCEA (8.5%) versus LRCIA (15%) [34]. This study reinforces the benefits of an intracorporeal anastomosis and the fact that it is much easier to perform robotically, leading to a decreased conversion rate. Multiple studies have demonstrated the safety and feasibility of robotic colorectal resection with regards to short-term oncologic outcomes [35,36]. A recent retrospective study comprised of 732 patients analyzing long-term oncologic outcomes using propensity score matching showed comparable survival between robotic and laparoscopic TME. In multivariate analysis, robotic surgery was a significant prognostic factor for overall survival and cancer-specific survival [37]. The latest and largest randomized clinical trial of robotic-assisted laparoscopic surgery for patients with rectal adenocarcinoma (ROLARR) demonstrated comparable oncologic outcomes to previously published large randomized trials. The positive circumferential resection margin rate (5.7%) was lower than previous trials studying conventional laparoscopy (ACOSOG Z6051, 12.1%; ALaCaRT, 7%). Pathological grading of intact mesorectum (75.3%) was comparable to ACOSOG Z6051 (72.9%). Surprisingly, there was no statistically significant difference in the rate of conversion to open laparotomy for robotic compared with laparoscopic surgery (8.1% vs 12.2%) [38]. The authors attributed this to surgeons having varying robotic experience as compared to the expert laparoscopic group. The fact that less experienced robotic surgeons had the same conversion rate as expert laparoscopists supports the previously mentioned study by Altieri et al. 
which did not find surgeon robotic experience to be tied to outcomes or length of stay, in contrast to laparoscopy [31]. Disadvantages of robotic surgery include increased operative time, lack of haptic feedback, the surgeon's remote location away from the operating room table, inability to perform multiquadrant abdominal surgery, and the cost of the technology [38-41]. Several metaanalyses and a recent ACS NSQIP database analysis compared operative times for robotic versus laparoscopic colorectal resections and found a mean operative time approximately 40 minutes longer for the robotic approach [28,42,43]. Longer operative times have been shown to improve with surgeon experience, with some single-surgeon studies demonstrating a statistically significant decrease in mean operative time from 267 to 224 minutes [44]. However, larger randomized studies of surgeons with varying robotic experience still showed prolonged operating time compared with laparoscopy [38]. With experience, visual cues substitute for haptic feedback, avoiding excessive tissue manipulation and injury. Numerous studies, discussed previously, have shown the safety and feasibility of robotic surgery with equivalent or decreased complications compared with laparoscopic surgery, making the lack of haptic feedback a nonsafety issue. One can postulate that haptic feedback might reduce operative time, but this will require implementation and further study of such technology. Seasoned first assistants and a well-trained robotics team can provide confidence and feedback at the bedside while the surgeon is at the console, minimizing the issue of the surgeon's not being at the patient's bedside. It behooves the surgeon to train his or her team and to have an action plan in case of emergency bleeding or the need to convert to open laparotomy.
Finally, the cost of new technology is offset by increased case volume, optimized instrument use, and the clinical benefits described above. However, cost remains controversial: acquiring the latest robotic system costs $1.85-$2.3 million, excluding ongoing instrument and maintenance costs, which can range from $0.08 to $0.17 million per year. The ROLARR randomized clinical trial comparing robotic with laparoscopic rectal surgery suggested that robotic surgery for rectal cancer is unlikely to be cost-saving: the mean difference per operation, excluding acquisition and maintenance costs, was $1132, driven by longer operating room time and the increased cost of robotic instruments [38,45]. In contrast, a recent study of surgeons with high volumes in both robotic and laparoscopic colorectal procedures (30 or more robotic procedures per year) showed no statistically significant difference in total direct cost. Supply costs were higher for robotic surgery (mean difference: $764) because of the costs associated with robotic reusable instruments, but total direct costs (supplies, hospital stay, and operating room costs) showed no difference ($24,473 vs $24,343), likely owing to reduced length of stay and a lower conversion rate [46]. Costs can be further reduced by decreasing operative time, limiting superfluous robotic instrument use, and improving utilization of the robotic system.

10.4 Patient selection and evaluation

Patient selection is paramount at the beginning of a surgeon's robotic learning curve. Ideal candidates have minimal to no previous abdominal operations and a low BMI. With growing experience the impact of such factors diminishes, and it becomes easier to perform robotic surgery on more difficult patients than it is with laparoscopy. The type of surgery must also be considered, specifically the need to work in multiple quadrants. Previous da Vinci models required patient repositioning, redocking, or placement of additional ports to perform multiquadrant surgery such as total colectomy or low anterior resection with splenic flexure mobilization [47,48]. The newest model has redesigned robotic arms that allow closer port spacing and a mobile boom that permits movement of the robotic arms while keeping the robot stationary at the bedside. This allows efficient use of the robot in all types of multiquadrant colorectal resections. Low anterior resection can commonly be performed with concomitant splenic flexure mobilization without redocking the robot or repositioning the arms [49].

10.5 Preoperative preparation

Preoperative preparation for robotic colorectal surgery differs from laparoscopy in one significant way: the surgeon sits remotely at the console within the room rather than operating at the patient's side. It therefore behooves the surgeon to have a well-trained robotics team that can troubleshoot arm collisions at the bedside, assist with retraction, perform instrument exchanges, and, most importantly, facilitate emergency undocking should the need arise. Team training should be part of routine preoperative preparation. The enhanced recovery pathway has been shown to play a pivotal role in improved patient outcomes in robotic colorectal surgery and is standard for all elective colorectal operations at our institution. Its benefits include reduced length of stay, cost, surgical site infections, postoperative pain, and postoperative nausea/vomiting, along with earlier return of bowel function. The preoperative elements of the ERAS guidelines are established by the American Society of Colon and Rectal Surgeons and the Society of American Gastrointestinal and Endoscopic Surgeons [50]. Preoperative interventions include discussion of milestones and discharge criteria with the patient prior to surgery and ostomy education, including marking and counseling on dehydration. Patients stay on a clear liquid diet until 2 hours prior to general anesthesia, as this has been shown to be safe and to improve patients' sense of well-being. Carbohydrate loading is encouraged preoperatively in nondiabetics in an effort to reduce the insulin resistance induced by starvation and surgery. Both mechanical and oral antibiotic bowel preparations are recommended, as the combination is associated with a decreased complication rate, including an overall reduction in total surgical site infections and incisional site infections. Preoperative optimization of the patient's medical comorbidities and deconditioning, as well as preset preadmission orders, is recommended.

Our preoperative bundle includes chlorhexidine skin wash, mechanical bowel preparation with oral antibiotics and polyethylene glycol, deep venous thrombosis prophylaxis with subcutaneous heparin, and cefoxitin within 1 hour of incision. A multimodal, opioid-sparing pain management plan implemented prior to anesthesia is recommended and has been associated with earlier return of bowel function and shorter length of stay. Our preoperative multimodal analgesia includes acetaminophen, gabapentin, tramadol, and a surgeon-administered transversus abdominis plane block using liposomal bupivacaine. Perioperative antiemetic prophylaxis should be discussed with the patient preoperatively to assess risk factors for postoperative nausea/vomiting. Lastly, patients should undergo goal-directed fluid administration to avoid fluid overload or restriction, either of which can increase postoperative morbidity and therefore lengthen the stay. Such preparation leads to a smoother transition to postoperative care and recovery, with quicker return of bowel function, shortened length of stay, fewer surgical site infections, improved nutrition and wound healing, improved pain control, shorter time to postoperative rehabilitation, and, for oncologic patients, timely initiation of adjuvant chemotherapy [37,50].

10.6 Operative setup

The robotics team, along with the surgeon, is instrumental in positioning the patient, ensuring proper padding of pressure points, orienting the bed to facilitate seamless docking, and cooperating with anesthesia to protect the patient's airway and monitoring equipment. A clean and organized back table comprising only procedure-specific instruments facilitates efficient instrument exchanges and cuts down on cost. The surgeon must also ensure the patient can tolerate pneumoperitoneum and extreme position changes, specifically steep Trendelenburg to facilitate low rectal dissection. The patient must be positioned prior to docking the robot, after which the bed remains fixed relative to the robot platform. Patient position varies with the type of surgery but is based on tilting the patient away from the target organ to facilitate small bowel retraction. Newer table motion technology has recently been released that allows robot and table movement in synchrony without undocking the robot or repositioning instruments [51]. The following serves as an example for left-sided and pelvic fully robotic approaches with single docking using the da Vinci Si or Xi Surgical Systems [49]. The da Vinci Xi is targeted only once at the beginning of the case and does not require arm repositioning for splenic flexure mobilization. A medial-to-lateral approach is used in all dissections, with standardized port placement for both sigmoidectomy and low anterior resection. The patient is positioned in modified lithotomy, 30-degree Trendelenburg, with the left side up. The robot is docked off the patient's left thigh; a green laser cross-hair indicates the precise docking position for the da Vinci Xi. For the da Vinci Si, a 12-mm optical trocar is placed under direct visualization superior and immediately to the right of the umbilicus, followed by an 8-mm trocar in the right lower quadrant medial to the anterior superior iliac spine, a 12-mm assistant trocar in the right mid-abdomen at the anterior axillary line, a 5-mm assistant trocar in the right upper quadrant, an 8-mm trocar in the left mid-abdomen at the midclavicular line, and an 8-mm trocar in the left mid-abdomen at the anterior axillary line. We find the 5-mm assistant port useful only when performing difficult low rectal dissection. The right lower quadrant 8-mm robotic trocar can be replaced with a 12-mm robotic trocar to use the endowristed robotic stapler in place of the laparoscopic stapler, which is otherwise inserted through the 12-mm assistant port. For the da Vinci Xi, a 5-mm optical camera is used to access the peritoneum through a left upper quadrant site in the anterior axillary line and is later upsized to an 8-mm trocar. Additional ports are placed diagonally in the following locations: an 8-mm left upper quadrant mid-clavicular trocar, an 8-mm right paraumbilical trocar, a 12-mm right lower quadrant trocar, and an assistant 5-mm right upper quadrant trocar (Fig. 10.1). Robotic right hemicolectomy requires the patient to be positioned in 15-degree reverse Trendelenburg with the table tilted 15 degrees right side up. The robot is docked at an angle off the patient's right shoulder.

Port placement is as follows: a 12-mm optical trocar is placed under direct visualization inferior and immediately to the left of the umbilicus, followed by an 8-mm trocar in the left mid-abdomen at the mid-clavicular line, a 12-mm assistant trocar in the left lower quadrant at the anterior axillary line, an 8-mm suprapubic trocar, and an 8-mm trocar in the right lower quadrant medial to the anterior superior iliac spine (Fig. 10.2). This port placement allows for an intracorporeal anastomosis using a laparoscopic stapler through the 12-mm left lower quadrant assistant port. Alternatively, the 8-mm left mid-abdomen port can be replaced with a 12-mm robotic port to use the robotic stapler.

FIGURE 10.1

FIGURE 10.2 Port placement for da Vinci Si left-sided and pelvic fully robotic approach.

10.7 Surgical technique

FIGURE 10.3 Port placement for da Vinci Xi left-sided and pelvic fully robotic approach.

A medial-to-lateral approach is used in all colon resections with standardized port placement as previously described. Dissection is performed sharply with monopolar scissors in the embryonic fusion planes. Major vessels are skeletonized, ligated with Hem-o-lok clips, and divided using robotic scissors. Fenestrated bipolar robotic graspers facilitate hemostasis. The bowel is divided with the robotic or laparoscopic stapler, depending on surgeon preference and cost. We find the robotic stapler crucial in the low, narrow pelvis during proctectomy because of its large range of motion and 90 degrees of articulation. Intracorporeal anastomosis is performed in a side-to-side isoperistaltic fashion using the robotic stapler, with robotic absorbable-suture closure of the common enterotomy/colotomy in two layers. Specifics for each type of surgery are outlined below.

For robotic right hemicolectomy, the ileocolic vascular pedicle is ligated and divided at its base to include the lymph node envelope with the specimen. The isoperistaltic ileocolic anastomosis is created using the left upper quadrant 12-mm robotic stapler port, which is later enlarged for specimen extraction. Alternatively, a laparoscopic stapler can be used to create the anastomosis through the left lower quadrant 12-mm assistant port, which can be enlarged for subsequent specimen extraction.

For robotic left hemicolectomy, the inferior mesenteric artery (IMA) is skeletonized and the left colic artery is divided at its origin. The inferior mesenteric vein (IMV) is identified immediately caudad to the pancreas and divided. The splenic flexure is reduced by gaining access to the lesser sac. The isoperistaltic colocolonic anastomosis is created using the robotic stapler through a right lower quadrant 12-mm robotic port, which becomes the specimen extraction site.

For robotic low anterior resection, the IMA is skeletonized and divided distal to the takeoff of the left colic artery. Division of the IMA at its base is performed when additional length is needed for a low pelvic anastomosis. The IMV is identified immediately caudad to the pancreas and divided, and the splenic flexure is reduced by gaining access to the lesser sac. Dissection of the rectum proceeds between the endopelvic fascia and the fascia propria of the rectum, avoiding injury to the inferior hypogastric nerves. After the rectum is divided, the robot is undocked. For proximal rectal cancer, a tumor-specific TME is performed by dividing the rectal mesentery at a right angle to the rectum 5 cm distal to the tumor. For mid to distal rectal cancer, a complete TME is performed down to the levators, at which point the rectum is divided with the robotic stapler. A 29-mm anvil is purse-stringed into the exteriorized proximal colon end, followed by a laparoscopic end-to-end colorectal anastomosis. The operative steps are similar for an abdominoperineal resection, except that the robotic TME dissection continues in a cylindrical fashion by incising the levators and the anococcygeal ligament to gain access to the ischiorectal fat.

For ventral mesh rectopexy, the peritoneum is incised in a "lazy-J" shape beginning at the sacral promontory, continuing along the right peritoneal reflection, and terminating just to the left of the anterior rectum. Dissection takes place only anterior to the rectum, proceeding distally to the transversus perinei muscles. A tailored (approximately 16 x 3 cm) biologic mesh is then sutured anteriorly to the distal rectum using horizontal mattress absorbable sutures and attached proximally to the sacral promontory with horizontal mattress nonabsorbable sutures.

Total abdominal colectomy is the epitome of robotic multiquadrant surgery and therefore cannot, in our experience, be performed with the da Vinci Si using a single-dock technique.
The da Vinci Xi has the advantage of a rotating boom, which allows easy manipulation of its arms to facilitate total abdominal colectomy. Ports are placed horizontally at the level of the umbilicus to allow enough space to work in both the pelvis and the upper abdomen (Fig. 10.3). Dissection begins with the rectum and proceeds proximally until the middle colic vessels are divided (Fig. 10.4). Attention is then turned to the right lower quadrant, where the cecum and right colon are dissected from caudad to cephalad, finishing with mobilization of the hepatic flexure (Fig. 10.5).

FIGURE 10.4 Port placement for robotic right hemicolectomy with intracorporeal anastomosis using the 12-mm left lower quadrant assistant laparoscopic stapler.

FIGURE 10.5 Port placement for da Vinci Xi total abdominal colectomy fully robotic approach.

10.8 Discussion

Laparoscopic colorectal surgery has clear advantages over the open technique; however, it becomes difficult to perform with limited visualization in narrow spaces and across multiple quadrants. Robotic surgery allows more surgeons to achieve the advantages of a minimally invasive technique via an easier learning curve and a decreased conversion rate. Surgeons who are past their robotic learning curve can further leverage the technology and extract clinical benefits through more advanced applications, such as performing intracorporeal anastomoses and operating with less difficulty on obese patients and those with previous surgeries. Although the cost of the new technology is prohibitive to some, this can be ameliorated by optimizing robot time utilization and instrument use, and by future competition from other device manufacturers (Figs. 10.6 and 10.7).

FIGURE 10.6 The robotic arms are aimed at the left lower quadrant for the left-sided and pelvic dissection portion of the total abdominal colectomy.


FIGURE 10.7 The robotic arms are inverted to face the right lower quadrant to complete the second portion of the total abdominal colectomy.


Robotic technology continues to advance, and surgeons continue to become increasingly comfortable with new devices and procedures. New breakthroughs include eye-tracking cameras that keep the camera centered on the surgeon's gaze, reusable instruments, single-port robots, miniature in vivo robots, and natural orifice transluminal endoscopic robots. Recent advances in haptic sensing are allowing surgeons to palpate tissue and feel the force applied through their instruments, addressing one of the major complaints about current robotic technology. The evolution of robotic technology will continue to improve the learning curve for inexperienced and experienced robotic surgeons alike. These advances will accelerate as more companies compete within the robotic market, leading to novel innovations and improved outcomes in both patient care and robotic training. Increased market competition will also help decrease the potentially prohibitive costs to hospital systems. Per-procedure costs can be cut by increasing the number of procedures performed robotically. Increasing the number of robotically trained surgeons and maintaining well-educated perioperative teams have both been shown to be modifiable factors that increase the number of cases performed. An educated surgical team is valuable not only for decreasing turnover time and allowing each procedure to flow smoothly, but can also prove invaluable in managing intraoperative complications. As the limitations of robotic surgery diminish and the technology advances, the role of robotics will remain fundamental in colorectal surgery.

References

[1] Cone MM, Herzig DO, Diggs BS, et al. Dramatic decreases in mortality from laparoscopic colon resections based on data from the Nationwide Inpatient Sample. Arch Surg 2011;146:594-9.
[2] Wilson MZ, Hollenbeak CS, Stewart DB. Laparoscopic colectomy is associated with a lower incidence of postoperative complications than open colectomy: a propensity score-matched cohort analysis. Colorectal Dis 2014;16(5):382-9.
[3] Robinson CN, Chen GJ, Balentine CJ, Sansgiry S, Marshall CL, Anaya DA, et al. Minimally invasive surgery is underutilized for colon cancer. Ann Surg Oncol 2011;18(5):1412-18.
[4] Hawkins AT, Ford MM, Benjamin Hopkins M, Muldoon RL, Wanderer JP, Parikh AA, et al. Barriers to laparoscopic colon resection for cancer: a national analysis. Surg Endosc 2017. Available from: https://doi.org/10.1007/s00464-017-5782-8.
[5] Yeo HL, Isaacs AJ, Abelson JS, Milsom JW, Sedrakyan A. Comparison of open, laparoscopic, and robotic colectomies using a large national database: outcomes and trends related to surgical center volume. Dis Colon Rectum 2016;59(6):535-42.
[6] Heald RJ, Husband EM, Ryall RDH. The mesorectum in rectal cancer surgery—the clue to pelvic recurrence? Br J Surg 1982;69(10):613-16.
[7] Heald RJ, Moran BJ, Ryall RD, Sexton R, MacFarlane JK. Rectal cancer: the Basingstoke experience of total mesorectal excision, 1978-1997. Arch Surg 1998;133(8):894-9.
[8] Kapiteijn E, Marijnen CA, Nagtegaal ID, Putter H, Steup WH, Wiggers T, et al. Preoperative radiotherapy combined with total mesorectal excision for resectable rectal cancer. N Engl J Med 2001;345(9):638-46.
[9] Green BL, Marshall HC, Collinson F, Quirke P, Guillou P, Jayne DG, et al. Long-term follow-up of the Medical Research Council CLASICC trial of conventional versus laparoscopically assisted resection in colorectal cancer. Br J Surg 2013;100(1):75-82.
[10] Bonjer HJ, Deijen CL, Abis GA, Cuesta MA, van der Pas MH, de Lange-de Klerk ES, et al. A randomized trial of laparoscopic versus open surgery for rectal cancer. N Engl J Med 2015;372(14):1324-32.
[11] Kang SB, Park JW, Jeong SY, Nam BH, Choi HS, Kim DW, et al. Open versus laparoscopic surgery for mid or low rectal cancer after neoadjuvant chemoradiotherapy (COREAN trial): short-term outcomes of an open-label randomized controlled trial. Lancet Oncol 2010;11(7):637-45.
[12] Guillou PJ, Quirke P, Thorpe H, Walker J, Jayne DG, Smith AMH, et al. Short-term endpoints of conventional versus laparoscopic-assisted surgery in patients with colorectal cancer (MRC CLASICC trial): multicenter, randomized controlled trial. Lancet 2005;365:1718-26.
[13] Hewett PJ, Allardyce RA, Bagshaw PF, Frampton CM, Frizelle FA, Rieger NA, et al. Short-term outcomes of the Australasian randomized clinical study comparing laparoscopic and conventional open surgical treatments for colon cancer: the ALCCaS trial. Ann Surg 2008;248(5):728-38.
[14] Veldkamp R, Kuhry E, Hop WC, Jeekel J, Kazemier G, Bonjer HJ, et al. Laparoscopic surgery versus open surgery for colon cancer: short-term outcomes of a randomized trial. Lancet Oncol 2005;6(7):477-84.
[15] Spanjersberg WR, van Sambeeck JD, Bremers A, Rosman C, van Laarhoven CJ. Systematic review and meta-analysis for laparoscopic versus open colon surgery with or without an ERAS programme. Surg Endosc 2015;29(12):3443-53.
[16] Bennett CL, Stryker SJ, Ferreira MR, Adams J, Beart Jr. RW. The learning curve for laparoscopic colorectal surgery. Preliminary results from a prospective analysis of 1194 laparoscopic-assisted colectomies. Arch Surg 1997;132(1):41-4.
[17] Guerrieri M, Campagnacci R, Sperti P, Belfiori G, Gesuita R, Ghizelli R. Totally robotic vs 3D laparoscopic colectomy: a single center's preliminary experience. World J Gastroenterol 2015;21(46):13152-9.
[18] Bokhari MB, Patel CB, Ramos-Valadez DI, Ragupathi M, Haas EM. Learning curve for robotic-assisted laparoscopic colorectal surgery. Surg Endosc 2011;25(3):855-60.
[19] Tekkis PP, Senagore AJ, Delaney CP, Fazio VW. Evaluation of the learning curve in laparoscopic colorectal surgery: comparison of right-sided and left-sided resections. Ann Surg 2005;242:83-91.


[20] Fleshman J, Branda M, Sargent DJ, Boller AM, George V, Abbas M, et al. Effect of laparoscopic-assisted resection vs open resection of stage II or III rectal cancer on pathologic outcomes: the ACOSOG Z6051 randomized clinical trial. JAMA 2015;314(13):1346-55.
[21] Stevenson ARL, Solomon MJ, Lumley JW, Hewett P, Clouston AD, Gebski VJ, et al. Effect of laparoscopic-assisted resection vs open resection on pathological outcomes in rectal cancer: the ALaCaRT randomized clinical trial. JAMA 2015;314:1356-63.
[22] Gilmore BF, Sun Z, Adam M, Kim J, Ezekian B, Ong C, et al. Hand-assisted laparoscopic versus standard laparoscopic colectomy: are outcomes and operative time different? J Gastrointest Surg 2016;20(11):1854-60.
[23] Leblanc F, Delaney CP, Neary PC, Rose J, Augestad KM, Senagore AJ, et al. Assessment of comparative skills between hand-assisted and straight laparoscopic colorectal training on an augmented reality simulator. Dis Colon Rectum 2010;53(9):1323-7.
[24] Agha A, Furst A, Iesalnieks I, Fichtner-Feigl S, Ghali N, Krenz D, et al. Conversion rate in 300 laparoscopic rectal resections and its influence on morbidity and oncologic outcome. Int J Colorectal Dis 2008;23(4):409-17.
[25] Yamamoto S, Fukunaga M, Miyajima N, Okuda J, Konishi F, Watanabe M, et al. Impact of conversion on surgical outcomes after laparoscopic operation for rectal carcinoma: a retrospective study of 1,073 patients. J Am Coll Surg 2009;208(3):383-9.
[26] Rottoli M, Stocchi L, Geisler DP, Kiran RP. Laparoscopic colorectal resection for cancer: effects of conversion on long-term oncologic outcomes. Surg Endosc 2012;26(7):1971-6.
[27] Bhama AR, Obias V, Welch KB, Vanderwarker JF, Cleary RK. A comparison of laparoscopic and robotic colorectal surgery outcomes using the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) database. Surg Endosc 2016;30(4):1576-84.
[28] Dolejs SC, Waters JA, Ceppa EP, Zarzaur BL. Laparoscopic versus robotic colectomy: a national surgical quality improvement project analysis. Surg Endosc 2017;31(6):2387-96.
[29] Al-Mazrou AM, Chiuzan C, Kiran RP. The robotic approach significantly reduces length of stay after colectomy: a propensity score-matched analysis. Int J Colorectal Dis 2017;32(10):1415-21.
[30] Tam MS, Kaoutzanis C, Mullard AJ, Regenbogen SE, Franz MG, Hendren S, et al. A population-based study comparing laparoscopic and robotic outcomes in colorectal surgery. Surg Endosc 2016;30(2):455-63.
[31] Altieri MS, Yang J, Telem DA, Chen H, Talamini M, Pryor A. Robotic-assisted outcomes are not tied to surgeon volume and experience. Surg Endosc 2016;30(7):2825-33.
[32] Shapiro R, Keler U, Segev L, Sarna S, Hatib K, Hazzan D. Laparoscopic right hemicolectomy with intracorporeal anastomosis: short- and long-term benefits in comparison with extracorporeal anastomosis. Surg Endosc 2016;30(9):3823-9.
[33] Samia H, Lawrence J, Nobel T, Stein S, Champagne BJ, Delaney CP. Extraction site location and incisional hernias after laparoscopic colorectal surgery: should we be avoiding the midline? Am J Surg 2013;205(3):264-7.
[34] Trastulli S, Coratti A, Guarino S, Piagnerelli R, Annecchiarico M, Coratti F, et al. Robotic right colectomy with intracorporeal anastomosis compared with laparoscopic right colectomy with extracorporeal and intracorporeal anastomosis: a retrospective multicenter study. Surg Endosc 2015;29(6):1512-21.
[35] Speicher PJ, Englum BR, Ganapathi AM, Nussbaum DP, Mantyh CR, Migaly J. Robotic low anterior resection for rectal cancer: a national perspective on short-term oncologic outcomes. Ann Surg 2015;262:1040-5.
[36] Pai A, Marecik SJ, Park JJ, Melich G, Sulo S, Prasad LM. Oncologic and clinicopathologic outcomes of robot-assisted total mesorectal excision for rectal cancer. Dis Colon Rectum 2015;58(7):659-67.
[37] Cheong C, Kim NK. Minimally invasive surgery for rectal cancer: current status and future perspectives. Indian J Surg Oncol 2017;8(4):591-9.
[38] Jayne D, Pigazzi A, Marshall H, Croft J, Corrigan N, Copeland J, et al. Effect of robotic-assisted vs conventional laparoscopic surgery on risk of conversion to open laparotomy among patients undergoing resection for rectal cancer: the ROLARR randomized clinical trial. JAMA 2017;318(16):1569-80.
[39] Tyler JA, Fox JP, Desai MM, Perry WB, Glasgow SC. Outcomes and costs associated with robotic colectomy in the minimally invasive era. Dis Colon Rectum 2013;56(4):458-66.
[40] Delaney CP, Lynch AC, Senagore AJ, Fazio VW. Comparison of robotically performed and traditional laparoscopic colorectal surgery. Dis Colon Rectum 2003;46(12):1633-9.
[41] Trastulli S, Cirocchi R, Desiderio J, Coratti A, Guarino S, Renzi C, et al. Robotic versus laparoscopic approach in colonic resections for cancer and benign diseases: systematic review and meta-analysis. PLoS One 2015;10(7):e0134062.
[42] Araujo SE, Seid VE, Klajner S. Robotic surgery for rectal cancer: current immediate clinical and oncological outcomes. World J Gastroenterol 2014;20:14359-70.
[43] Trinh BB, Jackson NR, Hauch AT, Hu T, Kandil E. Robotic versus laparoscopic colorectal surgery. JSLS 2014;18(4):e2014.00187.
[44] Byrn JC, Hrabe JE, Charlton ME. An initial experience with 85 consecutive robotic-assisted rectal dissections: improved operating times and lower costs with experience. Surg Endosc 2014;28:3101-7.
[45] Trefis T. FDA approval of key instruments could boost sales of Intuitive Surgical's da Vinci Xi, http://www.forbes.com/sites/greatspeculations/2014/08/27/fda-approval-of-key-instruments-could-boost-sales-of-intuitive-surgicals-da-vinci-xi/#3c1d08aa3485; 2016 [accessed 28.02.16].
[46] Rashidi L, Neighorn C, Bastawrous A. Outcome comparisons between high-volume robotic and laparoscopic surgeons in a large healthcare system. Am J Surg 2017;213(5):901-5.

170

Handbook of Robotic and Image-Guided Surgery

[47] Bae SU, Baek SJ, Hur H, Baik SH, Kim NK, Min SB. Robotic left colon cancer resection: a dual docking technique that maximizes splenic flexure mobilization. Surg Endosc 2015;29:1303 9. [48] Sng KK, Hara M, Shin JW, Yoo BE, Yang KS, Kim SH. The multiphasic learning curve for robot-assisted rectal surgery. Surg Endosc 2013;27 (9):3297 307. [49] Protyniak B, Jorden J, Farmer R. Multiquadrant robotic colorectal surgery: the da Vinci Xi vs Si comparison. J Robot Surg 2018;12:67 74. [50] Carmichael JC, Keller DS, Baldini G, et al. Clinical practice guidelines for enhanced recovery after colon and rectal surgery from American Society of Colon and Rectal Surgeons and Society of American College of Gastrointestinal and Endoscopic Surgeons, ,https://www.fascrs.org/ sites/default/files/downloads/publication/clinical_practice_guidelines_for_enhanced_recovery.3.pdf.; 2017 [accessed 18.12.17]. [51] Panteleimonitis S, Harper M, Hall S, Figueiredo N, Qureshi T, Parvaiz A. Precision in robotic rectal surgery using the da Vinci Xi system and integrated table motion, a technical note. J Robot Surg 2018;12:433 6.

11 Robotic Radical Prostatectomy for Prostate Cancer: Natural Evolution of Surgery for Prostate Cancer?

Andrea Boni, Giovanni Cochetti, Morena Turco, Jacopo Adolfo Rossi De Vermandois, Gianluca Gaudio and Ettore Mearini
University of Perugia, Perugia, Italy

ABSTRACT
The prostate gland is a solid, unpaired, median organ situated in the male pelvis, under the bladder. Prostatic carcinoma (PCa) is frequently located in the peripheral portion of the gland, so preoperative imaging starts with endorectal ultrasound. However, the role of multiparametric magnetic resonance imaging, which guides prostatic biopsy, is now critical. The indications for robot-assisted radical prostatectomy (RARP) in PCa treatment are essentially the same as those for open and laparoscopic radical prostatectomy (RP). The robot allows the surgeon precise, less-invasive control of the tissues, enhanced by 3D vision, while working through small cutaneous incisions. Our previous extraperitoneal experience led us to maintain this approach in RARP. Critical anesthesiological questions during robotic surgery include the steep Trendelenburg position, restricted access to the patient, and the effects of CO2 insufflation; as a consequence, not every patient can benefit from RARP. Since the introduction of robotic surgery, improved knowledge of periprostatic neurovascular structures has permitted the identification of new specific anatomic landmarks and surgical planes between the prostate and the neurovascular bundles during nerve-sparing procedures, in a continuous attempt to reduce the detrimental effects on sexual function and urinary continence.
Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00011-6 © 2020 Elsevier Inc. All rights reserved.


11.1 Robotic surgical anatomy of the prostate

The prostate gland is a solid, unpaired, median organ situated in the male pelvis, under the bladder. It is similar to a chestnut, with its base on top and the apex pointing down and forward. The prostate is quite small, at about 15 cm3, and its normal weight is about 8-10 g. Its central part is crossed by the urethra, which is therefore called the "prostatic urethra." The prostate is also crossed by the ejaculatory ducts, originating from the seminal vesicles (SVs), where spermatic fluid from the testicles is stored before ejaculation. The prostatic fluid, constituting about 30% of the ejaculated fluid, is secreted at the level of the urethral crest, through the gland's outlet. The gland is covered by a connective tissue capsule. Through the capsule, the prostate is in contact with the bladder neck superiorly and the sphincter muscle of the urethra inferiorly. It lies anterior to the rectum, separated from it by Denonvilliers' (rectovesical) fascia and the rectal fascia; consequently, it is easily accessible by endorectal examination or imaging. In its anterosuperior part it is in contact with the areolar tissue and the bladder venous plexus in the Retzius (retropubic) space. On the posterolateral sides, the neurovascular bundle (NVB) runs within the periprostatic fasciae. The prostate develops from the 10th week of gestation, thanks to the SRY gene, responsible for male differentiation of the fetus. The gland originates from urethral evaginations, under androgenic hormonal stimuli (mainly dihydrotestosterone, derived from testosterone by 5α-reduction). Later, the evaginations merge with each other and undergo glandular differentiation until the 15th week of gestation, when the gland is completely formed and begins to produce its fluid.
The prostate gland can be anatomically divided into lobes: the anterior lobe (entirely in front of the urethra), the median lobe (between the urethra and the ejaculatory ducts), and two voluminous lateral lobes, right and left (behind a plane passing through the ejaculatory ducts). From a histological viewpoint the gland can be divided into five zones (Fig. 11.1):

- the central zone,
- the peripheral zone,
- the fibromuscular zone,
- the transitional zone, and
- the periurethral zone.

This classification, first described by McNeal in 1968, is very important because prostatic pathologies such as cancer, stenosis, and hyperplasia each arise preferentially in particular zones. It is also very useful for the precise interpretation of prostatic imaging, as described later.
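As a toy illustration of how the zonal classification drives image interpretation, the approximate zone-of-origin frequencies for adenocarcinoma quoted later in this chapter can be tabulated. This is a hypothetical sketch; the variable names and structure are illustrative, not taken from the text:

```python
# Hypothetical sketch: approximate zone of origin for prostatic
# adenocarcinoma, using the percentages quoted later in this chapter.
ADENOCARCINOMA_ZONE_SHARE = {
    "peripheral zone": 0.70,    # most frequent site; explorable on DRE
    "transitional zone": 0.20,  # anteromedial; also the site of hyperplasia
    "central zone": 0.05,       # rarely the primary site
}

# The remainder arises in other or mixed locations.
other = 1.0 - sum(ADENOCARCINOMA_ZONE_SHARE.values())
print(f"other/mixed share: {other:.2f}")
```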

FIGURE 11.1 Midline sagittal section of prostate and its anatomical location.


The gland is vascularized by the main branches of the internal iliac arteries: the inferior bladder arteries, the internal pudendal arteries, and the middle rectal arteries. All are located bilaterally in the NVB on the posterolateral side. Venous drainage is guaranteed by the Santorini plexus, situated in the endopelvic fascia behind the pubic symphysis. Lymphatic drainage is guaranteed by the internal and external iliac, obturator, and presacral nodes. The branches of the inferior hypogastric plexus form a curved plexus on the prostate surface, which is responsible for the innervation of the gland. All nerves run in the neurovascular bundle on the posterolateral side of the gland. The NVB is therefore constituted by:

- the venous prostate plexus;
- the terminal branches of the inferior bladder artery; and
- the nervous prostate plexus.

In 1982 Walsh surgically described the prostatic NVBs and related erectile dysfunction after radical prostatectomy (RP) to injury to the cavernosal nerves [1]. Improved knowledge of periprostatic neurovascular structures and the introduction of robotic surgery have permitted the identification of specific anatomic landmarks and surgical planes between the prostate and the NVB. The recognition of a spray-like distribution of the nerves on the lateral and anterolateral surfaces of the prostate has led to many technical modifications in recent decades. Recently, more attention has been paid to the presence of parasympathetic fibers on the anteromedial aspect of the prostate. Immunohistochemical studies on postprostatectomy specimens have demonstrated that the number of parasympathetic fibers (mainly related to erectile function) does not decrease from the base to the apex of the gland, despite a significant reduction in sympathetic fibers (mainly related to urinary sphincter activity), even if little is known about the functional effect of their preservation. However, many important structures that play a key role in sexual potency and urinary continence remain at risk of damage, even with the robotic approach.

11.2 Patients' preparation

11.2.1 Preoperative imaging modality for prostate cancer

Adenocarcinoma is frequently located in the peripheral portion of the gland (70%) and can therefore be explored by digital rectal examination. About 20% of adenocarcinomas are diagnosed in the anteromedial part of the gland, the so-called transitional zone, which is the portion typically affected by the hyperplastic process. Cancer is rarely found in the central zone of the gland (5%), but this area can frequently be invaded by a voluminous neoplastic process originating from the neighboring portions. Extension and localization of the cancer are categorized by the tumor, node, metastasis (TNM) score [1]. Preoperative imaging starts with endorectal ultrasound (Fig. 11.2). The role of multiparametric magnetic resonance imaging (mpMRI) is critical. The information offered by mpMRI complements the traditional imaging techniques, because mpMRI potentially improves the timely identification of clinically significant prostate cancer (Fig. 11.3). This imaging technique was used for staging prostate cancer even before its use for localizing prostate cancer and guiding prostate biopsies. mpMRI is useful to assess the presence or absence of significant cancer, to predict organ-confined (OC) disease, and to predict extracapsular extension (ECE) of cancer and SV invasion. Regarding lymph node involvement (LNi), mpMRI seems equivalent to computerized tomography (CT) and positron emission tomography (PET). It is preferable not to use CT or MRI for nodal staging in low-risk patients, because of their low sensitivity; they should be reserved for high-risk prostatic carcinoma (PCa) patients. However, mpMRI has been shown to be helpful to surgeons in preoperative planning: in 2012 McClure showed that mpMRI influenced the surgeon's initial plan, without direct evidence that changing the surgical plan resulted in a difference in margin status [2].

Regarding CT, this technique provides little information about the prostate, and its role in prostate cancer staging is limited because it shows little contrast between the gland and contiguous organs (such as the SVs). Nevertheless, it may be used for staging patients with high-risk PCa, to assess potential metastases in nodes, soft tissues, or bone. Even in these patients, however, its sensitivity for positive node detection is only about 35%. Bone scans, instead, are significantly more sensitive than CT in the diagnosis of bone metastases: many cases with normal CT imaging can show a positive radionuclide scan. Choline PET/CT does not have a clinically acceptable diagnostic accuracy for the detection of node metastases, because of its insufficient sensitivity. Fluorodeoxyglucose-PET (FDG-PET) is not generally recommended in the diagnosis/staging of clinically OC disease because of the overlap in FDG uptake between tumor, benign, and normal tissues. In addition, there are limited data about its use for initial PCa staging, due to the low avidity of FDG for the primary tumor [3]. Nevertheless, it may be useful


FIGURE 11.2 Endorectal ultrasound showing a right anterolateral hypoechoic area.

FIGURE 11.3 MRI image shows the right anterolateral lesion with restricted diffusion on the T2 map.

in those patients with a suspected poorly differentiated primary tumor, with a Gleason score >7 and a higher serum prostate-specific antigen (PSA) level. Recently, 68Ga-labeled prostate-specific membrane antigen PET/CT has shown potentially high sensitivity for node involvement. However, careful validation studies have not yet been performed.
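The risk-adapted use of staging modalities described above can be distilled into a short sketch. The thresholds below implement the standard published D'Amico risk-group definition, which this chapter cites but does not spell out, and the workup mapping paraphrases this section's guidance (CT-based nodal staging and bone scan reserved for high-risk patients); the function and label names are illustrative assumptions, not from the text:

```python
def damico_risk_group(psa_ng_ml, gleason, clinical_t):
    """Standard D'Amico risk groups (definition not given in this chapter)."""
    high_stage = clinical_t in ("T2c", "T3a", "T3b", "T4")
    if psa_ng_ml > 20 or gleason >= 8 or high_stage:
        return "high"
    if psa_ng_ml > 10 or gleason == 7 or clinical_t == "T2b":
        return "intermediate"
    return "low"  # PSA <= 10 ng/mL, Gleason <= 6, cT1-T2a

def staging_workup(risk):
    """Map a risk group to the staging studies suggested in Section 11.2.1."""
    workup = ["digital rectal examination", "endorectal ultrasound", "mpMRI"]
    if risk == "high":
        # The text reserves CT (nodes/soft tissue) and bone scan for high-risk PCa.
        workup += ["CT (nodal staging)", "bone scan"]
    return workup

print(damico_risk_group(6.5, 6, "T1c"))                    # low
print(staging_workup(damico_risk_group(25.0, 8, "T2c")))   # includes CT and bone scan
```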

11.2.2 Preoperative clinical assessment

The indications for robot-assisted RP (RARP) are essentially the same as those for open prostatectomy and laparoscopic RP, as confirmed by the Pasadena Consensus Panel (PCP, 2012). According to the European Association of Urology Guidelines 2017 recommendations, four categories of patients are eligible for RARP: (1) patients with low- and intermediate-risk localized PCa and a life expectancy of >10 years; (2) patients with high-risk localized PCa and a life expectancy of >10 years, only as part of multimodal therapy; (3) selected patients with locally advanced (cT3a) disease and a life expectancy of >10 years, only as part of multimodal therapy; and (4) highly selected patients with very high-risk locally advanced disease (cT3b-T4 N0, or any T N1), only as part of multimodal therapy [4,5]. Absolute contraindications are the impossibility of general anesthesia due to severe cardiopulmonary disease, and bleeding diatheses. Relative contraindications are glaucoma (due to a transient increase in intraocular pressure during the steep Trendelenburg position), prior transurethral resection of the prostate, prior pelvic or abdominal surgery, prior inguinal herniorrhaphy with mesh, and prior neoadjuvant hormonal therapy, because of the development of adhesions between tissues and distortion of the natural boundaries between organs. Obese patients represent a particular challenge in robotic surgery because of the potential cardiopulmonary compromise, especially after placement in a prolonged (sometimes steep) Trendelenburg position, the limited working space, and the inadequacy of instrumentation (e.g., devices that are too short). In these patients the operation is more difficult, but feasible.

The PCP also reviewed the indications for nerve-sparing RARP, considering the increasing knowledge about the neural architecture surrounding the prostate gland. The PCP recommends (1) full nerve-sparing surgery for sexually active and functional patients without comorbidities and with low-risk disease; (2) partial nerve-sparing surgery for preoperatively potent patients without comorbidities and with intermediate/high-risk localized disease; (3) minimal nerve-sparing surgery for patients with erectile dysfunction and/or comorbidities, or not interested in sexual activity; and (4) non-nerve-sparing surgery for patients with clearly extraprostatic disease. Certainly, RARP offers various advantages thanks to robotic technology: the wristed instruments reproduce the mobility of the human wrist and allow the surgeon to perform the challenging task of laparoscopic suturing. In addition, the magnified, three-dimensional, high-definition (HD) image provided by the stereo-endoscopic camera offers a view of the working space and periprostatic anatomy superior to that of conventional (two-dimensional) laparoscopy. In this way, during RARP the surgeon can identify the relevant neurovascular network, essential for nerve-sparing surgery, better and more easily.

Thanks to these advantages of the robotic approach, with a better definition of pelvic anatomy, a more refined operation can be performed and the indications for the technique can be widened (e.g., toward a nerve-sparing fashion). In addition, a minimally invasive approach could minimize adverse impacts on the surgical pelvic and peritoneal environment (e.g., fewer adhesions), allowing better multimodal therapy in high-risk PCa. All these advantages overcome the main limitations of traditional open surgery: the impossibility of visualizing important anatomic structures, the limitation of human wrist movement, and the difficulty of producing a precise and stable suture of the bladder to the urethra. Furthermore, patients can generally be discharged 2 days after robotic surgery, mainly because of the rapid return to diet and earlier mobilization. The urethral catheter is generally maintained for 7 postoperative days. Several preoperative predictive tools and nomograms are available [e.g., predicting ECE in prostate cancer (PRECE) and the Memorial Sloan Kettering Cancer Center (MSKCC) nomogram] that can influence and standardize the operative approach, such as the choice between nerve- and non-nerve-sparing surgery or the performance of pelvic lymph node dissection. PRECE is a predictive tool that characterizes the risk of ECE in prostate cancer based on several variables (such as patient age, clinical stage, PSA level, and Gleason score for each positive biopsy core) and provides a graphic that can support the surgeon in the choice of surgical strategy and in patient counseling. The tool provides the side-specific overall risk of ECE: on the graphic, five areas predict the risk of ECE at 1-4 mm from the prostate [6].

The MSKCC nomogram instead predicts the likely outcomes of prostate cancer treatment based on patient and disease characteristics (e.g., patient age, clinical stage, PSA level, Gleason score for each positive biopsy core, number of positive biopsy cores, and possible neoadjuvant treatment). This preoperative tool provides the risk of disease extension as probabilities of OC disease, ECE, and LNi. It also suggests the probability of cancer-specific survival (at 10 and 15 years) and of being progression-free (at 5 and 10 years) after RP. These nomograms, together with the clinical stage and the D'Amico risk groups, influence the surgical strategy and the indications for lymph node dissection.

11.2.3 Anesthesiological considerations

Critical anesthesiological issues during RARP include restricted access to the patient, the steep Trendelenburg position, the physiologic consequences of CO2 insufflation, hypothermia, fluid management, cerebral oxygenation, cardiopulmonary compromise, venous gas embolism, subcutaneous emphysema, and peripheral nerve and soft-tissue injuries. The operating room must be large enough to provide adequate space for all the equipment and to allow its movement, especially when emergency access to the patient is required, considering that undocking the robot is a multistep process that should require no more than 1 minute. For these reasons, meticulous anesthesiological attention in preparing the patient is necessary. The patient is placed supine in the Trendelenburg position, with the legs in a low lithotomy position. To avoid injury to the median and ulnar nerves, the arms and hands are padded at the sides. The face can also be protected with light foam padding to prevent any contact with the endotracheal tube. Careful padding is also recommended to protect every soft and vulnerable body part (e.g., the hips, shoulders, and knees), avoiding pressure


and neuromuscular injuries. In the transperitoneal (TP) approach, a steep Trendelenburg position provides an optimal operative field thanks to cranial displacement of the bowel, but maintaining this position for extended periods can lead to significant upper-airway consequences and increased intracranial and intraocular pressure. In addition, the constant CO2 insufflation carries an increased risk of hypercapnia and consequent acidosis. CO2 is the gas preferred for insufflation because of its high diffusion coefficient, which minimizes the risk of gas emboli [7]. In a steep Trendelenburg position lung compliance can decrease by more than 50%, with decreased functional residual capacity due to the pushing action of the abdominal contents on the diaphragm. Mean pulmonary arterial pressure and pulmonary capillary wedge pressure also decrease, while peak inspiratory pressure, plateau pressure, and end-tidal CO2 tension increase. The extraperitoneal (EP) approach, instead, allows the Trendelenburg angle to be reduced, because the intact peritoneum acts as a natural barrier and retractor against the bowel while still providing a good working space. A reduced Trendelenburg angle (less than 20 degrees) makes it possible to ventilate more of the lung by increasing the tidal volume. However, the EP technique is associated with significantly higher and more rapid CO2 absorption than the TP approach [8,9]. Moreover, the mechanism of gas absorption during the robotic procedure is not yet fully understood, so the greater CO2 reabsorption during EP surgery remains unexplained. It is probably related to direct intravascular uptake of CO2 due to disruption of microvascular or lymphatic channels, or to dissection along the fascial tissue during the development of the space of Retzius using a trocar-mounted balloon dilator device or the robotic camera through the blunt Hasson trocar.
In addition, subcutaneous emphysema may contribute to the total amount of CO2 absorbed. The application of low CO2 pressure could reduce the passage of gas into the bloodstream. We have compared the effects of low-pressure (about 8 mmHg) pneumo-Retzius (obtained by stable insufflation with the AirSeal system) and standard-pressure (12-15 mmHg) pneumo-Retzius at three different times during EP-RARP, using our surgical technique PERUSIA (posterior, extraperitoneal, robotic, under Santorini, intrafascial, anterograde) [10]. We found that PaCO2 was significantly higher at 120 minutes after starting CO2 insufflation in the standard-pressure condition. The results of our prospective randomized study suggest that EP-RARP can be safely performed using low CO2 pressure. In addition, by using a CO2 insufflation pressure below the central venous pressure (at about 10 mmHg), steadily maintained through the AirSeal system, CO2 absorption could be reduced without increasing the complication rate. These findings are particularly relevant for patients with significant chronic pulmonary disease, in whom the elimination of excess CO2 is less efficient even with an increased minute volume of ventilation. From an anesthesiological viewpoint, this advantage could extend the eligibility criteria for RARP to such patients. Hemodynamic changes during intraperitoneal CO2 insufflation have been described (e.g., increases in systemic vascular resistance, mean arterial pressure, heart rate, and central venous pressure), while few data exist for EP CO2 insufflation; however, these modifications seem comparable between the two approaches. Regarding the effect of pneumoperitoneum in the steep Trendelenburg position on cerebral oxygenation, the latter increases slightly, suggesting that the surgical procedure does not induce cerebral ischemia [11].
Another complication that can occur during RARP is pneumothorax. Most reported pneumothorax cases occur during TP surgery, probably due to congenital diaphragmatic defects. Although the pathophysiology has not been well defined, the authors suggested that CO2 passes through muscle or fascial planes, along the retroperitoneum, up to the mediastinum and then into the pleural space. This complication is rare in the EP approach, but possible [12]. For all these reasons, the role of the anesthetist is critical during RARP, and careful noninvasive monitoring (electrocardiogram, pulse oximetry, neuromuscular monitoring with train-of-four, body temperature) and invasive monitoring (intraarterial blood pressure, ventilatory parameters, arterial blood gas analyses) are necessary. Considering the effects of robotic surgery, not every patient can benefit from this approach. As reported previously, absolute contraindications include severe chronic pulmonary disease, severe heart conditions, hemodynamic instability, bleeding diatheses, and increased intracranial pressure.

11.2.4 Da Vinci robot and its docking

The robot has three major components:

1. The surgery console, outside the sterile field, from which the surgeon controls the instruments through two manipulators (like joysticks) and two pedals. Through the console the surgeon observes the operative field on a monitor connected to a 3D endoscope. The robot responds to the surgeon's actions but, thanks to the filtering of natural hand tremor and the reduction in the number of movements, it guarantees greater precision. Moreover, the large, three-dimensional view of the operative field allows the surgeon to distinguish the smaller anatomic structures, otherwise barely visible to the naked eye.

2. The patient cart, which carries the four robotic arms on which the EndoWrist instruments are mounted. These instruments have a wrist that can rotate 360 degrees and perform seven degrees of movement, responding to the surgeon's hand and wrist movements but with greater range. The surgeon's preferences are accommodated by the various robotic instruments, each with a specific surgical function: grasping, suturing, dissecting, or coagulating. The cart holds a monopolar hook (or monopolar scissors), a ProGrasp clamp, a Maryland-type bipolar forceps, round-tipped scissors, and a needle driver.

3. The visual cart, which houses a central processing unit and a full-HD video system that enhances the images thanks to a high-intensity light source and a camera control unit. Through the visual cart the surgical field is visualized by the endoscope and transmitted to the HD camera, projected with a 60-degree visual amplitude. With the Intuitive surgical endoscopes the operative field is magnified 6-10 times and, at the same time, the InSite vision system, with a 3D HD camera coupled to a processor, provides a three-dimensional image.

The surgeon sits comfortably in front of the console and looks at the 3D operative field. His hands grasp the main controls under the display, in a natural attitude, and the system converts the movements of the hands, wrists, and fingers into accurate, simultaneous movements of the surgical instruments. The da Vinci robot is thus an excellent instrument that allows the surgeon to have precise control, to perform a wide range of movements, to manipulate tissues excellently with 3D vision similar to that of open surgery and, at the same time, to work through small cutaneous incisions as in a minimally invasive technique. With 3D vision the space is explored better and more deeply than with the traditional laparoscopic technique, and the quality of the image and the precision of anatomic detail are improved. The six axes of the robotic arms allow the surgeon to perform a more natural dissection, including surgical maneuvers that are not possible in a laparoscopic approach because of instrument rigidity. In this way the dissection is accurate and meticulous. In addition, the system carries out 1 million safety checks per second, guaranteeing maximum safety and reliability during the procedure; audio and video feedback inform the surgeon and operating staff at any moment about the state of the system and of the patient. A large touchscreen improves communication within the team and allows freehand annotation on the projection of the operative field.

11.2.4.1 The Da Vinci robot Xi

The Da Vinci robot Xi is the most recent and most developed version. Compared with the previous one, this robot has an inverted working layout: the arms arrive from above, and their position is set by a computer following an anatomical model chosen by the surgeon according to the procedure. In addition, the arms move in concert, so that mutual conflict is reduced and the surgeon can move them in opposite directions. For this reason it is also possible to operate in different, opposite anatomical sites (superior and inferior abdomen) with the same robot position. A new system for mobilizing the arms, "Grab and Move," decreases the operating time. The Xi also retains all the Si model innovations, such as fluoroscopic vision, robotic staplers, and coagulation tools.

The patient is placed in a supine position with the arms and hands along the body, padded at the sides with egg-crate padding to avoid injury to the median and ulnar nerves. After the induction of general anesthesia, the patient is placed in the Trendelenburg position (20 or 30 degrees, according to the technique) with the legs in the lithotomy position. After skin disinfection, an 18 or 20 Ch urethral catheter is placed to decompress the bladder. The surgeon sits at the console; the assistant stands at the operating table, on the right, with the video cart at the patient's feet on the right. The surgeon therefore works with a three-dimensional, HD view of the operative field and complete control of the camera and robotic arm movements. The tableside assistant is responsible for docking/undocking the robot and uses classic laparoscopic instruments such as a suction-irrigation device, forceps, scissors, and Hem-o-Lok clips. In the TP approach, pneumoperitoneum is established using either a Veress needle inserted at the base of the umbilicus or an open blunt Hasson technique. A 12 mm blunt trocar for the 0-degree lens (8 mm if Xi) is initially placed slightly above or below the umbilicus. CO2 insufflation then starts, with pressure generally maintained between 12 and 15 mmHg. Then, under direct laparoscopic view, the other trocars are inserted (two 8 mm trocars for the robotic arms about 8 cm distant from the


camera, and two final trocars, 8 and 12 mm, about 7 cm distant from the others, laterally and inferiorly, about two to three fingerbreadths above the iliac crest: one for the fourth robotic arm on the left and one for the AirSeal system on the right, respectively). In the EP approach, after preparation of the space of Retzius, a blunt 12 mm trocar for the 0-degree lens (8 mm if Xi) is placed slightly (about 1 cm) below the umbilicus. Two other trocars are then placed for the right and left robotic arms, 5 cm from the camera. CO2 insufflation starts, with pressure generally maintained at 10 mmHg. The last two trocars (8 mm for the fourth arm and 12 mm for the AirSeal) are placed 5-7 cm from the previous ones, laterally, about two to three fingerbreadths above and medial to the anterior superior iliac spine (one for the fourth arm on the left and one for the AirSeal system on the right), under direct laparoscopic view and using laparoscopic devices. At this point the space of Retzius is completely developed and robot docking is carried out, with the patient side cart generally placed in front of the patient, between his legs. When using laparoendoscopic single-site surgery (LESS)-RP, a transumbilical incision is made and, after positioning of the 2.5 cm robotic port, docking is carried out with the patient side cart generally placed on the right. The first-generation da Vinci Surgical System platform (2000) offered the control of three robotic arms simultaneously (including that for the camera). The second, the da Vinci S Surgical System (2006), allowed the use of a fourth robotic arm for grasping and retraction. The third, the da Vinci Si HD Surgical System (2009), introduced two separate surgeon consoles, allowing simultaneous surgical action.
The latest generation, the da Vinci Xi Surgical System (2014), provides several advantages: multiquadrant surgery, table motion technology (which allows intraoperative modification of the patient's position without the need for robot undocking), a patient side cart with overhead arms, a higher definition 3D image, longer and thinner robotic arms (offering freer movements), 8 mm trocars (including that for the camera), and an interchangeable arm for the camera. The instruments used are reported in Table 11.1. An additional challenge for urological mini-invasive techniques has been the development of LESS, which has improved the aesthetic result and also decreased the morbidity of the procedures (e.g., pain, postoperative recovery, and hospital stay) by reducing the size and number of trocars. Recently, a new single-port robotic platform was developed, the da Vinci SP Surgical System (2014), and its evolution, the da Vinci EndoWrist SP, which guarantees surgeons the same precise movements as the traditional da Vinci Surgical System. It is made up of three articulated, flexible endoscopic instruments (the "Y" principle) and an articulating camera, inserted into the patient through a single robotic port. The side cart incorporates four robotic manipulators moving the instruments and camera.

TABLE 11.1 Instruments for robot-assisted radical prostatectomy (RARP).
Instrumentation for RARP (da Vinci S, Si HD, Xi Surgical Systems):
- EndoWrist Maryland bipolar forceps
- EndoWrist curved monopolar scissors
- EndoWrist ProGrasp forceps
- EndoWrist needle drivers
- InSite Vision System 0-degree lens
- Suction-irrigation device
- Two 12 mm trocars (for camera and AirSeal)
- Two 8 mm metal robotic trocars (three if using a fourth robotic arm)
- If Xi, all 8 mm trocars (including the camera) and one 10–12 mm trocar for AirSeal
- 18/20 Ch urethral catheter
- Hem-o-Lok clips
- 0 polydioxanone suture for the dorsal venous complex
- 2.0 polydioxanone suture for posterior reconstruction
- One 2.0 synthetic monofilament absorbable 30 cm Quill suture for the urethrovesical anastomosis, or two 3.0 absorbable 15 cm Biosyn V-Loc sutures (for two semicontinuous sutures)

Robotic Radical Prostatectomy for Prostate Cancer: Natural Evolution of Surgery for Prostate Cancer? Chapter | 11

179

11.3 Surgical approach to the prostate

Our experience with laparoscopic and, previously, with open extraperitoneal RP (ERP) has led us to maintain this approach in RARP. The extraperitoneal approach permits reduction of the Trendelenburg position because the gas pressure in the Retzius space pushes up the bowel and peritoneum, which function as a natural retractor, avoiding bowel displacement into the surgical field. We propose an innovative extraperitoneal bilateral full nerve-sparing RARP technique, beginning from a posterior and median plane, in order to preserve as much as possible the neurovascular structures lying outside the veil of Aphrodite (VA). The VA becomes a useful anatomic landmark during robotic dissection. The primary aim of this study was to investigate the feasibility and safety of PERUSIA RP; the secondary aim was to evaluate its oncologic and functional results. There are two different approaches to RARP, transperitoneal (TP) and extraperitoneal (EP), each with pros and cons. The TP approach is the most common, owing to the greater working space and familiar landmarks of the pelvis. In our center, however, the EP approach is used more often because of its advantages.

11.3.1

Extraperitoneal approach

A 1.5 cm incision is made immediately above the umbilicus and dissection is performed down through the anterior sheath of the rectus muscle of the abdomen (Fig. 11.1). At this point, a space immediately ahead of the posterior rectus sheath is created. There are different methods for preparation of the space of Retzius:

- Veress needle (with the risk of injury to peritoneal structures).
- Trocar-mounted balloon dilator device, inserted in the preperitoneal space anterior to the posterior rectus sheath and advanced down to the pubis along the midline. Using a 0-degree lens in the trocar, 500 mL of air is insufflated, developing the space of Retzius under direct laparoscopic vision. Nevertheless, this method has been linked to several vagal reactions, leading to cardiac arrest due to manipulation of pelvic structures by the balloon.
- Blunt Hasson trocar, using the camera for creation of the space of Retzius.
- Digital preparation (Fig. 11.4), preferred because of the lower risk of peritoneal injuries.

After preparation of the space of Retzius, CO2 insufflation starts and the other trocars are placed, as already described (Figs. 11.5–11.11).

FIGURE 11.4 Underumbilical vertical incision to expose the muscularis fascia.

11. Robotic Radical Prostatectomy

These devices provide proper triangulation and mobility. This approach offers some advantages, such as a reduction in complications related to trocar positioning (e.g., bowel injuries), less loss of CO2 pressure, and less time spent positioning trocars. However, there are also disadvantages, most notably the lack of an assistant port, which can nevertheless be placed electively. In 2014 Kaouk and colleagues described the first clinical application of LESS-RP using the da Vinci SP Surgical System, performed in young, nonobese patients with a medium-sized prostate and low-risk PCa. They reported no conversions, and complications were mostly minor. The mean operative time was slightly higher than for the traditional robotic approach. A total of 18.2% of patients had positive margins; the 30-day continence rate and the rate of erectile function recovery were 55% and 63.3%, respectively. These results are promising, but further evaluation is required.

FIGURE 11.5 Final configuration of the extraperitoneal approach.

FIGURE 11.6 The correct port position to create a pneumoretroperitoneum (Retzius).

FIGURE 11.7 Trendelenburg position.

Both in TP and in EP, prostatectomy can be performed using an anterograde or a retrograde procedure. The anterograde technique begins with a surgical approach to the bladder neck, continuing toward the prostate to reach the apex, while the retrograde approach follows the inverse surgical sequence. The literature contains no studies demonstrating significant differences between the two techniques in terms of operative time and patient outcomes. In 2006 Atug et al. compared the approaches and found differences in operative

FIGURE 11.8 Pneumo-Retzius induction after camera and first two trocars position.

FIGURE 11.10 Trocars’ tent effect to improve surgical space.

FIGURE 11.9 Right side port positioning using a video laparoscopic approach.

FIGURE 11.11 Docking of robot is complete.

time (slightly greater in the TP approach, also in the case of lymphadenectomy), in the time required to fashion the vesicourethral anastomosis, and in the estimated blood loss (lower in the EP), but none of these differences was statistically significant. The length of hospital stay and the rate of major complications were also similar between the two groups. In 2004 Hoznek et al. demonstrated statistically significantly shorter operating times with the EP approach. There are many advantages related to the EP technique. There is a lower risk of intra-abdominal complications, such as damage to the intestinal loops, thanks to the natural barrier offered by the intact peritoneum. This natural boundary can be exploited in obese patients, so that the loops, confined in the peritoneal cavity, do not invade the operative field. It also allows a Trendelenburg position with a smaller angle (Fig. 11.7). The EP approach is preferred in patients with a history of abdominal surgery. However, in cases of previous extraperitoneal hernioplasty with mesh placement, a TP approach is preferable, since the Retzius space may be obliterated. On the other hand, the TP approach allows the creation of a wider operating field, which is especially important during lymphadenectomy and fashioning of the vesicourethral anastomosis. Another advantage of the TP technique is the negligible risk of lymphocele in the case of pelvic lymphadenectomy, because the lymph is absorbed directly in the peritoneal cavity. However, the greater incidence of paralytic ileus and chemical peritonitis, due to the direct contact of urine, blood, and carbon dioxide with the peritoneum, is considered an important disadvantage. Finally, the EP approach has been associated with increased CO2 uptake during the creation and maintenance of the pneumo-Retzius, resulting in an increased risk of hypercapnia and acidosis.
Although significant hemodynamic alterations after prolonged CO2 insufflation have been recognized with both techniques, they proved to be irrelevant from a purely clinical point of view. In 2015 Dal Moro et al. compared the anesthesiological effects of the two approaches [9]. They concluded that the TP approach is preferable to the EP, thanks to lower CO2 reabsorption. However, further studies are needed to define the anesthesiological implications and the intraoperative technical measures, such as the use of low CO2 insufflation pressures and a lower degree of inclination in the Trendelenburg position. In general terms, the laparoscopic experience acquired in performing RP was useful in the development of the robotic technique.

It must be considered that the preparatory phase of the operating space in RARP is performed laparoscopically and that the surgical steps overlap with those of laparoscopic prostatectomy, although the robotic approach has led to a technical evolution in terms of precision, visibility, and surgical comfort, which has favored the choice of the EP. Previously, the laparoscopic technique favored a TP approach, as it offers greater mobility and a better operative angle than the EP. Moreover, as stated by some studies, the surgeon's previous personal laparoscopic experience seems to shorten the learning curve of the robotic procedure.

11.3.2 Transperitoneal approach

Generally, an anterior approach is used, in which the space of Retzius is accessed and the prostate, SVs, and vasa are dissected. In the retrovesical (or posterior) approach, by contrast, the SVs and vasa are initially approached and dissected behind the bladder, before the space of Retzius is entered. After abdominal access and development of the pneumoperitoneum, the pelvis is inspected and any adhesions present are lysed. First, the space of Retzius must be developed. The peritoneum is incised transversely in an inverted-U fashion using monopolar scissors, extending the incision on both sides to the level of the vasa, lateral to the medial umbilical ligament. The presence of prevesical fatty tissue confirms the proper plane of dissection. Applying posterior and cranial (cephalad) traction on the urachus, the space of Retzius is accessed through dissection along an avascular plane toward the intersection of the vas deferens and the medial umbilical ligament, ensuring optimal mobility of the bladder, which allows a tension-free vesicourethral anastomosis. At this point the boundaries are the pubic bone superiorly, the anterior aspect of the bladder and prostate gland inferiorly, and the puboprostatic ligaments and endopelvic fascia.

11.3.3 Retzius-sparing approach

The Retzius-sparing approach, also called the posterior or Bocciardi approach, was first described in 2010 and represents an innovation compared with the classic anterior approach handed down from open surgery. Despite the improvements brought by robotic surgery, the endopelvic fascia, NVBs, puboprostatic ligaments, any accessory pudendal arteries, and the Santorini plexus, all of which contribute to the maintenance of potency and continence, remain at risk of damage with the robotic approach. The posterior technique avoids all of these anatomic structures by passing through a posterior plane, the Douglas space, previously employed only in the transcoccygeal approach. It uses access through the Douglas space alone, without opening the anterior compartment or the endopelvic fascia, and without the need to dissect the Santorini plexus. The originality of the technique consists in using a completely posterior approach, without opening the Retzius space, to perform all the steps from isolation of the SVs to the anastomosis. It allows better preservation of the structures involved in the mechanisms of potency and continence. Preservation of the pubovesical complex gives better stability of the vesicourethral complex, ensuring a more rapid recovery of continence with optimal oncologic results. The frontal approach to the NVBs may cause less trauma and permit better execution of nerve sparing. In more detail, the Bocciardi approach is commonly performed with the following steps: incision of the parietal peritoneum at the anterior surface of the Douglas space, with consequent isolation and incision of the SVs and vas deferens. Then the Denonvilliers' fascia is separated from the posterolateral surface of the prostate in an antegrade direction, maintaining a completely intrafascial plane until the prostatic apex is reached.
The bladder neck is isolated and sectioned, and the anastomosis is performed after placing four short cardinal stitches to evert the mucosa and to identify the bladder neck orifice easily. The anterior surface of the prostate is isolated from the Santorini plexus without any incision. After complete isolation of the apex, the urethra is incised. The anastomosis is carried out using a continuous suture starting from the 3 o'clock position. Once the anterior stitches have been passed into the bladder neck, the bladder catheter is placed and the anastomosis is completed. The parietal peritoneum is finally closed [13]. This new approach presents several theoretical advantages over the traditional technique that could yield better functional results [14,15]. It allows completely intrafascial prostatectomies to be performed, preserving the full anatomic integrity of Aphrodite's veil, which contains the NVBs, unlike the traditional technique in which the higher aspect of the veil has to be opened. It also enables the surgeon to avoid the Santorini plexus, the pubourethral ligaments, and any accessory pudendal arteries, ensuring less blood loss. Moreover, the procedure is carried out through a smaller surgical incision, at the level of the Douglas space, of no more than 4 cm, compared with the traditional approach, which requires a large incision of the anterior surface of the bladder, approximately 15 cm long, shaped like an inverted U running from one umbilical ligament to the contralateral one, passing through the urachal ligament.

The operative space is very narrow compared with the traditional approach. The bladder must be completely empty to prevent its posterior wall from falling into the surgical field, with consequent traction maneuvers resulting in damage to the bladder wall. Moreover, the ureters run laterally in the posterior bladder wall, so attention must be paid during isolation. Finally, the angle of vision is very unfavorable, especially during the anastomosis, because the sectioned bladder neck tends to retract. It is also important to specify that this approach, like the standard technique, allows an intra-, inter-, or extrafascial dissection of the NVBs to be developed, depending on the clinical stage of the disease. Comparing anterior and posterior RARP results [16], over a median follow-up of 1 year there seem to be no significant differences in the rate of urinary continence recovery or urinary function scores, in the oncologic outcomes, or in the rates of postoperative complications. However, a better rate of erections sufficient for penetrative intercourse has been observed in the posterior RARP group (83.7% vs 72.4%). Bocciardi's group, after more than 1200 Retzius-sparing prostatectomies, affirmed that the widest gap with standard RARP is seen in nonnerve-sparing cases. Extrafascial dissection worsens urinary continence results, which are nonetheless superior to what might be expected from wide demolition using standard RARP. In fact, 70% of nonnerve-sparing patients can be considered continent 1 week after surgery. Moreover, the same patients have an unexpectedly high 1-year potency rate (21%) [15].

11.4 Tips and tricks

11.4.1 Bladder neck

A U-shaped incision is made on the bladder neck, preserving the circular fibers (Figs. 11.12 and 11.13).

11.4.2 Approach to seminal vesicles

A perpendicular approach to the medial aspect of the SVs, which are mobilized from their lodge, maintains a medial avascular plane, avoiding damage to the proximal neurovascular plate. The presence of a proximal and an accessory neurovascular plate, described in the trizonal model of periprostatic anatomy, and the histological demonstration that the fascia of the SVs extends and scatters laterally into the NVB justify our median approach to the seminal pathway (Figs. 11.14 and 11.15).

11.4.3 Anterograde intrafascial dissection

After incision of the Denonvilliers' fascia, an athermal dissection is performed laterally, enlarging the retroprostatic space toward the prostatic pedicles. A full nerve-sparing technique is used, beginning from the midline, where nerve fibers are not represented, and proceeding toward the posterolateral side in an intrafascial plane. Some authors have reported the presence of neurovascular structures within the VA, even in its higher and anterior aspect. With respect

FIGURE 11.12 The bladder neck is fully divided.

FIGURE 11.13 The catheter is still held on traction toward the abdominal wall.

FIGURE 11.15 From the posterior plane, find some of the pedicle in line with the posterior seminal vesicles.

FIGURE 11.14 The Denonvilliers’ plane is just below the vas deferens.

to Vattikuti's experience, we preserve the VA completely, not only laterally but also anteriorly, following the virtual space between the prostatic lobe and the surrounding fascia through an athermal detachment. In this way we minimize manipulation of, and damage to, the NVBs, also decreasing bleeding. Furthermore, the anterograde dissection allows us to better identify the VA before it becomes thinner anteriorly, so the prostate gland can be shelled out from the overlying VA and dorsal vascular complex (DVC). Using this avascular plane, we preserve the DVC and its hemodynamic function, reducing bleeding and consequently the use of thermal energy (Figs. 11.16–11.20). Patel et al. proposed a standardized nerve-sparing grading system based on intraoperative visual cues, using a landmark artery to delineate the course of the NVB in a retrograde manner. In our experience this visualization is not necessary, because the NVB remains outside the plane of dissection.

FIGURE 11.16 Images show left anterograde intrafascial dissection of prostate to preserve NVB. NVB, Neurovascular bundle.

FIGURE 11.17 How to preserve NVB. NVB, Neurovascular bundle.

FIGURE 11.18 Ligation of NVB using a clip in extrafascial approach. NVB, Neurovascular bundle.

FIGURE 11.20 Right anterolateral dissection of prostate leading to Santorini plexus preservation.

FIGURE 11.19 Left intrafascial anterior dissection.

FIGURE 11.21 Lateral dissection of the apex.

11.4.4 Preservation of the anterior periprostatic tissue

Following the medial aspect of the VA, we reach the anterior periprostatic tissue and detach it bluntly from the fascia, without damaging the accessory neurovascular plate, which is a neural pathway for both the cavernosal and sphincteric systems. Preservation of the VA avoids dissection of the endopelvic fascia, which remains outside the plane. Preservation of the anterior compartment has already been described by Bocciardi et al. [15]. Compared with their transperitoneal technique, we use an EP approach with a lower degree of Trendelenburg, without the need to open the Douglas space to reach the Denonvilliers' fascia (Fig. 11.21).

11.4.5 Preservation of the Santorini plexus

The DVC is neither ligated nor cut; an anterior avascular plane is developed to identify the prostato-urethral junction, maximizing the urethral length. Some authors have proposed reconstruction of the periurethral muscular fascial structures in order to avoid caudal traction on the urethral sphincter complex. The PERUSIA technique does not include any muscular fascial reconstruction, because the preservation of both the endopelvic fascia and the VA avoids urethral retraction and damage to the anatomic structures that physiologically support the external sphincter. Moreover, the anterograde dissection allows better visualization of the prostato-urethral junction and therefore a more careful apical dissection, resulting in improved preservation of the urethral length. Meticulous dissection also reduces injury to the pudendal nerve branches to the rhabdosphincter, which lie in close proximity to the prostate apex and endopelvic fascia (Figs. 11.22 and 11.23).

11.4.6 Urethrovesical anastomosis

This is performed in a semicontinuous fashion using a Quill suture, without posterior reconstruction (Figs. 11.24 and 11.25).

11.5 Complications

The complexity of the prostatic vascular and nervous anatomy has made it difficult to establish the role of each structure in sexual potency and urinary continence, and this underlies many of the recent technical modifications in surgical strategy. Compared with the other treatment options recommended for patients with localized disease, such as radiotherapy or focal treatment, RARP suffers the downside of a detrimental effect on sexual function. After RARP, erectile dysfunction and urinary incontinence are definitely major sources of patient anxiety, negatively affecting quality of life (QoL). Recovery of sexual and urinary function is well documented to be time-dependent, with maximal

FIGURE 11.22 Urethral incision.

FIGURE 11.23 Bilateral preserved NVB coursing toward the perineum and preservation of the Santorini plexus. NVB, Neurovascular bundle.

urinary recovery requiring up to 18 months and maximal sexual recovery often taking even longer. Urinary and sexual function are often considered the most important QoL domains for patients after prostatectomy; thus slower recovery can cause considerable anxiety. Likewise, sexual recovery after RP is complex and multifactorial, influenced by age, baseline potency, comorbidities, and surgical technique. In addition, there is an interplay between sexual and urinary recovery, further complicating the evaluation of postoperative function. A significant migration toward earlier stages of PCa, mainly due to the widespread diffusion of PSA-based diagnosis, has led to an increasing number of younger patients undergoing RARP [18]. We know that a fast return to sexual activity

improves the rate of return of normal, spontaneous erectile function, promoting postoperative erectile function recovery after nerve-sparing surgery [17,19]. Since a positive interplay between quick recovery of urinary continence and return to sexual intercourse has been established, the temporary presence of urinary leakage may limit a patient's self-assurance [20–23]. In addition, the recent diagnosis of PCa may result in a remarkable loss of desire for sexual activity, affecting especially younger patients' QoL. However, many older patients, even those with adverse surgical characteristics, also expect early and complete continence and potency recovery after nerve-sparing RARP [24,25]. Experiencing incontinence reduces self-confidence during intercourse, which explains why many patients place potency recovery after the achievement of full continence in their QoL evaluation. We can speculate that a late return to full continence is associated with a detrimental psychological effect on potency recovery. All patients are offered urinary and sexual rehabilitation through our survivorship program. This includes oral first-line phosphodiesterase-5 inhibitors (PDE-5i) as well as more aggressive penile rehabilitation interventions. The use of PDE-5i in penile rehabilitation was introduced to protect the cavernosal smooth muscle in order to improve the long-term potency rate, but its high cost and discontinuation of therapy have reduced its applicability. Its use could be suggested to patients who suffer from incontinence in order to improve their self-assurance, helping them to experience early sexual activity. Data on sildenafil and dosing schedules, as well as on intraurethral suppositories or intracorporeal injections, were unavailable in this data set, so our evaluation is limited in terms of the specific factors influencing sexual recovery.
Likewise, while all patients are offered pelvic floor physical therapy instruction as part of our survivorship program, the exact level of participation is not available. Thus the role of this treatment in our study is unclear, since we could not assess patient recovery stratified by the use of physical therapy rehabilitation. The confounding factors introduced by survivorship programs deserve mention, although our data represent a heterogeneous population of men in a fluid, personalized survivorship program; findings may therefore be more generalizable to men enrolled in such programs. Many preoperative characteristics have been demonstrated to negatively affect early continence. Age is one of the most powerful independent risk factors for postoperative incontinence, and exploiting robotic technology to improve functional outcomes in older patients should therefore be a priority. Achievement of the main functional outcomes after RARP is definitely also influenced by surgical strategy. Nevertheless, immediate urinary continence is poorly reported in the urologic literature, concerning small cohorts undergoing innovative techniques of periapical preservation aimed at reducing surgical damage to the urethral sphincter. Furthermore, there is a major confounding factor, represented by the classification of safety pad users. Defining postoperative continence is not simple, and sometimes the pad definition can be imprecisely related to urinary leakage [26]. For these reasons we considered incontinent those nonpad-wearing patients who reported an episode of urinary leakage during the previous 4 weeks, as well as security pad users, because of their fear of losing urine during the day and consequently during intercourse. Age at surgery is one of the documented risk factors for both these side effects, together with body mass index, comorbidities, and, last but not least, the surgical technique.
Recently, some authors have suggested that QoL is particularly associated with satisfaction and regret after RARP because of the higher expectations from a less-invasive procedure. In the active surveillance era, therefore, urologists should pay specific attention to accurately reporting the risks and benefits of surgical treatment, especially in low-risk disease. The cumulative literature describes the number of parasympathetic fibers as not decreasing from the base to the apex of the gland, in contrast to a significant reduction in sympathetic fibers, even if little is known about the functional effect of their preservation. Great attention should be paid to preservation of the anterior compartment, together with maximization of urethral length, in an effort to obtain good functional outcomes. Thus preservation of the VA, without ligation of the DVC, aims to maintain not only nervous but also periprostatic vascular integrity. In our opinion, preservation of the DVC represents the best way to maintain its hemodynamic function and to reduce sphincter damage during its manipulation. The better preservation of many periprostatic structures allowed by robotic technology has reduced the overall time to continence after RARP. Erectile function recovery after RARP has been widely reported, with a recent meta-analysis showing rates at 12 and 24 months of 54%–90% and 63%–94%, respectively. Full nerve-sparing techniques have been demonstrated to guarantee better functional results than partial or nonnerve-sparing ones, with reported potency rates ranging from 62% to 90% at 12–24 months of follow-up [25–29]. Furthermore, nerve-sparing surgery has also been demonstrated to offer better results in terms of quicker recovery of continence, up to a plateau at 12 months [23–25].
The passage from a bundle-like structure to a spray-like arrangement of the NVB has prompted many technical modifications, with improved functional and oncologic results reported. Indeed, it has been reported that preservation of the NVB itself does not seem to be important for the continence rate; rather, a conservative surgical approach plays the leading role. In fact, when a subjective nerve-sparing scoring system was applied, we observed that preservation of more nerve tissue resulted in incrementally shorter times to potency recovery.

FIGURE 11.24 Urethrovesical anastomosis.

The time needed to regain potency is an important patient-specific consideration, and it could also help physicians preoperatively to better inform patients about the expected time to return to successful sexual intercourse (SSI). We categorized patients who reported at least one SSI in the month preceding clinical evaluation. Our definition of potency required patients to achieve an IIEF-5 score ≥17, together with an erection hardness score (EHS) of at least 3. In fact, while the first questionnaire alone has been demonstrated to be not always sufficient, especially in oncologic patients, the latter seems to be a useful tool that correlates reliably with SSI, including during PDE-5i therapy [30]. As previously described, a fast return to sexual activity improves the rate of return of normal, spontaneous erectile function after nerve-sparing surgery. We believe that patients who achieve immediate full continence attempt sexual intercourse more readily.

References

[1] Sobin LH, Fleming ID. TNM classification of malignant tumors, fifth edition (1997). Union Internationale Contre le Cancer and the American Joint Committee on Cancer. Cancer 1997;80(9):1803–4.

FIGURE 11.25 View of the completed anastomosis.


12
Robotic Liver Surgery: Shortcomings of the Status Quo

Andrea Peloso, Nicolas Christian Buchs, Monika Hagen, Axel Andres, Philippe Morel and Christian Toso
University of Geneva, Geneva, Switzerland

ABSTRACT
Robotic surgery has emerged as a promising minimally invasive surgical technique, with the ability to perform complex hepatobiliary surgeries and achieve outcomes similar to open surgery, but with the advantages of a minimally invasive approach. Recent advances in computer-assisted image-guided surgery are proposed to overcome some of the associated limitations of robotic surgery by preoperatively planning the surgical strategy with a patient-specific virtual resection plan, which can be directly transferred to the operating room in an augmented reality setup. Using new technologies, we could theoretically improve preoperative planning, enhance the surgeon’s skill, and simplify complex procedures. Specifically, using an optical tracking system calibrated on the patient, we have used image-overlay navigation to locate lesions during robotic liver resections. Based on our experience, we suggest that robotic image guidance can improve the surgeon’s orientation during the operation, increasing the accuracy of tumor resection. The indications for robotic-assisted liver resection will increase in the coming years. This chapter reviews the recent development of robotic hepatic surgery, discussing its advantages and disadvantages in daily practice. The most common surgical procedures are described and, finally, the evolution of robotic surgery, one of the hottest fields in medical technology, is detailed.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00012-8 © 2020 Elsevier Inc. All rights reserved.


12.1 Introduction: the development of robotic-assisted minimally invasive liver surgery

In the last 20 years, since the first successful procedure [1], the introduction of minimally invasive surgery (MIS) has revolutionized the surgical landscape, both for general surgery procedures [2,3] and for surgical subspecialties, including urology [4], gynecology [5], and thoracic surgery [6]. Laparoscopy has bypassed important limits intrinsic to the open approach. Indeed, minimally invasive procedures cause less postoperative pain and discomfort, are associated with a shorter hospital stay and a quicker return to normal activities, and require smaller incisions [7]. Other important documented advantages are reduced tissue trauma and blood loss, without sacrificing the final oncological outcomes [8,9]. On the other hand, MIS has several disadvantages and requires a high degree of spatial resolution, dexterity, and specific technical skills. Principally, because the surgeon must transition from the seven degrees of freedom of the human wrist to the four degrees of freedom of rigid laparoscopic instruments, a steep learning curve is usually necessary. With the introduction of robotic surgical platforms, such as the da Vinci Surgical System (Intuitive Surgical Inc., Sunnyvale, California, United States), the visual and ergonomic limitations of laparoscopy can be overcome [10]. The da Vinci Surgical System, launched in late 1998, is now in its 11th version and is the first and currently the most-used computer-enhanced robotic surgical system (www.intuitivesurgical.com). The most modern version of the da Vinci Surgical System is composed of three essential elements: the surgeon console; the patient-side cart equipped with EndoWrist instruments; and the three-dimensional (3D) HD Vision system. This core technology permits physical separation between the patient and the surgeon, who sits comfortably at the console.
The EndoWrist instruments recreate the human wrist’s seven degrees of freedom, permitting precise dissection and intracorporeal suturing, as well as eliminating the “fulcrum” effect intrinsic to all laparoscopic procedures. These advantages are also supported by tremor-filtration software [11]. Finally, the vision cart is furnished with a straight (0-degree) or angled (30-degree) endoscope capable of offering a tridimensional image with up to 10 times magnification [12]. The 3D image has appeared particularly useful when performing nonlinear hepatic resections, such as curved parenchymal transection or hilar dissection [13]. On the other hand, important limitations still exist, such as the loss of haptic feedback (generally compensated for by the surgeon’s capacity to “understand” how much tension is applied to a structure or suture [14]), the inability of the system to work on multiple areas at the same time, and the necessity for a skilled bedside assistant for quick instrument changes. Finally, the high cost still represents a major drawback; however, with the expiration of the da Vinci patent (October 2016), more convenient and cheaper options are becoming available. MIS (whether laparoscopic or robotic) has been applied to various general surgery interventions, from minor procedures, such as cholecystectomy and gastric fundoplication, to major surgeries (colectomy and esophagectomy). Recently, in a large review analyzing more than 9000 cases, Ciria et al. showed that MIS can also be successfully applied to hepatic surgery [15]. These authors confirmed the growing safety of laparoscopic liver resection (in high-volume centers and for selected patients), associated with better short-term outcomes compared to open surgery.
Specific ergonomic controls, able to mimic open-surgery movements, and improved 3D visualization allow the robotic system to outperform the conventional laparoscopic strategy; its application to hepatobiliary surgery therefore seems inevitable. Over the last decade, important case series on robotic liver resection have been published for the treatment of colorectal metastases and hepatocellular carcinoma (HCC) [16-28] (Table 12.1). Additionally, the rate of publication of reports addressing laparoscopic and robotic-assisted liver resection continues to grow (Fig. 12.1). Presently, for selected cases, the robotic-assisted minimally invasive approach represents a tangible option for hepatic resection. This strategy must be considered an alternative to the “standard” and well-accepted open techniques, as well as to laparoscopic methods. Open, laparoscopic, and robotic-assisted hepatic resections for oncological indications have been compared, but the data are incomplete because of a lack of randomized trials. As the first minimally invasive approach, laparoscopy was analyzed and compared to open surgery. Initially, this was met with some skepticism from “purist” open surgeons, owing to concerns about postoperative bile leakage, incomplete resection margins, intraoperative bleeding control, and consequent blood loss.


TABLE 12.1 Robotic liver surgery: clinical series.

Study                      Year   No. of     Type of lesions (N)     Resection (%)
                                  patients   CRM    HCC    Other     R0     R1
Chen et al. [16]           2016   183        4      112    67        100    0
Giulianotti et al. [17]    2011   70         16     18     36        100    0
Choi et al. [18]           2017   69         12     36     21        100    0
Tsung et al. [19]          2014   57         21     6      10        100    0
Wu et al. [20]             2014   52         1      38     13        NR     NR
Goh et al. [21]            2018   43         11     19     13        NR     NR
O’Connor et al. [22]       2017   42         5      30     7         90     10
Lai et al. [23]            2013   41         0      41     0         93     7
Troisi et al. [24]         2013   40         24     3      13        97     3
Chan et al. [25]           2011   27         7      13     7         NR     NR
Spampinato et al. [26]     2014   25         11     2      12        100    0
Casciola et al. [27]       2011   23         14     3      6         NR     NR
Morel et al. [28]          2017   16         8      3      5         100    0

CRM, Colorectal metastases; HCC, hepatocellular carcinoma.

FIGURE 12.1 Robotic liver surgery: statistics on refereed publications. Searches of NCBI-PubMed titles or abstracts were conditioned to retrieve publications from the last 22 years.

In 2012 Alhomaidhi et al. [29] conducted an extensive systematic review of the medical literature between 1990 and 2010, comparing 751 open right hepatectomies to 4207 right laparoscopic hepatectomies. These authors noted a reduced average operative time with the laparoscopic approach, as well as a reduced mean blood loss (260 mL for the laparoscopic approach vs 1290 mL for open surgery). These data were confirmed by Franken et al. [30] in a case-matched study, showing that the laparoscopic approach required fewer blood transfusions and caused less blood loss (237 mL for the laparoscopic group vs 387 mL for the open group). The postoperative hospital stay was also shorter in the laparoscopic group. Resection margins were comparable between the two groups.
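As a rough illustration of the margin data in Table 12.1, the series that report margin status can be pooled. The sketch below back-calculates per-study R0 case counts from the cohort sizes and reported percentages, which is an approximation for illustration only, not published patient-level data; the studies marked "NR" are excluded.

```python
# Illustrative pooling of R0 margin rates from Table 12.1.
# Counts are approximated by applying each study's reported R0 percentage
# to its cohort size (studies reporting "NR" are omitted).

series = [
    # (study, year, n_patients, r0_percent)
    ("Chen et al. [16]",        2016, 183, 100),
    ("Giulianotti et al. [17]", 2011,  70, 100),
    ("Choi et al. [18]",        2017,  69, 100),
    ("Tsung et al. [19]",       2014,  57, 100),
    ("O'Connor et al. [22]",    2017,  42,  90),
    ("Lai et al. [23]",         2013,  41,  93),
    ("Troisi et al. [24]",      2013,  40,  97),
    ("Spampinato et al. [26]",  2014,  25, 100),
    ("Morel et al. [28]",       2017,  16, 100),
]

total = sum(n for _, _, n, _ in series)                      # pooled cohort size
r0_cases = sum(round(n * pct / 100) for _, _, n, pct in series)
pooled_r0 = 100 * r0_cases / total
print(f"{total} patients, {r0_cases} R0 resections, pooled R0 rate = {pooled_r0:.1f}%")
```

Such a naive weighted average ignores between-study heterogeneity; a formal meta-analysis (as in Montalti et al. [36]) would model that explicitly.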


The superiority of the laparoscopic approach over the open technique has been well documented [9,31-35], with better exposure of the hepatic region, shorter postoperative stay, smaller incision size, reduced blood loss, and fewer postoperative complications. Comparisons between the laparoscopic and robotic approaches are complicated by differences in robotic technology and the varying levels of expertise. Tsung et al. [19] conducted a case-matched comparison between robotic (n = 57) and laparoscopic (n = 114) hepatic resections. In this large series, robotic and laparoscopic liver resection showed analogous safety and feasibility, with no significant benefit of the robotic approach in operative outcomes. In 2015 Montalti et al. [36] reported a meta-analysis comparing outcomes of laparoscopic and robotic liver resections. The laparoscopic approach was associated with a significant reduction in blood loss and a shorter operative time compared to robot-assisted liver resection. In parallel, no differences were found for conversion rate, R1 resection rate, morbidity, or hospital stay. Although studies have been conducted to evaluate the role of robotic-assisted surgery for liver resection [37], the long-term oncological outcomes are still unclear and need further study.

12.2 Advantages and disadvantages of robotic liver resection

Robotic-assisted surgery should be considered a technological evolution of laparoscopy. As with all new surgical technologies, these methods have been developed in an attempt to overcome the most evident intrinsic drawbacks of laparoscopy, such as the limited bidimensional view, the inability to create operative triangulation, and the lack of ergonomic, user-friendly instruments [38].

12.2.1 Advantages of robotic liver surgery

Minimal invasiveness is one of the most important advantages of the robotic platform. During hepatobiliary surgery, preservation of the integrity of the abdominal wall (avoiding large incisions) translates into better postoperative outcomes, in terms of both respiratory complications and postoperative ascites in cirrhotic patients. From the beginning, the da Vinci robotic platform has been designed to overcome the limitations of conventional laparoscopy. To allow the surgeon to sit comfortably, the entire system has been built around a master-console design, resulting in the physical separation of the patient and the operating surgeon [39]. The console is linked in real time to a camera via a stereoviewer, which provides the surgeon with an image of the surgical site and extended system information. In this respect, one of the most important advantages is the tridimensional image, which establishes almost-real depth perception and confers better hand-eye coordination, thus reducing errors and increasing speed during the completion of laparoscopic tasks [13,40]. This 3D view also helps to compensate for one of the major limits of the da Vinci: the lack of haptic feedback. The patient-side cart is positioned around the patient (according to the laterality of the intervention) and equipped with EndoWrist instruments, which are designed to restore natural dexterity (with a greater range of motion than the human hand) while operating. During conventional laparoscopy, all surgical instruments are inserted through 5- to 12-mm cutaneous incisions and, as the entry point on the body acts as a fulcrum, the surgical tool is constrained to a maximum of four degrees of freedom, causing considerably reduced dexterity.
With the robotic EndoWrist instruments, a new fulcrum point is added, and the wrist is moved from outside the patient’s body (a hallmark of conventional MIS) to inside the abdominal cavity, resulting in seven degrees of freedom [41]. Furthermore, the surgeon’s intrinsic hand tremor, present in conventional MIS, is filtered out by a control algorithm, and hand movements can be scaled down to perform precise manipulations [42]. This greater freedom, together with the absence of tremor, permits the surgeon to perform gentle dissection and precise intracorporeal suturing, as in complex hilar dissection or biliary-enteric anastomosis [12].

12.2.2 Disadvantages of robotic liver surgery

With the current generation of robots, some disadvantages still need to be overcome. The da Vinci platform has a large footprint and bulky robotic arms, which require spacious operating rooms and create a risk of collision between the robotic arms during surgery, which can result in intraoperative troubleshooting.


In this context, if it is necessary to change the patient’s position during the intervention, the whole system must be undocked and redocked, which prolongs the operative time [43]. It must be highlighted, however, that the newly launched da Vinci Xi system (Intuitive) permits easier multifield surgery, enabled by its working arm station: the robotic arms can be rotated 180 degrees without repositioning the patient. This feature can be particularly useful in combined robotic hepatic and colorectal surgery (e.g., multiquadrant surgery) [44]. Steady communication between the robotic surgeon and the assistant is ensured by two-way audio communication. This is an important consideration, because specific data have shown a significant association between poor intraoperative team communication and worse surgical outcomes in robotic gynecologic surgery [45]. Even if considered uncommon and rarely consequential, robot malfunctions must be regarded as a potential limit of this technology. Roughly half of the described malfunctions are related to the instruments and are easily solved by tool replacement. In 2014 Buchs et al. [46] described 18 cases of robotic dysfunction out of 526 general surgery procedures (3.4%). In their series, 9/18 (1.7%) of the problems were related to robotic instruments, with the need for laparoscopic conversion in one case. They also observed more dysfunctions at the beginning of the study period (2006-10) than at the end (2011-12) (4.2% vs 2.6%, P = .35). Rajih et al. [47] confirmed these data and documented an overall rate of robotic malfunction of around 5%, in a series of 1228 robotic surgeries performed with the four-arm da Vinci Si system across three specialties (urology, gynecology, and thoracic surgery). The most common error was related to the pressure sensors of the robotic arms, in 2% (25/1228) of cases. Other problems were unrecoverable electronic communication errors in 1% (13/1228) of cases, failed encoder errors in 0.6% (7/1228), illuminator-related faults in 0.3% (4/1228), faulty switches in 0.2% (3/1228), battery-related failures in 0.2% (3/1228), and software/hardware errors in 0.1% (1/1228). The main limitation to the wider diffusion of the robotic platform remains its higher cost compared to laparoscopy.

12.3 Patient selection and preoperative preparation

Patient selection for liver surgery is primarily based on preoperative imaging. Preoperative imaging is essential both for the final diagnosis and for surgical planning. Magnetic resonance imaging (MRI) or four-phase computed tomography (CT) is the standard of care before liver surgery. Compared to an ultrasound (US) examination, these modalities allow the operator to characterize the lesion(s) (evaluating the different perfusion patterns of primary hepatic tumors and metastases) and to clearly assess the feasibility of resection by identifying the number of involved segments and the proximity of the lesion(s) to key vascular or biliary structures. When the information derived from CT or MRI is combined with the patient’s history, the diagnosis can usually be made without a biopsy (reserved for cases of diagnostic uncertainty) [48,49]. Overall, surgical indications for robotic liver resection are the same as for open surgery and laparoscopy. Both benign and malignant tumors can be resected. Guidelines on the feasibility of minimally invasive liver resection were first given in the Louisville Consensus statement in 2008 [50]. This document suggested the indications for the application of MIS to hepatic surgery: a laparoscopic approach was considered suitable for a solitary lesion (maximum 5-cm diameter) located in segment 2, 3, 4b, 5, or 6 and requiring a segmental resection or a left lateral sectionectomy. Two subsequent consensus meetings have been held: Morioka in 2014 [51] and Seoul in 2016 [52,53]. During the second meeting [51], the role of laparoscopy in the modern era of liver surgery was established, highlighting the need for formal education because of the steep learning curve. Finally, during the last consensus meeting, held during the 26th IASGO meeting in 2016, the expert panel underlined the role of MIS for donor hepatectomy, for both pediatric and adult living donors [52,53].

These guidelines, originally designed for laparoscopy, can be extrapolated to robotic liver resection. Nonetheless, the use of a robotic system can improve certain steps of minimally invasive major hepatectomy [54]. Lesions located in the posterosuperior segments are better addressed by a robotic approach than by laparoscopy [27]. The technical limitations of laparoscopic resection of the posterosuperior segments can be overcome thanks to the dedicated articulated robotic instruments. Wakabayashi et al. described robotic resection of an HCC located in segment 8 via a thoracoscopic approach, by opening the right diaphragm [55]. Lai and Tang reported two cases of robot-assisted laparoscopic partial caudate lobe resection in patients with HCC, both Child-Pugh class A [56]. The most convincing indications for robotic surgery are procedures that involve a small, deep, fixed operating field, or where fine dissection and parenchyma-sparing resection are required, as in cirrhotic patients.
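The Louisville Consensus eligibility rule summarized in the patient-selection discussion reduces to three checks. The toy sketch below encodes only that stated rule (solitary lesion, at most 5 cm, in segment 2, 3, 4b, 5, or 6); the function name and interface are invented for illustration, and real patient selection of course weighs many more factors.

```python
# Toy encoding of the Louisville Consensus (2008) suitability criteria for
# minimally invasive liver resection, as summarized in the text. Not a
# clinical tool: it only illustrates the stated rule.

FAVORABLE_SEGMENTS = {"2", "3", "4b", "5", "6"}  # anterolateral segments
MAX_DIAMETER_CM = 5.0

def louisville_suitable(solitary: bool, diameter_cm: float, segment: str) -> bool:
    """Return True if the lesion meets the summarized Louisville criteria."""
    return (solitary
            and diameter_cm <= MAX_DIAMETER_CM
            and segment in FAVORABLE_SEGMENTS)

print(louisville_suitable(True, 3.2, "4b"))  # small, peripheral, solitary
print(louisville_suitable(True, 3.2, "8"))   # posterosuperior segment
print(louisville_suitable(False, 2.0, "5"))  # multiple lesions
```

As the chapter notes, lesions failing the segment check (e.g., segments 7-8) are precisely those where a robotic approach may extend minimally invasive feasibility.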


12.4 Robot-assisted left hepatectomy

Below we describe operative setups (designed for the da Vinci Xi) and surgical techniques for common liver procedures. We acknowledge that significant variations exist according to the surgeon and institution involved.

12.4.1 Operative setup

In most cases, the patient is positioned in a dorsal lithotomy position, with 20-degree reverse Trendelenburg and parted legs. The pneumoperitoneum is achieved with an open laparoscopic technique at the level of the umbilicus. We feel that a standard configuration is particularly important to promote efficiency. The bedside assistant can be placed either on the right side of the patient or between the patient’s legs. For a standard left hepatectomy, we use a “four robotic arms” technique. The da Vinci cart is usually situated at the right side of the patient and the anesthesia cart at the head of the patient. The main operator sits at the surgeon’s console a few steps away. To avoid intraoperative arm collision, the ports should be placed at least 10 cm from each other (Fig. 12.2). For a robot-assisted left hepatectomy (Fig. 12.3—Green box), trocars are positioned as follows:

- a 12-mm camera port (Fig. 12.3—C) through the umbilicus;
- a 12-mm assistant port over the right anterior axillary line (Fig. 12.3—A);
- an 8-mm working trocar in the right flank region (Fig. 12.3—1);
- an 8-mm working trocar in the left umbilical region (Fig. 12.3—2); and
- an 8-mm working trocar in the left hypochondriac region (Fig. 12.3—3).

FIGURE 12.2 Tips and tricks for trocar placement in robotic liver surgery.

FIGURE 12.3 Trocar placement for robot-assisted left hepatectomy.

In the case of a lesion involving segments 2 and 3 and requiring a left lateral sectionectomy, a second setup is preferred:

- a 12-mm camera port (Fig. 12.4—C) at the level of the umbilicus;
- a 12-mm assistant port in the left umbilical region (Fig. 12.4—A);
- an 8-mm working trocar in the right umbilical region (Fig. 12.4—1);
- an 8-mm working trocar in the left umbilical region (Fig. 12.4—2); and
- an 8-mm working trocar in the left hypochondriac region (Fig. 12.4—3).

FIGURE 12.4 Trocar placement for robot-assisted left lateral sectionectomy.
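The "at least 10 cm between ports" rule from the operative setup is a simple pairwise-distance constraint, so a planned port layout can be checked mechanically. The sketch below is hypothetical: the coordinates are invented for illustration (cm on the abdominal wall, umbilicus at the origin) and do not correspond to the figures.

```python
# Hypothetical check of the minimum inter-port spacing rule described in
# the operative setup. Coordinates are invented for this sketch; any pair
# closer than the threshold risks intraoperative arm collision.
import math

MIN_SPACING_CM = 10.0

ports = {                         # (x, y) in cm, umbilicus at the origin
    "camera (C)": (0.0, 0.0),
    "assistant (A)": (14.0, 3.0),
    "arm 1": (10.0, -2.0),
    "arm 2": (-10.0, 0.0),
    "arm 3": (-14.0, 10.0),
}

names = list(ports)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        d = math.dist(ports[a], ports[b])          # Euclidean distance
        status = "OK" if d >= MIN_SPACING_CM else "TOO CLOSE"
        print(f"{a} <-> {b}: {d:.1f} cm  {status}")
```

With these invented coordinates the assistant port and arm 1 are flagged as too close, illustrating how a layout can satisfy most pairs yet still hide one collision-prone pairing.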

12.4.2 Surgical technique

The procedure begins with full mobilization of the left liver by sectioning the round, falciform, and left triangular ligaments. The anterior surfaces of the left and middle hepatic veins are exposed. The left liver is elevated, and the gastrohepatic ligament is divided, exposing the inferior vena cava plane. The liver is then retracted to expose the hilar region. The lesser omentum is incised, and a vascular tape is passed through the foramen of Winslow to surround the hepatoduodenal pedicle. Although seldom needed, this tape can be used for a Pringle maneuver during liver parenchymal transection. The left hepatic artery is identified along the left side of the hepatoduodenal ligament, dissected extraparenchymally, and divided using 3-0 Prolene sutures (with the help of the robotic needle holder) or robotic Hem-o-lok clips after confirming the correct interpretation of the anatomy.


The left portal branch is identified and divided. The left hepatic biliary duct can be divided during the parenchymectomy (usually using a vascular stapler). At this time, to reduce blood loss, the central venous pressure (CVP) should be decreased to less than 5 mmHg [57,58]. The parenchymal transection should be carried out along the demarcation area, layer by layer, using the harmonic shears, from the cortical aspect of the liver toward the core of the parenchyma. Stay sutures should be placed on the left side of the transection line and held by the third robotic arm to provide exposure of the hepatic section line. During the parenchymal transection phase, the round ligament can be used as a handle to retract the liver toward the right. Bleeding can be managed with bipolar cautery, harmonic shears, selective sutures, and robotic clips. The residual parenchyma and the left hepatic vein are divided using a laparoscopic vascular stapler. An endobag is inserted through a 12-mm trocar, and the specimen is placed inside. The endobag is then retrieved through a 10-cm Pfannenstiel incision. A fibrin glue film can be applied directly on the resection area once biliostasis and hemostasis are achieved. One or two closed suction drains are placed around the resected area to prevent fluid collection.

12.5 Robot-assisted right hepatectomy

12.5.1 Operative setup

The patient is placed in a dorsal lithotomy position, with 30-degree reverse Trendelenburg and parted legs (with the assistant surgeon positioned between the patient’s legs). The pneumoperitoneum is achieved with an open laparoscopic technique at the level of the umbilicus. The table assistant should be placed on the left side of the patient. The da Vinci cart is usually docked at the head of the patient and the anesthesia cart on the left side of the patient’s head. For the robot-assisted right hepatectomy (Fig. 12.5—Green box), trocars are positioned as follows:

- a 12-mm camera port (Fig. 12.5—C) through the umbilicus;
- a 12-mm assistant port over the right anterior axillary line (Fig. 12.5—A) (these two ports can be switched during the intervention, and can alternatively be used for suction/irrigation and retraction during the parenchymal transection phase);
- an 8-mm working trocar in the right lower quadrant (Fig. 12.5—1);
- an 8-mm working trocar in the left umbilical region (Fig. 12.5—2); and
- an 8-mm working trocar in the left hypochondriac region (Fig. 12.5—3).

FIGURE 12.5 Trocar placement for robot-assisted right hepatectomy.

12.5.2 Surgical technique

Robot-assisted right hepatectomy can be divided into three steps [57,58]: dissection of the hepatic hilum, hepatocaval dissection, and transection of the liver.

12.5.2.1 Dissection of the hilum

First, a retrograde cholecystectomy should be performed. The peritoneum is opened with an energy device (da Vinci Harmonic ACE or da Vinci PK Dissecting Forceps). The hepatic hilum is then dissected using a combination of the monopolar hook and bipolar forceps to reach the right hepatic artery, which is divided between 3-0 Prolene sutures or robotic Hem-o-lok clips. The portal vein is then approached. Selective ligatures should be applied before selectively dividing the branches for segment 1. The portal vein can then be safely isolated up to its bifurcation into the right and left portal branches. The right portal vein is then divided using a 4-0 Prolene suture, robotic Hem-o-lok clips, or a mechanical stapler. This first step is completed with the dissection of the right bile duct. An extrahepatic approach can be performed only with a clear understanding of the biliary anatomy and in the case of a low biliary confluence: the right biliary branch is isolated, and the section between 4-0 Prolene sutures is performed approximately 1 cm before the bifurcation. Otherwise, with a centrohepatic biliary confluence, an intrahepatic approach is preferred, with the ligation of the right biliary duct performed during the parenchymal transection.

12.5.2.2 Hepatocaval dissection

Full mobilization of the right liver is necessary for this step. The falciform ligament is divided up to the junction between the hepatic veins and the inferior vena cava, with the assistance of countertraction from the nondominant robotic arm. This maneuver exposes the right coronary and triangular ligaments, which are dissected in a medial-to-lateral direction. Subsequent lateral-to-medial retraction of the liver reveals the hepatocaval plane, covered by the peritoneal reflection. The peritoneum is then incised along the hepatocaval plane, exposing the inferior vena cava. 4-0 Prolene stitches are used to ligate the posterior accessory hepatic veins, which are then sectioned; robotic Hem-o-lok clips can also be used. The dissection is performed cranially up to the right hepatic vein. A left tilt of the patient can be added to complete the exposure.

12.5.2.3 Transection of the liver

As with left hepatectomy, the CVP is lowered (<5 mmHg). The capsule is incised with the monopolar hook following the demarcation line, which appears after the section of the right hepatic artery and vein. A layer-by-layer transection from the hilum to the right hepatic vein is carried out using an ultrasonic dissector and diathermic scissors. Larger vessels can be closed by robotic clips. During this phase, the second and third robotic arms can be used to open the section line via 2-0 Prolene suspension sutures placed laterally. To benefit from better instrument alignment, our habit is to switch the first robotic arm to the 12-mm port. Finally, the right hepatic vein is divided using a laparoscopic stapler. After completion of the transection, careful inspection of the raw surface of the liver is mandatory. The CVP can be increased to check for active bleeding. Indocyanine green (ICG) with near-infrared (NIR) technology or methylene blue can be helpful to detect biliary leakage. Routine drains are also placed at the end of the procedure. The specimen is placed in an endobag and retrieved through a 10-cm Pfannenstiel incision (Fig. 12.6).

12. Robotic Liver Surgery


Handbook of Robotic and Image-Guided Surgery

FIGURE 12.6 Intraoperative images of robotic right hepatectomy. Dissection and clipping of the right hepatic artery (A). Section of the right portal vein (B). Dissection of an accessory hepatic vein arising from the IVC (C). Parenchymal transection (D). Courtesy Giulianotti PC, Coratti A, Sbrana F, Addeo P, Bianco FM, Buchs NC, et al. Robotic liver surgery: results for 70 resections. Surgery 2011;149:29–39.

12.6 Robotic liver resection for posterior segments (7 and 8)

12.6.1 Operative setup

The patient is placed in a left lateral position, with an angle of 30 degrees between the patient's back and the operating table and a 15-degree anti-Trendelenburg tilt. The patient's right arm is positioned in front of their face, with a pillow placed between the arms. This position is essential for resection of the posterior segments, as it maximizes exposure of the right lobe. Furthermore, gravity acts as a natural retractor on the liver once it is detached from the hepatocaval plane. The pneumoperitoneum is achieved by an open laparoscopic technique in the right midclavicular line at the level of the umbilicus. The bedside assistant stands on the left side of the patient. The da Vinci cart is usually docked over the patient's right shoulder, with the anesthesia cart on the left side of the patient's head. For robot-assisted posterior segmentectomy (Fig. 12.7—Green box), trocars are positioned as follows, keeping a distance of at least 8 cm between them: a 12-mm camera port (Fig. 12.7—C) in the right midclavicular line; a 12-mm assistant port over the right anterior axillary line (Fig. 12.7—A); an 8-mm working trocar in the right midaxillary line (Fig. 12.7—1), which can also be placed in a transthoracic position, increasing the safety of dissection along the inferior vena cava [54]; an 8-mm working trocar in the right subcostal region (Fig. 12.7—2); and an 8-mm working trocar in the right subcostal region medial to port no. 2 (Fig. 12.7—3).

12.6.2 Surgical technique

The procedure starts with a retrograde cholecystectomy. The liver is mobilized as previously described, with dissection of the falciform, coronary, and right triangular ligaments and complete separation of the liver from the right adrenal gland and the diaphragm. The short hepatic veins are sutured or clipped and then divided. In this case, a Pringle maneuver is suggested, and the hepatoduodenal ligament is controlled with a vessel loop. Intraoperative ultrasound is used to define the resection line using the right hepatic vein as a landmark, and the line is then drawn on the liver surface. Parenchymal transection is performed along the marked line using an ultrasonic dissector and diathermic scissors. Arterial and portal branches directed to segment 6 are identified during the resection, clipped with robotic Hem-o-lok clips, and divided. The parenchymal transection continues with the identification, ligation, and division of the arterial and portal branches directed to segment 7.

Robotic Liver Surgery: Shortcomings of the Status Quo Chapter | 12

FIGURE 12.7 Trocar placement for robot-assisted posterior segmentectomy.

Finally, the branches from the right hepatic vein are isolated and divided using a laparoscopic stapler. The specimen can be removed in an endobag through a 10-cm Pfannenstiel incision. In the case of transthoracic port placement, a thoracic drain should be placed and kept under negative pressure (Fig. 12.8).

12.7 Extreme robotic liver surgery: robotic surgery and liver transplantation

MIS has gained increasing interest in the transplantation field [59]. This pioneering approach was first introduced in 1995 by Ratner et al. [60], who performed the first living donor kidney procurement with a laparoscopic technique. Six years later, in 2001, Gruessner et al. described a living donor distal pancreas procurement [61], and in 2006 the group led by Professor Gauthier published a series of laparoscopic left lateral sectionectomies in living donors [62]. While the importance and complexity of hepatic donor resection can favor the classical open approach, in experienced centers a laparoscopic approach is now considered a standard of care for kidney procurement in living donors, especially considering the postoperative complications intrinsic to the laparotomy itself. Classically, in the case of living donor hepatic resection by laparotomy, the risk of postoperative complications can be as high as 30%–50%, frequently related to major abdominal wall trauma; bowel obstruction, adhesions, and chronic abdominal discomfort are the most common complications [63].

In 2012 Giulianotti's group described a robotic living donor right hepatectomy [64]. This approach has helped mainstream the robotic technique, which is now comparable in terms of outcomes to the open and laparoscopic approaches [58]. At that time, the laparoscopic approach was the only minimally invasive alternative, limited to the left lateral sectionectomy [65]. Interestingly, 1 year later, a pure laparoscopic living donor right hepatectomy was performed and reported [66]. Finally, these two approaches (robotic vs laparoscopic) have been compared [20], showing that, although larger series are necessary to clarify the exact role of the robotic system, its magnified 3D view and steady instruments are advantageous during laparoscopy. These properties allow a meticulous deep parenchymal dissection, with blood loss comparable to open surgery (169 mL for the robotic approach vs 146 mL for the open surgery group; P = .47) [67]. On the other hand, important surgical tools are still missing: the current instrumentation for robotic dissection is not yet as developed as its laparoscopic counterpart, forcing the surgeon to favor the laparoscopic approach in the interest of donor safety. In selected centers and for selected patients, living donation by a laparoscopic approach is now available, offering several advantages over conventional open surgery, including faster postoperative recovery, fewer pulmonary complications, and better cosmetic outcomes. For this reason, the expansion of this technique should be strongly encouraged. Moreover, with the diffusion of the robotic system and the increasing skills of dedicated teams worldwide, it is natural to consider the robotic system as the next step. Data on postoperative pain, complications, and recovery, together with rigorous cost-effectiveness studies, still need to be gathered and validated.

Today, totally robotic kidney transplantation (procurement and implantation) [68–70] can be performed safely. On the other hand, the feasibility and safety of totally robotic liver transplantation have yet to be demonstrated. However, the da Vinci platform has frequently been used to perform bridge-to-transplant or downstaging hepatectomies [55,56] before final liver transplantation. In this situation, and compared to the laparoscopic approach, robotic resection seems to offer advantages in terms of reduced adhesion formation, blood loss, and transfusion requirements during the subsequent transplantation [71].

FIGURE 12.8 Tips and tricks for robotic liver surgery.

12.8 Cybernetic surgery: augmented reality in robotic liver surgery

Since its first presentation in 1998, the da Vinci surgical system has shown the potential to raise the entire surgical field to a new level. Sharing the same aim, robotics and computer science are interlinked to directly augment the surgeon's skill. This common aim is rooted in augmented reality (AR), which provides the real-time combination of real images and virtual data. This technology allows the surgeon to see what is normally invisible, made possible by NIR technology. The da Vinci system is provided with the Firefly mode, which displays NIR images directly inside the surgical console.


In recent years, fluorescence-guided surgical navigation has gained great interest in the oncological field [72,73]. In hepatobiliary surgery, a specific dye, ICG, has been investigated, first as a marker of hepatic metabolism before resection [74] and, second, for the detection of hepatic tumoral cells by exploiting its fluorescence when excited by specific NIR light. ICG tumoral lesion detection can be applied either to hepatocellular carcinoma (HCC) [75] or to colorectal cancer metastases [76,77]. Following various protocols, ICG is administered intravenously to the patient 12–48 hours before surgery to permit accumulation of the dye inside the liver and its physiological excretion by the same tissue. As a result, peri/intratumoral pathological accumulation can permit lesion detection. The da Vinci camera is equipped with a specific optic able to emit light with a wavelength between 750 and 810 nm. This light excites the ICG molecules, which then emit fluorescent light with a wavelength of around 830 nm (in the infrared spectrum and normally invisible to the human eye). This technology allows more accurate detection of intraparenchymal lesions not visualized during preoperative planning [78]. In this oncological setting, ICG has significant limitations, mainly its low sensitivity for deep intraparenchymal nodules and a high false-positive rate in cirrhotic livers. In the near future, new fluorescent molecules, potentially conjugated with monoclonal antibodies [79], will be developed.

ICG fluorescence can also be used to perform real-time vascular and biliary navigation. The concept of fluorescence-guided angio/cholangiography helps preserve vascular and biliary structures during the resective phase. This can be particularly useful during hilar dissection and might help decrease surgical complications in the case of anatomical variations [78,79].
On this subject, several papers have shown the efficacy of ICG guidance during robotic cholecystectomy [80,81], with real-time identification of the extrahepatic biliary anatomy. The opportunity to fuse intraoperative images with 3D virtual reconstructions obtained from the preoperative imaging workup generates the concept of "AR" in surgery, a topic that has received particular attention in recent years. In 2015 Pessaux et al. [82] explored the potential of AR in robotic resection. Three patients underwent a robotic AR-assisted hepatic resection. For all patients, a 3D virtual anatomical model was obtained with dedicated software (VRRENDER, IRCAD) and then processed and analyzed through a specific plug-in, Virtual Surgical Planning (VSP, IRCAD). The structural deformations created by the pneumoperitoneum were also simulated. The final result is a patient-specific 3D virtual model of the abdominal cavity, incorporating the hepatic nodules and their relationship with the vascular and biliary structures. This model was superimposed, with "see-through" visualization, onto the operative field during surgery, permitting AR-guided port placement and AR-guided liver resection with precise and safe recognition of all peritumoral vascular and biliary components (Fig. 12.9). The feasibility of this approach has also been validated by other authors [83,84], confirming, on one hand, that the da Vinci system is an excellent platform for image-guided surgery and, on the other, in a longer perspective, that this system is a step toward potential automation of the entire process. Nowadays, the principal limit is the discrepancy between the real liver images, which are subject to physiological movements, and the overlaid virtual images, which are fixed. This problem has yet to be overcome, but important progress is ongoing, for example, using anatomical landmarks to track the registration between the surgical images and the superimposed model. The da Vinci also offers virtual training simulation.
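Superimposing a fixed virtual model onto the live operative view is, at its core, a point-set registration problem: a transform is estimated from corresponding marker positions in the two coordinate frames. The following is purely an illustrative sketch, not the actual da Vinci, VSP, or IRCAD implementation; the function name and marker coordinates are hypothetical, and a standard least-squares rigid alignment (Kabsch algorithm) stands in for whatever calibration these systems really use:

```python
import numpy as np

def rigid_registration(src, dst):
    """Least-squares rigid transform (rotation R, translation t)
    mapping src marker points onto dst (Kabsch algorithm)."""
    src_c = src - src.mean(axis=0)           # center both point sets
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.eye(src.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical marker positions: virtual-model frame vs camera frame.
model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
camera = model @ R_true.T + np.array([2.0, -1.0])

R, t = rigid_registration(model, camera)
aligned = model @ R.T + t          # model markers mapped into camera frame
print(np.allclose(aligned, camera))  # True
```

A rigid transform is only the simplest case; the breathing-induced deformation mentioned above would require nonrigid (deformable) registration, which is exactly why the fixed-overlay discrepancy remains an open problem.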
In MIS, the need for continuous training is obvious, especially because of the unnatural 2D laparoscopic vision (with its lack of depth perception) and the new gestures required. This forces the surgeon to learn continuously to increase accuracy and advance their technical skills. With the robotic platform, simulation is possible, and in the near future, exploration of the patient's 3D model should help the surgeon plan the operation (the relationships between liver tumor, vascular structures, and biliary tree). Additionally, 3D images have been shown to help during planning [85]. With the introduction of the da Vinci platform, telesurgery has become a reality, revolutionizing the classic setup in which the surgeon and patient are in the same room. Telesurgery allows the surgeon to operate far from the patient. This is the essence of the Lindbergh operation, in which, in 2001, a transatlantic surgical procedure was performed covering the distance between Strasbourg (France) and New York (United States) [86]. Robotic platforms are the only machines able to achieve remote telesurgery, as well as surgical telementoring, in which an expert surgeon watches in real time an operation performed in a different geographic location, guiding the operating surgeon during the procedure. Telementoring is already recognized as an educational option in multiple specialties [87], and a safe environment allows trainee robotic surgeons to be telementored while performing basic or more advanced surgical interventions. However, significant hurdles (cost, cybersecurity of data) remain before the widespread use of this technology.


FIGURE 12.9 Robotic augmented reality platform. Images taken from the patient's CT scan are processed with dedicated software to create a three-dimensional model that can be matched to the real-time intraoperative image (A and B). The virtual model is then transmitted to a navigation system and displayed within the robot console as an image overlay. Finally, to precisely calibrate the image overlay, an optical calibration tracker is used based on selected markers (C and D). (A and B) Courtesy Volonte F, Buchs NC, Pugin F, Spaltenstein J, Jung M, Ratib O, et al. Stereoscopic augmented reality for da Vinci robotic biliary surgery. Int J Surg Case Rep 2013;4:365–7; (C and D) Courtesy Buchs NC, Volonte F, Pugin F, Toso C, Fusaglia M, Gavaghan K, et al. Augmented environments for targeting of hepatic lesions during image-guided robotic liver surgery. J Surg Res 2013;184:825–31.

12.9 The financial impact of the robotic system in liver surgery: is the robot cost prohibitive?

Worldwide, technological innovations represent one of the most important cost burdens for healthcare economies and, in recent years, financial difficulties have led to careful evaluation of the cost-effectiveness of every new medical procedure. MIS (laparoscopic and robotic) has improved patient outcomes in several general surgery subspecialties but has forced surgeons to ask whether its costs outweigh the benefits. Liver surgery is no exception and represents one of the most recent applications of these technologies. The cost-effectiveness of these procedures (relative to open procedures) is debated. Procedure costs comprise principally direct costs (including operating room time, instruments, and medications) and indirect costs, mainly related to hospital stay. In laparoscopic surgery, instrument cost (the technological component) is the most important component and is affected by the type of instruments (reusable or disposable) used in each procedure. Nowadays, the da Vinci system is the best-known commercially available surgical robotic technology, with a high acquisition cost (over €1 million) as well as an annual service cost of around €100,000. This high cost can explain the limited use of robotic-assisted surgery. It has been estimated that amortizing these general costs requires a baseline of 300 surgeries per year for 7 years [88]. Finally, another important disadvantage of the robotic system is the limited preprogrammed number of uses of its semidisposable instruments, calculated to be around 10 times more expensive. The financial impact of laparoscopic liver resection compared to the open approach has been analyzed by several authors [89,90], who concluded that laparoscopic surgery has an equal or superior cost efficiency to open surgery (with 3a and 3b levels of evidence).
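The amortization figures above imply a rough fixed cost per procedure. A minimal back-of-the-envelope sketch of that arithmetic, using only the numbers quoted in the text (real hospital accounting would add discounting, instrument, and staffing costs, so the result is illustrative only):

```python
def amortized_cost_per_procedure(acquisition_eur: float,
                                 annual_service_eur: float,
                                 procedures_per_year: int,
                                 years: int) -> float:
    """Spread acquisition plus cumulative service costs evenly
    over every procedure performed during the amortization period."""
    total_cost = acquisition_eur + annual_service_eur * years
    total_procedures = procedures_per_year * years
    return total_cost / total_procedures

# Figures quoted in the text: ~1M EUR acquisition, ~100k EUR/year service,
# 300 surgeries/year over 7 years [88].
cost = amortized_cost_per_procedure(1_000_000, 100_000, 300, 7)
print(f"Fixed cost per procedure: ~{cost:,.0f} EUR")  # prints "~810 EUR"
```

Roughly €810 of fixed robot cost per case at full utilization; at lower caseloads the per-procedure burden rises proportionally, which is why low-volume centers struggle to justify the platform.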
To the best of our knowledge, only one study has compared robotic versus laparoscopic hepatic resection [91] while also focusing on the economic impact of the da Vinci approach. These authors detected no difference between the two groups when only direct costs were considered ($5130 for robotic vs $4408 for laparoscopic). On the contrary, when indirect costs were taken into consideration, the robotic surgeries incurred a significantly higher cost ($6553 vs $4408). Interestingly, this study did not analyze the costs related to the patients' hospital stay. The final results show a slight inferiority of the robotic approach in terms of clinical outcomes, with increased cost compared to the laparoscopic approach.

Recently, Daskalaki et al. provided important, previously unexplored information on the economic assessment of using a robotic platform for liver surgery [92]. For the first time, to eliminate any observer bias, an independent external body performed the economic analysis, comparing 67 robotic versus 54 open liver resections. For each procedure, the departmental expenses were divided into three categories: direct variable costs, direct fixed costs, and indirect costs. Furthermore, a cost of $100,000/patient (independent of the approach) was selected as the threshold to identify potential high-cost patients. The total costs of open and robotic surgery, including both direct (variable and fixed) and indirect costs, as well as the costs of any readmissions, were $41,948 and $37,518, respectively. The average value for high-cost patients (three in the robotic group and four in the open surgery group) was higher for the robotic group than for the open technique ($159,194 vs $141,825). When single categories are considered, the average cost per robotic surgery is higher in terms of operating room and readmission costs, but this is generally compensated by lower intensive care unit, inpatient nursing, and pharmacy costs (without reaching statistical significance).

With the da Vinci robot system, Intuitive Surgical Inc. has dominated the market (primarily after its acquisition of Computer Motion, its main direct competitor, in 2003), with four generations of da Vinci platforms and over 4000 units sold worldwide. Several of Intuitive Surgical's earliest patents expired at the end of 2016. These patents covered some of the basic robotic concepts implemented in the company's products, including the control of robotic arms and tools with a remote controller and the imaging functionality provided by the surgical robot. With patent expiration, other companies are now developing robotic platforms for surgery. The next robotic systems to be tested directly in operating rooms include Johnson & Johnson and Google's Verb Surgical platform (www.verbsurgical.com), the TELELAP Alf-X system (www.transenterix.com), the Medtronic robot, and the REVO-I and AVATERA (www.avatera.eu/1/home) systems. This situation not only increases the technical challenge in the field but will also decrease robot-related costs in a competitive market.

12.10 Conclusion and future directions of robotic liver surgery

MIS has important benefits in oncological surgery, and laparoscopic liver surgery has gained increasing interest. Due to the complexity of some hepatobiliary procedures, robotic approaches are also employed. Despite the recent progress and the initial enthusiasm for this new surgical platform, the role of robotic-assisted approaches for liver surgery still needs to be clarified, and the comparisons between robotic and laparoscopic approaches remain incomplete. The safety and utility of the da Vinci system for hepatic resection have been well established, but cost considerations, as well as its steep learning curve, remain major drawbacks.

References

[1] Mouret P. How I developed laparoscopic cholecystectomy. Ann Acad Med Singapore 1996;25:744–7.
[2] Baccarani U, Terrosu G, Donini A, Risaliti A, Bresadola F. Future of minimally invasive surgery. Lancet 1999;7:354–63.
[3] Clinical Outcomes of Surgical Therapy Study Group. A comparison of laparoscopically assisted and open colectomy for colon cancer. N Engl J Med 2004;350:2050–9.
[4] Ashok KH, Rajeev K. Laparoscopy in urology. J Minim Access Surg 2005;1(4):147.
[5] Rimbach S, Neis K, Solomayer E, Ulrich U, Wallwiener D. Current and future status of laparoscopic gynecologic oncology. Geburtshilfe Frauenheilkd 2014;74:852–9.
[6] Luketich JD, Pennathur A, Awais O, et al. Outcomes after minimally invasive esophagectomy: review of over 1000 patients. Ann Surg 2012;256:95–103.
[7] Rockall TA, Demartines N. Laparoscopy in the era of enhanced recovery. Best Pract Res Clin Gastroenterol 2014;28:133–42.
[8] Bonjer HJ, Deijen CL, Haglind E, COLOR II Study Group. A randomized trial of laparoscopic versus open surgery for rectal cancer. N Engl J Med 2015;373:194.
[9] Castaing D, Vibert E, Ricca L, Azoulay D, Adam R, Gayet B. Oncologic results of laparoscopic versus open hepatectomy for colorectal liver metastases in two specialized centers. Ann Surg 2009;250(5):849–55. Available from: https://doi.org/10.1097/SLA.0b013e3181bcaf63.
[10] Köckerling F. Robotic vs. standard laparoscopic technique—what is better? Front Surg 2014;15:1–15.
[11] Supe AN, Kulkarni GV, Supe PA. Ergonomics in laparoscopic surgery. J Minim Access Surg 2010;6:31–6.
[12] Idrees K, Bartlett DL. Robotic liver surgery. Surg Clin North Am 2010;90:761–74.


[13] Velayutham V, Fuks D, Nomi T, Kawaguchi Y, Gayet B. 3D visualization reduces operating time when compared to high-definition 2D in laparoscopic liver resection: a case-matched study. Surg Endosc 2016;30:147–53.
[14] Hagen ME, Stein H, Curet M. Introduction to the robotic system. Robotics in general surgery. New York: Springer Science+Business Media; 2014.
[15] Ciria R, Cherqui D, Geller DA, Briceno J, Wakabayashi G. Comparative short-term benefits of laparoscopic liver resection: 9000 cases and climbing. Ann Surg 2016;263:761–77.
[16] Chen PD, Wu CY, Hu RH, Chou WH, Lai HS, Liang JT, et al. Robotic versus open hepatectomy for hepatocellular carcinoma: a matched comparison. Ann Surg Oncol 2017;24:1021–8.
[17] Giulianotti PC, Coratti A, Sbrana F, Addeo P, Bianco FM, Buchs NC, et al. Robotic liver surgery: results for 70 resections. Surgery 2011;149:29–39.
[18] Choi GH, Chong JU, Han DH, Choi JS, Lee WJ. Robotic hepatectomy: the Korean experience and perspective. Hepatobiliary Surg Nutr 2017;6:230–8.
[19] Tsung A, Geller DA, Sukato DC, Sabbaghian S, Tohme S, Steel J, et al. Robotic versus laparoscopic hepatectomy: a matched comparison. Ann Surg 2014;259(3):549–55.
[20] Wu YM, Hu RH, Lai HS, Lee PH. Robotic-assisted minimally invasive liver resection. Asian J Surg 2014;37(2):53–7.
[21] Goh BKP, Lee LS, Lee SY, Chow PKH, Chan CY, Chiow AKH. Initial experience with robotic hepatectomy in Singapore: analysis of 48 resections in 43 consecutive patients. ANZ J Surg 2018. Available from: https://doi.org/10.1111/ans.14417.
[22] O'Connor VV, Vuong B, Yang ST, DiFronzo A. Robotic minor hepatectomy offers a favourable learning curve and may result in superior perioperative outcomes compared with laparoscopic approach. Am Surg 2017;83:1085–8.
[23] Lai EC, Yang GP, Tang CN. Robot-assisted laparoscopic liver resection for hepatocellular carcinoma: short-term outcome. Am J Surg 2013;205:697–702.
[24] Troisi RI, Patriti A, Montalti R, Casciola L. Robot assistance in liver surgery: a real advantage over a fully laparoscopic approach? Results of a comparative bi-institutional analysis. Int J Med Robot 2013;9:160–6.
[25] Chan OC, Tang CN, Lai EC, Yang GP, Li MK. Robotic hepatobiliary and pancreatic surgery: a cohort study. J Hepatobiliary Pancreat Sci 2011;18:471–80.
[26] Spampinato MG, Coratti A, Bianco L, Caniglia F, Laurenzi A, Puleo F, et al. Perioperative outcomes of laparoscopic and robot-assisted major hepatectomies: an Italian multi-institutional comparative study. Surg Endosc 2014;28:2973–9.
[27] Casciola L, Patriti A, Ceccarelli G, Bartoli A, Ceribelli C, Spaziani A. Robot-assisted parenchymal-sparing liver surgery including lesions located in the posterosuperior segments. Surg Endosc 2011;25:3815–24.
[28] Morel P, Jung M, Cornateanu S, Buehler L, Majno P, Toso C, et al. Robotic versus open liver resections: a case-matched comparison. Int J Med Robot 2017;13(3):e1800.
[29] Alhomaidhi A. Right hepatectomy, open vs laparoscopic: a systematic review. Surg Sci 2012;3:580–7.
[30] Franken C, Lau B, Putchakayala K, DiFronzo LA. Comparison of short-term outcomes in laparoscopic vs open hepatectomy. JAMA Surg 2014;149(9):941–6.
[31] Wang XT, Wang HG, Duan WD, Wu CY, Chen MY, Li H, et al. Pure laparoscopic versus open liver resections for primary liver carcinoma in elderly patients: a single-center, case-matched study. Medicine (Baltimore) 2015;94(43):E1854. Available from: https://doi.org/10.1097/MD.0000000000001854.
[32] Han HS, Shehta A, Ahn S, Yoon YS, Cho JY, Choi Y. Laparoscopic versus open liver resection for hepatocellular carcinoma: case-matched study with propensity score. J Hepatol 2015;63(3):643–50.
[33] Di Fabio F, Barkhatov L, Bonadio I, Dimovska E, Fretland AA, Pearce NW, et al. The impact of laparoscopic versus open colorectal cancer surgery on subsequent laparoscopic resection of liver metastases: a multicentre study. Surgery 2015;157(6):1046–54.
[34] Hasegawa Y, Nitta H, Sasaki A, Takahara T, Itabashi H, Katagiri H, et al. Long-term outcomes of laparoscopic versus open liver resection for liver metastases from colorectal cancer: a comparative analysis of 168 consecutive cases at a single center. Surgery 2015;157(6):1065–72.
[35] Qiu J, Chen S, Pankaj P, Wu H. Laparoscopic hepatectomy is associated with considerably less morbidity and a long-term survival similar to that of the open procedure in patients with hepatic colorectal metastases. Surg Laparosc Endosc Percutan Tech 2014;24(6):517–22.
[36] Montalti R, Berardi G, Patriti A, Vivarelli M, Troisi RI. Outcomes of robotic vs laparoscopic hepatectomy: a systematic review and meta-analysis. World J Gastroenterol 2015;21(27):8441–51.
[37] Buchs NC, Oldani G, Orci LA, Majno PE, Mentha G, Morel P, et al. Current status of robotic liver resection: a systematic review. Expert Rev Anticancer Ther 2014;14(2):237–46.
[38] Berber E, Akyildiz HY, Aucejo F, Gunasekaran G, Chalikonda S, Fung J. Robotic versus laparoscopic resection of liver tumours. HPB (Oxford) 2010;12:583–6.
[39] Van der Schatte Olivier RH, Van't Hullenaar CD, Ruurda JP, Broeders IA. Ergonomics, user comfort, and performance in standard and robot-assisted laparoscopic surgery. Surg Endosc 2009;23:1365–71.
[40] Alaraimi B, El Bakbak W, Sarker S, Makkivah S, Al-Marzouq A, Goriparthi R, et al. A randomized prospective study comparing acquisition of laparoscopic skills in three-dimensional (3D) vs. two-dimensional (2D) laparoscopy. World J Surg 2014;38:2746–52.
[41] Leal Ghezzi T, Campos Corleta O. 30 years of robotic surgery. World J Surg 2016;40:2550–7.
[42] Moore LJ, Wilson MR, Waine E, McGrath JS, Masters RS, Vine SJ. Robotically assisted laparoscopy benefits surgical performance under stress. J Robot Surg 2015;9:277–84.


[43] Jain M, Fry BT, Hess LW, Anger JT, Gewertz BL, Catchpole K. Barriers to efficiency in robotic surgery: the resident effect. J Surg Res 2017;205:296–304.
[44] Leung U, Fong Y. Robotic liver surgery. Hepatobiliary Surg Nutr 2014;3:288–94.
[45] Schiff L, Tsafrir Z, Aoun J, Taylor A, Theoharis E, Einstein D. Quality of communication in robotic surgery and surgical outcomes. JSLS 2016;20:e2016.00026.
[46] Buchs NC, Pugin F, Volonté F, Morel P. Reliability of robotic system during general surgical procedures in a university hospital. Am J Surg 2014;207:84–8.
[47] Rajih E, Tholomier C, Cormier B, Samouëlian V, Warkus T, Liberman M, et al. Error reporting from the da Vinci surgical system in robotic surgery: a Canadian multispecialty experience at a single academic centre. Can Urol Assoc J 2017;11:E197–202. Available from: https://doi.org/10.5489/cuaj.4116.
[48] Hart ME, Precht A. Robotic liver resection technique. Cancer J 2013;19:147–50.
[49] Frankel TL, Kinh Gian Do R, Jarnagin WR. Preoperative imaging for hepatic resection of colorectal cancer metastasis. J Gastrointest Oncol 2012;3:11–18.
[50] Buell JF, Cherqui D, Geller DA, O'Rourke N, Ianniti D, Dagher I, et al. The international position on laparoscopic liver surgery: the Louisville statement. Ann Surg 2008;250:825–30.
[51] Wakabayashi G, Cherqui D, Geller DA, Buell JF, Kaneko H, Han HS, et al. Recommendations for laparoscopic liver resection: a report from the second international consensus conference held in Morioka. Ann Surg 2015;261:619–29.
[52] Cho JY, Han HS, Kaneko H, Wakabayashi G, Okajima H, Uemoto S, et al. Survey results of the expert meeting on laparoscopic living donor hepatectomy and literature review. Dig Surg 2017;35:289–93.
[53] Shehta A, Han HS, Yoon YS, Cho JY, Choi Y. Laparoscopic liver resection for hepatocellular carcinoma in cirrhotic patients: 10-year single-center experience. Surg Endosc 2016;30:638–48.
[54] Giulianotti PC, Sbrana F, Coratti A, Bianco FM, Addeo P, Buchs NC, et al. Totally robotic right hepatectomy: surgical technique and outcomes. Arch Surg 2011;146(7):844–50.
[55] Wakabayashi G, Sasaki A, Nishizuka S, Furukawa T, Kitajima M. Our initial experience with robotic hepato-biliary-pancreatic surgery. J Hepatobiliary Pancreat Sci 2011;18:481–7.
[56] Lai EC, Tang CN. Robot-assisted laparoscopic partial caudate lobe resection for hepatocellular carcinoma in cirrhotic liver. Surg Laparosc Endosc Percutan Tech 2014;24:e88–91.
[57] Li Z, Sun YM, Wu FX, Yang LQ, Lu ZJ, Yu WF. Controlled low central venous pressure reduces blood loss and transfusion requirements in hepatectomy. World J Gastroenterol 2014;20:303–9.
[58] Giulianotti PC, Bianco FM, Daskalaki D, Gonzales-Ciccarelli LF, Kim J, Benedetti E. Robotic liver surgery: technical aspect and review of the literature. Hepatobiliary Surg Nutr 2016;5(4):311–21.
[59] Levi Sandri GB, de Werra E, Mascianà G, Guerra F, Spoletini G, Lai Q. The use of robotic surgery in abdominal organ transplantation: a literature review. Clin Transplant 2017;31:e12856.
[60] Ratner L, Ciseck LJ, Moore RG, Cigarroa FG, Kaufman HS, Kavoussi LR. Laparoscopic live donor nephrectomy. Transplantation 1995;60:1047.
[61] Gruessner RWG, Kandaswamy R, Denny R. Laparoscopic simultaneous nephrectomy and distal pancreatectomy from a live donor. J Am Coll Surg 2001;193:333.
[62] Soubrane O, Cherqui D, Scatton O, Stenard F, Bernard D, Branchereau S, et al. Laparoscopic left lateral sectionectomy in living donors. Ann Surg 2006;244(5):815–20.
[63] Abecassis MM, Fisher RA, Olthoff KM, Freise CE, Rodrigo DR, Samstein B, et al. Complications of living donor hepatic lobectomy—a comprehensive report. Am J Transplant 2012;12:1208–17.
[64] Giulianotti PC, Tzvetanov I, Jeon H, et al. Robot-assisted right lobe donor hepatectomy. Transpl Int 2012;25:e5–9.
[65] Cauchy F, Schwartz L, Scatton O, Soubrane O. Laparoscopic liver resection for living donation: where do we stand? World J Gastroenterol 2014;20(42):15590–8.
[66] Soubrane O, Perdigao Cotta F, Scatton O. Pure laparoscopic right hepatectomy in a living donor. Am J Transplant 2013;13:2467 71. [67] Chen PD, Wu CY, Hu RH, Ho CM, Lee PH, Lai HS, et al. Robotic liver donor right hepatectomy: a pure, minimally invasive approach. Liver Transpl 2016;22:1509 18. [68] Giulianotti PC, Gorodner V, Sbrana F, Tzvetanov I, Jeon H, Bianco F, et al. Robotic transabdominal kidney transplantation in a morbidity obese patient. Am J Transplant 2010;10(6):1478 82. [69] Boggi U, Vistoli F, Signori S, D’Imporzano S, Amorese G, Consani G, et al. Robotic renal transplantation: first European case. Tranpl Int 2011;24(2):213 18. [70] Tzvetanov I, Giulianotti PC, Bejarano-Pineda L, Jeon H, Garcia-Roca R, Bianco F, et al. Surg Clin North Am 2013;93(6):1309 23. [71] Panaro F, Piardi T, Cag M, Cinqualbre J, Wolf P, Audet M. Robotic liver resection as a bridge to liver transplantation. JSLS 2011;15:86 9. [72] Yoshida M, Kubota K, Kuroda J, Ohta K, Nakamura T, Saito J, et al. Indocyanine green injection for detecting sentinel nodes using color fluorescence camera in the laparoscopy-assisted gastrectomy. J Gastroenterol Hepatol 2012;27(Suppl. 3):29 33. [73] Zalken JA, Tufaro AP. Current trends and emerging future of Indocyanine Green usage in surgery and oncology: an update. Ann Surg Oncol 2015;22(3):S1271 83. [74] Hemming AW, Scudamore CH, Chacleton CB, et al. Indocyanine green clearance as a prediction of successful hepatic resection in cirrhotic patients. Am J Surg 1992;163(5):515 18.

210

Handbook of Robotic and Image-Guided Surgery

[75] Kokudo N, Ishizawa T. Clinical application of fluorescence imaging of liver cancer using indocyanine gree. Liver Cancer 2012;1(1):15 21. [76] Peloso A, Franchi E, Canepa MC, Barbieri L, Briani L, Ferrario J, et al. Combined use of intraoperative ultrasound and indocyanine green fluorescence imaging to detect liver metastases from colorectal cancer. HPB (Oxford) 2013;15(12):928 34. [77] Handgraaf HJM, Boogerd LSF, Ho¨ppener DJ, Peloso A, Sibinga Mulder BG, Hoogstins CES, et al. Long-term follow-up after near-infrared fluorescent-guided resection of colorectal liver metastases: a retrospective multicentre analysis. Eur J Surg Oncol 2017;43(8):1463 71. [78] Daskalaki D, Aguilera F, Patton K, et al. Fluorescence in robotic surgery. J Surg Oncol 2015;112:250 6. [79] Porcu EP, Salis A, Gavini E, Rassu G, Maestri M, Giunchedi P. Indocyanine green delivery system for tumour detection and treatments. Biotechnol Adv 2016;34(5):768 89. [80] Daskalaki D, Fernandes E, Wang X, Bianco FM, Elli EF, Ayloo S, et al. Indocyanine green (ICG) fluorescent cholangiography during robotic cholecystectomy: results of 184 consecutive cases in a single institution. Surg Innov 2014;21(6):615 21. [81] Buchs NC, Hagen ME, Pugin F, Volonte F, Bucher P, Schiffer E, et al. Intra-operative fluorescent cholangiography using indocyanine green during robotic single site cholecystectomy. Int J Med Robot 2012;8(4):436 40. [82] Pessaux P, Diana M, Soler L, Piardi T, Mutter D, Marescaux J. Towards cybernetic surgery: robotic and augmented reality-assisted liver segmentectomy. Langenbecks Arch Surg 2015;400:381 5. [83] Buchs NC, Volonte F, Pugin F, Toso C, Fusaglia M, Gavaghan K, et al. Augmented environments for targeting of hepatic lesions during image-guided robotic liver surgery. J Surg Res 2013;184:825 31. [84] Volonte F, Bichs NC, Pugin F, Spaltenstein J, Jung M, Ratib O, et al. Stereoscopic augmented reality for da Vinci robotic biliary surgery. Int J Surg Case Rep 2013;4:365 7. 
[85] Nicolau S, Soler L, Mutter D, Marescaux J. Augmented reality in laparoscopic surgical oncology. Surg Oncol 2011;20:189 201. [86] Marescaux J, Leroy J, Gagner M, Rubino F, Mutter D, Vix M, et al. Transatlantic robot-assisted telesurgery. Nature 2001;413:379 80. [87] Porretta AP, Alerci M, Wyttenbach R, Antonucci F, Cattaneo M, Bogen M, et al. Long-term outcomes of a telementoring program for distant teaching of endovascular aneurysm repair. J Endovasc Ther 2017;24(6):852 8. [88] van Dam P, Hauspy J, Verkinderen L, Trinh XB, van Dam PJ, Van Looy L, et al. Are costs of robot-assisted surgery warranted for gynecological procedures? Obstet Gynecol Int 2011;2011:973830. [89] Bhoiani FD, Fox A, Pitzul K, Gallinger S, Wei A, Moulton CA, et al. Clinical and economic comparison of laparoscopic to open liver resection using a 2-to-1 matched pair analysis: an institutional experience. J Am Coll Surg 2012;214:184 95. [90] Cleary SP, Han HS, Yamamoto M, Wakabayashi G, Asbun HJ. The comparative costs of laparoscopic and open resection: a report for the 2nd International Consensus of Laparoscopic Liver resection. Surg Endosc 2016;30:4691 6. [91] Packiam V, Bartlett DL, Tohme S, Reddy S, Marsh JW, Geller DA, et al. Minimally invasive liver resection: robotic versus laparoscopic left lateral sectionectomy. J Gastrointest Surg 2012;16:2233 8. [92] Daskalaki D, Gonzalez-Heredia R, Brown M, Bianco FM, Tzvetanov I, Davis M, et al. Financial impact of the robotic approach in liver surgery: a comparative study of clinical outcomes and costs between the robotic and open technique in a single institution. J Laparoendosc Adv Surg Tech A 2017;27:375 82.

Further reading Schwarz L, Aloia TA, Eng C, Chang GJ, Vauthey JN, Conrad C. Transthoracic port placement increases safety of total laparoscopic posterior sectionectomy. Ann Surg Oncol 2016;23(7):2167.

13

Clinical Applications of Robotics in General Surgery

Rana M. Higgins and Jon C. Gould
Medical College of Wisconsin, Milwaukee, WI, United States

ABSTRACT The clinical applications of robotics in general surgery are broad, spanning bariatric, hernia, foregut, colorectal, and solid organ procedures. The clinical studies and published literature to date indicate that the minimally invasive approach provided by robotics is not inferior to laparoscopic surgery. However, considerations such as the learning curve, operative time, and cost are significant. This chapter provides a background on the utilization of robotics in general surgery, along with in-depth evaluations of the clinical applications of robotics in bariatric surgery, hernia surgery, colorectal surgery, foregut surgery, and solid organ surgery. Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00013-X © 2020 Elsevier Inc. All rights reserved.

13.1 Utilization of robotics in general surgery

The primary robotic platform used in general surgery is the da Vinci surgical system (Intuitive Surgical, Sunnyvale, California, United States). This system is not autonomous; it serves as a computer-assisted telemanipulator [1]. It received FDA approval in 2000 and is now on its fourth generation, released in 2017. The surgeon sits at a console with controllers used to manipulate the robotic arms, which are connected to ports similar in size to those used in laparoscopic surgery. The other robotic surgical system available in the United States is the Senhance Surgical Robotic System (TransEnterix Surgical, Inc., Morrisville, North Carolina, United States), which allows a surgeon to control the system via three separate robotic arms. This device was FDA approved in October 2017. Given the longevity of the da Vinci, most literature published in robotic surgery relates to this system. The utilization of robotics in gynecologic and urologic surgery has been dominant over the past decade, as it has enabled those surgeons to transition from an open to a minimally invasive approach using the robotic system. In general surgery, the dominance of robotic surgery has not been as pronounced, as laparoscopy is already common in many subspecialties of general surgery, with proven improved clinical outcomes over open surgery. However, the use of robotics in general surgery has been steadily increasing and has surpassed urology in total procedures performed nationally (Fig. 13.1). Robotic surgery has yet to demonstrate a significant clinical benefit across multiple subspecialties in general surgery. Theoretical advantages of the technology include improved visualization using three-dimensional imaging, stabilization of instruments in the surgical field, the mechanical advantages of wristed instruments that are not available in laparoscopic surgery, and improved ergonomics for the surgeon [3]. Disadvantages include lack of haptic feedback, cost, and limited proven clinical benefit. Within general surgery, the primary clinical research and utilization have been in bariatric, hernia, foregut, colorectal, and solid organ surgery. Among general and bariatric surgery procedures, Fig. 13.2 shows the distribution of cases; rectal resection, Heller myotomy, and antireflux surgery are the three most commonly performed robotically [4].

FIGURE 13.1 Robotic surgery procedure trends over time in gynecologic, urologic, and general surgery [2].

FIGURE 13.2 Distribution of robotic general and bariatric surgery procedures, 2010-14 [4].

13.2 Robotics in bariatric surgery

13.2.1 Procedure background

FIGURE 13.3 Anatomic depiction of a sleeve gastrectomy [5]. From Welbour R. Oesophagogastric surgery. Courtesy Elsevier.

FIGURE 13.4 Anatomic depiction of a gastric bypass [5]. From Welbour R. Oesophagogastric surgery. Courtesy Elsevier.

Bariatric surgery is the most effective long-term treatment for morbid obesity, which currently affects greater than one-third of the United States population and is projected to affect 50% by 2030. The two most common procedures performed in the United States are the sleeve gastrectomy and gastric bypass (Figs. 13.3 and 13.4). Through hormonal and anatomic mechanisms, these procedures provide highly effective long-term weight loss, upward of 50%-70% of excess body weight. Laparoscopic bariatric surgery has transformed the field by minimizing postoperative complications in this high-risk patient population. The first laparoscopic gastric bypass was performed in 1994, and over the next 10 years the use of laparoscopy continued to rise [6]. Between 2004 and 2005, laparoscopic procedures surpassed open bariatric procedures performed nationally, and this disparity has continued to grow rapidly. Over 97% of current bariatric operations are performed laparoscopically [7].
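The "excess body weight" figure above is conventionally reported as percent excess weight loss (%EWL): weight lost divided by excess weight over an ideal body weight. A minimal sketch of the calculation; the function name and the BMI-25 definition of ideal weight are common illustrative conventions, not specified in this chapter:

```python
def percent_excess_weight_loss(initial_kg, current_kg, height_m, ideal_bmi=25.0):
    """%EWL = 100 * (weight lost) / (excess weight).

    Excess weight is measured against an ideal body weight, taken here
    (one common convention) as the weight corresponding to BMI 25.
    """
    ideal_kg = ideal_bmi * height_m ** 2
    excess_kg = initial_kg - ideal_kg
    if excess_kg <= 0:
        raise ValueError("no excess weight relative to the ideal BMI")
    return 100.0 * (initial_kg - current_kg) / excess_kg

# A 1.70 m patient going from 130 kg to 95 kg has lost about 61% of excess weight.
print(round(percent_excess_weight_loss(130, 95, 1.70), 1))
```

On these numbers the result sits squarely in the 50%-70% range quoted above.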

13.2.2 Robotic gastric bypass

Robotics has started to gain momentum in bariatric surgery over the past several years. The first robotic gastric bypass was performed in 2000 [8]. Most of the published literature compares laparoscopic and robotic gastric bypass. Multiple institutional reviews have compared laparoscopy to robotic-assisted gastric bypass, in which portions of the operation are performed laparoscopically, and to totally robotic gastric bypass. Outcomes have been variable, with some studies demonstrating increased leak rates and higher readmission rates, and others demonstrating equivalent outcomes [9,10]. Increased operative time has been evident across multiple studies. National database studies of robotic bariatric surgery focus on short-term postoperative outcomes when comparing robotics to laparoscopy. An analysis of the Bariatric Outcomes Longitudinal Database from 2007 to 2012 identified 137,455 patients who underwent either robotic or laparoscopic primary gastric bypass. When the cohorts were propensity-matched, robotic gastric bypass patients had increased operative time, reoperation, 30- and 90-day complications, readmissions, stricture, ulceration, and anastomotic leak. The authors therefore concluded that patients undergoing robotic gastric bypass had a higher rate of postoperative morbidity compared to laparoscopy [11]. More recently, Sharma et al. [12] identified 36,158 patients from the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program 2015 dataset, 7.4% of whom underwent robotic-assisted gastric bypass. These patients were propensity-matched to those who underwent laparoscopic gastric bypass. Robotic gastric bypass had a longer median operative time (136 vs 107 minutes, P < .001) and higher 30-day readmission rates (7.3% vs 6.2%, P = .03). There were no statistically significant differences in all-cause morbidity, serious morbidity, mortality, unplanned intensive care unit admission, reoperation, or reintervention. The authors therefore concluded that robotic-assisted gastric bypass is safe compared to the laparoscopic approach, but does have longer operative times and higher readmission rates. Systematic reviews have likewise demonstrated longer operative times and increased cost, with a similar safety profile, when comparing robotic and laparoscopic gastric bypass. Fourman and Saber [13] reviewed 18 studies that demonstrated similar or lower complication rates with robotic compared to laparoscopic gastric bypass. In addition, they highlighted a shorter learning curve for the robotic gastric bypass. The majority of studies identified longer operative times with the robotic approach. These findings were also identified in a systematic review and meta-analysis from Li et al. [14] of 27 studies with 27,997 patients. There was no difference identified between robotic and laparoscopic gastric bypass in overall complications, major complications, length of stay, reoperation, conversion, or mortality. The incidence of leak was lower for robotic compared to laparoscopic gastric bypass. However, robotic gastric bypass did have longer operative times and higher costs. From all the available studies, the overall safety profile of the robotic gastric bypass is similar to laparoscopy [15]. Increased operative time and higher cost are important metrics that need to be more closely examined. Operative time is greatly influenced by the learning curve. Schauer et al. identified a learning curve of 100 cases to perform a laparoscopic gastric bypass with a significant decrease in operative time and technical complications [16]. Robotically, operative time has been shown to decrease by 25 minutes after 10 robotic gastric bypass cases [17].
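The propensity-matched comparisons above pair each robotic patient with a laparoscopic patient whose estimated probability of receiving the robotic approach is similar, so that outcome differences are less confounded by case mix. A minimal sketch of the 1:1 greedy matching step; the function, the caliper value, and the example scores are illustrative, not taken from the cited studies:

```python
def greedy_match(treated, controls, caliper=0.05):
    """1:1 greedy nearest-neighbor matching on propensity scores.

    treated/controls map patient id -> estimated propensity score
    (e.g. P(robotic approach | covariates) from a logistic model).
    Returns matched (treated_id, control_id) pairs whose scores differ
    by at most the caliper; each control is used at most once.
    """
    available = dict(controls)
    pairs = []
    for tid, score in sorted(treated.items(), key=lambda kv: kv[1], reverse=True):
        if not available:
            break
        # nearest remaining control by propensity-score distance
        cid, cscore = min(available.items(), key=lambda kv: abs(kv[1] - score))
        if abs(cscore - score) <= caliper:
            pairs.append((tid, cid))
            del available[cid]
    return pairs

# Two robotic patients matched to the closest-scoring laparoscopic controls.
print(greedy_match({"r1": 0.80, "r2": 0.41},
                   {"l1": 0.79, "l2": 0.40, "l3": 0.15}))
```

Outcomes such as readmission or operative time are then compared within the matched pairs rather than across the raw cohorts.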
Cost is also a significant consideration and appears to be the primary disadvantage of robotics, regardless of specialty.
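Learning-curve thresholds such as the 100-case figure cited above are often estimated with a CUSUM analysis of consecutive operative times: the cumulative sum of deviations from a target time rises during the learning phase and flattens or falls once proficiency is reached. A minimal sketch under that assumption; the function name and the example numbers are illustrative, not from the cited studies:

```python
def cusum(operative_times_min, target_min):
    """Cumulative sum of deviations of operative time from a target.

    A rising curve means consecutive cases are running longer than the
    target (early learning phase); a flat or falling curve suggests
    proficiency has been reached.
    """
    total, curve = 0.0, []
    for t in operative_times_min:
        total += t - target_min
        curve.append(total)
    return curve

# Times trending down toward a 100-minute target: the curve peaks, then falls.
print(cusum([130, 120, 110, 100, 90, 85], 100))
```

The case number at which the curve peaks is one common operational definition of the end of the learning curve.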

13.2.3 Robotic sleeve gastrectomy

Evidence for robotic sleeve gastrectomy is not as prevalent in the literature as for the gastric bypass [18]. Comparative single-institution studies have demonstrated increased operative time robotically, with no difference in complications compared to laparoscopic sleeve gastrectomy [19-21]. Magouliotis et al. [22] presented a systematic review and meta-analysis of 16 studies comparing robotic and laparoscopic sleeve gastrectomy. Robotic sleeve gastrectomy was found to have a longer mean operative time and increased length of hospital stay compared to the laparoscopic approach. Postoperative complications and excess weight loss were not statistically different. In addition, the majority of studies identified an increased cost with the robotic approach. Regarding revisional robotic bariatric surgery, Gray et al. [23] performed a retrospective single-institution review of 84 patients who underwent laparoscopic and robotic revisional bariatric surgery. For patients undergoing conversion from an adjustable gastric band, there was no difference in operative time, length of stay, or complications between the robotic and laparoscopic approaches. For patients who had conversion from a stapled procedure, including sleeve gastrectomy, gastric bypass, and vertical banded gastroplasty, the robotic approach was associated with a shorter length of stay and similar operative time and postoperative complications. Revisional bariatric surgery can therefore be safely performed robotically.


Overall, robotic bariatric surgery has demonstrated noninferiority with regard to perioperative safety when compared to laparoscopic bariatric surgery. Concerns about increased operative time and cost are significant, and additional studies are needed to address these issues. The learning curve is an important contributor to operative time that also needs to be taken into consideration.

13.3 Robotics in hernia surgery

13.3.1 Procedure background

Hernia surgery is the most common procedure performed by general surgeons. A hernia is a defect in the abdominal wall or pelvic floor musculature that causes bulging and/or pain. Ventral hernias involve the abdominal wall, as depicted in Fig. 13.5. They can be repaired with an intraperitoneal onlay mesh or a component separation, such as a retrorectus repair or transversus abdominis release (TAR). These procedures have been performed both open and laparoscopically, and are becoming increasingly common in robotic surgery. Inguinal hernias involve the inguinal region, as depicted in Fig. 13.6, and are most common in men. These repairs can be performed open or laparoscopically as well, with more being performed robotically over the past few years.

FIGURE 13.5 Anatomic depiction of a ventral hernia with fascial holes in the abdominal wall [24].

FIGURE 13.6 Anatomic depiction of a left inguinal hernia, with a fascial defect lateral to the inferior epigastric vessels, consistent with an indirect inguinal hernia [25].

13.3.2 Robotic ventral hernia repair

The first published report of a laparoscopic ventral hernia repair was in 1993 [26]. Technically, this is performed as a bridging repair with 3-5 cm of mesh overlap without routine primary closure of the fascial defect. Transfascial sutures are typically placed, in addition to either absorbable or permanent circumferential tacks. In 2003 Schluender et al. [27]

published an animal study proposing a technique for using the robotic surgical system to suture the mesh to the fascia intracorporeally, as an alternative to laparoscopic transfascial suture fixation. In 2003 Ballantyne et al. [28] published the first study of robotic ventral hernia repair in two patients; however, tacks were used. Practice has since evolved toward the technique described in Schluender's study, as the robotic instruments allow a greater range of motion and the ability to suture intracorporeally. One of the advantages of robotic ventral hernia repair is primary closure of the fascial defect. Advantages of primary closure include a theoretically stronger repair, with a greater surface area of the abdominal wall in contact with the mesh, since it is not a bridging repair. In addition, with the primary fascial defect closed, there is a decreased likelihood that the mesh will bulge into the defect [29]. Gonzalez et al. [30] found, in a retrospective review of 134 cases, that longer operative times were associated with robotic primary fascial defect closure, but complications and recurrences were similar to laparoscopic repair without fascial closure. Outside of single-institution studies, a multicenter review of 215 patients found a decreased incidence of hernia recurrence robotically compared to laparoscopically (2.1% vs 4.2%, P < .0001), as well as of surgical site occurrence (4.2% vs 18.8%, P < .0001). More robotic than laparoscopic patients had primary fascial closure. Of note, in this study, robotic patients had a lower body mass index and fewer comorbidities than laparoscopic patients [31]. The national database reviews comparing laparoscopic and robotic ventral hernia repair have overall demonstrated that robotic ventral hernia repair has an equivalent safety profile to laparoscopic repair. Prabhu et al. [32] analyzed 454 laparoscopic and 177 robotic ventral hernia patients from the Americas Hernia Society Quality Collaborative (AHSQC). They found that the median length of stay was longer for laparoscopic patients, as was the risk of surgical site occurrence (14% laparoscopic vs 5% robotic, P = .0001); however, there was no difference in procedural intervention between the two groups. Operative time was longer in the robotic hernia repair group. A study of the Nationwide Inpatient Sample of 149,622 patients comparing laparoscopic and robotic ventral hernia repair found comparable safety, with no difference in length of stay, mortality, or postoperative complications [33]. However, robotic ventral hernia repair was associated with a higher cost. A Vizient database study of 46,799 patients included open, laparoscopic, and robotic ventral hernia repairs [34]. Overall postoperative outcomes were improved with laparoscopic and robotic compared to open ventral hernia repair. Robotic ventral hernia repair did have higher rates of postoperative complications (7.3% vs 3.5%, P < .05) and infections (1.72% vs 0.67%, P < .05) compared to laparoscopic repair. In addition, of the three approaches, robotic was the most expensive.

13.3.3 Robotic transversus abdominis release

Retromuscular ventral hernia repair and TAR have also become increasingly prevalent among robotic hernia repairs. Carbonell et al. [35] compared open to robotic retromuscular ventral hernia repair in 333 patients through the AHSQC database. They found that median length of stay was significantly decreased with robotic repair (2.0 vs 3.0 days, P < .001). There were no differences in 30-day readmissions or surgical site infections between the groups. The robotic repair patients had a higher incidence of surgical site occurrences, the majority of which were seromas that did not require intervention. Studies of robotic TAR demonstrate low morbidity comparable to open repair. Martin-del-Campo et al. [36] identified, in 38 patients who underwent robotic TAR and 76 matched open TAR patients, that operative time was longer for the robotic group. However, blood loss was significantly lower, as were systemic complications (0% robotic vs 17% open, P = .026) and length of hospital stay (1.3 days robotic vs 6.0 days open, P < .001). A similar length of stay was identified in a single-institution study of 102 patients, with a median stay of 3.5 days for robotic and 6.7 days for open TAR patients (P < .01) [37].

13.3.4 Robotic inguinal hernia repair

Robotic inguinal hernia repair, performed as a transabdominal preperitoneal approach, has provided another minimally invasive option in addition to laparoscopy. A benefit has been demonstrated in obese patients for robotic compared to open inguinal hernia repair, with decreased postoperative complications. A multicenter retrospective chart review of 148 robotic and 113 open inguinal hernia repairs in obese patients identified a higher incidence of postoperative complications in open compared to robotic repair (10.8% vs 3.2%, P = .047) [38]. In another study from the same multicenter group, in which obesity was not a defined variable, robotic inguinal hernia repair still had a decreased incidence of postoperative complications compared to open repair (4.3% vs 7.7%, P = .047) [39]. An analysis of the National Surgical Quality Improvement Program database of 510 patients identified a longer operative time and higher cost with robotic compared to laparoscopic and open inguinal hernia repair, in addition to a higher incidence of postoperative skin and soft tissue infection (2.9% robotic, 0% laparoscopic, and 0.5% open, P = .02) [40]. Overall, robotic hernia repair is increasing in prevalence for ventral, inguinal, and TAR repairs. Outcomes demonstrate that robotic hernia repair has an increased cost, but in more complex hernia repairs that are typically performed open, it has the significant benefit of decreased length of stay. Conflicting data exist regarding surgical site infections, but the ability to primarily close the defect in ventral hernia repair has led to decreased surgical site occurrences, specifically seromas. Robotic hernia repair is safe overall and has demonstrated no significant safety or perioperative outcome differences from laparoscopic and open repairs in the literature thus far.

13.4 Robotics in foregut surgery

13.4.1 Procedure background

Foregut surgery consists of operations on the esophagus, stomach, and proximal small bowel. The most common foregut operations include antireflux surgery, such as the Nissen fundoplication. A Nissen fundoplication is a 360-degree posterior wrap of the fundus around the gastroesophageal junction, as depicted in Fig. 13.7. The purpose of this operation is to recreate the weakened sphincter mechanism and thereby treat gastroesophageal reflux disease.

13.4.2 Robotic Nissen fundoplication

Nissen fundoplications were performed open for many years. When laparoscopic Nissen fundoplication began to be performed in the 1990s, patients demonstrated improved outcomes in terms of pain and postoperative complications. The use of robotics in Nissen fundoplication has been steadily increasing. The benefits of three-dimensional visualization and wristed instrumentation in a narrow anatomic space, such as the mediastinum, have provided anecdotal benefits for surgeons in this specialty.

FIGURE 13.7 Anatomic depiction of a Nissen fundoplication [41]. Division of Thoracic Surgery, Stanford University School of Medicine. © 2018 medicalartstudio.com.


A national data analysis of 12,079 patients demonstrated that laparoscopic Nissen fundoplication has a lower 30-day readmission rate and cost. The remaining patient outcomes did not differ between the laparoscopic and robotic approaches. When compared to open fundoplication, the robotic approach had improved postoperative morbidity, decreased length of stay, fewer ICU admissions, and decreased cost [42]. A meta-analysis of prospective randomized controlled trials, which identified five studies with a total of 160 patients, found no difference in postoperative outcomes between the robotic and laparoscopic approaches, including postoperative dysphagia, intraoperative conversion, reoperation, hospital stay, and in-hospital costs [43].
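Meta-analyses such as those cited in this chapter pool per-study effect estimates (for example, log odds ratios of complications) weighted by their precision. A minimal fixed-effect inverse-variance sketch; the function and the example numbers are illustrative, not data from the cited trials:

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling of study effect sizes.

    effects: per-study effect sizes (e.g. log odds ratios).
    variances: their sampling variances.
    Returns (pooled effect, pooled standard error); studies with
    smaller variance receive proportionally more weight.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Two equally precise studies: the pooled effect is simply their mean,
# with a smaller standard error than either study alone.
print(pooled_effect([0.2, 0.4], [0.1, 0.1]))
```

Random-effects models extend this by adding a between-study variance term to each weight, which is the more common choice when study populations differ.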

13.5 Robotics in colorectal surgery

13.5.1 Procedure background

Minimally invasive approaches to colorectal surgery have demonstrated improved postoperative outcomes, such as lower rates of surgical site infection and shorter length of hospital stay. As a result, laparoscopic colorectal surgery has increased significantly over the past several years in comparison to open colorectal surgery. Colorectal surgery spans a large breadth of procedures, for both benign and malignant disease. Portions of the colon, or the entire colon, can be removed depending on the indication, as depicted in Fig. 13.8.

13.5.2 Robotic colon surgery

The primary advantages of robotic colon surgery compared to laparoscopy arise when there are anatomic challenges, such as obesity or a narrow pelvis. In addition, the learning curve has been found to be significantly shorter with robotic colon surgery than with laparoscopy. Previous studies have found that a learning curve of around 110 cases is needed for laparoscopic colon surgery to reduce operative time and postoperative complications. In robotics, this learning curve has been found to be shorter, at around 15-25 cases [45]. This has been found to be consistent regardless of increasing case complexity. A systematic review of 69 publications found that robotic colon surgery had a longer operative time, less blood loss, shorter length of hospital stay, lower complication and conversion rates, and comparable oncologic outcomes when compared to laparoscopic and open colon surgery [46]. This study highlighted that robotic colon surgery is safe, with outcomes similar to laparoscopic surgery. However, the increased operative time is significant and needs to be considered. Contributors to the increased operative time may include the learning curve, as well as room setup and ancillary staff assistance.

FIGURE 13.8 Anatomic depiction of a right hemicolectomy [44].

13.5.3 Robotic rectal surgery

For rectal cancer, an adequate oncologic operation requires a total mesorectal excision (TME). A multicenter study comparing transanal and robotic TME demonstrated that, among 730 patients, there was no difference in specimen quality between the transanal and robotic approaches [47]. The incidence of a poor-quality specimen was similar in both groups (6.9% transanal vs 6.8% robotic, P = .954). Morbidity between the two approaches was also not different, with similar rates of anastomotic leak and reoperation. High-quality TME can therefore be achieved equally with either a transanal or robotic approach. The Robotic versus Laparoscopic Resection for Rectal Cancer trial was created to evaluate the safety, efficacy, and outcomes of robotic versus laparoscopic rectal cancer resection [48]. This was a multicenter, international randomized clinical trial. A total of 571 patients were randomized to either robotic or laparoscopic resection. The overall rate of conversion to open laparotomy was similar for the laparoscopic and robotic groups. Margin positivity was also similar between the two groups, as were intra- and postoperative complications. The robotic approach therefore had comparable outcomes and did not decrease the incidence of conversion to open laparotomy compared to the laparoscopic approach.

13.6 Robotics in solid organ surgery

13.6.1 Procedure background

The primary solid organ surgery performed minimally invasively is liver resection, which can be performed for both benign and malignant conditions. It was traditionally performed open; more recently, laparoscopic and robotic approaches have emerged, as depicted in Fig. 13.9.

FIGURE 13.9 Anatomic depiction of a minimally invasive liver resection. (A) Perivascular dissection; (B) parenchymal dissection [49].

13.6.2 Robotic hepatectomy

The primary literature on robotic hepatectomy compares the robotic and open approaches. A systematic review and meta-analysis comparing robotic versus open hepatectomy included seven retrospective cohort studies. Robotic hepatectomy had a longer operative time, but a shorter hospital stay, lower cost, and lower overall minor and major complication rates [50]. When examining oncologic outcomes, an international, multicenter retrospective study found that 61 patients who underwent robotic liver resection had comparable oncologic outcomes with regard to 5-year overall and disease-free survival, relative to laparoscopic and open resection [51].

13.7 Conclusion

The clinical applications of robotics in general surgery are broad, spanning bariatric, hernia, foregut, colorectal, and solid organ procedures. The clinical studies and published literature to date indicate that the minimally invasive approach provided by robotics is not inferior to laparoscopic surgery. However, considerations such as the learning curve, operative time, and cost are significant.



Clinical Applications of Robotics in General Surgery Chapter | 13




14 Enhanced Vision to Improve Safety in Robotic Surgery

Veronica Penza (1), Sara Moccia (1,2), Elena De Momi (3) and Leonardo S. Mattos (1)

(1) Biomedical Robotics Lab, Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy; (2) Department of Information Engineering, Università Politecnica delle Marche, Ancona, Italy; (3) Nearlab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy

ABSTRACT
In the last few decades, major complications in surgery have emerged as a significant public health issue, and so the practical implementation of safety measures to prevent injuries and deaths in different phases of surgery is required. The introduction of novel technologies in the operating theater, such as surgical robotic systems, opens new questions on how much and in which way safety can be further improved. Computer-assisted surgery, in combination with robotic systems, can greatly help in enhancing the surgeons' capabilities, providing direct patient- and process-specific support to surgeons with different degrees of experience. In particular, the application of augmented reality (AR) tools could represent a significant step toward safer clinical procedures, improving the quality of health care. This chapter describes the main areas involved in an AR system, such as computer vision methods for identification of areas of interest, surgical scene description, and safety warning methods. Recent advances in the field are also presented, providing as an example the Enhanced Vision System for Robotic Surgery: an AR system providing assistance in the protection of vessels from injury during the execution of surgical procedures with a commercial robotic surgical system.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00014-1 © 2020 Elsevier Inc. All rights reserved.



14.1 Introduction

Minimally invasive surgery (MIS) has revolutionized the traditional open surgical technique by inserting surgical instruments and an endoscope through a few small incisions, reducing the invasiveness of access to the surgical site, the patient's trauma, and the risk of infection, and thus improving the surgical outcome. Despite these benefits, however, the uptake of MIS has been slow due to several limitations, including limited surgeon maneuverability, reduced haptic and depth perception, limited freedom of movement imposed by the single endoscopic port access, and a limited field of view of the surgical scene. Even though the advent of robotics in MIS (RMIS) has made it possible to overcome many of these drawbacks, the core of the surgery still relies on the surgeon's degree of expertise and experience, and on their ability, for example, to mentally fuse preoperative information intraoperatively, making the outcome of the surgery vary according to the surgeon's skills. Computer-assisted surgery (CAS) can greatly help in enhancing the surgeon's capabilities, providing direct, patient- and process-specific support to surgeons with different degrees of experience, and its combination with surgical robotic systems would be valuable for improving surgical safety. Computer-assisted technologies have been developed to support different stages of the treatment procedure. During the preoperative phase, virtual reality can help the surgeon to diagnose the disease and plan the treatment. Different techniques have been established to build 3D virtual models of the patient's anatomy from, for example, Digital Imaging and Communications in Medicine (DICOM) data. The virtual exploration of such models can assist the surgeon during the preliminary phase of a surgical procedure through interactive and visual planning of the operative strategy [1].
During the intraoperative phase, augmented reality (AR) systems can enhance surgical awareness and provide a more comfortable and efficient environment for the surgeon. This involves fusing the preoperative information and the surgical plan with the intraoperative visualization of the surgical field [2,3]. The process of precisely adapting the virtual 3D model to match the patient's real anatomy is called registration, and it is the key to creating an AR environment and intraoperative navigation tools. By highlighting target structures and anatomical relationships through a modular virtual transparency, AR allows looking inside closed cavities or through solid structures [4,5]. Moreover, since in MIS or RMIS surgeons cannot rely on the sense of touch to identify major structures (such as tumors and vessels) or to orient the surgical resection, AR can help by providing intraoperative visualization of such structures, contributing also to reducing the time required in the operating room (OR) [6]. Despite continuous progress over recent years, the registration of multimodal preoperative patient data in soft-tissue surgery is still highly challenging, as the anatomy undergoes significant changes between the data acquisition (preoperative) phase and the surgical procedure (intraoperative). This is due to several factors [7]: (1) a different patient pose with respect to the pose in which the preoperative image was taken; (2) CO2 abdominal insufflation to increase the working volume (pneumoperitoneum); (3) instrument-tissue interactions; (4) heartbeat; and (5) breathing. Considering all these aspects, the application of AR in MIS still presents a number of open issues, which are the focus of current research. This chapter is intended to give an overview of the main areas involved in an AR system for surgical applications, as shown in Fig. 14.1.
In Section 14.2, general aspects of safety in surgery are tackled, focusing on what a CAS system can provide in terms of safety to RMIS. Section 14.3 describes the different modalities for the selection and identification of a region of interest. The core techniques involved in surgical scene understanding are detailed in Section 14.4. Section 14.5 describes the different existing AR modalities for displaying safety information to the surgeon. A practical example of an AR system aimed at improving safety in abdominal RMIS is presented in Section 14.6, and the conclusions are presented in Section 14.7.

FIGURE 14.1 Overview of the main areas involved in an augmented reality (AR) system to improve safety in surgery.
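To make the rigid-registration concept discussed above concrete, the following is a minimal NumPy sketch (not the authors' implementation; the function name `fit_rigid` is hypothetical) of the SVD-based least-squares fit of a rotation and translation between two corresponding point sets. This is the core step of one iterative-closest-point iteration once correspondences are fixed.

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst: (N, 3) arrays of corresponding 3D points.
    Uses the SVD-based (Kabsch) solution of the orthogonal Procrustes
    problem; inside ICP this step alternates with nearest-neighbor
    correspondence search.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)   # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against an improper (reflected) solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```

With noise-free correspondences and non-degenerate points this recovers the ground-truth transform exactly (up to numerical precision); with real intraoperative data the residual after the fit quantifies the registration error.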


14.2 Safety in robotic minimally invasive surgery

According to the World Health Organization (WHO), the number of major surgical operations performed in 56 countries in 2004 was estimated at between 187 and 281 million, that is, approximately one operation per year for every 25 people alive [8]. At such volumes, complications in the surgical process are documented as a major cause of injury and death worldwide. In industrialized countries, major complications have been documented in 3%-22% of surgical procedures, with a death rate of between 0.4% and 0.8% [9]. Although there have been improvements in surgical safety knowledge, nearly half the adverse events in these studies were determined to be preventable. Consequently, surgical safety has emerged as a significant global public health concern. With the campaign "Safe Surgery Saves Lives" between 2007 and 2009, the WHO promoted the use of a surgical safety checklist: a 19-item checklist to be completed at the crucial steps of a surgical procedure, namely before induction of anesthesia, before skin incision, and before the patient leaves the OR. Its pilot study, conducted in eight hospitals around the world, reported decreases in mortality and complications of 48% and 37%, respectively [10]. Even if the reason for the improvement in safety is not clearly identifiable, different studies demonstrated improvements in team communication, teamwork, compliance with prophylactic antibiotic administration, and monitoring prior to induction of anesthesia [11]. Despite these efforts, major complications during surgery remain a problem. Focusing on laparoscopic procedures, Lam et al. [12] propose a classification of major complications according to six different phases of the procedure: (1) patient identification, (2) anesthesia and positioning, (3) abdominal entry and port placement, (4) surgery, (5) postoperative recovery, and (6) counseling.

It is reported that more than 50% of complications occur during abdominal entry and port placement (phase 3), where vascular, intestinal, and urinary tract injuries and gas embolism are very common. Vascular injuries can also occur during surgery (phase 4) and are the most serious intraoperative laparoscopic complication, with a 9%-17% mortality rate [13,14]. Risk factors are attributed to the surgeon's skills, instrument sharpness, angle of insertion, patient position, degree of abdominal wall elevation, and volume of pneumoperitoneum [12]. In the last few decades, new technologies in the robotics field have been introduced with the purpose of improving the outcomes of surgical procedures. Specifically, RMIS has been demonstrated to improve precision and ergonomics, and to reduce fatigue and stress with respect to laparoscopic procedures. An example of a surgical robotic platform currently on the market is the da Vinci surgical system (Intuitive Surgical, Inc., Sunnyvale, CA), which is having a strong impact on health care: by 2016 there were more than 3700 systems worldwide and more than 3,000,000 operations had been performed [15,16]. Even if the surgeon can rely on such advanced technologies, the core of the surgical procedure is still under his/her total control. Robotic assistance can help surgeons improve their performance while performing important and delicate actions, such as suturing. However, the intraoperative decision process remains strongly dependent on their ability to dynamically fuse the preoperative patient-specific information with the information coming from the surgical scenario [17]. These tasks are made more difficult by the fact that the only feedback the surgeon receives from the surgical site is the video coming from the endoscope, whose field of view is limited by the characteristics of the camera and by the constraints imposed by the access point, which does not allow visualization from different perspectives.

The most promising solution for safer surgical procedures with a minimal rate of major complications consists in exploiting assistive technologies. The role of such technologies lies in enhancing the surgeon's capabilities during surgery by providing information from the preoperative phase, monitoring the surgical site, and warning the surgeon of situations that can lead to adverse events.

14.3 Identification and definition of the structure of interest

Depending on the surgical intervention, an AR application can be used to augment different targets, such as tumors, vessels, nerves, or other anatomical structures. These structures can be semiautomatically extracted from anatomical preoperative images during the preoperative phase or can be manually identified intraoperatively on the endoscopic image. Hereafter, the methods for these two approaches are described.

14.3.1 Semiautomatic preoperative identification

The structure of interest (SOI) can be identified and extracted during the preoperative phase in the form of a 3D virtual model built from preoperative imaging. The most used imaging systems are magnetic resonance imaging (MRI) and



computerized tomography (CT), due to the high resolution they can provide. The two methods used to generate renderings of 3D organ models are:

- Volume rendering. Volume rendering allows 3D visualization of all structures represented in the medical images simultaneously, as shown in Fig. 14.2. Ray casting is one of the most popular techniques, as it allows a very precise representation of the data: virtual rays projected from the camera traverse the 3D data, performing a weighted summation of the color and opacity information associated with every voxel hit along the ray [18]. However, the advantage of not requiring manual interaction to delineate the structures to render turns into a limitation for AR applications, since the 3D models of different organs cannot be independently visualized and manipulated.
- Surface rendering. Surface rendering allows the visualization of 3D organ models separately (see Fig. 14.2). As opposed to volume rendering, the first step of surface rendering involves manual user interaction to segment the SOI. The segmentation can be obtained using classic techniques such as thresholding, region growing, active contours, clustering, and classifiers, while more advanced methods include Markov random fields, artificial neural networks, and statistical shape models [18-20]. The segmentation is then exploited to convert the volumetric CT information into a polygonal mesh, most commonly with the marching cubes algorithm [21].

FIGURE 14.2 On the left (A and B), an example of volume rendering from a CT scan of the abdomen. On the right, an example of surface rendering (C), with details of the 3D models of the liver and kidney (D).

14.3.1.1 Registration

With registration, the preoperative organ anatomy is aligned with the intraoperative view [18]. The registration process is described by a mathematical operator called a transformation, which can be either rigid or deformable:

1. Rigid registration: The transformation is described by six degrees of freedom, three for rotation and three for translation, assuming the rigidity of the scene. The most used rigid registration algorithm is the iterative closest point [22,23].
2. Deformable registration: The transformation is described using from six up to an infinite number of degrees of freedom, since the deformation model can be defined to be as complex as desired. A complete overview of deformable registration methods used in the medical field is reported in Ref. [24]. One of the most popular deformable transformations is based on vector fields [24-26].

Both rigid and deformable registrations can be performed using a:

- Manual approach: The registration is performed by an expert and relies on manual input. This strategy can be combined with others, for example, to initialize surface- or volume-based registration algorithms. The manual approach is strongly affected by the quality of its user interface and by the operator's degree of expertise.
- Point-based approach: The registration is performed between two sets of corresponding points, one from the preoperative view and one from the intraoperative scene. Point-based approaches require the identification of anatomical landmarks or artificial markers (usually attached to the organs and acquired with the help of a tracking system) visible both pre- and intraoperatively. The limitations are related to the need to visualize the same markers in both views and to tracking-system inaccuracy. If the markers are manually identified, the degree of expertise of the user affects the quality of the procedure.
- Surface-based approach: The reconstructed surface of the intraoperative scene (see Section 14.4.2) is registered to the surface of the preoperative 3D model. The registration strongly depends on the accuracy of the reconstructed surface, and it is reliable only for visible surfaces, as no assumption is made on the underlying structure. The integration of biomechanical models in the registration process can alleviate this issue, but the parameters of such models have to be determined.
- Volume-based approach: The volume-based approach is often adopted when an intraoperative imaging system is available to retrieve the intraoperative 3D anatomy [such as cone-beam CT or 3D ultrasound (US)]. Unfortunately, such equipment is not always available in the OR. When it is, preoperative models can be deformed according to biomechanical models to mimic the intraoperative surface.

FIGURE 14.3 Examples of three different modalities for the intraoperative selection of an area of interest. From left to right: (A) touch screen monitor, (B) graphical tablet, and (C) the control handle of a surgical robot.

14.3.2 Manual intraoperative selection

In case it is not possible, or not convenient, to identify the SOI preoperatively, it is still possible to intraoperatively select a 2D area surrounding the SOI using the endoscopic images. The selection can be performed, for example, by means of a touch screen monitor, a graphical tablet, or the robot's control handle in the case of surgery assisted by a robotic system [27,28], as shown in Fig. 14.3. In this case there is no need for registration with the intraoperative view, since the selection of the SOI is done intraoperatively.

14.4 Surgical scene description

This section introduces modern computer vision methods for surgical scene description. These methods perform such a task by classifying the tissues in the field of view, retrieving their 3D surface information, and tracking their deformations.

14.4.1 Semantic segmentation

Intraoperative tissue classification plays a fundamental role in different clinical fields, such as laryngology [29,30], dermatology [31], oncology [32], ophthalmology [33], and neurosurgery [34]. Tissue classification is performed for diagnosis, for treatment planning and execution, and for treatment outcome evaluation and follow-up. The importance of intraoperative tissue classification is supported by the gradual but constant introduction into clinical practice of medical technologies aimed at enhancing the visualization of tissues, such as multispectral [35], narrow-band [36], and spectroscopic imaging [37]. Automatic image-based classification of tissues is a valuable solution to provide surgeons with decision support or context awareness intraoperatively. Although automatic and semiautomatic tissue classification approaches have been proposed by the surgical data science (SDS) community, several technical challenges remain, hampering the translation of the proposed methodologies to clinical practice. Indeed, performing robust and reliable classification is not trivial, due to the high inter- and intrapatient variability, noise in the images, varying illumination levels, and changing camera pose with respect to the tissues. Nonetheless, machine learning (ML) strategies have recently been shown to provide robust, reliable, and accurate tissue classification for decision support and context awareness during interventional-medicine procedures. ML models for tissue classification typically (1) apply automated image analysis to extract a vector of quantitative features characterizing the relevant image content, and (2) apply a pattern classifier to determine the category to which the extracted feature vector belongs (e.g., malignant vs. healthy tissue) [38,39]. Intensity-based features aim at encoding information related to the prevalent intensity components in the image. Intensity features are mainly based on the intensity histogram, mean, variance, and entropy [40], and are commonly combined with textural features, which encode tissue appearance, structure, and arrangement [41].
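As a minimal illustration of the textural features used for tissue classification, the following NumPy sketch computes 8-neighbor local binary pattern codes and a normalized code histogram that can serve as a texture feature vector. This is illustrative code under simplifying assumptions (grayscale input, a single fixed radius, borders skipped), not an implementation from the chapter.

```python
import numpy as np

def lbp_image(img):
    """8-neighbor local binary pattern codes for a 2D grayscale image.

    Each interior pixel receives an 8-bit code: bit k is set when the
    k-th neighbor (clockwise from the top-left) is >= the center pixel.
    Border pixels are skipped, so the result is (H-2, W-2).
    """
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy: img.shape[0] - 1 + dy,
                    1 + dx: img.shape[1] - 1 + dx]
        codes |= ((neigh >= center).astype(np.uint8) << bit)
    return codes

def lbp_histogram(img):
    """256-bin normalized LBP histogram: a simple texture feature vector."""
    hist = np.bincount(lbp_image(img).ravel(), minlength=256)
    return hist / hist.sum()
```

In a feature-plus-classifier pipeline, such histograms (often combined with intensity statistics) would be fed to a pattern classifier trained on labeled tissue patches.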



FIGURE 14.4 Example of semantic segmentation of an endoscopic image of a phantom of the abdomen. On the left, the RGB image; on the right, the same image augmented with the tissue classification information obtained from a semantic segmentation algorithm [74].

Textural features include the local binary pattern [42], the gray-level co-occurrence matrix [43], and the histogram of oriented gradients [44]. This class of features has been successfully used for several applications, such as tissue classification in gastric [45] and colorectal images [46,47]. Other popular features are obtained with filtering-based approaches, which build template filters that correlate with tissue structure in the image. Common approaches are matched filtering [48] and wavelets [49], which have been widely used for vessel detection and localization [50] and polyp classification [51]. Similarly, derivative-based approaches build derivative filters to extract image spatial derivatives, such as the gradient and Laplacian, for example, to highlight tissue edges [52,53]. Recently, learned features have been proposed and successfully used for tissue classification [54,55]. Learned features refer to features that are automatically extracted from the image [56]. The most popular approach to automatic feature learning is the convolutional neural network, which has shown remarkable performance in classifying skin cancer [54] and in predicting cardiovascular risk factors from retinal fundus photographs [55]. As for pattern classifiers, several solutions have been introduced in the last few decades, and an extensive review can be found in Ref. [57]. The first attempts were based on statistical approaches (i.e., naive Bayes [58], Bayesian networks [59], and Bayes classifiers [60]) and instance-based learning [61]; applications to tissue classification include the work in Refs. [62,63]. Similarly, perceptron-based algorithms [64] have been widely used, for example, for polyp detection in endoscopic images [51,65,66]. Logic-based algorithms [67,68] and support vector machines [69] are probably among the most widely used classifiers; these algorithms have shown promising performance for tissue classification in several fields (e.g., [29,70,71]).
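The derivative-based filtering mentioned above can be sketched with a plain "valid" 2D convolution and the 4-neighbor Laplacian kernel, which responds at intensity discontinuities (edges) and is near zero on flat regions. This is a minimal illustration in NumPy, not a production filter; the function name is hypothetical.

```python
import numpy as np

def convolve2d_valid(img, kernel):
    """Plain 'valid' 2D convolution (no padding), suited to small kernels."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * flipped)
    return out

# 4-neighbor Laplacian: zero on flat regions, nonzero at intensity edges
LAPLACIAN = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])
```

Applied to a tissue image, the magnitude of the Laplacian response highlights edges that can then feed an edge-based feature or a downstream classifier.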
More recently, deep learning for tissue classification has drawn the attention of the SDS community. Examples include skin-cancer classification [54], polyp detection [72], retinal image analysis [55], and vessel segmentation [73], for which large, labeled datasets are publicly available for deep-learning model training. An example of semantic segmentation for an AR application can be found in Ref. [74], where awareness of the scene is promoted by a confident multiorgan semantic segmentation, exploited to recognize different tissue textures in the endoscopic images (see Fig. 14.4).

14.4.2 Surgical scene reconstruction

Recovering the surface of the surgical scene during laparoscopic interventions is an essential step for surgical guidance and AR applications. It can be used for different purposes: to register a preoperative model on intraoperative images, to build a 3D map of the surgical site, or to compute the geometrical relationship between the tissue and the surgical instruments [75,76]. Different imaging modalities can be used to acquire detailed information about tissue morphology, such as US, CT, and interventional MRI. However, issues related to real-time performance, health risks, and high costs disfavor the use of such technologies in the intraoperative surgical scenario. This situation favors the use of intraoperative endoscopic images to perform on-line estimation of tissue surfaces through depth-estimation methods [77]. These methods can be divided into two categories [78]: passive methods, which rely on the image information only, and active methods, which make use of a light projection into the environment. Passive methods include stereoscopy [76,79,80], monocular shape-from-X, and simultaneous localization and mapping [81,82]. Active methods are mostly based on structured light and time-of-flight. A review of state-of-the-art methods for 3D reconstruction in laparoscopic surgery is reported in Ref. [83], and a comparative validation of the most relevant methods is presented in Ref. [84]. Since passive methods do not require any change to the existing surgical room setup, stereoscopy is one of the most common methods for reconstructing the surgical scene. This approach is inspired by the stereo imaging capability of the human eyes: by finding correspondences between points seen by one eye and the same points seen by the other, the brain computes the 3D location of those points. Similarly, computers can process a



pair of stereo images captured by a laparoscope and compute the 3D surface of the surgical field. The left-right correspondences can be computed in a sparse manner, where several salient regions (features) are detected and matched. Alternatively, a dense reconstruction can be obtained by matching every pixel of the stereo images. We report, as an example, the main steps involved in dense stereo imaging (see Fig. 14.5).
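The dense matching just described can be sketched as follows, under simplifying assumptions: rectified grayscale images, an absolute-difference matching cost summed over a fixed square window, winner-takes-all disparity selection, and none of the refinement steps. The function name `sad_disparity` and its defaults are illustrative, not from the chapter.

```python
import numpy as np

def sad_disparity(left, right, max_disp, win=1):
    """Dense disparity map via SAD block matching with winner-takes-all.

    left, right : rectified grayscale images (H, W), so corresponding
                  pixels lie on the same image row.
    max_disp    : largest disparity (in pixels) to test.
    win         : window half-size; the SAD window is (2*win+1)^2 pixels.
    """
    H, W = left.shape
    best_cost = np.full((H, W), np.inf)
    disp = np.zeros((H, W), dtype=np.int32)
    for d in range(max_disp + 1):
        # matching cost: absolute difference with the right image shifted by d
        ad = np.full((H, W), np.inf)
        ad[:, d:] = np.abs(left[:, d:] - right[:, :W - d])
        # cost aggregation: sum of absolute differences over the window
        cost = np.full((H, W), np.inf)
        for i in range(win, H - win):
            for j in range(win + d, W - win):
                cost[i, j] = ad[i - win:i + win + 1, j - win:j + win + 1].sum()
        # disparity computation: winner takes all (lowest aggregated cost)
        better = cost < best_cost
        best_cost[better] = cost[better]
        disp[better] = d
    return disp
```

A real pipeline would add the refinement steps discussed in the text (left-right consistency check, subpixel interpolation, speckle removal) and a far more efficient aggregation scheme.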

FIGURE 14.5 Scheme representing the main steps involved in dense stereo imaging: stereo images as input; undistortion to remove possible distortions due to lens imperfections; rectification to align the pixels on the same epipolar lines; stereo correspondence to compute the matches between the stereo images; and triangulation to compute the depth information. The output is the 3D reconstruction of the surgical surface.

FIGURE 14.6 Pipeline of the main steps involved in the stereo-correspondence process.

14. Vision in Robotic Surgery

1. Undistortion. The images acquired by the cameras are often affected by distortions, typically divided into radial and tangential, caused by the lens properties. The undistortion process represents the image as an ideal planar projection.
2. Rectification. To simplify the stereo-correspondence step, the images are rectified using a left–right calibration process. Rectification ensures that corresponding pixels in the left and right images lie in the same image row.
3. Stereo correspondence. Stereo correspondence is the process of identifying the difference in position of an object between the stereo images, by finding the match between each pixel of the left and right images (im_l and im_r). This difference is called the disparity (d). The process can be divided into four steps, as stated in Ref. [85] and shown in Fig. 14.6:


Handbook of Robotic and Image-Guided Surgery

i. Matching cost computation: methods are exploited to calculate the similarity between two pixels, or between windows around the pixels, for example correlation (normalized cross-correlation), intensity difference [absolute difference (AD), squared difference], or rank measures (rank transform, census transform) [86].
ii. Cost aggregation: the matching cost is aggregated using the information from a window surrounding the target pixel. As an example, the sum of ADs (SAD) for a pixel (i, j) at disparity d is calculated as

\[
\mathrm{SAD}(i,j,d) = \sum_{h=-m/2}^{m/2} \; \sum_{k=-n/2}^{n/2} \left| I_l(i+h,\, j+k) - I_r(i+h-d,\, j+k) \right| \tag{14.1}
\]

where I_l and I_r are the pixel intensities of im_l and im_r, respectively, m × n is the size of the aggregation window, and d is the disparity.
iii. Disparity computation: the disparity value at each pixel is chosen as the one associated with the minimum of the aggregation cost. An example of local computation is the winner-takes-all strategy [85], which selects the disparity with the minimum aggregation cost as the best match. Other techniques use global or semiglobal information in order to optimize the result of the disparity computation [85].
iv. Disparity refinement: different strategies can be used to reduce the number of incorrect values (due to incorrect matching). As an example, a left–right consistency check can be performed in order to invalidate half-occluded pixels, that is, points viewed in one image but not in the other. Subpixel refinement is applied to avoid separated layers in the reconstructed surface resulting from pixel-level precision [87]. A speckle-removal filter can also be applied in order to remove small artifacts, that is, regions of large and small disparities that can be generated near the boundaries of shapes.
4. Triangulation. The Z coordinate of the points, also called depth, can be computed as

\[
\mathrm{Depth}(i,j) = \frac{f \times s}{\mathrm{disparity}(i,j)}
\]

where i and j are, respectively, the row and column of the image, s is the baseline between the stereo cameras, and f is the camera focal length (see the graphical representation in Fig. 14.5).

The 3D surface reconstruction of surgical endoscopic images is still an active area of research due to several challenging aspects, including the dynamic and deformable environment, many textureless or homogeneously textured areas, specular highlights, smoke and blood produced during the intervention, and occlusions introduced by the surgical tools [88]. Moreover, the application of 3D surface reconstruction in surgery has to fulfill high requirements in terms of accuracy and robustness in order to ensure patient safety.
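The SAD aggregation of Eq. (14.1), a winner-takes-all disparity search, and the triangulation step can be sketched in a few lines of NumPy. This is a minimal brute-force illustration assuming rectified grayscale inputs; the window size, disparity range, focal length, and baseline are arbitrary example values, not parameters of any cited system:

```python
import numpy as np

def sad_disparity(im_l, im_r, max_disp=8, win=3):
    """Winner-takes-all SAD block matching on rectified grayscale images.

    For each pixel, the disparity minimizing the sum of absolute differences
    (Eq. 14.1) over a win x win window is selected. Pixels without a full
    window or full search range keep disparity 0."""
    h, w = im_l.shape
    half = win // 2
    L = im_l.astype(np.float64)
    R = im_r.astype(np.float64)
    disp = np.zeros((h, w), dtype=np.int64)
    for i in range(half, h - half):
        for j in range(half + max_disp, w - half):
            win_l = L[i - half:i + half + 1, j - half:j + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(max_disp + 1):
                # Candidate right window shifted left by the disparity d
                win_r = R[i - half:i + half + 1, j - d - half:j - d + half + 1]
                cost = np.abs(win_l - win_r).sum()  # SAD, Eq. (14.1)
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[i, j] = best_d
    return disp

def depth_from_disparity(disp, f, s):
    """Triangulation: depth = f * s / disparity (focal length f, baseline s).

    Pixels with zero disparity are marked as invalid (infinite depth)."""
    return np.where(disp > 0, f * s / np.maximum(disp, 1), np.inf)
```

Real systems replace the exhaustive loop with precomputed cost volumes and add the refinement strategies described above (consistency checks, subpixel interpolation, speckle removal).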

14.4.3 Tissue tracking

In order to measure intraoperative tissue movements, computer vision and image-processing algorithms have been exploited to track soft-tissue areas relying only on the image characteristics [88]. Tissue tracking can be approached from either a tracking or a detection perspective. Tracking-based algorithms estimate tissue motion using feature matches between successive frames. The tracked features are identified on the tissue surface during an initialization phase. Since this method relies on the uniqueness of the features, the definition of the features strongly affects its reliability. For this reason, different image features have been explored, such as gradient, color, and textural features [89], each relying on different characteristics of the image. The main advantages of this approach are that it requires only an initialization, it is computationally cheap, and it produces smooth trajectories. On the other hand, it is prone to drift due to error accumulation at runtime caused by image noise and computational approximations, and it typically fails if the object is partially occluded or disappears from the camera view. Detection-based approaches estimate the location of the tissue in every frame independently, typically relying on local image features or sliding windows. The steps of feature-based approaches [90] usually are: (1) feature detection, (2) feature matching, and (3) model fitting. Sliding-window techniques [91] scan the input image with a window and typically use a classifier to decide whether the underlying patch contains the object to track or background. As opposed to tracking-based techniques, detectors do not suffer from drift and are robust against total or partial occlusions, recovering when the tracked area reappears in the camera's field of view. However, the use of a classifier requires an offline training phase, so these methods cannot be applied to unknown objects.

The implementation of soft-tissue tracking algorithms that work robustly over long periods is still a challenging task, owing to several characteristics of endoscopic images. To reach the goal of robust long-term (LT) soft-tissue tracking, the following issues have to be dealt with: (1) tissues are constantly subject to deformations due to breathing, heartbeat, and instrument–tissue interactions; (2) camera movements cause blurring and changes in the scale and orientation of the object to be tracked; (3) camera movements and instrument–tissue interactions can cause partial or total occlusion of the tracked tissue, possibly for long periods of time; (4) the tissue may change its appearance, making the appearance in the initial frame irrelevant; and (5) tissue images often exhibit considerable specular reflections caused by the wet tissue surfaces, while endocavity lighting creates dynamic lighting conditions.

14.5 Safety warning methods

The methodological and technological advancements presented in the previous sections allow moving toward a complete understanding of the surgical scene, coupling the contributions of computer vision and robotic systems to empower the surgeon with novel capabilities that increase context awareness. Providing additional information to the surgeon can help prevent complications and thus improve surgical safety. There are multiple ways to present safety information to the surgeon, including visual feedback (e.g., AR features) and/or haptic feedback (e.g., active constraints), as described in the following sections.

14.5.1 Augmented reality visualization

AR techniques allow safety information to be rendered directly onto the surgeon's view. The modality selected for AR depends on the surgical setup, that is, on the way the surgeon observes the surgical scene: direct visualization, microscope visualization, or endoscopic visualization. Basic AR modalities, as described in Ref. [18], include:

- Projection on the patient. This modality consists of projecting an image directly onto the patient's body. It can give very good results when working with 2D features such as insertion points or other surface landmarks. However, the correct representation of 3D information depends heavily on tracking the surgeon, which is required to correct the perspective of the AR projection.
- Optical see-through. The AR information can be rendered onto semitransparent surfaces, allowing the simultaneous visualization of virtual information and the real scene. The projection surface can be statically mounted in front of the scene or worn by the user. In the first case, the projection surface may interfere with the surgical instruments, while the second can cause discomfort. As an advantage over the previous technique, optical see-through avoids the problem of occluding the projection.
- Video see-through. Alternatively, the AR information can be rendered on a video stream captured from the real surgical scenario by a camera. This technique has been increasingly explored in recent years thanks to technological advances in head-mounted displays and handheld devices.
- Static video display. Finally, the surgical scene with AR information can be shown on a statically mounted display receiving a video stream from the surgical scenario. The video display can be a classical monitor, or the 3D display of a remote console such as that of the da Vinci surgical system.

Apart from the visualization method, the kind of information transmitted to the surgeon is also important. This depends on the kind of surgery, the available information, and the kind of assistance the system is designed to offer. For example, the AR information may represent:

- Preoperative information. The most direct use of AR is to render the preoperative information directly on the real images. This can help the surgeon identify anatomical structures, such as veins or tumors. However, the accuracy of the overlay between the AR content and the real scene can significantly affect the usefulness of this technique.
- Processed information. It is also possible to transmit to the surgeon information obtained from computational analysis of the surgical field. For example, in Ref. [92] the authors implement a supervisor for laser phonomicrosurgery where the AR system displays a 2D map representing the estimated tissue temperature and laser cutting depth. This information is displayed in a corner of the video stream, providing significant information about the laser procedure without affecting the visualization of the surgical scenario.
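As a minimal illustration of how such a video-based augmentation can be composited onto an endoscopic frame, a colored region can be alpha-blended where a safety mask is active. The function name, color, and blending factor below are illustrative assumptions, not part of any cited system:

```python
import numpy as np

def blend_overlay(frame, mask, color=(0, 255, 0), alpha=0.4):
    """Alpha-blend a colored overlay onto an RGB frame where mask is True.

    frame: (H, W, 3) uint8 image; mask: (H, W) boolean array marking the
    region to highlight; alpha: opacity of the overlay color."""
    out = frame.astype(np.float64).copy()
    color = np.asarray(color, dtype=np.float64)
    # Weighted mix of the original pixels and the overlay color
    out[mask] = (1.0 - alpha) * out[mask] + alpha * color
    return out.astype(np.uint8)
```

In practice the mask would be the 2D projection of a 3D safety region into the camera frame, recomputed at every frame from the tracking output.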


14.5.2 Active constraints

The idea of adding a sensory overlay (i.e., active constraints, or virtual fixtures) to the interaction between the human user and the robot was first introduced by Rosenberg [93]. A comprehensive review listing research published before 2014 [94] classified constraint modalities as regional/guidance, attractive/repulsive, unilateral/bilateral, and static/dynamic. More recent publications have focused on constraint design modalities, on constraint enforcement [95] (improving interaction performance and the stability of the control approach), and on applications to laser surgery [96]. Other authors introduced constraints in a comanipulation paradigm for keyhole surgery [97] or for bone drilling [98].
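The flavor of a repulsive virtual fixture can be conveyed with a simple proportional law that pushes the tool tip out of a protected spherical region. This is a minimal sketch under assumed geometry and gain, not the formulation of Refs. [93-98]:

```python
import numpy as np

def repulsive_fixture_force(tip, center, radius, k=50.0):
    """Force pushing the tool tip out of a protected sphere.

    The force is zero outside the sphere and grows linearly with the
    penetration depth along the outward radial direction, i.e., a
    proportional repulsive constraint with stiffness k."""
    offset = np.asarray(tip, dtype=np.float64) - np.asarray(center, dtype=np.float64)
    dist = np.linalg.norm(offset)
    penetration = radius - dist
    if penetration <= 0.0 or dist == 0.0:
        # Tip outside the sphere (or exactly at the center: direction undefined)
        return np.zeros(3)
    return k * penetration * (offset / dist)
```

In a real teleoperation system this force would be rendered on the master haptic device, and the constraint geometry would be anchored to the tracked tissue surface rather than a fixed sphere.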

14.6 Application in abdominal surgery: the Enhanced Vision System for Robotic Surgery (EnViSoRS)

Following the overview of computer vision methods and safety-warning visualization techniques presented so far, this section provides an example of their application: the Enhanced Vision System for Robotic Surgery (EnViSoRS), a system that aims to minimize the risk of intraoperative bleeding during abdominal RMIS [28]. The system includes AR features that warn the surgeon when the robotic instruments get close to a user-defined safety area (SA), which can be defined intraoperatively by manually marking the tissue to be protected using graphic overlays. The system tracks the SA in real time using a novel algorithm that is robust against events commonly occurring during a surgical procedure, such as (1) camera movements, (2) tissue deformations, and (3) field-of-view occlusions due to smoke or the presence of surgical instruments. The core of EnViSoRS is a set of novel computer vision algorithms that retrieve the surgical scene information and process it to monitor the distance between the surgical instruments and the protected surface in 3D space. An overview of the system is shown in Fig. 14.7. The system consists of the following five steps:
1. Image preprocessing: The first necessary step is the calibration of the stereo endoscopic cameras. The extracted camera parameters are exploited to correct image distortions and perform rectification, which facilitates the search for the stereo correspondences needed by the 3D reconstruction method. Specular highlights (identified as bright regions with low saturation and high intensity) are eliminated to prevent errors in feature tracking and 3D reconstruction.

FIGURE 14.7 Overview of the EnViSoRS system. The system is integrated into the dVRK system (WPI and Johns Hopkins University). From the console the surgeon can (1) select the safety area using a stylus and a graphics tablet, (2) see the safety volume overlaid on the images, and (3) see a graphical gauge that provides information regarding the tissue–instrument distance [28]. dVRK, da Vinci Research Kit; EnViSoRS, Enhanced Vision System for Robotic Surgery.


2. Safety area definition: At this stage of development, EnViSoRS does not need any information from preoperative planning. At any moment during the surgery, the surgeon can define the SA as a 2D polygon on the endoscopic image using a graphics tablet and a stylus, as shown in Figs. 14.7 and 14.8. In our configuration, the tablet (WACOM Bamboo Pen and Touch) was placed directly on the master console of the da Vinci surgical system, allowing the surgeon to perform this operation while viewing the images from the stereo endoscope through the 3D display.
3. Tissue tracking: Since the tissue in the endoscopic field of view undergoes deformations and changes in position due to the highly dynamic nature of this environment, the LT SA tracking algorithm [99] is used to track the SA during the surgical operation. Tracking-based and detection-based approaches are combined to improve the robustness of the SA tracking against the events mentioned above. Tracking failures are detected using a Bayesian inference-based approach, and redetection after failure is performed with a model update strategy that improves the SA.
4. Tissue 3D reconstruction: In order to measure the spatial relationship between the surgical instruments and the SA, the 3D surface of the tissue contained in the SA must be retrieved. This is done using a dense soft-tissue stereo-reconstruction algorithm [100]. Since the 3D reconstruction algorithm is computationally expensive, only the area contained inside the tracked SA is considered. The algorithm is based on a block-matching approach, exploiting a nonparametric census transform to make the stereo correspondence robust to illumination variations. Two techniques are used to improve the density and smoothness of the reconstructed surface. First, simple linear iterative clustering (SLIC) superpixels are used to refine the disparity and fill missing areas of the image. Subsequently, another strategy further smooths the result: the disparity image is treated as a 2D Laplace equation problem, where the disparity values on the contours of the superpixels are taken as Dirichlet boundary conditions and the remaining pixels as the initial conditions. Smoothing is then obtained by solving the equations with the Gauss–Seidel method using red–black ordering.
5. Safety augmentation: This final step merges the information extracted in the previous steps to compute the safety volume (SV) to be protected and to show safety warnings. The SV is computed as an ellipsoid fitted to the point cloud representing the SA. Once this volume is computed, the system can (1) display its 2D projection on the images by means of AR in the 3D visor, and (2) warn the surgeon when the robotic instruments are approaching the protected tissue surface, with the aim of avoiding injuries to the delicate area contained in it. A spatial neighbor search based on an octree structure is used to measure the distance between the robotic instrument end-effector and the reconstructed surface. A gauge located in the top-right corner of the image warns the surgeon when the instrument–tissue distance falls within a predetermined range, as shown in Fig. 14.8.

A multithreading approach was exploited to run the integrated computer vision algorithms at interactive frame rates, providing an AR feature update rate of up to 4 fps without impacting the real-time visualization of the stereo endoscopic video. EnViSoRS was integrated into the da Vinci Research Kit, a platform provided by Intuitive Surgical Inc. (United States) to advance research in teleoperated robotic surgical systems. Nevertheless, EnViSoRS could be integrated into any surgical robotic system equipped with a stereo endoscope and actuated instruments. Due to its modularity, it can also be used in a simple laparoscopic setup, provided a tracking system for the instruments and the endoscope is available. The performance of EnViSoRS in terms of accuracy, robustness, and usability was evaluated by asking a team of surgeons and engineers to perform a simulated surgical task on a liver phantom [101], with and without the AR features provided by the system. The results show an overall accuracy in accordance with surgical requirements (<5 mm), and high robustness in the computation of the SV in terms of the precision and recall of its identification. Qualitative results regarding system usability indicate that it integrates well with the commercial surgical robot and has the potential to offer useful assistance during real surgeries.

FIGURE 14.8 Example of the AR visualization of EnViSoRS. Top left: safety area selection on the endoscopic video image. The top-right and bottom images show three different instants of the safety system displaying the safety volume as a green ellipsoid, and the gauge indicating the distance between the surgical tool and the 3D surface of the protected area (farther distances indicated by green segments, closer distances by red segments) [28]. AR, augmented reality; EnViSoRS, Enhanced Vision System for Robotic Surgery.
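The distance-monitoring idea behind the safety gauge can be sketched as follows. A brute-force nearest-neighbor search stands in for the octree acceleration used by EnViSoRS, and the gauge thresholds are illustrative assumptions, not values from the cited work:

```python
import numpy as np

def min_distance_to_surface(tool_tip, surface_points):
    """Distance from the instrument tip to the closest reconstructed surface point.

    surface_points: (N, 3) array of 3D points inside the safety area.
    EnViSoRS accelerates this query with an octree; a brute-force
    search is shown here for clarity."""
    d = surface_points - np.asarray(tool_tip, dtype=np.float64)
    return float(np.sqrt((d * d).sum(axis=1)).min())

def gauge_level(distance, warn=10.0, danger=3.0):
    """Map an instrument-tissue distance (e.g., in mm) to a gauge color."""
    if distance <= danger:
        return "red"      # instrument dangerously close to the protected tissue
    if distance <= warn:
        return "yellow"   # instrument approaching the protected tissue
    return "green"        # safe distance
```

At runtime, this query would be issued for each instrument end-effector pose at every update of the reconstructed surface, and the returned level would drive the color of the on-screen gauge.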

14.7 Conclusion

In the last few decades, major complications in surgery have emerged as a significant global public health issue, requiring the practical implementation of safety measures to prevent injuries and deaths in the different phases of surgery. The introduction of novel technologies in the operating theater, such as robotic surgical systems, raises new questions about how much, and in what ways, safety can be further improved. CAS has been shown to enhance the surgeon's capabilities by providing direct, patient- and process-specific intraoperative support to surgeons with different degrees of experience. Thus the development of novel CAS tools exploiting the endoscope or the kinematics of the robotic arms would represent a relevant step toward safer clinical procedures, improving the overall quality of health care.

This chapter has presented the main areas involved in CAS, focusing on the application of AR during the execution of a surgical procedure. Different modalities for the selection of the anatomical target area have been presented, using either 3D models extracted from preoperative planning or the selection of the area of interest directly on the intraoperative endoscopic image. Due to the highly dynamic nature of soft tissues during surgery, the run-time update of surgical scene information is fundamental to ensuring the reliability and robustness of the AR system. To this end, an overview of 3D reconstruction and tissue tracking methods was presented, and different modalities to represent such information and provide surgeons with safety warnings were introduced.

Medical imaging has reached a high level of maturity in a wide variety of specific areas. However, combining such knowledge and adapting and integrating it to create novel applications for clinical practice is still challenging. In fact, many aspects must be considered in the development of a surgical application. The methods should ensure a high level of reliability, robustness, and accuracy, since their performance can directly affect the outcome of the surgery. This requirement is often in tension with the real-time constraints needed to transparently augment the surgical scene. No less important is the way in which safety applications interface with surgeons. As an example of recent advances in AR applications in abdominal surgery, EnViSoRS was presented. This system has demonstrated the feasibility of combining different computer vision algorithms to provide surgeons with AR-based safety features. The methodological progress made in this work stresses the potential of such algorithms to extract and exploit useful information implicitly contained in surgical images, overcoming the challenges related to introducing extra equipment into the OR. Overall, the main motivation of surgical safety systems is to improve the success rate of surgeries, having a positive impact on patients' health by reducing the number of complications and the risk of death. Therefore this is an area of research of critical importance, and one that will see large growth in the upcoming years.

References

[1] Gao Y, Tannenbaum A, Kikinis R. Simultaneous multi-object segmentation using local robust statistics and contour interaction. In: International MICCAI workshop on medical computer vision. 2010. p. 195–203.
[2] Nicolau SA, Goffin L, Soler L. A low cost and accurate guidance system for laparoscopic surgery: validation on an abdominal phantom. In: ACM symposium on virtual reality software and technology. 2005. p. 124–133.


[3] Nicolau S, Soler L, Mutter D, Marescaux J. Augmented reality in laparoscopic surgical oncology. Surg Oncol 2011;20:189–201.
[4] Pessaux P, Diana M, Soler L, Piardi T, Mutter D, Marescaux J. Towards cybernetic surgery: robotic and augmented reality-assisted liver segmentectomy. Langenbecks Arch Surg 2015;400:381–5.
[5] Penza V, Ortiz J, De Momi E, Forgione A, Mattos L. Virtual assistive system for robotic single incision laparoscopic surgery. In: 4th Joint workshop on new technologies for computer/robot assisted surgery. Genova; 2014.
[6] Ukimura O, Gill IS. Image-fusion, augmented reality, and predictive surgical navigation. Urol Clin North Am 2009;36:115–23.
[7] Zitova B, Flusser J. Image registration methods: a survey. Image Vision Comput 2003;21:977–1000.
[8] WHO Patient Safety, World Health Organization. WHO guidelines for safe surgery: 2009: safe surgery saves lives. 2009.
[9] Weiser TG, Regenbogen SE, Thompson KD, Haynes AB, Lipsitz SR, Berry WR, et al. An estimation of the global volume of surgery: a modelling strategy based on available data. Lancet 2008;372:139–44.
[10] Haynes AB, Weiser TG, Berry WR, Lipsitz SR, Breizat AHS, Dellinger EP, et al. A surgical safety checklist to reduce morbidity and mortality in a global population. N Engl J Med 2009;360:491–9.
[11] Tang R, Ranmuthugala G, Cunningham F. Surgical safety checklists: a review. ANZ J Surg 2014;84:148–54.
[12] Lam A, Kaufman Y, Khong SY, Liew A, Ford S, Condous G. Dealing with complications in laparoscopy. Best Pract Res Clin Obstet Gynaecol 2009;23(5):631–46.
[13] Crist DW, Gadacz TR. Complications of laparoscopic surgery. Surg Clin North Am 1993;73(2):265–89.
[14] Opitz I, Gantert W, Giger U, Kocher T, Krähenbühl L. Bleeding remains a major complication during laparoscopic surgery: analysis of the SALTS database. Langenbecks Arch Surg 2005;390(2):128–33.
[15] Cole AP, Trinh QD, Sood A, Menon M. The rise of robotic surgery in the new millennium. J Urol 2017;197(2):S213–5.
[16] Intuitive Surgical. Intuitive Surgical annual report 2016. Intuitive Surgical's website; 2010.
[17] Navab N, Hennersperger C, Frisch B, Fürst B. Personalized, relevance-based multimodal robotic imaging and augmented reality for computer assisted interventions. Med Image Anal 2016;33:64–71.
[18] Bernhardt S, Nicolau SA, Soler L, Doignon C. The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017;37:66–90.
[19] Pham DL, Xu C, Prince JL. Current methods in medical image segmentation. Annu Rev Biomed Eng 2000;2:315–37.
[20] Heimann T, Meinzer HP. Statistical shape models for 3D medical image segmentation: a review. Med Image Anal 2009;13(4):543–63.
[21] Lorensen WE, Cline HE. Marching cubes: a high resolution 3D surface construction algorithm. ACM SIGGRAPH Comput Graph 1987;21:163–9.
[22] Besl PJ, McKay ND. A method for registration of 3-D shapes. IEEE Trans Pattern Anal Mach Intell 1992;14:239–56.
[23] Zhang Z. Iterative point matching for registration of free-form curves and surfaces. Int J Comput Vis 1994;13:119–52.
[24] Sotiras A, Davatzikos C, Paragios N. Deformable medical image registration: a survey. IEEE Trans Med Imaging 2013;32:1153–90.
[25] Sederberg TW, Parry SR. Free-form deformation of solid geometric models. ACM SIGGRAPH Comput Graph 1986;20:151–60.
[26] Rueckert D, Sonoda LI, Hayes C, Hill DLG, Leach MO, Hawkes DJ. Nonrigid registration using free-form deformations: application to breast MR images. IEEE Trans Med Imaging 1999;18:712–21.
[27] Deshpande N, Ortiz J, Caldwell DG, Mattos LS. Enhanced computer-assisted laser microsurgeries with a “virtual microscope” based surgical system. In: IEEE international conference on robotics and automation (ICRA). Hong Kong: IEEE; 2014. p. 4194–9.
[28] Penza V, De Momi E, Enayati N, Chupin T, Ortiz J, Mattos LS. EnViSoRS: Enhanced Vision System for Robotic Surgery. A user-defined safety volume tracking to minimize the risk of intraoperative bleeding. Front Robot AI 2017;4:15.
[29] Moccia S, De Momi E, Guarnaschelli M, Savazzi M, Laborai A, Guastini L, et al. Confident texture-based laryngeal tissue classification for early stage diagnosis support. J Med Imaging 2017;4:034502.
[30] Moccia S, Vanone GO, De Momi E, Laborai A, Guastini L, Peretti G, et al. Learning-based classification of informative laryngoscopic frames. Comput Methods Programs Biomed 2018;158:21–30.
[31] Griffiths C, Barker J, Bleiker T, Chalmers R, Creamer D. Rook's textbook of dermatology. John Wiley & Sons; 2016.
[32] Carmeliet P, Jain RK. Angiogenesis in cancer and other diseases. Nature 2000;407:249–57.
[33] Campochiaro PA. Molecular pathogenesis of retinal and choroidal vascular diseases. Prog Retin Eye Res 2015;49:67–81.
[34] Moccia S, Foti S, Routray A, Prudente F, Perin A, Sekula RF, et al. Toward improving safety in neurosurgery with an active handheld instrument. Ann Biomed Eng 2018;46:1450–64.
[35] Stewart JW, Akselrod GM, Smith DR, Mikkelsen MH. Toward multispectral imaging with colloidal metasurface pixels. Adv Mater 2017;29.
[36] Machida H, Sano Y, Hamamoto Y, Muto M, Kozu T, Tajiri H, et al. Narrow-band imaging in the diagnosis of colorectal mucosal lesions: a pilot study. Endoscopy 2004;36:1094–8.
[37] Emsley JW, Lindon JC. NMR spectroscopy using liquid crystal solvents. Elsevier; 2018.
[38] Wernick MN, Yang Y, Brankov JG, Yourganov G, Strother SC. Machine learning in medical imaging. IEEE Signal Process Mag 2010;27:25–38.
[39] Kourou K, Exarchos TP, Exarchos KP, Karamouzis MV, Fotiadis DI. Machine learning applications in cancer prognosis and prediction. Comput Struct Biotechnol J 2015;13:8–17.
[40] Shannon CE. A mathematical theory of communication. ACM SIGMOBILE Mobile Comput Commun Rev 2001;5:3–55.
[41] Castellano G, Bonilha L, Li LM, Cendes F. Texture analysis of medical images. Clin Radiol 2004;59:1061–9.
[42] Guo Z, Zhang L, Zhang D. A completed modeling of local binary pattern operator for texture classification. IEEE Trans Image Process 2010;19:1657–63.


[43] Haralick RM. Statistical and structural approaches to texture. Proc IEEE 1979;67:786804. [44] Freeman WT, Roth M. Orientation histograms for hand gesture recognition. In: International workshop on automatic face and gesture recognition. 1995. p. 296301. [45] Moccia S, Mattos LS, Patrini I, Ruperti M, Pote´ N, Dondero F, et al. Computer-assisted liver graft steatosis assessment via learning-based texture analysis. Int J Comput Assist Radiol Surg 2018;13:135767. [46] Misawa M, Kudo SE, Mori Y, Takeda K, Maeda Y, Kataoka S, et al. Accuracy of computer-aided diagnosis based on narrow-band imaging endocytoscopy for diagnosing colorectal lesions: comparison with experts. Int J Comput Assist Radiol Surg 2017;12:75766. [47] Fu Y, Zhang W, Mandal M, Meng MQH. Computer-aided bleeding detection in WCE video. IEEE J Biomed Health Inform 2014;18:63642. [48] Horner JL, Gianino PD. Phase-only matched filtering. Appl Opt 1984;23:81216. [49] Walnut DF. An introduction to wavelet analysis. Springer Science & Business Media; 2013. [50] Fraz MM, Remagnino P, Hoppe A, Uyyanonvara B, Rudnicka AR, Owen CG, et al. Blood vessel segmentation methodologies in retinal images—a survey. Comput Methods Programs Biomed 2012;108:40733. [51] Magoulas GD. Neuronal networks and textural descriptors for automated tissue classification in endoscopy. Oncol Rep 2006;15:9971000. [52] Burt P, Adelson E. The Laplacian pyramid as a compact image code. IEEE Trans Commun 1983;31:53240. [53] Kumar S, Saxena R, Singh K. Fractional Fourier transform and fractional-order calculus-based image edge detection. Circuits Syst Signal Process 2017;36:1493513. [54] Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017;542:11518. [55] Poplin R, Varadarajan AV, Blumer K, Liu Y, McConnell MV, Corrado GS, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. 
Nat Biomed Eng 2018;2:15864. [56] Nanni L, Ghidoni S, Brahnam S. Handcrafted vs Non-Handcrafted Features for computer vision classification. Pattern Recognit 2017;71:15872. [57] Kotsiantis SB, Zaharakis I, Pintelas P. Supervised machine learning: a review of classification techniques. Informatica 2007;31:24968. [58] McCallum A, Nigam K, et al. A comparison of event models for naive Bayes text classification. In: ICML/AAAI workshop on learning for text categorization. 1998. p. 418. [59] Friedman N, Geiger D, Goldszmidt M. Bayesian network classifiers. Mach Learn 1997;29:13163. [60] Ramoni M, Sebastiani P. Robust Bayes classifiers. Artif Intell 2001;125:20926. [61] Dasarathy BV. Nearest neighbor (NN) norms: NN pattern classification techniques. IEEE Computer society press, 1991. [62] Vrooman HA, Cocosco CA, Lijn F, Stokking R, Ikram MA, Vernooij MW, et al. Multi-spectral brain tissue segmentation using automatically trained k-nearest-neighbor classification. Neuroimage 2007;37:7181. [63] Berens J, Mackiewicz M, Bell D. Stomach, intestine, and colon tissue discriminators for wireless capsule endoscopy images. SPIE Med Imaging 2005;5747:28391. [64] Rosenblatt F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychol Rev 1958;65:386. [65] Karargyris A, Bourbakis N. Wireless capsule endoscopy and endoscopic imaging: a survey on various methodologies presented. IEEE Eng Med Biol Mag 2010;29:7283. [66] Shen L, Rangayyan RM, Desautels JEL. Detection and classification of mammographic calcifications. Int J Pattern Recognit Artif Intell 1993;7:140316. [67] Safavian SR, Landgrebe D. A survey of decision tree classifier methodology. IEEE Trans Syst Man Cybern 1991;21:66074. [68] Weiss SM, Indurkhya N. Rule-based machine learning methods for functional prediction. J Artif Intell Res 1995;3:383403. [69] Cortes C, Vapnik V. Support vector machine. Mach Learn 1995;20:27397. 
[70] Moccia S, Wirkert SJ, Kenngott H, Vemuri AS, Apitz M, Mayer B, et al. Uncertainty-aware organ classification for surgical data science applications in laparoscopy. IEEE Trans Biomed Eng 2018;65(11):2649–59. [71] Mukherjee R, Manohar DD, Das DK, Achar A, Mitra A, Chakraborty C. Automated tissue classification framework for reproducible chronic wound assessment. Biomed Res Int 2014;2014:851582. [72] Bernal J, Tajkbaksh N, Sánchez FJ, Matuszewski BJ, Chen H, Yu L, et al. Comparative validation of polyp detection methods in video colonoscopy: results from the MICCAI 2015 endoscopic vision challenge. IEEE Trans Med Imaging 2017;36:1231–49. [73] Moccia S, De Momi E, El Hadji S, Mattos LS. Blood vessel segmentation algorithms—review of methods, datasets and evaluation metrics. Comput Methods Programs Biomed 2018;158:71–91. [74] Penza V, Moccia S, Gallarello A, Panaccio A, De Momi E, Mattos LS. Context-aware augmented reality for laparoscopy. In: Sixth National Congress of Bioengineering. Milan; 2018. [75] Bergen T, Wittenberg T. Stitching and surface reconstruction from endoscopic image sequences: a review of applications and methods. IEEE J Biomed Health Inform 2016;20:304–21. [76] Röhl S, Bodenstedt S, Suwelack S, Kenngott H, Müller-Stich BP, Dillmann R, et al. Dense GPU-enhanced surface reconstruction from stereo endoscopic images for intraoperative registration. Med Phys 2012;39(3):1632–45. [77] Mountney P, Stoyanov D, Yang GZ. Three-dimensional tissue deformation recovery and tracking. IEEE Signal Process Mag 2010;27:14–24. [78] Mirota DJ, Ishii M, Hager GD. Vision-based navigation in image-guided interventions. Annu Rev Biomed Eng 2011;13:297–319. [79] Bernhardt S, Abi-Nahed J, Abugharbieh R. Robust dense endoscopic stereo reconstruction for minimally invasive surgery. In: International MICCAI workshop on medical computer vision. Heidelberg; 2012. p. 254–62.

Enhanced Vision to Improve Safety in Robotic Surgery Chapter | 14

[80] Stoyanov D, Scarzanella MV, Pratt P, Yang GZ. Real-time stereo reconstruction in robotically assisted minimally invasive surgery. In: International conference on medical image computing and computer-assisted intervention. Heidelberg; 2010. p. 275–82. [81] Mountney P, Stoyanov D, Davison A, Yang GZ. Simultaneous stereoscope localization and soft-tissue mapping for minimal invasive surgery. In: International conference on medical image computing and computer-assisted intervention. Heidelberg; 2006. p. 347–54. [82] Mountney P, Yang GZ. Motion compensated SLAM for image guided surgery. In: International conference on medical image computing and computer-assisted intervention. Heidelberg; 2010. p. 496–504. [83] Maier-Hein L, Mountney P, Bartoli A, Elhawary H, Elson D, Groch A, et al. Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery. Med Image Anal 2013;17:974–96. [84] Maier-Hein L, Groch A, Bartoli A, Bodenstedt S, Boissonnat G, Chang PL, et al. Comparative validation of single-shot optical techniques for laparoscopic 3-D surface reconstruction. IEEE Trans Med Imaging 2014;33(10):1913–30. [85] Scharstein D, Szeliski R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int J Comput Vis 2002;47:7–42. [86] Brown MZ, Burschka D, Hager GD. Advances in computational stereo. IEEE Trans Pattern Anal Mach Intell 2003;25(8):993–1008. [87] Tian Q, Huhns MN. Algorithms for subpixel registration. Comput Vis Graph Image Process 1986;35:220–33. [88] Stoyanov D. Surgical vision. Ann Biomed Eng 2012;40:332–45. [89] Yang H, Shao L, Zheng F, Wang L, Song Z. Recent advances and trends in visual tracking: a review. Neurocomputing 2011;74:3823–31. [90] Lowe DG. Distinctive image features from scale-invariant keypoints. Int J Comput Vis 2004;60:91–110. [91] Viola P, Jones M. Rapid object detection using a boosted cascade of simple features.
In: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1; 2001. p. 511–18. [92] Fichera L. Realization of a cognitive supervisory system for laser microsurgery. In: Cognitive supervision for robot-assisted minimally invasive laser surgery. Cham: Springer; 2016. p. 79–88. [93] Rosenberg LB. Virtual fixtures: perceptual tools for telerobotic manipulation. In: IEEE virtual reality annual international symposium. 1993. p. 76–82. [94] Bowyer SA, Davies BL, Baena FR. Active constraints/virtual fixtures: a survey. IEEE Trans Robot 2014;30:138–57. [95] Enayati N, Costa ECA, Ferrigno G, De Momi E. A dynamic non-energy-storing guidance constraint with motion redirection for robot-assisted surgery. In: IEEE/RSJ international conference on intelligent robots and systems. 2016. p. 4311–6. [96] Olivieri E, Barresi G, Caldwell DG, Mattos LS. Haptic feedback for control and active constraints in contactless laser surgery: concept, implementation, and evaluation. IEEE Trans Haptics 2018;11:241–54. [97] Vitrani MA, Poquet C, Morel G. Applying virtual fixtures to the distal end of a minimally invasive surgery instrument. IEEE Trans Robot 2017;33:114–23. [98] Tauscher S, Fuchs A, Baier F, Kahrs LA, Ortmaier T. High-accuracy drilling with an image guided light weight robot: autonomous versus intuitive feed control. Int J Comput Assist Radiol Surg 2017;12:1763–73. [99] Penza V, Du X, Stoyanov D, Forgione A, Mattos LS, De Momi E. Long term safety area tracking (LT-SAT) with online failure detection and recovery for robotic minimally invasive surgery. Med Image Anal 2018;45:13–23. [100] Penza V, Ortiz J, Mattos LS, Forgione A, De Momi E. Dense soft tissue 3D reconstruction refined with super-pixel segmentation for robotic abdominal surgery. Int J Comput Assist Radiol Surg 2016;11:197–206. [101] Penza V, Ciullo AS, Moccia S, Mattos LS, De Momi E. EndoAbS dataset: endoscopic abdominal stereo image dataset for benchmarking 3D stereo reconstruction algorithms. Int J Med Robot 2018;14:e1926.

15
Haptics in Surgical Robots

Peter Culmer1, Ali Alazmani1, Faisal Mushtaq1, William Cross2 and David Jayne1
1 University of Leeds, Leeds, United Kingdom
2 St James’s University Hospital, Leeds, United Kingdom

ABSTRACT
This chapter addresses the use of haptics in robots for soft-tissue surgery, aiming to bring a cohesive consideration of the clinical context, underlying technologies, and state-of-the-art applications in this field. The fundamentals of haptics are first introduced, using the human sensory system as an example which serves as a benchmark for technological advances. The chapter then provides a description of the clinical context, describing surgical areas and procedures of particular relevance to robotics before introducing key clinical challenges that have the potential to be addressed by the introduction of haptic technology. The basic building blocks of haptics, sensing and feedback systems, are then introduced to provide an understanding of the fundamentals together with an overview of the state of the art in each area. From this foundation, a review of haptics applied to surgical robots is presented, highlighting key commercial systems together with research advances. The chapter concludes with a discussion of current trends in this field and considers the technological and clinical challenges which remain.
Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00015-3 © 2020 Elsevier Inc. All rights reserved.


Handbook of Robotic and Image-Guided Surgery

15.1 Introduction

Haptics, a broad term for the sense of “touch,” rivals vision for its importance to surgeons and is closely intertwined with surgical practice, from “physical” examination to precise interventions. It would seem logical that this relationship would be enhanced with step changes in surgical practice following the introduction of minimally invasive and robotically assisted minimally invasive surgery (RAMS). Yet the opposite is true; these innovations have resulted in haptic feedback being largely diminished or entirely removed from the hands of the surgeon. Given the perceived importance of haptics to surgery, and RAMS in particular, it is instructive to analyze how this arose, examine the state of the art, and look toward future innovations which promise to turn this situation around. Our aims in this chapter are to bring together a diverse range of research to provide a focused picture of haptics applied to RAMS. We seek to provide the reader with a solid background to the fundamental science relevant to this area, combined with insight into recent developments in the research field. We do so by drawing on key publications across the fields of engineering, psychology, and surgery, and highlight important textbooks and reviews for further reading in each. In this chapter we begin by introducing the fundamental concepts of haptics. In particular we examine the human sense of haptics and explore its value in open surgical practice. We then consider the application of haptics on a generalized tele-operated surgical robot, highlighting key components and performance attributes and their relation to the human haptic system. This provides a common technical background which we use to examine the state of the art in clinically available RAMS, reviewing the technical attributes of commercial systems with haptic capabilities and then inspecting their clinical reach and value across different surgical specialties.
Following this we look at the wealth of research that has been undertaken into haptics for surgical robotics, highlighting technological developments in sensors, feedback systems, and full robotic research platforms. To provide a holistic view of this research area we then consider the critical aspect of how well these systems perform: is there value in integrating artificial haptics into RAMS? The chapter concludes by looking toward the future. How are surgical needs for RAMS changing and how do these relate to technical requirements for haptic systems? How is surgical interaction with RAMS evolving and what are the implications of increasing levels of automation in these systems? While many questions are raised, it should become clear that haptics is set to become an increasingly important part of future surgical practice, driven by real clinical need and enabled by exciting advances in technology.

15.1.1 Fundamentals of haptics

The word haptics is derived from the Greek word “haptikos,” which refers to “a sense of touch,” and the Oxford English dictionary defines haptics as: “Relating to the sense of touch, in particular relating to the perception and manipulation of objects using the senses of touch and proprioception.” As shown in Fig. 15.1, “haptics” is a broad construct that encompasses a number of different sensory inputs arising from receptors embedded within the body. These receptors can provide information about the state of the body known as kinesthetic sensation (i.e., the angles formed by the various joints within the arm), and the physical characteristics of objects within the world, known as tactile sensations. In contrast to vision, haptic information regarding an object’s physical properties requires the body to come into contact with the object. Mechanical interactions between an object and the body provide a rich source of information that is not readily available from vision, and the usefulness of this information to humans is a matter of common observation. The scientific examination of haptics started in the 1800s and this type of investigation can be attributed directly to two of the founding fathers of experimental psychology, Ernst Weber and his student Fechner [1]. Weber used calipers to measure two-point thresholds on the skin and found large variations in sensitivity (e.g., high sensitivity near the lips and low sensitivity on the trunk). From a series of observations on touch and other senses, Weber showed that the threshold at which a change in a physical stimulus could be detected is a constant ratio (percentage change) of the original (reference) stimulus. In haptic perception, Weber is also responsible for the phenomenon now known as the Weber–Fechner law of perception, formally describing the sensitivity of the human haptic system (i.e., the smallest differences in haptic sensation it can discriminate), contributing to the birth of psychophysics (the scientific study of the relation between stimulus and sensation) and a finding which remains relevant almost 200 years later [2]. Since the time of Weber, psychophysics has highlighted the remarkable performance of human tactile sensation; the temporal resolution of touch is approximately 5 ms and the spatial resolution at the fingertip as low as 0.5 mm [3]. This spatial and temporal resolution of touch provides humans with rich data about the physical characteristics of objects contacted by the hand.

The haptic information comes in part through mechanoreceptors (nerve endings which react to a mechanical stimulus), and a variety of different mechanoreceptor types are found across the body. These receptors provide specialized somatosensory input (information regarding haptics) to the central nervous system (CNS). The mechanoreceptors work through changes in the physical properties of their plasma membranes. The rate of adaptation and threshold of activation varies according to the mechanoreceptor type. Human skin can be considered to contain four types of mechanoreceptor that are specialized to provide tactile input to the CNS (Meissner’s corpuscles, Pacinian corpuscles, Merkel’s disks, and Ruffini’s corpuscles). These high-sensitivity/low-threshold receptors can be found on glabrous (hairless) skin and the hand. Fingertips in particular have an intense distribution of low-threshold receptors and are thus one of the areas of the human body most sensitive to external contact with objects [4–6]. We have already drawn attention to the fact that the term “haptics” is also used to refer to the sensory system that provides the body with information about the relative position of body parts. This class of information relates to the sense of position and movement of the limbs in space and is often described as “kinesthesia” [7]. The source of this information arises from mechanoreceptors embedded in muscles, tendons, and joints. These receptors allow humans to sense the angular position of a joint and have a fundamental role in the control of human movement. An important aspect of haptic perception is that it is not processed by the CNS in isolation; haptic information is combined with vision (and to a lesser extent other senses) when humans interact with the external world. Indeed, it is well established that humans integrate information across multiple modalities when determining the characteristics of external objects and the state of the body [8–11].

The multimodal integration of information in humans suggests that the benefits of robot surgery might be best harnessed by systems that have the capacity to integrate multiple forms of information in order to arrive at the best possible state estimates. We will explore the consequences of a multimodal approach to surgical technologies in more detail later in this chapter, but here it is worth noting that integration of multiple streams of information in robotic systems is already yielding promising results. For example, a recent experiment evaluating the efficacy of integrating haptic feedback in robotic surgery showed that information from vision and haptics can be synergistically linked to reduce force application error [12]. Today, the study of haptics transcends philosophy and psychology, bringing together a diverse range of disciplines, from neurophysiology through to computer science. Haptics has become a “broad church,” covering the study of human and/or computer interactions with the environment and most often related to the manipulation of objects through touch and force feedback [13]. A large degree of research activity in haptics has focused on practical solutions, from supporting activities of daily living through to the incorporation of haptics in surgical technologies. The optimization of robotic surgery will require further practical considerations of how information can be best extracted and used to support the high levels of accuracy and precision required when physically interacting with tissue in surgery [14].

FIGURE 15.1 Key elements of the human haptic sensory system.

15.1.2 Surgery and haptics

Surgery presents a canonical example of where humans (and increasingly robots) need to skillfully interact with the environment to perform a task with a high degree of spatiotemporal accuracy and precision [15]. Accurate and precise sensorimotor behavior requires information for movement planning and feedback control. It is therefore useful to consider the situations in which haptic information is relevant to surgical practice.
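Weber's constant-ratio finding from Section 15.1.1 can be made concrete with a short numerical sketch. The Weber fraction of 0.1 used below is a round illustrative value chosen for this example, not a measured human constant.

```python
# Illustrative sketch of Weber's law: the just-noticeable difference (JND)
# grows in proportion to the reference stimulus. The Weber fraction of 0.1
# (a 10% change) is an invented round number, for illustration only.

def jnd(reference_stimulus: float, weber_fraction: float = 0.1) -> float:
    """Smallest detectable change for a given reference stimulus."""
    return weber_fraction * reference_stimulus

# A small force must change by a small absolute amount to be noticed,
# while a large force must change by proportionally more: same ratio,
# different absolute JND.
for ref in (0.5, 5.0):
    print(f"reference {ref:.1f} N -> JND {jnd(ref):.2f} N")
```

The practical consequence for haptic display design is that rendering errors that are small relative to the current force level may be imperceptible, while the same absolute error at low force levels is easily felt.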


Section 15.1 highlighted that mechanoreceptors in the body provide information that can contribute toward the performance of skilled sensorimotor control behaviors. The information can be used to guide arm movement (especially when visual information on arm position is not available) and provide rich data on the physical properties of an object of interest. Information on an object’s physical properties is an important precursor for interacting with the object in a skillful manner (e.g., ensuring that the appropriate fingertip forces are exerted when lifting or squeezing the entity). A move toward minimally invasive techniques and the associated degradation of haptic cues (or, in robot surgery, the removal of some forms of haptic cues), clearly reduces the information available to the surgeon. This means that current challenges in surgical technologies include: (1) restoration of normally available information through the creation of intelligent instruments that provide adequate substitutes for knowing effector position, producing appropriate forces and sensing deformation; (2) augmenting this information so surgeons can reach ever higher levels of sensorimotor performance and cognitive decision-making; and (3) presenting this information effectively so that surgeons can incorporate it into their sensorimotor control and decision-making processes, examined in Section 15.3.3. To address these three challenges, a consideration of the task is required in order to understand how information is used to perform a given task (termed “task analysis”). In the case of surgery, the surgeon needs to precisely monitor his or her limb position so that the controlled end point (e.g., the fingertip) is moved to the appropriate spatial location and applies the desired amount of force. If a surgeon is moving their finger to palpate tissue in open surgery then they are able to use a combination of visual and haptic information in order to guide their arm. 
The task is more complicated when the surgeon holds a tool, but there is a wealth of data showing that humans can use haptic information when holding a handheld object. The augmentation of such haptic information with vision enables humans to precisely control the end point of a handheld instrument (as if the instrument were part of their own body) [16]. In laparoscopic surgery, the surgeon loses the ability to directly monitor the end point of the laparoscopic tool through vision and must therefore rely on an indirect view of the instrument (via a camera feed), together with the haptic information obtained through wielding the laparoscopic device. The fact that surgeons can master laparoscopic surgery is a testament to the incredible learning capabilities of the human nervous system. It also demonstrates that humans are able to learn to produce skillful movement when alternative sources of information replace the normal information signals used within a task. The use of robotic surgical systems places further demands on the surgeon as the information available when holding a surgical tool is no longer directly available to the operator. This means that the surgeon no longer has access to haptic cues providing information on the end point position of the tool and so feedback control must rely entirely on a visual error signal. As a result, overall performance will be degraded, as highlighted by recent meta-analyses demonstrating the utility of including haptic information in teleoperative systems [17,18]. The other major use of haptic information in surgery relates to the information that a surgeon can glean about the physical characteristics of the tissues that are involved in the operation. 
Haptics can play a central role in driving surgical performance by allowing the surgeon to gain a much better understanding of the boundaries of a malignant growth, for example, and determining the forces required to complete a particular action such as manipulating tissue [16]. This highlights an important aspect of haptic perception (and indeed all of human perception): the process of obtaining information about the external world is not passive. Humans are information predators and we actively hunt to extract information from the world around us. In vision, humans scan with their eyes to extract information from the optical field. Indeed, humans make more eye movements in their lifetime than their heart beats. In haptics, humans actively interact with external objects and it is through this activity that information is obtained about the physical characteristics of the object. Active interaction with the world creates a haptic feedback loop, termed the “perception–action cycle,” of information flow. The manipulation of objects relies on “action for perception,” which in turn results in “perception for action” (i.e., the perceptual information needed to generate the skillful action that will allow the actor to apply the desired forces to the object to achieve a particular goal). To illustrate this perception–action cycle of information, consider the typical actions of a surgeon identifying a cancerous tumor through palpation in open surgery. The surgeon will actively probe the tissue of interest with their hands (prodding, pulling, squeezing, and stroking) to generate and observe sensory information (both visual and haptic) to inform assessment [19]. It can be seen that the challenge is to allow equivalent (or enhanced) functional information to be available to the surgeon within laparoscopic and robotic surgical devices where traditional haptic sources of information are degraded or absent.
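The palpation loop just described, active probing to estimate a tissue property, can be caricatured in a few lines of code. The linear-spring tissue model and its stiffness value below are invented purely for illustration; real tissue is nonlinear and viscoelastic.

```python
# Caricature of "action for perception": probe a simulated tissue at several
# indentation depths, record reaction forces, and estimate stiffness with a
# least-squares fit. The ideal-spring model and 120 N/m value are invented.

def probe_tissue(depth_m: float, stiffness_n_per_m: float = 120.0) -> float:
    """Simulated reaction force of an ideal linear-spring tissue."""
    return stiffness_n_per_m * depth_m

def estimate_stiffness(depths, forces):
    """Least-squares slope through the origin: k = sum(f*d) / sum(d*d)."""
    return sum(f * d for f, d in zip(forces, depths)) / sum(d * d for d in depths)

depths = [0.001, 0.002, 0.003, 0.004]        # probing actions (m)
forces = [probe_tissue(d) for d in depths]   # sensed feedback (N)
print(f"estimated stiffness: {estimate_stiffness(depths, forces):.1f} N/m")
```

The point of the sketch is the loop structure: each action (a chosen indentation depth) generates new sensory data, and the accumulated data refine the perceptual estimate that guides the next action.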

15.1.3 Tele-operated surgical robot systems

Surgical robotics is becoming an increasingly broad field and the definition of what this constitutes can be somewhat nebulous. The SAGES-MIRA Robotic Surgery Consensus Group states that a “surgical robot” generally refers to systems which should strictly be considered as “remote telepresence manipulators,” dependent on the “control of a human operator” and without autonomy. In fact, truly autonomous surgical robots remain an unfulfilled but exciting aspiration, as discussed later in the chapter, and hence here we consider robotic surgery as a surgical procedure or technology that adds a computer technology-enhanced device to the interaction between a surgeon and a patient during a surgical operation and assumes some degree of control heretofore completely reserved for the surgeon [20].

In this context, the key elements of a surgical robot with haptics are shown in Fig. 15.2. Foremost is the haptic measurement system, consisting of sensors (described in Section 15.3.1) used to obtain force and pressure information from the robot and operative field and which replace direct sensory feedback from the arms and hands of the surgeon. Next, the robotic control system mediates the processing and return of information, emulating the nervous system of the body. Lastly, haptic information is “rendered” back to the surgeon by one or more haptic interfaces, as discussed in Section 15.3.2. This disconnect between the surgeon and the operative field brings associated technical challenges, together with opportunities to both change and enhance the surgical experience [21]. Design of a tele-operated system requires careful consideration of how to balance the conflicting requirements to maximize “transparency,” a term denoting the degree of telepresence achieved (i.e., the fidelity of the haptic information), while ensuring system “stability” (i.e., ensuring the robotic system behaves predictably) [22]. Transparency and stability have particular resonance in a surgical context; reliable behavior is crucial when the briefest erroneous movement could result in patient morbidity or death, and appropriate telepresence is necessary if the haptic capabilities are to benefit, rather than hinder, surgical procedures. As shown in Fig. 15.2, the overall tele-operated system involves three interacting components, the surgical operator, the robotic system, and the surgical environment, each of which has its own dynamics [23]. Ensuring stability can be addressed by considering the flow of energy between components and ensuring that the robotic system is “passive” such that it can only absorb, rather than generate, energy [24].

Promoting telepresence requires kinematic precision (the instruments should move to the surgeon’s commands accurately), kinetic fidelity (haptic information should be measured and rendered to minimize distortion when perceived by the user), and minimal temporal lags (specifically transport delays in the robotic system, which occur at mechanical interfaces and in electronic communications). Transport delays also inhibit system transparency and in a surgical context have been demonstrated to slow procedures and affect confidence in the system [25]. For a detailed analysis of these factors the reader is directed to an excellent technical review [22]. Despite the inherent technical challenges, separating the surgeon’s hands from the operative environment can be used to positive effect. Without a direct physical connection between stimulus and sensory sensation, haptic information can be manipulated or artificially created prior to being displayed to the surgeon. Sensory substitution, in which haptic information is displayed in a different modality or form, emerged due to limitations in haptic displays, for example, displaying tactile pressure readings via a visual interface [26]. However, it can also be used to augment existing sensory information, like using variable-frequency vibration to convey the magnitude of a grasping force [27]. Just as endoscopic cameras provide magnification of small features, “force scaling” illustrates the potential for haptics to enhance human sensory capabilities by amplifying low-magnitude forces (e.g., tissue manipulations during microsurgery) to levels which can be better perceived by the surgeon [28].

Equally, just as surgical displays are increasingly augmented with additional information to help the surgeon (e.g., overlaying preoperative CT scans [29]), so haptic displays can be used to define “virtual fixtures” which help physically guide appropriate motion of the robotic system to avoid errors, for example, to protect sensitive tissue in neurosurgery [30], or alerting the surgeon when critical thresholds are approached/exceeded. These examples underline both the relevance of haptic technology to modern robotic surgery and its relative immaturity when compared to visual technology, as will be evident in the next section, which examines extant haptic technology in commercial surgical robots.

FIGURE 15.2 Key elements of artificial haptic feedback in a tele-operated surgical robot.

15.2 Surgical systems

15.2.1 The surgical robotics landscape

FIGURE 15.3 Prevalence of haptic-enabled surgical robots in clinical studies across different surgical specialties. Data from Amirabdollahian F, Livatino S, Vahedi B, Gudipati R, Sheen P, Gawrie-Mohan S, et al. Prevalence of haptic feedback in robot-mediated surgery: a systematic review of literature. J Robot Surg 2018;12:11–25. https://doi.org/10.1007/s11701-017-0763-4 [32].

The robotic revolution in the manufacturing industry is often used as an argument for similar transformations in robotic surgery. Among the numerous qualities afforded by robotic systems, precision, dexterity, repeatability, and cost efficiency have clear relevance to surgical practice [23,31]. Yet, in contrast to the manufacturing industry, surgical procedures represent a more complex and unstructured environment in which robotic systems must interact precisely with delicate, mechanically compliant structures and where the consequence of error is directly linked to human life. As a
result, to date there has been a limited uptake of robotic surgical systems, particularly outside general surgery, as shown in Fig. 15.3. A prime example can be found in general surgery where a step change in practice was brought about through the introduction of laparoscopy in the 1980s. Laparoscopic surgery avoids the need for large incisions (used in open surgery), instead using minimally invasive techniques to access abdominal tissues and organs using long instruments inserted through small incisions in the abdominal wall. The benefits of laparoscopy (i.e., improved cosmesis, lower infection rates, and faster recovery) are well recognized and have driven uptake of the technique, despite the increased

Haptics in Surgical Robots Chapter | 15

245

15.2.2

Commercial surgical robot systems

The range of commercially available surgical robots with haptic capabilities is increasing, with the current state of the art presented in Table 15.1 and illustrated in Fig. 15.4. The systems span a range of surgical specialties and there is evidence of the market developing more specialized systems. It should be noted that the information available is typically limited due to the highly competitive commercial market and many sources are based on the research systems upon which the commercial systems were developed. In tandem, a number of major surgical robot systems exist without TABLE 15.1 Clinically available surgical robots with haptic capabilities. System name

Manufacturer

Surgical area

Haptic capabilities

Clinical studies

Senhance (formerly ALF-X)

TransEnterix

General surgery

Force feedback [38,39]

Gynecology [39,40]

RIVO-I

Meere, Korea

General surgery

Force feedback [42]

Colorectal [41] Preclinical anastomosis [42] Preclinical cholecystectomy [43] Preclinical partial nephrectomy [44]

MiroSurge

NeuroArm

Medtronic (formerly Covidien)

General surgery

Flexible arm configuration

Laparoscopic surgery?

Open surgery (cardiac)

Bimanual force feedback

Preclinical heart studies [45]

MacDonald, Dettwiler and Associates

Microsurgery

Tooltip force feedback

Glioma [30,47]

Hansen Medical Inc.

Endovascular

Force scaling Virtual fixtures [30,46]

Sensei X and X2

DoFs, Degrees of freedom.

Catheter tip with three DoFs force sensor

Stent grafting [51]

Full force feedback system [48]

Catheter ablation [52,53]

Minimizes contact force [49,50]

Catheter ablation—robot versus manual [54]

15. Haptics in surgical robots

challenge faced by the surgeon (i.e., impaired visual and haptic feedback, loss of dexterity, and complex instrument control). This forms the justification for robotic-assisted-laparoscopic surgery which has been dominated by the da Vinci robot series from Intuitive Surgical. The da Vinci uses robotic technology to improve surgical dexterity, visualization, and precision while simplifying control to ease cognitive loading. The da Vinci has been a commercial and clinical success, enjoying an unchallenged market share and uptake across the world despite its high cost. However, the technology is notable for its lack of haptic feedback, which is frequently linked to an increased risk of inadvertent tissue injury [20,33]. While this may not necessarily manifest in higher clinical complication rates it does have the effect of limiting the utility of the system to those areas in which haptic feedback is not absolutely necessary. The specific example of the da Vinci system in general surgery highlights the broader trend that robotic surgical technology must be developed to fully meet the clinical needs and associated challenges of modern surgery and that current systems are notable for a lack of haptic feedback, described in a comprehensive review [32]. These limitations do not necessarily reflect a lack of appropriate technology (as will be shown later in this chapter) but rather the realities of getting such technology to market in the face of a highly restrictive and patent-encumbered commercial landscape [34]. However, the expiry of key patents and renewed investment by large multinationals (e.g., Johnson and Johnson) give cause for optimism of a more competitive market bringing technological advances including haptic feedback to broaden the spread of robotic surgery [3537].

246

Handbook of Robotic and Image-Guided Surgery

FIGURE 15.4 Clinically available surgical robots with haptic capabilities: (A) Senhance (TransEnterix Inc., Morrisville, North Carolina, United States) [55]; (B) REVO-I (Meere Company) [42]; and (C) the MiroSurge system with haptic control interface (inset) [DLR (CC-BY 3.0)] [56].

Several other commercially available surgical robots lack haptic capabilities at the time of writing, including Versius (Cambridge Medical Robotics), SPORT (Titan Medical), and the da Vinci (Intuitive Surgical).

15.2.2.1 General surgery: Senhance

Senhance is a surgical robot platform developed for laparoscopy, originally conceived and developed by the Joint Research Centre of the European Commission in collaboration with SOFAR SpA (Italy) as the Telelap ALF-X, and subsequently rebranded as the Senhance surgical robot system when the technology was acquired by TransEnterix (Morrisville, North Carolina, United States). The system was approved for general surgical procedures in Europe in 2012 and obtained FDA approval for the United States in 2017 [55,57]. There is little detailed technical information on the Senhance system, but its capabilities can be inferred from a series of publications evaluating its clinical efficacy (see Section 15.2.3). The system was designed to be cost-effective and to minimize disruption to existing operating theater environments and workflows. Instruments are held by a series of up to four robot arms, each situated on an individual mobile cart to allow flexible configuration around the patient. TransEnterix promotes the similarity of its instruments to those used in manual laparoscopy, designed to foster surgeon familiarity; however, they also lack the additional "wristed" degrees of freedom (DoFs) found in competing systems like the da Vinci [58]. The surgeon sits at a console providing three-dimensional (3D) visualization, eye-tracking control of the endoscope, and haptic feedback via two force feedback manipulators which resemble laparoscopic instruments [55,57]. The manipulators transmit grasp force, enabling tissue consistency and object manipulation to be felt by the surgeon [57]. This capability is reported to be instructive during thread and needle manipulation when suturing [58].

15.2.2.2 General surgery: REVO-I

The REVO-I is a general surgical platform developed through a collaboration between University College of Medicine, Korea, and Meere Company [42]. Development was initiated in 2007 and, after several technical iterations and over 20 animal studies, the present system received regulatory approval from the Korean Drugs Administration and was launched in March 2018. It aims to provide cost-effective robotic surgery for resource-limited healthcare systems, with lower costs reported in comparison to use of an ALF-X robotic system for the same procedures [44]. The general configuration is deliberately similar to the da Vinci platform, with a central four-armed robotic cart used in conjunction with a surgeon's control console, as shown in Fig. 15.4. The robotic arms are controlled from the surgeon console; one arm holds a high-definition (HD) stereoscopic laparoscope, while the remaining three arms are equipped with instruments and are selectively positioned and moved by the surgeon using a hand-operated grip system [43,44]. Integration of haptic functionality into the REVO-I has been reported as a key technical feature differentiating it from the ubiquitous da Vinci. Although technical details on the implementation are scarce, the system provides the surgeon with force feedback from the surgical graspers and actively regulates instrument speed and force within specified "safe" limits [36,42].

15.2.2.3 General surgery: Medtronic MiroSurge

MiroSurge (Medtronic, United States) is a commercialized surgical robot for general and cardiac surgery, originally developed at DLR (German Aerospace Center), then licensed by Covidien before its acquisition by Medtronic, which continued development and launched the system as a rival to the da Vinci (Intuitive Surgical, United States). MiroSurge was developed as a specific telerobotic instance of DLR's broader Miro robotic surgery platform, which consists of technologies for robotic manipulators, surgical instrumentation, user interfaces, and computer-assisted planning and registration. The system was designed to provide a flexible surgical platform, in particular focusing on reducing the operative footprint in comparison to competing systems. To achieve this, the system does not have a dedicated "base station" but instead uses a reconfigurable series of lightweight robot arms mounted directly onto the operating table, each holding a custom surgical instrument. The arms are based on DLR's MIRO technology: a general-purpose robotic arm with seven DoFs, a lightweight (10 kg) structure, and integrated joint torque sensors, designed to achieve low-inertia, high-bandwidth control (tip speeds up to 0.5 m/s) with a maximum payload of 3 kg. Instrument control in MiroSurge exploits this configuration by using combined force and position control to attain four DoFs redundant positioning of the tooltip within the body (to avoid external arm collisions), with movement about a variable fulcrum (in contrast to the fixed fulcrum employed by systems such as the da Vinci). MiroSurge instruments attach to each robot arm to locate an additional two DoFs wrist mechanism inside the body, resulting in a functional end-effector with six DoFs, together with a single DoF functional movement (e.g., open/close grasping jaws) [56]. The result is a system that can dynamically adapt to, and compensate for, natural movement of the body (e.g., due to breathing) or enable robotic assistance in complex procedures such as open cardiac surgery [45,59].

The haptic capabilities of MiroSurge comprise seven DoFs force sensing at each instrument with a bimanual feedback interface. Measurements of tool–tissue interactions are made directly at the end-effector of each instrument, as shown in Fig. 15.4, using a miniaturized six DoFs force/torque sensor combined with a grasp force sensor integrated into the jaws of forceps and needle-driver tools. This configuration minimizes mechanically induced disturbances to enable precise measurement (0.04 N resolution, 10 N range), at the expense of the added complexity of compatibility with autoclave sterilization processes [56]. MiroSurge is controlled using two haptic manipulators developed through a commercial partnership with DLR (sigma.7, Force Dimension), each of which provides seven DoFs movement control with full force feedback on all DoFs. The surgeon is thus able to continuously perceive and control both the pressure applied to grasp tissue (0–8 N) and the forces (0–20 N) and torques (0–0.4 N m) necessary to move it during operative practice. Crucially, to ensure safe surgical operation with this level of robotic complexity, the MiroSurge control system conforms to requirements for passivity (see Section 15.1.2), providing confidence of stability and reliability [60].
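The force and torque limits quoted above lend themselves to a simple saturation step before feedback is commanded to the hand controllers. The sketch below is our own illustration, not MiroSurge's actual control code: the function names and unity scaling scheme are assumptions, and only the numeric limits come from the text.

```python
# Illustrative sketch: sensed tool-tissue loads are scaled and then
# saturated to the renderable ranges quoted for a sigma.7-style
# interface before being commanded as haptic feedback.

def saturate(value, lo, hi):
    """Clamp a commanded value into the device's renderable range."""
    return max(lo, min(hi, value))

def command_feedback(measured_force_n, measured_torque_nm, grasp_force_n,
                     force_scale=1.0):
    """Map sensed tool loads to device commands within safe limits."""
    f = saturate(force_scale * measured_force_n, -20.0, 20.0)   # 0-20 N translation
    t = saturate(force_scale * measured_torque_nm, -0.4, 0.4)   # 0-0.4 N m torque
    g = saturate(force_scale * grasp_force_n, 0.0, 8.0)         # 0-8 N grasp
    return f, t, g

print(command_feedback(35.0, 1.2, 10.0))  # saturates to (20.0, 0.4, 8.0)
```

A real implementation would run this at the servo rate and combine it with the passivity safeguards mentioned above; the clamp simply guarantees the device is never commanded beyond its capabilities.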

15.2.2.4 Microsurgery: NeuroArm

The NeuroArm system was developed to exploit the ability of robotic systems to perform precise movements and to operate in challenging environments; in this case, to perform robotic microsurgery in neurological procedures within an MRI scanner used for intraoperative imaging. The system combines intraoperative MRI expertise from the University of Calgary with tele-operated robotic technology originally developed for the International Space Station by Macdonald Dettwiler and Associates (Aurora, Canada) [46]. As a consequence, the system was developed to meet both aerospace and medical standards. The technology was licensed to a spin-off which now trades as IMRIS-Deerfield (United States) and has reportedly delivered systems for clinical use in over 70 locations worldwide. The reader is recommended to refer to Ref. [61] for an enlightening insight into the evolution of the NeuroArm project and the challenges involved in commercializing medical robotics technology. Central to the design and operation of NeuroArm is the constraint of MRI compatibility, which impacts its configuration, use of materials, and sensory systems. In the operative field, a mobile base-unit supports a field camera and two anthropomorphic seven DoFs robot arms, each of which can hold a range of detachable microsurgical tools. A remote surgeon workstation controls this assembly and integrates the various imaging and robotic technologies through an array of 2D and 3D display units and hand controllers [46]. The system can operate at a maximum speed of 200 mm/s with a 750 g payload while achieving precise motion (50 μm spatial resolution) for sensitive tasks like microsurgical dissection [30,46]. The haptic functionality of the NeuroArm is centered on force feedback from the robotic arms to the surgeon's hand controllers.
Precision force transducers (Nano17, ATI Industrial Automation Inc., Apex, North Carolina, United States) enable high-resolution (0.149 g-force) feedback of translational forces at the tools, together with an additional grasping degree of freedom, via two force feedback hand controllers (Omega 7, Force Dimension, Switzerland) [62]. The control system features a combination of motion scaling, a 2 Hz low-pass "tremor" filter, and force scaling, the latter enabling enhanced sensation such that soft tissues can appear stiffer, or vice versa [30]. In addition, NeuroArm uses its haptic capabilities to provide guidance using virtual fixtures [28], notably to define "no-go" zones informed by MRI information. A "haptic warning system" can also be enabled to signal when definable force thresholds are exceeded, helping to avoid tissue trauma [47]. During typical operative use the tool forces fall within a 0–20 Hz band, signifying controlled, precise motion [62]. However, even with this range of features, surgical tool motion was found to be slower with the NeuroArm than in conventional microsurgery. This is thought to be a consequence of small delays in the control system, forcing surgeons to slow down so that their movements remained synchronized with the robot, highlighting the importance of "haptic transparency." In addition, the team note that the system's spatial precision exceeds its visual capabilities and that fully exploiting it will require future enhancement or automation [25].
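A minimal sketch of how motion scaling and a 2 Hz low-pass "tremor" filter might be combined in a hand-controller pipeline follows. This is our illustration using a first-order IIR filter; the class name, 5:1 scale factor, and 1 kHz servo rate are assumptions, not NeuroArm specifics.

```python
import math

class TremorFilteredScaler:
    """Illustrative motion-scaling pipeline with a first-order 2 Hz
    low-pass filter, in the spirit of the tremor suppression described
    for the NeuroArm (names and parameters are ours)."""

    def __init__(self, cutoff_hz=2.0, motion_scale=0.2, sample_rate_hz=1000.0):
        dt = 1.0 / sample_rate_hz
        tau = 1.0 / (2.0 * math.pi * cutoff_hz)  # filter time constant
        self.alpha = dt / (tau + dt)             # IIR smoothing factor
        self.scale = motion_scale                # e.g., 5:1 hand-to-tool scaling
        self.state = 0.0

    def step(self, hand_position_mm):
        # Low-pass the hand motion to suppress tremor, then scale it down.
        self.state += self.alpha * (hand_position_mm - self.state)
        return self.scale * self.state
```

Fed a steady hand position, the output converges to the scaled position; fed a rapid tremor-like oscillation, the output is strongly attenuated, which is the intended behavior of the filter.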

15.2.2.5 Endovascular: Sensei

The Sensei X platform (Hansen Medical, Mountain View, California, United States) merits discussion for advancing haptic feedback in the growing field of endovascular surgery. The Sensei X provides an open robotic platform for endovascular procedures focused on catheter-based radiofrequency ablation [50]. The system supports specialized catheters instrumented to measure contact force and global position; for example, the ThermoCool SmartTouch ablation catheter places a precalibrated spring at the catheter's tip and measures its deflection with a magnetic coil-sensor pair to infer contact forces with a 1 g resolution [63]. When used in conjunction with the Sensei X platform, the surgeon is provided with visual feedback of contact force magnitude and vibration "alerts" when it exceeds predefined limits. In a clinical context this is crucial, since insufficient contact force can lead to ineffective ablative treatment while excessive force risks tissue trauma or perforation. Interestingly, in a study focused on atrial


fibrillation (where ablation is used to form curative lesions), the use of this system led to higher contact forces than in manually performed procedures; crucially, however, these were more consistently in the optimum treatment range and remained within "safe" limits (<40 g/cm2), ultimately resulting in improved clinical outcomes [50].
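The spring-deflection principle and threshold alerts described above can be illustrated with a toy calculation. The spring constant, optimal band, and unsafe threshold below are invented for illustration; they are not the SmartTouch calibration.

```python
# Hedged sketch of a SmartTouch-style contact-force scheme: a
# precalibrated spring deflects under contact, deflection is measured,
# and force follows from Hooke's law. All constants are illustrative.

SPRING_K_G_PER_MM = 20.0        # assumed calibration: grams-force per mm

def contact_force_g(deflection_mm):
    """Infer tip contact force (grams-force) from spring deflection."""
    return SPRING_K_G_PER_MM * deflection_mm

def classify(force_g, optimal=(10.0, 30.0), unsafe=40.0):
    """Return a status string mirroring the visual/vibration alerts."""
    if force_g >= unsafe:
        return "alert: unsafe"       # would trigger a vibration alert
    if optimal[0] <= force_g <= optimal[1]:
        return "optimal"             # within the effective ablation band
    return "suboptimal"              # risk of ineffective treatment
```

The clinical logic is in the band structure: too little force is flagged as ineffective, too much as a trauma risk, exactly the trade-off the study above examined.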

15.2.3 Surgical practice

The systems described in Section 15.2.2 have led the way in both defining and exploring the virtues of haptically enhanced robotic surgery in real clinical settings. The roles and demands placed on these systems are closely linked to the surgical field in which they operate, although commonalities also emerge, as we demonstrate here by examining the clinical use of haptically enhanced surgical robots.

15.2.3.1 General surgery

The key robotic system incorporating haptic feedback currently in use for general surgery is the Senhance system (TransEnterix, Morrisville, North Carolina, United States). Its haptic capabilities have been used clinically to estimate the consistency of structures inside the peritoneal cavity by pulling/pushing maneuvers and to estimate tissue tension when grasping structures. Although the system is not perfect, with a small delay before resistance is felt, it is suggested that the haptic feedback may reduce the risk of injury to organs and delicate structures, particularly if the instruments move outside the optical field. So far, the Senhance robotic system has been evaluated in gynecology and colorectal surgery. Fanfani et al. analyzed a prospective cohort of 146 patients undergoing gynecological surgery and showed it to be safe, with a low risk of conversion [64]. In colorectal surgery, Spinelli et al. reported clinical outcomes for 45 patients undergoing a range of colorectal operations for benign and malignant disease, with results similar to what might be expected from conventional laparoscopic surgery [55]. Although the Senhance system has demonstrated its clinical capabilities across a range of operative procedures, the additional advantage brought by its haptic capabilities remains to be proven. These are marketed largely on the basis of minimizing tissue trauma, but given the low incidence of such trauma in general robotic surgery, further study is required to make a more conclusive cost–benefit argument.

15.2.3.2 Endovascular

Haptic feedback is of great interest in endovascular procedures, where excessive forces between the catheter tip and vascular wall can result in iatrogenic injury, with the potential for perforation, vessel wall dissection, or pseudoaneurysm formation. Haptic sensation at the proximal end of the catheter can also help guide navigation of the catheter through the vasculature. Catheter systems can be divided into two categories: mapping catheters for profile visualization of the vessel or heart, and intervention catheters in which the tip of the catheter is equipped with a tool. To date, the evidence in support of haptic feedback in vascular catheters is derived mainly from experimental studies, with limited clinical data available. Russo et al. used the Sensei robotic navigation system with the ThermoCool SmartTouch catheter to treat patients suffering from atrial fibrillation and found that the robotic system allowed larger catheter tip contact forces during atrial ablation therapy, improving treatment performance and resulting in lower rates of recurrence at clinical follow-up [50,65].

15.2.3.3 Neurosurgery

Neurosurgery demands precision and accuracy to enable benign and malignant lesions to be resected with maximal preservation of normal brain tissue. Haptic feedback might be particularly advantageous in this scenario, given the inherently soft consistency of the brain and the proximity of vital structures [66]. Two robotic systems have been developed for neurosurgery and used on humans: NeuRobot and NeuroArm. Clinical reports have described the use of the NeuRobot for a variety of human procedures, including third ventriculostomy, resection of tumors, and portions of intracranial microsurgical procedures such as Sylvian fissure dissection [67–69]. However, the only system that remains in clinical use is the NeuroArm, which has been used in a variety of clinical procedures, including stereotactic MRI-guided biopsy, blunt and microsurgical dissection, suturing, hematoma aspiration and irrigation, and cauterization [61,70]. The force scaling feature of NeuroArm provides a significant clinical advantage, allowing the surgeon to enhance their sense of touch to improve tissue interaction. Maddahi et al. used the NeuroArm in seven cases of glioma surgery to record the range of forces between the tool tips and brain tissue but were unable to correlate the forces exerted with the tumor pathology, showing that using haptic information for assessment can be challenging in a surgical environment [47].


15.2.4 Emerging surgical needs

Within the field of surgery, the application of robotic technology is relatively recent and still in its evolutionary infancy. Despite significant economic, regulatory, and technological barriers, robotic minimally invasive surgery (RMIS) has been embraced by enthusiastic and pioneering surgical teams to the benefit of patients worldwide. For example, the use of articulating instruments, immersive pseudo-3D operative fields, motion scaling, and elimination of tremor all enhance surgical performance, leading to greater surgical precision. Robotic technologies have allowed surgeons to become proficient in traditional laparoscopic procedures more quickly (a shorter learning curve), which has led many experienced surgeons to change their open surgical practice to RMIS. These technologies have also facilitated the development and advancement of surgical techniques and procedures that would not be possible with traditional laparoscopic equipment. But if RMIS is to develop and expand into new surgical areas and be regarded as the standard of surgical care, the next generation of RMIS systems will need to address the current technological limitations and opportunities [29]. One of the main shortcomings of surgical systems has been the lack of haptic feedback. Previous attempts to integrate haptics have often failed to realistically replicate the normal surgical experience, and the cost of developing such systems, in the absence of an obvious clinical benefit, has been questioned by some manufacturers. Surgeons must therefore resort to using visual cues to estimate the forces applied to tissues, sutures, and other materials, and misinterpretation of these cues can lead to irreversible tissue trauma and potentially patient morbidity. The routine use of haptic systems will likely first emerge in clinical applications that demand accuracy and fine dexterity.
Such procedures include tasks such as suturing, where motor skills must be combined with precise judgment of suture tension, and microdissection in proximity to vital structures, as shown in Fig. 15.5. These clinical needs have received increased attention from the surgical community as robotic technology has developed, as summarized in Table 15.2. A common theme is tissue assessment, in which the consistency of a tissue is often related to an underlying disease process.

FIGURE 15.5 Using a surgical robot (da Vinci, Intuitive Surgical) without the use of haptics to (A) dissect a blood vessel and (B) apply a vessel clip to temporarily prevent blood flow. These tasks are routine but tissue manipulation must be regulated through visual feedback alone. The full video of this procedure is available online.

TABLE 15.2 A summary of the opportunities and benefits of haptically enabled systems across different surgical specialties.

Surgical specialty | Opportunities for haptics in RMIS | References
Urology | Improved preservation of the neurovascular bundles, which are important for erectile function, during radical prostatectomy and cystectomy; enhanced tissue handling of the ureters during reconstructive urinary tract procedures | [71]
Gastrointestinal surgery | Reduced tissue trauma during bowel resection and anastomosis | [35]
Plastic and reconstructive surgery | Enhanced manipulation of nerves, vessels, and muscles in free-flap surgery; brachial plexus surgery | [72]
Neurosurgery | Improved stereotactic positioning and minimization of collateral damage to neural tissues | [73]
Orthopedics | Facilitate prosthesis manipulation and positioning | [74]
Ophthalmology | Retinal microsurgery | [75]

RMIS, Robotic minimally invasive surgery.


15.3 Research systems

While the integration of haptics into commercially available surgical robots remains immature, with few examples moving beyond simple force feedback, it has received significant attention in the research community over the last 20 years, driven both by a recognition of clinical need and by increasing technological capability [23]. Here we consider notable advances in the research field with particular relevance to the commercial sector, encompassing both the enabling technology in sensing and feedback systems and research-grade surgical robot platforms which have advanced our understanding of the benefits of using haptics in surgery.

15.3.1 Sensing systems

The foundation of any haptic robotic system is the sensory system used to acquire data from the environment with which it interacts. This has been a subject of longstanding interest in the robotics community in support of dexterous interaction with the environment [77]. The opportunity to integrate haptics into surgical robotics has long been recognized, exemplified by pioneering work on remote tissue palpation [78]. With this opportunity comes the technical challenge of moving from structured and static industrial environments toward the uncertain and highly variable environments found in surgery [79]. This has catalyzed the research community, with reviews charting the development of haptic sensing technologies specifically for surgical applications [80–83]. The sensory features of surgical robots are strongly influenced by a series of factors. Foremost is clinical need, which determines the fundamental sensing requirements: what type of haptic measurement is required for the surgical procedures performed by that system? Section 15.2.4 describes a spectrum of surgical tasks which can be ranked in haptic complexity, from suturing, requiring "simple" force feedback, to assessment of tissue margins, demanding multipoint tactile feedback. Here we distinguish tactile sensors from force sensors as those systems designed to obtain spatially distributed information through direct contact with the environment. Market factors and the surgical environment then shape the implementation of the sensing systems. Any elements placed within the operative field must be compatible with cleaning and sterilization procedures (e.g., high-temperature autoclaving) if they are reusable, or designed for modular, single use, placing constraints on materials, packaging, robustness, and cost. In addition, modern surgery increasingly uses minimally invasive approaches, which impose size constraints and considerations of atraumatic tissue interaction.
Integration of force sensing into surgical robot systems would seem readily achievable given the ubiquity of commercially available systems and, indeed, commercial load cells have been used to measure tool–tissue interaction in open surgical procedures [84]. However, the increased demands of minimally invasive surgery (MIS) have driven the development of bespoke load-cell technology; early examples include a 5 mm diameter tri-axis force sensor using optical fiber transducers [85] and a six-axis force and torque sensor (0.04 N resolution, 10 N range) using miniature strain gauges in the DLR Miro [56]. More recently, advances using fiber Bragg gratings have enabled precise (0.15 mN resolution, 0–25 mN range) tooltip sensing on microforceps of 0.9 mm diameter (Fig. 15.6) for use in


The consistency of a tissue often reflects an underlying disease process; for example, the majority of malignant tissues are harder than their normal counterparts due to an increase in cellular and stromal components. The ability to sense changes in tissue consistency can therefore assist surgeons in diagnosing disease and determining disease margins intraoperatively and in real time. It also has the potential to enable procedures in parts of the body that are currently restricted by the limits of optical technology, particularly through haptic guidance (e.g., active constraints), which can inform the surgeon of the proximity of vital structures, such as blood vessels or nerves, and guide their movement appropriately. In addition to guiding surgical execution, positional and force data collected during robotic surgery can contribute to the development of surgical simulators. The inclusion of touch sensation in a surgical simulator should enhance the operator experience, providing a more realistic training platform where skills can be gained in a fail-safe environment. Another major opportunity for haptically enabled robotic systems is in surgical training. The acquisition and development of skills in RMIS is commonly obtained through a combination of computer-based surgical simulation, practice in animal labs, and experience in the operating theater. Advances in virtual reality training platforms have allowed surgeons to become proficient in basic and advanced skills through risk-free computer simulation before applying them to real patients. Although objective data on the benefits of haptic feedback in virtual reality RMIS simulation are limited, there is a suggestion that haptics enhances perception of the surgical field, with reduced errors, improved task times, and better overall training performance [76]. In addition, closing the loop and using intraoperative haptic data from real procedures can help define and improve the mechanical properties of virtual tissue models, assisting the development of better virtual environments for surgical training.


FIGURE 15.6 Force and tactile sensor systems developed for use in surgical robotics: (A) the STIFF-FLOP soft surgical robot with integrated force sensing [91], (B) microforceps with shaft force sensor for retinal surgery [86], and (C) instrumented forceps with five DoFs tactile sensing jaws [99].

retinal surgery [86]. An alternative approach to direct force measurement is to integrate load sensors within the drive systems of robots, allowing them to be located remotely, outside the constraints of the surgical environment. A proof-of-concept implementation for the da Vinci surgical robot measures torque at the cable-drive pulleys but highlighted that mechanical losses and nonlinearities make calibration challenging [87]. The use of statistical estimation techniques helped to minimize these factors in the Raven II surgical platform (Fig. 15.6), such that grasping forces could be reliably determined [88]. Another approach used neural networks to determine the external forces acting on a surgical grasper, with indicative errors of 0.24 N across a 0–4 N grasping range [89]. While these external measurement techniques are implemented on rigid robotic structures, they also have relevance to soft robotic systems, which are growing in popularity and technical maturity. This is exemplified by the STIFF-FLOP surgical manipulator (Fig. 15.6), a structure composed of multiple "soft" segments, each of which includes three-axis position and force sensing using pneumatic transducers [90,91].
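As an illustration of the calibration such drive-side estimation requires, the sketch below fits a linear torque-to-force model by ordinary least squares over hypothetical bench data. All numbers are invented, and the cited Raven II work uses more sophisticated statistical estimators; this only shows the basic idea of mapping motor torque to tip force.

```python
# Hedged sketch: fit grasp_force ≈ gain * motor_torque + offset from
# bench measurements, then estimate tip force from drive torque alone.
# The bench data below are hypothetical; the offset stands in for
# friction and cable losses mentioned in the text.

torques = [0.00, 0.05, 0.10, 0.15, 0.20]   # motor torque (N m)
forces  = [0.30, 1.20, 2.40, 3.30, 4.10]   # reference grasp force (N)

n = len(torques)
mean_t = sum(torques) / n
mean_f = sum(forces) / n
gain = (sum((t - mean_t) * (f - mean_f) for t, f in zip(torques, forces))
        / sum((t - mean_t) ** 2 for t in torques))
offset = mean_f - gain * mean_t

def estimate_grasp_force(torque_nm):
    """Predict tip grasp force (N) from measured motor torque (N m)."""
    return gain * torque_nm + offset
```

In practice the relationship is nonlinear and configuration-dependent, which is exactly why the works cited above turn to statistical estimation and neural networks rather than a single linear fit.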


15.3.2 Haptic feedback systems

In comparison to the ubiquity and advanced state of visual and audio display technologies, haptic feedback systems have remained underutilized because of their relative immaturity, limiting their ability to meet the challenging requirements of robotic surgery [106]. An important consideration here is the bidirectionality of haptic perception, which implies an interchange of mechanical energy between the sensory organ and the environment, unlike any other human sensing modality. Enayati et al. described bidirectionality as the root of many challenges that not only make the design, production, and implementation of haptic interfaces demanding, but also make evaluation of their performance nontrivial [23]. Nonetheless, despite substantial technological challenges in implementing effective haptic feedback, the spread of haptics in RMIS appears inevitable. Several multipurpose haptic devices are presented in the literature, and some are commercially available. In general, the feedback mechanisms of these technologies can be categorized into kinesthetic (force feedback) devices, skin deformation and/or vibration devices, and haptic surfaces [107]. The most commonly used kinesthetic haptic interface is arguably the PHANToM (originally SensAble Technologies, now Geomagic, United States), which uses a serial kinematic structure to provide three DoFs of force feedback, six DoFs of positional sensing, and low free-space impedance [108]. The largest PHANToM model has a range of motion equivalent to the human arm, with a maximum exertable force of 22 N in each axis. An alternative to serial manipulators is devices with parallel kinematic configurations, which offer high stiffness and low inertia (locating motors in the base) at the expense of a more limited workspace.
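At each servo tick, a kinesthetic device of this kind renders a force computed from a virtual model; the classic introductory example is a one-dimensional "virtual wall". The sketch below is our illustration: the stiffness value is arbitrary, and the 22 N cap simply echoes the maximum exertable force quoted above.

```python
# Minimal kinesthetic-rendering sketch: a 1-D virtual wall at x = 0
# pushes back proportionally to penetration depth, capped at the
# device's maximum renderable force. Constants are illustrative.

WALL_STIFFNESS_N_PER_M = 2000.0   # assumed virtual-wall stiffness
MAX_FORCE_N = 22.0                # device's maximum exertable force

def render_wall_force(x_m):
    """Feedback force for tool position x (the wall occupies x < 0)."""
    if x_m >= 0.0:
        return 0.0                # free space: no force rendered
    penetration = -x_m
    return min(WALL_STIFFNESS_N_PER_M * penetration, MAX_FORCE_N)
```

A real device runs this computation at around 1 kHz, and the achievable wall stiffness is bounded by that update rate and the device's mechanical bandwidth, which is one reason the high-stiffness parallel designs discussed next are attractive.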
These devices are often augmented to provide haptic grasping, a common requirement in RMIS, by mounting an actuated gripper onto the parallel device with an additional rotational DoF at the interface [109]. A popular example of this arrangement is the Sigma-7 (Force Dimension, Switzerland), which provides seven DoFs of force feedback using a delta-based parallel structure, with up to 20 N and 400 mN m for translational forces and torques, respectively, and a maximum grasping force of 8 N using a motorized active wrist. In comparison, the HD2 (Quanser, Canada) has a larger workspace but a reduced maximum continuous force and torque of 11 N and 0.950 N m, respectively.


Despite the lack of tactile sensing in current commercial surgical robots, it is a vibrant research field driven by clinical need. Minimizing tissue trauma is a key challenge, particularly in the delivery of intravenous catheters and needles, where distally mounted tactile sensors can provide valuable information. The inherent size constraints suit optical sensing techniques (the reader is directed to a comprehensive review of optical sensing in MIS for further examples [92]); for example, a one DoF tactile sensor using an optical Fabry–Perot interferometry technique was integrated into the needle of an MRI-guided tele-robotic system for percutaneous prostate surgery. This approach enables MRI compatibility and high sensitivity (0.01% of a 0–20 N range) at small scales (i.e., 0.7 mm diameter needles), although overall cost may limit application in other areas [93]. Fabricating cost-effective sensor systems at this scale can be challenging, particularly for single-use devices. The recent development of "pop-up" fabrication techniques offers one potential solution well suited to single-use surgical applications, highlighted by an instrumented catheter using a light-modulation force sensor for measurement of tissue contact force, as shown in Fig. 15.6 [94]. Measurement of contact pressures is also critical in general surgery, where tactile sensors play a dual role: first in facilitating safe and appropriate interaction, and second as an assessment tool to interrogate tissue properties [78]. These sensors can be broadly divided into single-point force sensors and tactile "arrays" which map an area. Single-point force sensors have been deployed for tissue assessment, where a probe is scanned across target tissue by the robotic system, for example, using a "wheeled" optical probe to map liver disease [95] or a magnetic palpation probe for the da Vinci research platform [96].
While single-axis pressure sensing has been translated into a commercially available laparoscopic grasper using optical fiber Bragg sensing [97] it is multiaxis sensing which represents state of the art. Noteworthy examples include tactile forceps for the Raven II surgical platform with four DoFs force sensing [98] and similar forceps for the S-Surge platform (Fig. 15.6) with five DoFs force and torque [99]. Research into tactile arrays for surgery has sought to increase spatial resolution, measurement performance, and physical robustness through the use of innovative sensing methods which include piezoelectric elements [100], MRIcompatible optical fibers [101], varying resistivity in soft conductive polymers [81], and conductive liquid in microfluidic channels [102]. Despite their relative complexity, low-cost “disposable” tactile arrays have been developed using capacitive sensing [103]. Notably, the group has also combined this technology with an ultrasound transducer into a single multimodal probe, compatible with the da Vinci platform (Fig. 15.6) for improved assessment of tissue state [104]. Other work has combined piezoresistive and optical sensing systems [105] to similar effect, and highlights that multimodal tactile sensing is likely to be a key focus in future research.

254

Handbook of Robotic and Image-Guided Surgery

FIGURE 15.7 Commercially available haptic devices categorized based on their actuation mechanisms into parallel/serial kinematics and magnetic levitation: (A) Omega.6, (B) Sigma.7, (C) HD2 high-definition haptic device, and (D) Virtuose 6D TAO. Courtesy (A and B) Force Dimension, Switzerland, (C) Quanser, Canada, and (D) Haption, Germany.

matching that of the human arm and a corresponding maximum force and torque of 35 N and 3 N m. These systems were compared in a study to identify the most appropriate and comfortable system for completing a surgical task and the PHANToM Premium 3 was found to offer the best overall performance, potentially relating to the similarity of its kinematic structure to that of the human arm [110]. In recent years, research has explored the development of kinematic mechanisms to improve haptic manipulators. Lambert and Herder developed a combined parallel and serial device which allows haptic grasping and six DoFs motion while locating drive motors in the base [111]. Similarly, a new six DoFs coupled-parallel device, Delta Haptic, is able to provide a large, singularity-free workspace with the low inertial characteristics associated with parallel mechanisms [112]. In addition, advances in actuator technology have enabled the design of haptic devices which improve stability and transparency in masterslave teleoperation, in particular those devices employing magnetorheological fluids as a means to passively vary interface compliance [113115], or using novel approaches, for instance the Maglev200 uses electromagnetic coupling to impart forces to the user (Butterfly Haptics, Pittsburgh, Pennsylvania, United States) (Fig. 15.7). In the last decade, researchers began to examine whether focusing on the tactile component of interface forces could bring advantages in terms of the size, complexity, cost, and wearability of haptic feedback systems. Tangential skin stretch (shear force) and vibration have been investigated by actuating a surface relative to the skin through friction and normal indentation. Devices have been developed to stretch the skin of the fingertip and thus convey tactile information to the user for stiffness discrimination in palpation [116] and to convey cues for navigation [117]. Prattichizzo et al. 
developed a wearable device that is worn on the fingerpad and applies a normal force to the fingerpad skin [118]. The device has been used to provide valuable feedback in surgical training tasks including needle insertion and peg-transfer [119]. Quek et al. developed a custom six DoFs tactile stylus which uses a combination of skin stretch and normal indentation to convey haptic information while avoiding the potential stability concerns found with direct force feedback systems [120]. It is important to consider how these technologies can be integrated into surgical robots; two prime examples are the work of Culjat et al., who relayed tactile information using a pneumatic balloon actuator mounted onto the hand controls of a da Vinci robotic system [121], and VerroTouch, pioneered by Kuchenbecker, a system providing vibration feedback in robotic surgery, demonstrated on the dVRK. VerroTouch measures high-bandwidth vibrotactile information from robotic instruments and relays this to the surgeon's fingertips using voice coil actuators [122]. An interesting adjunct to these technologies is recent research into "haptic surfaces", which employ morphable physical surfaces to allow cutaneous feedback (by changing tactile properties based on location, e.g., a surface with variable friction) or simultaneous kinesthetic and cutaneous feedback [107].

The integration of haptic feedback into surgical robots remains largely experimental, and it is a nontrivial task to ensure that the combined sensory, feedback, and control elements of a system perform in a reliable and stable manner. It is therefore instructive to examine the state of research-grade and precommercial robotic systems to understand current challenges and opportunities. Raven II is an open-architecture platform developed for collaborative research on the advancement of RMIS. The system comprises two cable-driven seven DoFs arms and a surgeon interface console complete with haptic feedback to a pair of three-fingered control devices [123]. Preceyes is a platform for vitreoretinal microsurgery. The system features a master–slave approach, with tremor control, motion scaling, and haptic force feedback from the microsurgical instrument tips [75]. Little detailed information is available on the system, likely due to its commercialization through a company which aims to obtain regulatory approval in 2019 and commercial availability in 2020. It is currently undergoing a clinical trial and was the first system to perform robotic eye surgery [124,125]. Eye Robot is a cooperatively controlled hand-over-hand system for retinal and vitreoretinal microsurgery which senses forces exerted by the surgeon on the tool handle and moves to comply, filtering out any tremor [126]. While haptic feedback systems can be considered a limiting factor preventing greater uptake of haptics in surgical robotics, it is also evident that a diverse range of technology is being developed. Force feedback systems are more common than tactile displays, but the latter offer a promising opportunity to augment and enhance surgical robot control without the stability concerns associated with direct force feedback. Moreover, with increasing demand for haptic displays across other sectors, it seems likely that they will change from specialist subsystems toward modular commodities, greatly facilitating their uptake in future surgical robots.

15.3.3 Human interaction

Section 15.1.1 highlights the virtues of haptic information within surgery and draws attention to the fact that humans combine multiple sources of information in order to obtain optimal state estimates, and are able to learn to switch between different sources of information when undertaking a particular goal-directed action. In support of this conjecture, survey data show that surgeons do not necessarily miss the absence of haptic feedback [127], and it is demonstrably the case that robotic procedures are being completed successfully by surgeons across the world despite the absence of some forms of haptic information. These facts suggest that it should be possible to create laparoscopic and robotic systems that provide different information signals to compensate for the degradation or removal of information normally obtained through the haptic modality. Moreover, augmented surgical devices may make it possible to provide more salient informational cues, with a higher signal-to-noise ratio than those normally available to the surgeon. These signals could be fed to the human haptic system or be provided via another sensory modality. In order to develop "haptic-enabled robotic systems" (which we will define as systems that enable access to the information normally made available through haptics), it is essential to understand how information is processed by the human brain. In any given environment, our sensory systems are noisy and the incoming signals open to ambiguity. For the brain to resolve this uncertainty and interact effectively with the environment, it must make inferences about the state of the world from imperfect knowledge. Numerous models of cognitive processing propose that these inferences are resolved in a Bayesian fashion [128,129]. Whether the underlying neural processes are indeed Bayesian (or involve other mechanisms that result in outputs that merely look Bayesian in nature) is a matter of debate, but there is much evidence to suggest that information processing within the brain results in Bayesian-type outcomes. Moreover, a Bayesian framework presents an instructive way of considering how surgeons use information in the operating theater. This can be illustrated through the example of a surgeon trying to decide whether tissue is malignant on the basis of a number of sources of information (e.g., visual and haptic "observations"), each of which is subject to noise. In the Bayesian approach, the imperfect knowledge of the exact parameter values is accounted for through probability distributions. The specification of prior information requires that knowledge about the parameters is expressed in


terms of a prior distribution, which is formulated independently of the observations. This probability density is then used, together with the observations, to obtain the posterior distribution using Bayes' theorem:

p(θ|x) = f(x|θ) π(θ) / ∫ f(x|θ) π(θ) dθ

where x = (x1, x2, . . ., xn) is the vector of information sources, π(θ) the prior probability density of the parameters, f(x|θ) the likelihood of the observations, and p(θ|x) the posterior probability density of the parameters given the observations. There is much evidence to show that the human sensory system is able to flexibly integrate different sources of information in a (Bayesian) statistically optimal manner [130]. It has been demonstrated empirically that the accuracy with which we can make perceptual judgments increases with the number of independent sensory signals [131,132] and, conversely, that uncertainty increases as information sources are reduced [133]. Increasing uncertainty places larger pressures on the offline cognitive systems underpinning human decision-making and can reduce the capacity of working memory and attentional resources [134]. It follows that increasing the availability and quality of information has the potential to improve the perceptual judgments of surgeons and this, in turn, might improve their surgical performance. Nevertheless, the Bayesian nature of human perception flags the potential complexities involved with providing multiple information sources. The perceptual judgments of the actor will not just be a function of the information provided but will also be a function of the prior knowledge that the human has about the reliability of that information. This means that the human can erroneously place high confidence in a source of information when that confidence is not warranted in the environment in which it is operating. For example, visual information is likely to provide precise estimates of hand position under conditions where there is good lighting and a rich abundance of visual cues (e.g., Ref. [135]); under impoverished viewing conditions the same confidence would be misplaced. Thus it can be seen that sensorimotor performance can be affected because of the "priors" that exist in a human operator.
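The precision-weighted (minimum-variance) combination of independent Gaussian cues described above can be made concrete with a short sketch. The numerical values (a "visual" and a "haptic" stiffness estimate with different reliabilities) are hypothetical, chosen only for illustration:

```python
import numpy as np

def fuse_cues(means, variances):
    """Precision-weighted fusion of independent Gaussian cue estimates,
    the standard model of statistically optimal cue integration."""
    m = np.asarray(means, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # precision (reliability) of each cue
    fused_mean = float(np.sum(w * m) / np.sum(w))  # reliability-weighted average
    fused_var = float(1.0 / np.sum(w))             # never exceeds the best single cue
    return fused_mean, fused_var

# Hypothetical cues: visual estimate 10.0 (var 1.0), haptic estimate 12.0 (var 4.0)
est, est_var = fuse_cues(means=[10.0, 12.0], variances=[1.0, 4.0])
# est = 10.4: pulled toward the more reliable (visual) cue
# est_var = 0.8: lower than either cue alone, i.e., reduced uncertainty
```

This illustrates why removing a cue (e.g., haptics) necessarily increases the variance of the fused estimate, consistent with the empirical findings cited above [131–133].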
It is also the case that the decisions made by a surgeon will be a function of the information provided by the system and the surgeon's existing knowledge (beliefs). This is again captured by Bayes' theorem. The strength of a surgeon's belief that a patient has a cancerous lump can be defined probabilistically as a real number between zero and one. The surgeon's prior belief in the probability of the event that the lump is cancerous (θ) can therefore be expressed as p(θ). The surgeon will have a prior belief about the probability of the lump being cancerous but will seek to test that belief by obtaining more information during the surgery. To obtain more information, the surgeon may start the process of palpation and, in open surgery, receive information about the likelihood of θ through the mechanoreceptors located within the hands. The surgeon can use this information to update his or her beliefs about the proposition that the lump is cancerous. Formally, Bayes' theorem expresses this as:

p(θ|x) = p(x|θ) p(θ) / p(x)

where p(x|θ) is the likelihood of observing a given level of tissue stiffness given that the lump is cancerous, and p(x) is the marginal probability of that observation, summed over the cancerous and noncancerous hypotheses. In short, the available haptic information coupled with the prior expectation of finding cancerous tissue determines the probability that the tissue will be classified as cancerous. This highlights that strong priors can dominate perceptual experiences and also indicates a separation between offline cognitive judgments and lower level sensorimotor control decisions. It is clear that careful consideration of such subjective biases is required when designing systems that enable the provision of additional sensory information [136,137]. Given that the human decision-making process is bounded by our information-processing capacities, it is crucial to consider how information should be fed back to the user. For example, augmenting the visual display on a master console with real-time data streams could increase access to information, but the benefits may be negligible for surgeons at the sharp end of the learning curve because cognitive resources are allocated to other fundamental aspects of the task. This consideration of how a human interacts with a robotic surgical system shows that it is important for careful thought to be given to the ways in which information is presented to the end-user. It is reasonable to conjecture that robotic systems which enable access to the information normally made available through haptic perception will allow improved surgical performance. Nevertheless, further work is required to understand how to effectively integrate this type of feedback into robot systems so that utility outweighs possible costs.

15.4 Future perspectives

The opportunity for haptics in surgical robots has long been recognized, but to date it remains largely confined to research studies. However, there is good reason to believe that a combination of clinical, technical, and commercial factors has now emerged to create the right conditions for widespread commercial translation of the technology.


The clinical needs, and equally the expectations, for surgical robots have evolved since their inception early in this millennium. From the first systems, which focused on robotics for general and orthopedic surgery, the market for surgical robots has expanded dramatically to encompass a wide array of specialties [32]. Within each specialty, robotic systems are being used to support an ever-expanding range of procedures. With these clinical advances comes recognition of, and demand for, haptic feedback across urology [138], colorectal surgery [139], gastroenterology [140], vascular surgery [65], plastic surgery [72], neurosurgery [73], and microsurgery [75]. While each specialty brings its own particular requirements, key trends are evident which have particular resonance for haptics.

1. Robots are operating at smaller scales. For example, supporting microsurgical procedures such as submillimeter vessel anastomoses and nerve repair necessitates both precise movement and regulation of force [72].
2. Robots are interacting with delicate anatomical structures, for example, performing dissection in neurosurgery [73].
3. Robots are enhancing the surgical experience. Robotic systems have utility beyond making precise movements and should support more advanced functionality, for example, providing information to support improved assessment and decision-making [37].
4. Robotic systems should bring improved patient outcomes, for example, using haptics to reduce surgical errors [31].

These trends must be considered in the context of ongoing healthcare challenges, particularly the need for efficiency in the surgical environment. Robotic systems should demonstrate a cost–benefit advantage to justify the increased complexity they often bring to the operating theater. Similarly, it is critical that haptic technology promotes adoption of surgical robots, rather than acting as a barrier due to lengthier learning curves and procedural complexity [23]. Fortunately, haptic technology is now maturing to the point at which these clinical requirements and challenges can be reasonably addressed, as demonstrated by the introduction of the first commercial systems with haptic capabilities. The burgeoning surgical robotics research field has catalyzed this process, both through the development of individual enabling technologies in sensing and feedback, and through research platforms like the Raven surgical system. The move toward micromanipulation is being supported by advances in haptic sensing technology, with examples of both force and tactile sensors at the millimeter scale. Equally, the development of high-fidelity haptic feedback systems is necessary to improve delicate and dexterous tasks such as the manipulation of soft-tissue structures. Close attention must be paid to providing haptic feedback in a form and function which integrates with, and improves, the surgical experience. Inappropriate implementations of haptic feedback risk cognitively overloading the surgeon with streams of potentially unnecessary or misleading information. The emergence of research into increasing (robotic) autonomy in surgical robotic systems will likely have a significant impact on the form and function of haptic feedback. Autonomy can be described as a spectrum along which control is ceded to robotic systems, from "low-autonomy" features (e.g., virtual fixtures), through task autonomy (e.g., suturing), and beyond, to surgeons acting in a supervisory capacity [141]. This may initiate a move away from high-bandwidth, low-latency feedback to support real-time "surgeon in the loop" control, toward intermittent but higher fidelity feedback to inform more complex decisions (e.g., identification of tumor margins). These needs will require further exploration of haptic sensing, particularly the application of multimodal sensing systems combined with data fusion and analysis techniques to provide rich data for improved feedback.

Accompanying these shifts in clinical need and technological capability is a commercial market more receptive to haptics. This is perhaps best exemplified in general surgery, where competition has recently flourished, ending the market dominance enjoyed by Intuitive Surgical and driving the adoption of innovative features like haptics to differentiate between competing systems [36]. However, there must be a strong health-economic argument for the technology, demonstrating that the clinical benefit obtained from integrating haptics justifies the added expense and complexity. This is particularly apt as the surgical robotics industry focuses on reducing traditionally high capital and maintenance costs [142], thus reducing margins for technology costs. In addition, it is important to recognize that obtaining regulatory approval for innovative technology is a key part of translating research into clinical practice. This encompasses metrics: consistent measures which demonstrate that a technology meets the relevant medical device standards with which systems must comply before receiving regulatory approval (e.g., FDA clearance or CE marking). Surgical robots, and the inclusion of haptics, are now being explicitly recognized and adopted into these practices; for example, the US National Institute of Standards and Technology identified ". . . critical performance metrics for force and haptic feedback" and "Development of performance metrics for evaluating the overall input/output motion of teleoperated surgical robots" as key priorities for its surgical robotics governance, with similar initiatives underway in European bodies [29]. It should be hoped that these standards can help guide the development of research into haptic systems for surgical robots and speed their adoption into clinical practice, so that they can bring patients the benefits they have long promised.


15.5 Conclusion

This chapter has taken a necessarily broad view of haptics in surgical robotics, considering the technical, psychological, clinical, and commercial facets of this flourishing field, from the concept of haptics through to its potential to bring clinical advances in a new generation of surgical robots. We have seen that haptics is an umbrella term encompassing a complex human sensory process, and that the study of how this process behaves (the field of psychophysics) is as important for surgical robotics as the technological advances necessary to realize these features in clinical practice. The surgical robotic landscape has reached a point of expansion into new features and surgical fields, and there is a related clinical demand for haptic technology to underpin this innovation. This has the potential to improve surgical performance and even enable new approaches, particularly in microsurgical procedures. However, to be commercially competitive and clinically viable these new systems must demonstrate a clear cost–benefit. Many challenges remain in the general field of surgical robotics, and haptics technology can play a role in addressing them. Increasing levels of task automation and intraoperative tissue assessment are two key areas in which haptic feedback will play a key role. Consequently, although the role of the surgeon may change with increased automation, the use of haptics seems likely to increase to match the current ubiquity of vision systems.

Acknowledgment This chapter is supported by the UK NIHR MedTech Co-operative in Surgical Technologies (a research network linked to the UK NHS), which provides insight into current surgical practice and associated clinical challenges.

References

[1] Fechner GT. Elements of psychophysics. Holt, Rinehart and Winston; 1966.
[2] Heidelberger M. Nature from within: Gustav Theodor Fechner and his psychophysical worldview. University of Pittsburgh Press; 2004.
[3] Heller MA. The psychology of touch. Psychology Press; 2013.
[4] Vallbo AB, Johansson RS. Properties of cutaneous mechanoreceptors in the human hand related to touch sensation. Hum Neurobiol 1984;3:3–14.
[5] Johansson RS, Flanagan JR. Coding and use of tactile signals from the fingertips in object manipulation tasks. Nat Rev Neurosci 2009;10:345–59. Available from: https://doi.org/10.1038/nrn2621.
[6] Grunwald M. Human haptic perception: basics and applications. Springer Science & Business Media; 2008.
[7] Proske U, Gandevia SC. The kinaesthetic senses. J Physiol 2009;587:4139–46. Available from: https://doi.org/10.1113/jphysiol.2009.175372.
[8] Fulkerson M. The unity of haptic touch. Philos Psychol 2011;24:493–516. Available from: https://doi.org/10.1080/09515089.2011.556610.
[9] Gallace A, Spence C. In touch with the future: the sense of touch from cognitive neuroscience to virtual reality. Oxford: Oxford University Press; 2014.
[10] Jones LA, Lederman SJ. Human hand function. New York: Oxford University Press; 2006.
[11] Sutter C, Drewing K, Müsseler J. Multisensory integration in action control. Front Psychol 2014;5:544.
[12] Soo-Chul L, Hyung-Kew L, Joonah P. Role of combined tactile and kinesthetic feedback in minimally invasive surgery. Int J Med Robot 2014;11:360–74. Available from: https://doi.org/10.1002/rcs.1625.
[13] Culbertson H, Schorr SB, Okamura AM. Haptics: the present and future of artificial touch sensations. Annu Rev Control Robot Auton Syst 2018. Available from: https://doi.org/10.1146/annurev-control-060117.
[14] Culmer P, Barrie J, Hewson R, Levesley M, Mon-Williams M, Jayne D, et al. Reviewing the technological challenges associated with the development of a laparoscopic palpation device. Int J Med Robot 2012;8:146–59. Available from: https://doi.org/10.1002/rcs.1421.
[15] Mushtaq F, O'Driscoll C, Smith F, Wilkins D, Kapur N, Lawton R. Contributory factors in surgical incidents as delineated by a confidential reporting system. Ann R Coll Surg Engl 2018;100:401–5. Available from: https://doi.org/10.1308/rcsann.2018.0025.
[16] Jamieson ES, Chandler JH, Culmer PR, Manogue M, Mon-Williams M, Wilkie RM. Can virtual reality trainers improve the compliance discrimination abilities of trainee surgeons? IEEE Eng Med Biol 2015;466–9. Available from: https://doi.org/10.1109/EMBC.2015.7318400.
[17] Nitsch V, Faber B. A meta-analysis of the effects of haptic interfaces on task performance with teleoperation systems. IEEE Trans Haptics 2013;6:387–98. Available from: https://doi.org/10.1109/TOH.2012.62.
[18] Weber B, Eichberger C. The benefits of haptic feedback in telesurgery and other teleoperation systems: a meta-analysis. Hum Comput Interact Int 2015;9177. Available from: https://doi.org/10.1007/978-3-319-20684-4.
[19] Gwilliam JC, Pezzementi Z, Jantho E, Okamura AM, Hsiao S. Human vs. robotic tactile sensing: detecting lumps in soft tissue. Haptics Symp 2010 IEEE 2010;21–8. Available from: https://doi.org/10.1109/HAPTIC.2010.5444685.
[20] Herron DM, Marohn M, the SAGES-MIRA Robotic Surgery Consensus Group. A consensus document on robotic surgery. Surg Endosc 2008;22:313–25. Available from: https://doi.org/10.1007/s00464-007-9727-5.


[21] Westebring-van der Putten EP, Goossens RHM, Jakimowicz JJ, Dankelman J. Haptics in minimally invasive surgery—a review. Minim Invasive Ther Allied Technol 2008;17:3–16. Available from: https://doi.org/10.1080/13645700701820242.
[22] Hokayem PF, Spong MW. Bilateral teleoperation: an historical survey. Automatica 2006;42:2035–57. Available from: https://doi.org/10.1016/j.automatica.2006.06.027.
[23] Enayati N, Momi ED, Ferrigno G. Haptics in robot-assisted surgery: challenges and benefits. IEEE Rev Biomed Eng 2016;9:49–65. Available from: https://doi.org/10.1109/RBME.2016.2538080.
[24] Okamura AM. Haptic feedback in robot-assisted minimally invasive surgery. Curr Opin Urol 2009;19:102.
[25] Maddahi Y, Zareinia K, Sepehri N, Sutherland G. Surgical tool motion during conventional freehand and robot-assisted microsurgery conducted using neuroArm. Adv Robot 2016;30:621–33. Available from: https://doi.org/10.1080/01691864.2016.1142394.
[26] Trejos AL, Jayender J, Perri MT, Naish MD, Patel RV, Malthaner RA. Robot-assisted tactile sensing for minimally invasive tumor localization. Int J Robot Res 2009;28:1118–33. Available from: https://doi.org/10.1177/0278364909101136.
[27] Koehn JK, Kuchenbecker KJ. Surgeons and non-surgeons prefer haptic feedback of instrument vibrations during robotic surgery. Surg Endosc 2015;29:2970–83. Available from: https://doi.org/10.1007/s00464-014-4030-8.
[28] Abbott JJ, Marayong P, Okamura AM. Haptic virtual fixtures for robot-assisted manipulation. Robotics Research 2007;28:49–64. Springer, Berlin, Heidelberg. Available from: https://doi.org/10.1007/978-3-540-48113-3_5.
[29] Díaz CE, Fernández R, Armada M, García F. A research review on clinical needs, technical requirements, and normativity in the design of surgical robots. Int J Med Robot 2017;13. Available from: https://doi.org/10.1002/rcs.1801.
[30] Sutherland GR, Maddahi Y, Gan LS, Lama S, Zareinia K. Robotics in the neurosurgical treatment of glioma. Surg Neurol Int 2015;6:S1–8. Available from: https://doi.org/10.4103/2152-7806.151321.
[31] Greenberg JA. From open to MIS: robotic surgery enables surgeons to do more with less. Ann Surg 2018;267:220. Available from: https://doi.org/10.1097/SLA.0000000000002335.
[32] Amirabdollahian F, Livatino S, Vahedi B, Gudipati R, Sheen P, Gawrie-Mohan S, et al. Prevalence of haptic feedback in robot-mediated surgery: a systematic review of literature. J Robot Surg 2018;12:11–25. Available from: https://doi.org/10.1007/s11701-017-0763-4.
[33] Freschi C, Ferrari V, Melfi F, Ferrari M, Mosca F, Cuschieri A. Technical review of the da Vinci surgical telemanipulator. Int J Med Robot 2013;9:396–406. Available from: https://doi.org/10.1002/rcs.1468.
[34] Rassweiler JJ, Goezen AS, Klein J, Liatsikos E. New robotic platforms. Robotic Urology 2018. Springer, Cham. Available from: https://doi.org/10.1007/978-3-319-65864-3_1.
[35] Peters BS, Armijo PR, Krause C, Choudhury SA, Oleynikov D. Review of emerging surgical robotic technology. Surg Endosc 2018;1–20. Available from: https://doi.org/10.1007/s00464-018-6079-2.
[36] Rao PP. Robotic surgery: new robots and finally some real competition! World J Urol 2018. Available from: https://doi.org/10.1007/s00345-018-2213-y.
[37] Yang G-Z, Bellingham J, Dupont PE, Fischer P, Floridi L, Full R, et al. The grand challenges of science robotics. Sci Robot 2018;3:eaar7650. Available from: https://doi.org/10.1126/scirobotics.aar7650.
[38] Gidaro S, Buscarini M, Ruiz E, Stark M, Labruzzo A. Telelap Alf-X: a novel telesurgical system for the 21st century. Surg Technol Int 2012;22:20–5.
[39] Rossitto C, Gueli Alletti S, Fanfani F, Fagotti A, Costantini B, Gallotta V, et al. Learning a new robotic surgical device: Telelap Alf X in gynaecological surgery. Int J Med Robot 2016;12:490–5. Available from: https://doi.org/10.1002/rcs.1672.
[40] Alletti SG, Rossitto C, Cianci S, Restaino S, Costantini B, Fanfani F, et al. Telelap ALF-X vs standard laparoscopy for the treatment of early-stage endometrial cancer: a single-institution retrospective cohort study. J Minim Invasive Gynecol 2016;23:378–83. Available from: https://doi.org/10.1016/j.jmig.2015.11.006.
[41] Spinelli A, David G, Gidaro S, Carvello M, Sacchi M, Montorsi M, et al. First experience in colorectal surgery with a new robotic platform with haptic feedback. Colorectal Dis 2017. Available from: https://doi.org/10.1111/codi.13882.
[42] Abdel Raheem A, Troya IS, Kim DK, Kim SH, Won PD, Joon PS, et al. Robot-assisted Fallopian tube transection and anastomosis using the new REVO-I robotic surgical system: feasibility in a chronic porcine model. BJU Int 2016;118:604–9. Available from: https://doi.org/10.1111/bju.13517.
[43] Lim JH, Lee WJ, Park DW, Yea HJ, Kim SH, Kang CM. Robotic cholecystectomy using Revo-I Model MSR-5000, the newly developed Korean robotic surgical system: a preclinical study. Surg Endosc 2017;31:3391–7. Available from: https://doi.org/10.1007/s00464-016-5357-0.
[44] Kim DK, Park DW, Rha KH. Robot-assisted partial nephrectomy with the REVO-I robot platform in porcine models. Eur Urol 2016;69:541–2. Available from: https://doi.org/10.1016/j.eururo.2015.11.024.
[45] Hirzinger G, Hagn U. Flexible heart surgery. Ger Res 2010;32:47. Available from: https://doi.org/10.1002/germ.201090020.
[46] Sutherland GR, Latour I, Greer AD, Fielding T, Feil G, Newhook P. An image-guided magnetic resonance-compatible surgical robot. Neurosurgery 2008;62:286–93. Available from: https://doi.org/10.1227/01.neu.0000315996.73269.18.
[47] Maddahi Y, Zareinia K, Gan LS, Sutherland C, Lama S, Sutherland GR. Treatment of glioma using neuroArm surgical system. BioMed Res Int 2016;2016:9734512. Available from: https://doi.org/10.1155/2016/9734512.
[48] Al-Ahmad A, Grossman JD, Wang PJ. Early experience with a computerized robotically controlled catheter system. J Interv Card Electrophysiol 2005;12:199–202. Available from: https://doi.org/10.1007/s10840-005-0325-y.
[49] Kesner SB, Howe RD. Robotic catheter cardiac ablation combining ultrasound guidance and force control. Int J Robot Res 2014;33:631–44. Available from: https://doi.org/10.1177/0278364913511350.

260

Handbook of Robotic and Image-Guided Surgery


Haptics in Surgical Robots Chapter | 15


15. Haptics in surgical robots






16 S-Surge: A Portable Surgical Robot Based on a Novel Mechanism With Force-Sensing Capability for Robotic Surgery

Uikyum Kim(1), Yong Bum Kim(2), Dong-Yeop Seok(2) and Hyouk Ryeol Choi(2)

(1) Korea Institute of Machinery & Materials, Daejeon, South Korea
(2) Sungkyunkwan University, Suwon, South Korea

ABSTRACT

In order to create a compact and lightweight surgical robot with force-sensing capability, this chapter proposes a surgical robot called "S-surge," developed for robot-assisted minimally invasive surgery, with a primary focus on its mechanical design and force-sensing system. The robot consists of a four degrees-of-freedom (DoF) surgical instrument and a three-DoF remote center-of-motion manipulator. The manipulator adopts a double parallelogram mechanism combined with a spherical parallel mechanism, which offers a compact structure, simplicity, high precision, and high rigidity. A kinematic analysis is conducted to optimize the workspace. The surgical instrument provides multiaxis force sensing, comprising a three-axis pulling force and a single-axis gripping force. The entire robot weighs only 4.7 kg, making it feasible to carry and therefore suitable for telesurgery in remote areas. Finally, we explain how the robot's motion- and force-sensing capabilities can be used to simulate robot performance and to perform tissue-manipulation tasks in a simulated surgical environment.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00016-5 © 2020 Elsevier Inc. All rights reserved.

16.1 Introduction

Minimally invasive surgery (MIS) is a modern form of surgery that has largely replaced traditional open surgery. Unlike open surgery, in MIS specially designed long, thin surgical instruments and endoscopic devices are inserted into the patient through small incisions, and the surgeon performs the procedure while monitoring the endoscopic image on an external monitor. MIS has many advantages, such as reduced pain, shorter hospital stays, and a lower risk of infection [1]. However, conventional MIS performed manually by a surgeon has some disadvantages, such as insufficient tool flexibility and inaccurate operation, the latter usually a result of hand tremor. Robot-assisted MIS (RMIS) is an advanced form of MIS that can make up for these shortcomings [2,3]: surgery is performed by a robot remotely controlled by a surgeon. For these reasons, many researchers have developed various surgical robots. Among them, the da Vinci series developed by Intuitive Surgical is the most representative and successful FDA-approved surgical robot system [4], and its development has advanced the application of RMIS technology in clinical trials as well as in medicine and engineering. These robots usually consist of a seven degrees-of-freedom (DoF) manipulator and a four-DoF instrument. However, owing to their large size, weight, and cost, they are not suitable for portable use, and future robots for RMIS need not be limited to such systems. Equally problematic, surgeons currently receive no tactile sensation when operating with surgical robots, which prevents intuitive manipulation and can result in damage to tissue. Without tactile feedback, the surgeon also needs considerable time and effort to become proficient with robotic surgery [5-8].
Therefore considering all of these factors, a new type of portable surgical robot with force-sensing capability is needed, and it should be based on a remotely operated surgical system so that it can be quickly delivered to a place where surgery is required, such as a battlefield. The development of robots is also necessary because this can stimulate the healthcare economy. According to the minimum requirements of RMIS, the remote center of motion (RCM) mechanism has been studied to solve the above portability problem. The RCM is an almost fixed point in space through which the instrument shaft passes, and corresponds to a small incision in the patient’s skin. The RCM manipulator simplifies the mechanical approach and achieves a fixed RCM during robot operation without a physical constraint fulcrum. Although existing fixed RCMs have not been designed for portability for remote-site telesurgery, it is still possible to reduce the weight and size of RCM robots primarily because their minimum DoF is less than their corresponding DoF. The mechanism provides a three-DoF motion, including two-DoF tilting motion of the rotating RCM and an additional tool insertion motion [9]. At present, many RCM surgical robots using various mechanisms have been introduced [10]. The mechanisms applied to robot RCM motion are mainly divided into a spherical series mechanism, spherical parallel (SP) mechanism, and double parallelogram (DP) mechanism [11]. First, the Raven series was developed by Hannaford et al. and is a representative three-DoF RCM robot that uses a spherical tandem mechanism, as shown in Fig. 16.1A [12,13]. The robot FIGURE 16.1 RCM mechanisms of surgical manipulator. (A) Spherical serial mechanism. (B) Spherical parallel mechanism. (C) Double parallelogram mechanism. (D) Proposed mechanism. RCM, Remote center of motion.

S-Surge: A Portable Surgical Robot Based on a Novel Mechanism Chapter | 16

267

16.2

Overview of the surgical robot

In order to achieve portability and force-sensing capabilities, we designed a compact parallel manipulator based on an RCM mechanism and a four-axis force sensing system embedded in the surgical instrument. The RCM mechanism is based on the DP and SP mechanisms to ensure the stability of the RCM motion, and has many advantages in terms of parallelism. In order to achieve proper movement of the RMIS, we considered the target workspace of the manipulator. According to Ref. [9], the dexterous workspace (DWS) of the surgical robot is defined as a cone with a vertex angle of 60 degrees placed at the RCM point. Within the cone, the surgeon spends 95% of the operative time engaged in tissue manipulation tasks. In order to achieve the full range of surgery, the extended DWS (EDWS) was determined to be a cone with a 90-degree vertex angle at the RCM point. Therefore we chose EDWS to meet the design requirements of the robot. In addition, the insertion movement of the surgical instrument was determined to be 200 mm below the RCM. In addition, because the force-sensing capability of RMIS requires four-axis force measurement, three-axis force, and a gripping force [21], we introduced a four-axis force sensing capability in the proposed surgical robot. Fig. 16.2 shows the configuration of the proposed surgical robot, S-surge, which includes an RCM manipulator with three-DoF motion and a sensing surgical instrument with four-DoF motion. The robot part is designed by a DP mechanism that moves in parallel. A moving virtual triangle is generated on the mechanism and moved at a fixed angle. To

16. S-Surge Surgical Robot

weighs 20 kg and is designed to reduce size, weight, and complexity. However, owing to the tendon-driving mechanism, there remains some degree of structural complexity, and more compactness is required in order to realize the portability of the robot. Secondly, the SP mechanism was applied to RCM robots [11,14]. Fig. 16.1B shows the three-DoF ball-prism-spherical (SPS) manipulator based on the SP mechanism. The mechanism can also drive general prismsuniversal (UPU) robots, spherical-prism-general robots, and more. The plates on the virtual plane are controlled by prismatic joints or ball joints through fixed RCM points. The parallel mechanism is a more advanced mechanical device that makes the robot more compact, smaller, and more precise [15,16]. However, owing to the structural characteristics, it is difficult to ensure a sufficient working space. In addition, there is a risk of collision with opposing surgical robots or patients [9]. Third, many surgical robots have been developed by applying a DP mechanism, which is the most common RCM mechanism, as shown in Fig. 16.1C [1720]. This mechanism can be easily used to develop two-DoF RCM motion and is operated by two rotary actuators to provide a stable fixed RCM point through its structural features and potentially unlimited workspace. On the other hand, because the actuators are placed at the ends of the mechanism, position errors will occur with greater impact when a large load is applied to the actuators. In this study, we applied a new mechanism that combines the advantages of the DP and SP mechanisms. As shown in Fig. 16.1D, we made a virtual triangle using a ball joint fixed to the ground and two ball joints connected to two linear actuators, which are based on a dual SPS robot. The opposing joints of the actuators are fixed to the ground. The actuator manipulates the triangular two-DoF motion in the same manner as in the parallel direction. 
Therefore the mechanism can be smaller, more compact, more accurate, and stiffer. Then, using the DP mechanism, a simple twoDoF RCM motion is generated. In this regard, a new mechanism is selected, and the manipulator is designed by using design variables from a kinematic analysis of workspace optimization. In the actual design of the manipulator, a double SPU consisting of a sphere, a prism, and a universal joint is used to simplify its implementation. Therefore this mechanism can be referred to as a dual SPU-based DP mechanism. In addition, in this study, we integrated a force-sensing system into the robot to enhance its force-sensing capabilities. The system provides four-axis force-sensing capability, consisting of a three-axis manipulating force and single-axis gripping force. Regarding the integration of the system with the robot, we designed a four-DoF surgical instrument as an interchangeable sensorized instrument. The details of the force sensors included in the system were presented in a previous work [21]. We built the entire robot, the manipulator and the instrument, along with a lightweight seven-DoF robot (4.7 kg) with four-axis force sensing, motion analysis, and workspace optimization. To verify the applicability of the proposed robot, we constructed a surgical environment as a remote operating system based on masterslave control. Using this environment, we evaluated the proposed workspace and force-sensing performance of the robot. This chapter is organized as follows. In Section 16.2, we outline the surgical robot. In Section 16.3, we show our analysis and design of the surgical manipulator, and in Section 16.4, we describe how we use the integrated forcesensing system to develop sensor surgical instruments. Section 16.5 explains the implementation of the robot. In Section 16.6, we evaluate the developed robot in a simulated surgical environment. Finally, in Section 16.7, we discuss the results and summarize the chapter.


Handbook of Robotic and Image-Guided Surgery

To manipulate the triangle and perform the two-DoF spherical motion about the RCM, two linear actuators are applied to the robot. As in most RCM mechanisms, the instrument is inserted using an additional linear actuator. Then, in order to measure the four-axis force, a three-axis manipulating force sensor is placed on the wrist at the end of the instrument shaft, and two torque sensors that measure the gripping force are embedded in the actuation unit.

16.3 Surgical manipulator

16.3.1 Kinematic analysis

For a kinematic analysis of the manipulator, the motion structure of the manipulator is shown in Fig. 16.2A. Based on the dual SPU-based DP mechanism, the forward kinematics are calculated to analyze the manipulator's workspace. Five passively rotating joints (J1, J2, J3, J4, and J5) are placed in the DP structure and are driven by two linear actuators that produce a fixed RCM point, as shown in Fig. 16.2A. When the two linear displacements ($d_{1/2}$: $d_1$ and $d_2$) generated by the actuators are equal, the angular motion $\theta_1$ about the $X_5$-axis is generated at the RCM. When the actuators move in opposite directions, the angular motion $\theta_2$ about the $Z_5$-axis is generated at the RCM. We introduce a tilt angle ($\gamma$) to avoid collisions between the robot and the patient [9]. The resulting DH parameters of the manipulator are listed in Table 16.1. Based on these parameters, the homogeneous transformation matrix representing the position of the end effector is

$$ {}^{5}P = \begin{bmatrix} c\theta_1 c\theta_2 & s\theta_1 c\theta_2 & s\theta_2 & A\,c\theta_2 \\ -s\theta_1 & c\theta_1 & 0 & -(a_3 - a_5)\,s\theta_1 \\ -c\theta_1 s\theta_2 & -s\theta_1 s\theta_2 & c\theta_2 & -A\,s\theta_2 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \tag{16.1} $$

where $A = a_4 - (a_3 - a_5)\,c\theta_1$.

FIGURE 16.2 S-surge, a surgical robot, composed of a RCM manipulator and a sensorized surgical instrument for RMIS. RCM, Remote center of motion; RMIS, Robot-assisted minimally invasive surgery.

TABLE 16.1 DH parameters of the manipulator.

Joint | α_{i−1} | a_{i−1} | d_i | θ_i
----- | ------- | ------- | --- | ----------------
1     | π/2     | 0       | 0   | θ1
2     | π/2     | 0       | 0   | θ2
3     | 0       | a3      | 0   | θ3 = π − θ1
4     | 0       | a4      | 0   | θ4 = θ1
5     | 0       | a5      | 0   | θ5 = 0

Note: a3 and a4 are the lengths of links 3 and 4, respectively. a5 represents the distance from J4 to the RCM point or position of the end effector. RCM, Remote center of motion.
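As a numerical sketch of the forward kinematics above, the following evaluates the reconstructed form of Eq. (16.1) with the DH parameters of Table 16.1. The link lengths used here are illustrative placeholders, not the robot's actual dimensions.

```python
import numpy as np

def fk_rcm(theta1, theta2, a3, a4, a5):
    """End-effector pose from the reconstructed Eq. (16.1), angles in radians.
    Treat this as a sketch of the chapter's equation, not a verified model."""
    c1, s1 = np.cos(theta1), np.sin(theta1)
    c2, s2 = np.cos(theta2), np.sin(theta2)
    A = a4 - (a3 - a5) * c1
    return np.array([
        [ c1 * c2,  s1 * c2,  s2,   A * c2],
        [-s1,       c1,       0.0, -(a3 - a5) * s1],
        [-c1 * s2, -s1 * s2,  c2,  -A * s2],
        [0.0,      0.0,       0.0,  1.0],
    ])

# At theta1 = theta2 = 0 the rotation block is the identity and the end
# effector sits at (a4 - a3 + a5, 0, 0) along the X-axis.
T0 = fk_rcm(0.0, 0.0, a3=0.10, a4=0.15, a5=0.05)
print(T0[:3, 3])
```

For any pair of angles, the 3x3 rotation block of the returned matrix is orthonormal, which is a quick self-check of the reconstructed entries.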

S-Surge: A Portable Surgical Robot Based on a Novel Mechanism Chapter | 16


A dual-SPU parallel mechanism is embedded in the manipulator. Two SPU limbs are located between two corner points of the virtual triangle and the ends of the two actuators. The last corner point is connected to J1, and the virtual triangle is fixed on link 3. Therefore two linear motions move the triangle and produce the two spherical RCM motions ($\theta_{1/2}$: $\theta_1$ and $\theta_2$). According to the geometric relationship of the mechanism, the angular displacement ($\Delta\theta_{1/2}$) generated at J1 is equal to that at J5 (the RCM), as shown in Fig. 16.2B. Therefore the relationship between $\theta_{1/2}$ and $d_{1/2}$ can be expressed as

$$ {}^{5}P_{F1/F2} = {}^{5}R_X(\theta_1 - \psi)\, {}^{5}R_Z\!\left(\theta_2 - \tfrac{\pi}{2}\right) {}^{5}P_{F1_{init}/F2_{init}}. \tag{16.2} $$

Here, in order to simplify the calculation, the equations are expressed in the coordinate frame at the RCM. $P_{F1/F2}$ ($P_{F1}$ and $P_{F2}$) are the points that connect the limbs to the spherical joints of the triangular structure, and $P_{F1_{init}/F2_{init}}$ ($P_{F1_{init}}$ and $P_{F2_{init}}$) are their initial positions. The distance ($d_{1/2}$) between the triangle points ($P_{F1/F2}$) and the actuator base points ($P_{B1/B2}$) can then be expressed as

$$ d_{1/2} = \left\| {}^{5}P_{F1/F2} - {}^{5}P_{B1/B2} \right\|. \tag{16.3} $$

In addition, the linear displacements ($\Delta d_{1/2}$) are represented as

$$ \Delta d_{1/2} = d_{1/2} - d_{init}, \tag{16.4} $$

where $d_{init}$ is the initial value of $d_{1/2}$, expressed as

$$ d_{init} = \left\| {}^{5}P_{F1_{init}/F2_{init}} - {}^{5}P_{B1/B2} \right\|. \tag{16.5} $$

16.3.2 Workspace optimization

The target workspace is the EDWS, a cone with a vertex angle of 90 degrees at the RCM point; in other words, the required angular displacement ($\Delta\theta_{1/2}$) is 90 degrees. In this mechanism, we set two design parameters: the length ($l$) between J1 and the points ($P_{F1/F2}$) connected to the actuators, and the mounting angle ($\beta$) of the actuators, as shown in Fig. 16.3C. Several other parameters, such as the distance ($c$) between $P_{F1/F2}$ and $P_{B1/B2}$, are fixed by the selected motors and joints and by the compactness requirement of the robot. The positions of $P_{F1/F2}$ and $P_{B1/B2}$ ($P_{B1}$ and $P_{B2}$), and hence the workspace, change according to the two parameters. Therefore $P_{F1_{init}}$ is expressed as

$$ {}^{5}P_{F1_{init}} = \begin{bmatrix} 0.5c \\ l\sin\psi \\ -l\cos\psi \end{bmatrix}, \tag{16.6} $$

and $P_{F2_{init}}$ is symmetric with respect to the yz-plane. $P_{B1}$ can be expressed as

$$ {}^{5}P_{B1} = {}^{5}P_{0} + \begin{bmatrix} 0.5c + d'\sin\beta \\ -(l\cos\psi + d'\cos\beta)\tan\gamma \\ -l\cos\psi - d'\cos\beta \end{bmatrix}, \tag{16.7} $$

where $d'$ is the distance along the $X_5$-axis between $P_{F1_{init}/F2_{init}}$ and the end of the manipulator. The geometric relation of the mechanism then yields

$$ d_{1/2}^{2} = (d'\sec\beta)^{2} + (l\sin\psi)^{2}. \tag{16.8} $$

Based on the equations described above, the angular displacement ($\Delta\theta_1$) of the workspace cone is calculated and represented in Fig. 16.4. Here, $\Delta d_{1/2}$ was set to 150 mm by the selected linear actuator. According to the design limits, $d_{init}$ is 283 mm, and $\psi$ and $\gamma$ are set to 30 and 15 degrees, respectively. Fig. 16.4 shows the angular displacement ($\Delta\theta_1$) as a function of the two design parameters ($l$ and $\beta$). In the figure, a dotted line indicates the points at which $\Delta\theta_1$ is 90 degrees. Since increasing $\Delta\theta_1$ means a decrease in the distance between $P_0$
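The chain from Eqs. (16.2)-(16.5) can be sketched numerically: rotate the initial triangle point by the commanded angles and measure how far the actuator must extend. The point coordinates below are illustrative placeholders, not the chapter's actual design values.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def actuator_displacement(theta1, theta2, psi, P_F_init, P_B):
    """Eqs. (16.2)-(16.5): returns (d, d_init, delta_d) for one limb."""
    P_F = rot_x(theta1 - psi) @ rot_z(theta2 - np.pi / 2) @ P_F_init  # Eq. (16.2)
    d = np.linalg.norm(P_F - P_B)              # Eq. (16.3)
    d_init = np.linalg.norm(P_F_init - P_B)    # Eq. (16.5)
    return d, d_init, d - d_init               # Eq. (16.4)

# Placeholder geometry (mm): one triangle corner and one actuator base point.
psi = np.deg2rad(30)
P_F1_init = np.array([10.0, 110 * np.sin(psi), -110 * np.cos(psi)])  # Eq. (16.6) form
P_B1 = np.array([30.0, -20.0, -250.0])
print(actuator_displacement(np.deg2rad(75), np.deg2rad(90), psi, P_F1_init, P_B1))
```

A useful sanity check: at the neutral pose (theta1 = psi, theta2 = 90 degrees) both rotations are the identity, so the required displacement is zero.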


FIGURE 16.3 (A) Kinematic model of the two-DoF spherical mechanism based on 2-SPU-based double parallelogram. (B) Side view. (C) Top view. DoF, Degrees-of-freedom.

FIGURE 16.4 Relationship between the angular motion Δθ1 and two design parameters (β and l).


and $P_{F1_{init}/F2_{init}}$, the load and power consumption applied to the motors increase. Therefore the points on the dotted line are suitable candidates for the EDWS. Furthermore, along the dotted line, $\Delta\theta_2$ is also above 90 degrees for $(\beta, l)$ from (10 degrees, 110 mm) to (54.5 degrees, 150.8 mm). Thus points within this range of $l$ and $\beta$ are potential candidates for an EDWS-providing manipulator. Based on the workspace analysis, we analyzed the isotropy of the robot. The isotropy score is calculated from the Jacobian of the manipulator and ranges from one to infinity; it evaluates the uniformity of the manipulator's motion in all directions [9,15,22], and a score of one means the mechanism is completely isotropic. In order to derive the Jacobian matrix of the dual SPU-based DP mechanism, we applied the Jacobian analysis method for parallel robots (e.g., UPU, SPS, and SPU mechanisms) [23,24], using reciprocal screw theory to obtain the Jacobian matrix.

16.3.2.1 Jacobian analysis

Fig. 16.5A shows the portion of the dual-SPU parallel manipulator consisting of the moving virtual triangular plate, two limbs with the same joint structure, and a fixed base. As mentioned earlier, since the angular motions at the RCM and at $P_0$ are the same, we analyze the Jacobian on the basis of $P_0$. The linear actuator provides a prismatic joint, the universal joint is equivalent to two intersecting rotary joints, and the spherical joint is equivalent to three intersecting rotary joints, so the connectivity of each limb is equal to 6. As a result, the instantaneous twist of the plate ($\$_p$) can be expressed as

$$ \$_p = \dot\theta_{i,1}\hat\$_{i,1} + \dot\theta_{i,2}\hat\$_{i,2} + \dot{d}_{i,3}\hat\$_{i,3} + \dot\theta_{i,4}\hat\$_{i,4} + \dot\theta_{i,5}\hat\$_{i,5} + \dot\theta_{i,6}\hat\$_{i,6}, \quad i = 1, 2, \tag{16.9} $$

where the unit screws are represented as

$$ \hat\$_{i,1} = \begin{bmatrix} s_{i,1} \\ (b_i - d_i)\times s_{i,1} \end{bmatrix},\ \hat\$_{i,2} = \begin{bmatrix} s_{i,2} \\ (b_i - d_i)\times s_{i,2} \end{bmatrix},\ \hat\$_{i,3} = \begin{bmatrix} 0 \\ s_{i,3} \end{bmatrix},\ \hat\$_{i,4} = \begin{bmatrix} s_{i,4} \\ b_i \times s_{i,4} \end{bmatrix},\ \hat\$_{i,5} = \begin{bmatrix} s_{i,5} \\ b_i \times s_{i,5} \end{bmatrix},\ \hat\$_{i,6} = \begin{bmatrix} s_{i,6} \\ b_i \times s_{i,6} \end{bmatrix}, \tag{16.10} $$

FIGURE 16.5 (A) Schematic of the two-SPU parallel mechanism included in the surgical manipulator. (B) Isotropy score of the manipulator depending on the two design parameters (β and l).


where each $s_{i,j}$ is a unit vector along the axis of the $j$th joint of the $i$th limb. Here, the conditions $s_{i,1} = s_{i,5}$ and $s_{i,2} = s_{i,4}$ are satisfied. In addition, the mechanism contains a universal joint connecting the plate and $P_0$, whose twist is expressed as

$$ \$_p = \dot\theta_{i,1}\hat\$_{i,1} + \dot\theta_{i,2}\hat\$_{i,2}, \quad i = 3, \tag{16.11} $$

where the unit screws are represented as

$$ \hat\$_{i,1} = \begin{bmatrix} s_{i,1} \\ 0 \end{bmatrix}, \quad \hat\$_{i,2} = \begin{bmatrix} s_{i,2} \\ 0 \end{bmatrix}. \tag{16.12} $$

In this configuration, there is no reciprocal screw at any joint of a limb of the manipulator, so the reciprocal screw ($\hat\$_{r,i,0}$) can be expressed as

$$ \hat\$_{r,i,0} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad i = 1, 2, 3. \tag{16.13} $$

Taking the orthogonal product of both sides of Eqs. (16.9) and (16.11) with $\hat\$_{r,i,0}$ yields

$$ \hat\$_{r,i,0}^{T}\,\$_p = 0. \tag{16.14} $$

This equation can be rewritten in matrix form as

$$ J_c\,\$_p = 0, \tag{16.15} $$

where $J_c$ is represented as

$$ J_c = \begin{bmatrix} 0_{1\times3} & 0_{1\times3} \\ 0_{1\times3} & 0_{1\times3} \\ 0_{1\times3} & 0_{1\times3} \end{bmatrix}. \tag{16.16} $$

The constraints imposed by the joints are represented in Eq. (16.11), and the reciprocal screws of each limb form a one-system when the prismatic joints are locked. Thus one additional screw ($\hat\$_{r,i,1}$) can be represented as

$$ \hat\$_{r,i,1} = \begin{bmatrix} s_{i,3} \\ b_i \times s_{i,3} \end{bmatrix}, \quad i = 1, 2. \tag{16.17} $$

Here, $\hat\$_{r,3,1} = 0$ because limb 3 has no actuated prismatic joint. Taking the orthogonal product of both sides of Eqs. (16.9) and (16.11) with $\hat\$_{r,i,1}$ yields

$$ \hat\$_{r,i,1}^{T}\,\$_p = \dot{d}_i. \tag{16.18} $$

Eq. (16.18) can be rewritten in matrix form as

$$ J_x\,\$_p = \dot{q}, \tag{16.19} $$

where $J_x$ and $\dot{q}$ are expressed as

$$ J_x = \begin{bmatrix} (b_1 \times s_{1,3})^{T} & s_{1,3}^{T} \\ (b_2 \times s_{2,3})^{T} & s_{2,3}^{T} \\ 0_{1\times3} & 0_{1\times3} \end{bmatrix}, \quad \dot{q} = \begin{bmatrix} \dot{d}_1 & \dot{d}_2 & 0 \end{bmatrix}^{T}. \tag{16.20} $$


Using Eqs. (16.14) and (16.18), the relation between the velocities ($\dot{d}_1$ and $\dot{d}_2$) of the two linear actuators and the twist ($\$_p$) of the plate is expressed with a Jacobian matrix as

$$ J\,\$_p = \dot{q}, \quad J = \begin{bmatrix} (b_1 \times s_{1,3})^{T} & s_{1,3}^{T} \\ (b_2 \times s_{2,3})^{T} & s_{2,3}^{T} \\ 0_{1\times3} & 0_{1\times3} \\ 0_{1\times3} & 0_{1\times3} \\ 0_{1\times3} & 0_{1\times3} \\ 0_{1\times3} & 0_{1\times3} \end{bmatrix}, \quad \dot{q} = \begin{bmatrix} \dot{d}_{1,3} & \dot{d}_{2,3} & 0 & 0 & 0 & 0 \end{bmatrix}^{T}. \tag{16.21} $$

16.3.2.2 Mechanism of isotropy

The isotropy score is defined through the condition number $\kappa(J)$ of the simplified two-dimensional Jacobian of Eq. (16.21), which evaluates the kinematic performance of the manipulator and measures the directional uniformity of the angular motion ($\Delta\theta_{1/2}$). As a function of the design parameters ($\beta$ and $l$), the score can be expressed as

$$ \mathrm{ISO}(\beta, l) = \kappa(J)^{-1}, \quad \mathrm{ISO} \in (0, 1], \tag{16.22} $$

where $\kappa(J) = \|J\|\,\|J^{-1}\|$. Fig. 16.5B shows the kinematic isotropy score of the manipulator according to the design parameters (β and l). The horizontal axis in the figure follows the dashed line that satisfies the EDWS of the manipulator. Here, the optimal point is β = 54.5 degrees and l = 150.8 mm, at which the manipulator is the most isotropic. Therefore the optimized design of the robot meets the EDWS with good kinematic performance. Fig. 16.6 shows the surgical RCM manipulator based on the dual SPU-based DP mechanism. The manipulator includes two linear actuators for the spherical RCM motion and one linear actuator for the insertion motion of the instrument. To obtain the EDWS for RMIS, the structure of the manipulator is designed based on the analysis explained above. In the cylindrical part, four motors, their motor controllers, and the tool adapter are embedded. The instrument's four-DoF motion is driven by the motors equipped on the instrument adapter of the robot. The mechanical design does not expose any mechanical components.

16.4 Sensorized surgical instrument

The instrument is designed to measure the external forces generated when grasping and manipulating tissue. Fig. 16.7A shows a simplified structure of the sensing system of the instrument, comprising a gripping portion, a shaft portion, and an attachment portion. A miniature three-axis force sensor and two torque sensors are used to detect the manipulating and gripping forces. In this study, as shown in Fig. 16.7B, the sensors are mounted in the

FIGURE 16.6 Design of the RCM manipulator based on the dual SPU-based double parallelogram mechanism. RCM, Remote center of motion.



FIGURE 16.7 Force-sensing system that provides a four-DoF force information. (A) Simplified structure of the instrument integrating a three-axis force sensor and two torque sensors. (B) Three-axis force sensor integrated into the wrist of the instrument. (C) Torque sensor integrated into the actuation unit of the instrument. DoF, Degrees-of-freedom.

wrist and the actuation unit of the instrument, respectively. In this configuration, the gripping force ($F_G$) applied to the tip of the grasper is calculated as

$$ F_G = \frac{r_1}{r_2}\,(T_1 + T_2). \tag{16.23} $$

The three-axis Cartesian manipulating force $F_M$, considering the decoupling, is expressed as

$$ F_M = \begin{bmatrix} F_X \\ F_Y \\ F_Z \end{bmatrix} = F_W + \begin{bmatrix} 0 \\ 0 \\ |F_G| \end{bmatrix}, \tag{16.24} $$

where $F_W$ is the output force measured by the three-axis force sensor. The grasper portion of the instrument includes four cable-driven joints: a rolling joint, a wrist joint, and two gripping joints. Its four-DoF motion follows a general configuration similar to existing RMIS surgical instruments such as the da Vinci or Raven platforms [4,12]. Fig. 16.8A shows a detailed exploded view of the actuation unit, consisting of two pulleys, two torque sensors integrating two pulleys, a sensor controller, and four rotary knobs attached to the tool adapter of the manipulator. A total of four pulleys, including the two torque sensors, are mounted on the bottom of the attachment. The torque sensors are housed in the internal space of the instrument module to protect them from the external environment, and the sensor controller board is likewise housed in the internal space of the instrument. The board contains an MCU for receiving and processing raw data from the wrist force sensor and the torque sensors, and it has four pins for communication with external components; power is also supplied via these pins. Fig. 16.8B shows the assembled sensorized surgical instrument. The connector placed at the end of the actuation unit includes a power pin and a data transmission pin, and all mechanical parts are packaged. The cable routing of the unit is shown in Fig. 16.8. As shown in Fig. 16.8C, the two torque sensors are attached to the shaft and knob. Since the two pulleys integrated with the sensors are connected to the two gripper joints by drive cables, the sensors can measure the gripping force applied to the grasper. In addition, for the rolling motion, a cable connects the pulley to the shaft, as shown in Fig. 16.8D. Table 16.2 shows the range of motion of each joint in the gripper component, as well as the transmission ratios of the pulleys in the actuation unit of the instrument.

16.5 Implementation

16.5.1 Surgical manipulator

We implemented the surgical robot according to the manipulator and instrument designs described above. Fig. 16.9 shows the assembled RCM manipulator, whose three-DoF motion forms a cone with a vertex angle of 90 degrees.


FIGURE 16.8 Design of the surgical instrument including the force-sensing system and the actuation unit for actuating the four-DoF joints. (A) Exploded view. (B) Assembled view. (C) Wiring method for the two grasping motions. (D) Wiring method for the wrist and the roll motions. DoF, Degrees-of-freedom.


TABLE 16.2 Range of motion of joints in the instrument.

Joint           | Range of motion (degrees) | Transmission ratio
--------------- | ------------------------- | ------------------
Roll joint      | 360                       | 1:1
Wrist joint     | 150                       | 1:1
Grasping joints | 180                       | 1.5:1

The robot contains three brushless DC (BLDC) motors and four DC motors for controlling the RCM manipulator and the instrument, respectively, all operated under position control. Three 8-W BLDC motors are used as linear actuators (EC-max 16, Maxon Motor AG, Switzerland); they were chosen for their high power density, providing 8 W of continuous power at a 16-mm diameter. For linear motion, we use a commercial ball-screw mechanism (Spindle drive GP 16, Maxon Motor AG, Switzerland), which includes a 5.4:1 planetary gear head; the ball screws used for the angular and insertion movements have a pitch of 2 mm and a length of 200 mm. For position measurement, we use a three-channel encoder with 512 pulses per turn (MR, Maxon Motor AG, Switzerland); the linear motion has a resolution of 256 pulses/mm. For the position control of the BLDC motors, we use a commercial motor controller (EPOS2, Maxon Motor AG, Switzerland). The manipulator also includes the drive module for the actuation unit of the instrument. As shown in Fig. 16.9, grooves are used to transmit power to the joints of the instrument, and four DC motors (DC1724, Faulhaber Mini-Motor SA, Switzerland) are embedded in the drive module to actuate the instrument. The DC motor is


FIGURE 16.9 Implementation of the proposed surgical manipulator.

also controlled by position controllers embedded in the drive module of the instrument. Owing to the limited space of the instrument drive unit, we developed a new DC motor controller that can control two DC motors on a single circuit. As the driver, we use the L6205 DMOS dual full-bridge driver (ST Micro, United States), which can provide 2.8 A per channel. As the controller chip, we use an STM32F103 (ST Micro, United States) microcontroller, which has a Cortex-M3 architecture and operates at 70 MHz. We perform the position control of the DC motors using a traditional proportional-integral-derivative (PID) control scheme. In order to drive the four DC motors, two of these dual controllers are stacked in the drive unit. In addition, the three low-level controllers for the linear actuators are connected to the host PC using universal serial bus (USB, version 2.0) communication, while the two DC motor controllers use a controller area network (CAN) bus to communicate with the high-level controller. Similarly, the force sensor controller shares the CAN bus to transmit the measured force information to the high-level controller.
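The PID position loop mentioned above can be sketched as follows. The gains, the 1-kHz timing, and the toy first-order motor model are illustrative assumptions, not the actual firmware parameters.

```python
# Minimal PID position-control sketch (gains and plant model are illustrative).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=8.0, ki=8.0, kd=0.05, dt=0.001)
pos, vel = 0.0, 0.0
for _ in range(5000):                # 5 s of simulated 1-kHz control
    u = pid.update(1.0, pos)         # drive the axis toward 1.0 (e.g., mm)
    vel += (u - vel) * 0.05          # crude motor lag / friction dynamics
    pos += vel * pid.dt
print(round(pos, 3))
```

On the real controller the same loop would run in a timer interrupt, with encoder counts as the measurement and a PWM duty cycle as the output.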

16.5.2 Sensorized surgical instrument

Fig. 16.10 shows the assembled sensorized surgical instrument with the four-axis force-sensing system. For force sensing, a three-axis force sensor and two torque sensors are integrated into the wrist and the actuation unit, respectively. Four joints (a rolling joint, a wrist joint, and two gripping joints) are placed at the end of the instrument. Based on the tendon-driven actuation mechanism, the joints and the pulleys in the actuation unit are connected by drive cables. The range of motion and transmission ratio of each joint are shown in Table 16.2, and the drive module for the instrument joints is explained above. The knobs shown in Fig. 16.10 are used to receive power from the drive module of the manipulator. The capacitance generated in the force sensor is measured by a single-chip capacitance-to-digital converter (AD7147, Analog Devices Inc., MA) integrated in the sensor; the sampling frequency and sensing range are 1.3 kHz and 16 pF, respectively. Fig. 16.10 also shows the implemented force sensor controller. In the controller,


FIGURE 16.10 Implementation of the sensorized surgical instrument.

we use an STM32F103 as the controller chip. The controller provides two I2C channels that read the measured forces from the three-axis force sensor and the two torque sensors.

16.5.3 Entire surgical robot: S-surge

Fig. 16.11 shows the assembled surgical robot consisting of the RCM manipulator and the instrument. The robot has seven-DoF motion and four-axis force sensing. It weighs 4.7 kg and measures 34 × 18 × 20 cm. Table 16.3 lists the detailed specifications of the robot.


FIGURE 16.11 Implementation of the entire surgical robot called “S-surge” and comprising the developed manipulator and instrument.


TABLE 16.3 Specifications of the developed surgical robot.

Quantity           | Value
------------------ | -----------------------------------------
Weight             | 4.7 kg
Maximum workspace  | 90-degree circular cone (radius of 15 cm)
Degrees of freedom | 7
Force sensing      | Four-axis force
Power consumption  | 34 W

FIGURE 16.12 Experimental environment based on a master-slave system: the console side is considered as the master, and the robot side as the slave.

16.6 Experiments

16.6.1 Experimental environment

We built an experimental environment to evaluate the workspace and force-sensing performance of the robot under development. Fig. 16.12 shows the experimental setup, including the developed robot and the main console. The robot is operated through master-slave control using the console. The slave side consists of the developed robot, a robot control PC (slave PC), a camera (LifeCam NX-6000, Microsoft), and a test bench. We used a commercial master device (Phantom Omni, Sensable Co.) for position input and force feedback. The host PC transmits the master signal to the slave side and displays the surgical scene to the user. In this environment, in order to visually check the motion and feedback forces, the motion of the robot corresponds to the motion of the master device without any scaling. In our previous work, we proposed a gripping device assembled on the master device [25]. The detailed configuration of the system and the control software running on each host is shown in Fig. 16.13. The slave robot and the master device are controlled by their own hosts, which regulate the data flow as shown in Fig. 16.14. We built the master host and the slave host on the Windows and Linux operating systems, respectively. The master control program sends the desired position obtained from the master device to the slave and receives real-time force data from the slave. The slave-side software is based on the Robot Operating System (ROS) and consists of five nodes: an inverse kinematics node, a robot control node, an instrument control node, a sensor calibration node, and a data logging node. The role of each node is described in the figure with its inputs and outputs. There are four different communication methods in the system: User Datagram Protocol (UDP), CAN, USB, and ROS messaging. We use UDP as the communication protocol between the


FIGURE 16.13 Configuration of the software and communication link used in the experimental setup.


FIGURE 16.14 Data-flow diagram of the experimental setup employed for the evaluation of the performances of the developed robot.

master and the slave, mainly because it is suitable for real-time data with low latency. Through this communication, images taken by the camera are always available to the surgeon. We use CAN to communicate between the instrument and the slave control node. Three BLDC motors drive the robot, and the robot communicates with the slave control node via USB. We use ROS to communicate between nodes in the slave host.
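The low-latency UDP link between master and slave described above can be sketched with plain sockets. The packet layout (three little-endian doubles for a desired position) and the port number are illustrative assumptions, not the system's actual protocol.

```python
import socket
import struct

PORT = 9000
PACKET = struct.Struct("<3d")  # desired x, y, z (e.g., mm), little-endian

def send_position(sock, addr, xyz):
    """Master side: pack and fire one desired-position datagram."""
    sock.sendto(PACKET.pack(*xyz), addr)

def recv_position(sock):
    """Slave side: unpack one desired-position datagram."""
    data, _ = sock.recvfrom(PACKET.size)
    return PACKET.unpack(data)

slave = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
slave.bind(("127.0.0.1", PORT))

master = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_position(master, ("127.0.0.1", PORT), (10.0, -5.0, 42.0))
print(recv_position(slave))
```

UDP fits this role because a stale position sample is useless: it is better to drop it and act on the newest one than to stall the control loop on retransmissions, as TCP would.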

16.6.2 Experimental results

Using the experimental setup, we first verified the robot's motion. The robot's workspace satisfies the EDWS, a cone with a 90-degree vertex angle. The cone is spanned by the two angular motions (θ1 and θ2), which are


FIGURE 16.15 Motion tests of the developed surgical robot. Maximum motions in the directions of θ1 and θ2.

FIGURE 16.16 Motion- and force-sensing tests of the developed surgical robot. Experimental images obtained while manipulating a simulated tissue.

above 90 degrees. Fig. 16.15 shows the motion of the robot about the RCM point in the θ1 direction; the motion in the θ2 direction is also verified, as shown in Fig. 16.15 and the supplementary video. The two angular motions (Δθ1 and Δθ2) are graphically confirmed to be 90 degrees. In addition, experiments were conducted to analyze a tissue-manipulation task based on the robot's motion and force sensing. As shown in Fig. 16.16, one end of the simulated tissue is fixed on the jig set, and the other end is grasped by the grasper of the robot. The robot is then controlled to follow the motion of the master device, during which it performs a circular motion while measuring the four-axis force. The tissue was made of a silicone rubber (Dragon Skin 30, Smooth-On) with a modulus of 592 kPa, shaped as a rectangular parallelepiped of 1.5 × 3 × 50 mm. To explain the situation in detail, a schematic diagram is shown in Fig. 16.18. In the figure, E and Einit indicate the position of the robot end effector after the movement and its initial position, respectively, and OT indicates the position of the fixed end of the tissue. Within a circular boundary whose radius is the initial tissue length (lt,init: 50 mm), the tissue is not stretched. The weight of the tissue is 3.2 g, corresponding to about 31.4 mN in the direction of gravity; we assume this weight has an insignificant effect on the measured force during the experiment. Eb is the position of the end effector on the boundary, lb is the distance between Einit and Eb, and Δlt is the displacement of the tissue, that is, the distance between Eb and E. During the two circular motions of the robot, the three-axis manipulating force, the gripping force, and the position of the end effector are measured and recorded, as shown in Fig. 16.17A and B. The manipulating force is calculated in Cartesian coordinates (the frame of the remote user) for force feedback as


FIGURE 16.17 (A) Measured three-axis manipulating force and grasping force during the experiment. (B) Location of the end effector of the robot in the user's frame. (C) Magnitude of the manipulating force and the distance from Einit to E.


$$ {}^{user}f_{manip} = {}^{user}_{ws}R\;{}^{ws}f_{manip}, \tag{16.25} $$

where ${}^{n}_{m}R$ is the rotation matrix of frame $m$ relative to frame $n$, and ${}^{m}f$ is the measured force in frame $m$; ${}^{user}_{ws}R$ was calculated using the robot's joint positions. Fig. 16.17A shows that the gripping force is detected in the region where the tissue is unstretched, and the manipulating force is then detected between the two dotted lines, which correspond to the circular boundary shown in Fig. 16.18; the detected manipulating force indicates that the tissue is stretched. Over the same time window, the position of the end effector is shown with the same dashed lines in Fig. 16.17B. The positions on the lines are (0, 26, −34) and (−24, 15, −31) mm, which indicate the positions of Eb on the boundary; the position of Einit is (0, 0, 0). Fig. 16.17C shows the magnitude of the manipulating force and the distance from Einit to E. Here, lb is found to be 42.8 mm. Therefore the distance (7 mm) between Einit and OT, the end of the tissue fixed to the jig, is obtained by

$$ O_T E_{init} = l_{t,init} - l_b. \tag{16.26} $$

In addition, we calculate the modulus of the tissue by analyzing the force and the distance. In Fig. 16.17C, the increase beyond Eb indicates the displacement (Δlt). Therefore the distance between OT and E is calculated as


FIGURE 16.18 Schematic drawing of the experiment manipulating the tissue.

$$ O_T E = l_{t,init} + \Delta l_t. \tag{16.27} $$

Furthermore, the distance to the farthest position (Emax) was found to be 61.8 mm, so the maximum displacement of the tissue (Δlt,max) was 19 mm. Since lt,init is 50 mm, the strain ε is 0.38. At this point, the maximum force was measured to be 1.08 N. As described above, the cross-section of the tissue is a 1.5 × 3 mm² rectangle, so the stress σ applied to the tissue is 240 kPa. Therefore, based on E = σ/ε, the modulus E is 631 kPa, which matches the specified modulus (592 kPa) to within 93.4%. The average modulus during stretching was calculated to be 581 kPa, a 98.1% match.
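The stress-strain arithmetic above can be checked in a few lines, using only the quantities reported in this section:

```python
# Quick check of the stress-strain numbers reported above (Section 16.6.2).
l_init = 50.0          # initial tissue length [mm]
l_b = 42.8             # distance from E_init to the stretch boundary [mm]
d_max = 61.8           # farthest end-effector distance from E_init [mm]
F_max = 1.08           # maximum measured manipulating force [N]
area = 1.5e-3 * 3e-3   # tissue cross-section, 1.5 x 3 mm^2, in m^2

dl_max = d_max - l_b          # 19 mm of stretch beyond the boundary
strain = dl_max / l_init      # 0.38
stress = F_max / area         # 240 kPa
modulus = stress / strain     # ~631 kPa, vs. the 592-kPa specification
print(strain, stress, modulus)
```

The ratio of the specified to the calculated modulus reproduces the ~93.4% match quoted above.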

16.7 Conclusion

In this chapter, we presented a portable surgical robot equipped with a four-axis force-sensing system for RMIS. The proposed system consists of an RCM manipulator and a sensorized surgical instrument. In order to combine the mechanical advantages of a general parallel mechanism with the stable RCM structure of the DP mechanism, we designed the robot based on the dual SPU-based DP mechanism. The robot mainly consists of three linear actuators that produce motion about the fixed RCM. To achieve the 90-degree EDWS, we designed the robot through kinematic analysis and workspace optimization over two mechanical design parameters. The instrument focuses on the integration of a force-sensing system with four-DoF motion: we integrated a three-axis force sensor into the wrist of the instrument and two torque sensors into its drive unit, which measure the three-axis tissue manipulating force and the tissue gripping force, respectively. The developed surgical robot weighs 4.7 kg and is light and portable. We evaluated the EDWS workspace and performed tissue-manipulation tasks using the motion and force sensing of the robot in a simulated surgical environment. During two circular motions of the robot, the position of the end effector and the measured forces were analyzed. As a result, the stress and strain of the tissue were obtained, and the average calculated modulus matched the reference value to within 98.1%. In the future, we will set up a remote surgical robot environment and use it to evaluate the performance of the robot, including its low power consumption and high stiffness. Based on force feedback control, various clinical tasks (tissue dissection, suturing, and examination of animal organs) will be performed using various instruments such as knives and scissors.

References

[1] Hu JC, Gu X, Lipsitz SR, Barry MJ, D'Amico AV, Weinberg AC, et al. Comparative effectiveness of minimally invasive vs open radical prostatectomy. JAMA 2009;302(14):1557-64.
[2] Hanly EJ, Talamini MA. Robotic abdominal surgery. Am J Surg 2004;188:S19-26.
[3] Cleary K, Melzer A, Watson V, Kronreif G, Stoianovici D. Interventional robotic systems: applications and technology state-of-the-art. Minim Invasive Ther Allied Technol 2006;15(2):101-13.
[4] Guthart GS, Salisbury JK. The Intuitive telesurgery system: overview and application. In: Proceedings 2000 ICRA, IEEE International Conference on Robotics and Automation, vol. 1. IEEE; 2000. p. 618-21.
[5] Kim U, Lee DH, Yoon WJ, Hannaford B, Choi HR. Force sensor integrated surgical forceps for minimally invasive robotic surgery. IEEE Trans Robot 2015;31(5):1214-24.
[6] Rosen J, Brown JD, De S, Sinanan M, Hannaford B. Biomechanical properties of abdominal organs in vivo and postmortem under compression loads. J Biomech Eng 2008;130(2):021020.
[7] Puangmali P, Althoefer K, Seneviratne LD, Murphy D, Dasgupta P. State-of-the-art in force and tactile sensing for minimally invasive surgery. IEEE Sens J 2008;8(4):371-81.
[8] Wagner CR, Stylopoulos N, Jackson PG, Howe RD. The benefit of force feedback in surgery: examination of blunt dissection. Presence Teleoper Virtual Environ 2007;16(3):252-62.
[9] Lum MJ, Rosen J, Sinanan MN, Hannaford B. Optimization of a spherical mechanism for a minimally invasive surgical robot: theoretical and experimental approaches. IEEE Trans Biomed Eng 2006;53(7):1440-5.
[10] Kuo CH, Dai JS, Dasgupta P. Kinematic design considerations for minimally invasive surgical robots: an overview. Int J Med Robot Comput Assist Surg 2012;8(2):127-45.
[11] Aksungur S. Remote center of motion (RCM) mechanisms for surgical operations. Int J Appl Math Electron Comput 2015;3(2):119-26.
[12] Lum MJ, Friedman DC, Sankaranarayanan G, King H, Fodero K, Leuschke R, et al. The RAVEN: design and validation of a telesurgery system. Int J Robot Res 2009;28(9):1183-97.
[13] Hannaford B, Rosen J, Friedman DW, King H, Roan P, Cheng L, et al. Raven-II: an open platform for surgical robotics research. IEEE Trans Biomed Eng 2013;60(4):954-9.
[14] Kuo CH, Dai JS. Kinematics of a fully-decoupled remote center-of-motion parallel manipulator for minimally invasive surgery. J Med Device 2012;6(2):021008.
[15] Li J, Xing Y, Liang K, Wang S. Kinematic design of a novel spatial remote center-of-motion mechanism for minimally invasive surgical robot. J Med Device 2015;9(1):011003.
[16] Hong MB, Jo YH. Design of a novel 4-DOF wrist-type surgical instrument with enhanced rigidity and dexterity. IEEE/ASME Trans Mechatron 2014;19(2):500-11.
[17] Shin WH, Kwon DS. Surgical robot system for single-port surgery with novel joint mechanism. IEEE Trans Biomed Eng 2013;60(4):937-44.
[18] Bai G, Qi P, Althoefer K, Li D, Kong X, Dai JS. Kinematic analysis of a mechanism with dual remote centre of motion and its potential application. In: ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASME; 2015. p. V05BT08A011.
[19] Li J, Zhang G, Xing Y, Liu H, Wang S. A class of 2-degree-of-freedom planar remote center-of-motion mechanisms based on virtual parallelograms. J Mech Robot 2014;6(3):031014.
[20] Hadavand M, Mirbagheri A, Behzadipour S, Farahmand F. A novel remote center of motion mechanism for the force-reflective master robot of haptic tele-surgery systems. Int J Med Robot Comput Assist Surg 2014;10(2):129-39.
[21] Lee DH, Kim U, Gulrez T, Yoon WJ, Hannaford B, Choi HR. A laparoscopic grasping tool with force sensing capability. IEEE/ASME Trans Mechatron 2016;21(1):130-41.
[22] Kircanski MV. Robotic isotropy and optimal robot design of planar manipulators. In: Proceedings 1994 IEEE International Conference on Robotics and Automation. IEEE; 1994. p. 1100-5.
[23] Joshi SA, Tsai L-W. Jacobian analysis of limited-DOF parallel manipulators. J Mech Des 2002;124:254-8.
[24] Lu Y, Ye N, Lu Y, Mao B, Zhai X, Hu B. Analysis and determination of associated linkage, redundant constraint, and degree of freedom of closed mechanisms with redundant constraints and/or passive degree of freedom. J Mech Des 2012;134(6):061002.
[25] Kim U, Seok DY, Kim YB, Lee DH, Choi HR. Development of a grasping force-feedback user interface for surgical robot system. In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE; 2016. p. 845-50.

17

Center for Advanced Surgical and Interventional Technology Multimodal Haptic Feedback for Robotic Surgery

Yen-Yi Juo, Ahmad Abiri, Jake Pensa, Songping Sun, Anna Tao, James Bisley, Warren Grundfest and Erik Dutson
University of California Los Angeles, Los Angeles, CA, United States

ABSTRACT
Haptic feedback is the provision of force information to the surgeon via a simulated sense of touch; its loss is commonly cited as a major drawback of robotic surgery. The Center for Advanced Surgical and Interventional Technology (CASIT) haptic feedback system was designed as an add-on solution with compatibility in mind: most of its interfaces to the robot are manufactured through a customizable 3D-printing process. The system's sensory units utilize commercially available piezoresistive force sensors, mounted on the robotic end effectors via a 3D-printed mounting plate. Force signals detected by the sensory units are processed by a laptop computer rather than a microcontroller, allowing maximal flexibility in the mapping algorithm. Multimodal haptic feedback, including normal force, vibratory feedback, and kinesthetic feedback, is provided to the surgeon's fingertips with pneumatic actuators and vibratory motors mounted on the master console. The system is capable of inducing a significant reduction in grip force, reduced visual perceptual mismatch, reduced suture failure, enhanced knot quality, and superior tissue characterization.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00017-7 © 2020 Elsevier Inc. All rights reserved.


17.1 Introduction

The sense of touch, that is, haptic feedback, is one of the most valuable tools a surgeon has for assessing tissue characteristics, maintaining cognitive orientation to the anatomical surroundings, and gauging the magnitude of forces being exerted. However, surgeons operating with robotic surgical systems, such as the da Vinci robotic surgical system (Intuitive Surgical; Sunnyvale, CA), face a complete loss of haptic feedback. Although the use of robotic-assisted platforms increased dramatically in the early 2000s across the United States [1] and many experienced surgeons have learned to adapt to the loss of haptic feedback, this loss has frequently been acknowledged as a major drawback of robotic surgery [2–5].

The adverse clinical effects associated with the loss of haptic feedback are difficult to quantify in the absence of a viable FDA-approved alternative robotic system. However, studies comparing experimental surgical systems with and without haptic feedback consistently demonstrate superior outcomes when haptic feedback is restored under laboratory settings [6]. Previous studies have shown that grip forces exerted during robotic surgery often exceed those needed to achieve a secure grasp when no haptic feedback is provided [7,8], likely because visually perceptible tissue deformation may not adequately reassure surgeons that a secure grasp has been achieved when gentle grip forces are applied. The excessive grip force, in turn, leads to increased soft-tissue crush injuries, especially among novices [9,10] (see Fig. 17.1). In addition, due to the lack of haptic sensations that correspond with visual information (visual perceptual mismatch) [11], surgeons are less able to orient their hands in relation to the operative environment in an intuitive manner, leading to more frequent errors and longer operative times during execution of basic surgical tasks [2].
Furthermore, excessive shear forces frequently lead to suture breakage and compromised quality of knots tied with the robot [12]. Finally, the loss of haptic feedback represents a major compromise in the surgeon's ability to characterize the tissues being manipulated. During open surgery, surgeons rely on their fingertips to locate critical structures that may not be visually apparent, such as nerves or vessels hidden underneath fat. Robotic surgeons must instead rely on memory of anatomical knowledge and recall of preoperative imaging studies to locate these structures, with no means of real-time feedback [13]. Recent literature reviews show unanimous agreement on the need for haptic feedback during robotic surgery, while no solutions or products are available to address this need [6,14]. In this chapter, we review existing models of haptic feedback systems for use in robotic surgery, describe a unique solution developed and validated at the Center for Advanced Surgical and Interventional Technology (CASIT) laboratory, and summarize key advantages associated with the restoration of haptic feedback in robotic surgery.

17.2 Feedback modalities

Haptic information, that is, force information detected by the instrument in direct contact with the tissue being manipulated, can be conveyed to the surgeon via two categories of feedback mechanisms: (1) sensory substitution, the replacement of haptic information with visual or auditory cues, or (2) haptic feedback, the provision of haptic information via an actuator that simulates the sense of touch (Table 17.1).

FIGURE 17.1 Correlation between grip force and soft-tissue injury. (Left) Focal hemorrhage in muscularis propria caused by excessive grip force during intestinal manipulation in a porcine model. (Right) Increased grip force is associated with correspondingly more severe soft-tissue injury as observed under histologic examination. Credit: Wottawa CR, Genovese B, Nowroozi BN, Hart SD, Bisley JW, Grundfest WS, Dutson EP. Evaluating tactile feedback in robotic surgery for potential clinical application using an animal model. Surg Endosc 2016;30(8):3198–209. Available from: https://doi.org/10.1007/s00464-015-4602-2.


TABLE 17.1 Instances of feedback modalities.

Sensory substitution
  Visual: bar graph display, color display, numerical display, etc.
  Auditory: alarm tone with varying pitch, loudness, etc.
  Temperature: corresponding warming of an actuator in contact with the surgeon's skin, etc.

Haptic feedback
  Tactile: graded normal force applied to the surgeon's skin
  Kinesthetic: graded resistance against the surgeon's grasp on the robotic console or against the surgeon's push/pull on the joystick

17.2.1 Sensory substitution

Sensory substitution is the process of mapping data originally aimed at one sensory modality onto a different sensory modality [15]. For example, the haptic force data recorded from the robotic grasper can be converted into visual information, such as a color bar or a force magnitude graph [15,16], or into auditory information, such as alarm tones of varying volume or pitch [15]. Sensory substitution has frequently been found to be distracting to surgeons, who are already inundated with alarms in the operating room, and is therefore perceived as less desirable and less intuitive than haptic feedback [17–19]. The detrimental effect of sensory overload becomes progressively more apparent as the complexity of the force information being relayed increases [20].
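The mapping described above can be made concrete with a short sketch. This is illustrative only: the 4 N full scale, the color bands, and the 200-1000 Hz pitch range are assumptions, not values from any system described in this chapter.

```python
def substitute(force_n, full_scale_n=4.0):
    """Map one grasper force sample to visual and auditory cues.

    Hypothetical parameters: full_scale_n, the color band cut points,
    and the pitch range are not taken from any deployed system.
    """
    level = min(max(force_n / full_scale_n, 0.0), 1.0)  # normalized [0, 1]
    # Visual substitution: fraction of a force bar to fill, plus a color band.
    color = "green" if level < 0.5 else ("yellow" if level < 0.8 else "red")
    # Auditory substitution: alarm pitch rises linearly with force.
    pitch_hz = 200.0 + 800.0 * level
    return level, color, pitch_hz
```

A 2 N grasp on this scale fills half the bar, shows yellow, and sounds a 600 Hz tone; the graded cues are exactly the kind of continuously varying signal that the cited studies found surgeons tune out amid other operating-room alarms.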

17.2.2 Haptic feedback

Haptic feedback is the direct provision of haptic information to the surgeon via a simulated sense of touch, similar to that normally processed by the human neurosensory pathways [21]. For example, normal force information can be conveyed to the operator's fingertips via pneumatic balloons that inflate in proportion to the force magnitude detected by the robotic graspers. The effectiveness of any haptic feedback system relies on its ability to integrate with existing human neurosensory pathways. The native human somatosensory system perceives the sense of touch via two categories of mechanoreceptors: (1) tactile feedback is perceived via sensors residing in the skin and relays refined, dynamic pressure information, and (2) kinesthetic feedback relies on mechanoreceptors in the muscle tendons to detect resistance and tension (Fig. 17.2). The data from these sensory receptors allow the motor center in the brain to control and fine-tune muscular responses and draw attention to critical events [22,23].

17.3 Existing haptic feedback systems for robotic surgery

All existing haptic feedback systems are conceptually composed of three components: sensing, signal processing, and actuation. First, strategically placed sensors installed on robotic instruments pick up the force signals deemed relevant by the engineer/surgeon. Second, the signal processing unit filters out noise and converts these force signals into the most appropriate type and level of feedback to be produced. Finally, actuators provide a variety of haptic feedback types, each targeted at a specific type of mechanoreceptor on the surgeon's skin.
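The three-stage sense, process, actuate architecture just described can be sketched as a simple control loop. The one-pole low-pass filter and the 5 N actuator full scale below are illustrative assumptions, not parameters of any published system.

```python
class HapticLoop:
    """Minimal sense -> process -> actuate loop mirroring the three
    components described in the text. The IIR filter constant and the
    5 N actuator full scale are hypothetical."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha      # filter smoothing factor, 0 < alpha <= 1
        self.filtered = 0.0     # filter state, newtons

    def process(self, raw_force_n):
        """Signal processing stage: reject noise with a one-pole low-pass."""
        self.filtered += self.alpha * (raw_force_n - self.filtered)
        return self.filtered

    def actuate(self, force_n):
        """Actuation stage: convert filtered force to a 0-100% drive command."""
        return 100.0 * min(max(force_n / 5.0, 0.0), 1.0)

    def step(self, raw_force_n):
        """One cycle: sensed force in, actuator command out."""
        return self.actuate(self.process(raw_force_n))
```

With no smoothing (alpha = 1), the command simply tracks the sensed force; smaller alpha trades responsiveness for noise rejection, the basic tension in the signal-processing stage.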

17.3.1 Sensing technology

The size and biocompatibility constraints of force sensors represent one of the most challenging aspects of engineering an effective haptic feedback system for robotic surgery. Currently prevalent laparoscopic trocars, through which robotic instruments are introduced into the human body, typically have diameters ranging from 5 to 12 mm; this is the maximal width beyond which sensors cannot be introduced into the human body. Furthermore, the most relevant haptic information is frequently detected at the tip of the instrument. A bulky sensor will not only hinder fine dissection at the instrument tip but also obstruct the surgeon's view of the tissue being manipulated.

FIGURE 17.2 Mechanoreceptors involved in tactile and kinesthetic force feedback in humans.

Piezoresistive sensors quantify force by sensing the corresponding change in electrical resistance of semiconductive materials such as silicon. However, this correlation is not always linear over the full dynamic range, limiting the range of force such sensors can detect. Capacitive sensors are composed of a compressible dielectric sandwiched between two conductive layers. Compression by a normal force decreases the distance between the layers and changes the capacitance, which can be converted to a digital force signal. A shearing force parallel to the layers changes their overlapping area and may be used as a mechanism to quantify shear force [24]. In addition, studies have reported the use of strain gauges, installed directly on the instrument shafts, to detect instrument bending [25], and the use of multimodal sensors (e.g., BioTac) to detect pressure, vibration, and temperature [26]. However, these approaches all require significant modification of the robotic instrument, and their bulky footprints make them impractical in the field of surgical robotics.
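The capacitive principle above reduces to the parallel-plate formula C = eps_r * eps0 * A / d: normal force shrinks the gap d (capacitance rises), while shear shrinks the overlap area A (capacitance falls). A sketch, with made-up plate dimensions and an assumed relative permittivity of 3:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(overlap_m2, gap_m, eps_r=3.0):
    """Parallel-plate capacitance of a compressible dielectric sensor."""
    return eps_r * EPS0 * overlap_m2 / gap_m

# At rest: 10 mm x 10 mm plates, 100 um dielectric (assumed values).
c_rest = capacitance(1e-4, 100e-6)
# Normal force compresses the dielectric by 20%: capacitance rises.
c_pressed = capacitance(1e-4, 80e-6)
# Shear slides the plates, cutting overlap by 10%: capacitance falls.
c_sheared = capacitance(0.9e-4, 100e-6)
```

Because normal force and shear move the capacitance in opposite directions, a single cell cannot separate the two; practical shear-capable designs, as in [24], use multiple electrodes so that area and gap changes can be distinguished.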

17.3.2 Actuation and feedback technology

Actuation technologies can be categorized by the targeted mechanoreceptors as providing either tactile or kinesthetic feedback. Tactile feedback can be provided via vibration, skin deformation, and normal force feedback. Vibration is effective in conveying graded force information [27] but has been found to be distracting, because the human brain intuitively perceives vibration as a warning signal [22]. Skin deformation, as an independent feedback modality, has been found to be ineffective unless coupled with force feedback [28]. Normal force feedback relies on activation of slow-adapting mechanoreceptors, similar to those activated during fine tuning of grasp force in real life, and has been demonstrated to be an effective actuation method for providing a simulated sense of touch [7]. Kinesthetic feedback, targeted at mechanoreceptors such as the Golgi tendon organs, seeks to recreate a sense of resistance rather than one of fine touch. It is most commonly implemented using motors installed on the hinges of robotic surgical consoles and is the most common type of haptic feedback advertised in upcoming commercial robotic surgical systems, such as the TransEnterix ALF-X, Titan Medical SPORT, MicroSurge, and SOFIE (Table 17.2). To imitate an intuitive sense of touch, it is most desirable for tactile and kinesthetic feedback to be provided in an integrated manner [29].


TABLE 17.2 Existing robotic-assisted laparoscopic systems with integrated haptic feedback.

System | Commercialization status | Force feedback actuator site
ALF-X (TransEnterix, United States) [30] | FDA-approved, awaiting commercialization | At fingertips and joystick arms
SPORT (Titan Medical Inc., Canada) [31] | Awaiting commercialization | In joystick arms
M7 (Stanford Research Institute, United States) [32] | Research prototype | At fingertips and joystick arms
MicroSurge (German Aerospace Center, Germany) [33] | Research prototype | In joystick arms
SOFIE (Technical University of Eindhoven, The Netherlands) [34] | Research prototype | In joystick arms
RAVEN II (Applied Dexterity, United States) [35] | Research prototype | At fingertips and joystick arms

17.4 The Center for Advanced Surgical and Interventional Technology multimodal feedback system

17.4.1 Overview

The haptic feedback system at CASIT, University of California Los Angeles, began development in 2008. The project seeks to advance research in the field of haptics in surgical robotics and promote its clinical application. The system was designed as an add-on solution with compatibility in mind; most interfaces with the robotic surgical system are therefore manufactured through a 3D-printing process, allowing future customization to fit different robotic consoles without modifying the core architecture of the system. While the majority of the following trials were performed using either the da Vinci robotic surgical system or the Raven II console [35], the CASIT haptic feedback system can theoretically be applied to any robotic surgical system with a master-slave configuration (Fig. 17.3).

The initial objective of the CASIT haptic feedback system was to reduce excessive grip force during robotic procedures. To this end, a prototype was developed utilizing water-resistant miniature piezoresistive sensors installed at the robotic grasper's jaws and providing haptic feedback to the surgeon's fingertips via customized pneumatic actuators [36,37]. The CASIT system has since undergone numerous iterations for a variety of applications. For example, adjusting the location of sensor placement and the directionality of the forces measured enabled functionalities ranging from reducing suture failure [12] and increasing knot-tying quality to providing artificial palpation for mapping soft-tissue characteristics [38]. A combination of multimodal actuation technologies allowed investigation of the utility of tactile feedback, including normal force and/or vibration feedback, and kinesthetic feedback in various scenarios [29,39].

17.4.2 Sensory unit

The CASIT system's sensory unit is mounted on the robotic instruments as an add-on component that can be modified for various instruments and has a footprint small enough to fit through a 12-mm robotic trocar. In addition, reliable readings can be obtained even when the unit is exposed to large normal and shear forces during tissue manipulation. The most recent iteration of the sensory unit relies on Tekscan FlexiForce B201 piezoresistive force sensors for normal force detection (Fig. 17.4). The detection area of the sensors is 9.52 mm in diameter, with an additional 4.47-mm polyester margin on the sides for waterproofing. The sensor is mounted on the robotic grasper tip with a 3D-printed mounting component measuring 10 mm in width (Fig. 17.5). Depending on the intended functionality, this mounting component can be modified to adapt to a variety of mounting locations and surgical instruments. For example, sensors can be mounted on the inner surface of a grasper to measure grip force, or on the shaft of an instrument to measure retraction force. The mounting plate provides a stable surface for the entire sensing area of the B201 sensor. The sensor and the mounting plate are secured to each other with a 0.1-mm acrylic adhesive to minimize relative movement. The sensors are then trimmed down from 14 mm to 11 mm in diameter, maintaining the bonding on the outer shell of the sensing area and preserving its waterproof seal.


FIGURE 17.3 Overview of the CASIT haptic feedback system, mounted on the da Vinci surgical system as an example. CASIT, Center for Advanced Surgical and Interventional Technology.

FIGURE 17.4 Tekscan FlexiForce sensors: (top left) 1 lb FlexiForce sensor, 14 mm diameter; (bottom left) 10 lb FlexiForce sensor, 7.6 mm; (top right) basic read-out circuitry; (bottom right) nonlinear behavior of the FlexiForce 10 lb sensors. Credit: Tekscan FlexiForce sensor product images & data sheet. <http://www.tekscan.com/sites/default/files/styles/product_image/public/flexiforce-a101-force-sensor-275-275_0.jpg?itok=x_fMT5ZT> [accessed 02.05.17].

FIGURE 17.5 3D-printed sensor mounting component.

FIGURE 17.6 Customized sensor board design: (A) voltage divider circuit for reading from FlexiForce sensors, (B and C) sensor board schematic, (D) sensor calibration.

One limitation of FlexiForce sensors, particularly the 1 lb variant of the B201, is that the recommended force range extends only to 4.4 N when using a high-gain read-out circuit (Fig. 17.4, bottom right). However, the da Vinci robot is capable of exerting grasp forces beyond 4.4 N, and our previous experiments showed that 4 N appeared to be the threshold beyond which substantial tissue crush injury is to be anticipated [9]; it is therefore critical to quantify force readings beyond 4.4 N. The main reason for the sensor's limited dynamic range is the read-out circuitry design; in reality, these sensors can record forces as high as 111 N if provided with appropriate read-out circuitry. We developed an in-house sensor board to expand the sensor's dynamic range (Fig. 17.6), utilizing a simple voltage divider circuit with a 10 kΩ resistor, a selection that allows accurate sensor readings of up to 15 N and up to 12 simultaneous sensor readings. The sensor board uses an Atmel SAM3X8E ARM Cortex-M3 MCU, as part of the Arduino Due development kit, for data processing and transmission to the computer. Calibration of the sensors was performed using a Mark-10 Series 3 force gauge.

In addition to normal force sensors, shear sensors have also been developed to measure forces in directions parallel to the grasper surface. The primary application of this proof-of-concept sensor was to demonstrate the feasibility of using haptic feedback to warn of suture slippage or suture fatigue during intracorporeal knot tying.
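The voltage-divider read-out just described can be modeled in a few lines. The divider equation and the roughly force-proportional conductance of piezoresistive sensors are standard; the 3.3 V supply and the calibration gain k below are hypothetical stand-ins for the fit obtained against the Mark-10 gauge:

```python
def divider_vout(r_sensor_ohm, r_ref_ohm=10_000.0, vcc=3.3):
    """Voltage across the fixed 10 kOhm reference resistor, with the
    piezoresistive sensor forming the upper leg of the divider."""
    return vcc * r_ref_ohm / (r_ref_ohm + r_sensor_ohm)

def force_from_vout(vout, r_ref_ohm=10_000.0, vcc=3.3, k=2.0e6):
    """Invert the divider to recover the sensor resistance, then exploit
    the roughly linear conductance-vs-force behavior: F ~ k / R_sensor.
    k (ohm * newton) is a hypothetical calibration constant, not the
    published fit."""
    r_sensor = r_ref_ohm * (vcc - vout) / vout
    return k / r_sensor
```

Choosing the reference resistor sets the trade-off the text describes: a smaller resistor keeps the output voltage well below saturation at high forces, extending the usable range at the cost of resolution at light touch.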


FIGURE 17.7 Shear sensor design. (Left and middle) 3D-printed shear-sensing mechanism, (right) installation of shear sensor on a robotic Cadiere grasper.

Our shear-sensing mechanism was fabricated using two commercially available Tekscan A101 piezoresistive sensors mounted on the robotic instrument via a 3D-printed mounting component (Fig. 17.7). The A101 is one of the smallest commercially available piezoresistive sensors, with a dynamic range of 0–44 N. Its sensing area is 3.8 mm in diameter, with an overall width of 7.6 mm that can be trimmed down to 6 mm without losing water resistance. The sensing concept relies on two components: a static outer shell and a movable inner plate. The outer shell is kept stable via a tight fit with the Cadiere grasper, while the inner plate has an opening in the center that allows a small lateral range of motion. On each side of the inner component, an A101 sensor is positioned between the outer shell and an extruded area of the inner plate. A 20 Shore A silicone rubber membrane, 0.5 mm thick, is placed between the sensor and the movable inner plate, allowing the inner plate to return to its original center position after shear forces are removed. With this design, shear applied to the top surface of the inner plate pushes the inner component against one A101 sensor, increasing the force value that sensor detects.
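The opposed-pair arrangement lends itself to a differential read-out: shear loads one sensor and unloads the other, so the signed difference gives direction and magnitude, while common-mode effects such as membrane preload cancel. A sketch with a hypothetical unity calibration gain:

```python
def shear_from_pair(left_n, right_n, gain=1.0):
    """Signed shear estimate from the two opposed A101 readings.

    Shear pushes the floating inner plate into one sensor and away from
    the other; equal (preload-only) readings mean zero shear. The unity
    gain is a placeholder for a real calibration against a force gauge.
    """
    return gain * (left_n - right_n)
```

The sign convention is arbitrary; what matters for the intended warning application is that suture tension building along the grasper surface shows up as a growing magnitude before slippage occurs.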

17.4.3 Signal processing unit

Traditional approaches adopted by many feedback systems, including earlier iterations of the CASIT system, create a direct mapping between sensor data and actuator feedback levels. The mapping algorithms are hard-coded on a microcontroller chip, which serves as the control center of the system. These approaches, however, significantly limit system adaptability and application to different surgical tasks. The CASIT multimodal haptic feedback system was designed to allow significant flexibility in the mapping algorithm between sensor data and feedback actuators. To this end, all decision-making and logic are performed on a separate computer using in-house developed software, the Haptics Manager, so that the mapping algorithm can be modified for each application of the system. The sensor board (described in Section 17.4.2) is responsible only for transmitting sensor data to the Haptics Manager software via a customized ESP8266 WiFi module or a wired USB connection.

The software engine (i.e., the Haptics Manager) is developed in C# on Microsoft's .NET Framework. It serves as the central processing unit for the CASIT multimodal haptic feedback system and provides five major functions (Fig. 17.8):

1. Process and filter sensor data (sensor data processor);
2. Generate control board packets (logic engine);
3. Transmit data to the control board (haptics controller interface);
4. Plot and store sensor data in real time (main UI);
5. Perform statistical analysis of recorded data.

The Haptics Manager software architecture was designed to minimize data-processing latency. Most existing haptic feedback systems utilize a single-threaded, pipelined approach, but this solution is inefficient for more complex multimodal feedback systems. In a single-threaded approach, data processing is synchronous: sensor packets are generated, filtered, and processed, after which control packets are generated and sent out to the control board. Additional tasks such as data storage and real-time plotting require additional stages in the pipeline. This pipeline design is thus extremely costly for overall system latency and makes the system highly susceptible to malfunction due to failure in any individual stage.


FIGURE 17.9 Haptics Manager software architecture using WiFi for sensor and control board communication.

The Haptics Manager software was thus designed with an asynchronous, multithreaded architecture with fixed-size processing queues. In Fig. 17.9, transitions of data between processing units within the same thread are marked in red, while cross-thread operations are marked in black. The Haptics Manager relies on three primary processing threads: the first receives data from the sensor board and performs decoding, filtering, processing, controller packet generation, and data writing into a thread-safe fixed-size queue; the second constantly retrieves data from the queue and transmits it to the control board; and the third handles data storage and user interface updates. The fixed-size queue ensures that any delays in handling the controller packets, whether due to communication or mechanical delays in the actuators, have no impact on data processing. Under such conditions, the queue can become full and then begin to discard older items as new items are inserted. When the controller thread begins responding again, previous data that are no longer relevant are skipped and the system responds to the most recent data. This approach allows quicker recovery from unexpected delays in any component.

FIGURE 17.8 Multimodal CASIT haptic feedback system architecture. CASIT, Center for Advanced Surgical and Interventional Technology.
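The drop-oldest behavior of the fixed-size queue can be sketched with Python's standard `queue` module; the class name and the queue depth are assumptions for illustration, not details of the C# implementation:

```python
import queue

class LatestQueue:
    """Fixed-size, thread-safe queue that sheds the oldest packet when
    full, so a stalled consumer (e.g., a slow actuator link) never backs
    up the sensor-processing thread."""

    def __init__(self, maxsize=8):
        self._q = queue.Queue(maxsize=maxsize)

    def put(self, item):
        """Producer side: never blocks; evicts stale packets instead."""
        while True:
            try:
                self._q.put_nowait(item)
                return
            except queue.Full:
                try:
                    self._q.get_nowait()  # discard the stalest packet
                except queue.Empty:
                    pass  # another thread emptied it; retry the put

    def get(self, timeout=None):
        """Consumer side: block until a packet is available."""
        return self._q.get(timeout=timeout)
```

When the consumer resumes after a stall, it sees only the newest packets, which is exactly the recovery behavior described above: stale force samples are worthless for real-time feedback, so they are dropped rather than queued.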

17.4.4 Haptic feedback unit

The CASIT haptic feedback system is unique in its ability to provide multimodal haptic feedback, including normal force feedback, vibratory feedback, and kinesthetic feedback. Normal force feedback is provided through pneumatic balloons installed in a 3D-printed adapter unit, which can be designed to fit onto most existing robotic master consoles in contact with the surgeon's fingertips. One unique aspect of the CASIT pneumatic feedback unit is its ability to counter desensitization of mechanoreceptors in the skin through the use of depressed-membrane pneumatic actuators (Fig. 17.10). Conventional silicone-based flat-surface actuator designs frequently make balloon inflations difficult to detect after long periods of contact, owing to fatigue and desensitization of the deep pressure sensors in the skin. The depressed-membrane design leaves the skin free of contact at baseline, thereby keeping the light-touch sensors in the skin active, and provides graded pneumatic feedback corresponding to the increasing haptic force signal detected by the force sensors on the robotic instruments. The depressed-membrane actuators outperformed conventional flat actuators during peg transfer tasks, being more effective at reducing both grip force and sensory adaptation.

The second feedback modality provided by the CASIT system is vibratory feedback, implemented via vibration motors installed directly at the fingertips of the surgeon (Fig. 17.11). Each vibration motor activates at voltages as low as 1.5 V and provides stronger feedback at voltages up to 5 V. A benchtop test determined that the average user can uniquely recognize between three and four different levels of vibration, but only if the changes are made in a step-wise manner. In other words, users frequently have difficulty differentiating between levels of vibration when vibration is activated from a stationary state. This behavior is expected, because the human somatosensory system perceives vibration as a mechanism for drawing the brain's attention to a specific event [39]. In view of this fact, we have mostly utilized vibratory feedback in a binary manner, as a warning mechanism, rather than as a means of providing graded feedback.

Finally, the third modality of the CASIT haptic feedback system provides kinesthetic feedback, aimed at triggering activation of the Golgi tendon organs in the muscle tendons. The integration of this sensory information creates a sense of resistance when applying force to and deforming an object. In the context of robotic surgical tasks, kinesthetic force feedback conveys haptic information in force ranges usually higher than those of tactile feedback, which usually involves light touch. Approaches to providing kinesthetic force feedback on most existing robotic surgical systems involve installing motors on the joints of the robotic console, targeting feedback at the larger muscle groups of the arms. The CASIT haptic feedback system instead implements kinesthetic feedback at the fingertips, for integration with tactile feedback and to provide a more intuitive sense of kinesthesia (Fig. 17.12). To this end, a pneumatic tube is placed between the graspers of the master console; increasing the air pressure (0–19 PSI) inside the tube constricts the grasper's ability to close, resisting the grasping action.
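The three channels can be combined in one mapping function. The 0-19 PSI range comes from the text; the 8 N full scale and the 4 N vibration threshold (near the tissue-injury level cited earlier in the chapter) are illustrative assumptions:

```python
def feedback_commands(force_n, full_scale_n=8.0, warn_threshold_n=4.0):
    """Map one sensed grip-force sample to the three CASIT channels.

    full_scale_n and warn_threshold_n are hypothetical; the 19 PSI
    ceiling matches the pneumatic range stated in the text.
    """
    level = min(max(force_n / full_scale_n, 0.0), 1.0)
    balloon_psi = 19.0 * level            # graded normal force at fingertip
    vibrate = force_n > warn_threshold_n  # binary warning, not graded
    tube_psi = 19.0 * level               # kinesthetic grasp resistance
    return balloon_psi, vibrate, tube_psi
```

Note how the mapping mirrors the design choices above: the pneumatic channels are continuous, while vibration is a simple threshold crossing, since users cannot reliably distinguish graded vibration levels.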

17.4.5 Validation studies

17.4.5.1 Reduction in grip forces

The earliest objective of the CASIT system was to reduce the excessive grip forces that arise from the loss of haptic feedback during robotic surgery. The minimal grasp force required to achieve a secure grasp frequently causes no discernible object deformation that can be visualized. To ensure a secure grasp, surgeons, especially those inexperienced with the robotic system, therefore frequently exert unnecessarily excessive grip force, which has been shown to be associated with greater soft-tissue crush injury. Trials using the da Vinci robotic system during peg transfer tasks in ex vivo experiments and bowel running tasks in in vivo experiments consistently demonstrated a reduction in average and peak grip force with the addition of haptic feedback [8] (Figs. 17.13 and 17.14). Furthermore, histologic examination of the intestines being handled demonstrated correspondingly decreased damage sites when haptic feedback is restored [9].

FIGURE 17.10 Depressed-membrane pneumatic actuator design. (A) Side-by-side comparison of the depressed-membrane pneumatic actuator and a conventional silicone-based flat-surface actuator. (B) Pressure on different areas of the finger at various levels of actuator activation. (C) Decreased sensory adaptation and effective grip force reduction were better achieved with the depressed-membrane actuator design.

FIGURE 17.11 Vibration motors installed on 3D-printed pneumatic actuators.

FIGURE 17.12 Pneumatic kinesthetic feedback actuator design. (Left and middle) Modified 3D-printed actuators with pneumatic tubes installed. (Right) Placement of the pneumatic tube between the console graspers to provide feedback during grasping.

FIGURE 17.13 Significant reduction in average and peak grip force upon activation of haptic feedback during robotic peg transfer tasks.

FIGURE 17.15 The restoration of haptic feedback reduced the adverse impacts of visuospatial mismatch on task performance, manifest in a reduced number of faults and shorter task completion time.

17.4.5.2 Visual perceptual mismatch

Nearly all current surgical robotic systems employ a master–slave configuration, in which the surgeon uses joystick-like controls at the master console to direct the movements of the robotic instruments (the slave). The physical separation between the master console and the robotic instruments inevitably leads to disparate activity spaces. For example, the robotic arms may be capable of moving farther along one axis while, due to motion scaling, the control in the master console has already reached its limit on that same axis (Fig. 17.15). One solution, such as that found on Intuitive Surgical's da Vinci, is the "clutch" button, which allows repositioning of the console controls while the robotic instruments stay still. The frequent clutching required during operation leads to a cognitive spatial disorientation called visual perceptual mismatch, whereby the human brain is confused by the conflicting visual and proprioceptive information it perceives. In a series of tests, visual perceptual mismatch was intentionally created by misaligning the controls at the master console relative to the robotic instruments. Surgeons were then asked to perform peg transfer tasks in accordance with the fundamentals of laparoscopic surgery test rules. Under these circumstances, we found that the presence of visual perceptual mismatch was significantly associated with an increase in the number of times subjects dropped the peg during transfer. Interestingly, the restoration of haptic feedback led to a reduction in the number of times pegs were dropped. Task completion time was also prolonged under visual perceptual mismatch, and this prolongation was no longer present when haptic feedback was restored.

FIGURE 17.14 Incremental reduction in grip force during peg transfer tasks as the haptic feedback provided approximates the natural sense of touch [grip force: NF > tactile feedback alone > kinesthetic feedback alone > hybrid (tactile + kinesthetic) feedback]. NF, No feedback.

FIGURE 17.16 Phantom model design for a hidden tubular anatomical structure. (Left) Stiff plastic tubing is embedded in a relatively softer square foam block. (Right) The foam block is covered with a blue towel so that the deformation associated with the stiff tubing cannot be visually appreciated. The surface was then scanned with the da Vinci robot with an attached force sensor, simulating the palpation motion of a surgeon's finger attempting to detect an underlying structure embedded in soft tissue.

FIGURE 17.17 With restoration of haptic feedback, the frequency of correct localization of the hidden structure increased while false-positive detections decreased. This positive effect was further strengthened by the addition of vibrotactile feedback, provided when normal forces exceeded a predetermined threshold. *denotes p < 0.05; ***denotes p < 0.0001.
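The clutching scheme discussed in Section 17.4.5.2 can be sketched as a one-dimensional master–slave mapping: while the clutch is engaged the master handle moves freely, the slave holds position, and the accumulated offset is the source of the mismatch between seen and felt hand position. This is an illustrative sketch; the class name, scaling factor, and 1-D simplification are assumptions.

```python
class ClutchedTeleop:
    """Minimal 1-D master-slave mapping with motion scaling and a
    clutch, illustrating how clutching decouples the master and slave
    workspaces (the source of visual-proprioceptive conflict)."""

    def __init__(self, scale: float = 0.5):
        self.scale = scale          # motion scaling, master -> slave
        self.offset = 0.0           # offset accumulated while clutched
        self.clutched = False
        self._last_master = 0.0

    def update(self, master_pos: float) -> float:
        """Return the commanded slave position for a master position."""
        delta = master_pos - self._last_master
        self._last_master = master_pos
        if self.clutched:
            # Master moves freely; cancel its effect so the slave holds.
            self.offset -= delta * self.scale
        return master_pos * self.scale + self.offset
```

After recentering the handle under clutch, the slave resumes from its held position, so the master and slave frames no longer coincide.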

17.4.5.3 Artificial palpation

Surgeons frequently rely on tactile feedback to localize hidden tubular structures, such as ureters, vessels, and nerves, during open surgery. This capacity is lost with the use of the da Vinci system. A study was performed to assess the feasibility of restoring touch sensation to the robotic surgeon using tactile feedback, either with normal force feedback alone or with a combination of normal force and vibrotactile feedback. The results demonstrated that surgeons could correctly localize soft tubular structure phantoms embedded in a sponge with significantly higher accuracy and in less time when the CASIT haptic feedback system was activated [38] (Figs. 17.16 and 17.17).
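The palpation task in Section 17.4.5.3 amounts to finding positions where the normal force rises above the surrounding tissue baseline while sweeping the instrument across the surface. The sketch below is a crude stand-in for that detection, not the study's method: the median baseline and the threshold ratio are assumptions.

```python
def locate_hidden_structure(forces, threshold_ratio=1.5):
    """Given normal forces sampled while sweeping across tissue,
    return the sample indices whose force exceeds the baseline
    (median) by a ratio, flagging a likely stiff inclusion.

    Assumes the stiff structure spans a minority of samples, so the
    median approximates the soft-tissue baseline.
    """
    baseline = sorted(forces)[len(forces) // 2]
    return [i for i, f in enumerate(forces)
            if f > baseline * threshold_ratio]
```

A ratio threshold rather than an absolute one keeps the detector insensitive to overall pressing force, which varies between operators.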

17.4.5.4 Knot tying

Intracorporeal knot tying during robotic surgery is one of the most challenging tasks in the learning curve of adapting to the robotic system, owing to the surgeon's inability to quantify tension on the suture without haptic feedback. Excessive force can lead to suture breakage, while insufficient pulling force results in a weak knot that can slip. This is a commonly recognized need, and other laboratory groups have demonstrated improvement in knot-tying performance following provision of shear force information via visual sensory substitution [15,16]. We sought to evaluate whether direct tactile feedback could be associated with further improvement in knot quality and reduction of suture failure. Trials were carried out using customized uniaxial shear sensors that detect the forces applied to the suture by the robot. This information was provided in a graded fashion via the pneumatic actuators. Furthermore, by comparing this information with previously compiled suture failure load data [12], a vibratory warning could be issued when shear forces neared the suture-breaking threshold. The results of our study showed that the application of haptic feedback during robotic knot-tying tasks was associated with significantly reduced knot slippage and suture failure frequency, demonstrating the efficacy of the CASIT haptic feedback system in improving knot quality during robotic surgery (Figs. 17.18 and 17.19).

FIGURE 17.19 Significant reductions in knot slippage (i.e., loose knots) and in the frequency of suture failure were observed following restoration of haptic feedback utilizing shear force data. *denotes p < 0.05; **denotes p < 0.01.
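The two-channel scheme described in Section 17.4.5.4 (graded pneumatic pressure for suture tension, plus a binary vibration warning near the failure load) can be sketched as follows. All numbers here are illustrative assumptions, not the published suture failure data or actuator calibration.

```python
def shear_feedback(shear_n: float,
                   failure_load_n: float = 8.0,
                   warn_fraction: float = 0.8,
                   max_pressure_psi: float = 19.0):
    """Return (actuator_pressure_psi, vibrate) for a measured suture
    shear force. Graded pneumatic pressure conveys tension; a binary
    vibration warning fires when shear nears the suture failure load.
    The 8 N failure load and 80% warning fraction are assumed values.
    """
    fraction = max(0.0, min(shear_n / failure_load_n, 1.0))
    pressure = fraction * max_pressure_psi
    vibrate = shear_n >= warn_fraction * failure_load_n
    return pressure, vibrate
```

Keeping the warning binary matches the earlier observation that users discriminate the onset of vibration far better than its level.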


FIGURE 17.18 Experimental setup for proof-of-principle testing of the shear force feedback system. A surgeon's knot and two square knots in the opposite direction were tied on a fixed tubular structure, with the last knot left intentionally loose. The subject was asked to tighten the last knot, using the da Vinci system with shear sensors installed, to the point where maximal shear was perceived while avoiding breaking the suture.


17.5 Conclusion and future directions

The CASIT multimodal haptic feedback system represents one of the most advanced and robust haptic feedback systems described in the literature, and it is potentially compatible with most robotic surgery systems that use a master–slave configuration. Furthermore, its efficacy for various applications has been validated in several ex vivo experiments as well as in ongoing animal experiments. While the feasibility and efficacy of the system have been definitively established, its clinical utility in human surgery is difficult to evaluate pending miniaturization of the end-effector hardware. Successful sensor miniaturization will be the key to enabling clinical application of the CASIT haptic feedback system in the future.

References

[1] Juo YY, Mantha A, Abiri A, Lin A, Dutson E. Diffusion of robotic-assisted laparoscopic technology across specialties: a national study from 2008 to 2013. Surg Endosc 2018;32(3):1405–13.
[2] Enayati N, De Momi E, Ferrigno G. Haptics in robot-assisted surgery: challenges and benefits. IEEE Rev Biomed Eng 2016;9:49–65.
[3] Shennib H, Bastawisy A, Mack MJ, Moll FH. Computer-assisted telemanipulation: an enabling technology for endoscopic coronary artery bypass. Ann Thorac Surg 1998;66(3):1060–3.
[4] Delaney CP, Lynch AC, Senagore AJ, Fazio VW. Comparison of robotically performed and traditional laparoscopic colorectal surgery. Dis Colon Rectum 2003;46(12):1633–9.
[5] Morino M, Pellegrino L, Giaccone C, Garrone C, Rebecchi F. Randomized clinical trial of robot-assisted versus laparoscopic Nissen fundoplication. Br J Surg 2006;93(5):553–8.
[6] Amirabdollahian F, Livatino S, Vahedi B, Gudipati R, Sheen P, Gawrie-Mohan S, et al. Prevalence of haptic feedback in robot-mediated surgery: a systematic review of literature. J Robot Surg 2018;12:11–25.
[7] King C, Higa AT, Culjat MO, Han SH, Bisley JW, Carman GP, et al. A pneumatic haptic feedback actuator array for robotic surgery or simulation. Stud Health Technol Inform 2006;125:217.
[8] King CH, Culjat MO, Franco ML, Lewis CE, Dutson EP, Grundfest WS, et al. Tactile feedback induces reduced grasping force in robot-assisted surgery. IEEE Trans Haptics 2009;2(2):103–10.
[9] Wottawa CR, Genovese B, Nowroozi BN, Hart SD, Bisley JW, Grundfest WS, et al. Evaluating tactile feedback in robotic surgery for potential clinical application using an animal model. Surg Endosc 2016;30(8):3198–209.
[10] Xin H, Zelek JS, Carnahan H. Laparoscopic surgery, perceptual limitations and force: a review. In: First Canadian student conference on biomedical computing, vol. 144. 2006.
[11] Abiri A, Tao A, LaRocca M, Guan X, Askari SJ, Bisley JW, et al. Visual perceptual mismatch in robotic surgery. Surg Endosc 2017;31(8):3271–8.
[12] Abiri A, Paydar O, Tao A, LaRocca M, Liu K, Genovese B, et al. Tensile strength and failure load of sutures for robotic surgery. Surg Endosc 2017;31(8):3258–70.
[13] Meli L, Pacchierotti C, Prattichizzo D. Experimental evaluation of magnified haptic feedback for robot-assisted needle insertion and palpation. Int J Med Robot Comput Assist Surg 2017;13(4):e1809.
[14] Van der Meijden OA, Schijven MP. The value of haptic feedback in conventional and robot-assisted minimal invasive surgery and virtual reality training: a current review. Surg Endosc 2009;23(6):1180–90.
[15] Kitagawa M, Dokka D, Okamura AM, Bethea BT, Yuh DD. Effect of sensory substitution on suture manipulation forces for surgical teleoperation. Stud Health Technol Inform 2004;98:157–63.
[16] Bethea BT, Okamura AM, Kitagawa M, Fitton TP, Cattaneo SM, Gott VL, et al. Application of haptic feedback to robotic surgery. J Laparoendosc Adv Surg Tech 2004;14(3):191–5.
[17] Akamatsu M, MacKenzie IS, Hasbroucq T. A comparison of tactile, auditory, and visual feedback in a pointing task using a mouse-type device. Ergonomics 1995;38(4):816–27.
[18] Bach-y-Rita P, Kercel SW. Sensory substitution and the human–machine interface. Trends Cogn Sci 2003;7(12):541–6.
[19] Vitense HS, Jacko JA, Emery VK. Multimodal feedback: an assessment of performance and mental workload. Ergonomics 2003;46(1–3):68–87.
[20] Kim K, Colgate JE. Haptic feedback enhances grip force control of sEMG-controlled prosthetic hands in targeted reinnervation amputees. IEEE Trans Neural Syst Rehabil Eng 2012;20(6):798–805.
[21] Koehn JK, Kuchenbecker KJ. Surgeons and non-surgeons prefer haptic feedback of instrument vibrations during robotic surgery. Surg Endosc 2015;29(10):2970–83.
[22] Giabbiconi CM, Trujillo-Barreto NJ, Gruber T, Müller MM. Sustained spatial attention to vibration is mediated in primary somatosensory cortex. Neuroimage 2007;35(1):255–62.
[23] Lecuyer A, Coquillart S, Kheddar A, Richard P, Coiffet P. Pseudo-haptic feedback: can isometric input devices simulate force feedback? In: Proceedings IEEE Virtual Reality 2000. IEEE; 2000. p. 83–90.
[24] Paydar OH, Wottawa CR, Fan RE, Dutson EP, Grundfest WS, Culjat MO, et al. Fabrication of a thin-film capacitive force sensor array for tactile feedback in robotic surgery. In: 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2012. p. 2355–8.




[25] Reiley CE, Akinbiyi T, Burschka D, Chang DC, Okamura AM, Yuh DD. Effects of visual force feedback on robot-assisted surgical task performance. J Thorac Cardiovasc Surg 2008;135(1):196–202.
[26] Wettels N, Fishel JA, Loeb GE. Multimodal tactile sensor. In: The human hand as an inspiration for robot hand development. Cham: Springer; 2014. p. 405–29.
[27] Schoonmaker RE, Cao CG. Vibrotactile force feedback system for minimally invasive surgical procedures. In: 2006 IEEE International Conference on Systems, Man and Cybernetics; 2006. p. 2464–9.
[28] Quek ZF, Schorr SB, Nisky I, Provancher WR, Okamura AM. Sensory substitution and augmentation using 3-degree-of-freedom skin deformation feedback. IEEE Trans Haptics 2015;8(2):209–21.
[29] Lim SC, Lee HK, Park J. Role of combined tactile and kinesthetic feedback in minimally invasive surgery. Int J Med Robot Comput Assist Surg 2015;11(3):360–74.
[30] TransEnterix, Inc. [Online]. Available from: <www.alf-x.com/en/>; 2018.
[31] Seeliger B, Diana M, Ruurda JP, Konstantinidis KM, Marescaux J, Swanström LL. Enabling single-site laparoscopy: the SPORT platform. Surg Endosc 2019. Available from: https://doi.org/10.1007/s00464-018-06658-x.
[32] SRI International. M7 surgical robot. [Online]. Available from: <https://www.sri.com/engage/products-solutions/m7-surgical-robot>.
[33] Hagn U, Ortmaier T, Konietschke R, Kubler B, Seibold U, Tobergte A, et al. Telemanipulator for remote minimally invasive surgery. IEEE Robot Autom Mag 2008;15(4):28–38.
[34] Technische Universiteit Eindhoven. 2018. Available from: <https://www.tue.nl/en/research/research-institutes/robotics-research/projects/sofie/>.
[35] Hannaford B, Rosen J, Friedman DW, King H, Roan P, Cheng L, et al. Raven-II: an open platform for surgical robotics research. IEEE Trans Biomed Eng 2013;60(4):954–9.
[36] Franco ML, King CH, Culjat MO, Lewis CE, Bisley JW, Holmes EC, et al. An integrated pneumatic tactile feedback actuator array for robotic surgery. Int J Med Robot Comput Assist Surg 2009;5(1):13–19.
[37] King CH, Culjat MO, Franco ML, Bisley JW, Dutson E, Grundfest WS. Optimization of a pneumatic balloon tactile display for robotic surgery based on human perception. IEEE Trans Biomed Eng 2008;55(11):2593–600.
[38] Juo YY, et al. Artificial palpation with da Vinci surgical robot using a novel haptic feedback system. In: Society of American Gastrointestinal and Endoscopic Surgeons 2017 Annual Conference; 2017.
[39] Cuppone A, Squeri V, Semprini M, Konczak J. Robot-assisted training to improve proprioception does benefit from added vibro-tactile feedback. In: 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2015. p. 258–61.

18

Applications of Flexible Robots in Endoscopic Surgery Ka Chun Lau, Yun Yee Leung, Yeung Yam and Philip Wai Yan Chiu The Chinese University of Hong Kong, Shatin, Hong Kong

ABSTRACT Conventional cancer surgery requires large incisions to gain access to internal organs for removal of tumors. This often leads to substantial disability and complications, especially in gastrointestinal organ surgery. The invention of optical fiber technology enabled the development of laparoscopic and endoscopic surgery. In endoscopic surgery, however, some key difficulties remain: lack of tissue retraction, lack of triangulation, and coupled movement between the camera view and tools. These difficulties require a high level of surgical skill to compensate for and thus increase the training time of surgeons. Gastric and colorectal cancers are leading causes of cancer death worldwide. The literature shows that the survival rate for patients with these cancers can be improved if the tumors are removed en bloc at an early stage. Endoscopic submucosal dissection (ESD) is a specialized endoscopic surgical technique that targets the removal of submucosal tissue. Unfortunately, ESD is technically challenging because of the above difficulties in manipulating endoscopic devices, and perforation often occurs during this surgery. Therefore, a surgical robotic system is proposed to ease these problems. The dual-arm flexible manipulator is driven by a tendon-sheath mechanism and is designed to be as small as possible so that it can be inserted into the human body and controlled remotely. In vivo experiments show that the system increases safety and reduces operation time compared to conventional procedures.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00018-9 © 2020 Elsevier Inc. All rights reserved.



18.1 Review of current manual endoscopic surgery tools

An endoscope is a flexible camera used in medicine to reach cavities and viscera not visible to the naked eye. Unlike other medical imaging techniques, endoscopes are inserted directly into the body. Surgeons can bend and insert the endoscope through a patient's natural orifices, such as the mouth, vagina, or anus, or through an incision, to investigate symptoms, confirm a diagnosis, or treat disease. Endoscopes can be used to examine different body parts, such as the gastrointestinal tract, the respiratory tract, the ear, the urinary tract, the female reproductive system, and normally closed body cavities (through a small incision). There are different types of endoscopes for looking inside different organs, for example, bronchoscopy for the trachea and bronchi, colonoscopy for the large bowel, cystoscopy for the bladder, sigmoidoscopy for the lower part of the large bowel, and hysteroscopy for the uterus. Each endoscope has its own length, size, and design according to its targeted operation site, and surgeons choose an endoscope based on the targeted site in the body and the type of procedure. Typically, an endoscope has two control knobs for bending it in the pitch and yaw directions [1].

Beyond viewing, surgeons also use endoscopes to perform operations; this kind of operation is called endoscopic surgery. Tools for different operations can be inserted into the organs through the channels of an endoscope. The endoscopes used in medical treatment usually contain several features: a rigid or flexible tube; an optical fiber system for imaging and illuminating the organ or targeted object; a lens system for transmitting the image from the objective lens to the viewer; an insufflation channel; a water channel for flushing; and a channel for medical instruments or manipulators.
The commonly used endoscopic instruments or manipulators include knives, graspers, biopsy forceps, trocar sleeves, tissue scissors and cutters, burs, cleaning brushes, tube sets (inflow and outflow), tissue staplers, ligation devices, and suturing systems. Many diseases in different organs can already be treated using an endoscope, for instance in the stomach, esophagus, colon, ear, urinary tract, uterus, and bronchus. Commercially available endoscopes usually have two control knobs to give endoscopists better control of the distal tip, governing its up-and-down and left-and-right motion. With the two-knob design, the distal tip can be fixed temporarily in any desired position. Endoscopists usually use one hand to control the knobs and the other hand to control the insertion of the endoscope and the motion of the medical instrument. The medical instrument is inserted into one channel and exits from the distal tip of the endoscope. The camera is also attached at the distal tip; it transmits images to a monitor for image capture. Owing to the design of the endoscope, the endoscopist sees the instrument entering the view from the bottom of the screen. With the inserted medical instrument and the image from the camera, endoscopists can perform surgery in the body without making a large incision. Endoscopic surgery does have a few potential complications, which may include perforation, reaction to sedation, infection, and bleeding. Nevertheless, surgeons prefer endoscopic surgery to open surgery in some procedures because, compared to open surgery, it offers smaller incisions, less scarring, shorter hospital stays, less blood loss, lower risk of complications, less pain, and a lower chance of infection. With sufficient training time and the introduction of an appropriate robotic system, the risk of complications of endoscopic surgery can be further reduced.

18.2 Technical challenges in current endoscopic surgery using manual tools

Endoscopic surgery involves techniques different from those of traditional procedures, so surgeons need additional, formal training to perform it. Beyond requiring different skills, endoscopic surgery also has certain limitations: only a single tool can be used at a time, the motion of the tools is restricted, the motion of the camera and tools is coupled, imaging is two-dimensional, and the endoscopist's working position is ergonomically poor. Endoscopists use their hands to manually control the camera and tools, so they can operate only one accessory at a time. For example, a right-handed endoscopist can operate the endoscope with the left hand and another tool with the right hand. The tool can move backward and forward, but this limited motion may not create enough triangulation for the operation, and some procedures are difficult to complete using a single tool at a time. The motion of the tools is restricted by the endoscope because the tools are inserted along the endoscope to reach the targeted operation site. Consequently, there is only one approach angle for the tools to the targeted organ. The endoscopist can only drive the instrument by bending the tip of the endoscope or pushing the instrument along the endoscope, so the working area and motion of the instrument are limited by the endoscope. Inserting the tool through the endoscope also results in coupled movement between the camera view and the tools. This can be imagined as having a tool attached to your head: when we rotate our neck to drive the tool, our eyes move as well. This coupled motion reduces accuracy and makes it difficult to determine how far, and where, we are moving.

Endoscopic surgery also changes the way visual information from the corporal site is accessed. A two-dimensional image is transmitted from the endoscope to a video monitor, which reduces depth perception of the operative site. Since surgeons view the operating environment indirectly and with insufficient depth cues, some may misjudge spatial depth, which could lead to perforation and thus endanger patients. Surgeons also lose haptic feedback from the targeted tissue because they are using a long, soft tool during the operation. There is potential for tissue damage caused by inappropriate use of force, as surgeons have no direct force feedback from the operation site through a long and flexible tool. It is also difficult to keep the tip of the flexible endoscope stable inside a hollow viscus, such as the stomach. Owing to this lack of force feedback and stability, control of the tool in endoscopic surgery is less intuitive than in open surgery. These difficulties require a high level of surgical skill to compensate for and thus increase the training time of surgeons. Some of these problems can be mitigated by introducing a robotic system to assist surgeons in performing endoscopic surgery. Therefore, different solutions for enhancing endoscopic surgery are in high demand; from an engineering research perspective, robotizing endoscopic surgery, providing haptic feedback from the tools, and reconstructing a three-dimensional (3D) image from the endoscope image could all be solutions.

18.3 Review of endoscopic robots: purely mechanical and motorized

18.3.1 Purely mechanical endoscopic robots

Several purely mechanical endoscopic surgical robots have been developed, focused mainly on natural orifice transluminal endoscopic surgery (NOTES). NOTES is a new surgical procedure used to access the abdominal cavity via a natural orifice of the human body, such as the mouth, anus, or vagina. It is an appealing procedure since it completely eliminates abdominal wall aggression, lowers anesthesia requirements, and offers even faster recovery [2]. However, limited technology is available for this surgical procedure, so researchers are aiming to design new tools and robots for it. Intraluminal and transluminal procedures are similar, also involving access through natural orifices of the body, and the structure of these robots is basically the same. Cobra, from USGI Medical, is a purely mechanical endoscopic surgical robot [3]. Its design is based on "TransPort," an endoscopic platform from the same company. TransPort has four channels and two degrees of freedom (DoFs) of bending, and its "Shape-lock" function can maintain the shape of the platform during operation. Cobra makes use of this platform and adds three independent arms to form a triangulation structure: two robotic arms on the two sides and a camera channel at the top. The camera channel is 6 mm in diameter, which allows a conventional endoscope to be used, as shown in Fig. 18.1. All the robotic arms and the camera channel are driven by a tendon-sheath mechanism (TSM), with the wires connected directly to the controller. Several problems were found in this study. First, the TSM control is not accurate: the output of the robotic arm often does not follow the input of the controller because of friction and backlash. Second, the whole structure is too soft to allow aggressive tissue manipulation. Third, insertion of this system may injure internal organs, as the two robotic arms cannot be retracted or protected by any cover. Owing to these drawbacks, no obvious improvement could be seen, and no further studies were published.

FIGURE 18.1 A manually driven endoscopic robot, Cobra. The independently movable arms allow triangulation and complex actions.

EndoSamurai, from Olympus, Japan, is another purely mechanical endoscopic surgical robot [4]. The EndoSamurai has three parts: the main body, the insertion tube, and the tip portion, as shown in Fig. 18.2. Its structure is similar to that of the Cobra, containing two robotic arms and a camera. The two robotic arms are hollow, which allows different tools to be used and interchanged during surgery. Each arm has an articulating configuration with five DoFs, and the insertion tube allows additional tools, such as injection and suction, to be used. At least two operators are needed to control all the DoFs of this system. The articulated robotic arms create a large angle of triangulation. Although its structure is similar to the Cobra's, it showed better results in various experiments: in a pin-placing task and in ex vivo and in vivo dissection experiments, EndoSamurai demonstrated its effectiveness compared to a conventional endoscope. EndoSamurai has two major drawbacks. The first is the same as for Cobra: the two robotic arms cannot be retracted, which increases the difficulty of insertion and the chance of damaging tissue during insertion. The second is that the robotic arms are too long, too thick, and too wide, making it difficult to turn around inside the tract; this is a consequence of the hollow structure of the robotic arms and could be avoided if the tools were not interchangeable.

FIGURE 18.2 The EndoSamurai from Olympus is operated through an operator interface.
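The tendon-sheath control inaccuracy noted for these platforms is often approximated as a dead-band (backlash) nonlinearity: the distal joint only follows the proximal input once slack in the tendon is taken up. The sketch below is an illustrative model under that assumption; the 5-degree band width and function name are not from the source.

```python
def tsm_output(input_angle_deg: float,
               prev_output_deg: float,
               backlash_deg: float = 5.0) -> float:
    """Dead-band model of a tendon-sheath mechanism joint.

    The distal output tracks the proximal input only after the input
    has moved beyond the backlash band around the current output;
    inside the band, friction and tendon slack absorb the motion.
    """
    if input_angle_deg > prev_output_deg + backlash_deg:
        return input_angle_deg - backlash_deg
    if input_angle_deg < prev_output_deg - backlash_deg:
        return input_angle_deg + backlash_deg
    return prev_output_deg  # inside the dead band: no distal motion
```

Iterating this map over an input trajectory reproduces the hysteresis loop that makes open-loop TSM positioning inaccurate, which is why motorized platforms add sensing or compensation.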
The Direct Drive Endoscopic System, from Boston Scientific, United States, is a multitasking platform [5]. It is claimed that it can be used in NOTES, single-port surgery (SPS), and endoluminal surgery. Its structure is almost the same as those of the above systems, containing two hollow articulated robotic arms and a camera channel, as shown in Fig. 18.3. Different tools can be interchanged to fit different surgical procedures. In this study, full-thickness suturing, knot tying, and endoscopic mucosal resection (EMR) were successfully performed with this system. EMR is a procedure that removes early-stage cancers and precancerous growths from the lining of the digestive tract using a snare. One operator controls the two robotic arms while another controls the endoscopic camera. No clear comparison was made against a conventional endoscope, and it was reported that the system had limited triangulation, insufficient force transmission, and inaccurate TSM control.



FIGURE 18.3 The DDES platform needs two surgeons to operate it. DDES, Direct Drive Endoscopic System.

18.3.2 Motorized endoscopic robots

In order to solve the problems common to purely mechanical endoscopic surgical robots, researchers have developed motorized endoscopic surgical robots. The system designs of these motorized robots are very similar: the master console sends input to the main processing unit; after computing the robot kinematic model, the processing unit drives the motors to the desired position and velocity; and the wires connecting the robotic arms to the motors are pulled, changing the position of the robotic arms.

The Scorpion Shaped Endoscopic Surgical Robot, from Kyushu University, Japan, is claimed to be able to handle both NOTES and SPS [6]. Its configuration is the same as that of the purely mechanical robots, with two robotic arms placed on the two sides and the camera placed at the top. Each robotic arm has three DoFs: up-and-down, left-and-right, and open-and-close of the gripper. The tip of the gripper is 10 mm in length and 2 mm in width. This tiny gripper, shown in Fig. 18.4, has the ability to detect the softness of an object: the computer continuously monitors the tension of the wire, and an algorithm the authors developed calculates the softness of the object from it. In this study, they showed that the robot was able to distinguish a sponge from a metal object; moreover, the operator did not damage a soft silicone fragment. However, the effectiveness of these functions was not shown in a clinical study.

FIGURE 18.4 The SSESR platform has two arms with grippers. SSESR, Scorpion Shaped Endoscopic Surgical Robot.

The Single Access and Transluminal Robotic Assistant for Surgeons, from the University of Strasbourg, France, is also a multifunctional platform, and it has been demonstrated in NOTES and SPS [7]. It has a diameter of 18 mm, with a 35 mm passive shaft and a 22 cm bending section. The total number of DoFs can be up to 10, including two DoFs for bending, translation, and gripper open-and-close. The robotic arms can be fully enclosed by a distal articulated cap. This cap is cone shaped, which allows smooth insertion; when the endoscope reaches its destination, the cap separates into two halves to expose the two robotic arms. This motion increases the separation distance between the robotic arms, creating a larger triangulation angle. The instruments used in this robot can be interchanged to fit different surgical purposes, such as graspers, hooks, and needle knives. It is controlled through a force feedback controller, the omega.7 from Force Dimension. However, the robot is oversized for the gastrointestinal lumen; in particular, the cap cannot be fully opened there, so the motion of the robotic arms is constrained.

FIGURE 18.5 The MASTER platform built on a conventional endoscope, in which the cables are exposed. MASTER, Master and slave transluminal endoscopic robot.

The Master and Slave Transluminal Endoscopic Robot, from Nanyang Technological University, Singapore, focused initially on NOTES; however, some experiments were also conducted on endoscopic submucosal dissection (ESD). The first prototype was a traction wire-controlled robotic arm mounted externally on a dual-channel conventional endoscope, as shown in Fig. 18.5. The transmission module was set up outside the endoscope, which made the whole system somewhat bulky.
The common problem of the robotic arms not being retractable also existed in this robot, and the inflexibility of the robotic arms may increase the risk of damaging tissue during insertion. Several clinical trials were conducted in this study; the results showed the effectiveness of this system over conventional endoscopes [8,9]. ViaCath, from Hansen Medical, United States, was initially a cable-actuated robotic arm integrated with a conventional endoscope [10,11]. The robotic arms are flexible, with two segments. Each segment has two DoFs, and therefore the whole robot can have up to 10 DoFs. This design also increases the triangulation angle, as shown in Fig. 18.6. The robotic arms are smaller than those mentioned above, at about 4.75 mm in diameter. It was reported that the force produced by the robotic arm was insufficient, about 0.5 N. The design of the robotic arm was therefore changed to a rigid "shoulder-elbow" configuration. It was claimed that this design makes control more intuitive and natural for operators, and that the rigid arm can produce a much greater force. The robotic arms are no longer built on a conventional endoscope; they are inserted through an overtube with three channels, the remaining channel being used for the endoscopic camera. Statistics and data were reported in this study, but no clinical data. CYCLOPS, from Imperial College London, presented a novel mechanism for an endoscopic surgical robot [12]. Almost all endoscopic surgical robots follow the mechanism of a conventional endoscope, which uses a TSM: the distal end can be controlled remotely, since there is insufficient space for any motors at the tip. This research group presented another mechanism in which the wires are not installed along the robotic arm. Instead, the wires are attached to the middle of the robotic arm, and a large hollow sphere acts as a wire guide and overtube. This robot follows the concept of a parallel robotic manipulator. The advantages of this design are that the force is much larger than with other robots and the precision is better. The hollow sphere also enlarges the gastrointestinal (GI) tract and creates enough working space for the robot to operate by itself, without the need for insufflation.

Applications of Flexible Robots in Endoscopic Surgery Chapter | 18

FIGURE 18.6 The ViaCath instruments use a double-flex section design at the distal tip for articulation.

18.4 Advantages of flexible robots in the application of endoscopic surgery

Flexible manipulators are used extensively in robotic applications, for example, in the aerospace and outer space industries, manufacturing, nuclear plants, the military, surgery, and agriculture. This breadth illustrates the promising performance of the flexible manipulator. Its flexible behavior enables it to be inserted into the human body through a tortuous channel without damaging internal organs, which is very useful in robotizing endoscopic surgery. There are different types of flexible manipulators. With different structures and physical properties, each can serve as an independent manipulator, or several can be combined to work as one. For different endoscopic surgical procedures and environments, researchers construct different types of flexible robots to satisfy the controllability, stability, and dexterity required in the specific workspace and working environment. Wire-driven manipulators [13], concentric tube robots [14,15], and continuum robots [16] are some of the flexible manipulators that have been widely used in minimally invasive surgery. By introducing flexible robotics to assist surgeons in performing endoscopic surgery, the motion of the endoscope and the tool is decoupled, as the robot arms can be operated independently; surgeons can thus manipulate more than one instrument at the same time. When performing robotic endoscopic surgery, the endoscope can be held in a fixed position while surgeons control the manipulators via the control console, instead of holding a single tool and the endoscope in their hands. This change could speed up the procedure and improve safety. The flexible robot also provides more degrees of freedom than traditional handheld endoscopic tools, and because the motion of the flexible robot is mapped to the handle of the control console, the design can have more flexibility. In the traditional procedure, the motion of the tool needs to be simple as surgeons need to hold and manipulate the tool


FIGURE 18.7 Typical kinematic mapping of a continuum manipulator.

with one hand at the same time. Therefore, the adoption of the flexible robot in endoscopic surgery provides more flexibility and possibilities for surgical tool design. The stability and accuracy of a robotic system also help to improve safety. A robot can be very stable compared to a human. In some surgical operations, surgeons need to hold an endoscope for a long period of time, and any unexpected movement of the endoscope at a critical moment in the surgery could lead to serious complications, placing the patient in unnecessary danger. A person becomes fatigued and their ability to hold the endoscope stationary deteriorates after a while; a robot, however, can maintain the same position for a long time. With modeling of the flexible robot arm and motion mapping between the controller and the robot arm, the accuracy of the robotic arm exceeds that of a human arm in some respects, and safety concerns can therefore be eased by using a robot arm to complete the surgery.

18.5 Basic coordinate system and kinematic mapping of a continuum manipulator

A flexible robot is a general term describing the nature of a robot. To define the property and the mathematical model clearly, the term "continuum" is commonly used; hence, the robot arm of the flexible manipulator is referred to as a "continuum manipulator" in this section. A continuum manipulator is typically described by three kinematic spaces: actuator space, configuration space, and task space. The actuation of a continuum manipulator follows this sequence: the base of the continuum manipulator is considered fixed; the actuator wires are pulled, changing the length of the wires inside the manipulator; the curvature changes accordingly; finally, the position and orientation of the tip change (Fig. 18.7). The actuator space variables are the lengths of the actuator wires inside the continuum manipulator. The actuator space vector is denoted by $\vec{q}$ in this chapter, with variables $l_1, l_2, l_3, l_4$, since the continuum manipulator used in this robotic system has four actuator wires; therefore $\vec{q} = [l_1\ l_2\ l_3\ l_4]^T$. Since different manipulators may have different wire configurations, the mapping from the actuator space to the configuration space is unique to each manipulator. For example, the most common manipulators have three or four wires distributed evenly on the circumference, while some others have five or six wires to increase the actuation force. Pneumatics and hydraulics are also commonly used for actuation, in which case the actuator space variable becomes pressure and the mapping differs again. This is called the manipulator-specific mapping, defined by the function $g$ here. The configuration of the continuum manipulator is typically defined by three arc parameters: arc length $s$, curvature $\kappa$, and rotational angle $\phi$. The configuration space vector is denoted by $\vec{u}$, that is, $\vec{u} = [s\ \kappa\ \phi]^T$.

The piecewise constant curvature assumption is usually used in the ideal case for the configuration of the continuum manipulator. Under this assumption, the continuum manipulator is considered a piecewise arc of a circle, with the same curvature along the whole manipulator. This simplifies the calculation, gives the user a direct perspective, and has been applied successfully to many continuum manipulators and robots [17-19]. Meanwhile, we always want the model to be as accurate as possible; recent research has begun to investigate the structure of the continuum manipulator more deeply, for instance by considering friction, material properties, and dynamic behavior. Since the manipulator is considered a piecewise arc, there exist a center and a radius corresponding to this arc. The curvature of the manipulator is defined as the inverse of the radius, that is, $\kappa = 1/r$. The curvature ranges from 0 to infinity, although in practice it never tends to infinity because the manipulator has its own physical limitations. When the manipulator is straight, the curvature is zero: the center of the arc is at infinity, and thus the radius is infinite as well. The typical coordinate frame of a continuum manipulator is shown in Fig. 18.8. The base of the continuum manipulator is placed on the XY plane; when the manipulator is straight, it is aligned with the Z-axis. The center of the arc is a point on the XY plane, and the rotational angle is defined by the angle between the X-axis and the line connecting the origin and this center, as illustrated in the figure. The rotational angle varies from 0 to $2\pi$. For better illustration, we sometimes also use the angle of the arc, $\theta$, to represent the bending angle of the manipulator. The most important information we would like to obtain from the manipulator is the distal tip position and orientation. The task space vector $\vec{x}$ includes six variables: the $x, y, z$ coordinates of the tip of the manipulator and the Euler angles $\alpha, \beta, \gamma$, that is, $\vec{x} = [x\ y\ z\ \alpha\ \beta\ \gamma]^T$. The mapping from the configuration space to the task space is independent of the structure or the actuation method of the manipulator and is applicable to all continuum manipulators. It is a pure kinematic mapping that transfers the arc parameters directly to the tip position and orientation. This mapping is called the manipulator-independent mapping and is defined by the function $h$ here. There are several ways to find the independent mapping, including arc geometry, Denavit-Hartenberg (D-H) parameters, and the Frenet-Serret frame. The derivations from arc geometry and D-H parameters are provided here in detail; a homogeneous transformation matrix parameterized by the arc parameters can be used to represent this mapping, and the resulting matrices from these two methods are the same. The Frenet-Serret frame is omitted, since it is mainly used for concentric tubes, a totally different structure, to describe the shape of a 3D curve, whereas a single constant-curvature continuum section is planar.

FIGURE 18.8 Typical coordinate system of a continuum manipulator. This figure shows different configurations. When $\kappa = 0$, the manipulator aligns with the Z-axis.

18.5.1 Manipulator-specific mapping

The specific mapping converts actuator variables to configuration variables, that is, $\vec{u} = g(\vec{q})$. In the continuum manipulator of this robotic system, a four-wire pull-pull mechanism is used; the actuator space variables are therefore the lengths of the four actuator wires inside the continuum section, $\vec{q} = [l_1\ l_2\ l_3\ l_4]^T$. By changing these lengths, the curvature and the rotational angle of the manipulator are changed. Fig. 18.9 shows the decomposed view of the continuum manipulator: $s$ is the length of the backbone, which is also the length of the arc under the piecewise constant curvature assumption; $h$ is the distance between each disk, or section length; $d$ is the distance between each actuator wire and the backbone; $\kappa$ is the curvature; $\phi$ is the rotational angle; and $\theta$ is the bending angle of the arc. Fig. 18.10 shows the top view of the decomposed continuum manipulator. Each actuator wire forms a curve whose center lies on a common line, and therefore each actuator wire has its own radius of


FIGURE 18.9 Decomposed view of the continuum manipulator.

FIGURE 18.10 Top view of the decomposed continuum manipulator.

curvature: $r_i$ defines the radius of curvature of actuator $i$. Each actuator wire also forms an angle with the bending direction: $\phi_i$ defines the angle of actuator $i$ from the bending direction. As shown in the figure,
$$\phi_1 = 90^{\circ} - \phi,\quad \phi_2 = 180^{\circ} - \phi,\quad \phi_3 = 270^{\circ} - \phi,\quad \phi_4 = 360^{\circ} - \phi$$
Each radius of curvature can be found from
$$r_i = r - d\cos\phi_i$$
Recalling that $s = \theta r$ and $l_i = \theta r_i$,
$$l_i = \theta r_i = \theta\left(r - d\cos\phi_i\right) = s - d\theta\cos\phi_i$$


Since $\theta = s/r = s\kappa$, therefore,
$$l_i = s - ds\kappa\cos\phi_i = s\left(1 - d\kappa\cos\phi_i\right)$$
Note that this is the inverse of the specific mapping, that is, $g^{-1}$, converting the arc parameters to actuator lengths. Summing the lengths of all actuator wires,
$$l_1 + l_2 + l_3 + l_4 = \sum_{i=1}^{4} s\left(1 - d\kappa\cos\phi_i\right) = 4s - sd\kappa\sum_{i=1}^{4}\cos\phi_i$$
From the above equation,
$$\sum_{i=1}^{4}\cos\phi_i = \cos(90^{\circ} - \phi) + \cos(180^{\circ} - \phi) + \cos(270^{\circ} - \phi) + \cos(360^{\circ} - \phi) = \sin\phi - \cos\phi - \sin\phi + \cos\phi = 0$$
Hence,
$$s = \frac{l_1 + l_2 + l_3 + l_4}{4}$$
Furthermore, the length of each actuator wire is given by
$$l_1 = s - d\theta\sin\phi,\quad l_2 = s - d\theta\cos\phi,\quad l_3 = s + d\theta\sin\phi,\quad l_4 = s + d\theta\cos\phi$$
Note that
$$l_1 + l_3 = (s - d\theta\sin\phi) + (s + d\theta\sin\phi) = 2s$$
$$l_2 + l_4 = (s - d\theta\cos\phi) + (s + d\theta\cos\phi) = 2s$$
This shows that the sum of the lengths of each pair of opposite wires is constant; therefore $\Delta l_1 = -\Delta l_3$ and $\Delta l_2 = -\Delta l_4$. The changes in length of opposite actuator wires are coupled, so the opposite wires can be paired up and actuated by the same pull-pull module. Using the formulation
$$l_3 - l_1 = 2d\theta\sin\phi \quad\text{and}\quad l_4 - l_2 = 2d\theta\cos\phi$$
and extracting $d\theta$,
$$d\theta = \frac{l_4 - l_2}{2\cos\phi} = \frac{l_3 - l_1}{2\sin\phi} \;\Rightarrow\; \phi = \tan^{-1}\!\left(\frac{l_3 - l_1}{l_4 - l_2}\right)$$
Recalling $r_i = r - d\cos\phi_i$ and $\theta = \kappa s = l_i/r_i$, so that $r_i = l_i/\kappa s$, therefore,
$$r - d\cos\phi_i = \frac{l_i}{\kappa s} \;\Rightarrow\; l_i = s - d\kappa s\cos\phi_i \;\Rightarrow\; \kappa = \frac{s - l_i}{ds\cos\phi_i}$$


By choosing $i = 2$, we have
$$\kappa = \frac{s - l_2}{-ds\cos\phi}$$
Substituting $s = (l_1 + l_2 + l_3 + l_4)/4$,
$$\kappa = \frac{(l_1 + l_2 + l_3 + l_4)/4 - l_2}{-d\left((l_1 + l_2 + l_3 + l_4)/4\right)\cos\phi} = \frac{(l_1 + l_2 + l_3 + l_4) - 4l_2}{-d(l_1 + l_2 + l_3 + l_4)\cos\phi}$$
Since $\tan\phi = (l_3 - l_1)/(l_4 - l_2)$, we get
$$\cos\phi = \frac{l_4 - l_2}{\sqrt{(l_4 - l_2)^2 + (l_3 - l_1)^2}}$$
Substituting back into the above equation,
$$\kappa = \frac{\left((l_1 + l_2 + l_3 + l_4) - 4l_2\right)\sqrt{(l_4 - l_2)^2 + (l_3 - l_1)^2}}{-d(l_1 + l_2 + l_3 + l_4)(l_4 - l_2)} = \frac{(l_1 - 3l_2 + l_3 + l_4)\sqrt{(l_4 - l_2)^2 + (l_3 - l_1)^2}}{-d(l_1 + l_2 + l_3 + l_4)(l_4 - l_2)}$$
In summary, we have defined the manipulator-specific mapping $g$, converting the actuator lengths $(l_1, l_2, l_3, l_4)$ to the arc parameters $(s, \kappa, \phi)$.
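As a numerical sanity check, the mapping $g$ can be implemented directly from these closed forms. The sketch below is illustrative only (the values of $d$, $s$, $\kappa$, and $\phi$ are arbitrary); it uses the equivalent curvature form $\kappa = \sqrt{(l_4 - l_2)^2 + (l_3 - l_1)^2}/(2ds)$, which follows from $d\theta = (l_4 - l_2)/(2\cos\phi)$ and $\kappa = \theta/s$, keeping $\kappa \ge 0$ with $\phi$ carrying the bending direction:

```python
import numpy as np

def specific_mapping(l, d):
    """g: wire lengths (l1..l4) -> arc parameters (s, kappa, phi).

    Uses s = (l1+l2+l3+l4)/4 and tan(phi) = (l3-l1)/(l4-l2) from the text,
    plus the equivalent form kappa = sqrt((l4-l2)^2 + (l3-l1)^2)/(2*d*s).
    """
    l1, l2, l3, l4 = l
    s = (l1 + l2 + l3 + l4) / 4.0
    a, b = l3 - l1, l4 - l2
    if np.isclose(a, 0.0) and np.isclose(b, 0.0):
        return s, 0.0, 0.0          # straight section: kappa = 0, phi undefined
    phi = np.arctan2(a, b)
    kappa = np.hypot(a, b) / (2.0 * d * s)
    return s, kappa, phi

# Round trip: generate wire lengths from known (s, kappa, phi), then recover them.
d, s0, k0, p0 = 0.004, 0.06, 20.0, np.deg2rad(30.0)
theta = k0 * s0
lengths = [s0 - d * theta * np.sin(p0), s0 - d * theta * np.cos(p0),
           s0 + d * theta * np.sin(p0), s0 + d * theta * np.cos(p0)]
s, kappa, phi = specific_mapping(lengths, d)
assert np.allclose([s, kappa, phi], [s0, k0, p0])
```

The round trip generates the four wire lengths from known arc parameters using the per-wire relations above and recovers them exactly, which is a convenient regression test when porting the mapping to a controller.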

18.5.2 Manipulator-independent mapping

Using the arc geometry method to define the continuum manipulator-independent mapping $h$, the manipulator is modeled as a piecewise arc, as shown in Fig. 18.11. Considering the case $\phi = 0$, the arc lies on the XZ plane and the position of the tip is $\left(r(1 - \cos\theta),\ 0,\ r\sin\theta\right)$.

FIGURE 18.11 When $\phi = 0$, the arc lies on the XZ plane; the position of the tip is shown in the figure.
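The stated tip position can be checked numerically: under the constant-curvature assumption, every point of the backbone, parameterized by arc length $t$, lies on the circle of radius $r$ centered at $(r, 0)$ in the XZ plane. A small illustrative sketch (the values are arbitrary, not from the chapter):

```python
import numpy as np

# For phi = 0 the backbone lies in the XZ plane. A point at arc length t along
# a constant-curvature section with bending angle theta(t) = t / r sits at
#   x(t) = r * (1 - cos(t / r)),   z(t) = r * sin(t / r)
r = 0.04                            # arc radius [m] (kappa = 25 1/m)
s = 0.05                            # total arc length [m]
t = np.linspace(0.0, s, 50)
x = r * (1.0 - np.cos(t / r))
z = r * np.sin(t / r)

# All sampled points are at distance r from the arc center (r, 0),
# confirming the piecewise-constant-curvature picture.
assert np.allclose(np.hypot(x - r, z), r)

# The tip (t = s) matches the stated position (r(1 - cos(theta)), 0, r sin(theta)).
theta = s / r
assert np.isclose(x[-1], r * (1 - np.cos(theta)))
assert np.isclose(z[-1], r * np.sin(theta))
```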


The transformation of the coordinate frame follows this sequence: (1) rotate about the Y-axis by $\theta$ to align with the tip frame; (2) translate to the tip by $\left(r(1 - \cos\theta),\ 0,\ r\sin\theta\right)$; (3) rotate about the Z-axis by $\phi$. The homogeneous transformation matrix is given by
$$T = \begin{pmatrix} \cos\phi & -\sin\phi & 0 & 0 \\ \sin\phi & \cos\phi & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} \cos\theta & 0 & \sin\theta & r(1 - \cos\theta) \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & r\sin\theta \\ 0 & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} \cos\phi\cos\theta & -\sin\phi & \cos\phi\sin\theta & r\cos\phi(1 - \cos\theta) \\ \sin\phi\cos\theta & \cos\phi & \sin\phi\sin\theta & r\sin\phi(1 - \cos\theta) \\ -\sin\theta & 0 & \cos\theta & r\sin\theta \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
By substituting $\kappa = 1/r$ and $\theta = \kappa s$, we get the matrix represented by the configuration space variables,
$$T = \begin{pmatrix} \cos\phi\cos\kappa s & -\sin\phi & \cos\phi\sin\kappa s & \dfrac{\cos\phi(1 - \cos\kappa s)}{\kappa} \\ \sin\phi\cos\kappa s & \cos\phi & \sin\phi\sin\kappa s & \dfrac{\sin\phi(1 - \cos\kappa s)}{\kappa} \\ -\sin\kappa s & 0 & \cos\kappa s & \dfrac{\sin\kappa s}{\kappa} \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
Note that the orientation of the frame after the above transformation does not rotate about the new axis, so it does not represent the true orientation in the real world. Therefore, postmultiplying by $R_z(-\phi)$ is needed, which yields the following homogeneous transformation matrix:
$$T = \begin{pmatrix} \cos\phi\cos\kappa s & -\sin\phi & \cos\phi\sin\kappa s & \dfrac{\cos\phi(1 - \cos\kappa s)}{\kappa} \\ \sin\phi\cos\kappa s & \cos\phi & \sin\phi\sin\kappa s & \dfrac{\sin\phi(1 - \cos\kappa s)}{\kappa} \\ -\sin\kappa s & 0 & \cos\kappa s & \dfrac{\sin\kappa s}{\kappa} \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} \cos(-\phi) & -\sin(-\phi) & 0 & 0 \\ \sin(-\phi) & \cos(-\phi) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} \cos^2\phi(\cos\kappa s - 1) + 1 & \sin\phi\cos\phi(\cos\kappa s - 1) & \cos\phi\sin\kappa s & \dfrac{\cos\phi(1 - \cos\kappa s)}{\kappa} \\ \sin\phi\cos\phi(\cos\kappa s - 1) & \cos^2\phi(1 - \cos\kappa s) + \cos\kappa s & \sin\phi\sin\kappa s & \dfrac{\sin\phi(1 - \cos\kappa s)}{\kappa} \\ -\cos\phi\sin\kappa s & -\sin\phi\sin\kappa s & \cos\kappa s & \dfrac{\sin\kappa s}{\kappa} \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
The transformation matrix $T$ defines the independent mapping $h$. To find the inverse mapping $h^{-1}$, $\phi$ can be determined trivially from the $x$ and $y$ coordinates of the position vector,
$$\phi = \tan^{-1}\!\left(\frac{p_y}{p_x}\right)$$
Considering the case in Fig. 18.14, in order to find $\kappa$ (or $r$), $r$ and $z$ form a triangle with the relation
$$\left(r - \sqrt{x^2 + y^2}\right)^2 + z^2 = r^2$$
$$r^2 - 2r\sqrt{x^2 + y^2} + x^2 + y^2 + z^2 = r^2$$
$$\Rightarrow \kappa = \frac{1}{r} = \frac{2\sqrt{x^2 + y^2}}{x^2 + y^2 + z^2}$$
Moreover, to find $s$,
$$s = r\theta = r\cos^{-1}\!\left(1 - \frac{\sqrt{x^2 + y^2}}{r}\right) = \frac{x^2 + y^2 + z^2}{2\sqrt{x^2 + y^2}}\cos^{-1}\!\left(1 - \frac{2\left(x^2 + y^2\right)}{x^2 + y^2 + z^2}\right)$$

18.5.3 Drawback of a typical coordinate system

Analogous to conventional kinematic analysis, the velocity kinematics are given by
$$\dot{\vec{x}} = J\dot{\vec{q}}$$
where $\dot{\vec{x}}$ is the derivative of the task space vector, $\dot{\vec{q}}$ is the derivative of the actuator space vector, and $J$ is the Jacobian matrix. This robotic system involves the two kinematic mappings $g$ and $h$ given in the previous subsections; therefore, the velocity kinematics of the whole system become
$$\dot{\vec{x}} = J(\vec{u})\,J(\vec{q})\,\dot{\vec{q}}$$
where $J(\vec{q})$ is the Jacobian matrix of the manipulator-specific mapping and $J(\vec{u})$ is the Jacobian matrix of the manipulator-independent mapping. $J(\vec{q})$ is given by
$$J(\vec{q}) = \frac{\partial\vec{u}}{\partial\vec{q}}$$
Note that $l_1$ is coupled with $l_3$ and $l_2$ is coupled with $l_4$; the actuator space can therefore be reduced to $\vec{q} = [l_1\ l_2]^T$. Hence,
$$J(\vec{q}) = \frac{\partial\vec{u}}{\partial\vec{q}} = \begin{pmatrix} \dfrac{\partial u_1}{\partial q_1} & \dfrac{\partial u_1}{\partial q_2} \\ \dfrac{\partial u_2}{\partial q_1} & \dfrac{\partial u_2}{\partial q_2} \\ \dfrac{\partial u_3}{\partial q_1} & \dfrac{\partial u_3}{\partial q_2} \end{pmatrix} = \begin{pmatrix} \dfrac{\partial\phi}{\partial l_1} & \dfrac{\partial\phi}{\partial l_2} \\ \dfrac{\partial\kappa}{\partial l_1} & \dfrac{\partial\kappa}{\partial l_2} \\ \dfrac{\partial s}{\partial l_1} & \dfrac{\partial s}{\partial l_2} \end{pmatrix} = \begin{pmatrix} \dfrac{-(l_2 - l_4)}{(l_1 - l_3)^2\left(\dfrac{(l_2 - l_4)^2}{(l_1 - l_3)^2} + 1\right)} & \dfrac{1}{(l_1 - l_3)\left(\dfrac{(l_2 - l_4)^2}{(l_1 - l_3)^2} + 1\right)} \\ \dfrac{\partial\kappa}{\partial l_1} & \dfrac{\partial\kappa}{\partial l_2} \\ \dfrac{1}{4} & \dfrac{1}{4} \end{pmatrix}$$
$\partial\kappa/\partial l_1$ and $\partial\kappa/\partial l_2$ are omitted here because of space limitations. Since the task space involves position and orientation, they should be found separately. The Jacobian for position is given by
$$J_p(\vec{u}) = \begin{pmatrix} \dfrac{\partial x}{\partial\phi} & \dfrac{\partial x}{\partial\kappa} & \dfrac{\partial x}{\partial s} \\ \dfrac{\partial y}{\partial\phi} & \dfrac{\partial y}{\partial\kappa} & \dfrac{\partial y}{\partial s} \\ \dfrac{\partial z}{\partial\phi} & \dfrac{\partial z}{\partial\kappa} & \dfrac{\partial z}{\partial s} \end{pmatrix} = \begin{pmatrix} -\dfrac{\sin\phi(1 - \cos\kappa s)}{\kappa} & \dfrac{\cos\phi(\kappa s\sin\kappa s + \cos\kappa s - 1)}{\kappa^2} & \cos\phi\sin\kappa s \\ \dfrac{\cos\phi(1 - \cos\kappa s)}{\kappa} & \dfrac{\sin\phi(\kappa s\sin\kappa s + \cos\kappa s - 1)}{\kappa^2} & \sin\phi\sin\kappa s \\ 0 & \dfrac{\kappa s\cos\kappa s - \sin\kappa s}{\kappa^2} & \cos\kappa s \end{pmatrix}$$
To find the Jacobian matrix of the rotational transformation, we have to find the derivative of the rotation matrix $R$ with respect to each of the configuration space variables. We first make use of the orthogonality of the rotation matrix $R$:
$$RR^T = I$$
where $I$ is the identity matrix. Differentiating both sides, we get
$$\dot{R}R^T + R\dot{R}^T = 0$$
Let $S = \dot{R}R^T$; then $S^T = (\dot{R}R^T)^T = R\dot{R}^T$. Substituting back into the above equation,
$$S + S^T = 0$$
so $S$ is by definition skew-symmetric and can be written in the form
$$S = \begin{pmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{pmatrix}$$
where $\omega_x, \omega_y, \omega_z$ are the angular velocities about the X, Y, and Z-axes, respectively. The derivative of the rotation matrix with respect to each configuration space variable gives three skew-symmetric matrices,
$$S_\phi = \left(\frac{dR}{d\phi}\right)R^T,\quad S_\kappa = \left(\frac{dR}{d\kappa}\right)R^T,\quad S_s = \left(\frac{dR}{ds}\right)R^T$$
This results in nine angular velocities, one per axis and configuration space variable. Combining them forms the following matrix, which is the Jacobian of the rotational transformation with respect to the configuration space variables,
$$J_\omega(\vec{u}) = \begin{pmatrix} \omega_{x\phi} & \omega_{x\kappa} & \omega_{xs} \\ \omega_{y\phi} & \omega_{y\kappa} & \omega_{ys} \\ \omega_{z\phi} & \omega_{z\kappa} & \omega_{zs} \end{pmatrix} = \begin{pmatrix} -\sin(\kappa s)\cos\phi & -s\sin\phi & -\kappa\sin\phi \\ -\sin(\kappa s)\sin\phi & s\cos\phi & \kappa\cos\phi \\ 1 - \cos\kappa s & 0 & 0 \end{pmatrix}$$
Combining $J_p(\vec{u})$ and $J_\omega(\vec{u})$, we get the overall $6\times 3$ Jacobian matrix of the manipulator-independent mapping,
$$J(\vec{u}) = \begin{pmatrix} J_p(\vec{u}) \\ J_\omega(\vec{u}) \end{pmatrix}$$
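The skew-symmetric extraction above can be verified numerically: approximating $dR/d\phi$ by a finite difference, forming $S_\phi = (dR/d\phi)R^T$, and reading off $(\omega_x, \omega_y, \omega_z)$ should reproduce the first column of $J_\omega(\vec{u})$. A sketch with arbitrary test values:

```python
import numpy as np

def rotation(s, kappa, phi):
    """Tip rotation R = Rz(phi) Ry(kappa*s) Rz(-phi) from Section 18.5.2."""
    th = kappa * s
    Rz = lambda a: np.array([[np.cos(a), -np.sin(a), 0.0],
                             [np.sin(a),  np.cos(a), 0.0],
                             [0.0, 0.0, 1.0]])
    Ry = np.array([[np.cos(th), 0.0, np.sin(th)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(th), 0.0, np.cos(th)]])
    return Rz(phi) @ Ry @ Rz(-phi)

s0, k0, p0 = 0.05, 15.0, np.deg2rad(40.0)

# S_phi = (dR/dphi) R^T, with dR/dphi approximated by a central difference.
eps = 1e-6
dR = (rotation(s0, k0, p0 + eps) - rotation(s0, k0, p0 - eps)) / (2.0 * eps)
S = dR @ rotation(s0, k0, p0).T
assert np.allclose(S, -S.T, atol=1e-6)        # S is skew-symmetric

# Read off (wx, wy, wz) and compare with the phi-column of J_omega.
w = np.array([S[2, 1], S[0, 2], S[1, 0]])
th = k0 * s0
expected = np.array([-np.sin(th) * np.cos(p0),
                     -np.sin(th) * np.sin(p0),
                     1.0 - np.cos(th)])
assert np.allclose(w, expected, atol=1e-5)
```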

A singularity of the Jacobian occurs when $\kappa = 0$, where some terms of the Jacobian are divided by zero. There are solutions for dealing with this singularity [20,21]; however, these methods reduce the model accuracy and increase the computational requirements.
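One common workaround, switching to a low-order series expansion of the $\kappa$-divided terms near $\kappa = 0$, can be sketched as follows (illustrative only; the threshold is arbitrary, and this is one possible treatment, not necessarily those of Refs. [20,21]):

```python
import numpy as np

# The analytic position terms (1 - cos(kappa*s))/kappa and sin(kappa*s)/kappa
# are 0/0 forms as kappa -> 0, but their limits are finite (0 and s). One
# simple remedy is to switch to a low-order series near kappa = 0.
def tip_position(s, kappa, phi, eps=1e-7):
    if abs(kappa) < eps:
        # Series: (1 - cos(ks))/k ~ k*s^2/2,  sin(ks)/k ~ s
        rho, z = kappa * s * s / 2.0, s
    else:
        rho = (1.0 - np.cos(kappa * s)) / kappa
        z = np.sin(kappa * s) / kappa
    return np.array([np.cos(phi) * rho, np.sin(phi) * rho, z])

# Straight pose: tip lies on the Z-axis at height s, with no division by zero.
p = tip_position(0.1, 0.0, 0.3)
assert np.allclose(p, [0.0, 0.0, 0.1])

# Continuity: the series branch agrees with the analytic branch nearby.
assert np.allclose(tip_position(0.1, 1e-8, 0.3),
                   tip_position(0.1, 1e-6, 0.3), atol=1e-8)
```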

18.6 Experimental results from several successfully developed endoscopic surgical robots

ESD is one of the common applications of endoscopic surgical robots. Gastrointestinal cancers, including stomach and colorectal cancers, are currently the most common cancers worldwide. Stomach cancer was the fifth most common cancer in 2012, with almost one million new cases (6.8% of total cancer cases); in the same year, colorectal cancer was the third most common in men and the second most common in women [22]. ESD has been used widely in Japan for many years for the removal of early GI cancers. ESD is suggested when the lesion is larger than 20 mm in diameter and is suspected to be invasive submucosal cancer [23]. The steps of ESD include marking around the tumor, injection of indigo dye saline, dissection, and removal; detailed explanations of these steps can be found in Refs. [24,25]. ESD enables a high rate of en bloc resection: many studies show that the rate of en bloc resection for large colorectal tumors is 80%-98.9% [26,27]. However, ESD is a technically challenging procedure that requires a high skill level and a long training time, for three main reasons. First, endoscopists can only operate one accessory at a time: the endoscope with the left hand and the electrocautery knife with the right hand. The knife has only a backward-and-forward motion to mark or to dissect; this limited motion does not create enough triangulation to aid the operation. Second, the movement of the endoscope camera is coupled with that of the knife. Since the left-and-right and up-and-down motions of the knife are provided by the endoscope, the camera moves with the tool. This can be imagined as a knife attached to one's head: to drive the knife by rotating the neck, the eyes must move as well. This coupled motion reduces accuracy and makes it difficult to determine how much, and how deep, one is cutting. Third, it is difficult to keep the tip of the flexible endoscope in a stable position inside a hollow viscus, such as the stomach, and targeting a mucosal lesion with the flexible endoscope is challenging. As a result, the perforation rate for ESD is relatively high, from 10% to 53.8% [24,28].


FIGURE 18.12 Overview of the endoscopic surgical robotic system.

FIGURE 18.13 Overview of the robot arms.


To achieve a faster and safer ESD procedure, an endoscopic surgical robotic system targeting ESD is proposed. The system consists of a robotic arm, an endoscopic platform, a master console, a driving unit, and a computer interface, as shown in Fig. 18.12. The whole robotic system is built on a commercially available endoscopic platform, "TransPort," manufactured by USGI Medical, Inc. "TransPort" is a four-channel endoscopic platform 18 mm in diameter and 0.9 m long. It has two DoFs: the distal tip can be bent in two directions by rotating two knobs. The role of the "TransPort" in this robotic system is to bring the two robotic arms into the patient's body. The two robotic arms occupy the larger 6-mm channels on the two sides. The remaining top and bottom channels are 4 mm in diameter; the top channel is used for the endoscopic camera, while the bottom channel is reserved for other uses. There are two robotic arms: a lifter and a dissector. Both have a flexible section; the difference between them is the distal end-manipulator. The lifter is equipped with a gripper and the dissector with a knife holder. Before insertion into the body, both robot arms are enclosed in the overtube so that the endoscopic platform can be inserted smoothly. After the platform reaches the target location, both robot arms come out to complete the operation. The complete structure of the robot arms is shown in Fig. 18.13. The continuum section, gripper, and knife holder are driven by a TSM. The driving motors are placed in the driving unit such that all force and motion are transmitted remotely through the endoscopic platform. Each driving module consists of a pull-pull configured pulley such that the two wires in the same pair move in opposite directions; this enables the driven parts to be controlled in alternating movements. The TSM also enables a tiny, waterproof end-manipulator design.
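The pull-pull pairing can be sketched as follows; the function and values are illustrative, not part of the actual system:

```python
# Pull-pull actuation sketch: one pulley drives an antagonistic wire pair, so
# the two wires in a pair always move by equal and opposite amounts. Combined
# with the coupling l1 + l3 = l2 + l4 = 2s from Section 18.5.1, two pulleys
# suffice to drive one four-wire continuum section.
def wire_lengths(s, pulley_radius, angle13, angle24):
    """Map two pulley angles [rad] to the four wire lengths of one section."""
    dl13 = pulley_radius * angle13   # wire 1 shortens, wire 3 lengthens
    dl24 = pulley_radius * angle24   # wire 2 shortens, wire 4 lengthens
    return [s - dl13, s - dl24, s + dl13, s + dl24]

l = wire_lengths(s=0.06, pulley_radius=0.01, angle13=0.2, angle24=0.1)
assert abs((l[0] + l[2]) - 2 * 0.06) < 1e-12   # opposite wires stay coupled
assert abs((l[1] + l[3]) - 2 * 0.06) < 1e-12
```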
Three in vivo experiments were conducted on live porcine models with this robotic system. The procedure and setup followed the clinical standard. Three 55-60-kg pigs were used, as the size of the esophagus in these pigs is similar to


that of a human. First, a conventional endoscope was used to mark a 20-40-mm-diameter area, depending on the condition of the pig's stomach. Second, indigo dye saline was injected under the tissue. After that, the surgeon used the endoscope to perform a partial incision around the marking. The endoscope was then removed, and the robotic arms were inserted with the "TransPort." The robotic arms were fully retracted inside the overtube before insertion; after the "TransPort" reached the target area and was held in place, the robotic arms were deployed and the experiment started. Fig. 18.14 shows the setup of the experiment. The surgeon on the right used the master console to control the robotic arms while watching the endoscopic view. The assistant in the center controlled the endoscopic camera; her role was only to control the position and orientation of the camera, not any other part of the robotic system. The two communicated during the experiment so that the surgeon had the best possible view of the tissue and the robotic arms. The average total time of the experiment was about 50 minutes, including the time consumed by all procedures; if only the submucosal dissection procedure is counted, the average time was 32 minutes. Fig. 18.15 shows that the dissection rate increased from trial to trial. There was no perforation in any of the trials, and only one additional injection was needed, in the second trial. For gastric lesions, the reported time to complete ESD averages 84 ± 54.6 minutes, and for colon lesions 70.5 ± 45.9 minutes [29]. Compared to the times in our experiment, the robotic system achieved a better performance than the current technique. The perforation rate in our experiment was zero. Although the sample size is small, we can see from

FIGURE 18.14 Experimental setup of the in vivo experiment.

FIGURE 18.15 Dissection rate of each trial of the in vivo experiment.


the dissected tissue that the submucosal layer was dissected very accurately: no deeper tissue was dissected or penetrated. Only a small amount of bleeding occurred, in the third trial; however, it was stopped quickly using the coagulating mode. In summary, the robotic system shows great capability for performing ESD and improves the efficiency and safety of this surgery.

18.7 Future development directions for flexible robots in endoscopic surgery

We envisage that flexible robotics will be further applied as the surgical robotic manipulator for different MIS procedures, for example, injection, suturing, and dilatation. An insertable flexible robotic platform for general surgical procedures could be developed in the future. Different end-effectors supporting different surgical procedures, for example, a needle holder, scissors, and a gripper, will be designed for the proposed flexible robot; during a procedure, the end-effectors can be interchanged to facilitate robotic surgery. From the control aspect, more ergonomic and intuitive control of the flexible robot favors the comfort of surgeons, which in turn affects their performance. A motion-tracking system could be used to capture the motion of surgeons accurately and enable intuitive control, and the mapping between human body movement and the flexible robot could be studied to enhance the accuracy and stability of control. This would also improve the dexterity of the flexible robot. The flexible robotic system should also develop vision-assistance capability in the next stage of development. Surgeons always use their knowledge and experience in determining the operating procedure and in decision-making, but the environment inside the human body is sometimes very complicated. Vision assistance can help to speed up the decision process and even help to reduce the risk of medical malpractice; for instance, it can help to locate the contour of a tumor, allowing surgeons to double-check their initial assessment. Moreover, a vision-assistance system can automate some of the tasks, for example, the partial incision process: it can locate the markers made by surgeons, and the robot can then follow the trajectory generated by the system and make sure the dissection area covers the entire marked area.

To regain the sense of force from the distal end of the tools, a feedback sensor is required. Small, waterproof sensors that can be used in the robotic arm already exist, for example, magnetic tracking sensors; one such sensor is only 0.5 mm in diameter and 10 mm in length. However, magnetic tracking sensors have low accuracy, with a resolution of only about 1-2 mm. This resolution can be used to obtain the rough location of the robotic arm, but not to achieve sufficiently high-accuracy kinematic control. An optical fiber curvature sensing system could be another approach to achieving the same goal.

References

[1] Olympus. Olympus endoscope [Internet]. Available from: <http://www.olympusamerica.com/>.
[2] Wang X, Meng MQ-H. Robotics for natural orifice transluminal endoscopic surgery: a review. J Robot 2012;2012:1-9. Available from: <http://www.hindawi.com/journals/jr/2012/512616/>.
[3] Swanstrom LL, Kozarek R, Pasricha PJ, Gross S, Birkett D, Park P-O, et al. Development of a new access device for transgastric surgery. J Gastrointest Surg 2005;9:1129-37.
[4] Spaun GO, Zheng B, Swanström LL. A multitasking platform for natural orifice translumenal endoscopic surgery (NOTES): a benchtop comparison of a new device for flexible endoscopic surgery and a standard dual-channel endoscope. Surg Endosc 2009;23(12):2720.
[5] Thompson CC, Ryou M, Soper NJ, Hungess ES, Rothstein RI, Swanstrom LL. Evaluation of a manually driven, multitasking platform for complex endoluminal and natural orifice transluminal endoscopic surgery applications (with video). Gastrointest Endosc 2009;70(1):121-5.
[6] Suzuki N, Hattori A, Tanoue K, Ieiri S, Konishi K, Tomikawa M, et al. Scorpion shaped endoscopic surgical robot for NOTES and SPS with augmented reality functions. In: International workshop on medical imaging and virtual reality; 2010. p. 541-50.
[7] De Donno A, Zorn L, Zanne P, Nageotte F, De Mathelin M. Introducing STRAS: a new flexible robotic system for minimally invasive surgery. In: Proceedings—IEEE international conference on robotics and automation; 2013. p. 1213-20.
[8] Phee SJ, Low SC, Huynh VA, Kencana AP, Sun ZL, Yang K. Master and slave transluminal endoscopic robot (MASTER) for natural orifice transluminal endoscopic surgery (NOTES). In: Proceedings of the 31st annual international conference of the IEEE Engineering in Medicine and Biology Society: engineering the future of biomedicine, EMBC 2009; 2009. p. 1192-5.
[9] Phee SJ, Reddy N, Chiu PWY, Rebala P, Rao GV, Wang Z, et al. Robot-assisted endoscopic submucosal dissection is effective in treating patients with early-stage gastric neoplasia. Clin Gastroenterol Hepatol 2012;10(10):1117-21. Available from: <http://www.ncbi.nlm.nih.gov/pubmed/22642951>.
[10] Abbott DJ, Becke C, Rothstein RI, Peine WJ. Design of an endoluminal NOTES robotic system. In: IEEE international conference on intelligent robots and systems; 2007. p. 410-6.
[11] Yeung BP, Gourlay T. A technical review of flexible endoscopic multitasking platforms. Int J Surg 2012;10:345-54.

322

Handbook of Robotic and Image-Guided Surgery

[12] Mylonas GP, Vitiello V, Cundy TP, Darzi A, Yang GZ. CYCLOPS: a versatile robotic tool for bimanual single-access and natural-orifice endoscopic surgery. In: Proceedings—IEEE international conference on robotics and automation; 2014. p. 243642. [13] Palli G, Melchiorri C. Model and control of tendon-sheath transmission systems. In: Proceedings—IEEE international conference on robotics and automation; 2006. p. 98893. [14] Dupont PE, Lock J, Itkowitz B, Butler E. Design and control of concentric-tube robots. IEEE Trans Robot 2010;26(2):20925. [15] Rucker DC, Jones BA, Webster RJ. A geometrically exact model for externally loaded concentric-tube continuum robots. IEEE Trans Robot 2010;26(5):76980. [16] Burgner-Kahrs J, Rucker DC, Choset H. Continuum robots for medical applications: a survey. IEEE Trans Robot 2015;126180. [17] Webster RJ, Jones BA. Design and kinematic modeling of constant curvature continuum robots: a review. Int J Robot Res 2010;29 (13):166183. Available from: ,http://ijr.sagepub.com/cgi/doi/10.1177/0278364910368147.. [18] Hannan MW, Walker ID. Kinematics and the implementation of an elephant’s trunk manipulator and other continuum style robots. J Robot Syst 2003;20(2):4563. Available from: ,http://doi.wiley.com/10.1002/rob.10070.. [19] Cie´slak R, Morecki A. Elephant trunk type elastic manipulator - a tool for bulk and liquid materials transportation. Robotica 1999;17(1):1116. [20] Ro¨sch T, Adler A, Pohl H, Wettschureck E, Koch M, Wiedenmann B, et al. A motor-driven single-use colonoscope controlled with a hand-held device: a feasibility study in volunteers. Gastrointest Endosc 2008;67(7):113946. [21] Groth S, Rex DK, Ro¨sch T, Hoepffner N. High cecal intubation rates with a new computer-assisted colonoscope: a feasibility study. Am J Gastroenterol 2011;106(6):107580. Available from: ,http://www.nature.com/doifinder/10.1038/ajg.2011.52.. [22] World Health Organization. World health statistics. 2012. 
[23] Yoshida N, Yagi N, Inada Y, Kugai M, Yanagisawa A, Naito Y. Therapeutic and diagnostic approaches in colonoscopy. Endoscopy of GI tract. InTech; 2013. [24] Asano M. Endoscopic submucosal dissection and surgical treatment for gastrointestinal cancer. World J Gastrointest Endosc 2012;4(10):438. Available from: ,http://www.wjgnet.com/1948-5190/full/v4/i10/438.htm.. [25] Gotoda T, Yamamoto H, Soetikno RM. Endoscopic submucosal dissection of early gastric cancer. J Gastroenterol 2006;92942. [26] Isomoto H, Nishiyama H, Yamaguchi N, Fukuda E, Ishii H, Ikeda K, et al. Clinicopathological factors associated with clinical outcomes of endoscopic submucosal dissection for colorectal epithelial neoplasms. Endoscopy 2009;41(8):67983. [27] Yoshida N, Naito Y, Sakai K, Sumida Y, Kanemasa K, Inoue K, et al. Outcome of endoscopic submucosal dissection for colorectal tumors in elderly people. Int J Colorectal Dis 2010;25(4):45561. [28] Oka S, Tanaka S, Kaneko I, Mouri R, Hirata M, Kawamura T, et al. Advantage of endoscopic submucosal dissection compared with EMR for early gastric cancer. Gastrointest Endosc 2006;64(6):87783. [29] Kantsevoy SV, Adler DG, Conway JD, Diehl DL, Farraye FA, Kwon R, et al. Endoscopic mucosal resection and endoscopic submucosal dissection. Gastrointest Endosc 2008;68(1):1118.

19
Smart Composites and Hybrid Soft-Foldable Technologies for Minimally Invasive Surgical Robots

Sheila Russo
Boston University, Boston, MA, United States

ABSTRACT
Endoscopes are long, flexible instruments used to navigate the body through natural orifices, reach a surgical target area, and perform diagnosis. However, the flexibility required for safe navigation conflicts with the forces and dexterity that can be provided distally, and causes loss of sensor feedback, making instrument control poor and limiting the therapeutic capabilities of endoscopes. Efforts to solve these issues are limited by the engineering challenges of developing articulated, three-dimensional, miniaturized mechanisms with integrated sensing and actuation, which are safe for medical applications and combine different materials (i.e., soft elements). In this chapter, two examples of smart, millimeter-scale robots combining distal articulation and sensor feedback for minimally invasive surgery are discussed. Technical aspects, such as materials and component selection, manufacturing, and testing, are analyzed. The first is a flexible, 2-mm catheter-like robot for laser-assisted transurethral surgery of the prostate, provided with optical sensing and cable-driven actuation. The second device is a millimeter-scale, multi-degrees-of-freedom robotic arm for endoscopic surgery, composed of hybrid soft-foldable actuators and sensors, fabricated by incorporating soft materials and rigid structural components, and taking inspiration from the principles of origami and kirigami.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00019-0. © 2020 Elsevier Inc. All rights reserved.


19.1 Introduction

Open surgery is the traditional type of surgery in which an incision is made, using a scalpel, to then insert instruments and operate manually. Open surgery provides complete visual and tactile information to the surgeon. However, the trauma and invasiveness related to this type of surgery are significant from the patient's point of view. Minimally invasive surgical (MIS) techniques involve smaller incisions or even (in some cases) no incision at all, enabling shorter recovery times and reducing postoperative pain. In laparoscopic surgery, access to the surgical site is provided through multiple holes in the abdomen, which is then inflated with carbon dioxide to create room for the instruments to operate. Laparo-endoscopic single-site surgery follows the same principle as laparoscopic surgery but with only one access port, typically the umbilicus. In these cases, since the incisions are small, the invasiveness for the patient is reduced: patients tend to have a quicker recovery and less discomfort with respect to conventional or open surgery. Natural orifice transluminal endoscopic surgery, or NOTES, is a scarless procedure that exploits natural orifices to reach the surgical area of interest. MIS endoscopic procedures are currently performed through long, flexible instruments, such as endoscopes in the gastrointestinal (GI) tract, that allow navigation toward the surgical target through a remote access port, as is done in NOTES. However, the flexibility required for safe navigation conflicts with the forces and dexterity that can be provided distally, and causes the loss of sensor feedback, limiting the current therapeutic capabilities of endoscopes [1]. The quest for early detection and treatment of cancer in MIS procedures has pushed research forward in the development of miniaturized smart instruments, making medical robotics one of the fastest growing sectors in the medical devices industry [2].
Nonetheless, efforts to improve endoscopic therapeutic capabilities, and to enable procedures that are currently difficult to perform [such as endoscopic submucosal dissection (ESD), or transurethral laser-assisted surgery of the prostate] [3–5], are limited by the engineering challenges of fabricating distally actuated, safe, miniaturized, smart, articulated structures. Current MIS technologies mainly rely on cable-driven mechanisms, which have limitations such as cable friction and backlash that can affect the accuracy, controllability, and thus the intuitiveness of the system [6]. Furthermore, these solutions lack embedded sensing, thus limiting the amount of information available to the surgeon to effectively perform diagnosis and therapy. The general aim of robotic systems in MIS is to compensate for the counterintuitiveness and complexity that arise when accessing the surgical target from remote locations on the body, and to restore the dexterity and sensor feedback that surgeons are used to having during open surgery [7]. Presently, robotic devices for MIS are manufactured in metal using conventional methods such as CNC (computer numeric control) machining, EDM (electrical discharge machining), laser cutting, or grinding [8–11]. Additionally, silicon-based and novel metal microelectromechanical systems (MEMS) microfabrication techniques have been used to develop millimeter-scale devices [12]. Despite these advances, existing manufacturing approaches present limitations in the achievable complexity for minimally invasive tools, thus limiting their therapeutic capabilities and precluding the possibility for new minimally invasive procedures to become viable. Indeed, it remains a significant engineering challenge to develop articulated, three-dimensional (3D), millimeter-scale mechanisms with integrated sensing and actuation, which are also safe for medical applications, and thus, for example, biocompatible.
Ideally, such mechanisms would also combine different materials in the fabrication process: not only metal but also flexible, soft, and stretchable matter, which can interact more safely with biological tissue owing to its inherent compliance.

19.1.1 Urology

Many robotic research platforms have been studied and developed in recent years for urologic applications. A robotic system for transurethral inspection of the bladder urothelium and tumor resection is presented in Ref. [13]. An active remote steering mechanism for bladder cancer detection and postoperative surveillance is proposed in Ref. [14]. The Hansen Medical Inc. robotic platform was adapted and tested for visual inspection of the interior of the urethra and kidneys in Ref. [15]. The feasibility and safety of a magnetic resonance imaging (MRI)-controlled transurethral ultrasound therapy robotic system for prostate cancer are evaluated in Ref. [16]. Robotic systems for precise needle insertion in the prostate under continuous MRI guidance are discussed in Refs. [17,18]. A manipulator for transurethral resection of the prostate (TURP), provided with a prostate displacement mechanism and a continuous perfusion resection system, is proposed in Ref. [19]. A tool to improve efficacy and safety during transurethral endoscopic surgery is presented in Ref. [20]. A robotic system to perform transurethral ultrasound scanning, surgical motion planning, execution, and virtual evaluation of transurethral laser resection is introduced in Ref. [21]. A handheld system combining active cannula robots with conventional endoscopes for transurethral laser prostate surgery is discussed in Ref. [22]. In the last couple of decades, the application of lasers in urology has undergone significant advances [23]. In particular, laser-assisted procedures are emerging as a valid clinical alternative to TURP for the treatment of benign prostatic
hyperplasia (BPH). Laser-assisted treatment of BPH is performed transurethrally via the resectoscope (similarly to TURP), as shown in Fig. 19.1. The resectoscope is a straight endoscopic instrument composed of three elements: a working element provided with an operative channel for insertion of the surgical tool (i.e., the laser), optics for visualization of the surgical site, and a sheath with irrigating-fluid valves. In TURP, a wire-loop electrode for cutting and coagulating is inserted by the surgeon through the operative channel. The procedure requires general or spinal anesthesia. This operation has disadvantages such as blood loss requiring transfusion, incontinence, impotence, and long postoperative catheterization time [24]. TURP is still considered the gold-standard surgical treatment for patients with BPH; however, it is associated with significant morbidity and mortality. With respect to TURP, laser-assisted procedures reduce intraoperative blood loss as well as postoperative bleeding, catheterization time, invasiveness of the surgical procedure, duration of hospital stay, and recovery time [25]. Despite this, some issues still hamper the widespread use of this technology: for example, distal dexterity is greatly reduced, and contact between the laser and prostatic tissue cannot be controlled. These limitations, mainly caused by the design of the surgical instrumentation itself (i.e., the resectoscope), may influence the outcome of surgery since, with some laser types, the prostatic tissue can be ablated only when in direct contact with the laser [5,26]. Refinements to existing technology could increase the role of lasers in BPH surgical treatment; in particular, a significant advance could come from merging robotics with minimally invasive laser techniques. Among the robotic platforms described previously, laser technology is integrated only in Ref. [13] for bladder tumors, and in Refs. [21,22] for prostatic hyperplasia. No robotic platforms exist for laser-assisted BPH surgery combining distal dexterity and direct contact monitoring at the surgical tool. In Section 19.2, the design, fabrication, and testing of a robotic platform for transurethral laser surgery of BPH are presented. This system combines contact monitoring at the surgical site with distal dexterity, while maintaining a small overall distal encumbrance. The robot is designed to be compatible with current commercial endoscopic instrumentation, thus avoiding the need for dedicated or customized instruments.

FIGURE 19.1 Overview of the laser-assisted transurethral surgical procedure for BPH. The resectoscope is inserted through the urethra to access the prostate. The surgeon can move the resectoscope by translating and rotating it along/about its longitudinal axis. BPH, Benign prostatic hyperplasia.

19.1.2 Gastroenterology

Flexible endoscopes are a widespread tool for diagnostic purposes in the GI tract (Fig. 19.2). However, delivering therapy through these platforms introduces several challenges in terms of instrument controllability, stability, and the capability to provide accurate and repeatable dexterous motions at the surgical site [1]. The fabrication of articulated structures able to perform tasks effectively in complex and highly unstructured environments such as the human body presents several challenges, mainly due to the lack of viable manufacturing techniques and actuation strategies at these scales. Instruments need to provide the surgeon with sensory feedback as well as the dexterity and forces necessary at the surgical site. Several solutions have been explored to enhance the therapeutic abilities of endoscopes [27,28], and comprehensive review papers on these systems can be found in Refs. [6,29,30]. Some of these consist of integrating robotic arms or manipulation aids directly on the tip of an endoscope. This strategy increases dexterity and instrument triangulation (deflecting the instruments from the vision system) without losing the flexibility of the endoscope in reaching the target. Nevertheless, current platforms mainly rely on conventional cable-driven actuation, which is prone to friction, backlash, and hysteresis (affecting accuracy, controllability, and thus the intuitiveness of the system), or on other small-scale actuators, such as piezoelectric bimorphs and shape memory alloys, that require high voltage or current to operate [31] and therefore cannot guarantee a safe interaction with biological tissue. These devices not only have limited maneuverability and manipulation capabilities, but also typically lack embedded sensors, thus limiting the possibility to perform advanced surgical tasks. Alternative means of actuation, power transmission, and sensor feedback are required to provide seamless and effective instrument operation [32]. In Section 19.3, the design, materials, manufacturing, and evaluation of a hybrid soft-foldable robotic arm for GI endoscopic surgery are presented. This platform incorporates hard and soft biocompatible materials, embedding distributed sensors and actuators directly into the materials of the robot's body. Taking inspiration from the principles of origami and kirigami, the arm is designed to be mounted at the tip of an endoscope in a folded configuration and to be unfolded and deployed when necessary at the surgical target area, thus minimizing the impact on endoscope navigation in the GI tract.

FIGURE 19.2 Overview of minimally invasive gastrointestinal endoscopic surgery. The endoscope is inserted through the anus and navigates through the colon to access the surgical area of interest.

19.1.3 Proposed robotic platforms

The robotic systems described in this chapter are focused on some of the more challenging endoscopic techniques, where tool dexterity and sensor feedback are at a premium and can potentially make the difference between success and failure. These devices are smart, miniaturized, flexible end-effectors for endoscopy and are designed to be integrated with current endoscopic instrumentation. They can be defined as endoscopic add-ons, meaning that they are specifically designed and conceived to be mounted on top of an endoscope or passed through its working channel, without disrupting either endoluminal navigation or the surgical workflow. Therefore, these devices can exploit real-time imaging from the endoscope camera vision system at any time throughout the surgical procedure. They enable interventional endoscopy in the fields of urology and gastroenterology and are compatible with commercial tools, thus avoiding the need for dedicated or customized instruments. These platforms are mainly involved during the intraoperative phase of the surgical process. They are meant to restore distal dexterity and sensor feedback at the surgical site, which are typically lost in surgical endoscopy due to the limitations of traditional devices. This research is focused on the design, mechanics, materials, and manufacturing of novel multiscale and multimaterial biomedical robotic systems that can make action, sensing, and control easier and more robust in natural, unstructured environments such as the human body. The overall aim is to enable surgical endoscopic procedures that are more efficient and cost effective, with better long-term surgical outcomes compared with conventional surgeries. In support of this, in vitro and ex vivo tests have been performed, as detailed in Sections 19.2 and 19.3.

19.2 Smart composites in a robotic catheter for targeted laser therapy

19.2.1 Clinical motivation

BPH is a nonmalignant enlargement of the prostate that may compress or cause an occlusion of the urethra; it is the most common pathology afflicting aging men and constitutes a major factor impacting male health. BPH can lead to symptoms that affect quality of life and sleeping patterns, such as urgency to urinate, frequent urination, weak stream, straining, and/or the sensation of incomplete bladder emptying. In the most severe stage of BPH, the inability to completely empty the bladder may progress to complete urinary blockage, which can lead to kidney damage [24]. Laser-assisted BPH procedures are performed through the resectoscope (see Fig. 19.1), a telescopic, straight, and rigid instrument allowing only translation and rotation along/about its longitudinal axis. For this reason, dexterity and force feedback at the tip of the surgical tool are strongly reduced. Furthermore, in cases of a larger prostate, there can be difficulties in manipulating the resectoscope [5]. In recent decades, various lasers have been introduced for the treatment of BPH, including neodymium:yttrium-aluminum-garnet, potassium titanyl phosphate:yttrium-aluminum-garnet, diode, holmium:yttrium-aluminum-garnet (Ho:YAG), and thulium:yttrium-aluminum-garnet lasers. Depending on the laser wavelength, the interaction with tissue differs: laser radiation can be absorbed by water or by hemoglobin in prostate tissue [26]. The Ho:YAG laser is a pulsed laser with a wavelength of 2120 nm and an optical penetration depth in prostatic tissue of 0.4 mm. Its energy is highly absorbed by water (resulting in rapid dispersion of heat) and requires contact with the target tissue; if used in noncontact mode, the efficiency of tissue vaporization is reduced. In particular, the closer the contact, the better the distribution of laser energy and the outcome of surgery: energy loss by water absorption is minimized, and tissue ablation is more homogeneous, avoiding the formation of craters (which can entrap tissue fragments, debris, and perfusion liquid that partially absorb and scatter the incident light, thus hindering the laser effect) [33]. Laser-assisted surgical procedures can be efficient and cost effective for BPH treatment, with better long-term surgical outcomes compared with conventional TURP or open prostatectomy. However, a steep operative learning curve caused by the previously discussed issues may be an obstacle to the widespread use of these techniques. Moreover, surgical outcomes are currently variable and dependent on the experience and technical ability of the urologic surgeon [5]. The limitations of current laser instrumentation play a critical role in the ability of surgeons to deliver consistent care. Advancement in the instrumentation, with the aim of providing dexterity and force feedback at the tip of the surgical tool, can potentially improve laser-assisted BPH treatment and final patient outcomes. In the next section, a MIS robotic platform for endoscopic laser-assisted surgery of the prostate is discussed. This consists of a smart composite catheter-like robot integrating a laser fiber, a sensing system, and an actuation mechanism. The catheter is passed through the working channel of a resectoscope to perform surgical treatment. The main aim of this system is to solve the challenge of contact monitoring between the surgical end-effector and biological tissue, which currently causes poor control of the ablation procedure, and to improve the distal dexterity of the laser [34].

19.2.2 Robotic catheter: design, materials, and manufacturing

The proposed system is composed of a flexible catheter-like robot, integrated in a commercial resectoscope operative channel and provided with a cable-driven actuation mechanism and an optical fiber-based sensing unit, to steer a laser surgical end-effector while controlling contact forces with biological tissue and effectively performing ablation. A schematic overview of the platform design is shown in Fig. 19.3. All the materials and components of this system are specifically selected to be biocompatible and sterilizable with common methods, such as ethylene oxide, beta rays, and gamma rays. The main body of the system is a polyamide multilumen catheter with an outer diameter of 2.1 mm (so that it can be inserted into the 2.5 mm operative channel of a commercial resectoscope: Karl Storz 27042 LV) and a central lumen of 1.3 mm to integrate a laser fiber (typically 0.8–1 mm in diameter), as shown in Fig. 19.4. This catheter is manufactured by extruding a soft polymer. The laser fiber used is provided by El.En. SpA (http://www.elengroup.com): it has a diameter of 1 mm and it is a lateral-emission fiber, that is, it emits the laser light beam at 90 degrees to the fiber direction. The multilumen is also provided with six small lumens (arranged around the periphery, forming the vertices of a hexagon) with a diameter of 0.22 mm to integrate cables for actuation (0.17 mm diameter) and fiber Bragg grating (FBG) sensors (0.18 mm diameter). Cables are fixed at the tip of the multilumen catheter by a metal plate, shown in Fig. 19.4, fabricated on a Kern HSPC 2522 CNC micromilling machine. The metal plate, bonded to the catheter surface, provides a stiffer interface to connect the cables, thus avoiding high stress concentration on the multilumen catheter during steering. In this way, the multilumen catheter is better protected from possible damage.

FIGURE 19.3 Schematic overview of the design of the flexible catheter-like robot for laser-assisted surgery of BPH. In the inset on the top left, a section of the multilumen catheter is shown, integrating the laser fiber, the FBG sensors (in pink), and actuation cables (in green). BPH, Benign prostatic hyperplasia; FBG, fiber Bragg grating.

FIGURE 19.4 Multilumen catheter: (A) tip of the flexible catheter-like robot, with the multilumen catheter inserted in the resectoscope, (B) picture of the multilumen section at the optical microscope, (C) metal plate to fix the cables on a 5 cent Euro coin.

The material used for the multilumen catheter is polyamide 12 (PA12). The Young's modulus of PA12 is 1.1 GPa, allowing the multilumen to serve as a mechanical continuum for strain transmission between the laser and sensors (which have a polyimide coating with a Young's modulus of 2.5 GPa), while remaining flexible enough to be bent by the cables. The melting point of PA12 (178°C) is compatible with laser ablation. The material's lubricity (coefficient of friction of 0.28) makes it possible to insert actuation cables and fibers into the lumens, but also to firmly glue the FBG sensors and the laser fiber to the multilumen body. The robotic catheter sensing unit is composed of FBG optical sensors responsible for contact monitoring between the laser surgical end-effector and prostatic tissue. These sensors are sensitive to both temperature and strain. Redundant FBG sensors are used to compensate for temperature effects and measure only the strain, as is typically done with electrical strain gauges. Three FBG sensors are integrated at 120-degree intervals into the multilumen catheter. The sensing element (i.e., the Bragg grating) is positioned at about 25 mm from the ablation site, that is, the laser tip, as shown in Fig. 19.5. The laser tip protrudes 10 mm from the multilumen. Due to this arrangement, the sensors experience the same temperature variation but different strains in terms of compression and tension, and therefore temperature compensation can be performed. Furthermore, the ablation temperature drops dramatically with distance from the ablation site, so the laser does not affect the FBG sensors' operation. SmartFBG sensors (Smart Fibres Ltd., United Kingdom) are integrated in the robotic platform.
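The temperature-compensation scheme just described, three gratings at 120 degrees that share the same temperature change but see different bending strains, can be sketched numerically. The standard FBG relation Δλ/λ ≈ (1 − p_e)ε + k_T·ΔT underlies it; the numeric constants below are typical textbook values for silica fiber and are assumptions, not the chapter's calibration:

```python
import math

# Standard FBG relation: d(lambda)/lambda = (1 - p_e)*eps + k_T*dT.
# Values below are typical for silica fiber, NOT the chapter's calibration:
LAMBDA0 = 1550e-9    # center Bragg wavelength [m] (stated in the chapter)
K_EPS = 0.78         # strain sensitivity factor (1 - p_e), assumed
ANGLES = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]   # 120-degree FBG spacing

def bending_components(delta_lambdas):
    """Decompose three FBG wavelength shifts [m] into the two transverse
    bending-strain components. Temperature (and axial strain) shift all
    three gratings identically, so subtracting the common mode rejects it."""
    eps = [dl / (LAMBDA0 * K_EPS) for dl in delta_lambdas]
    common = sum(eps) / 3.0                  # temperature + axial common mode
    diff = [e - common for e in eps]         # bending-only strains
    # Project the differential strains onto the x/y bending axes
    eps_x = (2.0 / 3.0) * sum(d * math.cos(t) for d, t in zip(diff, ANGLES))
    eps_y = (2.0 / 3.0) * sum(d * math.sin(t) for d, t in zip(diff, ANGLES))
    return eps_x, eps_y
```

With a calibration factor relating tip bending strain to external load, (eps_x, eps_y) maps to the contact-force components Fcx and Fcy discussed in the text.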
The selected sensors are flexible fibers with a polyimide recoat, which is the recommended combination for strain sensing (http://www.smartfibres.com/). The diameter of the fiber is 0.18 mm, the grating length is 5 mm, and the distance from the grating to the tip of the fiber is 15 mm. The center Bragg wavelength is 1550 nm for all three sensors, thus avoiding possible optical interference with the typical wavelengths of laser fibers for prostate treatment (the Ho:YAG laser is a pulsed laser with a wavelength of 2120 nm). The Epo-Tek 301 optical adhesive is used to glue the FBG sensors and the laser fiber in the multilumen catheter. Since the volume in which to pour the glue is very small, a low-viscosity glue is used to facilitate casting. Furthermore, Epo-Tek 301 is nontoxic, resists the most common sterilization methods, has a fast curing time, and has an operating temperature range of up to 300°C, which is compatible with the laser application. The assembly procedure is as follows: first, the optical fibers are inserted in the lumens of the multilumen catheter; then the multilumen is kept in a vertical position to facilitate the downward flow of the glue; at this point, the glue is injected into the lumens using a microneedle (0.16-mm diameter) mounted on a syringe. The glue curing time is 24 hours at room temperature. This sensing system is capable of monitoring contact forces Fc in two directions (Fcx and Fcy) with sub-Newton sensitivity. The force component along the z-axis, Fcz, is not of interest since the laser fiber is a lateral-emission fiber. Due to their particular arrangement, the sensors experience the same temperature variation but different strains in terms of compression and tension, and therefore temperature compensation can be performed easily.

FIGURE 19.5 Schematic representation of the flexible catheter-like robot sensing system design: FBG positioning within the system (three FBG sensors are integrated inside the multilumen catheter; only two are visualized in the picture for clarity). FBG, fiber Bragg grating.

Three cables (nonabrasive Dyneema, 0.17 mm in diameter) are distributed at 120-degree intervals in the section of the catheter, in order to steer the laser in all directions with a double-bending degrees-of-freedom (DoFs) cable mechanism.
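Three-cable steering of this kind is commonly modeled with a constant-curvature approximation, in which each cable's required pull depends on its distance from the neutral bending axis. The sketch below uses assumed geometry (pitch radius, bendable length), not the device's specification:

```python
import math

# Assumed geometry for illustration (not the device's specification):
R_CABLE = 0.85e-3   # cable pitch radius [m], roughly half the 2.1 mm diameter
L_BEND = 20e-3      # length of the steerable distal section [m]
ANGLES = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]  # 120-degree cable spacing

def cable_pulls(kappa, phi):
    """Constant-curvature model: pull (+) or release (-) of each cable
    needed to bend the tip with curvature kappa [1/m] in the bending
    plane at angle phi [rad]. A cable at angle theta sits at distance
    r*cos(theta - phi) from the neutral axis, so it shortens by
    kappa * r * L * cos(theta - phi)."""
    return [kappa * R_CABLE * L_BEND * math.cos(t - phi) for t in ANGLES]
```

The three pulls sum to zero (antagonistic actuation): for example, bending with a 20 mm radius of curvature (kappa = 50 m⁻¹) in the plane of cable 1 pulls that cable by 0.85 mm while releasing the other two by half as much.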
This is preferable from a clinical point of view, with respect to a single bending plus manual twisting to avoid rolling the whole system around its z-axis of 6 180 degrees from the outside of the patient, to reach all points to be treated within the prostate: this rolling procedure would produce difficulty in transmitting the rotational motion to the laser tip (due to friction with the biological tissue) and would cause stretching of surrounding tissue. Three brushless DC motors are connected to the cables through pulleys. Motor control is performed using commercial drivers. All the components forming the three actuation units (motors, pulleys, drivers) are fixed on a base plate provided with a dedicated support for the resectoscope and for the multilumen catheter, as shown in Fig. 19.6. The overall platform is shown in Fig. 19.7. It is composed of a PC running a LabVIEW graphical user interface (GUI) to acquire the signals from the FBG sensors and control the actuation motors, the FBG interrogator, and the driving motor platform. The system is currently conceived to provide contact force feedback (via visual and/or auditory signals) to the surgeon by means of the GUI. The system dimensions are 600 mm in length, 300 mm in width, and 100 mm in height. The overall weight of the

19. Composites and Foldable Arms

FIGURE 19.6 Actuation unit of the flexible catheter-like robot: (A) components of the actuation mechanism, (B) close-up view of multilumen support and redirection system of the cables, (C) assembled actuation system, and (D) close-up view of the pulleys.


Handbook of Robotic and Image-Guided Surgery

FIGURE 19.7 Overview of the robotic platform for laser-assisted surgical treatment of BPH. BPH, Benign prostatic hyperplasia.

FIGURE 19.8 In vitro test of the robotic platform with an anthropomorphic phantom mimicking the biomechanical and anatomical properties of the surgical scenario for BPH: (A) experimental setup, (B) and (C) approaching phases of the laser tip to the tissue phantom, (D) and (E) slicing phases of the laser tip on the tissue phantom. During experiments, optics are inserted in the phantom so as to point toward the front of the catheter-like robot. BPH, Benign prostatic hyperplasia.

platform is 4 kg and it can be supported by a robotic arm (so as to compensate for its weight) during the surgical procedure, following a shared-control robotic surgery paradigm. This device has been tested in vitro (see Fig. 19.8), together with a urological surgeon, demonstrating successful performance according to the clinical requirements. The sensing system allows monitoring of contact between the laser and the hypertrophic tissue, and the actuation mechanism enables steering of the laser fiber inside the prostatic urethra of the patient when contact must be reached. Future ex vivo and in vivo tests with clinicians could pave the way to studying the contact forces required during laser-assisted transurethral surgery of the prostate and the necessary dexterity of the laser fiber. This will help determine the best combination of dexterity and contact force to apply in order to improve the ablation procedure.
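The temperature-compensation idea described in this section, where the FBGs share a common thermal wavelength shift while the bending strains cancel in the mean, can be sketched as follows; all sensitivities and the force calibration constant are hypothetical values, not taken from the chapter:

```python
import numpy as np

# Illustrative parameters (hypothetical, not from the chapter)
LAM0 = 1550.0e-9   # nominal Bragg wavelength [m]
K_EPS = 0.78       # strain sensitivity (dimensionless gauge factor)
K_T = 6.5e-6       # thermal sensitivity [1/K]
ANGLES = np.deg2rad([0.0, 120.0, 240.0])  # FBG angular positions in the cross-section
C_FORCE = 2.0      # strain-to-force calibration [N per unit strain] (hypothetical)

def decode(dlam):
    """Recover transverse force components (Fcx, Fcy) and a temperature
    change estimate from three Bragg wavelength shifts dlam [m].

    Sensors at 120-degree spacing see identical temperature, but bending
    strains proportional to the cosine of the bending direction, so the
    common mode (mean) isolates temperature and the differential mode
    isolates the force."""
    ratio = np.asarray(dlam) / LAM0
    common = ratio.mean()            # thermal term: bending strains cancel in the mean
    dT = common / K_T
    eps = (ratio - common) / K_EPS   # temperature-compensated strains
    # Project the 120-degree strain pattern onto the x/y axes
    ex = (2.0 / 3.0) * np.sum(eps * np.cos(ANGLES))
    ey = (2.0 / 3.0) * np.sum(eps * np.sin(ANGLES))
    return C_FORCE * ex, C_FORCE * ey, dT
```

A uniform wavelength shift on all three gratings decodes to zero force and a pure temperature change, which is the property the chapter exploits.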

19.3 Soft-foldable endoscopic arm

19.3.1 Clinical motivation

Smart Composites and Hybrid Soft-Foldable Technologies for Minimally Invasive Surgical Robots Chapter | 19

Colorectal cancer is the third most common cancer in men and women, with almost 1.4 million new cases and 694,000 deaths per year worldwide [35], and an estimated 150,000 new diagnoses and more than 50,000 deaths in the United States for 2018 [36]. Surgery is often the main treatment for early-stage colon cancers, and the type of surgery used depends on the stage (i.e., the extent) of the cancer [37]. Colonoscopy is a widespread technique for colorectal cancer diagnosis and treatment. The colonoscope is a long, flexible tube, typically around 1 cm in diameter, equipped with an imaging system at its tip based on a camera and a light source. The colonoscope is inserted and advanced slowly through the intestine, under visual control, into the rectum and through the colon up to the cecum. ESD is a MIS technique, performed endoscopically through the colonoscope, for early-stage treatment of malignant lesions of the stomach, esophagus, and colorectum [3,4]. ESD is used to remove lesions en bloc from the submucosal space of the GI tract and involves injecting fluid into the submucosa to elevate the targeted lesion from the muscle layer. The submucosa is then dissected under the lesion with a specialized knife. In other procedures, such as endoscopic mucosal resection (EMR), the lesion is removed with a snare. ESD enables removal of larger lesions than EMR, with a lower local recurrence rate and without increasing procedure-related complications (bleeding or perforation) [38]. Although the use of this procedure is increasing, technical difficulty, long learning curves, and long procedure durations are limiting the adoption of ESD in the United States to a modest number of academic centers for the foreseeable future [39].

19.3.2 Endoscopic arm: design, materials, and manufacturing

The development of a smart, miniaturized surgical end-effector for ESD requires advanced fabrication techniques. The “pop-up book microelectromechanical systems (MEMS)” manufacturing method creates 3D microstructures by folding multilayer rigid-flex laminates [40], and enables fabrication of highly complex structures with embedded actuation and sensing [41]. Surgical applications of pop-up mechanisms have been proposed as self-assembling force sensors for catheters [42] and as mechanisms for deflecting electrosurgical tools in endoscopic procedures [28]. However, the actuation strategies typically used in these systems involve piezoelectric bimorph actuators and shape-memory alloys that require high voltages and currents to operate, and are thus undesirable when safe interaction with biological tissue is required [31]. Merging soft materials and soft fluidic actuation with pop-up book-inspired rigid micromechanisms can enable fabrication of millimeter-scale robotic end-effectors with embedded sensing and user-defined distributed compliance, thus paving the way to smaller, smarter, softer robots for medical/surgical applications [43]. Soft robotic fluidic actuators are often made of silicone elastomers (e.g., Ecoflex or Dragon Skin, Smooth-On, Inc., United States) and typically rely on 3D-printed molds. However, these materials do not meet the biocompatibility requirement necessary for medical/surgical devices. In addition, 3D printing is inappropriate for millimeter-scale actuators; soft lithography-based microfluidics techniques [44], including the creation of lithographic micromolds and plasma bonding, can be used instead. Soft robotics represents a promising technology for robotic-assisted MIS because soft robots are constructed from compliant and flexible materials, resulting in machines that can safely interact with the surrounding environment [45,46].
They have already found applications in several research fields, including biomimetic devices (given that the majority of the animal kingdom is mostly or entirely soft) [47–50], wearable robots [51,52], and medical robots [53–55]. However, the low elastic modulus of soft materials can limit the interaction forces between the robots and the surgical target. To resolve the paradox of generating large forces from soft devices, stiffening mechanisms can be exploited [56], such as granular jamming, which has been integrated into a soft manipulator to effectively apply forces on a desired surgical target [57]. Soft biomedical robots are typically centimeter-scale [58] or larger, but the current trend in minimally invasive procedures is to perform surgical tasks through small entry points remote from the surgical target [59], thus requiring millimeter-scale systems. Prior examples of soft millimeter-scale mechanisms include flexible microactuators for building robotic manipulators and grippers, constructed by casting silicone rubber and nylon fibers in micromolds fabricated using electrical discharge machining (EDM) [60]; soft microtentacles for grasping delicate objects, consisting of elastomeric microtubes fabricated with a direct peeling-based soft-lithographic technique [61]; and a soft miniature hand fabricated by casting in micromolds and bonding silicone rubber through excimer light irradiation [62]. The forces that these actuators can exert are restricted to the millinewton range, so suggested biomedical applications are limited to low-force surgical tasks, such as those performed in retinal surgery [63] and neurosurgery [64]. These limitations motivate the need for new millimeter-scale manufacturing technologies that combine soft materials with precision mechanisms to achieve distal articulation, integrated sensing, and effective force transmission with compliant, back-drivable, and safe devices for minimally invasive surgery.
In this chapter, soft-foldable actuators and sensors are the result of a hybrid manufacturing paradigm (see Fig. 19.9) that combines pop-up book MEMS manufacturing with soft lithographic techniques.





FIGURE 19.9 Soft-foldable mechanisms. (A and E) Fully soft microactuators during expansion, modeled using Laplace’s law for a thin-walled sphere and Laplace’s law for a thin-walled cylindrical vessel. (B and F) Soft-foldable actuators schematics in deflated and inflated states. (C and G) Soft-foldable actuator prototypes during bending upon pressurization with water. (D and H) Exploded view showing all layers.
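The Laplace's-law models referenced in the caption above reduce to two one-line membrane-stress formulas; the pressure and geometry values in this sketch are hypothetical, chosen only to illustrate the relations:

```python
def sphere_wall_stress(p, r, t):
    """Laplace's law for a thin-walled sphere: sigma = p * r / (2 * t).
    p: internal pressure [Pa], r: radius [m], t: wall thickness [m]."""
    return p * r / (2.0 * t)

def cylinder_hoop_stress(p, r, t):
    """Laplace's law for a thin-walled cylindrical vessel: sigma = p * r / t."""
    return p * r / t
```

For the same pressure, radius, and wall thickness, the cylindrical membrane carries twice the hoop stress of the spherical one, which is one reason the two geometries expand differently under identical inflation pressures.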

This method enables monolithic integration of soft materials and soft fluidic microactuators with other mechanical and sensing components, without the need for manual intervention to assemble discrete parts (thus guaranteeing a more accurate and faster fabrication process). Here, soft fluidic microactuators and soft-foldable mechanisms are manufactured using biocompatible silicone elastomers (NuSil Technology, CA, United States). Silicone elastomers with different Shore hardness have been selected to tune the deformation: MED4-4220 (17A durometer) and MED-6033 (50A durometer). These silicone elastomers are more stretchable and have a higher tear strength than Sylgard 184 (which is usually used in soft lithography [44]), making them better candidates for fabricating soft actuators that can exhibit large deformations at relatively low pressures. These hybrid mechanisms are created from fiber-reinforced epoxy sheets (254 µm thick) as the structural material and polyimide film (25 µm) as the flexible material. Gaps in the structural material expose the embedded flexible material, creating folding flexure joints that define the articulation in these actuators. These layers are bonded together (Fig. 19.9) with a biocompatible medical pressure-sensitive adhesive (3M 9877) to avoid risks related to cytotoxicity. This method combines the accuracy, flexibility in material selection, scalability, and topological complexity of pop-up book MEMS with the soft, biocompatible materials and microfluidics of soft lithography. This hybrid concept also enables soft fluidic actuation to safely interact with biological tissue, without the high voltages or temperatures found in other small-scale MIS robotic end-effectors. Furthermore, this method is low cost and enables batch manufacturing. Integration of soft components in pop-up book MEMS is achieved by first fabricating the soft fluidic microactuators with soft lithography and laser machining, as shown in Fig. 19.10A–H. The actuators are integrated into laminated pop-up structures by chemically modifying the polymer surfaces to achieve an irreversible chemical bond between the soft and the hard layers [combining oxygen plasma and the amino-silane coupling agent (3-aminopropyl)triethoxysilane (APTES)], as shown in Fig. 19.10I–M. Soft layers are transferred onto a flexible support and aligned with the laser coordinate system by means of fiducial markers embossed on the silicon wafer mold (see Fig. 19.10C and D). During laser machining, holes are created in the soft layer and the flexible support to allow realignment and bonding/integration with the rest of the laminate using precision dowel pins, as shown in Fig. 19.10F–L. After the layers are laminated, a final laser machining step releases the soft-foldable mechanisms from the surrounding substrate, as shown in Fig. 19.10N–Q. One of the primary benefits of these novel actuators is highlighted in Fig. 19.11, where the trajectories of the fully soft bending actuators are compared with their hybrid soft-foldable counterparts. The observed trajectory of the soft-foldable actuators is a regular circular arc, centered at the red cross in the center of the hybrid actuators, whereas the fully soft bending actuator tends to roll around itself (green trajectory) due to the compliance of the system, deviating from the regular circular arc (blue trajectory). Indeed, the rigid



FIGURE 19.10 Overview of the hybrid soft-foldable manufacturing method. Soft fluidic microactuators are manufactured using (A) soft lithography, (B) soft layers are cured, (C) peeled off from a silicon wafer mold, (D) laser machined, (E) O2 plasma-treated, and (F) realigned (G and H) to be bonded. Hard layers are (I) laser-machined, (J) chemically modified with O2 plasma and (K) APTES, (L) realigned, and (M) bonded to the soft layers. (N) The resulting hard/soft layer is (O) laminated and (P) a final laser machining step releases (Q) the mechanisms from the surrounding substrate. APTES, (3-Aminopropyl)triethoxysilane.

FIGURE 19.11 Trajectories of the fully soft bending actuator (A) and hybrid soft-foldable actuators (B and C) captured with a camera during actuation via pressurization with water. The desired trajectory is shown in blue, whereas the actual trajectory is shown in green.



FIGURE 19.12 Fabrication process of the soft-foldable actuators. Scale bars in all images are 10 mm. (A) Soft fluidic microactuators are chemically bonded on rigid sublaminates (layers are aligned using precision dowel pins), with detail of the integrated actuators. (B) Laminate with detail of integrated fluidic lines. (C) Detail of a fluidic line integrated inside the laminate after release cuts. (D) Laminate profile. (E) Aligned and bonded soft fluidic microactuators on a laminate, and detail of the integrated actuators. (F) Laminate profile. (G and H) Release cuts to release the mechanisms from the laminate scaffold. (I) Final prototypes of two different soft-foldable actuators.

structure naturally constrains the actuators to follow a well-defined and regular trajectory, thus defining the kinematics of the actuators. Therefore, the soft-foldable actuators show greater predictability in their trajectory during motion than their fully soft counterparts. To demonstrate the scalability of this manufacturing paradigm, soft-foldable actuators are fabricated at three different scales: 5, 2.5, and 1.25 mm (corresponding to the width lb of the actuator), indicated in Fig. 19.9B and F. These dimensions were chosen to achieve mechanisms that can be mounted either externally on operative GI endoscopes (typical outer diameter 11.1–15 mm) or passed through the endoscope working channel (2.8–4.2 mm) [65]. Photos of the fabrication workflow for these actuators are shown in Fig. 19.12. The soft-foldable technology enables embedding distributed sensors and actuators directly into the materials of the robot's body. Capacitive sensing elements are integrated into the mechanisms to achieve proprioceptive actuation, through conductive traces running along the actuator sides and conductive plates at the top and bottom of the soft balloon, as shown in Fig. 19.13. A secondary conductive trace runs along the primary trace to shield it from possible interference due to capacitive coupling. In this way, when the internal balloon inflates, a composite dielectric (water and air in parallel, in series with the elastomer) is formed between the two conductive plates. The conductive components of this system are realized in copper, which can raise some concerns in terms of biocompatibility, but this material could easily be substituted with gold (e.g., deposited by physical vapor deposition). Gold has also been proven compatible with the chemical bonding realized through the amino-silane (APTES) [66].
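The composite-dielectric picture described above can be sketched as a lumped capacitance calculation; the permittivities, areas, and thicknesses below are illustrative assumptions, not measured values from the chapter:

```python
EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def composite_capacitance(area_water, area_air, gap, t_elast,
                          eps_water=80.0, eps_air=1.0, eps_elast=3.0):
    """Capacitance between the two plates of the proprioceptive sensor.

    The water-filled and air-filled regions of the gap act as capacitors
    in parallel, and that combination is in series with the elastomer
    layer of thickness t_elast. All dimensions in meters, areas in m^2."""
    c_gap = EPS0 * (eps_water * area_water + eps_air * area_air) / gap
    c_elast = EPS0 * eps_elast * (area_water + area_air) / t_elast
    return (c_gap * c_elast) / (c_gap + c_elast)  # series combination
```

Because water has a much higher permittivity than air, the capacitance rises as the balloon inflates and water displaces air between the plates, which is the signal exploited for proprioception.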
The integration of sensing capabilities (conductive plates and traces) does not affect the actuator’s mechanical functionality. The relationship between bending angle and pressure for these actuators is shown in Fig. 19.13C. As a demonstration of the potential of this technology in medical/surgical applications, a soft-foldable multiarticulated robotic arm is fabricated and integrated on a flexible endoscope to evaluate the possibility of performing tissue countertraction, as shown in Fig. 19.14. This is a surgical task necessary for manipulating tissue and enabling resection of neoplasms in the GI tract. This robot is manufactured entirely with biocompatible materials to guarantee safety for surgical applications, and incorporates micron-scale features in a millimeter-scale tool. The addition of the arm increases the endoscope diameter by only



FIGURE 19.13 Proprioceptive actuation through capacitive sensing. (A) Scheme of the integration of conductive traces and conductive plates in the soft-foldable actuator to achieve proprioceptive actuation. (B) Model of the capacitive sensor. (C) Characterization of the proprioceptive actuators at different scales: 5, 2.5, and 1.25 mm. The dashed line is the analytic model, the solid line is the mean resulting from three experiments, and the shaded area is the standard deviation.

1.8 mm (initial diameter 13.3 mm). The arm has three DoFs powered by soft fluidic microactuators embedded in a rigid origami-inspired structure. It is composed of a four-bar linkage mechanism to move the arm with respect to the endoscope vision system (surgical triangulation), plus a yaw DoF and a pitch DoF to steer an end-effector and perform tissue manipulation. Three fluidic actuation lines are necessary for the arm actuation, and they can be integrated in an overtube running along the endoscope. Fluidic lines are commonly integrated in endoscopic platforms for cleaning the camera and insufflating the GI tract during navigation. In addition, fluidic actuation is commonly used to inflate navigation aids, for instance in double-balloon endoscopy [67]. The arm is provided with proprioceptive actuation through the integration of conductive materials in the origami structure that serve as capacitive sensors at the arm joints. In addition, optically clear materials were used to avoid occluding the endoscope camera's field of view as much as possible. The functionality of this system was tested ex vivo on a porcine stomach together with a gastroenterologist, demonstrating successful endoluminal tissue manipulation without hampering the endoscope movement and vision system during operation. The arm augments distal dexterity and provides sensor feedback without disrupting the surgical workflow, since it can be mounted externally onto conventional instrumentation, leaving the endoscope working channel free for passing additional tools (such as electrocautery tools). The endoscope used in the experiments is a commercial flexible endoscope (Olympus CF-100L) and, as shown in Fig. 19.15, the arm is capable of providing the necessary countertraction for safe en bloc tumor resection. This would enable a valid alternative to injecting fluid into the intestinal submucosa to elevate tissue, as is currently done in ESD.
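An open-loop way to drive the arm's three fluidic lines from a pressure-angle characterization like the one in Fig. 19.13C can be sketched as follows; the calibration samples, and the simplification of sharing one curve across all three joints, are hypothetical:

```python
import numpy as np

# Hypothetical calibration samples: line pressure [kPa] -> joint angle [deg].
# In practice each actuator scale would have its own measured curve.
CAL_P = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
CAL_ANGLE = np.array([0.0, 15.0, 45.0, 75.0, 90.0])

def pressure_for_angle(target_deg):
    """Invert the monotonic calibration curve by linear interpolation to
    get the line pressure that produces a desired joint angle."""
    return float(np.interp(target_deg, CAL_ANGLE, CAL_P))

def arm_setpoints(expand_deg, yaw_deg, pitch_deg):
    """Open-loop pressure setpoints for the arm's three fluidic lines
    (four-bar expansion, yaw, pitch)."""
    return tuple(pressure_for_angle(a) for a in (expand_deg, yaw_deg, pitch_deg))
```

With the embedded capacitive sensing described above, such an open-loop table could naturally be wrapped in a feedback loop that corrects the pressure until the sensed joint angle matches the target.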
The surgeons would not have to switch instruments (injection tool and electrocautery knife) through the endoscope working channel multiple times during the operation as they are currently doing. They would instead use the arm, mounted externally with respect to the endoscope, to reposition the target tissue as necessary and achieve an optimal lesion removal using the electrocautery tool inserted in the endoscope working channel. This can potentially shorten the duration of the procedure and make it easier to perform, thus shortening the learning curve for the surgeon.


FIGURE 19.14 Multiarticulated soft-foldable robotic arm. (A) Concept of the system: endoscope navigating in the GI tract and detail of the arm mounted at the tip of the endoscope. (B) Soft-foldable arm performing tissue countertraction during an ex vivo test on a porcine stomach. GI, Gastrointestinal.



FIGURE 19.15 Soft-foldable arm ex vivo test on a porcine stomach. The arm is mounted on a flexible endoscope and passed through the gastrointestinal tract of an anthropomorphic phantom. For each step, side and top views are shown. (A) Initial position. (B) Activation of the expansion and yaw DoFs. (C) The endoscope is moved to position the end-effector on a location of interest. (D) The gripper is activated (by turning on a vacuum) and the tissue is grasped. (E) Activation of the pitch DoF to counter-tract the tissue. (F) Additional tissue tensioning combining the pitch DoF with endoscope movement. DoFs, Degrees-of-freedom.

19.4 Conclusion and future work

The proposed robotic technologies pave the way toward smaller, softer, safer, smarter robots for MIS, combining hard and soft complex biocompatible materials with low-cost manufacturing technologies that enable millimeter-scale mechanisms with highly integrated sensing and actuation. These robotic systems have been tested successfully and have demonstrated potential for use in urology and gastroenterology. Regarding urology, future ex vivo and in vivo tests of the catheter-like robot with clinicians could pave the way to studying the contact forces required during laser-assisted transurethral surgery of the prostate and the necessary dexterity of the laser fiber. This will help determine the best combination of dexterity and contact force to apply in order to improve the ablation procedure. Regarding gastroenterology, future work will focus on making the proposed system suitable for an animal (in vivo) experiment. In particular, the arm will be integrated in a soft sleeve that the surgeon can pull (from outside the body) to free the mechanism in a safe and quick manner. Currently available technologies will be explored for a reliable integration of the arm and the actuation lines on the endoscope, similarly to what is done for endoscopic overtubes in double-balloon endoscopy. Since these systems are based on manufacturing technologies that can be easily scaled down, potential future applications are in smaller and hard-to-reach areas of the human body, such as the brain, lungs, and cardiovascular system. Some interesting work also involves scaling up these technologies, as shown in Fig. 19.16. In particular, a novel 2D manufacturing technique for fabricating soft pneumatic actuators out of thin biocompatible plastic films is presented in Ref. [68]. This technique can be used to manufacture robotic tools for applications such as endoscopic stabilization [68], as shown in Fig. 19.17, and deployable tissue retraction mechanisms [69], as shown in Fig. 19.18.

FIGURE 19.16 Centimeter-scale soft-foldable actuators fabricated through the integration of thin biocompatible plastic films in origami-inspired rigid structures.

FIGURE 19.18 Deployable tissue-retraction soft-foldable mechanisms for endoscopic surgery. (A) Integrated device encased in a 23.81-mm outer-diameter overtube. (B) Integrated device during deployment and overtube retraction. (C) View from the endoscope distal camera, with the end-effector in the foreground. (D) Deployed device retracting tissue; the endoscope tool contacts the tissue retracted by the integrated device, as an electrocautery tool would to ablate tissue.


FIGURE 19.17 Integration of the centimeter-scale soft-foldable actuators around a flexible endoscope as stabilization tools. (A) Seven soft linear actuators are interconnected so that they can be inflated with a single tube. (B) The actuators are integrated on a sleeve mounted on the endoscope and (C) subsequently inflated, expanding into a bracing structure. (D) Five soft-foldable structures are fabricated in a single batch with one channel for inflation, (E) positioned around the endoscope, and (F) inflated. (G) Four soft-foldable mechanisms inspired by the origami magic cube are attached on the outer part of an endoscope and (H) their inflation is demonstrated.



These systems are deployable endoscopic add-ons based on soft-foldable technology, able to perform advanced surgical tasks, such as locally counteracting forces applied at the tip of an endoscope and assisting with anchoring and tissue retraction during endoscopic procedures. The designed solution decouples the tissue-grasping function from the movement of the endoscope tip, leaving the surgeon free to use the endoscope tip solely for positioning of electrocautery or biopsy tools deployed through the endoscope working channel. Soft surgical robotics is a new field that offers opportunities for scientific collaborations involving materials scientists, chemists, engineers, and roboticists. Indeed, soft robots rely on the properties of materials in order to build sensor, actuator, and controller components, thus representing a paradigm shift with respect to their hard robot counterparts [70]. Furthermore, the range of operations of soft robots is related to their structural and functional properties that are mainly controlled by the fabrication method employed. Future possibilities here are endless, as recent advances in soft materials and additive manufacturing technologies present novel design and fabrication methods, and have laid the foundations for embedding sophisticated functions in soft robots [71,72]. These systems can potentially enable novel medical approaches, for instance untethered mobile milliscale and microscale soft robots are currently under investigation for minimally invasive medicine, such as microscale tissue manipulation and targeted delivery [73]. 
The next steps in this field will move toward bringing researchers in medical robotics and materials science closer together to tackle challenges in materials design, biocompatibility, and control, and to explore synergies in developing new multifunctional soft material composites and in investigating different sensing strategies and actuation methodologies to improve the range of capabilities of soft medical robots.

References

[1] Loeve A, Breedveld P, Dankelman J. Scopes too flexible. . . and too stiff. IEEE Pulse 2010;1(3):26–41.
[2] Yang GZ, Bellingham J, Dupont PE, Fischer P, Floridi L, Full R, et al. The grand challenges of Science Robotics. Sci Robot 2018;3(14):eaar7650.
[3] Aihara H, Kumar N, Ryou M, Abidi W, Ryan MB, Thompson CC. Facilitating endoscopic submucosal dissection: the suture-pulley method significantly improves procedure time and minimizes technical difficulty compared with conventional technique: an ex vivo study (with video). Gastrointest Endosc 2014;80(3):495–502.
[4] Yamamoto H. Endoscopic submucosal dissection—current success and future directions. Nat Rev Gastroenterol Hepatol 2012;9(9):519.
[5] Kim M, Lee HE, Oh SJ. Technical aspects of holmium laser enucleation of the prostate for benign prostatic hyperplasia. Korean J Urol 2013;54(9):570–9.
[6] Vitiello V, Lee SL, Cundy TP, Yang GZ. Emerging robotic platforms for minimally invasive surgery. IEEE Rev Biomed Eng 2013;6:111–26.
[7] Canes D, Lehman AC, Farritor SM, Oleynikov D, Desai MM. The future of NOTES instrumentation: flexible robotics and in vivo minirobots. J Endourol 2009;23(5):787–92.
[8] Piccigallo M, Scarfogliero U, Quaglia C, Petroni G, Valdastri P, Menciassi A, et al. Design of a novel bimanual robotic system for single-port laparoscopy. IEEE/ASME Trans Mechatronics 2010;15(6):871–8.
[9] Dupont PE, Lock J, Itkowitz B, Butler E. Design and control of concentric-tube robots. IEEE Trans Robot 2010;26(2):209–25.
[10] Webster III RJ, Jones BA. Design and kinematic modeling of constant curvature continuum robots: a review. Int J Robot Res 2010;29(13):1661–83.
[11] Berthet-Rayne P, Gras G, Leibrandt K, Wisanuvej P, Schmitz A, Seneci CA, et al. The i2Snake robotic platform for endoscopic surgery. Ann Biomed Eng 2018;46:1663–75.
[12] Vasilyev NV, Gosline AH, Veeramani A, Wu MT, Schmitz GP, Chen RT, et al. Tissue removal inside the beating heart using a robotically delivered metal MEMS tool. Int J Robot Res 2015;34(2):236–47.
[13] Goldman RE, Bajo A, MacLachlan LS, Pickens R, Herrell SD, Simaan N. Design and performance evaluation of a minimally invasive telerobotic platform for transurethral surveillance and intervention. IEEE Trans Biomed Eng 2013;60(4):918–25.
[14] Yoon WJ, Park S, Reinhall PG, Seibel EJ. Development of an automated steering mechanism for bladder urothelium surveillance. J Med Device 2009;3(1):011004.
[15] Aron M, Desai MM. Flexible robotics. Urol Clin North Am 2009;36(2):157–62.
[16] Chopra R, Colquhoun A, Burtnyk M, N'djin WA, Kobelevskiy I, Boyes A, et al. MR imaging controlled transurethral ultrasound therapy for conformal treatment of prostate tissue: initial feasibility in humans. Radiology 2012;265(1):303–13.
[17] Seifabadi R, Gomez EE, Aalamifar F, Fichtinger G, Iordachita I. Real-time tracking of a bevel-tip needle with varying insertion depth: toward teleoperated MRI-guided needle steering. In: 2013 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEE; 2013. p. 469–76.
[18] Shang W, Su H, Li G, Fischer GS. Teleoperation system with hybrid pneumatic-piezoelectric actuation for MRI-guided needle insertion with haptic feedback. In: 2013 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEE; 2013. p. 4092–8.
[19] Hashimoto R, Kim D, Hata N, Dohi T. A tubular organ resection manipulator for transurethral resection of the prostate. In: Proceedings. 2004 IEEE/RSJ international conference on intelligent robots and systems (IROS 2004), vol. 4. IEEE; 2004. p. 3954–9.




[20] Pantuck AJ, Baniel J, Kirkali Z, Klatte T, Zomorodian N, Yossepowitch O, et al. A novel resectoscope for transurethral resection of bladder tumors and the prostate. J Urol 2007;178(6):2331–6.
[21] Ho G, Ng WS, Teo MY, Kwoh CK, Cheng WS. Computer-assisted transurethral laser resection of the prostate (CALRP): theoretical and experimental motion plan. IEEE Trans Biomed Eng 2001;48(10):1125–33.
[22] Hendrick RJ, Mitchell CR, Herrell SD, Webster III RJ. Hand-held transendoscopic robotic manipulators: a transurethral laser prostate surgery case study. Int J Robot Res 2015;34(13):1559–72.
[23] Pierre SA, Albala DM. The future of lasers in urology. World J Urol 2007;25(3):275–83.
[24] Chughtai B, Forde JC, Thomas DD, Laor L, Hossack T, Woo HH, et al. Benign prostatic hyperplasia. Nat Rev Dis Primers 2016;2:16031.
[25] Zhang X, Shen P, He Q, Yin X, Chen Z, Gui H, et al. Different lasers in the treatment of benign prostatic hyperplasia: a network meta-analysis. Sci Rep 2016;6:23503.
[26] Gravas S, Bachmann A, Reich O, Roehrborn CG, Gilling PJ, De La Rosette J. Critical review of lasers in benign prostatic hyperplasia (BPH). BJU Int 2011;107(7):1030–43.
[27] Mylonas GP, Vitiello V, Cundy TP, Darzi A, Yang GZ. CYCLOPS: a versatile robotic tool for bimanual single-access and natural-orifice endoscopic surgery. In: 2014 IEEE international conference on robotics and automation (ICRA). IEEE; 2014. p. 2436–42.
[28] Gafford J, Ranzani T, Russo S, Aihara H, Thompson C, Wood R, et al. Snap-on robotic wrist module for enhanced dexterity in endoscopic surgery. In: 2016 IEEE international conference on robotics and automation (ICRA). IEEE; 2016. p. 4398–405.
[29] Autorino R, Kaouk JH, Stolzenburg JU, Gill IS, Mottrie A, Tewari A, et al. Current status and future directions of robotic single-site surgery: a systematic review. Eur Urol 2013;63(2):266–80.
[30] Arkenbout EA, Henselmans PW, Jelínek F, Breedveld P. A state of the art review and categorization of multi-branched instruments for NOTES and SILS. Surg Endosc 2015;29(6):1281–96.
[31] Gafford J, Ranzani T, Russo S, Degirmenci A, Kesner S, Howe R, et al. Toward medical devices with integrated mechanisms, sensors, and actuators via printed-circuit MEMS. J Med Device 2017;11(1):011007.
[32] Yeung BP, Gourlay T. A technical review of flexible endoscopic multitasking platforms. Int J Surg 2012;10(7):345–54.
[33] Kang HW, Kim J, Peng YS. In vitro investigation of wavelength-dependent tissue ablation: laser prostatectomy between 532 nm and 2.01 µm. Lasers Surg Med 2010;42(3):237–44.
[34] Russo S, Dario P, Menciassi A. A novel robotic platform for laser-assisted transurethral surgery of the prostate. IEEE Trans Biomed Eng 2015;62(2):489–500.
[35] Bonjer HJ, Deijen CL, Abis GA, Cuesta MA, van der Pas MH, de Lange-de Klerk ES, et al. A randomized trial of laparoscopic versus open surgery for rectal cancer. N Engl J Med 2015;372(14):1324–32.
[36] Siegel RL, Miller KD, Jemal A. Cancer statistics, 2016. CA Cancer J Clin 2016;66(1):7–30.
[37] Liang J, Fazio V, Lavery I, Remzi F, Hull T, Strong S, et al. Primacy of surgery for colorectal cancer. Br J Surg 2015;102(7):847–52.
[38] Wang J, Zhang XH, Ge J, Yang CM, Liu JY, Zhao SL. Endoscopic submucosal dissection vs endoscopic mucosal resection for colorectal tumors: a meta-analysis. World J Gastroenterol 2014;20(25):8282.
[39] Maple JT, Dayyeh BK, Chauhan SS, Hwang JH, Komanduri S, Manfredi M, et al. Endoscopic submucosal dissection. Gastrointest Endosc 2015;81(6):1311–25.
[40] Whitney JP, Sreetharan PS, Ma KY, Wood RJ. Pop-up book MEMS. J Micromech Microeng 2011;21(11):115021.
[41] Sreetharan PS, Whitney JP, Strauss MD, Wood RJ. Monolithic fabrication of millimeter-scale machines. J Micromech Microeng 2012;22(5):055027.
[42] Gafford JB, Wood RJ, Walsh CJ. Self-assembling, low-cost, and modular mm-scale force sensor. IEEE Sens J 2016;16(1):69–76.
[43] Russo S, Ranzani T, Walsh CJ, Wood RJ. An additive millimeter-scale fabrication method for soft biocompatible actuators and sensors. Adv Mater Technol 2017;2(10):1700135.
[44] Xia Y, Whitesides GM. Soft lithography. Angew Chem Int Ed 1998;37(5):550–75.
[45] Rus D, Tolley MT. Design, fabrication and control of soft robots. Nature 2015;521(7553):467.
[46] Wang L, Iida F. Deformation in soft-matter robotics: a categorization and quantitative characterization. IEEE Robot Autom Mag 2015;22(3):125–39.
[47] Cianchetti M, Calisti M, Margheri L, Kuba M, Laschi C. Bioinspired locomotion and grasping in water: the soft eight-arm OCTOPUS robot. Bioinspir Biomim 2015;10(3):035003.
[48] Suzumori K, Endo S, Kanda T, Kato N, Suzuki H. A bending pneumatic rubber actuator realizing soft-bodied manta swimming robot. In: 2007 IEEE international conference on robotics and automation. IEEE; 2007. p. 4975–80.
[49] Marchese AD, Onal CD, Rus D. Autonomous soft robotic fish capable of escape maneuvers using fluidic elastomer actuators. Soft Robot 2014;1(1):75–87.
[50] Mazzolai B, Mattoli V, Beccai L. Soft plant robotic solutions: biological inspiration and technological challenges. Advances in unconventional computing. Cham: Springer; 2017. p. 687–707.
[51] Polygerinos P, Wang Z, Galloway KC, Wood RJ, Walsh CJ. Soft robotic glove for combined assistance and at-home rehabilitation. Rob Auton Syst 2015;73:135–43.
[52] Walsh C. Human-in-the-loop development of soft wearable robots. Nat Rev Mater 2018;3(6):78.
[53] Roche ET, Horvath MA, Wamala I, Alazmani A, Song SE, Whyte W, et al. Soft robotic sleeve supports heart function. Sci Transl Med 2017;9(373):eaaf3925.

340

Handbook of Robotic and Image-Guided Surgery

[54] Cianchetti M, Menciassi A. Soft robots in surgery. In: Laschi C, Rossiter J, Iida F, Cianchetti M, Margheri L, editors. Soft robotics: trends, applications and challenges. Biosystems & Biorobotics, vol. 17. Springer: Cham; 2017. p. 75 85. [55] Cianchetti M, Laschi C, Menciassi A, Dario P. Biomedical applications of soft robotics. Nat Rev Mater 2018;3:143 53. [56] Manti M, Cacucciolo V, Cianchetti M. Stiffening in soft robotics: a review of the state of the art. IEEE Robot Autom Mag 2016;23(3):93 106. [57] Ranzani T, Gerboni G, Cianchetti M, Menciassi A. A bioinspired soft manipulator for minimally invasive surgery. Bioinspir Biomim 2015;10 (3):035008. [58] Cianchetti M, Ranzani T, Gerboni G, Nanayakkara T, Althoefer K, Dasgupta P, et al. Soft robotics technologies to address shortcomings in today’s minimally invasive surgery: the STIFF-FLOP approach. Soft Robot 2014;1(2):122 31. [59] Bergeles C, Yang GZ. From passive tool holders to microsurgeons: safer, smaller, smarter surgical robots. IEEE Trans Biomed Eng 2014;61 (5):1565 76. [60] Suzumori K, Iikura S, Tanaka H. Flexible microactuator for miniature robots. In: Micro electro mechanical systems, 1991, MEMS’91, proceedings. An investigation of micro structures, sensors, actuators, machines and robots. IEEE; Jan 30, 1991. p. 204 9. [61] Paek J, Cho I, Kim J. Microrobotic tentacles with spiral bending capability based on shape-engineered elastomeric microtubes. Sci Rep 2015;5:10768. [62] Wakimoto S, Ogura K, Suzumori K, Nishioka Y. Miniature soft hand with curling rubber pneumatic actuators. In: ICRA’09. 2009 IEEE international conference on robotics and automation 2009. IEEE; May 12, 2009. p. 556 61. [63] Watanabe Y, Maeda M, Yaji N, Nakamura R, Konishi S. Small, soft, and safe microactuator for retinal pigment epithelium transplantation. In: Micro electro mechanical systems, 2007. MEMS. IEEE 20th international conference on. IEEE; Jan 21, 2007. p. 659 62. [64] Okayasu H, Okamoto J, Fujie MG, Iseki H. 
Development of a hydraulically-driven flexible manipulator including passive safety method. In: Robotics and automation, 2005. ICRA 2005. Proceedings of the 2005 IEEE international conference on. IEEE; Apr 18, 2005. p. 2890 6. [65] Varadarajulu S, Phadnis MA, Christein JD, Wilcox CM. Multiple transluminal gateway technique for EUS-guided drainage of symptomatic walled-off pancreatic necrosis. Gastrointest Endosc 2011;74(1):74 80. [66] Sunkara V, Park DK, Cho YK. Versatile method for bonding hard and soft materials. RSC Adv 2012;2(24):9066 70. [67] Yamamoto H, Ell C, Kita H, May A. Double-balloon endoscopy. Video capsule endoscopy. Berlin, Heidelberg: Springer; 2014. p. 113 8. [68] Ranzani T, Russo S, Schwab F, Walsh CJ, Wood RJ. Deployable stabilization mechanisms for endoscopic procedures. In: 2017 IEEE international conference on robotics and automation (ICRA). IEEE; May 29, 2017. p. 1125 31. [69] Becker S, Ranzani T, Russo S, Wood RJ. Pop-up tissue retraction mechanism for endoscopic surgery. In: 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEE; Sep 24, 2017. p. 920 7. [70] Whitesides GM. Soft robotics. Angew Chem Int Ed 2018;57:4258 73. [71] Wallin TJ, Pikul J, Shepherd RF. 3D printing of soft robotic systems. Nat Rev Mater 2018;3:84 100. [72] Ranzani T, Russo S, Bartlett NW, Wehner M, Wood RJ. Increasing the dimensionality of soft micro structures through injection-induced selffolding. Adv Mater 2018;30:1802739. [73] Sitti M. Miniature soft robots—road to the clinic. Nat Rev Mater 2018;3:74 5.

20 Robotic-Assisted Percutaneous Coronary Intervention

Per Bergman, Steven J. Blacker, Nicholas Kottenstette, Omid Saber and Saeed Sokhanvar
Corindus Vascular Robotics, Waltham, MA, United States

ABSTRACT
The CorPath System is a robotic system for use in percutaneous coronary intervention (PCI). In a PCI procedure, balloon catheters are inflated at the coronary lesion to enlarge the narrowed vessel and restore blood flow. To visualize the devices inside the arteries, a fluoroscopy system is used, which exposes physicians to X-ray radiation during the procedure. The CorPath System allows the physician to navigate the devices deployed in PCI using robotic controls. It therefore significantly reduces physician exposure to radiation, as well as the musculoskeletal strain related to wearing heavy leaded aprons, because the physician can operate while seated at a radiation-shielded workstation. In addition, the robotic system allows higher precision than performing the procedure manually. In this chapter, the architecture of the CorPath GRX System, the different functions of the robot, and the workflow of robotic-assisted PCI are presented. Moreover, the kinematics of the arm used to position the robotic drive system are studied. Using the kinematics and a workspace analysis, the arm is designed so that the robotic drive can reach all the desired poses relative to the patient. Finally, a control model of the actuators used to manipulate the devices is developed and presented.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00020-7 © 2020 Elsevier Inc. All rights reserved.


Abbreviations
BLDC brushless linear DC
BSC balloon/stent catheter
DQ direct quadrature
FFR fractional flow reserve
PCI percutaneous coronary intervention
GC guide catheter
GCR guide catheter rotation
GW guidewire
GWL guidewire linear
GWR guidewire rotation
PMSMs permanent magnet synchronous motors
PVM power vision monitor
SVPWM space vector pulse width modulation

20.1 Introduction

20.1.1 Percutaneous coronary intervention

Percutaneous coronary intervention (PCI) transformed the treatment of severe coronary artery disease and acute myocardial infarction (MI). Prior to the advent of PCI, patients with multivessel coronary artery disease had to undergo open-heart surgery. The first type of PCI, coronary angioplasty, was performed in 1977. PCI is less invasive than open-heart surgery, with fewer complications and faster recovery. Over 3 million PCIs are performed worldwide each year [1].

As with any organ, the heart needs oxygen and nutrients to function. Oxygenated blood is delivered to heart tissue through the coronary arteries. In patients with coronary artery disease, the accumulation of plaque causes coronary arteries to narrow (become "stenosed"). Stenosed arteries restrict the flow of oxygenated blood to heart tissue (see Fig. 20.1). If untreated, patients are subject to MIs and heart muscle/tissue damage, which can lead to other events, including the onset of heart failure and death.

PCI is a minimally invasive procedure that restores blood flow to heart tissue by opening stenosed coronary arteries. In brief, a guide catheter (GC) is inserted into the femoral artery at the groin or the radial artery at the wrist and advanced to the opening of the affected coronary artery. Next, a guidewire (GW) is inserted into the GC and navigated past the obstructed area ("lesion") in the coronary arteries (see Fig. 20.1). A balloon catheter is then introduced into the body and maneuvered over the GW to the lesion. The balloon is inflated to push the plaque against the artery wall and expand the arterial lumen (see Fig. 20.1). The balloon is subsequently deflated and removed. A stent catheter is then deployed to implant a stent along the lesion length to ensure that the coronary vessel stays open [2].

PCI procedures are performed in a cardiac catheterization lab, also known as a "cath lab." Cath labs are equipped with a fluoroscopy system, which uses low-dose ionizing radiation to provide real-time X-ray images of the patient's anatomy as well as the location and movement of medical devices (see Fig. 20.2). Over the last four decades, many tools and techniques have been developed to enable physicians to perform more complex PCI cases in patients with challenging anatomy. However, several challenges with manual PCI persist, including health hazards for medical professionals and a lack of precision in positioning intracoronary devices.

FIGURE 20.1 Schematic diagram of a lesion in a coronary artery, and the use of the guide catheter, balloon catheter, and guidewire in treating the diseased coronary artery. The balloon is inflated to displace the lesion outward. A stent may be deployed at the lesion to provide support and maintain blood flow in the coronary vessel.

FIGURE 20.2 General setting and equipment used in PCI; the detailed view shows coronary vessels, lesions, guide catheter, guidewire, and balloon when contrast media are injected. PCI, Percutaneous coronary intervention.

20.1.1.1 Health hazards

Interventional physicians who perform manual PCI face chronic exposure to fluoroscopy's low-dose ionizing radiation. Scientific evidence has demonstrated the negative health effects of this chronic exposure, including left-sided brain tumors, eye lens opacities (precursors to cataracts), melanoma, breast cancer, and premature vascular and neurological aging [3-6]. To reduce their exposure to ionizing radiation, cath lab personnel wear personal protective equipment, including leaded aprons, thyroid collars, and eye shields. Although protective equipment was designed to shield interventional physicians from scatter radiation, it has introduced a new occupational hazard: orthopedic injury. Numerous studies and surveys have shown that healthcare professionals involved in fluoroscopy-guided interventional procedures have a significantly higher prevalence of orthopedic problems than medical personnel who are not involved in these procedures [3,6]. Injury is correlated with age, years in the cath lab, and the annual number of cases performed, and may necessitate early retirement for some interventionalists [6].

20.1.1.2 Precision

Manual PCI involves a certain degree of imprecision in the placement of balloon and stent catheters, as physicians "eyeball" the fluoroscopic image, often while leaning over the patient's body and straining to see the image on the screen. Physicians typically estimate the length of the lesion from the two-dimensional angiographic image to determine which stent size to use in the procedure. However, it has been shown that visual estimates in manual PCI may result in inaccurate selection of the stent size [7,8]. Mis-sizing of the stent can lead to longitudinal geographic miss, which is associated with increased rates of repeat revascularization or adverse events, such as MIs [9]. Inaccurate placement of the stent can also necessitate a repeat PCI or cause an adverse event.

20.1.2 Robotic-assisted percutaneous coronary intervention

Robotic-assisted PCI involves remote navigation of devices inside the patient's body. With the CorPath platform, the physician is distanced from the source of the X-ray emission and can manipulate catheters using controls at a radiation-shielded cockpit [10]. Being seated at a shielded cockpit negates the need for wearing heavy leaded aprons and may reduce musculoskeletal strain. Clinical studies have shown that physicians performing robotic-assisted PCI with CorPath are exposed to significantly less scatter radiation than clinicians at the bedside [4,11]. Some studies on robotic-assisted PCI have shown a trend toward reduced fluoroscopy time and less use of contrast media [11,12]. This may relate to the enhanced visualization offered by being seated near high-definition monitors in the interventional cockpit.


FIGURE 20.3 CorPath GRX System in a cardiac cath lab.

CorPath also fosters precision in manipulation, measurement, and positioning. Using joysticks at the control console, the physician can move devices with submillimeter resolution. In addition, CorPath enables precise measurement of the anatomy to determine lesion length so that the appropriate stent length can be selected. These features have already demonstrated a significantly lower incidence of longitudinal geographic miss compared with manual PCI [13]. Clinical studies have shown that robotic-assisted PCI is a safe, reproducible procedure with a short learning curve [11,14,15]. In addition, robotic-assisted PCI can be used with newer techniques, such as radial access, and in complex cases, such as unprotected left main disease [16-18]. Unlike other medical robotic systems, the CorPath System is device neutral. Its open architecture interacts with commercially available 0.014 in. GWs, rapid-exchange balloon and stent catheters, and GCs. In other words, no system-specific catheters are required to perform robotic-assisted PCI with CorPath.

20.2 System description

20.2.1 CorPath GRX overview

The CorPath GRX System is the second generation of CorPath. CorPath GRX (referred to as CorPath hereafter) has two major subsystems: (1) a bedside unit and (2) an interventional cockpit (Fig. 20.3). The bedside unit consists of an articulated arm, a robotic drive, and a single-use cassette. The articulated arm (also known as the "extended reach arm") is mounted on the bed rail of the cath lab table and is used to position the robotic drive and the cassette. The robotic drive receives inputs from the control console (located in the interventional cockpit) via the control network. These inputs actuate mechanisms in a single-use cassette attached to the robotic drive to manipulate a GW, balloon/stent catheter (BSC), and GC in the patient's body. The cassette provides a sterile, disposable interface between the robotic system and commercially available interventional devices (i.e., GCs, GWs, and balloon or stent catheters). The interventional cockpit houses the control console and the power vision monitor (PVM), which displays angiographic and hemodynamic data (Fig. 20.3). A foot pedal in the cockpit allows the physician to activate the fluoroscopy system. The cockpit is designed to protect the physician from radiation exposure while performing the interventional procedure.

20.2.2 Articulated arm

As shown in Fig. 20.4, the articulated arm (referred to as the arm hereafter) is a serial manipulator consisting of the following major components:

- three moving arm sections (arm 12, arm 34, and arm 56);
- six revolute joints (pivots P1 to P6); and
- a stationary section (mast, bed-rail clamp, and bed support, which supports the arm on the bed rail).


FIGURE 20.4 Top and side views of the articulated arm and the robotic drive. The articulated arm has three major arm sections and six revolute joints (notation: arm ij is located between Pi and Pj). The mast connects the arm to the bed rail via the bed-rail clamp.

The arm is manually positioned by the cath lab technician to set the spatial position and orientation of the robotic drive system, specifically of the tip of the cassette from which the devices are driven in and out (Fig. 20.4). The cassette is treated as the end-effector of the arm. Locking mechanisms in the joints ensure that the arm remains stationary once the robotic drive is positioned, or in the event of a power outage. After initial positioning, the robotic drive can be moved along the ZRD axis, if needed, using a prismatic actuator (Fig. 20.4).

In the ZX plane, the angular position of P1, namely ϕ1, is defined relative to the rail (Fig. 20.4); ϕ2, the angular position of P2, is defined relative to the direction of arm 12; and ϕ5, the angular position of P5, is defined relative to the direction of arm 34. The vertical position of the arm is determined by ϕ3, which is defined as the angle of arm 34 relative to the ZX plane, as illustrated in Fig. 20.4. Parallelism is used in arm 34 so that P5 stays parallel to the Y axis (Fig. 20.5). In doing so, the angle between the table surface and the devices exiting the cassette becomes independent of ϕ3. By decoupling the end-effector's position from its orientation relative to the table surface, positioning the arm is made easier. The pitch angle of the robotic drive, ϕp, is defined relative to the direction of arm 56. The robotic drive can be positioned at two discrete pitch angles: ϕp = 0 degrees for stowage/setup and ϕp = 35 degrees for operation.
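The decoupling described above can be illustrated with a small planar kinematics sketch. The link lengths below are illustrative placeholders, not actual CorPath GRX dimensions; the point is that the parallelogram in arm 34 makes the height depend only on ϕ3 while leaving the end-effector orientation unchanged.

```python
import math

# Hypothetical link lengths in millimeters (not actual CorPath GRX values).
L12, L34 = 400.0, 350.0

def cassette_tip_position(phi1, phi2, phi3):
    """Planar sketch of the arm kinematics (angles in radians).

    phi1 and phi2 place the tip in the ZX plane; because arm 3-4 is
    elevated by phi3, only its horizontal projection contributes
    in-plane, while the height is set by phi3 alone.  The parallelogram
    keeps the end-effector orientation independent of phi3.
    """
    r34 = L34 * math.cos(phi3)  # horizontal projection of arm 3-4
    x = L12 * math.cos(phi1) + r34 * math.cos(phi1 + phi2)
    z = L12 * math.sin(phi1) + r34 * math.sin(phi1 + phi2)
    y = L34 * math.sin(phi3)    # vertical position from phi3 only
    return x, y, z
```

Raising ϕ3 changes only the height (and shortens the in-plane reach), which is why the technician can adjust the vertical position without disturbing the cassette's angle to the table.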

20.2.3 Robotic drive and cassette

The cassette is a single-use component that provides a sterile barrier between the robotic system and the devices in direct contact with it. The cassette is attached to and actuated by the robotic drive through its motorized capstans. As shown in Fig. 20.6, the five modules of the cassette are engaged with the five capstans of the robotic drive. The modules of the cassette and the devices are shown in Fig. 20.7 and defined in Table 20.1.

In total, the robotic drive has seven motors: five actuate the capstans, one moves the mechanism that pinches/unpinches the GW rotary module, and one moves the robotic drive linearly relative to the arm. By moving the robotic drive relative to the arm, the GC can be advanced or retracted; the range for adjusting the position of the GC is ±100 mm. The GC rotation (GCR) module rotates the GC through a proprietary geared adapter that is connected between the proximal end of the GC and the Y-connector (see Fig. 20.7). To load the GC into the cassette, the robotic drive is positioned using the arm so that the cassette's GCR module aligns with the proximal end of the GC. The GW and BSC are inserted into the GC through the Y-connector, and their proximal sections are loaded into the cassette (see Figs. 20.3 and 20.7). The detailed procedure is described in Section 20.3.
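The ±100 mm GC travel can be modeled as a simple clamp on commanded moves; the sketch below also flags when less than 10 mm of stroke remains, mirroring the yellow-zone indicator on the control console. The function name and structure are illustrative, not CorPath software.

```python
# GC linear travel range from the text: +/-100 mm, with a 10 mm
# warning band mirroring the console's yellow-zone indicator.
GC_TRAVEL_LIMIT_MM = 100.0
WARNING_BAND_MM = 10.0

def move_gc(position_mm, delta_mm):
    """Clamp a commanded GC move to the drive's linear travel and
    report whether the remaining stroke is inside the warning band."""
    new_pos = max(-GC_TRAVEL_LIMIT_MM,
                  min(GC_TRAVEL_LIMIT_MM, position_mm + delta_mm))
    remaining = GC_TRAVEL_LIMIT_MM - abs(new_pos)
    return new_pos, remaining < WARNING_BAND_MM
```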


FIGURE 20.5 Arm 34 consists of a four-bar linkage mechanism in which facing linkages have the same length to ensure parallelism. Since the P3P′3 linkage is constrained to stay parallel to the Y axis by being attached to P2 (see Fig. 20.4), the P4P′4 linkage and the P5 axis also remain parallel to the Y axis as the linear actuator (shown in green) actuates ϕ3.


FIGURE 20.6 Robotic drive and cassette; motors are embedded in the robotic drive to actuate five capstans. When the cassette is attached to the robotic drive, each capstan can be actuated to drive the corresponding module. The modules of the cassette are: (1) GCR, (2) BSC, (3) linear pinch, (4) GWL, and (5) GWR. BSC, Balloon/stent catheter; GCR, guide catheter rotation; GWL, guidewire linear; GWR, guidewire rotation.

FIGURE 20.7 Cassette, robotic drive, Y-connector, and devices. The modules of the cassette are shown with the covers of the cassette open. A proprietary geared adapter is connected between the proximal end of the GC and the Y-connector prior to loading into the cassette. The BSC mounts on the GW, and both the GW and BSC are inserted into the GC through the Y-connector. BSC, Balloon/stent catheter; GC, guide catheter; GW, guidewire.

TABLE 20.1 Main modules of the cassette.

Module        Function
GCR           Rotates the guide catheter
BSC           Advances/retracts the BSC
Linear pinch  Pinches/unpinches the guidewire and BSC
GWL           Advances/retracts the guidewire
GWR           Rotates the guidewire to steer its tip

BSC, Balloon/stent catheter; GCR, guide catheter rotation; GWL, guidewire linear; GWR, guidewire rotation.

The GW linear (GWL) and BSC modules have similar structures. A schematic diagram of the GWL module is shown in Fig. 20.8. The linear pinch module pinches/unpinches the tires in the GWL and BSC modules; unpinching the tires permits the loading of devices into the modules. In the GWL module (Fig. 20.8), tires 1 and 2 drive (advance/retract) the GW. As the GW moves, it drives tires 3 and 4. Encoders are connected to tires 1 and 3 (i.e., the driver and driven tires). By comparing the rotation angles of the encoders, any slippage between the GW and the driver tires can be detected; in the case of slippage, a notification is shown on the control console display. Tires 2 and 4 are placed on a spring-loaded plate and are pushed against the other two tires to pinch the GW (Fig. 20.8). The springs maintain a constant force (pinch force) between the facing tires and the GW while the latter is being moved.

The GW rotation (GWR) module is placed in-line with the GWL module (Fig. 20.7) to allow for rotary and linear motion. The GWR module has a structure similar to the GWL module; however, the entire module rotates about the XD axis (Fig. 20.8). A pinion and bevel gear in the GWR module are used to rotate the GWR module about its axis when the corresponding capstan is actuated.

FIGURE 20.8 Guidewire-linear module used to advance/retract the guidewire.

20.2.4 Control console

The control console is the user interface that enables the interventionalist to remotely manipulate the GC, GW, and BSCs. The interventionalist's commands are sent from the control console to the robotic drive via a communication cable. The robotic drive, in turn, drives the cassette and tracks the motion of the interventional devices inserted in the cassette. The control console user interface comprises a touchscreen monitor and three joysticks. The touchscreen illuminates and enables the joystick and touchscreen functions, and each device can be enabled or disabled separately. In addition, all devices can be disabled simultaneously by selecting "Disable All," which dims the screen and deactivates the joystick and touchscreen functions.

The CorPath System tracks the proximal motion of all three interventional devices and displays their movement with 0.1 mm resolution on the touchscreen. The length of a lesion can be measured, saved, and reset to record a different measurement. During advancement of the catheters or GW, the system provides continuous, distinct audible sounds to indicate the speed at which the device is traveling; a change in the sound frequency corresponds to a change in device speed. If the rotational and linear functions are operated simultaneously, the audible beep for the linear function overrides the audible sound for the rotational function. Along with this audio feedback, the system also provides visual feedback of the direction and speed of GW and BSC movement through the speed indicators display.

The feedback for the remote manipulation of the GC differs from that for the other devices, as linear GC motion is limited and excessive rotation of the GC is not desired. The touchscreen displays the linear travel distance available to advance and retract the GC. A green horizontal bar indicates the current position relative to the travel limits; if the green bar is inside the yellow zones at either end, there is less than 10 mm of stroke remaining to advance or retract the GC. The rotation indicator displays degrees of rotation for the GC. Touching the "Reset" button sets the indicator to "0." An arrow indicates the rotation direction, and the color brightens with increasing revolutions in one direction.

The CorPath System can reliably manipulate the BSC and GW in two ways that are not possible manually. The first is the ability to advance or retract the device in 1 mm increments. This gives the physician more control and enables more precise positioning of the balloon or stent catheter than could be achieved manually. The second is the ability to rotate the GW in 30-degree increments. This allows the interventionalist to steer the tip of the GW and direct it to the proper branch of a coronary artery for intervention. Both of these manipulations are activated through the push of a button on the touchscreen.
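The console's incremental commands and position tracking can be sketched as follows. The 1 mm linear step, 30-degree rotation step, and 0.1 mm display resolution come from the text; the class itself is a hypothetical illustration, not CorPath software.

```python
class DeviceTracker:
    """Sketch of the console's incremental-motion commands and
    position display for a GW or BSC."""
    LINEAR_STEP_MM = 1.0        # per-press advance/retract increment
    ROTATION_STEP_DEG = 30.0    # per-press GW rotation increment
    DISPLAY_RESOLUTION_MM = 0.1 # touchscreen display resolution

    def __init__(self):
        self.position_mm = 0.0   # tracked proximal linear travel
        self.rotation_deg = 0.0  # tracked guidewire rotation

    def step_advance(self, direction=+1):
        """Advance (+1) or retract (-1) the device by one 1 mm step."""
        self.position_mm += direction * self.LINEAR_STEP_MM

    def step_rotate(self, direction=+1):
        """Rotate the GW by one 30-degree step to steer its tip."""
        self.rotation_deg += direction * self.ROTATION_STEP_DEG

    def displayed_position(self):
        """Round to the 0.1 mm resolution shown on the touchscreen."""
        return round(self.position_mm / self.DISPLAY_RESOLUTION_MM) \
            * self.DISPLAY_RESOLUTION_MM
```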


20.2.5 Cockpit and power vision monitor

The cockpit allows the interventionalist to remain seated during the procedure, distanced and shielded from the fluoroscopy radiation source. The seated position allows for close proximity to the PVM, which provides enhanced visualization and clear images to aid clinical decision-making. The PVM connects up to eight HD image sources from the cath lab into the cockpit. The monitor manager displays the video inputs in preset or user-configured layouts on the 40 in. UHD monitor. The key image sources are live fluoroscopy, reference-image fluoroscopy, and patient hemodynamics. Image sources from adjunct technologies used in the procedure, such as fractional flow reserve (FFR) and intravascular ultrasound, can be accessed by the interventionalist without leaving the cockpit. The preconfigured layouts allow the interventionalist to quickly select the appropriate layout for the procedural workflow, such as an enlarged fluoroscopy image during GW navigation or the display of FFR waveforms during physiological lesion assessment.

20.3 Operation and workflow

The system is operated by two users: (1) the cath lab technician at the bedside who interacts with the interventional devices, robotic drive, and cassette and (2) the interventionalist who performs the procedure. The CorPath System has two main user interfaces to support the workflow between the interventionalist and the technician at the patient table. The interventionalist’s user interface is the control console located in the interventional cockpit (Fig. 20.3). The user interface for the technician is a touchscreen that is integral to the robotic drive.

20.3.1 Preparation for robotic-assisted percutaneous coronary intervention

A GC is manually inserted into the patient’s anatomy to reach the coronary artery. To allow devices to be inserted into the GC and minimize blood loss, a hemostatic Y-connector is connected to the hub of the GC. To rotate the GC during robotic-assisted PCI, a proprietary geared adapter is inserted between the GC hub and the hemostatic Y-connector. The draped robotic drive with the single-use cassette is then positioned to allow the GC hub, geared adapter, and hemostatic Y-connector to be loaded into the single-use cassette. The support track on the cassette (shown in Fig. 20.7) is then manually pulled out to enclose the exposed length of the GC between the cassette and the patient (Fig. 20.3). The support track prevents the exposed GC from buckling or kinking while it is advanced. At this point, the interventionalist leaves the patient table, removes the leaded protective gear, and performs robotic PCI at the interventional cockpit.

20.3.2 Robotic procedure

To begin the robotic procedure, the interventionalist seated in the cockpit enables the control console user interface by tapping the "Enable" button on the control console's touchscreen. The interventionalist confirms under fluoroscopy that the manually placed GC is properly engaged at the coronary artery ostium; if not, the interventionalist adjusts the GC's position using the GC joystick. The interventionalist then instructs the technician at the patient table to load a GW into the GC. The technician presses the button on the bedside touchscreen to load the GW. In response, the CorPath System unpinches the cassette tires to allow a GW to be inserted between them. The technician then opens the cassette, inserts a GW into the GC through the hemostatic Y-connector, loads the GW into the cassette, and closes the cassette covers. The control console touchscreen displays the system status and indicates that the GW has been loaded and that the interventionalist can proceed robotically.

With the GW in the GC, the interventionalist advances it with the GW joystick. While the GW is fully inside the GC, the "Turbo" button (Fig. 20.9) can be activated to multiply the speed of the joystick; in doing so, the GW can quickly be advanced to reach the tip of the GC. The "Turbo" button is then deactivated to safely navigate the GW inside the coronary arteries with high precision, until the GW crosses the lesion being treated. The technician at the patient table then threads a therapeutic catheter (balloon or stent catheter) onto the end of the GW, inserts the therapeutic catheter into the GC, and loads the proximal shaft of the therapeutic catheter into the cassette. The workflow then returns to the interventionalist, who turbo-advances the therapeutic catheter to the tip of the GC using the BSC joystick (Fig. 20.9) and then continues to advance it, without turbo, until the therapeutic catheter is at the lesion.
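A minimal sketch of the turbo behavior, assuming a joystick deflection in [-1, 1] and placeholder speed values (the actual CorPath speeds and multiplier are not given in the text): turbo is honored only while the device is still inside the GC, mirroring the workflow of turbo-advancing to the GC tip and then navigating the coronary anatomy at normal speed.

```python
# Hypothetical speed limits -- not actual CorPath values.
BASE_SPEED_MM_S = 5.0   # full joystick deflection, normal mode
TURBO_MULTIPLIER = 4.0  # speed multiplier while inside the GC

def commanded_speed(deflection, turbo_on, inside_guide_catheter):
    """Map joystick deflection in [-1, 1] to a linear device speed,
    applying the turbo multiplier only while the device is inside
    the guide catheter."""
    deflection = max(-1.0, min(1.0, deflection))
    speed = BASE_SPEED_MM_S * deflection
    if turbo_on and inside_guide_catheter:
        speed *= TURBO_MULTIPLIER
    return speed
```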
The interventionalist can adjust the position of the therapeutic catheter with respect to the lesion by using the 1 mm advance or retract buttons on the control console touchscreen. Once the therapeutic catheter is appropriately positioned at the lesion, it is manually inflated and deflated at the tableside. This workflow sequence of interactions between the interventionalist in the cockpit and the tableside technician continues for the remainder of the procedure.

FIGURE 20.9 CorPath control console.

20.3.3 Safety considerations

The control console has several features to ensure safety during the interventional procedure. To reduce the likelihood of inadvertent commands from the control console, the software disables the user interface if it is not accessed within 2 minutes. Also, the user is prevented from commanding motion from the touchscreen and the joysticks simultaneously, as well as prevented from queuing multiple touchscreen commands. A touch sensor has been designed into each joystick, which will only operate if the physician’s fingers are in place. This prevents any movement of the devices not intended by the operator. Encoders in the robotic drive track the motion of the interventional devices loaded in the cassette. If the encoders detect unintended motion, the control software shuts down power to the robotic drive. Emergency stop buttons are located on the control console (Fig. 20.9) as well as on the robotic drive (Fig. 20.10). In the event of an emergency, pushing either emergency stop button will disable power to the robotic drive.


FIGURE 20.10 Components on the arm and robotic drive used for safety during the procedure, including the obstacle sensor, emergency stop button, and emergency arm release.


Handbook of Robotic and Image-Guided Surgery

The robotic drive moves when the interventionalist positions the GC. To prevent the robotic drive from injuring the patient, a force sensor is built into its distal surface (see the obstacle sensor shown in Fig. 20.10). When a force threshold is met, the control software notifies the user and prevents the user from further advancing the GC. In the event of a power outage, the cassette is designed to allow manual removal of the interventional devices. With the devices removed, an emergency arm release (Fig. 20.10) overrides the articulated arm lock to allow the user to swing the arm out of the way to continue the interventional procedure.
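The obstacle-sensor behavior above amounts to gating GC advancement on a force threshold. The sketch below is a minimal illustration; the threshold value and function names are assumptions, not the device's actual parameters.

```python
# Minimal sketch of the obstacle-sensor gating described above: when the measured
# contact force meets a threshold, further GC advancement is inhibited and the
# user is notified. FORCE_LIMIT_N and the API are hypothetical.

FORCE_LIMIT_N = 2.0  # assumed contact-force threshold

def gate_advance(requested_advance_mm, measured_force_n):
    """Return (allowed_advance_mm, warning). Advancement toward the patient is
    blocked once the force threshold is met; retraction stays allowed."""
    if measured_force_n >= FORCE_LIMIT_N and requested_advance_mm > 0:
        return 0.0, "obstacle detected: GC advance inhibited"
    return requested_advance_mm, None

assert gate_advance(1.0, 0.3) == (1.0, None)
allowed, warning = gate_advance(1.0, 2.5)
assert allowed == 0.0 and warning is not None
assert gate_advance(-1.0, 2.5) == (-1.0, None)  # retraction is still permitted
```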

20.4 Kinematics analysis

As explained above, the robotic drive and cassette are positioned at the desired position and orientation relative to the patient’s body using the arm. The arm should be able to reach multiple access points of the patient’s body that are normally used in PCI procedures. Also, patients have different sizes and anatomies. Therefore, the kinematics of the arm should be studied to find the design parameters (e.g., length of the linkages) to reach the desired range of poses.

20.4.1 Forward kinematics

The forward kinematics formulation finds the position and orientation of a robot's end-effector given the values of its joint variables. The joint variables are the angular positions of the rotational joints ($\theta_j$) and the linear displacements of the prismatic joints ($d_j$). For serial robots, the pose of each linkage can be described relative to the previous linkage; consequently, a chain of transformation matrices can be written to determine the pose of the end-effector relative to the desired coordinate system. For simple 2D mechanisms, this can be done by several alternative methods, such as writing geometric relations or performing vector manipulation. However, for robots with multiple 3D linkages and complex geometry, a systematic method of obtaining the rotation and translation matrices (i.e., transformation matrices) between linkages and the associated coordinate systems becomes necessary.

20.4.1.1 Denavit–Hartenberg method

Different methods of obtaining the transformation matrix between the links of a mechanism are used in the literature [e.g., the Euler method and the Denavit–Hartenberg (DH) method [19,20]]. While any arbitrary frame (i.e., coordinate system) attached to a linkage can be used to obtain the transformation matrix, a systematic convention helps to: (1) reduce the number of variables required for defining the pose of the linkages, and (2) simplify the derivation of the kinematics equations (i.e., similar operations can be repeated for different linkages and different robots). Such a convention is provided by the DH method [20,21]. In general, the position and orientation of a 3D rigid body relative to a coordinate system can be described by six parameters. By introducing a specific method of selecting the coordinate systems (i.e., defining two constraints), the DH formulation requires only four parameters to define each subsequent transformation matrix. In the DH method, for revolute joints the rotation axis is selected as the Z axis and, for prismatic joints, the Z axis is selected coincident with the prismatic joint and parallel to the sliding direction. Joints with more than 1 degree of freedom (DoF) can be modeled as a combination of multiple virtual 1-DoF joints without loss of generality. For example, a 3-DoF spherical joint can be modeled by three orthogonal revolute joints whose axes intersect at the center of the joint. Herein, the modified DH method is used to perform the kinematics analysis of the articulated arm. In the modified DH formulation [19], the X axis of joint $j-1$ (i.e., $X_{j-1}$) is selected so that its extension intersects the Z axis of the subsequent joint ($Z_j$) and is perpendicular to it (Fig. 20.11). By defining the frames in this way, the transformation between two subsequent frames can be written as a combination of four basic transformations.
The first two basic transformations are a rotation and a translation relative to $X_{j-1}$. Since $X_{j-1}$ is perpendicular to both the $Z_{j-1}$ and $Z_j$ axes, it is guaranteed that $Z'_{j-1}$ can become parallel to $Z_j$ by rotation of frame $j-1$ about the $X_{j-1}$ axis. Herein, the required angle of rotation for this transformation is called $\gamma_{j-1}$. The next basic transformation is a translation along the $X_{j-1}$ axis so that the origin of the transformed frame $j-1$ is coincident with the extension of $Z_j$. The required magnitude of translation for this transformation is called $a_{j-1}$. Next, the frame can be rotated about $Z_j$ (i.e., the new $Z'_{j-1}$) so that all the axes of frame $j-1$ are parallel to those of frame $j$. Finally, with a translation along the $Z_j$ axis, the transformation from frame $j-1$ to frame $j$ is completed. The rotation and translation associated with the two latter basic transformations are called $\theta_j$ and $d_j$, respectively.


FIGURE 20.11 DH parameters and coordinate systems; the order of operations is numbered from 1 to 4. DH, Denavit–Hartenberg.

Using the DH method described above, the final matrix transforming frame $j-1$ to frame $j$, $T_{j-1,j}$, can be obtained as the product of the basic transformations performed relative to the $X_{j-1}$ and $Z_j$ axes. The rotation matrix for a rotation of $\gamma_{j-1}$ about the $X_{j-1}$ axis can be written as follows [19,20]:

$$R_{X_{j-1},\gamma_{j-1}} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma_{j-1} & -\sin\gamma_{j-1} \\ 0 & \sin\gamma_{j-1} & \cos\gamma_{j-1} \end{bmatrix} \quad (20.1)$$

For a translation along the $X_{j-1}$ axis with magnitude $a_{j-1}$, we have:

$$\begin{bmatrix} X_{j-1} \\ Y_{j-1} \\ Z_{j-1} \end{bmatrix} = \begin{bmatrix} X_j + a_{j-1} \\ Y_j \\ Z_j \end{bmatrix} \quad (20.2)$$

By adding a row to the position vectors, the translation matrix $P_{X_{j-1},a_{j-1}}$ can be written as:

$$\begin{bmatrix} X_{j-1} \\ Y_{j-1} \\ Z_{j-1} \\ 1 \end{bmatrix} = \underbrace{\begin{bmatrix} 1 & 0 & 0 & a_{j-1} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}}_{P_{X_{j-1},a_{j-1}}} \begin{bmatrix} X_j \\ Y_j \\ Z_j \\ 1 \end{bmatrix} \quad (20.3)$$

The final transformation relative to the $X_{j-1}$ axis can be obtained as the product of the basic rotation and translation matrices (in the same order as the operations). To be able to perform this operation, the rotation matrix should be written in a 4 × 4 format; the elements of the added row and column (fourth row and column) are zero except for the diagonal element, which is 1, as shown below:

$$A_{X_{j-1}} = \underbrace{\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\gamma_{j-1} & -\sin\gamma_{j-1} & 0 \\ 0 & \sin\gamma_{j-1} & \cos\gamma_{j-1} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}}_{\gamma_{j-1}\ \text{rotation about}\ X_{j-1}\ \text{axis}} \underbrace{\begin{bmatrix} 1 & 0 & 0 & a_{j-1} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}}_{a_{j-1}\ \text{translation along}\ X_{j-1}\ \text{axis}} \quad (20.4)$$

$$A_{X_{j-1}} = \begin{bmatrix} 1 & 0 & 0 & a_{j-1} \\ 0 & \cos\gamma_{j-1} & -\sin\gamma_{j-1} & 0 \\ 0 & \sin\gamma_{j-1} & \cos\gamma_{j-1} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (20.5)$$

In this way, the resultant transformation matrix $A_{X_{j-1}}$ can be written in the following general form [20]:

$$A_{X_{j-1}} = \begin{bmatrix} R_{X_{j-1},\gamma_{j-1}} & P_{X_{j-1},a_{j-1}} \\ 0 & 1 \end{bmatrix} \quad (20.6)$$

The same procedure can be followed for the rotation and translation operations relative to the $Z_j$ axis, which results in:

$$A_{Z_j} = \begin{bmatrix} \cos\theta_j & -\sin\theta_j & 0 & 0 \\ \sin\theta_j & \cos\theta_j & 0 & 0 \\ 0 & 0 & 1 & d_j \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (20.7)$$

Accordingly, the final transformation matrix between frames $j-1$ and $j$ can be obtained as:

$$T_j^{j-1} = A_{X_{j-1}} A_{Z_j} \quad (20.8)$$

$$T_j^{j-1} = \begin{bmatrix} \cos\theta_j & -\sin\theta_j & 0 & a_{j-1} \\ \cos\gamma_{j-1}\sin\theta_j & \cos\gamma_{j-1}\cos\theta_j & -\sin\gamma_{j-1} & -d_j\sin\gamma_{j-1} \\ \sin\gamma_{j-1}\sin\theta_j & \sin\gamma_{j-1}\cos\theta_j & \cos\gamma_{j-1} & d_j\cos\gamma_{j-1} \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (20.9)$$

This can again be written in the general form:

$$T_j^{j-1} = \begin{bmatrix} R_j^{j-1} & P_j^{j-1} \\ 0 & 1 \end{bmatrix} \quad (20.10)$$

where $R_j^{j-1}$ and $P_j^{j-1}$ are the rotation matrix and translation vector between frames $j-1$ and $j$.
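As a numeric sanity check of this derivation, the sketch below (plain Python, with arbitrary illustrative parameter values) multiplies the two basic transformation matrices $A_{X_{j-1}}$ and $A_{Z_j}$ of Eqs. (20.5) and (20.7) and compares the product of Eq. (20.8) against the closed-form matrix of Eq. (20.9):

```python
import math

# Verify numerically that A_X(γ, a) · A_Z(θ, d) equals the closed-form T of Eq. (20.9).
# Pure-Python 4x4 helpers; the test values for (γ, a, θ, d) are arbitrary.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def A_x(gamma, a):  # rotation γ about X_{j-1} followed by translation a along X_{j-1}
    c, s = math.cos(gamma), math.sin(gamma)
    return [[1, 0, 0, a], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]]

def A_z(theta, d):  # rotation θ about Z_j and translation d along Z_j
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, d], [0, 0, 0, 1]]

def T_closed_form(gamma, a, theta, d):  # Eq. (20.9)
    cg, sg, ct, st = math.cos(gamma), math.sin(gamma), math.cos(theta), math.sin(theta)
    return [[ct, -st, 0, a],
            [cg * st, cg * ct, -sg, -d * sg],
            [sg * st, sg * ct, cg, d * cg],
            [0, 0, 0, 1]]

gamma, a, theta, d = 0.7, 0.415, -1.1, 0.088
T_product = matmul(A_x(gamma, a), A_z(theta, d))
T_direct = T_closed_form(gamma, a, theta, d)
assert all(abs(T_product[i][j] - T_direct[i][j]) < 1e-12 for i in range(4) for j in range(4))
```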

20.4.1.2 Forward kinematics formulation of the arm

To develop the forward kinematics formulation for the CorPath articulated arm, the coordinate systems were selected based on the modified DH convention explained above. The top and side views of the schematic diagram of the arm and the attached frames are shown in Fig. 20.12, and the parameters are listed in Table 20.2.

FIGURE 20.12 Schematic diagram of the articulated arm with DH parameters and coordinate systems: (A) top view of the arm and (B) side view of the arm (normal to P3 and P4). DH, Denavit–Hartenberg.


TABLE 20.2 Denavit–Hartenberg parameters for the transformation from frame j to frame j − 1.

j | T_{j−1,j} | γ_j (rad) | a_j (m) | d_j (m) | θ_j (rad)
1 | T_{0,1}   | π/2       | 0       | −0.684  | φ1 − π/2 + φc
2 | T_{1,2}   | 0         | 0.415   | 0       | φ2 − φc
3 | T_{2,3}   | π/2       | 0.051   | 0       | φ3
4 | T_{3,4}   | 0         | 0.368   | 0       | −φ3
5 | T_{4,5}   | −π/2      | 0.029   | 0.088   | φ5
6 | T_{5,6}   | π/3       | 0.381   | 0.152   | −(π/2 − φp)
7 | T_{6,7}   | −π/2      | 0       | 0       | −π/2

$$\theta_4 = -\theta_3 \quad (20.11)$$

Given the joint variables as the inputs of the forward kinematics model, the transformation relation between each pair of frames can be written. For instance, if $\theta_1 = \pi/2$:

$$\begin{bmatrix} X_0 \\ Y_0 \\ Z_0 \\ 1 \end{bmatrix} = \underbrace{\begin{bmatrix} 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & -d_1 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}}_{T_1^0} \begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \\ 1 \end{bmatrix} \quad (20.12)$$

which results in $X_0 = -Y_1$, $Y_0 = -Z_1 - d_1$, $Z_0 = X_1$. Following the same rule for the chain of links, the position of a point in the $j$th frame relative to the base coordinate system ($j = 0$) can be found as follows:

$$\begin{bmatrix} X_0 \\ Y_0 \\ Z_0 \\ 1 \end{bmatrix} = \underbrace{T_1^0 \times T_2^1 \times \cdots \times T_j^{j-1}}_{T_j^0} \begin{bmatrix} X_j \\ Y_j \\ Z_j \\ 1 \end{bmatrix} \quad (20.13)$$

For example, by knowing the joint variables and setting $j = 7$ in Eq. (20.13), the position of the tip of the end-effector (TEE), which is located at (107, 12.5, 44) mm from the origin of the seventh frame, can be found in the base coordinate system. Assuming $\theta_1 = 10$ degrees, $\theta_2 = 60$ degrees, $\theta_3 = 15$ degrees, $\theta_5 = 20$ degrees (Fig. 20.11), the position of the end-effector is calculated to be $x_0 = 630$, $y_0 = 142$, and $z_0 = 1288$ mm, which is consistent with the CAD model (Fig. 20.11). Also, the orientation of the end-effector (seventh frame) can be obtained by extracting the rotation matrix from $T_7^0$, as presented in Eq. (20.10). Accordingly:

$$\begin{bmatrix} \hat{e}_{i0} \\ \hat{e}_{j0} \\ \hat{e}_{k0} \end{bmatrix} = \underbrace{R_1^0 \times R_2^1 \times \cdots \times R_7^6}_{R_7^0} \begin{bmatrix} \hat{e}_{i7} \\ \hat{e}_{j7} \\ \hat{e}_{k7} \end{bmatrix} \quad (20.14)$$

In addition, the unit vectors of the frame attached to the end-effector can be described in the base frame as follows:

$$\begin{bmatrix} \hat{e}_{i7} \\ \hat{e}_{j7} \\ \hat{e}_{k7} \end{bmatrix} = \underbrace{\left(R_7^0\right)^{-1}}_{R_0^7} \begin{bmatrix} \hat{e}_{i0} \\ \hat{e}_{j0} \\ \hat{e}_{k0} \end{bmatrix} \quad (20.15)$$
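The chain product of Eq. (20.13) can be sketched numerically with the rows of Table 20.2. The pairing of $\gamma$ with each row and the sign conventions below follow the reconstruction given here, so the resulting numbers are illustrative rather than authoritative; the check only asserts properties that must hold for any rigid-body chain (orthonormal rotation block, homogeneous bottom row).

```python
import math

# Sketch of the forward kinematics chain T_7^0 = T_1^0 · T_2^1 · ... · T_7^6
# using the (γ, a, d, θ) rows of Table 20.2 and θ4 = -θ3 from Eq. (20.11).

def dh_T(gamma, a, theta, d):  # Eq. (20.9)
    cg, sg, ct, st = math.cos(gamma), math.sin(gamma), math.cos(theta), math.sin(theta)
    return [[ct, -st, 0, a],
            [cg * st, cg * ct, -sg, -d * sg],
            [sg * st, sg * ct, cg, d * cg],
            [0, 0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

phi_c, phi_p = math.radians(10.5), math.radians(35.25)
phi1, phi2, phi3, phi5 = map(math.radians, (10, 60, 15, 20))
rows = [  # (γ, a, d, θ) per Table 20.2
    (math.pi / 2, 0.0, -0.684, phi1 - math.pi / 2 + phi_c),
    (0.0, 0.415, 0.0, phi2 - phi_c),
    (math.pi / 2, 0.051, 0.0, phi3),
    (0.0, 0.368, 0.0, -phi3),             # θ4 = -θ3
    (-math.pi / 2, 0.029, 0.088, phi5),
    (math.pi / 3, 0.381, 0.152, -(math.pi / 2 - phi_p)),
    (-math.pi / 2, 0.0, 0.0, -math.pi / 2),
]
T = [[float(i == j) for j in range(4)] for i in range(4)]  # identity
for g, a, d, th in rows:
    T = matmul(T, dh_T(g, a, th, d))

# T_7^0 must remain a rigid transform: orthonormal rotation block, [0 0 0 1] bottom row.
R = [row[:3] for row in T[:3]]
for i in range(3):
    for j in range(3):
        dot = sum(R[i][k] * R[j][k] for k in range(3))
        assert abs(dot - (1.0 if i == j else 0.0)) < 1e-9
assert T[3] == [0.0, 0.0, 0.0, 1.0]
```

Applying `T` to the TEE offset (107, 12.5, 44) mm would then give the base-frame TEE position of the worked example above, up to the conventions assumed here.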


The DH parameters include four joint variables ($\phi_1$, $\phi_2$, $\phi_3$, $\phi_5$), shown in Fig. 20.4, and constants. In Table 20.2, $\phi_c = 10.5$ degrees and $\phi_p = 35.25$ degrees are constants related to the geometry of the arm. As shown in Fig. 20.5, a four-bar linkage is used in arm 3–4 so that the axis of P5 remains vertical. The four-bar linkage is not shown in Fig. 20.12; instead, the parallelism constraint of Eq. (20.11) is added and included in the analysis.


20.4.2 Inverse kinematics and workspace analysis

The inverse kinematics formulation finds the joint variables of the arm so that the end-effector achieves the desired position and orientation. Inverse kinematic analysis can be used to obtain the workspace of the robot. Herein, the workspace refers to the region that the arm can reach while the end-effector (i.e., cassette) has the desired orientation for the access point.

20.4.2.1 Independent joint variables

The articulated arm has four independent DoFs between the 0th and seventh frames (Fig. 20.12), which are controlled by four independent joint variables ($\theta_1$, $\theta_2$, $\theta_3$, $\theta_5$). The relation between ($\theta_1$, $\theta_2$, $\theta_3$, $\theta_5$) and ($\phi_1$, $\phi_2$, $\phi_3$, $\phi_5$) can be found in Table 20.2. $\theta_4$ is not an independent joint variable due to the parallelism constraint (Eq. 20.11). Although the $\theta_6$ value can switch between 0 and $\phi_p$ (= 35 degrees), $\theta_6$ is always set to $\phi_p$ during the PCI procedure. $\theta_6$ controls the angle of the devices relative to the table surface, $\lambda$. $\lambda$ is a compound angle that is a function of both $\gamma_5$ and $\theta_6$. The $\gamma_5$ and $\theta_6$ values are selected so that $\lambda$ is 30 degrees. A 30-degree angle allows the TEE (shown in Fig. 20.4) to be positioned close to the patient without the robotic drive touching the patient's body. Also, with a 30-degree angle, sharp or undesired angles or curves are avoided when the devices are introduced into the patient's body. The parallelism constraint (Eq. 20.11) ensures that $\lambda = 30$ degrees is maintained regardless of the values of the four joint variables ($\theta_1$, $\theta_2$, $\theta_3$, $\theta_5$). This can be checked mathematically as follows. The GC direction is parallel to $\hat{e}_{k7}$. Therefore, $\lambda$ can be found as:

$$\lambda = \frac{\pi}{2} - \arccos(\hat{e}_{k7} \cdot \hat{e}_{j0}) \quad (20.16)$$

To perform this vector manipulation, both vectors $\hat{e}_{k7}$ and $\hat{e}_{j0}$ should be written in the same coordinate system. Using Eq. (20.14), one can transform $\hat{e}_{k7}$ to the base coordinate system (i.e., find the projection of $\hat{e}_{k7}$ on the axes of the base coordinate system) and find the product of the two vectors as follows:

$$\hat{e}_{k7} \cdot \hat{e}_{j0} = \left(\underbrace{R_7^0 \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}}_{\hat{e}_{k7}}\right) \cdot \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \quad (20.17)$$

The right side of Eq. (20.17) is always equal to 0.5 for any given value of $\theta_1$, $\theta_2$, $\theta_3$, $\theta_5$. Therefore, throughout the positioning phase, the angle between the GC and the table surface is maintained at 30 degrees based on Eq. (20.16). $\theta_6$ is set to 0 degrees when the arm is in the stowed configuration to reduce the volume occupied by the robot at the cath lab table. As explained above, the robotic drive can be actuated linearly relative to arm 5–6 (parallel to the $Z_7$ direction), which adds another DoF to the robot. Linear actuation of the robotic drive expands the reach of the arm. However, this linear motion is reserved for adjusting the GC position in the artery only after the arm is positioned. Therefore, this DoF is not considered in the inverse kinematics and workspace analysis below.

20.4.2.2 Inverse kinematics formulation

Since the angle between the GC and the table surface is constant, the position and orientation of the end-effector (seventh frame) can be fully defined by knowing four variables: three position variables of the tip of the cassette and one orientation variable, which is the angle of the GC ($\hat{e}_{k7}$) relative to the bed rail ($\hat{e}_{k0}$). These four variables can be treated as the outputs of the arm, and the four independent joint variables can be defined as the inputs. A pose of the end-effector is reachable (i.e., is inside the workspace) only if the inverse kinematics equations have a solution. In addition to the above four joint inputs, by unlocking the rail clamp the user can slide the entire arm on the bed rail to adjust its position relative to the patient. If this DoF is also considered in the analysis, the inverse kinematics equations can have infinite solutions and the workspace becomes larger. However, in this study, a conservative assumption is made that the end-effector should reach all the desired positions and orientations while the arm position is fixed on the rail. This assumption ensures that the arm does not interfere with other systems mounted on the bed rail. For the inverse kinematics formulation, the desired angle of the GC and the position of the tip of the cassette (i.e., the exit point of the GC from the cassette) are known in the base coordinate system. One can start solving the inverse kinematics equations by writing the relation between the known orientation of the end-effector and the joint variables


of the arm. Given the GC angles relative to $Z_0$ and $X_0$ (i.e., $\phi_{GC/Z_0}$ and $\phi_{GC/X_0}$, respectively), the $\hat{e}_{k7}$ vector can be written in the base coordinate system as:

$$\hat{e}_{k7} = \begin{bmatrix} \cos(\phi_{GC/X_0}) \\ -0.5 \\ \cos(\phi_{GC/Z_0}) \end{bmatrix} \quad (20.18)$$

Using Eqs. (20.14) and (20.18), one can write:

$$R_1^0 \times R_2^1 \times \cdots \times R_7^6 \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} \cos(\phi_{GC/X_0}) \\ -0.5 \\ \cos(\phi_{GC/Z_0}) \end{bmatrix} \quad (20.19)$$

For simplification, the product of the rotation matrices (left side of Eq. 20.19) can be broken into two parts:

$$R_1^0 \times R_2^1 \times R_3^2 \times R_4^3 \times R_5^4 = \begin{bmatrix} \cos(\theta_1+\theta_2+\theta_5) & -\sin(\theta_1+\theta_2+\theta_5) & 0 \\ 0 & 0 & -1 \\ \sin(\theta_1+\theta_2+\theta_5) & \cos(\theta_1+\theta_2+\theta_5) & 0 \end{bmatrix} \quad (20.20)$$

where $\phi_p$ is the pitch angle of the robotic drive and has a known value of 35 degrees. Therefore, the only unknown in Eq. (20.19) is $\eta$, where:

$$\eta = \theta_1 + \theta_2 + \theta_5 \quad (20.22)$$

Accordingly, Eq. (20.19) can be written as follows:

$$\begin{bmatrix} \cos\phi_p\cos\eta - \tfrac{1}{2}\sin\phi_p\sin\eta \\ -\tfrac{\sqrt{3}}{2}\sin\phi_p \\ \cos\phi_p\sin\eta + \tfrac{1}{2}\sin\phi_p\cos\eta \end{bmatrix} = \begin{bmatrix} \cos(\phi_{GC/X_0}) \\ -0.5 \\ \cos(\phi_{GC/Z_0}) \end{bmatrix} \quad (20.23)$$

Each row of the vectors on the two sides of Eq. (20.23) represents one equation. However, the second equation is a trivial relation ($-\tfrac{\sqrt{3}}{2}\sin(35.25^\circ) = -0.5$), which again verifies what was found from Eq. (20.17). Therefore, we have two independent equations, which can be written in the following standard form:

$$AX = B, \quad \text{where } A = \begin{bmatrix} -\tfrac{1}{2}\sin\phi_p & \cos\phi_p \\ \cos\phi_p & \tfrac{1}{2}\sin\phi_p \end{bmatrix},\ X = \begin{bmatrix} \sin\eta \\ \cos\eta \end{bmatrix},\ B = \begin{bmatrix} \cos(\phi_{GC/X_0}) \\ \cos(\phi_{GC/Z_0}) \end{bmatrix} \quad (20.24)$$

Therefore, X can be found as:

$$\begin{bmatrix} \sin\eta \\ \cos\eta \end{bmatrix} = A^{-1} B \quad (20.25)$$

Using Eq. (20.25), $\eta$ can be simply calculated. For validation, the same example that was presented in the forward kinematics section ($\phi_p = \phi_{GC/Z_0} = 35.25$ degrees) can be considered. Using Eq. (20.25), $A^{-1}B = [1, 0]^T$. Therefore, $\eta = \arcsin(1) = 90$ degrees, which is consistent with the joint variables that were used in the forward kinematic model.
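This first inverse-kinematics step can be checked numerically. The sketch below solves the 2 × 2 system of Eqs. (20.24)–(20.25) by Cramer's rule; the sign pattern of $A$ follows the reconstruction of Eq. (20.23) given here, and the inputs reproduce the worked validation example ($\phi_p = \phi_{GC/Z_0} = 35.25$ degrees, $A^{-1}B = [1, 0]^T$).

```python
import math

# Solve A [sin η, cos η]^T = B for η, per Eqs. (20.24)-(20.25).

def solve_eta(phi_p, cos_phi_x0, cos_phi_z0):
    sp, cp = math.sin(phi_p), math.cos(phi_p)
    a11, a12 = -0.5 * sp, cp   # row 1 of A (from row 1 of Eq. 20.23)
    a21, a22 = cp, 0.5 * sp    # row 2 of A (from row 3 of Eq. 20.23)
    det = a11 * a22 - a12 * a21
    sin_eta = (a22 * cos_phi_x0 - a12 * cos_phi_z0) / det   # Cramer's rule
    cos_eta = (-a21 * cos_phi_x0 + a11 * cos_phi_z0) / det
    return math.atan2(sin_eta, cos_eta)

phi_p = math.radians(35.25)
# Target orientation taken from the validation example, for which A^{-1}B = [1, 0]:
B1 = -0.5 * math.sin(phi_p)   # cos(φ_GC/X0) implied by η = 90 degrees
B2 = math.cos(phi_p)          # cos(φ_GC/Z0), i.e. φ_GC/Z0 = φ_p
eta = solve_eta(phi_p, B1, B2)
assert abs(math.degrees(eta) - 90.0) < 1e-9
```

Using `atan2` on the recovered $(\sin\eta, \cos\eta)$ pair avoids the quadrant ambiguity of $\arcsin$ alone.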


Note that the resultant matrix in Eq. (20.20) is independent of $\theta_3$ due to the parallelism constraint (Eq. 20.11). In other words, the axis of the P5 joint stays perpendicular to the table for any arbitrary value of $\theta_3$. The remaining part of the left side of Eq. (20.19) can be obtained as follows:

$$R_6^5 \times R_7^6 = \begin{bmatrix} \cos\phi_p & 0 & \sin\phi_p \\ \tfrac{1}{2}\sin\phi_p & -\tfrac{\sqrt{3}}{2} & -\tfrac{1}{2}\cos\phi_p \\ \tfrac{\sqrt{3}}{2}\sin\phi_p & \tfrac{1}{2} & -\tfrac{\sqrt{3}}{2}\cos\phi_p \end{bmatrix} \quad (20.21)$$


The other equations for solving the inverse kinematics can be obtained from position relations. The transformation matrix between the TEE and the base frame can be written using Eq. (20.13). If $P_{7\text{-}TEE} = (x_{7\text{-}TEE}, y_{7\text{-}TEE}, z_{7\text{-}TEE})$ represents the position vector of the TEE relative to the seventh frame, the position of the TEE relative to the 0th frame, $P_{0\text{-}TEE} = (x_{0\text{-}TEE}, y_{0\text{-}TEE}, z_{0\text{-}TEE})$, can be found as:

$$\begin{bmatrix} x_{0\text{-}TEE} \\ y_{0\text{-}TEE} \\ z_{0\text{-}TEE} \\ 1 \end{bmatrix} = T_7^0 \begin{bmatrix} x_{7\text{-}TEE} \\ y_{7\text{-}TEE} \\ z_{7\text{-}TEE} \\ 1 \end{bmatrix} \quad (20.26)$$

Therefore,

$$\begin{bmatrix} x_{0\text{-}TEE} \\ y_{0\text{-}TEE} \\ z_{0\text{-}TEE} \end{bmatrix} = \begin{bmatrix} a_2\cos\theta_1 + a_3\cos\psi + a_4\cos\theta_3\cos\psi + f_1(P_{7\text{-}TEE}, \eta, \phi_p, a_6, d_6) \\ -a_4\sin\theta_3 + f_2(P_{7\text{-}TEE}, \eta, \phi_p, d_1, d_5, d_6) \\ a_2\sin\theta_1 + a_3\sin\psi + a_4\cos\theta_3\sin\psi + f_3(P_{7\text{-}TEE}, \eta, \phi_p, a_6, d_6) \end{bmatrix} \quad (20.27)$$

$$\psi = \theta_1 + \theta_2 \quad (20.28)$$

where $f_1, f_2, f_3$ are functions of $\eta$ (found in the previous step from Eq. 20.25), the end-effector position $P_{0\text{-}TEE}$, and constants related to the geometry of the arm. Thus, the values of $f_1, f_2, f_3$ are known and can be found as follows:

$$f_1 = \sin\eta\, x_{0\text{-}TEE} + \left(\tfrac{1}{2}\sin\phi_p\cos\eta + \cos\phi_p\sin\eta\right) y_{0\text{-}TEE} + \left(\cos\phi_p\cos\eta - \tfrac{1}{2}\sin\phi_p\sin\eta\right) z_{0\text{-}TEE} \quad (20.29)$$

$$f_2 = -\tfrac{1}{2}\, x_{0\text{-}TEE} + \tfrac{\sqrt{3}}{2}\cos\phi_p\, y_{0\text{-}TEE} - \tfrac{\sqrt{3}}{2}\sin\phi_p\, z_{0\text{-}TEE} \quad (20.30)$$

$$f_3 = -\cos\eta\, x_{0\text{-}TEE} + \left(\tfrac{1}{2}\sin\phi_p\sin\eta - \cos\phi_p\cos\eta\right) y_{0\text{-}TEE} + \left(\cos\phi_p\sin\eta + \tfrac{1}{2}\sin\phi_p\cos\eta\right) z_{0\text{-}TEE} \quad (20.31)$$

In Eqs. (20.27)–(20.31), the $(x_{0\text{-}TEE}, y_{0\text{-}TEE}, z_{0\text{-}TEE})$ elements are known (the input of the inverse kinematics model), $\eta$ is known from Eq. (20.25), and $(x_{7\text{-}TEE}, y_{7\text{-}TEE}, z_{7\text{-}TEE})$, $\phi_p$, $a_j$, and $d_j$ ($j = 2, 3, \ldots, 6$) are constants associated with the geometry of the arm and end-effector (Table 20.2). Eq. (20.27) provides three independent equations to solve for three unknowns $(\theta_1, \theta_3, \psi)$. The second row of Eq. (20.27) contains only one unknown, $\theta_3$. Therefore, $\theta_3$ can be found as follows:

$$\theta_3 = \arcsin\left(\frac{f_2 - y_{0\text{-}TEE}}{a_4}\right) \quad (20.32)$$

The two remaining unknowns $(\theta_1, \psi)$ can be found using the first and third rows of Eq. (20.27):

$$\cos\theta_1 = -\frac{a_3\cos\psi + a_4\cos\theta_3\cos\psi + f_1(P_{7\text{-}TEE}, \eta, \phi_p, a_6, d_6)}{a_2} \quad (20.33)$$

$$\sin\theta_1 = -\frac{a_3\sin\psi + a_4\cos\theta_3\sin\psi + f_3(P_{7\text{-}TEE}, \eta, \phi_p, a_6, d_6)}{a_2} \quad (20.34)$$

Squaring the two sides of Eqs. (20.33) and (20.34) and adding them, the left side of the resultant equation becomes equal to 1 (i.e., $\theta_1$ is eliminated from the equation), and the right side of the equation contains only one unknown, $\psi$. Since the resultant equation is quadratic, it can have at most two solutions for $\psi$. The two solutions correspond to mirror configurations of a two-link mechanism. After finding $\psi$, $\theta_1$ can be found using Eq. (20.33). Next, $\theta_2$ and $\theta_5$ can be found by employing Eqs. (20.28) and (20.22), respectively.
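The squaring-and-adding elimination can be sketched as follows. With $k = a_3 + a_4\cos\theta_3$ and $f_1$, $f_3$ treated as known numbers, Eqs. (20.33)–(20.34) reduce to $\rho\cos(\psi - \delta) = c$, which yields the two mirror solutions. The numeric values below are synthetic, chosen only to exercise the algebra, not taken from the arm.

```python
import math

# Eliminate θ1 from Eqs. (20.33)-(20.34): squaring and adding gives
# a2^2 = k^2 + f1^2 + f3^2 + 2k(f1 cosψ + f3 sinψ), i.e. ρ cos(ψ - δ) = c.

def solve_psi_theta1(a2, k, f1, f3):
    c = (a2 ** 2 - k ** 2 - f1 ** 2 - f3 ** 2) / (2.0 * k)
    rho, delta = math.hypot(f1, f3), math.atan2(f3, f1)
    phase = math.acos(max(-1.0, min(1.0, c / rho)))
    solutions = []
    for psi in (delta + phase, delta - phase):          # the two mirror configurations
        psi = math.atan2(math.sin(psi), math.cos(psi))  # wrap to (-pi, pi]
        cos_t1 = -(k * math.cos(psi) + f1) / a2         # Eq. (20.33)
        sin_t1 = -(k * math.sin(psi) + f3) / a2         # Eq. (20.34)
        solutions.append((psi, math.atan2(sin_t1, cos_t1)))
    return solutions

# Build a consistent synthetic case from known θ1 and ψ, then solve it back.
a2, k = 0.415, 0.40
theta1_true, psi_true = math.radians(25.0), math.radians(80.0)
f1 = -a2 * math.cos(theta1_true) - k * math.cos(psi_true)
f3 = -a2 * math.sin(theta1_true) - k * math.sin(psi_true)
sols = solve_psi_theta1(a2, k, f1, f3)
assert any(abs(p - psi_true) < 1e-9 and abs(t - theta1_true) < 1e-9 for p, t in sols)
```

In practice the solution branch would be chosen by joint limits and obstacle constraints, exactly as the text's "mirror configurations" remark suggests.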


FIGURE 20.13 Four access points (left femoral, right femoral, left radial, and right radial) of a tall undersized male patient are shown with cones.

20.4.2.3 Workspace

The arm should be able to reach four different arterial access points for devices being introduced into the patient's body: left femoral, right femoral, left radial, and right radial. Additionally, patients can have different genders, heights, and weights, all of which affect the required orientation and position of the end-effector relative to the base coordinate system. To study the reachability of the arm to the access points of different body types, four extreme body categories were considered: tall oversized male, tall undersized male, small oversized female, and small undersized female. Fig. 20.13 shows the access points for a tall undersized male patient. For illustration, each access point is represented by a cone in the figure. The end-effector should be as concentric as possible with these cones to avoid any undesired curvature in the GC or other devices between the tip of the cassette and the entry point of the patient's body. It is desired that the arm can operate properly on a wide range of patients. Therefore, models of patients with appropriate size and dimensions of the above four categories were considered. The workspace of the arm can be considered a convex hull. As a result, if the end-effector reaches the four access points for the four extreme body types (i.e., 16 poses), it can achieve the desired pose of the end-effector for any arbitrary patient size in between those four extreme sizes. Using the inverse kinematics formulation derived above, the feasibility of reaching those 16 poses with the arm was studied. The geometry and length of the arm components were determined so that the inverse kinematics model could lead to an acceptable solution (i.e., joint variables) for all 16 poses.

20.5 Motor control system modeling

As explained in Section 20.2.3, the modules of the cassette are driven by the capstans on the robotic drive to manipulate the devices. Each capstan on the robotic drive is actuated by one motor. The motors are permanent magnet synchronous motors (PMSMs), which include both brushless DC (BLDC) and bipolar stepper motors. The significant difference between these two classes of motors is that stepper motor rotors tend to have higher numbers of magnetic pole pairs, $n_p$, and do not possess Hall effect sensors for sensing the quadrature phase current, $i_q$, with respect to rotor position. Unlike brushed DC motors, in which the direct current is commutated by mechanical brushes in order to apply a torque proportional to the stationary (direct) motor phase currents, PMSMs need to be run in either an inefficient direct current control mode for stepper motor control or an efficient quadrature current control mode for BLDC motors. Both current control modes can be realized with the same direct–quadrature (DQ) current control architecture, in which a proportional–integral (PI) controller applied to the feedback error between the measured direct and quadrature current terms and their corresponding references determines the corresponding direct and quadrature voltages to apply to the PMSM. In order to analyze system stability and understand how to realize an effective PI controller we shall: (1) recall the definition of a PMSM; (2) present the current control architecture for PMSMs and analyze system stability; (3) discuss how to realize direct torque control of BLDC motors; and (4) discuss direct current control of stepper motors.

20.5.1 Permanent magnet synchronous motor model

A PMSM, as used in CorPath, is a motor in which the rotor consists of $n_p \in \{1, 2, \ldots\}$ magnetic pole pairs and the stator usually consists of $m \in \{2, 3\}$ coil windings, or $m$ phases. It is further assumed that the rotor is cylindrical so that the


stator inductances are constant in the DQ frame [22]. The dynamics of PMSM motors can be captured by relating variables in a stationary two-phase system to the rotating DQ frame, in which the direct (d) axis is parallel to a given radius along the rotor face and the quadrature (q) axis is orthogonal to the d axis and the axis of rotation. The coil voltages $v_{\{d,q\}}$, currents $i_{\{d,q\}}$, and fluxes $\lambda_{\{d,q\}}$ in the DQ frame are denoted by the variables $f_{\{d,q\}}$. The coil voltages $v_{\{\alpha,\beta\}}$, currents $i_{\{\alpha,\beta\}}$, and fluxes $\lambda_{\{\alpha,\beta\}}$ in the stationary two-phase frame are denoted by the variables $f_{\{\alpha,\beta\}}$. $f_{\{d,q\}}$ and $f_{\{\alpha,\beta\}}$ are related to one another in terms of the angular position of the rotor $\theta$ and $n_p$ through the orthonormal DQ or Park transformation matrix [23], in which we denote the electrical angle $\theta_e = n_p\theta$:

$$\begin{bmatrix} f_d \\ f_q \end{bmatrix} = \begin{bmatrix} \cos(n_p\theta) & \sin(n_p\theta) \\ -\sin(n_p\theta) & \cos(n_p\theta) \end{bmatrix} \begin{bmatrix} f_\alpha \\ f_\beta \end{bmatrix} \quad (20.35)$$

The inverse relationship between $f_{\{d,q\}}$ and $f_{\{\alpha,\beta\}}$ is therefore:

$$\begin{bmatrix} f_\alpha \\ f_\beta \end{bmatrix} = \begin{bmatrix} \cos(n_p\theta) & -\sin(n_p\theta) \\ \sin(n_p\theta) & \cos(n_p\theta) \end{bmatrix} \begin{bmatrix} f_d \\ f_q \end{bmatrix} \quad (20.36)$$

For three-phase PMSMs we denote the stator voltages as $v_{\{a,b,c\}}$, and their corresponding currents and fluxes as $i_{\{a,b,c\}}$ and $\lambda_{\{a,b,c\}}$, using the variable $f_{\{a,b,c\}}$. The Clarke transform [23] is used to map $f_{\{a,b,c\}}$ to $f_{\{\alpha,\beta\}}$ as follows:

$$\begin{bmatrix} f_\alpha \\ f_\beta \end{bmatrix} = \begin{bmatrix} \frac{2}{3} & -\frac{1}{3} & -\frac{1}{3} \\ 0 & \frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{3}} \end{bmatrix} \begin{bmatrix} f_a \\ f_b \\ f_c \end{bmatrix} \quad (20.37)$$

The modified Clarke transform maps $f_{\{\alpha,\beta\}}$ to $f_{\{a,b,c\}}$ as follows:

$$\begin{bmatrix} f_a \\ f_b \\ f_c \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ -\frac{1}{2} & \frac{\sqrt{3}}{2} \\ -\frac{1}{2} & -\frac{\sqrt{3}}{2} \end{bmatrix} \begin{bmatrix} f_\alpha \\ f_\beta \end{bmatrix} \quad (20.38)$$
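The Park and Clarke transforms above can be sketched and round-trip checked in a few lines; the amplitude-invariant scaling follows the matrices as printed.

```python
import math

# Park (Eq. 20.35), inverse Park (Eq. 20.36), Clarke (Eq. 20.37), and modified
# Clarke (Eq. 20.38) transforms, with a round-trip consistency check.

def park(f_alpha, f_beta, theta_e):          # Eq. (20.35), θe = np·θ
    c, s = math.cos(theta_e), math.sin(theta_e)
    return c * f_alpha + s * f_beta, -s * f_alpha + c * f_beta

def inv_park(f_d, f_q, theta_e):             # Eq. (20.36)
    c, s = math.cos(theta_e), math.sin(theta_e)
    return c * f_d - s * f_q, s * f_d + c * f_q

def clarke(f_a, f_b, f_c):                   # Eq. (20.37)
    return (2 * f_a - f_b - f_c) / 3.0, (f_b - f_c) / math.sqrt(3.0)

def inv_clarke(f_alpha, f_beta):             # Eq. (20.38)
    return (f_alpha,
            -0.5 * f_alpha + math.sqrt(3.0) / 2.0 * f_beta,
            -0.5 * f_alpha - math.sqrt(3.0) / 2.0 * f_beta)

# Round trip: abc -> αβ -> dq -> αβ -> abc recovers a balanced three-phase set.
theta_e = 0.73
ia, ib, ic = (math.cos(theta_e), math.cos(theta_e - 2 * math.pi / 3),
              math.cos(theta_e + 2 * math.pi / 3))
alpha, beta = clarke(ia, ib, ic)
d, q = park(alpha, beta, theta_e)
assert abs(d - 1.0) < 1e-12 and abs(q) < 1e-12   # d-axis-aligned current
a2, b2 = inv_park(d, q, theta_e)
ra, rb, rc = inv_clarke(a2, b2)
assert max(abs(ra - ia), abs(rb - ib), abs(rc - ic)) < 1e-12
```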

With the mappings from the stationary to the rotating frames of motion established, it is sufficient to present the PMSM dynamics in the DQ frame. The PMSM inductance and resistance of the stator winding are denoted as $L$ and $R$, respectively. The moment of inertia and damping coefficient of the rotor are denoted $J$ and $B$, respectively. The rotor load torque and angular velocity are denoted $\tau_l$ and $\omega = \dot{\theta}$, respectively. The static friction amplitude is denoted $\tau_f$ and is considered fully realized when $|\omega| > \omega_f > 0$. The PMSM torque and back-EMF constant are denoted by the positive real coefficient $K$. A PMSM may also be subject to an angular position detent torque disturbance of amplitude $K_{d4}$ [24]. Finally, we shall denote: (1) the effective quadrature voltage as $\bar{v}_q = v_q - K\omega$; (2) the direct and effective quadrature input voltages as $\bar{v}_{DQ} = [v_d\ \ \bar{v}_q]^T$; (3) the motor DQ current outputs as $i_{DQ} = [i_d\ \ i_q]^T$; and (4) the skew-symmetric matrix as $S = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$, for which $S = -S^T$.

The PMSM dynamic model is:

$$L\dot{i}_{DQ} = \bar{v}_{DQ} - Ri_{DQ} + n_p\omega L S i_{DQ}$$
$$J\dot{\omega} = Ki_q - \tau_l - B\omega - K_{d4}\sin(4n_p\theta) - \tau_f\tanh\!\left(\frac{\pi\omega}{\omega_f}\right) \quad (20.39)$$
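The PMSM model of Eq. (20.39) can be exercised with a minimal forward-Euler integration. For clarity the detent, load, and static-friction disturbances are set to zero, and all numeric parameters are illustrative, not CorPath motor data.

```python
# Forward-Euler sketch of the DQ-frame PMSM dynamics of Eq. (20.39),
# with τl = Kd4 = τf = 0 for clarity. Parameter values are illustrative.

L, R, K = 1e-3, 1.0, 0.05          # inductance (H), resistance (Ohm), torque/back-EMF constant
J, B, n_p = 1e-4, 1e-4, 4          # rotor inertia, damping, pole pairs
dt, steps = 1e-4, 20_000           # 2 s of simulated time

i_d = i_q = omega = theta = 0.0
v_d, v_q = 0.0, 1.0                # constant quadrature voltage command
for _ in range(steps):
    vq_eff = v_q - K * omega       # effective quadrature voltage: v̄q = vq − Kω
    # L·di/dt = v̄_DQ − R·i_DQ + np·ω·L·S·i_DQ, with S = [[0, 1], [-1, 0]]
    did = (v_d - R * i_d + n_p * omega * L * i_q) / L
    diq = (vq_eff - R * i_q - n_p * omega * L * i_d) / L
    domega = (K * i_q - B * omega) / J
    i_d += dt * did
    i_q += dt * diq
    omega += dt * domega
    theta += dt * omega

# Speed settles near the analytic balance K·iq = B·ω, with iq ≈ (vq − Kω)/R,
# i.e. ω ≈ (K/R)·vq / (K²/R + B) ≈ 19.2 rad/s for these parameters.
assert 15.0 < omega < 24.0
assert abs(K * i_q - B * omega) < 1e-4
```

The skew-symmetric coupling term $n_p\omega L S i_{DQ}$ appears in the code as the cross terms between `i_d` and `i_q`; it does no work, which is exactly the property the passivity argument below exploits.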

20.5.2 Direct quadrature control architecture for permanent magnet synchronous motors

The DQ reference current is denoted as $i_{DQ\text{-}r} = [i_{d\text{-}r}\ \ i_{q\text{-}r}]^T$. The direct and quadrature current PI control law for a PMSM is of the following form:

$$e_{DQ}(t) = i_{DQ\text{-}r} - i_{DQ}$$
$$\dot{i}_{DQ\text{-}I}(t) = k_{DQ\text{-}I}\, e_{DQ}(t)$$
$$i_{DQ\text{-}c}(t) = k_{DQ\text{-}P}\, e_{DQ}(t) + i_{DQ\text{-}I}(t) \quad (20.40)$$


such that the commanded direct and effective input voltages are related to the control current as:

$$\bar{v}_{DQ}(t) = R\, i_{DQ\text{-}c}(t) \quad (20.41)$$

in which the symmetric proportional control gain matrix $k_{DQ\text{-}P} = k_{DQ\text{-}P}^T > 0$ and symmetric integral control gain matrix $k_{DQ\text{-}I} = k_{DQ\text{-}I}^T > 0$ are tuned to optimize performance and account for discrete-time sampling effects. For simplicity of discussion we assume that $i_{DQ\text{-}r} = 0$ in order to analyze Lyapunov stability for the continuous-time case and prove that asymptotic stability is guaranteed using the above control law. Our proof of stability involves passivity theory (see [25] for a historical overview), showing: (1) that the mapping from the PMSM voltage input $\bar{v}_{DQ}$ to the PMSM current output $i_{DQ}$ is strictly output passive; (2) that the PI control law (s.t. $k_{DQ\text{-}P} > 0$ and $k_{DQ\text{-}I} \geq 0$) is strictly input passive; and (3) from the passivity theorem, that the resulting system is asymptotically stable [26].
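The control law of Eqs. (20.40)–(20.41) can be sketched as a discrete-time simulation of the current loop. Here the rotor is held at $\omega = 0$ (locked rotor, so $\bar{v}_{DQ} = v_{DQ}$), and scalar gains stand in for the diagonal gain matrices; all values are illustrative.

```python
# Discrete-time sketch of the DQ PI current loop of Eqs. (20.40)-(20.41)
# driving the electrical subsystem of Eq. (20.39) with ω = 0 (locked rotor).
# Gains and parameters are illustrative.

L, R = 1e-3, 1.0
kP, kI = 5.0, 500.0                 # scalar gains, i.e. kDQ-P = kP·I, kDQ-I = kI·I
dt, steps = 2e-5, 5000              # 0.1 s of simulated time
i_ref = (0.0, 1.0)                  # direct and quadrature current references

i = [0.0, 0.0]                      # plant currents i_DQ
i_int = [0.0, 0.0]                  # integrator state i_DQ-I
for _ in range(steps):
    e = [i_ref[k] - i[k] for k in range(2)]            # Eq. (20.40): error
    i_c = [kP * e[k] + i_int[k] for k in range(2)]     # Eq. (20.40): control current
    v = [R * i_c[k] for k in range(2)]                 # Eq. (20.41): v̄_DQ = R·i_DQ-c
    for k in range(2):
        i_int[k] += dt * kI * e[k]                     # integrator update
        i[k] += dt * (v[k] - R * i[k]) / L             # L·di/dt = v̄ − R·i (ω = 0)

assert abs(i[1] - 1.0) < 0.01 and abs(i[0]) < 0.01     # currents track the reference
```

The simulated loop converging to the reference is the behavior the passivity-based stability argument below guarantees for the continuous-time case.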

Lemma 1: When commanding a PMSM motor (Eq. 20.39) with control inputs $\bar{v}_{DQ}$ and corresponding outputs $i_{DQ}$, the resulting system is strictly output passive, in which the storage function is:

$$V(i_{DQ}(t)) = \frac{1}{2} L\, i_{DQ}^T i_{DQ} \quad (20.42)$$

its corresponding derivative is:

$$\dot{V}(i_{DQ}(t)) = i_{DQ}^T \bar{v}_{DQ} - R\, i_{DQ}^T i_{DQ} \quad (20.43)$$

and its integral results in the strictly output passive inequality:

$$\langle i_{DQ}, \bar{v}_{DQ} \rangle_T \geq R\, \|(i_{DQ})_T\|_2^2 - V(i_{DQ}(0)) \quad (20.44)$$

in which $\langle y, u \rangle_T = \int_0^T y(t)^T u(t)\, dt$ and $\|(y)_T\|_2^2 = \langle y, y \rangle_T$.

Proof of Lemma 1: Since the inductance $L$ is a positive real value, $V(i_{DQ}(t)) \geq 0$ for all $i_{DQ}$. Therefore, the storage function $V(i_{DQ}(t))$ is a valid Lyapunov function. Furthermore, computing the derivative of $V(i_{DQ}(t))$ with respect to time and using Eq. (20.39) proceeds as follows:

$$\dot{V}(i_{DQ}(t)) = i_{DQ}^T L\, \dot{i}_{DQ} = i_{DQ}^T\left(\bar{v}_{DQ} - Ri_{DQ} + n_p\omega L S i_{DQ}\right) = i_{DQ}^T \bar{v}_{DQ} - R\, i_{DQ}^T i_{DQ} \quad (20.45)$$

in which the term involving the matrix $S$ vanished because $S$ is skew symmetric ($i_{DQ}^T S i_{DQ} = \frac{1}{2} i_{DQ}^T (S + S^T) i_{DQ} = 0$). Integrating the final expression for $\dot{V}(i_{DQ}(t))$ with respect to time over $t \in [0, T]$, $T \geq 0$, results in the strictly output passive inequality, since $V(i_{DQ}(T)) \geq 0$.

Lemma 2: The continuous-time PI controller (Eq. 20.40) with symmetric proportional control gain matrix $k_{DQ\text{-}P} = k_{DQ\text{-}P}^T > 0$ and symmetric integral control gain matrix $k_{DQ\text{-}I} = k_{DQ\text{-}I}^T > 0$, whose input is $e_{DQ}(t)$ and corresponding control output is $i_{DQ\text{-}c}(t)$, is strictly input passive. The storage function is:

$$V(i_{DQ\text{-}I}(t)) = \frac{1}{2}\, i_{DQ\text{-}I}^T k_{DQ\text{-}I}^{-1}\, i_{DQ\text{-}I} \quad (20.46)$$

whose derivative satisfies:

$$\dot{V}(i_{DQ\text{-}I}(t)) = i_{DQ\text{-}I}^T e_{DQ} = i_{DQ\text{-}c}^T e_{DQ} - e_{DQ}^T k_{DQ\text{-}P}\, e_{DQ} \quad (20.47)$$

such that the following strictly input passive inequality holds:

$$\langle i_{DQ\text{-}c}, e_{DQ} \rangle_T \geq \lambda_{\min}(k_{DQ\text{-}P})\, \|(e_{DQ})_T\|_2^2 - V(i_{DQ\text{-}I}(0)) \quad (20.48)$$

in which $\lambda_{\min}(k_{DQ\text{-}P})$ denotes the minimum eigenvalue of the matrix $k_{DQ\text{-}P}$.


Proof of Lemma 2: Since $k_{DQ\text{-}I} > 0$, then $k_{DQ\text{-}I}^{-1} > 0$, and therefore $V(i_{DQ\text{-}I}(t)) \ge 0$ for all $i_{DQ\text{-}I}$, making the storage function a valid Lyapunov function. Note that $k_{DQ\text{-}I} = k_{DQ\text{-}I}^T$ implies $k_{DQ\text{-}I}^{-1} = (k_{DQ\text{-}I}^{-1})^T$, so that computing the derivative of $V(i_{DQ\text{-}I}(t))$ with respect to time proceeds as follows:

$$\dot V(i_{DQ\text{-}I}(t)) = i_{DQ\text{-}I}^T k_{DQ\text{-}I}^{-1} \dot i_{DQ\text{-}I} = i_{DQ\text{-}I}^T k_{DQ\text{-}I}^{-1} k_{DQ\text{-}I} e_{DQ} = i_{DQ\text{-}I}^T e_{DQ} = i_{DQ\text{-}c}^T e_{DQ} - e_{DQ}^T k_{DQ\text{-}P} e_{DQ} \qquad (20.49)$$

The final expression in Eq. (20.49) results from substitution of Eq. (20.40). Integrating the final expression for $\dot V(i_{DQ\text{-}I}(t))$ with respect to time for $t \in [0, T]$, $T \ge 0$, and noting that the minimum singular value inequality for $k_{DQ\text{-}P}$ results from the symmetry constraint placed on $k_{DQ\text{-}P}$ (see Corollary 8.4.2 in Ref. [27]), results in the strictly input passive inequality, since $V(i_{DQ\text{-}I}(t)) \ge 0$.

We can now show that our feedback interconnection of a strictly input passive system (the PI control law) and a strictly output passive system (the PMSM) results in an asymptotically stable system.

Theorem 1: The PI control law (Eq. 20.40) applied to the PMSM (Eq. 20.39), in which $v_{DQ}(t) = R\, i_{DQ\text{-}c}(t)$, is globally asymptotically stable for all symmetric $k_{DQ\text{-}I} > 0$ and $I + k_{DQ\text{-}P} > 0$.

Proof of Theorem 1: We shall consider the following unbounded Lyapunov function:

$$V(i_{DQ}(t), i_{DQ\text{-}I}(t)) = \frac{1}{R} V(i_{DQ}(t)) + V(i_{DQ\text{-}I}(t)) \qquad (20.50)$$

Recall that for stability analysis we only consider $i_{DQ\text{-}r} = 0$, which results in $e_{DQ} = -i_{DQ}$. Therefore we have the resulting time derivative:

$$\dot V(i_{DQ}(t), i_{DQ\text{-}I}(t)) = \frac{1}{R} i_{DQ}^T v_{DQ} - i_{DQ}^T i_{DQ} + i_{DQ\text{-}c}^T e_{DQ} - e_{DQ}^T k_{DQ\text{-}P} e_{DQ} = i_{DQ}^T i_{DQ\text{-}c} - i_{DQ}^T i_{DQ\text{-}c} - i_{DQ}^T \left( I + k_{DQ\text{-}P} \right) i_{DQ} = -i_{DQ}^T \left( I + k_{DQ\text{-}P} \right) i_{DQ} \qquad (20.51)$$

such that $\dot V(i_{DQ}(t), i_{DQ\text{-}I}(t)) \le 0$ for all $(i_{DQ}(t), i_{DQ\text{-}I}(t))$ and $I + k_{DQ\text{-}P} > 0$. Since $\dot V$ is not strictly negative definite for all $i_{DQ\text{-}I}(t)$, we need to establish its largest invariant set. The largest invariant set for the integrator term $\dot i_{DQ\text{-}I} = 0$ is $-e_{DQ}(t) = i_{DQ}(t) = 0$. The resulting closed-loop system has the following properties: (1) the largest invariant set in which $\dot V(i_{DQ}(t), i_{DQ\text{-}I}(t)) = 0$ is $(i_{DQ}(t), i_{DQ\text{-}I}(t)) = (0, 0)$; (2) $V(i_{DQ}(t), i_{DQ\text{-}I}(t)) \ge 0$ for all $(i_{DQ}(t), i_{DQ\text{-}I}(t))$, and $V(i_{DQ}(t), i_{DQ\text{-}I}(t)) \to \infty$ if either $\|i_{DQ}\|_2 \to \infty$ or $\|i_{DQ\text{-}I}\|_2 \to \infty$ for all symmetric $k_{DQ\text{-}I} > 0$; and (3) $\dot V(i_{DQ}(t), i_{DQ\text{-}I}(t)) \le 0$ if $\lambda_{\min}(k_{DQ\text{-}P}) > -1$. Therefore, from the Barbashin-Krasovskii-LaSalle invariant set theorem, the system is globally asymptotically stable (see Theorem 3.5 in Ref. [28]).

Note that we have weakened the condition on $k_{DQ\text{-}P}$ to $I + k_{DQ\text{-}P} > 0$. This gives a measure of robustness to our control law, although in practice it is sensible to select $k_{DQ\text{-}P} > 0$.
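As a quick numerical illustration of Theorem 1, the sketch below simulates a decoupled two-channel DQ current loop under a PI law of the form $i_{DQ\text{-}c} = k_{DQ\text{-}P}\, e_{DQ} + i_{DQ\text{-}I}$, $\dot i_{DQ\text{-}I} = k_{DQ\text{-}I}\, e_{DQ}$ (a form consistent with the derivative in Eq. 20.47), with $v_{DQ} = R\, i_{DQ\text{-}c}$ and the plant reduced to $L\, \dot i_{DQ} = v_{DQ} - R\, i_{DQ}$; cross-coupling and back-EMF terms are omitted, and all numerical values are illustrative rather than taken from the chapter:

```python
import numpy as np

L_ind, R = 0.005, 0.8        # illustrative inductance [H] and resistance [ohm]
kP, kI = 2.0, 50.0           # symmetric gains (scalar multiples of the identity)
dt, steps = 1e-4, 20000      # 2 s of simulated time, forward Euler

i = np.array([1.0, -0.5])    # initial DQ current; regulation case i_DQ-r = 0
iI = np.zeros(2)             # integrator state i_DQ-I
for _ in range(steps):
    e = -i                   # e_DQ = i_DQ-r - i_DQ with i_DQ-r = 0
    ic = kP * e + iI         # PI control output
    iI += dt * kI * e        # integrator dynamics
    i += dt * (R * ic - R * i) / L_ind   # reduced plant L di/dt = v - R i, v = R ic
print(np.linalg.norm(i))     # currents decay toward zero, as Theorem 1 predicts
```

With these (stable) gain choices the closed-loop eigenvalues are well inside the left half-plane, so the current error is driven to machine-precision levels within the simulated horizon.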

20.5.3 Quadrature current control of brushless linear DC motors

The BLDC motors are driven in a quadrature-current closed-loop manner (rotor position feedback) with a much smaller number of pole pairs, $n_p \le 8$, and $m = 3$ phases. It is assumed that the electrical angle $\theta_e$ is either measured with a low-resolution Hall-effect sensor with appropriate angle interpolation [29] or estimated using an appropriate sensorless observer model [30,31]. Space vector pulse width modulation (SVPWM) of a full-bridge three-phase power converter is utilized to drive our three-phase motors, as detailed below [23,32]. The DQ control law (Eq. 20.40) is then realized by:

1. First driving the motor in stepper control mode, in which $\theta_e(t) = n_p \int_0^t \omega_d\, d\tau$ and $i_{DQ\text{-}r} = \begin{bmatrix} i_{d\text{-}startup} & 0 \end{bmatrix}^T$, where $\omega_d$ is the desired motor angular velocity; then
2. Transitioning to closed-loop DQ current control after $\theta_e(t) \ge 2\pi$, in which $i_{DQ\text{-}r} = \begin{bmatrix} 0 & i_{q\text{-}r} \end{bmatrix}^T$ and the actual measured (or estimated) electrical angle $\theta_e$ is used; the reference Q current, $i_{q\text{-}r}$, is typically determined by an outer-loop motor position-velocity control law [22].
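The two-step start-up sequence above can be sketched as a simple mode switch (function and parameter names are ours, not from the chapter):

```python
import numpy as np

def dq_reference(t, theta_meas, omega_d, n_p, i_d_startup, i_q_r):
    """Return (electrical angle, DQ current reference) for the current mode.

    Step 1 (open loop): angle integrated from the desired velocity, D-axis
    start-up current. Step 2 (closed loop): measured/estimated angle,
    Q-axis reference from an outer position-velocity loop.
    """
    theta_open = n_p * omega_d * t            # n_p * integral of omega_d (constant omega_d)
    if theta_open < 2 * np.pi:                # step 1: stepper (open-loop) mode
        return theta_open, np.array([i_d_startup, 0.0])
    return theta_meas, np.array([0.0, i_q_r])  # step 2: closed-loop DQ mode
```

The switch fires after one full electrical revolution, matching the $\theta_e(t) \ge 2\pi$ condition in step 2.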

Robotic-Assisted Percutaneous Coronary Intervention Chapter | 20

20.5.4 Direct current control of stepper motors

Stepper motors have a large number of pole pairs ($n_p \gg 8$), are bipolar with $m = 2$, and have a nonnegligible detent torque, in which $K_{d4}$ is nonzero. It is not necessary to measure or estimate the electrical angle when driving a PMSM in stepper mode. Therefore, we shall use our three-phase drive for our bipolar stepper motors by simply using the DQ control law (Eq. 20.40), in which:

1. $\theta_e(t) = n_p \int_0^t \omega_d\, d\tau$ and $i_{DQ\text{-}r} = \begin{bmatrix} i_{d\text{-}drive} & 0 \end{bmatrix}^T$, where $\omega_d$ is the desired motor angular velocity and $i_{d\text{-}drive}$ is the desired control current necessary to avoid stalling of the stepper motor.
2. We utilize the two-phase motor SVPWM with a three-phase drive scheme, as detailed in Ref. [33], in which the phase voltages $v_{\{\alpha,\beta\}}$ and the zero-sequence component voltage $v_o$ are wired to the three-phase outputs $v_{\{a,b,c\}}$ as follows:

$$\begin{bmatrix} v_\alpha \\ v_\beta \\ v_o \end{bmatrix} = \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} v_a \\ v_b \\ v_c \end{bmatrix} \qquad (20.52)$$
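Eq. (20.52) is a fixed linear map, so the wiring can be checked numerically (the voltage values below are illustrative):

```python
import numpy as np

# Eq. (20.52): recover the two phase voltages and the zero-sequence
# component v_o from the three inverter leg voltages v_a, v_b, v_c.
T = np.array([[1, 0, -1],
              [0, 1, -1],
              [0, 0,  1]], dtype=float)

v_abc = np.array([3.0, 2.0, 1.0])     # illustrative leg voltages
v_alpha, v_beta, v_o = T @ v_abc      # v_alpha = v_a - v_c, v_beta = v_b - v_c, v_o = v_c
```

Reading off the rows makes the wiring explicit: both two-phase voltages are referenced to leg $c$, which carries the zero-sequence component.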

Although $v_o$ can be arbitrarily selected, it is practically chosen as $v_o = v_c \in \{0, V_s\}$, such that the two phases are wired together and share the same common or supply voltage ($V_s$) when driven by the three-phase drive. The scheme is quite convenient, as it allows the same architecture to be used to evaluate both BLDC and stepper motors. The main drawback, as opposed to using four H-bridges [34], is that only 70% of the power supply voltage can be fully utilized.

20.6 Future of robotic vascular interventional therapy

The CorPath robotic system is primarily focused on the large PCI market. Next-generation systems consider expanding the capabilities of the robotic system to support a wider breadth of the workflows in PCI, including supporting adjunct technologies such as vessel imaging and preparation. Peripheral and neurovascular interventions are potential therapeutic areas for CorPath, as these types of procedures can be robotically enhanced using similar technologies. The global shortage of PCI-capable operators is significant and continues to grow. Remote-control robotics may enable physicians to conduct interventional procedures from virtually any location, opening opportunities for more patients globally to receive the benefits of this lifesaving procedure. The advancement and availability of high-quality network connectivity make it feasible to provide remote command and control of the robotic system. Challenges include the level of user immersion required to perform the procedure efficaciously, along with the regulatory pathways for the technology and the patient care model. Robotic teleproctoring is envisioned as the logical first step toward full remote control, and it can provide patients in less-populated areas with access to top-level and specialized physicians. The standard of care can also be improved through procedure automation. The navigation of GWs, as well as the positioning of balloon and stent catheters, may be improved by various levels of automation, from specific automated techniques that aid the operator, to partial automation of challenging tasks, to a high level of automation in which the robotic system performs a large part of the procedure. Using computer vision and modeling of the coronary arteries, the amount of radiation, the volume of contrast media used, and the procedure time could all be materially reduced.

Currently, a large part of treatment planning occurs during the procedure, as information such as angiography and physiology measurements is not available to the interventionalist prior to coronary access. Technologies for preprocedural planning and decision-making are becoming available, such as CT-based computational fluid dynamics assessment of coronary disease (HeartFlow, California). Algorithms to guide periprocedural lesion assessment, treatment planning, and device selection are, however, lacking. By leveraging data, analytics, and deep learning, it is possible to reduce variability of care, as well as the time and cost of procedures. This prescriptive analytics approach, combined with procedural automation and remote delivery of care, may provide an excellent, albeit challenging, opportunity to significantly improve patient outcomes.

References

[1] Jennings S, et al. Trends in percutaneous coronary intervention and angiography in Ireland, 2004-2011: implications for Ireland and Europe. Int J Cardiol Heart Vessel 2014;4:35-9.
[2] Bennett J, Dubois C. Percutaneous coronary intervention, a historical perspective looking to the future. J Thorac Dis 2013;5(3):367.
[3] Andreassi MG, et al. Occupational health risks in cardiac catheterization laboratory workers. Circ Cardiovasc Interv 2016;9(4):e003273.

[4] Maor E, et al. Current and future use of robotic devices to perform percutaneous coronary interventions: a review. J Am Heart Assoc 2017;6(7):e006239.
[5] Reeves RR, et al. Invasive cardiologists are exposed to greater left sided cranial radiation: the BRAIN study (Brain Radiation Exposure and Attenuation During Invasive Cardiology Procedures). JACC Cardiovasc Interv 2015;8(9):1197-206.
[6] Klein LW, et al. Occupational health hazards of interventional cardiologists in the current decade: results of the 2014 SCAI membership survey. Catheter Cardiovasc Interv 2015;86(5):913-24.
[7] Campbell PT, et al. The impact of precise robotic lesion length measurement on stent length selection: ramifications for stent savings. Cardiovasc Revasc Med 2015;16(6):348-50.
[8] Campbell PT, Mahmud E, Marshall JJ. Interoperator and intraoperator (in)accuracy of stent selection based on visual estimation. Catheter Cardiovasc Interv 2015;86(7):1177-83.
[9] Costa MA, et al. Impact of stent deployment procedural factors on long-term effectiveness and safety of sirolimus-eluting stents (final results of the multicenter prospective STLLR trial). Am J Cardiol 2008;101(12):1704-11.
[10] Beyar D. Remote control catheterization. Google patents; 2004.
[11] Weisz G, et al. Safety and feasibility of robotic percutaneous coronary intervention: PRECISE (Percutaneous Robotically-Enhanced Coronary Intervention) study. J Am Coll Cardiol 2013;61(15):1596-600.
[12] Smilowitz NR, et al. Robotic-enhanced PCI compared to the traditional manual approach. J Invasive Cardiol 2014;26(7):318-21.
[13] Bezerra HG, et al. Longitudinal geographic miss (LGM) in robotic assisted versus manual percutaneous coronary interventions. J Interv Cardiol 2015;28(5):449-55.
[14] Weisz G, et al. The association between experience and proficiency with robotic-enhanced coronary intervention: insights from the PRECISE multi-center study. Acute Card Care 2014;16(2):37-40.
[15] Caputo R, et al. Safety and feasibility of robotic PCI utilizing radial arterial access. J Am Coll Cardiol 2015;65(10):A203.
[16] Mahmud E, Dominguez A, Bahadorani J. First-in-human robotic percutaneous coronary intervention for unprotected left main stenosis. Catheter Cardiovasc Interv 2016;88(4):565-70.
[17] Mahmud E, et al. Demonstration of the safety and feasibility of robotically assisted percutaneous coronary intervention in complex coronary lesions: results of the CORA-PCI study (Complex Robotically Assisted Percutaneous Coronary Intervention). JACC Cardiovasc Interv 2017;10(13):1320-7.
[18] Madder RD, et al. TCT-435 feasibility and success of radial-access robotic percutaneous coronary intervention: insights from the PRECISION Registry. J Am Coll Cardiol 2015;15(66):B177-8.
[19] Craig JJ. Introduction to robotics: mechanics and control, vol. 3. Upper Saddle River, NJ: Pearson/Prentice Hall; 2005.
[20] Spong MW, Vidyasagar M. Robot dynamics and control. John Wiley & Sons; 2008.
[21] Corke PI. A robotics toolbox for MATLAB. IEEE Robot Autom Mag 1996;3(1):24-32.
[22] Petrovic V, Ortega R, Stankovic AM. Interconnection and damping assignment approach to control of PM synchronous motors. IEEE Trans Control Syst Technol 2001;9(6):811-20.
[23] Kung YS, Huang PG. High performance position controller for PMSM drives based on TMS320F2812 DSP. In: Proceedings of the 2004 IEEE international conference on control applications; 2004.
[24] Tsui KW-H, Cheung NC, Yuen KC-W. Novel modeling and damping technique for hybrid stepper motor. IEEE Trans Ind Electron 2009;56(1):202-11.
[25] Kottenstette N, et al. On relationships among passivity, positive realness, and dissipativity in linear systems. Automatica 2014;50(4):1003-16.
[26] Hill DJ, Moylan PJ. Stability results for nonlinear feedback systems. Automatica 1977;13(4):377-82.
[27] Bernstein DS. Matrix mathematics: theory, facts, and formulas with application to linear systems theory, vol. 41. Princeton, NJ: Princeton University Press; 2005.
[28] Haddad WM, Chellaboina V. Nonlinear dynamical systems and control: a Lyapunov-based approach. Princeton University Press; 2011.
[29] Kim S-Y, et al. An improved rotor position estimation with vector-tracking observer in PMSM drives with low-resolution Hall-effect sensors. IEEE Trans Ind Electron 2011;58(9):4078-86.
[30] Ohara M, Noguchi T. Sensorless control of surface permanent-magnet motor based on model reference adaptive system. In: 2011 IEEE ninth international conference on power electronics and drive systems (PEDS). IEEE; 2011.
[31] Hamida MA, et al. An adaptive interconnected observer for sensorless control of PM synchronous motors with online parameter identification. IEEE Trans Ind Electron 2013;60(2):739-48.
[32] Hava AM, Kerkman RJ, Lipo TA. Simple analytical and graphical methods for carrier-based PWM-VSI drives. IEEE Trans Power Electron 1999;14(1):49-61.
[33] Yang SM, Lin FC, Chen MC. Control of a two-phase linear stepping motor with three-phase voltage source inverter. In: IEEE international electric machines and drives conference, 2003 (IEMDC'03). IEEE; 2003.
[34] Kenjo T, Sugawara A. Stepping motors and their microprocessor controls. Oxford: Clarendon Press; 1994.

21

Image-Guided Motion Compensation for Robotic-Assisted Beating Heart Surgery

George Moustris and Costas Tzafestas
National Technical University of Athens, Athens, Greece

ABSTRACT Motion compensation in coronary artery bypass graft surgery refers to the virtual stabilization of the beating heart, along with the mechanical synchronization of the robotic arms with the pulsating heart surface. The stabilized image of the heart is presented to the surgeon to operate on, while the heart motion is compensated by the robot, and the surgeon essentially operates on a virtual still heart. In this chapter, we present an introduction to the concept of motion compensation and a brief history of research efforts. We analyze a unifying framework which naturally binds together the image stabilization, mechanical synchronization, and shared control tasks. This framework serves as a baseline upon which more complicated assistive modes are built, for example, active and haptic assistance. These modes are discussed more thoroughly, and their efficacy is assessed via laboratory experimental trials in a simulation setup, which are presented in detail. Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00021-9 © 2020 Elsevier Inc. All rights reserved.

21.1 Introduction

Robotic-assisted coronary artery bypass graft (CABG) surgery is a relatively new surgical treatment for patients with coronary disease. The goal is to restore normal blood flow in coronary arteries that present with a blockage. As such, it is the next evolutionary step from traditional open-chest CABG, where the use of the robot enables totally endoscopic, minimally invasive access to the heart. During this operation the heart remains beating but is mechanically stabilized so that the surgeon can operate on it. This is performed using tools that exert pressure or suction on the heart tissue, known as tissue/cardiac stabilizers. Residual motion remains, however, along with complications due to the stress exerted on the heart, such as hemodynamic instability; reduced cardiac output, stroke volume, and arterial pressure [1,2]; and myocardial ischemia [3]. To overcome these drawbacks, the concept of motion compensation has been proposed [4]. Motion compensation refers to the virtual stabilization of the image of the beating heart, along with the mechanical synchronization of the robotic arms with the pulsating cardiac wall. The stabilized image of the heart is presented to the surgeon to operate on; the heart motion is compensated by the robot, and the surgeon essentially operates on a virtually still heart (Fig. 21.1). This technology obviates the need for cardiac stabilizers and may have positive effects on CABG surgery, such as shorter operating time, better hemodynamic stability, and fewer conversions to open-chest surgery. Image-guided motion compensation in robotic-assisted cardiac surgery consists of three subproblems: image stabilization, mechanical synchronization, and shared control. The first basic tasks are surface reconstruction [5] and motion estimation [6] of the region of interest (ROI), either from stereo imaging or depth cameras.
Using this estimation, the image is warped in an appropriate way, in order to algorithmically cancel the motion and the deformation of the ROI. In effect, this transformation virtually stabilizes the ROI, which is then presented to the surgeon. Concurrently, the robotic arms on the patient side manipulator (PSM) follow the actual motion of the ROI in order to move along with it in a synchronized way. In principle, both the position and orientation of the arms must be regulated to track the ROI movement. Since the arms are tele-operated by the surgeon, when he/she holds the master controls still, the slave arms on the ROI must move in sync with it and appear still in the stabilized image. Thus, this mechanical synchronization is a nontrivial task. The final problem is the actual translation of the surgeon’s movements on the master console, to the movement of the PSM. Since the robotic arms are mechanically synchronized with the ROI motion, they are controlled by the robot and the surgeon at the same time, that is, they share the control. This shared control binds the mechanical synchronization and image stabilization algorithms, by allowing the surgeon to operate on the still image and concurrently compensating the cardiac movement. However, this binding is complex since the physical space (i.e., the Cartesian space where the PSM moves) and the stabilized image space are nonlinearly related by a perspective plus a warping transformation. In the following we will present a theoretical unifying framework under which the mechanical synchronization, the image stabilization, and the shared control combine seamlessly. This framework allows the development of more advanced control techniques such as haptic and active assistance, which augment the surgeon’s performance. These applications will be also described, along with experimental results from surgeons.

21.2 Background

In general, existing research falls into two categories: mechanical synchronization methods and image stabilization algorithms. The first attempt to develop a motion compensation scheme for beating heart surgery was presented by Nakamura et al. [4], who introduced the notion of heartbeat synchronization. The authors employed a 4 degrees-offreedom (DoF) robot along with a high-speed camera at 995 fps in order to track a laser point projected on a vibrating piece of paper. The camera image was moved in the image buffer in order to keep the reference point at the same pixel position, whilst the robot moved in sync. This visual synchronization is a simpler form of image stabilization, employing only translations. Similar experiments have been performed using the da Vinci Research Kit with bimanual control [7]. A heart mockup consisting of a piece of dense foam coated in a colored latex coating was moved purely translational using a parallel platform (Novint Falcon). Four markers were mounted onto it and tracked by a 3D position sensor. A camera was also mounted onto a separate manipulator, moving in sync and thus providing visual synchronization. Tests in suturing by surgeons provided encouraging results about the usefulness of the technique. However, the nonrealistic heart model which presents no deformity, along with the simplified motion of the mockup which has small movement variance in the x,y axes, allow for further enhancement and improvement.

21.3 Image stabilization

Consider the setup shown in Fig. 21.1, showing the PSM and master console. The slave robot manipulates the operating field (e.g., the cardiac wall) while an endoscope provides a view of the action. The unaltered image from this camera is called the physical image. In motion compensation, the physical image is stabilized and projected to the master console. The stabilized image is called the canonical image. The surgeon perceives the surgical field in the canonical image as still, and produces the human input which is fed to the shared controller manipulating the slave robot. We define four spaces describing objects in their respective domains, namely: the physical workspace $W_p$, where the PSM operates; the physical image space $I_p$; the canonical image space $I_c$; and the canonical workspace $W_c$, denoting the Cartesian space of the master controls. Using the pinhole camera model, we can relate the physical image space $I_p$ to the physical workspace $W_p$ through the standard projective transformation and the camera matrix, that is, $P: W_p \to I_p$, $P \in \mathrm{GPL}(W_p)$. Following this, the canonical and physical images are related by a warping transformation $\Psi$ which cancels the apparent motion of the operating field, namely $\Psi: I_p \to I_c$. This warping transformation is responsible for stabilizing the image of the beating heart, presenting it as "virtually still" to the surgeon. From the commutation diagram in Fig. 21.2, we see that if we can find a bijective map $\Phi: M_c \to M_p$, which maps a static reference into the surgical field, then the image warping transformation $\Psi$ can be computed as

$$\Psi = P \circ \Phi^{-1} \circ P^{-1} \qquad (21.1)$$
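The composition in Eq. (21.1) can be sketched numerically under the simplifying assumption (ours, not the chapter's) that the operating field is planar, so the camera projection restricted to that plane reduces to a 3×3 homography $H$ acting on homogeneous pixel coordinates:

```python
import numpy as np

def stabilize_pixel(H, phi_inv, p_img):
    """Apply the warp Psi = H o Phi^{-1} o H^{-1} to a single pixel.

    H: 3x3 homography standing in for the camera projection restricted to
    the (assumed planar) operating field; phi_inv: the map Phi^{-1} acting
    on homogeneous plane coordinates.
    """
    q = np.linalg.inv(H) @ np.array([p_img[0], p_img[1], 1.0])  # image -> plane
    q = phi_inv(q / q[2])                # cancel the apparent field motion
    q = H @ q                            # plane -> stabilized image
    return q[:2] / q[2]
```

In the full system $\Phi^{-1}$ is the strip-wise affine map of Section 21.4; here it is left as a function argument so that any stabilizing map can be plugged in.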

FIGURE 21.1 Overview of the motion compensation concept. The surgeon views a stabilized image of the pulsating heart, while concurrently the robot is synchronized to the heartbeat. The surgeon and the system share control of the patient side manipulators.


In Ref. [8] the authors performed in vivo tests on a porcine heart, placing four markers on the heart surface in order to capture the motion with a 500 fps camera. Model predictive control (MPC) algorithms were also used, employing a heartbeat model for reference. MPC is further developed in Refs. [9,10]. Motion prediction is also investigated in Ref. [11], using a least square approach and an artificial neural network. An interesting feature in this work is the ability to predict the motion of visually occluded parts of the heart and the fusion of biological signals (electrocardiographic (ECG) and respiration) in the estimation algorithms. The algorithms, however, were not tested on a real robot. Biological signals in MPC were also utilized in Ref. [12]. A sonomicrometry system was used to collect motion data from the heart, bypassing the problem of visual occlusion of the surgical field by the robotic manipulators or other surgical tools. Three-dimensional ultrasound-guided motion compensation for beating heart mitral repair is presented in Ref. [13]. A 1-DoF linear guide was controlled for an anchoring task using feedback from a 3DUS system. Due to the latency of the process, predictive filters were also employed. A US-guided cardiac catheter utilizing motion compensation has also been presented in Ref. [14]. A different approach is presented in Ref. [15] using feedback from a force sensor mounted on a robot. Assuming that the motion is periodic, the authors used an iterative learning controller with a low-pass filter to cancel the motion. In vitro results showed the potential of this approach; however the assumption of periodicity is an oversimplification of the actual motion of the heart. A more robust approach is presented in Ref. [16] using ECG and respiratory signals in order to model and estimate the full 3D motion of the heart surface. 
Work on the imaging part of motion compensation involves tracking of features on the heart [17,18] and image stabilization. For the latter, interesting work is presented in Ref. [19], where the image was rectified using the CUDA framework. A different method is presented in Ref. [20], where the heart surface was reconstructed in 3D and produced a virtual model. A virtual camera was then set to track a point on the virtual surface, effectively compensating the specific point.


FIGURE 21.2 Definition of the four fundamental spaces for motion compensation.

FIGURE 21.3 Illustration of the effect of the strip-wise affine map.

To find $\Phi$, we must formally define what its actual function will be. To that end, for motion compensation the system must track the motion of a reference manifold $M_p(u_i, t) \subset W_p$ in the surgical field, parameterized by coordinates $u_i \in \mathbb{R}$. For example, $M_p$ could be a point on the heart surface (0-manifold), a line (1-manifold), or a patch (2-manifold). In this work we present the tracking of 1-manifolds on the heart surface, that is, a line. This is implemented using the strip-wise affine map [21], which is ultimately identified with the map $\Phi$.

21.4 Strip-wise affine map

The strip-wise affine map (SWAM) is a piecewise linear map between the physical and canonical workspaces. It decomposes the $x$-$y$ plane into strips and then applies an affine map between each one. It takes a polygonal line from $W_p$ and maps it to the $x$-axis in the canonical space $W_c$. Now, let $\{w_i\}$, $i = 1, \dots, n$, $w_i = ({}^p x_i, {}^p y_i, 0)$, be the vertices of the polygonal line, lying on the physical plane $z = 0$ (Fig. 21.3). Each vertex is projected to a point $a_i$ on the real axis in the canonical world according to its normalized length,

$$a_i = \sum_{k=1}^{i} S_k / S, \qquad i = 1, \dots, n \qquad (21.2)$$

where $S_k = |w_k - w_{k-1}|$, $S = \sum_{k=1}^{n} S_k$, and $S_1 = 0$. It holds that $a_1 = 0$ and $a_n = 1$. Furthermore, let $q_p = (x_p, y_p, z_p)$ be a point in $W_p$ and $q_c = (x_c, y_c, z_c)$ a point in $W_c$. Then the SWAM sends $q_c$ to $q_p$ using

$$q_p = \begin{bmatrix} x_p \\ y_p \\ z_p \end{bmatrix} = \begin{bmatrix} y_c S \cos\theta_s + f_x(x_c) \\ y_c S \sin\theta_s + f_y(x_c) \\ z_c \end{bmatrix} = \Phi(q_c) \qquad (21.3)$$

where $\theta_s$ is some angle, called the shifting angle. The functions $f_x$ and $f_y$ are given by

$$f_x(x_c) = \sum_{k=0}^{n} \left( {}^p x_k + S\,(x_c - a_k)\cos\theta_k \right)\psi_k, \qquad f_y(x_c) = \sum_{k=0}^{n} \left( {}^p y_k + S\,(x_c - a_k)\sin\theta_k \right)\psi_k \qquad (21.4)$$

The angles $\theta_k$ are the angles of each edge $[w_k\, w_{k+1}]$ with respect to the physical $x$-axis, while $\psi_k$ is an index function on $W_c$ such that

$$\psi_k(x_c) = \begin{cases} 1, & x_c \in [a_k, a_{k+1}) \\ 0, & \text{elsewhere} \end{cases} \qquad (21.5)$$

Observe that Eq. (21.4) is a piecewise linear parameterization of the reference manifold $M_p$, in the sense that $M_p = (f_x, f_y, 0)$. The reference line is a 1-manifold, and thus $x_c$ is the coordinate parameterizing it. When the robot moves on a vertical plane parallel to the $x$-axis in the canonical space, its image in the physical space moves parallel to the reference manifold, by a standard offset $g = (y_c S \cos\theta_s,\; y_c S \sin\theta_s,\; z_c)$ from the point $(f_x(x_c), f_y(x_c), 0)$, which lies on $M_p$. It follows that we can define the goal point as

$$q_G(x_c) = \begin{bmatrix} f_x(x_c) & f_y(x_c) & 0 \end{bmatrix}^T \qquad (21.6)$$

and Eq. (21.3) can take the compact form

$$q_p = g(y_c, z_c) + q_G(x_c) \qquad (21.7)$$

The inverse map $\Phi^{-1}$ is given componentwise by

$$x_c = \frac{S}{J}\left( \sin\theta_s\, x_p - \cos\theta_s\, y_p \right) - \frac{C}{J} + \sum_{k=0}^{n} a_k \psi_k, \qquad y_c = \frac{S}{J} \sum_{k=0}^{n} \left( -\sin\theta_k\, x_p + \cos\theta_k\, y_p \right)\psi_k + \frac{D}{J}, \qquad z_c = z_p \qquad (21.8)$$

where

$$C = S \sum_{k=0}^{n} \left( {}^p x_k \sin\theta_s - {}^p y_k \cos\theta_s \right)\psi_k, \qquad D = S \sum_{k=0}^{n} \left( {}^p x_k \sin\theta_k - {}^p y_k \cos\theta_k \right)\psi_k, \qquad J = S^2 \sum_{k=0}^{n} \sin(\theta_s - \theta_k)\,\psi_k \qquad (21.9)$$

Using the SWAM, the control of the robot is transferred to the canonical world where the objective for the surgeon is to track the x-axis.
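The forward and inverse SWAM of Eqs. (21.2)-(21.4) and (21.8)-(21.9) can be prototyped in a few lines. This sketch handles one query point at a time, resolves the strip index by a simple search, and sidesteps the ambiguity that can arise when several strips could claim the same physical point (function names are ours):

```python
import numpy as np

def swam_setup(vertices):
    """Precompute SWAM data for a polygonal reference line (Eq. 21.2).

    vertices: list of (x, y) points of the line on the physical plane z = 0.
    """
    w = np.asarray(vertices, dtype=float)
    seg = np.diff(w, axis=0)                    # edge vectors w_{k+1} - w_k
    lengths = np.linalg.norm(seg, axis=1)       # segment lengths S_k
    S = lengths.sum()                           # total length S
    a = np.concatenate(([0.0], np.cumsum(lengths) / S))  # a_1 = 0, ..., a_n = 1
    theta = np.arctan2(seg[:, 1], seg[:, 0])    # edge angles theta_k
    return w, a, theta, S

def swam_forward(qc, w, a, theta, S, theta_s):
    """Phi: canonical (xc, yc, zc) -> physical (xp, yp, zp), Eqs. (21.3)-(21.4)."""
    xc, yc, zc = qc
    k = min(int(np.searchsorted(a, xc, side="right")) - 1, len(theta) - 1)
    fx = w[k, 0] + S * (xc - a[k]) * np.cos(theta[k])
    fy = w[k, 1] + S * (xc - a[k]) * np.sin(theta[k])
    return np.array([yc * S * np.cos(theta_s) + fx,
                     yc * S * np.sin(theta_s) + fy,
                     zc])

def swam_inverse(qp, w, a, theta, S, theta_s):
    """Phi^{-1}: physical -> canonical, solving the affine system strip by strip."""
    xp, yp, zp = qp
    for k in range(len(theta)):
        J = S ** 2 * np.sin(theta_s - theta[k])   # strip determinant (cf. Eq. 21.9)
        bx, by = xp - w[k, 0], yp - w[k, 1]
        xc = a[k] + (S / J) * (np.sin(theta_s) * bx - np.cos(theta_s) * by)
        yc = (S / J) * (-np.sin(theta[k]) * bx + np.cos(theta[k]) * by)
        if a[k] - 1e-9 <= xc <= a[k + 1] + 1e-9:  # accept the solution in this strip
            return np.array([xc, yc, zp])
    raise ValueError("point is outside the mapped strips")
```

A round trip through `swam_forward` and `swam_inverse` recovers the original canonical point, which is a convenient sanity check when implementing the map.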

21.5 Shared control

Shared control refers to the simultaneous actuation of the robotic manipulators by the human surgeon and the robotic system, that is, they share the control. This sharing can be something as simple as controlling different degrees of freedom, or something more intricate such as the nonlinear combination of the computer and surgeon commands. Within this framework, a first level of shared control is performed by the map $\Phi$. When the user inputs commands in the canonical space, $\Phi$ maps them into the physical space so as to move in sync with the surgical field motion. However, on top of this simple motion compensation, more elaborate assistive schemes can be built. To consider this, let ${}^p q_m$ be the position of the master console in the physical space, also here called the master workspace $W_m$. This is mapped to ${}^c q_m$ in the canonical space via, for example, scaling, tremor reduction, etc. The position of the PSM in the physical space is ${}^p q_s$. The PSM tracks a reference position in $W_p$ denoted ${}^p q_r$, which is mapped into the canonical space as ${}^c q_r$. Depending on the relation of these points, we discern the following cases.



21.5.1 Simple motion compensation

In the simple motion compensation case [22], the canonical master point ${}^c q_m$ is identical to the canonical reference ${}^c q_r$, that is, ${}^c q_m = {}^c q_r$ (Fig. 21.4). This resembles a direct teleoperation scheme, albeit intertwined with motion compensation. On the PSM side, the slave manipulator tracks the physical reference ${}^p q_r = \Phi({}^c q_r)$ using a controller, for example, a proportional-integral-derivative (PID) controller. Thus the slave position ${}^p q_s$ follows ${}^p q_r$.
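A minimal sketch of such a slave-side tracking controller; the PID gains and the pure-integrator plant below are illustrative only, not the authors' tuning:

```python
class PID:
    """Minimal PID tracking controller (illustrative gains, single axis)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.acc, self.prev = 0.0, 0.0
    def step(self, ref, meas):
        e = ref - meas
        self.acc += e * self.dt              # integral of the error
        d = (e - self.prev) / self.dt        # error derivative
        self.prev = e
        return self.kp * e + self.ki * self.acc + self.kd * d

# Usage: drive a pure-integrator axis (x_dot = u) toward a fixed reference.
pid = PID(kp=2.0, ki=0.5, kd=0.0, dt=0.01)
x = 0.0
for _ in range(5000):
    x += pid.step(1.0, x) * 0.01             # x converges toward the reference 1.0
```

In the actual scheme the reference fed to such a loop is the time-varying ${}^p q_r = \Phi({}^c q_r)$, so the slave stays synchronized with the moving field.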

21.5.2 Active assistance

In the active assistance case [23], the computer actively controls the direction of motion of the PSM, while the surgeon operates some other DoF. In our particular case, for the tracking of the reference line in the physical space, the goal is to track the stabilized straight line in the canonical space. To this end, the canonical master point ${}^c q_m$ and the canonical reference ${}^c q_r$ are dissociated, that is, ${}^c q_m \ne {}^c q_r$ (Fig. 21.5). Given the canonical master point ${}^c q_m$, let ${}^c q_0$ be its projection on the $x_c$ axis. The computer controls the canonical reference ${}^c q_r$ only on the $y_c$ axis, moving it either toward ${}^c q_m$ or toward ${}^c q_0$. When the canonical master enters the attraction zone, of width $\epsilon$ about the $x_c$ axis, the reference is attracted toward the projection point ${}^c q_0$. When the master leaves the zone, the reference moves toward the master, resulting in pure teleoperation. The effect of this behavior is that it "snaps" the slave manipulator to the reference line when it is very close (inside the attraction zone). When this happens, the surgeon controls the lateral movement of the reference while it remains snapped onto the line. When the canonical master exits the assistive zone, the scheme reverts to normal motion. As is apparent, in this assistive mode the computer actively controls the slave manipulator. In that sense, the slave position does not necessarily follow the master, especially when transitioning between the two attracting points.
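The attraction-zone behavior above can be sketched as a per-cycle update of the canonical reference in the $x_c$-$y_c$ plane (the gains, zone width, and names are ours, not from the chapter):

```python
import numpy as np

def update_reference(q_r, q_m, eps=0.01, gain=20.0, dt=0.001):
    """One control-cycle update of the canonical reference (x_c, y_c).

    Inside the attraction zone (|y_m| < eps) the reference is pulled toward
    the master's projection on the x_c axis (y = 0); outside, it follows the
    master itself, i.e., pure teleoperation.
    """
    target_y = 0.0 if abs(q_m[1]) < eps else q_m[1]  # snap target vs. master
    y_r = q_r[1] + gain * (target_y - q_r[1]) * dt   # first-order attraction on y_c
    return np.array([q_m[0], y_r])                   # lateral x_c follows the master
```

The first-order attraction makes the transition between the two attracting points smooth rather than an instantaneous jump.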

21.5.3 Haptic assistance

In haptic assistance the computer exerts assistive forces on the master console in order to guide the surgeon toward the proper direction. This is juxtaposed with the active assistance case, where the corrective action is applied on the slave side.

FIGURE 21.4 Illustration of the simple motion compensation scheme. The input from the master console is directly translated into the physical space in the operating field, intertwined with motion compensation.

FIGURE 21.5 Illustration of the active assistance scheme. The canonical reference point ${}^c q_r$ is attracted toward the canonical master point ${}^c q_m$ or its projection ${}^c q_0$ on the $x_c$ axis, depending on whether the master is inside the assistance zone.


FIGURE 21.6 Illustration of the haptic assistance scheme. The canonical reference point ${}^c q_r$ is identified with the canonical master point ${}^c q_m$, which is attracted by its projection ${}^c q_0$ on the $x_c$ axis, depending on whether it is inside the assistance zone.

The canonical master point ${}^c q_m$ is again identified with the canonical reference ${}^c q_r$, that is, ${}^c q_m = {}^c q_r$ (Fig. 21.6). If ${}^c q_0$ is the projected point on the $x_c$ axis, then a force is exerted on the master based on its distance to the projected point. This defines a guidance virtual fixture (VF)/active constraint [24] around the $x_c$ axis. The fixture is defined in a cylinder with external radius $\epsilon$, placed along the $x_c$ axis. Two more internal cylinders are also defined: one with radius $\beta < \epsilon$ and a second with radius $\delta < \beta$ (Fig. 21.7, left). These regions define a force profile that ramps up linearly from $\epsilon$ to $\beta$, exerts a constant force between $\beta$ and $\delta$, and falls back linearly to zero inside $\delta$ (Fig. 21.7, right). The force vector lies on the $z_c$-$y_c$ plane, pointing toward the $x_c$ axis. The guidance fixture has been contained within the cylinder in order to prevent an unwanted drag effect over the entire surgical field, which would counter the surgeon's movements in other areas. Looking closer, we see that in the inner cylinder ($|r| < \delta$) the force is linearly reduced to zero. This has been implemented so as to allow the surgeon to perform the surgical task while on the reference path. Otherwise, for example if a force were still present at $r = 0$, the normal positional errors of the surgeon, due to tremor or the inherent limited accuracy of humans, would cause switching forces to be applied as the tip "wiggles" about the axis. This would be undesirable, since the fixture would be too stiff and would prevent the surgeon from moving in the force direction. Thus a spring-like inner profile was inserted, in order to present haptic cues to the user without confining him/her to the inner tube.
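The force profile described above is piecewise linear in the distance $r$ from the $x_c$ axis and can be sketched directly (the radii and maximum force below are hypothetical values, not the authors'):

```python
def fixture_force(r, eps=10.0, beta=6.0, delta=2.0, f_max=1.5):
    """Magnitude of the guidance-fixture force at distance r from the x_c axis.

    Requires delta < beta < eps; units are arbitrary for illustration.
    """
    if r >= eps:
        return 0.0                               # outside the fixture: no drag
    if r >= beta:
        return f_max * (eps - r) / (eps - beta)  # linear ramp-in from eps to beta
    if r >= delta:
        return f_max                             # constant guidance force
    return f_max * r / delta                     # inner spring: fades to zero at r = 0
```

Note that the profile is continuous at all three radii, so the surgeon never feels a force discontinuity when crossing a boundary.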

21.6 Experimental setup

21.6.1 Robotic system description

The robotic teleoperation master–slave system consists of the PHANToM Omni (now called the Touch haptic device, from 3D Systems) on the master side, and the PHANToM Desktop (now called the Touch X, from 3D Systems) on the slave side. These two robotic devices have an identical mechanical structure (six-DoF positional sensing, anthropomorphic serial-link manipulators), with the first three joints actuated by DC motors. The tool center point (TCP) is located at the intersection of the last three joints, which form a spherical wrist, that is, a gimbal. Since we are only compensating


FIGURE 21.7 (Left) Depiction of the 3D virtual fixture around the canonical axis. The fixture is of cylindrical shape, with two more internal regions regulating the force. (Right) The force profile in relation to the distance to the axis.


Handbook of Robotic and Image-Guided Surgery

FIGURE 21.8 (Left) Depiction of the simulated master robot, operated by the surgeon in the canonical space. (Right) Depiction of the simulated slave robot, operating in the physical space. Notice the needle attached to the gimbal, simulating a lancet. The checkerboards are used for visual calibration and registration.

FIGURE 21.9 Overview of the experimental setup.

for the translational movements of the surgical field, utilizing only the first three joints, the gimbal was rigidly locked to the second link on each robot (Fig. 21.8). The master robot is a haptic display device, able to render forces at the TCP and affect the surgeon's movements according to the assistive modes. The kinematic equations of the robots were derived using their known Denavit–Hartenberg parameters [22]. The forward kinematics produces the Cartesian position of the TCP with respect to the base frame.

In order to simulate a lancet on the slave robot, a metallic needle was attached to the gimbal stylus, matching a similar one provided by default on the master side. The kinematics was extended to include the transformation from the base frame to the tip, using a calibration algorithm that employs a checkerboard as a reference. The checkerboard's vertices were sampled with each robot's tip, and the calibration was reduced to a quadratic minimization problem. The checkerboard also defined the physical world frame W_p. To express the tip's coordinates in W_p, the robot base frame was registered to the world frame using the closed-form solution of [25]. A similar procedure was used for the master robot to register its base frame to a checkerboard defining the canonical world frame W_c.

The surgical field is a video of a beating heart, projected onto a semitransparent screen lying in front of the slave robot; underneath, a projector displays the reference line on the screen. The surgeon views this field through a camera mounted on a pole, aimed at the projection screen. The camera was calibrated and registered to W_p using the Camera Calibration Toolbox in MATLAB. A general overview of the system is shown in Fig. 21.9. The master and slave robots are connected to two different computers (the master and slave controllers, respectively), while the camera feeds a video stream to a third computer (the user console).
The computers communicate over a Gigabit Ethernet connection using the User Datagram Protocol (UDP). This configuration was chosen to reduce computational overhead and latency, since the implemented algorithms need to run in real time.
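The base-frame registration using [25] admits a short implementation: Horn's closed-form quaternion method recovers the rotation and translation between two corresponding point sets, such as checkerboard vertices sampled with the robot tip and their known world coordinates. A minimal NumPy sketch (illustrative, not the authors' code):

```python
import numpy as np

def register_frames(P, Q):
    """Closed-form rigid registration (Horn's quaternion method):
    given corresponding points P (frame A) and Q (frame B), both (N, 3),
    return R, t such that Q ~= P @ R.T + t."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    S = (P - cp).T @ (Q - cq)                # 3x3 cross-covariance matrix
    # Symmetric 4x4 matrix whose largest eigenvector is the unit quaternion
    N = np.array([
        [S[0,0]+S[1,1]+S[2,2], S[1,2]-S[2,1],        S[2,0]-S[0,2],        S[0,1]-S[1,0]],
        [S[1,2]-S[2,1],        S[0,0]-S[1,1]-S[2,2], S[0,1]+S[1,0],        S[2,0]+S[0,2]],
        [S[2,0]-S[0,2],        S[0,1]+S[1,0],       -S[0,0]+S[1,1]-S[2,2], S[1,2]+S[2,1]],
        [S[0,1]-S[1,0],        S[2,0]+S[0,2],        S[1,2]+S[2,1],       -S[0,0]-S[1,1]+S[2,2]]])
    _, V = np.linalg.eigh(N)
    w, x, y, z = V[:, -1]                    # eigenvector of the largest eigenvalue
    R = np.array([
        [1-2*(y*y+z*z), 2*(x*y-z*w),   2*(x*z+y*w)],
        [2*(x*y+z*w),   1-2*(x*x+z*z), 2*(y*z-x*w)],
        [2*(x*z-y*w),   2*(y*z+x*w),   1-2*(x*x+y*y)]])
    return R, cq - R @ cp
```

With at least three noncollinear sampled points the solution is unique; the quaternion sign ambiguity does not affect the recovered rotation matrix.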

21.6.2 Graphics system description

The graphics system is responsible for the image acquisition and rectification on the master console. It comprises a camera and a dedicated PC to perform the image processing. The camera model is a LifeCam VX-800 USB 2.0 camera


from Microsoft. The PC uses the camera to acquire the image, processes it, and presents it to the surgeon as a stabilized image. Image rectification is implemented in MATLAB and OpenCV, achieving a refresh rate of approximately 30 Hz using three parallel processing threads. To further reduce the computational load, as well as the latency in the system, the image resolution was reduced to 320 × 240 pixels and the image was converted to grayscale, following the detection of the red reference line using a color filter. The graphics processing loop essentially applies the Ψ transform to the physical image.

The slave robot is controlled by the servo loop, which combines input from the graphics system (namely, the detected reference line) with input from the master and slave robots. It controls the slave robot by querying its configuration and using the PID teleoperation controller to set the forces on the first three joints. The loop runs at a 1-kHz update rate, meaning that the forces are updated every 1 ms. The position update rate, however, follows the update rate of the communication loop, which uses UDP sockets at 100 Hz. Since the UDP protocol does not employ an error-correcting mechanism for the data packets, data transmission is fast and relatively immune to latency, but unreliable. To compensate for this, a simple error correction algorithm was implemented, using a predefined header with each data packet.
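Such header-based validation of UDP datagrams can be sketched as follows. The field layout, magic value, and CRC trailer here are assumptions for illustration, not the packet format used by the authors:

```python
import struct
import zlib

# Hypothetical packet layout: magic word, sequence number,
# 3-DoF position (doubles), CRC-32 trailer over the body.
MAGIC = 0x4D43  # illustrative value

def pack_state(seq, pos):
    """Frame a position sample for transmission over UDP."""
    body = struct.pack("<HI3d", MAGIC, seq, *pos)
    return body + struct.pack("<I", zlib.crc32(body))

def unpack_state(datagram):
    """Return (seq, (x, y, z)), or None if the packet fails validation."""
    if len(datagram) != struct.calcsize("<HI3d") + 4:
        return None
    body, (crc,) = datagram[:-4], struct.unpack("<I", datagram[-4:])
    if zlib.crc32(body) != crc:
        return None  # corrupted in transit
    magic, seq, x, y, z = struct.unpack("<HI3d", body)
    if magic != MAGIC:
        return None  # not one of our packets
    return seq, (x, y, z)
```

A receiver in this style simply drops invalid or out-of-order datagrams and keeps the last valid pose, which suits a 100-Hz position stream where retransmission would only add latency.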

21.7 Simulation experiments

FIGURE 21.10 Depiction of the simulated surgical field. The projected image of the heart is pulsating along with the reference red line, to convey the impression of a beating heart.


This section describes the experiments conducted, along with an analysis of the results, for the two assistive modes, namely active and haptic assistance. The two sessions were performed independently and at different times, each with a different user. A trained surgeon was asked each time to operate the master console in order to track a pulsating red line embedded in an endoscopic image of the heart. The entire image was deformed according to the reference line to give the perception of a beating heart. The line followed a periodic movement, driven by a prerecorded ECG signal set to various heart rates. Field trials showed that a rate of 18 bpm was the maximum that allowed a sufficiently small delay in the image acquisition and control loop, mainly due to the hardware limitations that affect the graphics system update rate in the current experimental setup. To capture the robot motion, a green marker was attached to the tip of the simulated lancet. All trials were recorded at 1080p resolution by an HD camera overlooking the scene. Prior to the experiments, both the surgeon's camera and the HD camera were calibrated and registered to a common physical world frame (Fig. 21.10).

For the active assistance case, two groups of experiments were performed: the first corresponding to a 12 bpm pulsation frequency and the second to 15 bpm. The first group comprised four cases: "without compensation," "simple compensation," "active compensation w/o dead zone (ε = 15 mm, δ = 0)," and "active compensation with dead zone (ε = 15 mm, δ = 7.5 mm)." The second group included two cases: "active compensation with dead zone" and "no compensation." The physical and canonical images from the actual experiments are shown in Fig. 21.11. In the "no compensation" case, the surgeon was shown the left (physical) image, while in the compensated cases the right (canonical) image was shown.


FIGURE 21.11 View of the simulated surgical field through the camera. (Left) Physical image. (Right) Canonical image. Note the two white squares on the distal ends of the line. The surgeon tracks the line, doing touch-and-go between these two squares in a “ping-pong” like fashion.

TABLE 21.1 Statistical results for the first group in the active assistance session.

                     Assistance with   Assistance w/o   Simple          No
                     dead zone         dead zone        compensation    compensation
Mean error (mm)      4.487             5.131            6.031           7.171
Rel. difference^a    −12.57%           −14.91%          −15.90%         —

^a Relative difference between consecutive columns.

TABLE 21.2 Aggregate results for the two groups in the active assistance session.

             12 bpm                               15 bpm
             Assistance with    No                Assistance with    No
             dead zone          compensation      dead zone          compensation
Mean (mm)    4.487              7.171             4.400              8.022
Rel. diff.   −37.43%            —                 −45.15%            —
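The relative differences reported in Table 21.2 follow directly from the tabulated means; a quick arithmetic check:

```python
def rel_diff_pct(a, b):
    """Relative difference of a with respect to b, in percent."""
    return (a - b) / b * 100.0

# Assistance with dead zone vs. no compensation, per heart rate:
print(round(rel_diff_pct(4.487, 7.171), 2))  # -37.43 (12 bpm)
print(round(rel_diff_pct(4.400, 8.022), 2))  # -45.15 (15 bpm)
```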



The surgeon was asked to follow the red line, touching in turn the white squares at the distal ends of the line. The movement of the slave's tip was calculated in postprocessing from the HD camera video. A simple color filter was used in each frame to track the tip's green marker and the closest segment of the red line. The pixel trajectory was then filtered to reduce noise and backprojected to Cartesian space for the analysis. To assess the effect of the active compensation, the error distance from the tip to the line was calculated for each frame and a statistical analysis was performed. The results are presented in Table 21.1.

From Table 21.1, we see that active assistance with dead zone reduces the mean error by 12.57% with respect to active assistance without dead zone, which in turn reduces the error by 14.91% compared to simple compensation, which finally reduces the error by 15.90% compared to no compensation. The effect of the heart rate on the active assistance case is investigated in Table 21.2, which shows aggregate data for the two groups. Table 21.2 shows a consistent decrease of the mean error across the two rates. It is worth noting that in the active assistance case the error remains virtually the same, implying that the assistive controller is robust with respect to the pulsation frequency and is approaching its lower threshold; that is, it effectively cancels the effects of motion irrespective of the frequency, given the current hardware implementation. The latency of this setup can also be identified as the cause of the residual error (≈4.5 mm).

In the haptic assistance session, the same tracking task was asked of a trained surgeon. The session consisted of three groups (no compensation, simple compensation, compensation with VF). Each group, in turn, comprised three frequency cases according to heart rate (12, 15, and 18 bpm). For each frequency, four experimental trials were performed.
Consequently, the total number of trials was 36. The trial selection mechanism was fully randomized using a discrete


TABLE 21.3 Statistical results for each group per heart rate in the haptic assistance session.

Group                   Mean, 12 bpm (mm)   Mean, 15 bpm (mm)   Mean, 18 bpm (mm)
No compensation         5.743               5.987               5.496
Simple compensation     4.042               4.544               5.020
Compensation with VF    3.700               3.850               4.302

VF, Virtual fixture.

TABLE 21.4 Aggregate results for the three groups in the haptic assistance session.

                No compensation   Simple compensation   Compensation with VF
Mean (mm)       5.742             4.535                 3.951
Rel. diff.^a    —                 −21.01%               −12.88%
Std.            0.733             0.623                 0.483
Rel. diff.^b    —                 −15.00%               −22.50%

VF, Virtual fixture.
^a Relative difference between consecutive means.
^b Relative difference between consecutive standard deviations.

uniform distribution, and each trial was removed from the pool in subsequent runs. The parameters of the VF were set to ε = 40 mm, β = 30 mm, and δ = 5 mm, while the maximum force was F_max = 1 N. Again, the error distance from the tip to the line was calculated for each frame and a statistical analysis was performed. A summary of the statistical results of the groups, according to heart rate, is presented in Table 21.3.

Table 21.3 shows an increase in the average error across heart rates for the two compensation groups. This can be attributed to the specific hardware setup and algorithmic implementation, which introduces delays in the processing loop. It was experimentally confirmed that frequencies beyond 20 bpm could not be processed in real time by our hardware, injecting significant latency into the control loop. Despite this, the haptic assistance group presents a systematic decrease of the mean error across all three heart rates, compared to the other two groups. The aggregate results of the three groups are presented in Table 21.4.

We see a decrease in the average error for the haptic assistance group. Quantitatively, VFs decrease the average error by 12.88% compared to the simple compensation group and by 31.19% compared to no compensation. Furthermore, the standard deviation is decreased by 22.5% with respect to the simple compensation group, and by 34.1% with respect to no compensation. Interpreting these results, one can say that the virtual fixture allows the surgeon to track the reference line with better accuracy and smaller perturbations, in a consistent manner. Note that by comparing the 15 bpm group across the two sessions (Tables 21.2 and 21.3), we see that the haptic assistance mode enables better tracking than the active assistance with dead zone. These results show the promise of this approach and seem to support the hypothesis that haptic assistance presents advantages for the surgeon in robotic surgery.
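The per-frame error metric used in both sessions — the distance from the tracked tip to the closest segment of the sampled reference line — can be sketched as follows (illustrative, not the authors' analysis code):

```python
import numpy as np

def point_to_segment(p, a, b):
    """Distance from point p to the line segment a-b (2D or 3D arrays)."""
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    denom = float(np.dot(ab, ab))
    # Clamp the projection parameter so the closest point stays on the segment.
    t = 0.0 if denom == 0.0 else float(np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0))
    return float(np.linalg.norm(p - (a + t * ab)))

def tip_to_line_error(tip, line_pts):
    """Per-frame error: distance from the tracked tip to the closest
    segment of the sampled reference line (an (N, d) array of points)."""
    return min(point_to_segment(tip, line_pts[i], line_pts[i + 1])
               for i in range(len(line_pts) - 1))
```

Averaging this quantity over all frames of a trial yields the mean errors reported in Tables 21.1–21.4.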

21.8 Conclusion

In this chapter we have presented a unifying framework for image-guided motion compensation in robotic-assisted beating heart surgery. This framework naturally binds the image stabilization, mechanical synchronization, and shared control tasks required to create a robust motion compensation service that assists the surgeon in off-pump CABG surgery. More complex assistive modes, namely active and haptic assistance, were also presented. These modes are built upon the motion compensation framework, and experimental results show that they have a positive effect on the surgeon's accuracy while tracking features on the beating cardiac wall. Even though this technology is in its early stages, once it reaches maturity it is expected to have a significant impact and change the way robotic CABG is performed.




References

[1] Oliveira PP, Braile DM, Vieira RW, Petrucci Junior O, Silveira Filho LM, Vilarinho KA, et al. Hemodynamic disorders related to beating heart surgery using cardiac stabilizers: experimental study. Rev Bras Cir Cardiovasc 2007;22(4):407–15.
[2] Couture P, Denault A, Limoges P, Sheridan P, Babin D, Cartier R. Mechanisms of hemodynamic changes during off-pump coronary artery bypass surgery. Can J Anaesth 2002;49(8):835–49.
[3] Raut MS, Maheshwari A, Dubey S. Sudden hemodynamic instability during off-pump coronary artery bypass grafting surgery: role of Bezold–Jarisch reflex. J Cardiothorac Vasc Anesth 2017;31(6):2139–40.
[4] Nakamura Y, Kishi K, Kawakami H. Heartbeat synchronization for robotic cardiac surgery. In: Proceedings 2001 ICRA. IEEE international conference on robotics and automation, vol. 2; 2001. p. 2014–9.
[5] Hu M, Penney GP, Rueckert D, Edwards PJ, Bello F, Casula R, et al. Non-rigid reconstruction of the beating heart surface for minimally invasive cardiac surgery. In: Medical image computing and computer-assisted intervention – MICCAI 2009. Berlin, Heidelberg: Springer; 2009. p. 34–42. Lecture Notes in Computer Science.
[6] Mohamadipanah H, Andalibi M, Hoberock L. Robust automatic feature tracking on beating human hearts for minimally invasive CABG surgery. J Med Dev 2016;10(4):041010.
[7] Ruszkowski A, Schneider C, Mohareri O, Salcudean S. Bimanual teleoperation with heart motion compensation on the da Vinci® Research Kit: implementation and preliminary experiments. In: 2016 IEEE international conference on robotics and automation (ICRA); 2016. p. 4101–8.
[8] Ginhoux R, Gangloff JA, de Mathelin MF, Soler L, Sanchez MMA, Marescaux J. Beating heart tracking in robotic surgery using 500 Hz visual servoing, model predictive control and an adaptive observer. In: Proceedings ICRA '04. 2004 IEEE international conference on robotics and automation, vol. 1; 2004. p. 274–9.
[9] Gangloff J, Ginhoux R, de Mathelin M, Soler L, Marescaux J. Model predictive control for compensation of cyclic organ motions in teleoperated laparoscopic surgery. IEEE Trans Control Syst Technol 2006;14(2):235–46.
[10] Ginhoux R, Gangloff J, de Mathelin M, Soler L, Sanchez MMA, Marescaux J. Active filtering of physiological motion in robotized surgery using predictive control. IEEE Trans Robot 2005;21(1):67–79.
[11] Ortmaier T, Groger M, Boehm DH, Falk V, Hirzinger G. Motion estimation in beating heart surgery. IEEE Trans Biomed Eng 2005;52(10):1729–40.
[12] Bebek O, Cavusoglu MC. Predictive control algorithms using biological signals for active relative motion canceling in robotic assisted heart surgery. In: Proceedings 2006 IEEE international conference on robotics and automation (ICRA); 2006. p. 237–44.
[13] Yuen S, Kesner S, Vasilyev N, Del Nido P, Howe R. 3D ultrasound-guided motion compensation system for beating heart mitral valve repair. In: Medical image computing and computer-assisted intervention – MICCAI 2008. Berlin, Heidelberg: Springer; 2008. p. 711–9.
[14] Kesner SB, Howe RD. Design and control of motion compensation cardiac catheters. In: 2010 IEEE international conference on robotics and automation (ICRA); 2010. p. 1059–65.
[15] Cagneau B, Zemiti N, Bellot D, Morel G. Physiological motion compensation in robotized surgery using force feedback control. In: 2007 IEEE international conference on robotics and automation; 2007. p. 1881–6.
[16] Duindam V, Sastry S. Geometric motion estimation and control for robotic-assisted beating-heart surgery. In: 2007 IEEE/RSJ international conference on intelligent robots and systems (IROS); 2007. p. 871–6.
[17] Mountney P, Yang G-Z. Soft tissue tracking for minimally invasive surgery: learning local deformation online. In: Metaxas D, Axel L, Fichtinger G, Székely G, editors. Medical image computing and computer-assisted intervention – MICCAI 2008. Berlin, Heidelberg: Springer; 2008. p. 364–72. Lecture Notes in Computer Science; vol. 5242.
[18] Stoyanov D, Mylonas GP, Deligianni F, Darzi A, Yang GZ. Soft-tissue motion tracking and structure estimation for robotic assisted MIS procedures. In: Duncan J, Gerig G, editors. Medical image computing and computer-assisted intervention – MICCAI 2005, vol. 3750. Berlin, Heidelberg: Springer; 2005. p. 139–46. Lecture Notes in Computer Science.
[19] Richa R, Bó APL, Poignet P. Towards robust 3D visual tracking for motion compensation in beating heart surgery. Med Image Anal 2011;15(3):302–15.
[20] Stoyanov D, Yang G-Z. Stabilization of image motion for robotic assisted beating heart surgery. In: Ayache N, Ourselin S, Maeder A, editors. Medical image computing and computer-assisted intervention – MICCAI 2007. Berlin, Heidelberg: Springer; 2007. p. 417–24. Lecture Notes in Computer Science; vol. 4791.
[21] Moustris G, Tzafestas SG. Reducing a class of polygonal path tracking to straight line tracking via nonlinear strip-wise affine transformation. Math Comput Simul 2008;79(2):133–48.
[22] Moustris GP, Mantelos AI, Tzafestas CS. Shared control for motion compensation in robotic beating heart surgery. In: 2013 IEEE international conference on robotics and automation (ICRA). Karlsruhe, Germany: IEEE; 2013. p. 5819–24.
[23] Moustris GP, Mantelos AI, Tzafestas C. Active motion compensation in robotic cardiac surgery. In: Proceedings of the European Control Conference 2013. Zurich, Switzerland; 2013.
[24] Bowyer SA, Davies BL, Baena FRy. Active constraints/virtual fixtures: a survey. IEEE Trans Robot 2014;30(1):138–57.
[25] Horn BKP. Closed-form solution of absolute orientation using unit quaternions. J Opt Soc Am A 1987;4(4):629–42.

22

Sunram 5: A Magnetic Resonance-Safe Robotic System for Breast Biopsy, Driven by Pneumatic Stepper Motors

Vincent Groenhuis 1, Françoise J. Siepel 1 and Stefano Stramigioli 1,2
1 Robotics and Mechatronics, University of Twente, Enschede, The Netherlands
2 ITMO University, Saint Petersburg, Russia

ABSTRACT
Sunram 5 is the fifth-generation magnetic resonance (MR)-safe robotic system for breast biopsy. It has five degrees of freedom and is driven by six linear and curved pneumatic stepper motors plus three singular cylinders, all constructed by rapid prototyping techniques. The design, production, and evaluation of both single pneumatic cylinders and various types of stepper motors are described in detail in this chapter. Control strategies are also discussed, such as how multiple motors can work together in order to achieve both high speed and high accuracy, despite the relatively low stepping frequencies associated with long pneumatic lines between controller and motor. Sunram 5 also includes a breast fixation system and an emergency needle ejection mechanism, and performs fast and precise needle insertions under near-real-time MR imaging (MRI) guidance, giving it the potential to improve accuracy and efficiency in MRI-guided breast biopsy procedures.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00022-0
© 2020 Elsevier Inc. All rights reserved.


22.1 Introduction

22.1.1 Clinical challenge

Breast cancer is the most commonly diagnosed cancer type among women [1]. Early detection is essential for a good prognosis, and in many countries breast cancer screening programs have been set up. Mammography (X-ray) is the primary imaging modality, as it is quick and able to detect the majority of cancers. In addition, ultrasound (US), palpation, computed tomography, and/or magnetic resonance (MR) imaging (MRI) may be used for more conclusive screening.

MRI has the highest sensitivity of all imaging modalities. It produces a 3D scan with (sub)millimeter resolution and good contrast between different types of tissue. After injecting a contrast agent, areas of angiogenesis (local growth of blood vessels) can be distinguished. Certain patient groups may undergo MRI screening even if no abnormalities are found on mammography or US. Women carrying the BRCA 1 or 2 gene have an elevated risk of developing breast cancer and benefit from periodic MRI screening [2]. Additionally, women with unexplained complaints in the breast (e.g., pain) may be advised to undergo MRI screening. Finally, MRI scans may also be useful for the surgeon when preparing for breast surgery.

When a suspicious lesion is found, a biopsy is required for accurate histological evaluation. This is first attempted under US guidance, which is a relatively easy procedure if the lesion is well visible on US. In this procedure the radiologist inserts a biopsy needle into the breast toward the lesion. When the needle is correctly positioned, the biopsy gun is fired, capturing a tissue sample of the suspicious lesion, which is subsequently stored for pathology assessment. This method is the gold standard for determining the malignancy of suspicious lesions. In some cases the lesion is not visible on US, but only on MRI. An MR-guided biopsy is then necessary. In this procedure the radiologist inserts a biopsy needle using a grid or post-and-pillar system.
This is normally performed outside the MRI scanner due to accessibility constraints of the scanner bore, and the patient has to be moved in and out of the scanner multiple times during the procedure. Small movements of the breast due to respiration and tension may displace the lesion to be sampled, making it difficult to target precisely. The spacing of the grid also introduces a discretization error. To compensate for these errors, a relatively large needle (9 ga, equivalent to 3.8 mm) is generally used, and multiple tissue samples are acquired to obtain high confidence that at least one sample contains lesion tissue. One example system is the vacuum-assisted breast biopsy (VABB) system by Hologic. Even so, the confirmation scan may indicate inadequate needle placement, requiring repositioning of the needle and leading to additional tissue damage or a false-negative biopsy.

In order to resolve the shortcomings of the manual MRI-guided breast biopsy procedure, the needle should not be inserted blindly outside the MRI scanner, but inside the scanner bore itself. Due to accessibility constraints, this more or less implies the necessity of a robotic system to position, align, and insert the biopsy needle. This eliminates the possibility of patient movements and allows for (near-)real-time imaging guidance during robotic needle insertion. Additionally, robotic systems potentially allow for more precise needle insertions. With such a system, a relatively thin (16 ga, equivalent to 2.1 mm) biopsy needle is sufficient, resulting in considerably less tissue damage compared to conventional systems such as the VABB.

22.1.2 Magnetic resonance imaging compatibility of surgical robots

An important requirement for devices inside the MRI scanner is that they are safe to use in this specific environment. The ASTM F2503 standard defines three categories of MRI devices: MR safe, MR conditional, and MR unsafe [3]. The MR-safe classification implies that the device is free of metallic, ferromagnetic, and conductive materials and is therefore inherently safe to use in all MRI scanners, regardless of the field strength and other parameters such as maximum gradients and minimum distance to the patient. The MR-conditional classification indicates that the device is only safe when certain given conditions are all met, while devices with the MR-unsafe classification pose unacceptable risks and cannot be used in any MRI environment. This scheme replaces the former one (MR compatible/safe), which was known to cause confusion and errors: many "MRI-compatible" devices were only tested under certain conditions and sometimes behaved unsafely in other environments, leading to serious risks.

22.1.3 Actuation methods for magnetic resonance-safe/conditional robots

The MR-safe/conditional requirement implies that conventional electromagnetic motors cannot be used directly to actuate an MR robot. Several alternative actuation methods have been proposed and demonstrated:

- Piezo motors and ultrasonic motors are electric motors that cause only limited interference with the MRI's magnetic field. Using dedicated control electronics and taking certain precautions, such motors may be classified MR conditional and may be usable in the actuation of MRI robots [4–6]. A drawback is that piezo/ultrasonic motors cannot be classified MR safe due to the use of electricity and metallic materials, so the MRI safety and imaging quality aspects have to be reevaluated each time the operating conditions are expanded.
- Bowden cables transport energy via solid wires guided through tubes [6,7]. Instead of tubes, a system of pulleys can also be used. These techniques allow conventional motors to be placed away from the robot (outside the Faraday cage of the MRI scanner). If the wires and Bowden tubes (or pulleys) are made of nonmetallic materials, the system can be made MR safe. Friction, backlash, and elasticity in the rigid materials may make an effective energy transfer difficult, especially when many bends are present in the transmission line.
- Pneumatics use clean air as the energy transfer medium, which is abundant in hospitals and laboratory environments. As small leakages are acceptable, pneumatic cylinders can be manufactured using rapid prototyping techniques. Important limitations are the compressibility of the medium, which makes precise position control of a single cylinder difficult [8,9], and the long distance between the (MR-unsafe) controller manifold and the robot, which leads to long pneumatic lines and hence relatively low bandwidth.
- Hydraulics make use of a liquid to deliver power to the robotic system [10,11]. The liquid is kept in a closed system with a compressor and valves, and leaks are to be avoided. A hydraulic device requires precisely engineered components, which makes rapid prototyping relatively difficult compared to other techniques.
- Actuation by magnetic spheres driven by the gradients of the MRI scanner has also been demonstrated [12]. This technique is relatively complicated, as it requires precise control of the MRI's gradients while at the same time mitigating the imaging artifacts induced by the magnets.
- Shape memory alloy (SMA) actuators generate unidirectional movements when heat is applied to an SMA spring. The heat can be generated by applying current through the SMA spring, whose self-resistance results in resistive heating. Bidirectional movement is generated using complementary pairs of SMA springs [13]. The use of metallic materials in the SMA actuators and the application of current through them make SMA actuators MR conditional at best.

The authors of this chapter use pneumatics as the energy transfer method, in the form of pneumatic stepper motors. We show that fast and precise control is possible, despite the low bandwidth and lack of direct position feedback.

22.1.4 State of the art

Many MRI surgical robots have been developed in the past by various research groups. In this chapter a selection of robotic systems driven by pneumatic stepper motors is discussed: first three robots by other research groups and then five robots by the authors of this chapter.

22.1.4.1 Pneumatic magnetic resonance imaging robots by Stoianovici, Bomers, and Sajima

Stoianovici et al. developed several MRI robots for prostate biopsy. One example is the MrBot, shown in Fig. 22.1A [14]. It is driven by six PneuStep rotational stepper motors, of which a schematic cross-section is shown in Fig. 22.2A. The PneuStep motor consists of three diaphragm cylinders that are connected to an internal gear. By alternately pressurizing the three cylinders, the internal gear is translated along a circular trajectory, and its hoop gear in turn engages a spur gear. A leadscrew mechanism then converts the rotational motion of the spur gear into linear motion, resulting in movement of the robotic system. PneuStep makes use of optical position encoders to detect and correct for missed steps, allowing it to operate at higher stepping speeds when less than maximum torque is needed. The valve manifold is placed inside a shielded enclosure within the MRI room, allowing the tube lengths to be reduced to a minimum [16].

The Soteria Remote Controlled Manipulator by Bomers et al. is shown in Fig. 22.1B. Like MrBot, this robot is designed for prostate interventions [15]. It is driven by five pneumatic stepper motors, of which a schematic drawing is shown in Fig. 22.2B. Its five cylinders have cone tips mounted on the pistons, which engage a two-dimensional (2D) pattern of holes on the rod. Pressurization of one cylinder pushes the associated cone tip into one hole, forcing the hole to align with the cone tip through the associated wedge mechanism and thereby producing a displacement. Sequential pressurization of the right combination of cylinders results in either a screw movement or a linear movement of the rod, yielding a small or large displacement of the robot linkages. The cylinders are double-acting; a single tube is used for the return stroke of all five pistons, so that six tubes are used per actuator [17].
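The sequential-pressurization principle shared by these designs can be sketched as a three-phase stepping cycle. The class below is a toy model for illustration only; the valve representation and step size are assumptions, not the interface of any of the cited systems:

```python
class ThreePhaseStepper:
    """Toy model of a three-cylinder pneumatic stepper: sequentially
    pressurizing the cylinders advances the mechanism one step at a time."""

    PHASES = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]  # which cylinder is pressurized

    def __init__(self, step_size_mm):
        self.phase = 0
        self.position_mm = 0.0
        self.step_size_mm = step_size_mm

    def step(self, direction=1):
        """Advance one step (+1 forward, -1 reverse); return the valve state."""
        self.phase = (self.phase + direction) % 3
        self.position_mm += direction * self.step_size_mm
        return self.PHASES[self.phase]
```

Cycling through phases 0 → 1 → 2 → 0 moves the mechanism forward; traversing the phases in reverse order retracts it, which is why direction is simply the sign of the phase increment.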


FIGURE 22.1 Two state-of-the-art MRI manipulators driven by pneumatic stepper motors. (A) MrBot by Stoianovici et al. [14]. (B) Soteria RCM by Bomers et al. [15]. MRI, Magnetic resonance imaging; RCM, remote controlled manipulator.

FIGURE 22.2 Pneumatic motors used to actuate the corresponding manipulators in Fig. 22.1. (A) PneuStep by Stoianovici et al. [16], (B) pneumatic stepper motor by Bosboom et al. [17].

Sajima et al. developed a manipulator driven by rotational stepper motors [18,19]. Each stepper motor consists of three single-acting cylinders that act on a rotation gear by means of a wedge mechanism. By sequentially pressurizing the three cylinders, the gear is driven around in either direction. In this design the gears have to be back-drivable in order to allow retraction of the pistons in the nonpressurized cylinders for continuous movements. A leadscrew finally converts the rotational motion of the gear into linear motion of the manipulator linkages. An important limitation of Sajima's design is that the wedge mechanism must be back-drivable due to the use of single-acting cylinders. This implies that the teeth cannot have sharp angles, and significant torque is lost to friction of the sliding surfaces. On the other hand, Sajima's design is relatively compact and easy to manufacture compared to the designs of Stoianovici and Bomers. The low nominal stepping frequency resulting from the long pneumatic tubes is an issue which has to be addressed in order to achieve both high speed and high accuracy. Stoianovici's design utilizes position encoders which allow the motors to be sped up when less than maximum torque is needed, while Bomers' design uses a 2D hole pattern that enables both large and small actuation steps.

Sunram 5: A Magnetic Resonance-Safe Robotic System for Breast Biopsy Chapter | 22


TABLE 22.1 Comparison of Stormram 1–4 and Sunram 5.

| Robot      | Dimensions (mm)                               | DoF | Workspace | Motor step size | Stepper motor force | Accuracy                           |
|------------|-----------------------------------------------|-----|-----------|-----------------|---------------------|------------------------------------|
| Stormram 1 | 290 × 290 × 310                               | 7   | Small     | 1.3 mm          | 30 N @0.3 MPa       | ≈2 mm (free air)                   |
| Stormram 2 | 140 × 140 × 140                               | 5   | ≈0.4 L    | 1 mm            | 15 N                | ≈2 mm (free air), 4.7–7.3 mm (MRI) |
| Stormram 3 | 185 × 155 × 140                               | 5   | <2 L      | 0.67 mm         | 13/70 N @0.3 MPa    | ≈2 mm (free air)                   |
| Stormram 4 | 200 × 115 × 50 (full), 72 × 51 × 40 (moving)  | 4   | 2.2 L     | 0.25 mm         | 63 N @0.65 MPa      | 0.7 mm (free air), 1.3 mm (MRI)    |
| Sunram 5   | 200 × 140 × 90 (full), 107 × 72 × 56 (moving) | 5   | Large     | 0.3 mm          | >60 N               | <1 mm (free air)                   |

DoF, degree of freedom; MRI, magnetic resonance imaging.

22.1.4.2 Stormram 1–4 and Sunram 5

Five MR-safe breast biopsy robots have been developed by the authors of this chapter. All of them are driven by pneumatic linear or curved stepper motors and rapid prototyped by 3D printing and laser-cutting techniques. Table 22.1 lists the main characteristics of each of the five generations. Stormram 1 (Fig. 22.3, left) was developed in 2014 and has seven degrees of freedom (DoFs) [20]. Its large size and small workspace make it unsuitable for practical applications, so in 2015 the Stormram 2 was developed (Fig. 22.3, center) with five DoFs. This robot is driven by stepper motors integrated inside 45 mm ball joints for compactness [21]. In the following year Stormram 3 was developed (Fig. 22.3, right), also with five DoFs and with improved accuracy, force characteristics, and workspace thanks to redesigned joints and smaller step sizes [22]. Still, the use of a parallel kinematic chain made control complicated, and the low stepping frequency in an MRI environment made the robot too slow. In 2017 the Stormram 4 was presented (Fig. 22.4, left). This system has a serial kinematic manipulator with four DoFs, of which two are driven by curved stepper motors and two by linear stepper motors. With a compact size of 72 × 51 × 40 mm (excluding racks), this robot combines a relatively large workspace with good accuracy, still at low speed in the MRI environment [23,24]. In the following year the Sunram 5 was developed (Fig. 22.4, right), which utilizes dual-speed motors on certain axes. The Sunram 5 has five DoFs driven by a total of six stepper motors and also includes a breast fixation system, a pneumatic biopsy gun, and a safety needle ejection mechanism [25].

All stepper motors used in Stormram 1–4 and Sunram 5 consist of two or three double-acting cylinders that engage a straight or curved rack by means of a wedge mechanism [26,27]. The rectangular-shaped cylinders allow efficient stacking of multiple cylinders within a single housing. This is particularly useful for the two dual-speed motors in Sunram 5, in which four or five cylinders are positioned in line, enabling both large-step and small-step movements in the same movement direction [28].

FIGURE 22.3 Stormram 1 (left), Stormram 2 (center), and Stormram 3 (right).

FIGURE 22.4 Stormram 4 (left) and Sunram 5 (right).

The minimum required number of DoFs in a biopsy robot is three: the location of a lesion is represented as a point in space, while the needle insertion direction can in principle be chosen arbitrarily. Additional DoFs are useful for navigating around impassable structures: examples are the grating of the breast fixation system and any anatomical features indicated by the radiologist that should be avoided. With five DoFs the Sunram 5 has sufficient dexterity to circumvent such structures and reach difficult locations in the breast.
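The three-DoF argument above can be made concrete: a lesion is a point in space, and once an entry point is chosen, the needle axis is simply the unit vector from entry point to lesion. A minimal sketch (function name and coordinates are our own illustration, not from the chapter):

```python
import math

def needle_axis(entry, lesion):
    """Unit insertion direction and insertion depth from entry point to lesion.

    Both points are (x, y, z) tuples in millimeters.  The three scalar
    coordinates of the lesion are the three DoFs a biopsy robot minimally
    has to realize; extra DoFs let the robot choose the direction as well.
    """
    delta = [l - e for e, l in zip(entry, lesion)]
    depth = math.sqrt(sum(c * c for c in delta))
    if depth == 0:
        raise ValueError("entry and lesion coincide")
    return [c / depth for c in delta], depth

# Example: lesion 30 mm behind and 10 mm above a chosen entry point.
direction, depth = needle_axis((0.0, 0.0, 0.0), (0.0, 10.0, 30.0))
```

With additional DoFs, the entry point itself becomes a free variable, which is what allows the Sunram 5 to steer around the fixation grating.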

22.1.5 Organization of the chapter

The remaining sections of this chapter are organized as follows: Section 22.2 describes the design and manufacturing of single-acting cylinders, Section 22.3 describes linear and curved stepper motors and dual-speed motor concepts, Section 22.4 describes the design and kinematics of Sunram 5, Section 22.5 describes control methods for cylinders, stepper motors, and robotic systems, Section 22.6 describes evaluation methods and experimental results for the stepper motors and the Stormram 4, and finally Section 22.7 concludes the chapter.

22.2 Pneumatic cylinders

This section describes the principles, design, and production aspects of pneumatic cylinders. The single-acting cylinder is presented as an example, as this is the easiest to design, manufacture, and evaluate. The necessary files for 3D printing and laser-cutting this device are available for download [29]. Pneumatic cylinders consist of a hollow cavity in which a piston can slide back and forth. When a pressure P is applied in the cavity at one side of the piston, a force F = P · A is exerted on its surface of area A, which may result in motion, delivering work to the environment. By alternatingly applying pressure to either side of the piston through separate pneumatic connections, the piston can be moved back and forth pneumatically; this configuration is called a double-acting cylinder.
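The force relation F = P · A is easy to check numerically. The helper below (names are ours, not from the chapter) reproduces the 48 N quoted for a 12 × 10 mm bore at 0.4 MPa, and the roughly 12 N per 0.1 MPa mentioned for the same bore:

```python
def piston_force(pressure_mpa: float, width_mm: float, height_mm: float) -> float:
    """Force F = P * A on a rectangular piston head, in newtons.

    Pressure is given in MPa (1 MPa = 1e6 Pa); bore dimensions in mm.
    """
    area_m2 = (width_mm * 1e-3) * (height_mm * 1e-3)
    return pressure_mpa * 1e6 * area_m2

# A 12 x 10 mm bore at 0.4 MPa yields 48 N:
force = piston_force(0.4, 12, 10)
```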

22.2.1 Rectangular cross-sectional shape

Most traditional cylinders have a circular cross-section. The main reasons are that circles have an optimal area-to-circumference ratio, allowing cylinders to be produced with relatively thin walls; that circular holes can easily be manufactured by conventional drilling techniques; and that the absence of sharp corners in the walls makes sealing relatively easy. On the other hand, cylinders with circular walls are much more difficult to manufacture precisely using 3D printing techniques than cylinders with straight walls. More specifically, additive manufacturing techniques that deposit filament layer by layer produce staircase effects and poor surface finish at steep overhangs, which are present in horizontally oriented cylindrical cavities. In vertically oriented cylindrical cavities the layered structure also causes difficulties in motion and sealing. To circumvent these drawbacks, box-shaped cylinders are used in this chapter. Their cross-section is rectangular, and the structures can be manufactured with good accuracy. An additional advantage is that a rectangular cylinder makes more efficient use of the available space than a circular cylinder with the same wall thickness.



No protruding cylinder rods are used in the double-acting pneumatic cylinders. While it would be possible to use such rods with appropriate sealing, different methods are used to transfer work to the environment. In the case of stepper motors, interaction between pistons and racks occurs by means of teeth positioned between the two heads of each piston.

22.2.2 Sealing

A piston moves inside a cylinder as a result of pressure being applied to one of its surfaces. This pressure is supplied by a pressure source through a valve manifold outside the MRI room. It is essential that pressure is transferred as effectively as possible, so leakage in the cylinder must be kept to a minimum in order to avoid pressure drops. Without proper sealing, a significant amount of leakage would occur through the gap between the piston and the cylinder walls; this gap is necessarily present to allow the sliding motion of the piston inside the cylinder. Elastomer O-rings as used in circular cylinders do not function in rectangular cylinders due to the four right-angled corners, which an O-ring cannot cover. Instead, plates of silicone rubber with thickness 0.5–1.0 mm are chosen as the sealing material. A laser cutter is used to cut rectangular pieces out of the plate; alternatively, the seals can be cut by hand using a cutting tool. The seal width and height should be approximately 0.2–0.3 mm larger than the respective cylinder cross-sectional dimensions to ensure good sealing without causing excessive friction. The seal edges need to be slanted at an angle of 2–15 degrees. This implies that the seal faces are not equal: one side has a larger surface area than the other. For effective sealing, the larger face must be oriented toward the air chamber, while the smaller face touches the piston head. When pressurized air acts on the larger surface of the seal, it effectively pushes the seal edges against the cylinder walls, resulting in a proper sealing function. In most cases it is not necessary to fixate the seals to the pistons mechanically: as long as the piston is only moved pneumatically and not by external forces, the seals are pressed against the piston heads at all times and will not become detached.

22.2.3 Design of the single-acting cylinder

The development of a new pneumatic actuator using the technology described in this chapter is an iterative process. In order to study the dimensional aspects of the cylinder parts and how to seal them properly, it is useful to design and build the open single-acting cylinder shown in Fig. 22.5. This design makes it possible to explore different sealing techniques and part dimensions in order to obtain the right tolerances experimentally, as the piston and seal can be taken out even after gluing the housing. It also allows evaluation of the piston forces and air leakage at a range of pressures. The presented techniques can subsequently be applied to create rectangular-shaped cylinders of any size. The shown cylinder has a cavity of 12 × 10 × 10 mm and walls of 2 mm. This results in outer dimensions of 16 × 14 × 12 mm, excluding the pneumatic socket, which is a 2-mm hole with depth 4 mm. The theoretical output force at a pressure of 0.4 MPa is (0.4 × 10^6 Pa) × (12 × 10^-3 m) × (10 × 10^-3 m) = 48 N. The cylinder is printed in two parts: a housing and a cap. The piston has a head which covers the cross-sectional area with a small clearance (on the order of 0.1 mm), to allow smooth movement without wobbling. In Fig. 22.5 a solid block is used as a piston, but in practical applications the piston may include features to interact with the environment. The seal is also rectangular, but must be slightly larger than the cross-sectional area of the cylinder: typical dimensions are 12.3 × 10.3 × 0.5 mm. The seal area can be reduced if friction is too high, or increased in case of (excessive) leakage. A thicker seal (e.g., 1 or 1.5 mm) is more rigid, which may be useful in cylinders larger than 15 mm in size.

FIGURE 22.5 Exploded (left) and assembled (right) views of the single-acting cylinder design, with housing (red), piston (green), and seal (yellow).



FIGURE 22.6 Realization of single-acting cylinder. (A) Assembled cylinder with piston, (B) housing, (C) cap, (D) seal.

22.2.4 Manufacturing

The presented pneumatic cylinders and pistons are printed on a Polyjet printer (Connex3 Objet260, Stratasys Ltd., Eden Prairie, MN, United States) in VeroClear material, standard quality, glossy finish. The glossy finish option implies that the top surfaces and side walls of the part are not covered with support material, which is important as supported faces are relatively rough and would cause excessive friction. As shown in Fig. 22.5 (left), the cylinder is printed in two pieces: the housing and the cap. Fig. 22.6B and C shows these parts in the preferred print orientation, which ensures that all four cylinder walls are printed with a glossy finish. Furthermore, it is advisable to orient the parts such that the print head moves in the same direction (along the X-axis of the printer) as the piston would in the cylinder, in order to obtain the smoothest possible finish for the cylinder walls along the direction of movement.

The next step is to join the parts together. Screws are not a good option in small-scale MR-safe applications due to space constraints, so bonding by glue (e.g., cyanoacrylate) is used. The challenge is to create an airtight bond without excess glue entering the cylinder cavity. It is therefore important to apply the right amount of glue and to operate the cylinder at very low pressure right after assembly. In closed cylinder designs it is also recommended to cover the piston and seal in petroleum jelly (Vaseline) before gluing the housing together. Besides serving as a lubricant, it allows the seal to wipe away excess glue inside the housing before the glue has hardened. A light patch of blue silicone (Loctite 5926) on the cylinder edge may be useful to improve the airtightness and reduce the risk of jamming. Polyurethane tubes can be glued to the housing using cyanoacrylate (Loctite 406). An activator such as Loctite 770 should be applied to the polyurethane parts before gluing to ensure a good bond. See Fig. 22.6A for the resulting single-acting cylinder with a typical piston. The piston can be pushed in by hand and pushed out by pressurized air. The amount of leakage and friction can be assessed qualitatively by hand and studied quantitatively using appropriate equipment. Based on the cross-sectional area, every 0.1 MPa (≈1 bar) of pressure results in approximately 12 N (≈1.2 kgf) of additional force.

22.2.5 Double-acting cylinder

A single-acting cylinder can push a piston in one direction only, while most applications require a return stroke mechanism. In our applications a second single-acting cylinder is most suited for this. An alternative is a mechanical (or permanently pressurized pneumatic) spring, which requires fewer pneumatic tubes but results in much smaller net output forces for a given cross-sectional area. The combination of two single-acting cylinders opposite each other results in a double-acting cylinder. The cross-sectional areas of the two opposite bores can be the same or different, depending on the specific application. The piston is a single rigid object with two piston heads and no protruding rod. The specific piston shape defines the way it interacts with the environment, such as engaging with a toothed rack or firing a biopsy gun.

22.3 Stepper motors

A pneumatic stepper motor can be constructed from two or three double-acting cylinders that act on a toothed rack (or gear). In the two-cylinder version (Fig. 22.7) the rack and pistons have teeth on two sides, while in the three-cylinder version (Fig. 22.8) only one side is toothed. The two-cylinder stepper motor design has four distinct states, shown in Fig. 22.7. In each state both cylinders act on the rack, but only one of them can be fully engaged on it, namely the piston that moved most recently. The consequence is that there is no backlash, but hysteresis is present: when reversing direction, the observed rack position includes a certain offset compared to positions approached in the forward direction. While this offset can be measured and accounted for, a simpler method is to approach each setpoint from a consistent (e.g., forward) direction.

FIGURE 22.7 (1–4) Sequence of states in a two-cylinder pneumatic stepper motor, with rack moving from left to right. The rightmost state is identical to the leftmost one, but with the rack displaced to the right by one tooth pitch.

The three-cylinder stepper motor design has eight distinct states, of which seven are shown in Fig. 22.8. (The eighth one, with all pistons engaging on the rack, is of no practical value.) In states 1, 3, and 5 exactly one piston engages the rack, and backlash may be present due to the finite clearance between piston and cylinder walls. In states 2, 4, and 6 the backlash is eliminated at the cost of introducing hysteresis, as in the two-cylinder design. Unlike the two-cylinder design, there is a free-running state, which allows the rack to be moved by external forces with negligible resistance.
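The four-state cycle of the two-cylinder motor can be modeled as a lookup table of valve patterns: one full cycle advances the rack by one tooth pitch, so the step size is a quarter of the pitch. A simplified model (the state encoding and function names are our own illustration):

```python
# Two-cylinder stepper: each state is an abstract valve pattern for
# (cylinder A, cylinder B).  Cycling through the four states in order
# moves the rack one tooth pitch; reversing the order moves it back.
TWO_CYL_STATES = [(1, 0), (1, 1), (0, 1), (0, 0)]

def rack_position(pitch_mm: float, steps: int) -> float:
    """Rack displacement after `steps` quarter-cycle steps (may be negative)."""
    return steps * pitch_mm / 4.0

def valve_pattern(step_index: int):
    """Valve pattern to command for the given absolute step index."""
    return TWO_CYL_STATES[step_index % 4]

# A motor with 1.2 mm tooth pitch has a 0.3 mm step size:
assert rack_position(1.2, 1) == 0.3
# After four steps the rack has moved one full pitch and the state repeats:
assert rack_position(1.2, 4) == 1.2 and valve_pattern(4) == valve_pattern(0)
```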

22.3.1 Design of the two-cylinder stepper motor

Like pneumatic cylinders, stepper motors come in various sizes. One specific design of a two-cylinder stepper motor is presented as an example, and the associated files are available for download [29]. Fig. 22.9 shows a rendering of this stepper motor, which has dimensions 32 × 30 × 16 mm (excluding the rack). It consists of a housing with two top cover plates, two pistons, a rack, and four seals. The cylinder cross-sectional area is 12 × 10 mm, and the tooth pitch of this particular design is 1.2 mm, resulting in a step size of 0.3 mm. The separation between the cylinders needs to be such that the cylinder pattern spacing is an odd multiple of the step size, as visualized in Fig. 22.7. In this case it is 47 × 0.3 mm = 14.1 mm, resulting in a central wall thickness of 14.1 mm − 12 mm = 2.1 mm. In this stepper motor design the piston teeth are constructed by laser-cutting a 2 mm acetal plate. The advantages of laser-cut teeth are sharper tips, increased strength, lower friction, and reduced wear. These parts are inserted in the appropriate slots of the pistons and interact smoothly with the 3D printed teeth of the rack. Fig. 22.10 shows a realization of the stepper motor. The production and assembly process is similar to that of the single-acting cylinder. Again, the parts are printed in VeroClear, glossy finish, cylinder walls facing up and aligned with the X-axis of the polyjet printer. The air vents are initially filled with support material, but are easily cleared with a small wire and an air gun. Silicone grease (Vaseline or equivalent) is used as lubricant, blue silicone acts as sealant, and cyanoacrylate is used to glue the two caps to the housing. After assembly, the cylinders are briefly operated at low pressure (without the rack) to clear excess blue silicone and/or glue from the cylinder walls before it hardens, while holding the housing together with clamps. After the glue has hardened, the rack can be carefully inserted while slowly operating the cylinders at low pressure; some grinding and/or lubrication may be required to allow smooth movement of the rack. Finally, the pressure can be increased to 0.1–0.4 MPa or higher.
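The odd-multiple spacing rule is easy to verify programmatically. A small design check, using the 14.1 mm spacing and 0.3 mm step of the presented motor (the function itself is our own sketch):

```python
def spacing_multiple(spacing_mm: float, step_mm: float) -> int:
    """Return the integer multiple spacing/step, raising ValueError if the
    spacing is not an odd multiple of the step size as the two-cylinder
    stepper geometry requires."""
    n = round(spacing_mm / step_mm)
    if abs(n * step_mm - spacing_mm) > 1e-9 or n % 2 == 0:
        raise ValueError("spacing must be an odd multiple of the step size")
    return n

# The presented motor uses 14.1 mm spacing with a 0.3 mm step: 47 (odd).
assert spacing_multiple(14.1, 0.3) == 47
```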


FIGURE 22.8 (1–6) Sequence of states in a three-cylinder pneumatic stepper motor, with rack moving from left to right. The "free" state (right) is the free-running state in which all pistons are moved up.



FIGURE 22.9 Cut-out view of the small-step two-cylinder linear stepper motor.

FIGURE 22.10 Realization of the small-step two-cylinder linear stepper motor: (A) housing, (B) one cap, (C) laser-cut teeth piece, (D) piston with one teeth piece inserted, and (E) assembled motor.

22.3.2 Curved stepper motor

The rack of a stepper motor does not need to be linear: it can also follow a circular arc with some finite radius R. In fact, a linear stepper motor is the special case R = ∞. Fig. 22.11 shows the design of a two-cylinder curved stepper motor. The two cylinders are not parallel, but angled in order to keep the piston movement perpendicular to the curvature of the rack. Apart from this, the design principles are the same as those of the linear stepper motors. The piston teeth should be shaped such that they engage the rack teeth at the same moment and minimize the risk of jamming. Fig. 22.12 shows details of the teeth shapes in two positions of the piston and rack. The rack teeth are straight and symmetrical, while the piston teeth are slightly curved, for two reasons. The first reason is made clear in Fig. 22.12A: the optimized piston teeth tips all make contact with the rack at the same moment, so that load forces are distributed as evenly as possible over all teeth. The second reason is apparent in Fig. 22.12B. With the piston fully engaged on the rack, the nonengaging teeth tips must be opposite to the rack teeth (shown in dotted lines). This ensures that all piston teeth engage the rack in a consistent manner, that is, all pushing the rack to the left or all to the right. A nonoptimized piston teeth shape (dashed lines) would introduce the possibility of jamming, making the motor less reliable.

22.3.3 Dual-speed stepper motor

In an MRI environment the stepping frequency is limited to approximately 10 Hz when full force is needed. This would result in long procedure times when large distances have to be covered with high accuracy. A straightforward solution is to combine two (or more) stepper motors on the same axis, allowing both high-speed and high-accuracy movements. A dual-speed stepper motor is essentially a combination of two singular stepper motors with different step sizes arranged in a serial kinematic chain. In order to make efficient use of the space, each dual-speed motor consists of a single housing in which the cylinders for all pistons of both singular stepper motors are arranged in line. The two racks are positioned at opposite sides of the housing. This way the cross-sectional area of the dual-speed motor is the same as that of the single-speed motor, while the extra length of the housing only occupies space in the direction of movement.



FIGURE 22.11 Cut-out view of a curved stepper motor, exposing the two pistons, four seals and rack in the housing.

FIGURE 22.12 Analysis of teeth shape in curved stepper motor. The piston teeth have a special shape in order to optimize contact and reduce risk of jamming.


FIGURE 22.13 Dual-speed stepper motor design. The top rack has pitch 0.3 mm, the bottom rack has pitch 1.7 mm.

The key design aspect is the pitch sizes of the two singular motors. The most straightforward approach is to combine a large-step motor for fast movements with a small-step motor for high accuracy. An alternative approach is to use two large-step motors with slightly different step sizes, exploiting the step size difference to make small steps. Fig. 22.13 shows the design of a generic dual-speed motor with size 50 × 32 × 14 mm (excluding racks). It contains four pistons: the outermost two pistons operate the large-step rack with step size 1.7 mm, while the innermost pistons operate the small-step rack with step size 0.3 mm. Measurements have shown that the maximum force under load is 24 N at a pressure of 0.3 MPa, and the positioning accuracy is 0.1 mm [28].
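A dual-speed axis reaches a target by taking as many large steps as possible and finishing with small steps. A greedy planner sketch using the 1.7 mm / 0.3 mm step sizes from the text (the planning logic is our own illustration, not the authors' controller):

```python
def plan_moves(distance_mm: float, big_step: float = 1.7, small_step: float = 0.3):
    """Greedy decomposition of a forward travel distance into large and
    small steps.  Returns (n_big, n_small, residual_error_mm)."""
    n_big = int(distance_mm // big_step)
    remainder = distance_mm - n_big * big_step
    n_small = round(remainder / small_step)
    error = distance_mm - (n_big * big_step + n_small * small_step)
    return n_big, n_small, error

# 14 large steps plus 4 small steps cover 25.0 mm (up to float rounding),
# instead of taking roughly 83 small steps at ~10 Hz.
n_big, n_small, err = plan_moves(25.0)
```

Note that with a 1.7 mm large step and a 0.3 mm small step, not every distance is reachable exactly; the residual error is bounded by half a small step.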



22.4 Design of Sunram 5

This section describes the kinematics and mechanical design of Sunram 5. It is driven by linear and curved stepper motors, dual-speed motor combinations, and single-acting and double-acting pneumatic cylinders described earlier in this chapter. The actuators are specifically adapted to the needs of the respective axes. First the kinematic configuration is described, followed by details of the mechanical implementation.

22.4.1 Kinematic configuration

Fig. 22.14 shows the kinematic configuration of Sunram 5, and Fig. 22.15 shows a photo of the Sunram 5 indicating the movement directions of the joints and cylinders.

FIGURE 22.14 Kinematic configuration of Sunram 5 with joints J1–J6, biopsy gun cylinders C1 and C2, and emergency ejection cylinder C3.

FIGURE 22.15 Photo of Sunram 5 with movement directions of joints J1–J6 and cylinders C1–C3.

Joint J1 is a curved stepper motor with a radius of 260 mm. The tooth pitch is 1.5 degrees, equivalent to a tooth distance of 6.81 mm and a step size of 1.7 mm along the curved rack. The total range is 35 degrees (93 steps). It is used for coarse positioning of the robotic system, and the curvature allows a more favorable insertion angle near the borders of the workspace compared to pure linear motions. Joint J2 is a linear stepper motor with step size 0.3 mm and a range of 45 mm (150 steps). Joint J2 is used for fine lateral adjustments, but can also be used in conjunction with J1 to tilt the needle sideways over small angles to circumvent the grating of the breast fixation system and/or optimize the needle trajectory. Although joints J1 and J2 operate on different axes, the combination has the characteristics of a dual-speed stepper motor because it is capable of both quick and precise lateral positioning over the full width of the workspace. Joints J3 and J4 are rotational stepper motors that lift and tilt the needle holder vertically. Both are single-speed curved motors with a radius of 62 mm and a tooth pitch of 1.2 degrees, corresponding to a step size of 0.3 degree (0.32 mm at 62 mm) and a range of 40 degrees (133 steps). Joint J5 is a linear stepper motor which moves the needle holder assembly forwards and backwards in small steps. The pitch size is 1.2 mm, corresponding to a step size of 0.3 mm, and the range of motion is 50 mm (167 steps). Joint J6 is a three-cylinder linear stepper motor with pitch size 5.1 mm, step size 1.7 mm, and a range of motion of 61 mm (48 steps) along the same axis as J5, so joints J5 and J6 together form a true dual-speed stepper motor. Cylinder C1 drives the inner needle of the biopsy gun forward over a distance of 19 mm, and cylinder C2 slides the needle shaft over the inner needle over the same distance. Cylinder C3 is the emergency needle ejection cylinder, which is effective when joint J6 is in the free-running state.
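The curved-joint numbers above can be cross-checked from the arc geometry: tooth distance = R · pitch angle (in radians), and the step size is a quarter of the tooth distance for a two-cylinder motor. A quick verification of the J1 and J3/J4 values (the helper itself is our own sketch):

```python
import math

def curved_motor_geometry(radius_mm: float, pitch_deg: float):
    """Tooth distance along the rack and quarter-cycle step size for a
    curved two-cylinder stepper motor."""
    tooth_mm = radius_mm * math.radians(pitch_deg)
    return tooth_mm, tooth_mm / 4.0

# J1: radius 260 mm, tooth pitch 1.5 degrees -> ~6.81 mm teeth, ~1.7 mm steps.
tooth_j1, step_j1 = curved_motor_geometry(260, 1.5)
# J3/J4: radius 62 mm, tooth pitch 1.2 degrees -> ~0.32 mm steps.
tooth_j34, step_j34 = curved_motor_geometry(62, 1.2)
```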

22.4.2 Mechanical design of Sunram 5

FIGURE 22.16 Computer-aided design (CAD) drawing of Sunram 5, in compact (left) and extended (right) configurations.

FIGURE 22.17 Actuators of joints J1 and J2, which drive the Sunram 5 sideways along the track. The curved bottom rack has step size 1.7 mm along the rack; the top rack has step size 0.3 mm.

Fig. 22.16 shows two 3D computer-aided design (CAD) drawings of the Sunram 5, in compact and extended configurations. The robot contains 16 pneumatic cylinders in total, distributed over six singular stepper motors, two double-acting cylinders, and one single-acting cylinder. The height of each cylinder is 10 mm and the nominal wall thickness is 2 mm. With the exception of joints J3 and J4, all cylinders are oriented horizontally and distributed across three levels of approximately 14 mm each. This results in a total height of only 47 mm for the moving part of the Sunram 5 robot (excluding racks and cable guide).

Fig. 22.17 shows the actuators of joints J1 and J2. The motor design and cylinder arrangement are similar to those of the dual-speed stepper motor shown in Fig. 22.13. The only significant difference is that joint J1 is slightly curved, with curvature radius 260 mm, resulting in the outermost two cylinders being angled relative to the other cylinders. The small-step rack (joint J2) is connected to the base with a guiderail as shown in Fig. 22.16, in order to reduce parasitic rotational movements in both J1 and J2.

Fig. 22.18 shows a cross-section of joints J3 and J4, which lift and tilt the Sunram 5 vertically. Both joints are curved stepper motors similar to the one shown in Fig. 22.11. The curved rack has a radius of curvature of 62 mm, and the optimized teeth shapes of Fig. 22.12 are used in the piston teeth pieces, which are laser-cut from 2 mm acetal and attached to the 3D printed pistons with small pins. Acrylic pins with diameter 3 mm are used as passive hinges in the axes of rotation of joints J3 and J4. These pins are partially visible in Fig. 22.15 and greatly reduce the amount of parasitic movement in the kinematic chain. Moreover, the hinge of either joint coincides with the curved rack of the other joint, resulting in a truss-like mechanical structure in the extended configuration.

FIGURE 22.18 Cross-sectional view of joints J3 and J4, which lift and tilt the Sunram 5 vertically.

The needle insertion mechanism consists of one dual-speed stepper motor, one single-acting emergency ejection cylinder, and two double-acting biopsy gun cylinders. The dual-speed motor is shown in Fig. 22.19. It consists of a three-cylinder large-step part with pitch size 5.1 mm and step size 1.7 mm, and a two-cylinder small-step part with step size 0.3 mm. The specific arrangement of the five cylinders allows for a telescopic expansion as shown in Fig. 22.15. This expansion is needed to allow sideways movements of the Sunram 5 with the 100 mm biopsy needle installed, without colliding with the frame of the breast fixation system.

FIGURE 22.19 Cut-out view of Sunram 5's dual-speed needle insertion motor. The three cylinders in front operate the large-step rack on top with step size 1.7 mm.

Fig. 22.20 shows the emergency needle ejection and biopsy gun mechanisms. A single-acting cylinder with dimensions 13 × 10 mm is used for the ejection mechanism and is connected to the needle motor's large-step rack as shown in Fig. 22.21. The ejection mechanism can be activated whenever the three pistons of the large-step needle stepper motor (Fig. 22.19) are all retracted (the "free" state in Fig. 22.8).



FIGURE 22.20 Cut-out view of Sunram 5's emergency eject single-acting cylinder (left) and the two double-acting pneumatic cylinders for the biopsy gun (right).

The biopsy gun consists of two double-acting cylinders. Their respective pistons have a smaller and a larger piston head; this asymmetry allows needle sockets to be attached to the side of each piston in a compact design. The stroke of both cylinders is 19 mm and the total length of the biopsy gun is approximately 100 mm.

22.5 Control of pneumatic devices

Each pneumatic cylinder is controlled individually by one 5/2-way valve of type Festo MHA2-MS1H-5/2-2 (Festo AG and Co. KG, Esslingen, Germany), located in the valve manifold of the controller located outside the Faraday cage of the MRI scanner. The valve is a high-speed solenoid valve with nominal airflow 100 L/min and a specified switching time of 1.9 ms. The use of more conventional internally piloted valves such as PV5211-24VDC-1/8 (TechniComponents B.V., Waalwijk, The Netherlands) is also possible. In an MRI environment the tube length should be around 5 m and its outer diameter can be 3 4 mm. In the Sunram 5 the last 0.5 m of the tube has a diameter of 2 mm in order to provide sufficient flexibility in the various DoFs. The cable bundle is managed such that it exits the robot at a fixed point in the frame (Fig. 22.22). It is not recommended to use 2 mm tubes for the full distance between controller and robot as this would constrain the airflow too much, resulting in reduced output force at the same stepping frequency. Likewise, it is not recommended to use 6-mm tubes to connect the robot to the controller as the associated cable bundle would be quite big and the large volume of air inside the tubes would require the use of bigger valves and additional air supply capacity in order to provide sufficient airflow. Sunram 5 contains 16 pneumatic cylinders with a total of 31 pneumatic connections (one cylinder is single-acting). It is theoretically possible to lower this number of tubes in different ways. Cylinders belonging to different stepper motors could share the same tube pairs, at the cost of increased controller software complexity and hysteresis effects. It is also possible to control a cylinder with a single pneumatic tube if full force is not needed. In that case a constant

22. Sunram 5 Robotic System

FIGURE 22.21 Needle safety mechanism. Pistons A, B, and C together with rack R form a three-cylinder stepper motor, while piston E (connected to rack R) is part of a single-acting cylinder.


Handbook of Robotic and Image-Guided Surgery

FIGURE 22.22 Schematic for controlling a two-cylinder stepper motor using two 5/2-way valves.
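The two-valve scheme of Fig. 22.22 amounts to cycling the two cylinders through a quadrature-like sequence of extend/retract states. A minimal sketch follows; the exact state ordering and the generator interface are our assumptions for illustration, not the authors' controller code:

```python
# One common way to sequence a two-cylinder pneumatic stepper: each
# 5/2-way valve either extends (1) or retracts (0) its cylinder, and
# cycling through the four states below advances the rack by one tooth
# pitch per full cycle. State ordering is an assumption for illustration.

SEQUENCE = [(1, 0), (1, 1), (0, 1), (0, 0)]  # (valve A, valve B)

def valve_states(n_steps, start=0, reverse=False):
    """Yield successive (valve_A, valve_B) states for n_steps steps."""
    seq = list(reversed(SEQUENCE)) if reverse else SEQUENCE
    for i in range(n_steps):
        yield seq[(start + i) % len(seq)]
```

Reversing the sequence reverses the direction of travel; skipping states under load is what the safety margin on motor force, discussed below, guards against.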

FIGURE 22.23 Sunram 5 controller with user interface.

return spring action must be present with approximately half the force of the active stroke. This could be provided by, for example, constant pressurization of the other end at half the system pressure, or equivalently by pressurization of a smaller cylinder area at system pressure. In Sunram 5 the pneumatic manifold is controlled by an Arduino microcontroller, which is in turn commanded by a user interface. Fig. 22.23 shows a photo of the controller. Besides the valves and the Arduino controller, it also includes a pressure adjustment knob with gauge, internal pressure tanks, an emergency stop button, six sliders for direct control of the stepper motors, a display for status information accessible by a menu dial, and a tristate biopsy fire switch. On the right panel there are connections for electric power, air supply, and universal serial bus (USB). Feed-forward is used as the control method; this is sufficient provided that the initial position is known and no steps are skipped thereafter. The calibration process involves driving the robot to a predetermined position, either by visual guidance or by operating each axis toward its endstops at low pressure (or high speed). In order to guarantee that no

Sunram 5: A Magnetic Resonance-Safe Robotic System for Breast Biopsy Chapter | 22


steps are skipped after calibration, motor forces must exceed the maximum joint load forces by a certain safety margin. A typical stepper motor in Sunram 5 is capable of exerting over 50 N at 0.5 MPa, which is more than enough to insert a sharp needle into breast tissue. It is also possible to increase the stepping frequency beyond the bandwidth when executing free-air movements, in order to minimize the total procedure time. While it is certainly possible to incorporate incremental position encoding in an MR-safe robot using, for example, fiber optics, the associated complexity and additional space requirements are considerable. If additional space were available, it could also be used to increase the cylinder cross-sectional areas instead, leading to higher motor forces and reducing the need for positional encoders. The presence of the robot inside the MRI scanner offers a potential indirect means of position feedback: by including suitable passive fiducials, such as oil capsules, in the robot design, the relative locations of the various linkages can be measured with submillimeter accuracy in a typical MRI scanner.

In the clinical workflow the breast is immobilized by a fixation system to which the robot is attached. Fiducials in the frame define the robot-to-MRI coordinate transformation, allowing the target lesion to be represented in robot coordinates. After the radiologist has chosen the desired needle trajectory, the joint configuration of Sunram 5 can be computed. The serial kinematic design of Sunram 5 allows straightforward computation of the joint configuration corresponding to the desired insertion trajectory. The needle lift/tilt and insertion mechanisms all move the needle in the same vertical plane, so the first step is to align that vertical plane with the target lesion by choosing appropriate coordinate values for joints J1 and J2. The next step is to calculate the coordinate values for joints J3 and J4, aligning the needle holder with the lesion. Finally, coordinate values for joints J5 and J6 are calculated based on the distance from the needle tip to the lesion. The biopsy gun can now be fired and the needle with the specimen extracted.

22.6 Evaluation of stepper motors and Stormram 4

Stepper motors can be characterized by finding the relation between output force, system pressure, and stepping frequency. For a full robotic manipulator such as the Sunram 5, the relevant characteristics are the needle tip positioning accuracy and precision and the average travel time in an MRI environment.

22.6.1 Stepper motor force

Sunram 5 consists of several cylinders and stepper motors. The theoretical output force of a pneumatic piston is its cross-sectional area multiplied by the system pressure. The interaction of a piston with a straight or curved rack by means of the wedge mechanism transfers this force with a specific leverage, determined by the teeth pitch and depth. The actual output force is lower due to friction in the sliding parts and can be measured. Fig. 22.24 shows the schematic setup for evaluating the stepper motor force as a function of pressure and stepping frequency. A set of masses with known weights is used to generate forces, transferred to the motor by a rope over a pulley. The pressure is adjusted to the lowest level at which the motor can still lift the given masses without skipping steps. Fig. 22.25 shows a photo of an actual setup and Fig. 22.26 shows the pressure-force graph for this specific motor. There is a good linear relationship between pressure and force, with a maximum of 62 N at a pressure of 0.65 MPa. Given the cylinder cross-sectional area of 10 × 10 mm and the leverage factor of 2.4, this corresponds to a mechanical efficiency of 43% [23].

FIGURE 22.24 Schematic setup for stepper motor force measurements.


FIGURE 22.25 Photo of force measurement setup for T-26 stepper motor.

FIGURE 22.26 Pressure-force graph for the T-26 stepper motor: measured force (N) versus gauge pressure (MPa), with linear fit.

At a relatively low pressure of 0.15 MPa, the resulting force of 11 N is already large enough to move a Sunram 5-like robot along all its axes, except for inserting the needle through relatively stiff tissue, for which more pressure is needed. Besides this motor, the authors of this chapter have earlier analyzed and reported on 10 different individual stepper motor implementations, with mechanical efficiencies ranging from 22% to 76% [20,26,28]. Relatively low efficiencies (22%–34%) were found for the Stormram 1 motors, which have cylinder cross-sectional dimensions of 20 × 4.1 mm. The 5:1 aspect ratio results in a relatively high perimeter-to-surface-area ratio, and the low height of 4.1 mm makes effective sealing challenging [20]. This issue was solved by using square-shaped cylinders with comparable width and height in all subsequent designs. Another case of low measured efficiency (24%) was found in certain motors with 3D printed teeth with a small pitch size (1.2 mm or smaller). Examination under a microscope reveals that the associated teeth tips are rounded, as shown in Fig. 22.27 (left), resulting in a reduced leverage factor and an increased risk of jamming [28]. This issue was solved by using laser-cut teeth in the pistons and/or the rack, which have sharper tips as shown in Fig. 22.27 (right).
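The theoretical piston force underlying these efficiency figures follows directly from pressure and cross-sectional area; a minimal numeric check:

```python
def piston_force(pressure_pa, width_m, height_m):
    """Ideal force of a rectangular-section pneumatic piston: F = p * A."""
    return pressure_pa * width_m * height_m

# A 10 x 10 mm piston at 0.65 MPa develops 65 N before the wedge
# leverage and the frictional losses discussed above are accounted for;
# the measured output of the T-26 motor at that pressure is 62 N.
force_n = piston_force(0.65e6, 0.010, 0.010)
```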

22.6.2 Stepping frequency

The maximum stepping frequency is another important characteristic; it mainly depends on the tube length (approximately 5 m in an MRI environment). Measurements have shown that the bandwidth is approximately 10 Hz


FIGURE 22.27 3D printed (left) and laser-cut (right) teeth with pitch size 1.2 mm, as seen under a microscope.

[26]. One factor is the limited speed of pressurized air in the tubes, which cannot exceed the speed of sound (343 m/s) and is further restricted by friction in the tubes. Another important factor is the finite volume of air inside the tubes (24.5 mL when using 4-mm tubes with a 2.5 mm inner diameter), which implies that any further increase in tube thickness (to reduce wall friction) would require larger valves with larger orifices. Nevertheless, the stepper motors in Stormram 4 have been shown to operate at frequencies up to 65 Hz when moving in free air, where full force is not needed.
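The quoted dead volume can be checked directly from the tube geometry:

```python
import math

def tube_air_volume_ml(inner_diameter_m, length_m):
    """Dead volume of air inside a pneumatic supply tube, in millilitres."""
    radius = inner_diameter_m / 2.0
    return math.pi * radius ** 2 * length_m * 1e6  # m^3 -> mL

# 5 m of tubing with a 2.5 mm inner bore, as used between controller
# and robot, holds about 24.5 mL of air, matching the figure quoted above.
volume_ml = tube_air_volume_ml(2.5e-3, 5.0)
```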

22.6.3 Accuracy

Positional accuracy and overshoot have been studied for several stepper motors. Measurements have shown that hysteresis is present in two-cylinder stepper motors, with a magnitude of 60%–80% of the step size. The positional accuracy depends on the step size due to discretization: by design, the lower bound is half the step size in a single-speed stepper motor. In dual-speed stepper motors the accuracy can be as good as 0.1 mm [28]. The repeatability, or precision, is very good in general, and has been measured to be as good as 0.01 mm [26].
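The half-step discretization bound can be illustrated with a simple rounding model; the step sizes below are arbitrary example values, not Sunram 5's actual parameters:

```python
def quantize(target_mm, step_mm):
    """Round a commanded displacement to a whole number of motor steps.

    Returns (steps, residual); by construction the residual is at most
    half a step, the discretization bound quoted in the text.
    """
    steps = round(target_mm / step_mm)
    return steps, target_mm - steps * step_mm

coarse = quantize(10.3, 0.5)   # coarse mode: residual up to 0.25 mm
fine = quantize(10.3, 0.05)    # fine mode of a dual-speed motor
```

This also shows why the dual-speed motors help: the fine step bounds the residual error at a tenth of the coarse bound, while the coarse steps cover distance quickly.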

22.6.4 Stormram 4 evaluation

Stormram 4 is the predecessor of Sunram 5. While it lacks the dual-speed motors of Sunram 5, its kinematic design is comparable and its measurement results give insight into the projected capabilities of Sunram 5. Fig. 22.28 shows the experimental setup for evaluating the accuracy and precision of Stormram 4 in free air. Stormram 4 was programmed to approach all 35 targets in succession by navigating through a sequence of predefined waypoints. Afterwards the offset of each puncture from its respective target was measured. The average precision was found to be 0.71 mm in the horizontal direction and 0.21 mm in the vertical direction. A bias of 1.0 mm was also found in the horizontal direction, due to manufacturing and/or calibration inaccuracies [23]. Fig. 22.29 shows the experimental setup for accuracy measurements inside a 0.25 T MRI scanner (G-scan, Esaote SpA, Genoa, Italy). Stormram 4 was mounted on a table with 10 fiducials in it. A polyvinyl chloride (PVC) phantom was placed on the table, a preoperative scan was made, and a series of 30 targets inside the phantom was selected. For each target, its MRI coordinates were first transformed to the robot coordinate frame as defined by the positions of the fiducials, after which a suitable joint coordinate vector was calculated. The robot, which had been calibrated previously in its zero-position, was then operated through a user interface by manually rotating the turn knobs corresponding to each joint until the displayed joint coordinates matched the target coordinates. A confirmation scan was then taken and automatically segmented to reconstruct the needle tip location and measure the distance to the target coordinates. Tissue deformations were left out of scope in this experiment, as no fixation system was used. Fig. 22.30 shows an example confirmation scan after applying geometric distortion correction.

The 10 fiducials in the table define the robot coordinate system, which is reconstructed automatically from the fiducial orientations and interfiducial distances. The actual needle trajectory is also reconstructed automatically, based on the connectivity graph of all low-intensity voxels in the scan, grouped into regions of equal shortest distance from a reference (low-intensity) voxel outside the phantom. The use of these automated algorithms makes optimal use of all available measurement data and minimizes human estimation errors, resulting in subvoxel precision of the reconstruction parameters.
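A standard way to obtain such a fiducial-based transformation between the MRI and robot coordinate frames is a least-squares rigid registration (the Kabsch algorithm). The chapter does not state which algorithm the authors used, so the sketch below is illustrative only:

```python
import numpy as np

def rigid_registration(mri_pts, robot_pts):
    """Least-squares rigid transform (Kabsch) mapping MRI fiducial
    coordinates onto robot-frame coordinates, so that
    target_robot = R @ target_mri + t."""
    P = np.asarray(mri_pts, dtype=float)    # N x 3 fiducials, MRI frame
    Q = np.asarray(robot_pts, dtype=float)  # same fiducials, robot frame
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Given the fitted (R, t), a lesion selected in the MRI scan maps to robot coordinates as `R @ lesion + t`, after which the joint configuration can be computed.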


FIGURE 22.28 Stormram 4 accuracy experiment in free air.

FIGURE 22.29 Measurement setup of Stormram 4 in a 0.25 T MRI scanner. MRI, Magnetic resonance imaging.

FIGURE 22.30 Three-dimensional rendering of an MRI scan consisting of a phantom with needle inserted and 10 fiducials. The crosshair indicates the target location. MRI, Magnetic resonance imaging.


After targeting all 30 sites, the 3D targeting error of Stormram 4 was measured to be 1.87 ± 0.80 mm (range 0.69–3.57 mm) [24].

22.7 Conclusion

Pneumatic devices are effective in actuating MR-safe robotic systems. Several techniques for creating pneumatic actuators have been presented in this chapter: single-acting and double-acting cylinders, linear and curved stepper motors, and dual-speed stepper motor combinations. The presented actuators offer advantages over other state-of-the-art MRI actuators in terms of compactness, output force/torque, ease of control, rapid prototypeability, and/or MR safety. Several robotic systems actuated by the described pneumatic cylinders and stepper motors have been developed for MRI-guided breast biopsy. The latest generation, Sunram 5, has been shown to be a compact and versatile prototype. Thanks to the dual-speed motor implementations on two axes, it combines high speed with high accuracy, which is especially useful in an MRI environment, where the long tubes that are necessary restrict the stepping frequencies. The pneumatic cylinders and stepper motors are manufactured by 3D printing and laser-cutting. The single-acting cylinder model is useful for exploring sealing and manufacturing techniques, while the two-cylinder stepper motor design with laser-cut teeth combines high precision with low friction and good durability. The amount of friction depends on many factors and varies from one motor implementation to another. Measurements on several different stepper motor implementations yield efficiencies from 50% to 76%, provided that the teeth tips are sufficiently sharp, the cylinder cross-section is approximately square, and the seal dimensions are optimized to minimize friction and leakage. The kinematic and mechanical designs of Sunram 5 are extensively described in this chapter. Based on Stormram 4's measurement results, the projected needle tip accuracy in free air is less than 1 mm, while the expected phantom targeting accuracy in MRI is on the order of 2 mm. Given the capabilities of Sunram 5 and the earlier Stormram robots, the presented pneumatic stepper motor technology is a promising actuation technique for MR-safe robotic systems in general. Concerning MRI-guided breast biopsy, Sunram 5 is not yet ready for clinical use. Additional research is needed before clinical trials can be conducted, especially on calibration, path planning and control, sterilization, and safety. Sunram 5 has been shown to be an advanced proof of concept which may shape the future of MRI-guided robotic interventions.

References

[1] Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin 2018;68:394–424.
[2] Atchley DP, Albarracin CT, Lopez A, Valero V, Amos CI, Gonzalez-Angulo AM, et al. Clinical and pathologic characteristics of patients with BRCA-positive and BRCA-negative breast cancer. J Clin Oncol 2008;26(26):4282–8.
[3] Shellock FG, Spinazzi A. MRI safety update 2008: part 2, screening patients for MRI. AJR Am J Roentgenol 2008;191(4):1140–9.
[4] Su H, Cardona DC, Shang W, Camilo A, Cole GA, Rucker DC, et al. A MRI-guided concentric tube continuum robot with piezoelectric actuation: a feasibility study. In: 2012 IEEE international conference on robotics and automation (ICRA). IEEE; 2012. p. 1939–45.
[5] Su H, Shang W, Cole G, Li G, Harrington K, Camilo A, et al. Piezoelectrically actuated robotic system for MRI-guided prostate percutaneous therapy. IEEE/ASME Trans Mechatron 2015;20(4):1920–32.
[6] Hungr N, Bricault I, Cinquin P, Fouard C. Design and validation of a CT- and MRI-guided robot for percutaneous needle procedures. IEEE Trans Robot 2016;32(4):973–87. Available from: https://doi.org/10.1109/TRO.2016.2588884.
[7] Chapuis D, Gassert R, Ganesh G, Burdet E, Bleuler H. Investigation of a cable transmission for the actuation of MR compatible haptic interfaces. In: Biomedical robotics and biomechatronics, 2006. BioRob 2006. The first IEEE/RAS-EMBS international conference on. IEEE; 2006. p. 426–31.
[8] Yang B, Tan UX, McMillan AB, Gullapalli R, Desai JP. Design and control of a 1-DOF MRI-compatible pneumatically actuated robot with long transmission lines. IEEE/ASME Trans Mechatron 2011;16(6):1040–8. Available from: https://doi.org/10.1109/TMECH.2010.2071393.
[9] Franco E, Brujic D, Rea M, Gedroyc WM, Ristic M. Needle-guiding robot for laser ablation of liver tumors under MRI guidance. IEEE/ASME Trans Mechatron 2016;21(2):931–44. Available from: https://doi.org/10.1109/TMECH.2015.2476556.
[10] Kokes R, Lister K, Gullapalli R, Zhang B, Richard H, Desai JP. Towards a needle driver robot for radiofrequency ablation of tumors under continuous MRI. In: Robotics and automation, 2008. ICRA 2008. IEEE international conference on. IEEE; 2008. p. 2509–14.
[11] Whitney JP, Glisson MF, Brockmeyer EL, Hodgins JK. A low-friction passive fluid transmission and fluid-tendon soft actuator. In: 2014 IEEE/RSJ international conference on intelligent robots and systems. 2014. p. 2801–8. Available from: https://doi.org/10.1109/IROS.2014.6942946.
[12] Felfoul O, Becker A, Bergeles C, Dupont PE. Achieving commutation control of an MRI-powered robot actuator. IEEE Trans Robot 2015;31(2):387–99.


[13] Ho M, Kim Y, Cheng SS, Gullapalli R, Desai JP. Design, development, and evaluation of an MRI-guided SMA spring-actuated neurosurgical robot. Int J Robot Res 2015;34(8):1147–63.
[14] Stoianovici D, Kim C, Petrisor D, Jun C, Lim S, Ball MW, et al. MR safe robot, FDA clearance, safety and feasibility of prostate biopsy clinical trial. IEEE/ASME Trans Mechatron 2017;22(1):115–26. Available from: https://doi.org/10.1109/TMECH.2016.2618362.
[15] Bomers JGR, Bosboom DGH, Tigelaar GH, Sabisch J, Fütterer JJ, Yakar D. Feasibility of a 2nd generation MR-compatible manipulator for transrectal prostate biopsy guidance. Eur Radiol 2017;27(4):1776–82. Available from: https://doi.org/10.1007/s00330-016-4504-2.
[16] Stoianovici D, Patriciu A, Petrisor D, Mazilu D, Kavoussi L. A new type of motor: pneumatic step motor. IEEE/ASME Trans Mechatron 2007;12(1):98–106. Available from: https://doi.org/10.1109/TMECH.2006.886258.
[17] Bosboom DGH, Fütterer JJ, Barentsz JO. Motor system, motor, and robot arm device comprising the same, patent WO2012069075A1. 2012.
[18] Sajima H, Kamiuchi H, Kuwana K, Dohi T, Masamune K. MR-safe pneumatic rotation stepping actuator. J Robot Mechatron 2012;24(5):820–7. Available from: https://doi.org/10.20965/jrm.2012.p0820.
[19] Sajima H, Sato I, Yamashita H, Dohi T, Masamune K. Two-DOF non-metal manipulator with pneumatic stepping actuators for needle puncturing inside open-type MRI. In: World Automation Congress (WAC), 2010. IEEE; 2010. p. 1–6.
[20] Groenhuis V, Stramigioli S. Laser-cutting pneumatics. IEEE/ASME Trans Mechatron 2016;21(3):1604–11. Available from: https://doi.org/10.1109/TMECH.2015.2508100.
[21] Abdelaziz MEMK, Groenhuis V, Veltman J, Siepel F, Stramigioli S. Controlling the Stormram 2: an MRI-compatible robotic system for breast biopsy. In: 2017 IEEE international conference on robotics and automation (ICRA). IEEE; 2017. p. 1746–53.
[22] Groenhuis V, Veltman J, Siepel FJ, Stramigioli S. Stormram 3: a magnetic resonance imaging-compatible robotic system for breast biopsy. IEEE Robot Autom Mag 2017;24(2):34–41. Available from: https://doi.org/10.1109/MRA.2017.2680541.
[23] Groenhuis V, Siepel FJ, Veltman J, Stramigioli S. Design and characterization of Stormram 4: an MRI-compatible robotic system for breast biopsy. In: 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS). 2017. p. 928–33. Available from: https://doi.org/10.1109/IROS.2017.8202256.
[24] Groenhuis V, Siepel FJ, Veltman J, van Zandwijk JK, Stramigioli S. Stormram 4: an MR safe robotic system for breast biopsy. Ann Biomed Eng 2018. Available from: https://doi.org/10.1007/s10439-018-2051-5.
[25] Groenhuis V, Siepel FJ, Welleweerd MK, Veltman J, Stramigioli S. Sunram 5: an MR safe robotic system for breast biopsy. In: Hamlyn symposium on medical robotics: pioneering the next generation of medical robotics. 2018. p. 82–3.
[26] Groenhuis V, Stramigioli S. Rapid prototyping high-performance MR safe pneumatic stepper motors. IEEE/ASME Trans Mechatron 2018;23(4):1843–53.
[27] Groenhuis V, Siepel FJ, Stramigioli S. Pneumatic stepper motor and device comprising at least one such pneumatic stepper motor, patent WO2018038608(A1). 2018.
[28] Groenhuis V, Siepel FJ, Stramigioli S. Dual-speed MR safe pneumatic stepper motors. In: Robotics: science and systems 2018. 2018.
[29] Groenhuis V. Supplementary files for printing pneumatic devices. 2018. Available from: https://doi.org/10.4121/uuid:9435e52e-0d9e-4bcc-90aa-fc1a1901622c.

23 New Advances in Robotic Surgery in Hip and Knee Replacement

Andrea Volpin1, Carla Maden2 and Sujith Konan3

1 Royal Derby Hospital, Derby, United Kingdom
2 University College London, London, United Kingdom
3 University College London Hospitals, London, United Kingdom

ABSTRACT Despite the success of hip and knee arthroplasty, the risk of failure due to component malposition remains a problem. Robotic technology is now available to surgeons for use in hip and knee arthroplasty to increase the precision of planning and placement of components. This chapter reviews the key aspects of robotic hip and knee surgery and aims to provide an update on this technology. Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00023-2 © 2020 Elsevier Inc. All rights reserved.


23.1 Introduction

The number of total hip arthroplasties (THAs) performed annually in the United Kingdom is projected to rise to approximately 95,800–439,000 by 2035 [1]; the corresponding projection for total knee arthroplasty (TKA) is 118,000–1,219,000. Patients who undergo THA and TKA surgery have a wide range of expectations related to their age and preoperative activity level [2]. Not achieving these goals, combined with preoperative pain and postoperative complications, are all possible risk factors that influence clinical and functional outcomes [2]. Currently it is estimated that up to 19% of patients who undergo TKA may not be fully satisfied with their outcome [2]. The recovery period following THA and TKA can be challenging, and possible complications include stiffness, weakness, significant gait impairment, and postoperative pain that may persist for months [3]. Besides infection, the most common reason for revision total joint arthroplasty is aseptic loosening, and its main causes are malposition of the components and soft-tissue imbalance [4]. Despite improvements in surgical techniques, instrumentation, and implant design, revision surgery is expected to grow, and it is a burden not only from the surgeon's and patient's point of view but also socioeconomically [4]. Various technologies have been introduced to assist the orthopedic surgeon in accurate and precise placement of hip and knee arthroplasty components, with the intention of minimizing mechanical implant failure. In particular, robotic-assisted orthopedic surgery [5–8], introduced over the last two decades, has been claimed to better restore the normal kinematics of the knee and hip joint by reproducing the planned alignment. Robotic devices, such as the Caspar (OrtoMaquet, Rastatt, Germany), Robodoc (Curexo Technology Corporation, Fremont, CA), Mako (Stryker, Mahwah, NJ), and Navio (Smith and Nephew, United States) systems, have been shown to have excellent surgical and clinical patient outcomes [9–12]. This chapter aims to serve as a review of the current state of robotic surgery in hip and knee arthroplasty.

23.2 Challenges with manual surgery

23.2.1 Unicompartmental knee arthroplasty

Medial and lateral unicompartmental knee arthroplasty (UKA) is a popular alternative to TKA. Particularly in unicondylar degeneration, UKA leads to a shorter recovery period, better knee kinematics, and decreased postoperative pain, while preserving the rest of the native knee [13]. Despite studies showing medium- to long-term survivorship generally above 90% at 10 years [14], aseptic loosening and persistent pain remain the main causes of UKA implant failure, with malposition of the components leading to this phenomenon [14]. One of the major factors affecting implant survivorship in UKA is tibiofemoral alignment. Some authors have suggested that correcting overall tibial alignment to neutral or valgus in medial UKA may accelerate wear of the lateral compartment. Similarly, varus undercorrection in a medial UKA may result in tibial component loosening or premature polyethylene wear [15]. In a multicenter study analyzing 418 failed UKAs, Epinette et al. [16] reported that 12% of aseptic failures of UKA were due to faulty implantation and inadequate positioning of the components, and that 49% of the failures occurred within the first 5 years after the procedure. Other studies have provided evidence that errors of more than 2 or 3 degrees in the coronal plane and excessive tibial slope can predispose to mechanical failure in UKA [17]. The task of reliably achieving perfect alignment when using jig-based approaches and minimally invasive techniques can be challenging for surgeons [17].

23.2.2 Patellofemoral arthroplasty

Although the early patellofemoral implants had mixed results and outcomes, modern design improvements have enhanced patellar tracking and reduced patellofemoral complications [18]. In particular, prostheses such as the Avon (Stryker Orthopaedics, Mahwah, NJ), which utilizes an anterior cut similar to a TKA system, have shown promising results [18]. Metcalfe et al. [19] reviewed a total of 483 Avon implants (368 patients) and found that the implant survival rate was 77.3% [95% confidence interval (CI) 72.4–81.7] at 10 years and 67.4% (95% CI 72.4–81.7) at 15 years. Good mid-term survivorship and clinical outcomes have been reported by Middleton et al. [20], who reviewed 103 patellofemoral replacements in 85 patients with a mean follow-up of 5.6 years (range 2.9–14.2 years). The

New Advances in Robotic Surgery in Hip and Knee Replacement Chapter | 23


mean postoperative Oxford Knee Score was 36, and there were nine conversions to TKA for disease progression (mean time to revision 2.9 years) and one revision of a femoral component for malposition. However, satisfactory results are also related to surgeon volume and careful patient selection [19]. Devices that allow accurate planning and placement of components may enable better results even in the hands of low-volume surgeons.

23.2.3 Total knee arthroplasty

Restoring coronal alignment in TKA is essential for long-term implant survivorship [21], as is preservation of the surrounding soft-tissue envelope during the procedure [22]. A neutral mechanical axis alignment is the aim of the majority of surgeons; however, this alignment can be difficult to achieve in cases of severe valgus or varus deformity, and correct alignment and soft-tissue tensioning using a measured resection or gap-balancing technique can be very challenging [23,24]. Additionally, it has been reported that severe preoperative coronal deformities carry a greater risk of implant failure, and that correction of the deformity can decrease the risk of failure to 0.5%, suggesting the need for experienced surgeons in these cases [25]. However, some authors have reported that obtaining an ideal alignment in TKA did not translate into a lower rate of revision due to mechanical failure [26]. This inability to demonstrate a difference in survivorship between aligned and malaligned, but presumably well-balanced, TKAs has called into question what the ideal limb alignment for TKA surgery is [26]. Recent studies have shown that a tibial component in 3–5 degrees of varus with the limb in mechanical varus still has good survivorship and functional outcomes [27]. It is clear that with TKA the alignment targets are narrow, and any device that allows accurate planning and execution of the desired targets is likely to be beneficial.

23.2.4 Total hip arthroplasty

In THA surgery the correct positioning of the acetabular and femoral components plays an essential role in the survivorship of the implant and in functional outcomes [28]. Malposition of the implant can lead to dislocation in the short term, or to implant impingement that can cause wear or breakage of the liner and thereby compromise the stability of the implant [28]. With conventional methods it is still very difficult to achieve accurate positioning of the components. In 1978, Lewinnek et al. [29] described an acetabular component safe zone of 40 ± 10 degrees of inclination and 15 ± 10 degrees of anteversion. These landmarks have guided hip surgeons for decades. In 2011 Callanan et al. demonstrated that only 59.3% of implants were within the target safe range in conventional THA and hip resurfacing [30]. The acetabular component is more frequently the cause of positioning errors, because the positions of the pelvis and the acetabulum are influenced by the longitudinal axis of the spine and the body [31]. In another study, Jolles et al. [32] demonstrated that 10 different surgeons, working on 10 identical plastic models, placed the 150 cups with 10 degrees of anteversion error and 3.5 degrees of inclination error, despite the model being fixed on a table with a proper device. The majority of surgeons still rely on anatomical landmarks or mechanical guides to avoid acetabular component malposition. Archbold et al. [33] described the use of the transverse acetabular ligament; however, other authors were able to identify it intraoperatively in only 47% of patients, and cup position was not improved in these patients [34].

23.3 Robotic knee surgery experience

23.3.1 Unicompartmental knee arthroplasty

The primary aim of robotics in arthroplasty is the precise reproduction of the surgeon's preoperative plan during the surgical procedure [1]. Recent studies have demonstrated that robotic systems can improve implant position and alignment in both total knee and THA surgery [35,36]. In UKA in particular, several studies [17] have shown significant improvement in component alignment compared to conventional techniques. They have also shown effectiveness in the depth of bone resection (as manifested by the thickness of the polyethylene tibial insert used) and in soft-tissue gap balancing [37].


Lonner and Moretti [37] demonstrated in the initial 31 consecutive patients that underwent robotic-assisted medial UKA by a single surgeon that the tibial component was regularly placed on 0.2 compared with conventional methods that was in more varus 2.7 . In another study MacCallum et al. [38] found that a comparison of tibial baseplate alignment in postoperative knee radiographs between robotic-arm-assisted and conventional UKAs the coronal baseplate was more accurate in the first group. Similarly, Bell et al. [39] reported, in a prospective randomized controlled study comparing 62 UKAs done with MAKO robotic-arm-assisted and 58 UKAs done with conventional technique, that accuracy of component positioning was improved with the use of the robotic-assisted surgical procedure, with lower root-mean-square errors and significantly lower median errors in all component parameters (P , .01). Kayani et al. [40] demonstrated that robotic-arm-assisted UKA was associated with a learning curve of six cases for operating time and surgical team confidence. They also confirmed that robotic-arm-assisted UKA improved the accuracy of femoral (P , .001) and tibial (P , .001) implant positioning with no additional risk of postoperative complications compared to conventional jig-based UKA. Blyth et al. [41] reviewed 139 patients, who were randomized to receive either a manual UKA implanted with the aid of traditional surgical jigs, or a UKA implanted with the aid of a guided robotic-arm-assisted system, and they found that in the robotic-arm-assisted group the patients had early better pain and functional scores, but no difference was observed 1 year postoperatively. However, the authors noted that on subgroup analysis, those patients considered to be highly active preoperatively had statistically better improvement in function with robotic assistance than with conventional techniques for KSS (P 5 .0064), Oxford Knee Score (P 5 .0106), and Forgotten Joint Score (P 5 .0346). 
In a prospective, randomized controlled trial on the same study cohort, Gilmour et al. [42] analyzed the 2-year results of 58 patients who underwent robotic-arm-assisted UKA and 54 who underwent conventional UKA. They found that, overall, participants achieved an outcome equivalent to the most widely implanted UKA in the United Kingdom; however, subgroup analysis suggested that more active patients may benefit from robotic-arm-assisted surgery. Additionally, while no revisions were necessary in the robotic-assisted group, there were two revisions (2.8%) in the manual group. Robot assistance has also been shown to result in a more conservative tibial cut than conventional methods: Ponzio and Lonner [43] showed, in a retrospective comparison of polyethylene insert sizes including 8421 UKAs performed with robotic assistance and 27,989 performed with conventional methods, that 8- and 9-mm polyethylene inserts were used in 93.6% of the robotic group versus 84.5% of the conventional group.

23.3.2

Total knee arthroplasty

Robotic surgery has also been introduced in TKA with the goal of improving component alignment and thereby the implant's survivorship. In 2007, Bellemans et al. [44] reported 25 consecutive TKA cases using an active surgical robot with a mean follow-up of 5.5 years. They demonstrated excellent implant positioning, with alignment achieved within 1 degree of neutral in all three planes in all cases. However, they concluded that, owing to the excessive operating time required for robotic implantation, the technical complexity of the system, and the extremely high operational costs, they had to abandon the procedure. In a recent cadaveric study by Moon et al. [45] comparing 10 robot-assisted procedures and 10 conventional operations, both procedures on each cadaver were performed by the same surgeon and the position of the prostheses was checked with three-dimensional computed tomography (CT) scanning. The robot-assisted technique showed better accuracy in femoral rotational alignment than conventional surgery (P = .006). In a prospective randomized study of 30 patients undergoing simultaneous bilateral TKA, with one knee performed using robotic assistance and the contralateral knee performed with a conventional method, Song et al. [46] demonstrated that the robotic-assisted knees had better leg alignment, but longer operation times and longer skin incisions. Liow et al. [47] prospectively randomized 31 patients (robot-assisted) and 29 patients (conventional) and showed that there were no mechanical axis outliers (> ±3 degrees from neutral) and no notching in the robot-assisted group, as compared with 19.4% (P = .049) and 10.3% (P = .238), respectively, in the conventional group. The robot-assisted group had 3.23% joint-line outliers (>5 mm) as compared with 20.6% in the conventional group (P = .049).
A follow-up study of that cohort at 2 years [45] showed that the robotic-assisted group displayed a trend toward higher scores on SF-36 quality-of-life measures, with significant differences in the SF-36 vitality (P = .03) and role-emotional (P = .02) domains, and a larger proportion of patients achieving the SF-36 vitality minimal clinically important difference (MCID) (48.4% vs 13.8%, P = .009).

New Advances in Robotic Surgery in Hip and Knee Replacement Chapter | 23


In a recent study, Kayani et al. [8] compared intraoperative bone and soft-tissue injury in 30 consecutive TKAs performed with robotic assistance and 30 TKAs performed with conventional methods. They proposed a classification, the Macroscopic Soft Tissue Injury score, based on an intraoperative assessment of the soft-tissue envelope following TKA. This classification system uses a maximum of 40 points, and the score was greater in patients undergoing robotic-assisted TKA (30.85 ± 3.1 vs 27.68 ± 3.9, P < .05). In a comparison of robotic-assisted and computer-navigation systems, Clark and Schmidt [48] showed that operative time in the robotic-assisted group was on average 9.0 minutes shorter and intraoperative malalignment was 0.5 degrees less. Moreover, patients who underwent robotic-assisted TKA were discharged 0.6 days earlier.

23.4

Robotic hip surgery experience

Similar to TKA, the aim of the robotic system in THA is to achieve ideal alignment of the implant in keeping with the preoperative plan. Reproducing proper alignment of the cup with the correct anteversion can be a challenge even for experienced surgeons [6]. An early prospective study comparing ROBODOC with conventional manual techniques in THA reported a significant improvement in clinical outcomes in the Harris Hip Score at 12 months and the Mayo Hip Score at 6 and 12 months [49]. However, this difference was not sustained at the 24-month follow-up, suggesting a short-term rather than long-term benefit. In a more recent study, Nakamura et al. found that incorporating robotics improved the precision of the implant position, reduced variation in limb length, and produced less stress shielding of the proximal femur at a minimum of 5 years' follow-up [50]. However, although the clinical outcomes reported on the Japanese Orthopaedic Association score were significantly higher in the robotics group at 3 years, this difference was no longer apparent at 5 years. In a recent study, Tsai et al. investigated the feasibility of restoring native femoral and acetabular anatomy in THA patients [51]. They found that while neither method was capable of fully restoring the native anatomy, a significantly higher percentage of robotic-assisted procedures were in the safe zone and were more precise. This suggests that robotic procedures may be the choice for a more accurate and reproducible result. This finding is supported by a cadaveric study, which found that the root-mean-square error in manual implantation was five times higher for cup inclination and 3.4 times higher for cup anteversion (P < .01) [49]. Robotic systems may therefore be considered in THA, where failures are often attributed to malposition of the components. In a comparison of conventional hand rasping and robotic milling in cementless THA, Nishihara et al. found that robotic assistance improved both clinical and radiographic outcomes [35]. The robotic group scored significantly higher on the Merle d'Aubigné hip score 2 years postoperatively, and a significantly higher number were able to walk more than 6 blocks without a cane in 13 days. Radiographically, the robot allowed more accurate preparation of the femoral canal.

Common to THA, UKA, and TKA, cost remains a major drawback to the use of robotics in surgery. In THA, an additional $700 per case has been reported with use of the ROBODOC system [49]; these extra costs should be considered in addition to the expense of significantly increased operating room time, particularly during the learning curve when first incorporating robotics into practice. Another consideration is the safety and reliability of robotic systems. In THA, the rate of conversion to manual implantation has been reported to be as high as 18%; the impact of this on clinical outcomes is not clear [49]. There is also concern over the efficacy of robotics should the plan need to be changed during the operation. As the robot itself cannot create a new plan of action, the surgeon's experience and expertise become paramount should intervention be required [52]. In one study, out of 75 ROBODOC-assisted THAs, two were abandoned due to intraoperative complications and converted to the manual method. In another, also using ROBODOC, technical complications relating to the robot were seen in 9.3% of cases and included acetabular damage and reregistration [53]; the results achieved were no better or worse than in the manual group. However, such drawbacks may be justified by the benefits of improved outcomes, such as fewer complications and earlier hospital discharge, once the surgical team is past its learning curve.

23.5

Preoperative preparation

The preoperative planning process is crucial for the effective use of robotics in TKA and THA. The accurate intraoperative positioning of components, facilitated by production of a careful plan, reduces interpatient variability [39]. The preoperative process typically involves the creation of an individualized model and interventional plan, and robotic systems for TKA and THA follow a similar preparation. Currently, some robotic systems require preoperative imaging. In TKA, a preoperative CT scan is performed and used to create a replica of the patient's knee, including the capsule, ligaments, and muscle [7,52–55]. The CT scan also allows the surgeon to obtain the information needed to determine component sizing, positioning, and bone resection, which will then be performed by the robot. The optimal implant position is then saved on the system. In UKA, a 3D computer model of the patient's bone is created from the CT scan; the bones are extracted from the images using a semiautomated segmentation algorithm [50,56] (Fig. 23.1). In THA, a preoperative CT scan is also used during the planning process. The images are used to create a 3D reconstruction, from which the size of the prosthesis and its position in the femur and in the acetabulum are determined [50,56,57] (Fig. 23.2).

FIGURE 23.1 Computed tomography-based preoperative planning for robotic-assisted total hip arthroplasty.

FIGURE 23.2 Computed tomography-based preoperative planning for robotic-assisted unicompartmental knee arthroplasty.
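As a toy illustration of the semiautomated segmentation step: bone appears bright on CT (high Hounsfield units), so a first-pass bone mask can be produced by simple thresholding, which the user then refines. The threshold and slice values below are made up, and real systems operate on full 3D volumes with far more sophisticated algorithms.

```python
BONE_HU_THRESHOLD = 300  # illustrative Hounsfield-unit cutoff for bone

def bone_mask(slice_hu):
    """Binary mask of pixels at or above the bone threshold for one CT slice."""
    return [[1 if hu >= BONE_HU_THRESHOLD else 0 for hu in row] for row in slice_hu]

# Tiny made-up CT slice (Hounsfield units): soft tissue ~40-60, bone ~700
slice_hu = [
    [40, 60, 700, 720],
    [35, 55, 680, 40],
]
print(bone_mask(slice_hu))
```

Stacking such per-slice masks and extracting the bone surface is what yields the 3D bone model used for planning.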


FIGURE 23.3 Computed tomography-based preoperative planning for robotic-assisted total knee arthroplasty.

Imageless robotic systems are also in use; these instead rely on registering anatomical landmarks once in the operating theater. The most commonly used imageless system is Navio PFS, a handheld cutting tool used for unicondylar and patellofemoral knee arthroplasty. Its optical-based navigation morphs 3D images intraoperatively to provide a virtual model of the knee joint. Other imageless systems include iBlock, a motorized cutting guide used for femoral resections. Again, anatomical data are acquired intraoperatively and registered by the surgeon. While this approach is advantageous in reducing cost, time, and radiation dose for the patient, potential drawbacks include the lack of a detailed preoperative plan and the inability to verify anatomical registration landmarks against images once in theater [5] (Figs. 23.3 and 23.4).

23.6

Operative setup

Coon outlined guidelines for the integration of the Mako surgical robotic arm into the operating room [58]. Although robotic devices are large and may be cumbersome, designers working to limit these constraints have made significant gains in the last decade. Furthermore, efficiency in the theater can be maximized if a well-trained theater team is able to perform such tasks as positioning the patient, sterile-draping the arm, and registering the device before the surgeon even enters the room. A computer-guided cutter and careful preoperative planning simplify instrument setup, reducing the need for unwieldy instrument trays, cutting blocks, and alignment rods; only trial components for the necessary implant sizes are needed in the room [58]. The 3D reconstruction of the bone can be observed on-screen, facilitating placement of registration landmarks and faster registration [58]. The greater efficiency conferred by the use of robotics may eventually reduce tourniquet time and allow a greater number of procedures to be performed. The position of the patient for robotic procedures is consistent with manual approaches.

FIGURE 23.4 Mako robotic device.
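For intuition about the registration step, here is a minimal, illustrative sketch (not any vendor's algorithm) of fitting a rigid transform that maps planned landmark coordinates onto their tracked intraoperative positions. It uses the closed-form 2D Kabsch solution for simplicity; real systems solve the analogous 3D problem over many surface points, and all coordinates below are made up.

```python
import math

def fit_rigid_2d(planned, measured):
    """Closed-form 2D rigid registration: rotation angle + translation
    mapping planned landmark coordinates onto their measured positions."""
    n = len(planned)
    pcx = sum(p[0] for p in planned) / n   # centroid of planned points
    pcy = sum(p[1] for p in planned) / n
    qcx = sum(q[0] for q in measured) / n  # centroid of measured points
    qcy = sum(q[1] for q in measured) / n
    a = b = 0.0
    for (px, py), (qx, qy) in zip(planned, measured):
        dpx, dpy = px - pcx, py - pcy
        dqx, dqy = qx - qcx, qy - qcy
        a += dpx * dqx + dpy * dqy   # sum of dot products
        b += dpx * dqy - dpy * dqx   # sum of cross products
    theta = math.atan2(b, a)         # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = qcx - (c * pcx - s * pcy)   # translation after rotation
    ty = qcy - (s * pcx + c * pcy)
    return theta, (tx, ty)

# Made-up example: measured points are the planned points rotated 90 degrees
# and shifted by (5, 5); the fit should recover exactly that transform.
planned = [(0, 0), (10, 0), (0, 10)]
measured = [(5, 5), (5, 15), (-5, 5)]
theta, t = fit_rigid_2d(planned, measured)
print(round(math.degrees(theta), 1), [round(v, 1) for v in t])
```

Once such a transform is found, everything in the preoperative plan (cut planes, implant poses) can be expressed in the tracked patient frame.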

23.7

Surgical technique

Robotic systems are described as active, semiactive, or passive [59]. Active systems hold the cutting tool and perform the procedure autonomously under the supervision of the surgeon, while passive systems only provide information and remain under external control. Semiactive systems, the most frequently used, are constrained by the predetermined plan via tactile (haptic) feedback; the surgeon therefore works within the restrictions given by the robot while ultimately controlling the reaming and cutting. In semiactive systems such as the Mako robotic arm, active participation from the surgeon is required at the reaming end of the arm as the surgeon guides the force-controlled tip. However, tactile and auditory feedback given during the procedure limits unwanted resection beyond the preplanned boundaries [55]. Additional safety mechanisms ensure that the burr automatically stops if the surgeon moves past the planned zone, or if the computer calculates that the robot has resected more bone than necessary.

In TKA and UKA, the patient is positioned by the surgeon as in manual surgery. The anatomical landmarks of the femur and tibia are registered into the robotic device, allowing the preoperatively obtained information to be synthesized with the patient's anatomy [7]. Additionally, some robotic systems allow further refinement of the plan intraoperatively depending on the soft-tissue boundaries [7]. Using an active system such as ROBODOC in TKA, the preoperative plan is loaded onto the system on the day of the procedure. The leg is held at 70–80 degrees in a leg holder and fixed to the robot via two transverse stabilization pins, one in the distal femur and one in the proximal tibia. Additionally, two recovery markers and one motion marker are placed in the femur and tibia [10]. Following surface registration of four landmarks, the system mills the femur and tibia under the direct supervision of the surgeon and according to the predefined plan; the surface angle cuts are customized according to the desired mechanical axis and are therefore different for each patient [47]. The thickness of the resection is preplanned and depends on the thickness of the implant that will be used. A trial of the predetermined femoral and tibial components follows; stability, patellar tracking, and range of motion are assessed after implantation of the finalized components [47].

Using a semiactive system such as the Mako robotic arm in UKA, optical motion-capture technology is used to track markers located on the robotic arm, femur, and tibia, allowing the surgeon to alter the limb position and orientation during the surgery. The setup of the high-speed burr under the control of the haptic system means that conventional instruments are not needed [57]. Moving the robot through defined 3D movements calibrates the arm, and the surgeon moves the patient's leg through a range of movements while applying a valgus load on the knee [60]. The robot then works with the surgeon in burring the tibial and femoral component cavities. It is advisable to prepare the tibial cavity in advance of the femoral cavity to allow for any potential changes to the position of the femoral component, but any order is acceptable [60]. Rough burring with a 6-mm burr is followed by fine milling with a 2-mm burr. The actual versus planned cavity can be checked by referring to the graphical feedback on the robot's navigation screen. Once the burring is completed, the proposed femoral and tibial components are inserted; computerized simulations provide the surgeon with feedback about leg alignment and kinematics [60]. Accuracy and adherence to the preoperative plan can be checked throughout the procedure. For example, the robot confirms tibial and femoral markers before preparing each section of bone. If further checks are deemed necessary, it is possible to remove the robotic arm from the surgical area and use a tracker probe to visualize accuracy against the patient's CT scan [57]. As with other semiactive systems, haptic feedback provides physical resistance against the surgeon's movements when the preplanned boundaries are reached; additional auditory feedback in the form of a beeping sound reduces the likelihood of breaching these zones.
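The haptic boundary behavior described above can be sketched in miniature. This is an illustrative toy, not a vendor implementation: the planned resection volume is approximated as a simple sphere rather than a patient-specific mesh, and the margin value is made up.

```python
import math

# Hypothetical planned resection volume, approximated as a sphere
# (real systems use patient-specific 3D models from the CT-based plan).
PLAN_CENTER = (0.0, 0.0, 0.0)  # mm, in the registered bone frame
PLAN_RADIUS = 10.0             # mm
WARNING_MARGIN = 1.0           # mm: audible warning near the boundary

def burr_state(tip_xyz):
    """Return 'cutting', 'warning', or 'stopped' for a tracked burr tip."""
    dist = math.dist(tip_xyz, PLAN_CENTER)
    if dist > PLAN_RADIUS:
        return "stopped"   # safety stop: tip outside the planned zone
    if dist > PLAN_RADIUS - WARNING_MARGIN:
        return "warning"   # auditory feedback: approaching the boundary
    return "cutting"

# Example: tip positions as they might be reported by the optical tracker
for tip in [(2.0, 1.0, 0.0), (9.5, 0.0, 0.0), (11.0, 0.0, 0.0)]:
    print(tip, burr_state(tip))
```

The essential idea is that the cutting tool is gated on a geometric test against the plan evaluated at the tracker's update rate, with a graded response (warn, then stop) rather than a single hard limit.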


In THA, semiactive systems therefore allow preoperative planning of the femoral component but do not incorporate reaming or milling; these systems are therefore best suited to refining the accuracy of sizing and alignment of the acetabular component [6]. The final acetabular component is placed by the robotic arm according to the plan, and a trial reduction is performed with the selected head size. Intraoperative adjustment is possible here, as the computer provides information on leg length and hip offset compared with the preoperative plan; if necessary, the surgeon can make adjustments. The Acrobot system is composed of a high-speed cutting apparatus attached to a robotic device that prevents the robot from cutting into areas outside of the defined plan, aiding the surgeon in cutting away bone only while it is in the correct region [57]. In THA, active systems such as ROBODOC, which consists of a five-axis robotic arm attached to a high-speed milling device, are able to autonomously prepare the femoral canal without additional instruments, as well as position the implants [60], after the surgeon selects the optimal size and type of femoral component and inputs the data. Fully active systems are therefore able to improve the alignment of the femoral component in THA; unlike semiactive systems, they are not yet available for use on the acetabulum [6]. Implantation of the stem in the femoral canal is consistent with the manual method. Although semiactive systems can be used with a standard surgical approach, an additional incision over the iliac crest is necessary for the pelvic reference frame [56]; the surgeon guides the robot by orientating its probe to be in contact with the pins. Single-stage acetabular reaming is possible with robotic systems as they are easily able to overpower the torque created by larger reamers [56].

However, there remains the possibility of failure of the robotic system. In some cases, it may be necessary to terminate the robotic procedure prematurely and take over manually to prevent serious harm; it is therefore vital that the surgeon recognizes when this is necessary so as to avoid further complications. In active systems, an emergency stop button is present [60]. Chun et al. defined three criteria for deciding when to abort the surgery [61]:

1. If the predicted time to recover from the obstacle was greater than 30 minutes;
2. If there were four or more repetitive failures in the same step that could not be skipped;
3. If there was real or potential damage to soft tissue.

23.8

Future directions

Although positive clinical results have thus far been achieved with the use of robotics in hip and knee arthroplasty, its addition to the operating theater remains relatively new, and developments are therefore ongoing. Long-term clinical outcomes from multicenter studies and from a variety of health models and environments are needed. As surgeons continue to adapt to the new technology, greater attention will be paid to ergonomics for surgeons and theater staff. The planning and setup of robotic-assisted procedures will be reviewed to maximize efficiency and minimize extra costs, as well as the learning curve and the time it takes for new trainees to learn the techniques. Surgical workflow may alter such that a "robotically aligned" joint arthroplasty develops in the future. Furthermore, specific implants designed for use only with robotics are already in development, bypassing the limitations set by traditional instruments, setup, and visualization [5]. Areas that may see technological advances include the preoperative planning process and intraoperative sensing. It is possible that preoperative planning will go beyond imaging modalities and utilize the kinematics of the premorbid joint [5].

The greatest drawback of robotics remains its availability. Costs both for installation and operation may delay widespread use of robotics in arthroplasty. Several centers that have adopted robotics have already reduced their surgical time to nonrobotic arthroplasty times and are focusing on reduced hospital stays and long-term reduction of revision burden to justify the initial installation costs. Surgeons are increasingly aware of this tool, but training remains an issue and incorporation into busy practices may be slower than expected. The other main discussion point is the data generated by robotics. Currently, data are available at several points, such as during planning, surgical bone preparation, and postimplantation. How these data should be handled, who should analyze them, and how they could be used to generate some form of artificial intelligence that may guide the future direction of arthroplasty remain subjects of ongoing debate. All said and done, robotics are here to stay, and clinicians should understand this technology and use it appropriately (Table 23.1).

TABLE 23.1 Comparison of major robotic devices.

Mako
Type: Surgical arm
Control: Semiactive
Application: THA, TKA, UKA
Advantages: TKA: increased accuracy in posterior tibial slope and coronal tibial alignment. UKA: reduction in learning curve; reduced variability in tibial component alignment; lower postoperative pain, reduced revision rates, and greater functionality compared to the conventional method. THA: more accurate cup placement, desired leg length, and offset; greater preservation of acetabular bone stock.
Clinical availability: Available

ROBODOC
Type: Robotic milling device
Control: Autonomous
Application: Cavity preparation for THA and surface preparation for TKA
Advantages: Consistent; can "see" where the robot is milling. THA: reduction in intraoperative embolic events. TKA: improved implant positioning and alignment.
Disadvantages: Doubts about clinically significant improvements in THA; increased time to account for planning, registration, and milling; difficult to modify the surgical plan (the surgeon has no control over bone preparation after the plan is complete); bulky robot unit; in THA, limited to femoral preparation only.
Adverse events: Technical complications: having to reregister patients, inability to reregister patients, expected time >30 min to recover from error leading to aborted surgeries in some cases.
Clinical availability: No longer available

Caspar
Type: Robotic milling device
Control: Active
Application: THA and TKA
Advantages: Improved tibiofemoral alignment (TKA); increased accuracy of femoral preparation and prosthesis position (THA).
Disadvantages: Restrictive, requiring bone screws to be placed preoperatively.
Adverse events: Increased blood loss, reduced hip abductor function, and increased incidence of Trendelenburg's sign.
Clinical availability: No longer available

Navio PFS
Type: Handheld cutting tool with an end-cutting burr
Control: Semiactive
Application: Freehand sculpting for unicondylar knee arthroplasty
Advantages: Imageless, reducing cost and radiation exposure; improved Oxford Knee Scores and implant accuracy.
Disadvantages: Burr has a slow retraction speed (if moved quickly, bone may be resected outside the planned area); in large procedures, e.g., THA, burring is onerous.
Clinical availability: Available for UKA only

Orthopilot
Type: Navigation system
Control: Semiactive
Application: TKA
Advantages: MRI/CT scans not required, reducing cost and radiation exposure.
Clinical availability: Available

OMNIBotic
Application: TKA
Advantages: MRI/CT scans not required, reducing cost and radiation exposure.
Clinical availability: Available

Intellijoint
Type: Navigation system
Application: THA
Clinical availability: Available

CT, Computed tomography; THA, total hip arthroplasty; TKA, total knee arthroplasty; UKA, unicompartmental knee arthroplasty.


References

[1] Konan S, Maden C, Robbins A. Robotic surgery in hip and knee arthroplasty. Br J Hosp Med (Lond) 2017;78(7):378–84.
[2] Krushell R, Bhowmik-Stoker M, Kison C, O'Connor M, Cherian JJ, Mont MA. Characterization of patient expectations and satisfaction following total hip arthroplasty. J Long Term Eff Med Implants 2016;26(2):123–32.
[3] King J, Stamper DL, Schaad DC, Leopold SS. Minimally invasive total knee arthroplasty compared with traditional total knee arthroplasty. J Bone Joint Surg 2007;89(7):1497–503.
[4] Weber M, Renkawitz T, Voellner F, Craiovan B, Greimel F, Worlicek M, et al. Revision surgery in total joint replacement is cost-intensive. Biomed Res Int 2018;2018:8987104.
[5] Jacofsky DJ, Allen M. Robotics in arthroplasty: a comprehensive review. J Arthroplasty 2016;31(10):2353–63.
[6] Banerjee S, Cherian JJ, Elmallah RK, Pierce TP, Jauregui JJ, Mont MA. Robot-assisted total hip arthroplasty. Expert Rev Med Devices 2016;13:47–56.
[7] Banerjee S, Cherian JJ, Elmallah RK, Pierce TP, Jauregui JJ, Mont MA. Robot-assisted total knee arthroplasty. Expert Rev Med Devices 2015;12:727–35.
[8] Kayani B, Konan S, Pietrzak JRT, Haddad FS. Iatrogenic bone and soft tissue trauma in robotic-arm assisted total knee arthroplasty compared with conventional jig-based total knee arthroplasty: a prospective cohort study and validation of a new classification system. J Arthroplasty 2018;33(8):2496–501.
[9] Marchand RC, Sodhi N, Khlopas A, et al. Patient satisfaction outcomes after robotic arm-assisted total knee arthroplasty: a short-term evaluation. J Knee Surg 2017;30(09):849–53.
[10] Song E-K, Seon J-K, Yim J-H, Netravali NA, Bargar WL. Robotic assisted TKA reduces postoperative alignment outliers and improves gap balance compared to conventional TKA. Clin Orthop Relat Res 2013;471(01):118–26.
[11] Marchand R, Khlopas A, Sodhi N, et al. Difficult cases in robotic arm-assisted total knee arthroplasty: a case series. J Knee Surg 2018;31(01):27–37.
[12] Boylan M, Suchman K, Vigdorchik J, Slover J, Bosco J. Technology-assisted hip and knee arthroplasties: an analysis of utilization trends. J Arthroplasty 2018;33(4):1019–23.
[13] Newman J, Pydisetty RV, Ackroyd C. Unicompartmental or total knee replacement: the 15-year results of a prospective randomised controlled trial. J Bone Joint Surg Br 2009;91(1):52–7.
[14] Chatellard R, Sauleau V, Colmar M, Robert H, Raynaud G, Brilhault J. Medial unicompartmental knee arthroplasty: does tibial component position influence clinical outcomes and arthroplasty survival? Orthop Traumatol Surg Res 2013;99(4 Suppl.):S219–25.
[15] Collier MB, Eickmann TH, Sukezaki F, McAuley JP, Engh GA. Patient, implant, and alignment factors associated with revision of medial compartment unicondylar arthroplasty. J Arthroplasty 2006;21(6 Suppl. 2):108–15.
[16] Epinette JA, Brunschweiler B, Mertl P, Mole D, Cazenave A. Unicompartmental knee arthroplasty modes of failure: wear is not the main reason for failure: a multicentre study of 418 failed knees. Orthop Traumatol Surg Res 2012;98(6 Suppl.):S124–30.
[17] Lonner JH, Fillingham YA. Pros and cons: a balanced view of robotics in knee arthroplasty. J Arthroplasty 2018;33(7):2007–13.
[18] Lustig S, Magnussen RA, Dahm DL, Parker D. Patellofemoral arthroplasty, where are we today? Knee Surg Sports Traumatol Arthrosc 2012;20(7):1216–26.
[19] Metcalfe AJ, Ahearn N, Hassaballa MA, Parsons N, Ackroyd CE, Murray JR, et al. The Avon patellofemoral joint arthroplasty. Bone Joint J 2018;100-B(9):1162–7.
[20] Middleton SWF, Toms AD, Schranz PJ, Mandalia VI. Mid-term survivorship and clinical outcomes of the Avon patellofemoral joint replacement. Knee 2018;25(2):323–8.
[21] Cherian JJ, Kapadia BH, Banerjee S, Jauregui JJ, Issa K, Mont MA. Mechanical, anatomical, and kinematic axis in TKA: concepts and practical applications. Curr Rev Musculoskelet Med 2014;7(02):89–95.
[22] Peters CL, Mohr RA, Bachus KN. Primary total knee arthroplasty in the valgus knee: creating a balanced soft tissue envelope. J Arthroplasty 2001;16(6):721–9.
[23] Apostolopoulos AP, Nikolopoulos DD, Polyzois I, et al. Total knee arthroplasty in severe valgus deformity: interest of combining a lateral approach with a tibial tubercle osteotomy. Orthop Traumatol Surg Res 2010;96(07):777–84.
[24] Kim MS, Koh IJ, Choi YJ, Kim YD, In Y. Correcting severe varus deformity using trial components during total knee arthroplasty. J Arthroplasty 2017;32(05):1488–95.
[25] Marchand RC, Sodhi N, Khlopas A, Sultan AA, Higuera CA, Stearns KL, et al. Coronal correction for severe deformity using robotic-assisted total knee arthroplasty. J Knee Surg 2018;31(1):2–5.
[26] Parratte S, Pagnano MW, Trousdale RT, Berry DJ. Effect of postoperative mechanical axis alignment on the fifteen-year survival of modern, cemented total knee replacements. J Bone Joint Surg Am 2010;92(12):2143.
[27] Howell SM, Howell SJ, Kuznik KT, Cohen J, Hull ML. Does a kinematically aligned total knee arthroplasty restore function without failure regardless of alignment category? Clin Orthop Relat Res 2013;471(3):1000.


[28] Chang JD, Kim IS, Bhardwaj AM, Badami RN. The evolution of computer-assisted total hip arthroplasty and relevant applications. Hip Pelvis 2017;29(1):1–14.
[29] Lewinnek GE, Lewis JL, Tarr R, Compere CL, Zimmerman JR. Dislocations after total hip-replacement arthroplasties. J Bone Joint Surg Am 1978;60(2):217–20.
[30] Callanan MC, Jarrett B, Bragdon CR, Zurakowski D, Rubash HE, Freiberg AA, et al. The John Charnley Award: risk factors for cup malpositioning: quality improvement through a joint registry at a tertiary hospital. Clin Orthop Relat Res 2011;469:319–29.
[31] Eilander W, Harris SJ, Henkus HE, Cobb JP, Hogervorst T. Functional acetabular component position with supine total hip replacement. Bone Joint J 2013;95-B:1326–31.
[32] Jolles BM, Genoud P, Hoffmeyer P. Computer-assisted cup placement techniques in total hip arthroplasty improve accuracy of placement. Clin Orthop Relat Res 2004;426:174–9.
[33] Archbold HA, Mockford B, Molloy D, McConway J, Ogonda L, Beverland D. The transverse acetabular ligament: an aid to orientation of the acetabular component during primary total hip replacement: a preliminary study of 1000 cases investigating postoperative stability. J Bone Joint Surg Br 2006;88(7):883–6.
[34] Epstein NJ, Woolson ST, Giori NJ. Acetabular component positioning using the transverse acetabular ligament: can you find it and does it help? Clin Orthop Relat Res 2011;469(2):412–16.
[35] Nishihara S, Sugano N, Nishii T, Miki H, Nakamura N, Yoshikawa H. Comparison between hand rasping and robotic milling for stem implantation in cementless total hip arthroplasty. J Arthroplasty 2006;21(7):957–66.
[36] Park SE, Lee CT. Comparison of robotic-assisted and conventional manual implantation of a primary total knee arthroplasty. J Arthroplasty 2007;22(7):1054–9.
[37] Lonner JH, Moretti VM. The evolution of image-free robotic assistance in unicompartmental knee arthroplasty. Am J Orthop 2016;45(4):249.
[38] MacCallum KP, Danoff JR, Geller JA. Tibial baseplate positioning in robotic-assisted and conventional unicompartmental knee arthroplasty. Eur J Orthop Surg Traumatol 2016;26(1):93–8.
[39] Bell SW, Anthony I, Jones B, MacLean A, Rowe P, Blyth M. Improved accuracy of component positioning with robotic-assisted unicompartmental knee arthroplasty: data from a prospective, randomized controlled study. J Bone Joint Surg Am 2016;98(8):627–35.
[40] Kayani B, Konan S, Pietrzak JRT, Huq SS, Tahmassebi J, Haddad FS. The learning curve associated with robotic-arm assisted unicompartmental knee arthroplasty. Bone Joint J 2018;100-B(8):1033–42.
[41] Blyth MJG, Anthony I, Rowe P, Banger MS, MacLean A, Jones B. Robotic arm-assisted versus conventional unicompartmental knee arthroplasty: exploratory secondary analysis of a randomised controlled trial. Bone Joint Res 2017;6(11):631–9.
[42] Gilmour A, MacLean AD, Rowe PJ, Banger MS, Donnelly I, Jones BG, et al. Robotic-arm-assisted vs conventional unicompartmental knee arthroplasty. The 2-year clinical outcomes of a randomized controlled trial. J Arthroplasty 2018;33(7S):S109–15.
[43] Ponzio DY, Lonner JH. Robotic technology produces more conservative tibial resection than conventional techniques in UKA. Am J Orthop 2016;45(7):E465.
[44] Bellemans J, Vandenneucker H, Vanlauwe J. Robot-assisted total knee arthroplasty. Clin Orthop Relat Res 2007;464:111–16.
[45] Moon YW, Ha CW, Do KH, Kim CY, Han JH, Na SE, et al. Comparison of robot-assisted and conventional total knee arthroplasty: a controlled cadaver study using multiparameter quantitative three-dimensional CT assessment of alignment. Comput Aided Surg 2012;17(2):86–95.
[46] Song EK, Seon JK, Park SJ, Jung WB, Park HW, Lee GW. Simultaneous bilateral total knee arthroplasty with robotic and conventional techniques: a prospective, randomized study. Knee Surg Sports Traumatol Arthrosc 2011;19(7):1069.
[47] Liow MH, Xia Z, Wong MK, Tay KJ, Yeo SJ, Chin PL. Robot-assisted total knee arthroplasty accurately restores the joint line and mechanical axis. A prospective randomised study. J Arthroplasty 2014;29(12):2373–7.
[48] Clark TC, Schmidt FH. Robot-assisted navigation versus computer-assisted navigation in primary total knee arthroplasty: efficiency and accuracy. ISRN Orthop 2013;2013:794827.
[49] Honl M, Dierk O, Guack C, Carrero V, Lampe F, Dries S, et al. Comparison of robotic-assisted and manual implantation of a primary total hip replacement: a prospective study. J Bone Joint Surg Am 2003;85-A(8):1470–8.
[50] Nakamura N, Sugano N, Nishii T, Kakimoto A, Miki H. A comparison between robotic-assisted and manual implantation of cementless total hip arthroplasty. Clin Orthop Relat Res 2010;468(4):1072–81.
[51] Tsai TY, Dimitriou D, Li JS, Kwon YM. Does haptic robot-assisted total hip arthroplasty better restore native acetabular and femoral anatomy? Int J Med Robot 2016;12(2):288–95.
[52] Nawabi DH, Conditt MA, Ranawat AS, Dunbar NJ, Jones J, Banks S, et al. Haptically guided robotic technology in total hip arthroplasty: a cadaveric investigation. Proc Inst Mech Eng H 2013;227(3):302–9.
[53] Schulz A, Seide K, Queitsch C, von Haugwitz A, Meiners J, Kienast B, et al. Results of total hip replacement using the Robodoc surgical assistant system: clinical outcome and evaluation of complications for 97 procedures. Int J Med Robot 2007;3(4):301–6.
[54] van der List JP, Chawla H, Pearle AD. Robotic-assisted knee arthroplasty: an overview. Am J Orthop 2016;45(4):202–11.
[55] Lang JE, Mannava S, Floyd AJ, Goddard MS, Smith BP, Mofidi A, et al. Robotic systems in orthopaedic surgery. J Bone Joint Surg Br 2011;93(10):1296–9.

410

Handbook of Robotic and Image-Guided Surgery

[56] Wasterlain AS, Buza 3rd JA, Thakkar SC, Schwarzkopf R, Vigdorchik J. Navigation and robotics in total hip arthroplasty. JBJS Rev 2017;5(3). Available from: http://dx.doi.org/10.2106/JBJS.RVW.16.00046. [57] Cobb J, Henckel J, Gomes P, Harris S, Jakopec M, Rodriguez F, et al. Hands-on robotic unicompartmental knee replacement: a prospective, randomised controlled study of the acrobot system. J Bone Joint Surg Br 2006;88(2):188 97. [58] Coon TM. Integrating robotic technology into the operating room. Am J Orthop 2009;38(2 Suppl.):7 9. [59] DiGoia 3rd AM, Jaramaz B, Colgan BD. Computer assisted orthopaedic surgery. Image guided and robotic assistive technologies. Clin Orthop Relat Res 1998;354:8 16. [60] Bargar WL. Robots in orthopaedic surgery: past, present, and future. Clin Orthop Relat Res 2007;463:31 6. [61] Chun YS, Kim KI, Cho YJ, Kim YH, Yoo MC, Rhyu KH. Causes and patterns of aborting a robot-assisted arthroplasty. J Arthroplasty 2011;26 (4):621 5.

Further reading Liow MHL, Goh GS, Wong MK, Chin PL, Tay DK, Yeo SJ. Robotic-assisted total knee arthroplasty may lead to improvement in quality-of-life measures: a 2-year follow-up of a prospective randomized trial. Knee Surg Sports Traumatol Arthrosc 2017;25(9):2942 51.

24
Intellijoint HIP: A 3D Minioptical, Patient-Mounted, Sterile Field Localization System for Orthopedic Procedures

Andre Hladio, Richard Fanson and Jeffrey Muir
Intellijoint Surgical, Waterloo, ON, Canada

ABSTRACT
The Intellijoint HIP system is a minioptical navigation system used to make accurate, real-time positional measurements during total hip arthroplasty (THA). Measurements of acetabular implant angle and of change in leg length and offset are provided relative to the patient's anatomy; these measurements are critical to patient outcomes. The Intellijoint HIP system is enabled by a novel, proprietary minioptical navigation technology involving a patient-mounted camera and a tracker that the camera detects when the tracker is mounted on surgical instruments or anatomical locations. The Intellijoint HIP system is optimized for accessibility, usability, and integration with orthopedic workflows. There are no preoperative imaging requirements (i.e., it is an "imageless" system), and the device is fully surgeon controlled from the sterile field. Opportunities exist to apply the core minioptical technology outside of THA procedures.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00024-4
© 2020 Elsevier Inc. All rights reserved.


24.1 Background

Correct implant positioning is critical to the success of orthopedic procedures. For example, in total hip arthroplasty (THA), the alignment of the acetabular component is critical to postoperative hip stability, as a malaligned acetabular component is known to lead to hip dislocations requiring medical intervention [1–3]. Also in THA, improper femoral and/or acetabular sizing or placement may cause leg length discrepancy, leading to postoperative complications. As value-based healthcare models take hold and patient expectations continue to rise, issues such as postoperative dislocation and leg length inequality are increasingly scrutinized. Fig. 24.1 illustrates a patient before and after THA. Prior to surgery, the hip joint is diseased (most commonly with osteoarthritis), so the mating articulating surfaces of the acetabulum and the femoral head no longer function correctly, inducing pain and decreasing patient mobility. After a hip replacement, the diseased bone is removed and replaced by artificial components, and the new articulating interface is formed by an acetabular implant and a femoral implant.

Surgical navigation systems were developed to increase spatial precision in surgery, as well as to decrease the invasiveness of surgical procedures. A surgical navigation system functions by measuring the relative pose between trackers attached to anatomical structures and surgical instruments, and providing the measured relative pose to the surgeon to achieve a spatial goal during surgery. These systems were originally developed for neurosurgical and spinal applications. For example, in a neurosurgical procedure, a location within a patient's brain, preidentified on a brain scan, could be accessed through a small, coin-sized craniotomy with the use of surgical navigation. In a spine surgery, a pedicle screw could be inserted within a vertebra along a safe trajectory, avoiding the spinal cord.
Some surgical navigation systems are compatible with preoperative or intraoperative imaging, and involve an "image registration" process wherein the coordinate frame of the trackers is mapped to the coordinate frame of a medical image. Surgical navigation systems are "imageless" if they do not rely on pre- or intraoperative imaging. Various sensing modalities may be used for pose measurement, including stereoscopic infrared optical detection of active or passive trackers, inertial sensing, and electromagnetic tracking. Most commonly, surgical navigation systems rely on a stereoscopic infrared camera, locating either active or passive markers attached to anatomical structures or instruments, and executing a software workflow on a computer coupled to the camera.

As surgical navigation systems demonstrated success in neurosurgical and spinal applications in the late 1990s, many groups began researching their utility in orthopedic arthroplasty. Surgical navigation systems were commercially introduced for orthopedic arthroplasty applications in the United States in the early 2000s [4], applying the same technology base as was used in neuro and spinal navigation. Initially, many systems were image based (requiring a computed tomography (CT) scan); however, as CT scans were not the standard of care for routine arthroplasty procedures, many systems transitioned to being imageless. Examples of some of the original commercially available orthopedic navigation systems for hip arthroplasty procedures include:

- BrainLab Hip Unlimited (BrainLab, Munich, Germany);
- Stryker Versatile Hip (Stryker Corp, Kalamazoo, MI, United States);
- Orthosoft Hip Universal (Zimmer CAS, Montreal, QC, Canada);
- NaviPro Hip (Kinamed Inc., Camarillo, CA, United States).

Shortly after their commercial introductions, some surgical navigation systems were augmented with robotic manipulators. For example, the Mako robotic system (Mako Surgical Corp, Ft Lauderdale, FL) comprises a traditional navigation system based on a stereoscopic, infrared, passive tracking modality. A tracker is attached to a "passive" robot manipulator with surgical tools (e.g., saws, high-speed burrs, reamers) coupled to its end-effector. A closed-loop control system is implemented in which the pose of the robotic manipulator is tracked, and control signals restrict the movements of the surgical tools so that bone removal follows the surgical plan. This system relies on a segmented CT scan and a preoperatively developed surgical plan, and may be used for partial knee arthroplasty, total knee arthroplasty, and THA.

FIGURE 24.1 A patient's hip before and after THA, showing the pelvis, acetabulum, femoral head, and femur, and the acetabular and femoral implants. THA, Total hip arthroplasty.

Despite demonstrated increases in accuracy and reductions in outliers [5–8], surgical navigation and robotic systems have yet to gain widespread market adoption. As a result, surgeons continue to rely on manual techniques and mechanical instrumentation for implant positioning. One common manual technique is the use of fixed alignment guides on an acetabular implant inserter, as shown in Fig. 24.2. The fixed alignment guides give the surgeon a visual cue of the alignment of the acetabular implant inserter relative to the operating table. This approach has two notable drawbacks: the guide only provides a fixed, static target (the surgeon cannot effect a patient-specific target), and the patient's position on the operating table may shift during the procedure (so that alignment relative to the operating table produces incorrect alignment relative to the patient). Although the success rate of orthopedic surgery is high [9], implant positioning-related complications still occur when using manual and mechanical techniques.

FIGURE 24.2 Fixed alignment guides on acetabular implant inserter.

Some commonly cited reasons that surgical navigation and robotic systems have experienced low adoption within orthopedics include:

- Cost [10,11];
- Sterility concerns: some systems require a pseudo-sterile registration procedure to be conducted with the patient supine, prior to positioning them laterally for surgery and establishing the sterile field;
- Line of sight: the stereoscopic infrared optical modality is most common, in which the camera needs an unobstructed view of the multiple trackers in the sterile field. Operating room (OR) personnel must adjust their positions to allow the camera to see the trackers;
- OR layout: in addition to line-of-sight issues, most systems have a large OR footprint, making them intrusive during OR set-up. In smaller ORs, these systems may not fit at all;
- Additional procedure time: tools and techniques initially ported into arthroplasty navigation from neurosurgical or spinal applications could add 20–30 minutes of procedure time. Though not significant where neuro and spinal surgeries may last 5 hours or more, arthroplasty procedures generally last about 1 hour, in which case 20–30 minutes is significant [12,13];
- Personnel training requirements: many systems require a dedicated operator running the computer system, and training hospital staff to run the navigation computer is burdensome;
- Portability: most hospitals have multiple surgeons running multiple ORs simultaneously, and robotic and navigation systems for arthroplasty lack the portability to facilitate efficient transitions from OR to OR.

The genesis of the Intellijoint minioptical technology was at the University of Waterloo (Waterloo, ON, Canada) in 2007. In an undergraduate mechatronics design course, the Intellijoint design group was inspired to address the challenges faced by the father of one of its members, a rural community orthopedic surgeon who wanted access to assistive technology that would allow him to align hip implants accurately and reproducibly. In his surgical setting, traditional orthopedic navigation technologies were prohibitive for many of the reasons outlined above. The Intellijoint minioptical technology was developed to overcome these barriers. In 2013, the Intellijoint minioptical technology was commercially introduced in Canada, and subsequently in the United States as a controlled release of a minimally featured system (measurements of intraoperative leg length and offset change). In late 2015, a fully featured system based on the Intellijoint minioptical technology was introduced commercially in Canada and the United States, adding cup angle measurement capability to the leg length and offset measurements.

24.2 Intellijoint minioptical technology

24.2.1 System overview

The Intellijoint minioptical technology comprises a patient-mounted tracking camera within the surgical sterile field, as shown in Fig. 24.3. The camera is rigidly fixed to the bone via patient mounting hardware, and connected to a computer workstation (not shown) via universal serial bus (USB). The camera provides buttons so that a sterile user can control the computer workstation. The computer workstation runs image processing and computer vision algorithms to calculate and display implant positioning data, based on a video feed of the tracker. The tracker is shown attached to an acetabular inserter tool (with an acetabular implant on the distal end of the tool). The acetabular inserter is part of the instrumentation associated with the acetabular implant, and the tracker interfaces with this tool across a variety of implant vendors via a magnetic v-block adaptor. An animation video illustrating the use of the Intellijoint minioptical technology in an anterior THA procedure is provided. Advantages of this system architecture, relative to previously commercialized surgical navigation systems, include:

- Minimal line-of-sight disruptions: as long as the line of sight between the camera and the tracker is unbroken, the system is expected to function;
- Sterile field control: the sterile user can interact with the computer workstation via buttons on the camera, without compromising sterility or requiring additional staff to run the computer;
- High portability and flexibility in OR configuration: with the exception of the computer workstation, the entire minioptical system operates within the sterile field; there is no large, bulky equipment to manage. Fig. 24.4 illustrates an OR in which a patient is positioned for hip arthroplasty surgery, along with the Intellijoint technology, comprising a minioptical camera (located on a standard sterile Mayo stand, prior to being mounted to the patient), the workstation coupled to the camera via a wired connection (outside of the sterile field), and a tray with sterile instruments for coupling the camera and tracker to anatomical structures and instruments, respectively;
- Universal compatibility with implant instrumentation via the tracker adaptor (many of the previous surgical navigation systems are designed for compatibility with a single implant vendor's products).

FIGURE 24.3 Intellijoint minioptical technology for acetabular implant alignment in THA, including a camera (attached to the bone) and a tracker (attached to a surgical acetabular inserter). THA, Total hip arthroplasty.

FIGURE 24.4 Operating room configuration including the Intellijoint system.

FIGURE 24.5 Photograph of the camera.

24.2.2 Camera

The most novel aspect of the Intellijoint minioptical technology is the camera. The camera is the primary sensor for generating spatial tracking measurements (i.e., the position and orientation of the tracker component, described below). The camera also provides a three-button interface (green circle, x, and square) to send commands to the computer workstation, to which it is connected via USB. A photograph of the camera is provided in Fig. 24.5.


The optical technology includes a single camera with a wide field of view lens, an infrared filter, a global shutter imager, and infrared illumination from light emitting diodes (LEDs) surrounding the lens. The camera is designed to be patient-mountable within a surgical sterile field, and/or handheld during use. A size comparison between the camera and a Polaris Optical Tracking System (Northern Digital Inc., Waterloo, ON, Canada), a market leader in supplying the surgical navigation and robotic surgery industry, is provided in Fig. 24.6.

The camera is designed to be enclosed within a sterile drape before being introduced into the sterile field. The sterile drape is a custom design for the Intellijoint minioptical camera, and comprises a long tubular section terminating in an optical window. The system is designed to compensate for the optical effects of the optical window, so as not to introduce measurement inaccuracies. The sterile drape and optical window are held in alignment with the camera via a shroud component; the shroud snaps onto features of the camera without risking puncture of the sterile drape. Fig. 24.7 illustrates a user snapping the shroud over a draped camera.

For patient mounting, the ability to aim the camera in a desired direction (i.e., at the surgical site), as well as to remove the camera when not in use, are important design requirements. A camera clamp is used to rigidly hold the shroud/camera/sterile drape assembly. The camera clamp has a spherical inner-surface profile, and the shroud has a spherical outer-surface profile; hence, when loosely engaged, the clamp forms a moveable ball joint assembly with the shroud/camera/sterile drape. The camera may be aligned by moving the ball joint based on visual cues or on software guidance, and when in the desired alignment, the clamp may be locked. To facilitate temporarily removing the camera (e.g., during a surgical step where the camera would be in the way), the clamp provides a magnetic kinematic mount interface. The kinematic mount interface is highly repeatable, so that the camera returns to the same location between attach/detach cycles. A photograph of the clamp/shroud/camera assembly (sterile drape not shown) is provided in Fig. 24.8.

FIGURE 24.6 Size comparison between Intellijoint minioptical technology and the Polaris system.

FIGURE 24.7 Applying the shroud over a draped camera.

FIGURE 24.8 Clamp/shroud/camera assembly for camera aiming and quick attachment and detachment during use.

The camera interfaces with a computer workstation via a wired USB connection. The camera system is powered by the USB connection, without the need for any additional dedicated power supply. A 5-m USB cable is used, to ensure adequate length for any OR configuration, as well as to ensure that the workstation is out of the sterile field. This length is at the upper limit of the current USB standard.

In addition to optical sensing, the camera includes an integrated three-axis accelerometer that is coregistered to the optical system, and calibrated to measure the camera's orientation with respect to the direction of gravity. The resulting inertial measurements are provided in real time along with the optical camera feed. There are several possible use cases for inertial sensor measurements, including:

- Detecting movement of a patient during a surgical procedure, by monitoring their orientation with respect to gravity;
- Measuring an inertial vector (based on the direction of gravity) where optical measurements are not feasible;
- Detecting vibrations or unwanted movement of the camera that could compromise optical measurement integrity.

24.2.3 Software framework

The Intellijoint minioptical technology includes a software framework, which receives USB camera data and outputs pose (for use by the clinical application software). The software framework implements an image-processing pipeline that takes a raw image and computes the pose of a tracker within that image, based on a camera calibration and an expected tracker geometry. The high-level steps for determining pose are described below and illustrated in Fig. 24.9:

- A raw 2D image is received into computer memory from the camera;
- Segment detection: segments that may correspond to a tracker sphere are identified using segment detection techniques based on pixel intensity;
- Segment rejection: since the tracker has four retroreflective spheres, four valid segments are expected. In this step, invalid segments are rejected based on various criteria, including size, shape, and relative spacing;
- Centroid detection: the 2D pixel centroids of the four valid segments are calculated (i.e., four sets of segment centroid coordinates);
- Point correspondence: the centroids are associated with the tracker spheres they represent, using heuristics based on expected relative spacing. The result is an ordered list of centroid coordinates;
- Pose calculation: a six degrees of freedom pose (x, y, z, roll, pitch, yaw) is calculated via a nonlinear optimization that receives the ordered list of centroid coordinates and the camera and tracker models as inputs;
- This process is repeated for each image in a video stream running at 30 fps.

FIGURE 24.9 Image processing pipeline.
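The final pose-calculation step can be sketched as a reprojection-error minimization: given an ordered list of centroids, a camera model, and a nominal tracker model, solve for the 6-DoF pose whose projected sphere centers best match the observed centroids. The sketch below is illustrative only; the tracker geometry, pinhole intrinsics, and function names are our own assumptions, not Intellijoint's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical nominal tracker model: four sphere centers (mm) in the tracker frame.
TRACKER_MODEL = np.array([
    [0.0, 0.0, 0.0],
    [40.0, 0.0, 0.0],
    [0.0, 30.0, 0.0],
    [35.0, 25.0, 10.0],
])

# Hypothetical pinhole camera intrinsics: focal lengths and principal point (pixels).
FX = FY = 800.0
CX, CY = 320.0, 240.0

def rotation(roll, pitch, yaw):
    """Rotation matrix from roll/pitch/yaw (radians), R = Rz @ Ry @ Rx."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(pose):
    """Project the four sphere centers into the image for a 6-DoF pose."""
    x, y, z, roll, pitch, yaw = pose
    pts = TRACKER_MODEL @ rotation(roll, pitch, yaw).T + np.array([x, y, z])
    u = FX * pts[:, 0] / pts[:, 2] + CX
    v = FY * pts[:, 1] / pts[:, 2] + CY
    return np.column_stack([u, v])

def estimate_pose(centroids, initial=(0.0, 0.0, 300.0, 0.0, 0.0, 0.0)):
    """Recover (x, y, z, roll, pitch, yaw) from an ordered 4x2 centroid array
    by minimizing the reprojection residual."""
    resid = lambda p: (project(p) - centroids).ravel()
    return least_squares(resid, initial)

# Simulate one frame: project a known pose, then recover it from the centroids.
true_pose = np.array([10.0, -5.0, 250.0, 0.05, -0.1, 0.2])
observed = project(true_pose)
result = estimate_pose(observed)
```

With four non-coplanar spheres, the eight centroid coordinates overdetermine the six pose parameters, which is what lets a residual-based error metric (discussed later in the chapter) flag inconsistent observations.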

The process for determining the inclination of the camera (based on accelerometer measurements) follows a similar method. The software framework sets limits on the optical tracking volume to ensure accuracy. The validated tracking volume is illustrated in Fig. 24.10, which shows both the standard and extended tracking volumes. In the extended tracking volume, angular constraints are added (in software) on the working volume, so that pose measurements of the tracker are not generated in regions susceptible to pose calculation error (i.e., at long distances, or at the edge of the imager).
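The accelerometer-based inclination computation amounts to a generic tilt-from-gravity calculation; the sketch below is our own illustration, not Intellijoint's code, and assumes a calibrated three-axis reading in any consistent units.

```python
import math

def inclination_deg(ax, ay, az):
    """Angle (degrees) between the sensor's z-axis and the measured gravity
    vector, from a calibrated three-axis accelerometer reading."""
    g = math.sqrt(ax * ax + ay * ay + az * az)
    if g == 0.0:
        raise ValueError("accelerometer reported a zero vector")
    # Clamp guards against floating-point drift just outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, az / g))))
```

For example, a sensor lying flat (gravity along its z-axis) reads 0 degrees, and a sensor on its side reads 90 degrees.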

24.2.4 Tracker

The Intellijoint minioptical system generates pose measurements between the camera and the tracker (shown in Fig. 24.11). The tracker provides four mounting posts at precisely known positions for attaching sterile retroreflective spheres (such as those marketed by Northern Digital Inc, Waterloo, ON, Canada). The retroreflective spheres reflect infrared illumination back toward the source (i.e., the camera), causing the spheres to appear brightly in the camera image. The tracker interfaces with various components (such as surgical tools) via a magnetic kinematic mount. The Intellijoint minioptical system may be deployed as a single-tracker system.
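The bright appearance of the retroreflective spheres is what the pipeline's segment and centroid detection steps exploit. A minimal toy version of intensity-weighted centroid extraction (our own sketch, with an invented threshold, not the production algorithm) might look like:

```python
import numpy as np

def bright_blob_centroid(image, threshold=200):
    """Intensity-weighted centroid (u, v) of all pixels at or above a
    threshold, as a toy stand-in for per-segment centroid detection.
    Returns None when no pixel clears the threshold."""
    v_idx, u_idx = np.nonzero(image >= threshold)
    if u_idx.size == 0:
        return None
    weights = image[v_idx, u_idx].astype(float)
    return (float(np.average(u_idx, weights=weights)),
            float(np.average(v_idx, weights=weights)))
```

A real implementation would first split the image into connected segments and apply the size/shape/spacing rejection criteria described earlier, computing one centroid per surviving segment.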

24.3 Minioptical system calibration

The Intellijoint minioptical system is designed to be "calibration-free," in the sense that there are no field or preoperative calibration steps required. Other navigation systems (such as the Orthosoft Hip navigation system, marketed by Zimmer CAS, Montreal, QC, Canada) require surgical instruments to be calibrated preoperatively. The minioptical camera only requires factory calibration to consistently function correctly in the field, including when enclosed in the sterile drape. The monocular nature of the camera reduces the opto-mechanical complexity of the system, which enables robust, consistent adherence to the factory calibration. Conversely, in a large-baseline stereo camera system, for example, the exact distance between the two cameras must be precisely known, and is subject to various mechanical disturbances such as thermal deformation.

FIGURE 24.10 Validated tracking volume.

FIGURE 24.11 Tracker with retroreflective spheres.

Should a minioptical camera become compromised for any reason (e.g., damage to the lens), causing the calibration to no longer match the actual camera's behavior, a software-implemented error metric will prevent inaccurate measurements and prompt a potential replacement or servicing of the camera. The error metric is based on the optimization residual of the pose estimation operation described previously. Trackers are not required to be calibrated; rather, they rely on mechanical design and manufacturing processes that facilitate robust adherence to a nominal model known by the software. Similar to the minioptical camera, should a tracker ever become compromised for any reason (e.g., physical damage), the software-implemented error metric, based on the pose optimization residual, will prevent inaccurate measurements and prompt a potential replacement of the tracker component.
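The residual-based error metric can be sketched as a simple gate on the RMS reprojection residual of the pose fit; the threshold value and function name below are hypothetical, chosen only to illustrate the idea.

```python
import numpy as np

RESIDUAL_LIMIT_PX = 0.5  # hypothetical acceptance limit, in pixels

def pose_is_trustworthy(reprojection_residuals_px):
    """Suppress a pose measurement when the RMS reprojection residual exceeds
    the limit. A damaged lens or a bent tracker post inflates the residual,
    so a compromised device prompts servicing instead of reporting bad data."""
    rms = float(np.sqrt(np.mean(np.square(reprojection_residuals_px))))
    return rms <= RESIDUAL_LIMIT_PX
```

Because four spheres yield eight observed coordinates against six pose unknowns, a healthy camera and tracker leave the residual near zero, while a mismatch with the factory calibration or nominal model cannot be absorbed by any pose and shows up directly in the residual.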

24.4 Clinical applications

24.4.1 Intellijoint HIP

Intellijoint HIP is an imageless, portable, streamlined hip navigation system that provides measurements of acetabular implant inclination and anteversion, as well as change in leg length and offset. Intellijoint HIP is compatible with lateral, posterior, anterior, and revision THA workflows. The camera is mounted to the patient's pelvis during the procedure, and the tracker can be mounted (at various points in time during the procedure) to the patient's femur, registration instruments, and the acetabular implant inserter.

Intellijoint HIP includes a set of sterile instruments used to mount the minioptical camera and the tracker to their respective objects. The sterile instruments are reusable and autoclavable. Fig. 24.4 illustrates the sterile instruments within an OR configured for a THA procedure.

Intellijoint HIP is a calibration-free system. The calibration of the minioptical system has been described previously. Additionally, no surgical instruments require preoperative calibration. This is primarily achieved through a magnetic v-block interface between the tracker and the acetabular implant inserter. The v-block is designed and manufactured to provide a known and repeatable angle with respect to any shaft (e.g., the shaft of an acetabular inserter tool) to which it is mounted. Fig. 24.3 shows the magnetic v-block component coupling the tracker to the acetabular inserter tool.

Intellijoint HIP does not require specific preoperative planning. Surgeons may plan their surgeries according to their standard methods, and use Intellijoint HIP intraoperatively to execute their plans. Some THA workflows rely on fluoroscopy to confirm acetabular implant inclination and anteversion, as well as change in leg length and offset.
Intellijoint HIP is compatible with procedures using fluoroscopy (i.e., the system does not interfere with fluoroscopic imaging equipment), and has the potential to decrease the amount of fluoroscopy used in a given procedure, since the surgeon may rely on Intellijoint HIP for measurements that would otherwise be captured via fluoroscopy. Fluoroscopy irradiates the patient and hospital staff, so decreasing radiation exposure benefits everyone present in the OR. An example display screen, showing acetabular implant inclination and anteversion, is shown in Fig. 24.12, and a photograph of the corresponding sterile field is shown in Fig. 24.13, in which a surgeon holds the acetabular inserter with a tracker coupled to it, while the camera is coupled to the patient's pelvis.
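For intuition about the reported angles: once a tracked inserter's axis is expressed in a pelvic coordinate frame, inclination and anteversion fall out of basic trigonometry. The chapter does not describe Intellijoint's computation; the sketch below uses Murray's radiographic definitions under an assumed frame, where the cup axis is a unit vector with (lateral, anterior, longitudinal) components, purely as an illustration.

```python
import math

def cup_angles_deg(n_lateral, n_anterior, n_longitudinal):
    """Radiographic inclination/anteversion (Murray's definitions) of a cup
    axis given as a unit vector in an assumed pelvic frame:
    anteversion = angle between the axis and the coronal plane;
    inclination = angle between the axis projected onto the coronal plane
    and the longitudinal axis."""
    anteversion = math.asin(n_anterior)
    inclination = math.atan2(n_lateral, n_longitudinal)
    return math.degrees(inclination), math.degrees(anteversion)

# Round-trip check: construct an axis from 40 deg inclination / 15 deg anteversion.
ri, ra = math.radians(40.0), math.radians(15.0)
axis = (math.cos(ra) * math.sin(ri), math.sin(ra), math.cos(ra) * math.cos(ri))
```

The practical work of a navigation system lies in establishing that pelvic frame from registration landmarks; the trigonometry itself is the easy part.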

24.4.2 Other applications

The Intellijoint minioptical technology may be applied to other procedures. It is well suited for patient-mounted, handheld, body-worn, or robot-mounted applications where sterile usage is required. Some example procedures include:

- Total knee arthroplasty, where the technology could be used to make accurate bony cuts to ensure correct implant alignment;
- Cranial surgery, where the technology could be used to stereotactically access regions within a patient's brain, for example, to take a biopsy;
- Ear, nose, and throat surgery, where the technology could be used to insert surgical tools into a particular cavity within a patient's skull through their nose;
- Spinal surgery, where the technology could be used to drill a hole through the pedicle of a vertebra without risking a breach of the spinal canal.

FIGURE 24.12 Acetabular implant alignment screen.

FIGURE 24.13 Photograph of Intellijoint HIP in use for acetabular implant inclination and anteversion measurement.

24.5 Accuracy performance

To quantify the accuracy performance of the Intellijoint minioptical technology, a standard protocol was applied (ASTM F2554-10, Standard Practice for Measurement of Positional Accuracy of Computer Assisted Surgery Systems). A calibrated phantom with precisely known divot locations (see Fig. 24.14) is used as ground truth. This phantom is specified by the standard, and includes an array of divots at known spatial locations, measured by a coordinate measuring machine (CMM). The Intellijoint minioptical system generates 3D measurements of the phantom divot locations by attaching a ball-tipped probe to the end of the tracker and mating the probe tip with each divot. Those measurements are compared to the known spatial locations to quantify accuracy. The phantom divots are measured with a CMM to an accuracy of 0.1 mm. Divots are arranged at various heights throughout a 20 × 20 cm volume, grouped into sets of five. A photograph of the phantom set up for use, including a tracker with a ball-tipped probe attached, is provided in Fig. 24.15.
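The two metrics reported for this kind of protocol can be computed from repeated probe measurements roughly as follows. This is a sketch of the ASTM-style analysis; the function names are ours, and the exact statistics prescribed by the standard may differ in detail.

```python
import numpy as np

def single_point_precision_rms(samples):
    """RMS deviation of repeated 3D measurements of one divot about their
    mean position (a precision, not an accuracy, since no ground truth is used)."""
    samples = np.asarray(samples, dtype=float)
    deviations = np.linalg.norm(samples - samples.mean(axis=0), axis=1)
    return float(np.sqrt(np.mean(deviations ** 2)))

def pairwise_distance_error_rms(measured, truth):
    """RMS error of inter-divot distances versus CMM ground truth, using the
    mean measured position of each divot."""
    measured = np.asarray(measured, dtype=float)
    truth = np.asarray(truth, dtype=float)
    errors = []
    for i in range(len(truth)):
        for j in range(i + 1, len(truth)):
            errors.append(np.linalg.norm(measured[i] - measured[j]) -
                          np.linalg.norm(truth[i] - truth[j]))
    return float(np.sqrt(np.mean(np.square(errors))))
```

Averaging repeated samples before taking pairwise distances suppresses random noise, which is why a distance accuracy figure can come out smaller than the single-point precision, as in the results below.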


FIGURE 24.14 Accuracy phantom used to perform positional accuracy testing according to ASTM 2554-10.

FIGURE 24.15 Accuracy phantom in use by a user.

The results of design verification testing demonstrated that the Intellijoint minioptical system is capable of localizing a single point with a precision of 0.54 mm (RMS), and the distance between any two points with a worst-case accuracy of 0.28 mm (RMS). (The accuracy between two points, 0.28 mm RMS, is better than the precision of a single point, 0.54 mm RMS, because the accuracy metric utilizes an average measured position at each of the two points.) The significance of these results is that the Intellijoint minioptical system can normally be expected to measure with submillimetric accuracy, which is sufficient for many clinical applications.

24.6 Conclusion

The Intellijoint minioptical technology enables accurate surgical navigation via a patient-mounted camera placed directly within the sterile field, and overcomes many of the barriers to adoption of traditional arthroplasty navigation systems (e.g., cost, line-of-sight issues, and poor portability). Its utility in THA has been demonstrated via Intellijoint HIP, a commercially available system that provides the surgeon with measurements of acetabular implant angles and positional changes to leg length and offset.

24.7 Challenges and further development

The Intellijoint minioptical technology has not been proven outside of THA. It is speculated that this technology would be well suited to a variety of clinical applications, such as knee, cranial, spinal, and ear, nose, and throat surgery; however, further development is needed to confirm this hypothesis. Further, the Intellijoint minioptical technology has only been used in an "imageless" context (i.e., with no registration to a 3D medical image dataset), so future development may include image-based applications. Although the camera and tracker are already much smaller than other optical navigation technologies, another area of development is further miniaturization to enable different types of surgery. Another potentially advantageous application is integration with robotics: the camera is sufficiently small to be mounted directly to a robotic manipulator, and it is speculated that it could serve as a pose measurement sensor in a closed-loop robotic control system.


25
More Than 20 Years Navigation of Knee Surgery With the Orthopilot Device
Dominique Saragaglia
CHU Grenoble-Alpes, South Teaching Hospital, Grenoble, France

ABSTRACT
Navigation of knee surgery was born in Grenoble (France) in the mid-1990s. The first total knee arthroplasty (TKA) was implanted in a human being in January 1997, and a prospective randomized study comparing computer-assisted TKA with the conventional technique was completed in March 1999. The results were published, leading to the marketing of the Orthopilot device. In March 2001 we carried out the first high tibial osteotomy for genu varum deformity, and in January 2008 we implanted a UniKA for the first time with a "light" software version. The aim of this chapter is first to present the Orthopilot device and the operative technique, then the evolution of the software that allows it to be used for osteotomies around the knee, UniKA, and revision of Uni to TKA. Second, the results of these techniques are presented, and finally, the usefulness of navigation is discussed.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00025-6
© 2020 Elsevier Inc. All rights reserved.


25.1 Introduction

Computer-assisted surgery began with stereotactic neurosurgery [1] toward the end of the 1980s. This new technique aimed to improve the precision of operations, reduce surgical invasiveness, and improve the traceability of interventions. The history of computer-assisted implantation of total knee prostheses dates back to 1993, when we set up a work group including two surgeons (D. Saragaglia and F. Picard), one medical doctor/computer scientist (P. Cinquin), two computer scientists (S. Lavallée and F. Leitner), and an industry partner, which was at the time I.C.P. France (bought over by Aesculap-AG, Tuttlingen, Germany, in 1994). In our first meeting, the senior surgeon drew up the specifications defining computer assistance for total knee replacement. A preoperative scan was not needed to guide surgical navigation, for several reasons: first, at the time, this examination was not part of the preoperative check-up required for a knee prosthesis; second, we felt that an examination of this sort could only complicate the operative procedure; and last, it would have added cost and a considerable amount of radiation exposure for the patient. We needed a reference to the mechanical leg axis throughout the whole operation so that the cutting guides could be placed perpendicular to this axis in the frontal and sagittal planes. The cutting guides needed to be placed freehand, without any centromedullary or extramedullary rods. Finally, the operation was not supposed to last more than 2 hours (maximum tourniquet time) and the procedure was to be accessible to all surgeons, whatever their computing skills. The project was assigned to F. Picard, as part of his Postgraduate Diploma in Medical and Biological Engineering, and to F. Leitner, a computer scientist who was completing his training.
After 2 years of research, the system was validated by the implantation of 10 knee prostheses in 10 cadaver knees, and the results were published in 1997 [2,3] in several national and international publications, including CAOS, SOFCOT, and SOBCOT. After obtaining the consent of the local ethics committee on December 4, 1996, the first computer-assisted prosthesis was implanted in a patient on January 21, 1997 (D. Saragaglia, F. Picard, T. Lebredonchel). The operation lasted 2 hours and 15 minutes and was uneventful. A prospective randomized study comparing this technique to the conventional technique began in January 1998 and was completed in March 1999. The results (Table 25.1) were presented at several national and international meetings and published in a lead article in the French Journal of Orthopaedic Surgery [4]. In March 1999, the prototype that we had used in this study evolved into a final model called Orthopilot (B. Braun-Aesculap, Tuttlingen, Germany). Since that time, numerous papers have been published confirming that this technique was well founded, and more than 360,000 prostheses have been implanted worldwide with Orthopilot. The software packages have evolved (versions 3.0, 3.2, 4.0, 4.2, 5.0, 5.1) but the basic principle has remained the same since the system was created. Today, the operative procedure lasts between 1 and 1.5 hours, depending on the difficulty of the case, and almost 4000 total knee arthroplasties (TKAs) have been implanted in our department using navigation. In March 2001 we performed the first high tibial osteotomy (HTO) for genu varum deformity, and in January 2008 we implanted a UniKA for the first time with a "light" software version. The aim of this chapter is first to present the Orthopilot device and the operative technique, then the evolution of the software that allows it to be used for osteotomies around the knee, UniKA, and revision of Uni to TKA; second, to present the results of these techniques; and last, to discuss the usefulness of navigation.

TABLE 25.1 Results of the prospective randomized study published in 2001 (conventional surgery vs. navigation).

Patients (n): 25 vs. 25
Preoperative goal: 180 ± 3 degrees in both groups
Genu varum: 80% vs. 76%
Genu valgum: 16% vs. 24%
Neutral: 4% vs. 0%
Preoperative HKA angle: 175 degrees (162-210 degrees) in both groups
Postoperative HKA angle: 181.2 ± 2.7 degrees vs. 179 ± 2.5 degrees
Goal achievement: 75% vs. 84% (P = .35, NS)
MFMA: 91 ± 2 degrees vs. 89.5 ± 1.6 degrees (P = .048, S)
Goal achievement for MFMA (90 degrees): 16.5% vs. 48% (S)
MTMA: 90.2 ± 1.6 degrees vs. 89.5 ± 1.4 degrees (P = .11, NS)
Posterior slope: 90.8 ± 2.2 degrees vs. 89.5 ± 2 degrees (P = .18, NS)
Goal achievement for posterior slope: 41% vs. 76% (S)

HKA, hip-knee-ankle; MFMA, medial femoral mechanical angle; MTMA, medial tibial mechanical angle; NS, not significant; S, significant.

25.2 The Orthopilot device

FIGURE 25.1 The Orthopilot device.

The Orthopilot device is an image-free navigation system based on intraoperative data acquisition. The equipment includes a navigation station (Fig. 25.1), which allows the markers to be located spatially in real time, as well as an ancillary device adapted to this navigation. The navigation station is made up of a personal computer, an infrared Spectra localizer (Northern Digital Inc.), and a dual-command foot-pedal. The progress of the operative protocol is defined in the software, and the surgeon controls this via the pedal and a dedicated graphic interface. The computer-navigation system is placed 1.8-2.2 m from the knee, on the patient's opposite side, closer to the patient's head. This navigation station also includes ancillary devices: the wireless markers and their tightening system. Markers, also called "rigid bodies," are a collection of four reflective spheres rigidly held together (Fig. 25.2). An infrared source embedded in the localizer illuminates these spheres, whose positions are detected by image triangulation. The attitude (position and orientation) of each marker is then computed from the sphere positions and the marker's own known shape. Markers can be attached to any object that needs to be tracked. It is also possible to mark out specific points in space using a metallic pointer linked to a marker whose exact tip coordinates are prerecorded. The markers are rigidly attached to the bone using special bicortical screws. For TKA, the ancillary device is made up of cutting guides equipped with markers that are firmly attached to the bone by four threaded pins. These guide the tibial cut (height of cut, valgus/varus, tibial slope) and the femoral cut (height of cut, valgus/varus, flexum, recurvatum). The chamfer cutting guide allows the anterior and posterior cuts to be made. A distractor can be used to guide the ligament balance in flexion and extension.
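The pose computation just described, recovering a marker's attitude from its four triangulated sphere positions and its known geometry, is in essence a rigid point-set registration. The following Python/NumPy sketch uses the classic Kabsch/SVD solution; it is illustrative only, not the Orthopilot implementation, and `model_pts` stands in for a marker's factory-calibrated sphere layout:

```python
import numpy as np

def marker_pose(model_pts, measured_pts):
    """Estimate the rigid transform (R, t) mapping a marker's known
    sphere geometry onto the triangulated sphere positions, so that
    measured ~= R @ model + t.  Classic Kabsch/SVD registration;
    a hypothetical sketch, not the actual Orthopilot code.

    model_pts, measured_pts: (4, 3) arrays of corresponding points (mm).
    """
    cm = model_pts.mean(axis=0)                    # centroid of model spheres
    cd = measured_pts.mean(axis=0)                 # centroid of measured spheres
    H = (model_pts - cm).T @ (measured_pts - cd)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution: force det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                             # rotation (marker attitude)
    t = cd - R @ cm                                # translation (marker position)
    return R, t
```

Applied at camera frame rate to each tracked "rigid body," a routine of this kind yields the real-time attitudes that drive the on-screen guidance.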

FIGURE 25.2 Marker with four reflecting balls (passive marker).

FIGURE 25.3 Tibial and femoral markers fixed percutaneously to the bone.

25.3 Operative procedures: total knee arthroplasty

A tourniquet is placed at the root of the thigh and the patient is put in a supine position without any particular setup. In the majority of cases a medial parapatellar approach is used, but all kinds of approaches can be used. The patella is everted and the markers are inserted. To reduce the length of the incision, the femoral and tibial markers are inserted percutaneously (Fig. 25.3) and are positioned so that they can be seen throughout the whole operation without the need to move the localizer. The femoral marker is placed 15 cm above the joint line, in an oblique position with respect to the frontal plane, and the tibial marker is placed 10 cm below the joint, parallel to the frontal plane.

25.3.1 Navigation of the femoro-tibial mechanical angle

The navigation step begins with the palpation of the anterior cortex of the femur just above the upper margin of the trochlea and the posterior side of the medial and lateral condyles. Then, the middle of the tibial spines is palpated, as well as the middle of the medial or lateral tibial plateau. When the tibial mechanical axis is in varus, as is almost always the case in genu varum deformities, the lateral plateau is palpated. When the tibial mechanical axis is in valgus, the medial plateau must be palpated in order to avoid too much resection on the medial side. At the end of this step, the center of the knee has been located and the size of the femoral implant is registered in the computer. The acquisition of the center of the ankle is obtained by palpation of the medial and lateral malleoli as well as the middle of the tibiotarsal joint. Finally, the center of the femoral head is located by moving the leg in a small circular motion, slowly and progressively, with the knee in extension or in flexion. This lets the localizer track the femoral "rigid body" and locate the center of the femoral head. At this step of the procedure, we know the hip-knee-ankle (HKA) angle, which can be compared to the radiological preoperative axis, and the size of the femoral implant. Before inserting the cutting guides, it is very important to check the reducibility of the deformity [5], above all near extension (10 degrees of flexion), in order to predict any release. In the case of genu varum, a lateral manual stress is applied to the knee and the HKA angle is checked on the computer. In the case of hyperreducibility (varus going into valgus) or hyporeducibility of less than 3 degrees, there is no need to release the medial collateral ligament (MCL). In the case of hyporeducibility of 3-6 degrees, a release of the MCL is needed (pie-crusting), and in the case of hyporeducibility above 6 degrees, a major release is needed (release of the MCL at its femoral insertion). Otherwise, the laxity of the convexity of the knee is noted.
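Two geometric computations underpin this step: the hip center is the pivot of the circumduction movement (a sphere fit to the recorded positions of the femoral marker), and the HKA angle follows from the three joint centers. A minimal Python/NumPy sketch, assuming noise-free marker positions; these are illustrative formulations, not the actual Orthopilot algorithms:

```python
import numpy as np

def fit_sphere_center(points):
    """Hip-center estimation: during circumduction the femoral marker
    pivots about the hip, so its recorded positions lie on a sphere.
    Algebraic least squares: |p|^2 = 2 p.c + k, with k = r^2 - |c|^2,
    which is linear in the unknown center c and scalar k."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3]  # the sphere center, i.e., the hip center

def hka_angle(hip, knee, ankle):
    """Hip-knee-ankle angle in degrees from the three joint centers;
    180 degrees corresponds to a perfectly straight mechanical axis."""
    u, v = hip - knee, ankle - knee
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
```

With this convention a varus knee reads below 180 degrees (e.g., 175 degrees for 5 degrees of varus), and the reducibility thresholds above (3 and 6 degrees of residual hypocorrection) are simple differences between two such readings, at rest and under stress.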

25.3.2 Navigation of the bone cuts

FIGURE 25.4 Measurement of the resection height of the tibial plateau, the varus/valgus alignment, and the posterior slope.

The tibial cutting guide is mounted on a support, which allows the valgus/varus, the height of the cut, and the posterior tibial slope to be measured (Fig. 25.4). We currently prefer to position this cutting guide freehand, without any support, which means shorter cutaneous incisions can be made. This cutting guide is positioned in front of the tibia with its "rigid body" (Fig. 25.2) and is fixed to the bone by four threaded pins once the correct measurements are displayed on the screen, which for us are a valgus/varus of 0 degrees, a posterior tibial slope of 0-2 degrees, and a cutting height of 8 or 10 mm, corresponding to the thickness of the tibial plateau implant. Once the cutting guide is fixed into position, an oscillating saw is used to make the cut. The femoral cutting guide, equipped with its "rigid body," is then placed against the anterior side of the distal end of the femur, with the knee flexed at 90 degrees, after the overhang of the femoral trochlea has been resected. This step is very important to determine the femoral mechanical axis and to compare it with the radiological preoperative measurements (Fig. 25.5). The surgeon then adjusts the valgus/varus of the distal femoral cutting guide (0 degrees for us), the posterior slope (between 0 and 2 degrees of flexum, to avoid notching the anterior cortex), and the height of the resection (minimal resection on the convex side of the deformity, to reduce ligament imbalance), corresponding to the thickness of the prosthesis. At this stage of the operation, the computer has carried out a "bone" alignment of the leg, and the prosthesis is then implanted using the classic ancillary equipment, especially when making the anterior, posterior, and chamfer cuts.

FIGURE 25.5 Radiological measurement of the varus deformity: HKA angle, medial femoral mechanical angle, medial tibial mechanical angle. HKA, Hip-knee-ankle.

25.3.3 Implanting the trial prosthesis

The implantation of the trial prosthesis uses computer assistance to check the mechanical leg axis in extension, in the walking position, and in flexion at 90 degrees. Ligament balance is also controlled by taking valgus or varus stress measurements and assessing any medial or lateral gaping. The mechanical leg axis can also be checked once the prosthesis is permanently implanted. This allows the detection of any excess medial or lateral cement that is liable to change the axis by 1 or 2 degrees (1 mm of cement = 1 degree).

25.3.4 Rotation of the femoral implant

We never systematically apply external rotation to the femoral implant, at least in genu varum. We only apply rotation according to the femoral valgus or varus (Fig. 25.6). If, in the case of genu varum, the femur is in a valgus of 3 degrees or more, we believe it is logical to apply external rotation, because it will be necessary to resect more of the distal medial condyle, and as a result more of the posterior condyle, if one wants the ligaments to be balanced in flexion. This rotation does not need to be navigated, since the ancillary makes it easy to perform. If the femur is in varus, and if the genu varum is overreducible, it is equally logical to apply internal rotation, since less of the distal medial condyle will be resected and therefore less of the posterior medial condyle. In the case of genu valgum, external rotation is almost systematic, since femoral valgus is almost constant. We usually apply 1 degree of rotation for 1 degree of femoral valgus and do not exceed 5-6 degrees of rotation, to limit the cut in the anterolateral cortex of the femur.
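This rule of thumb for genu valgum, roughly 1 degree of external rotation per degree of femoral valgus and never more than 5-6 degrees, can be stated compactly. A sketch only; the exact cap of 6.0 degrees is our assumption within the stated range:

```python
def femoral_external_rotation(femoral_valgus_deg, cap_deg=6.0):
    """External rotation (degrees) for the femoral implant in genu
    valgum: about 1 degree per degree of femoral valgus, capped to
    limit the cut in the anterolateral femoral cortex.  Hypothetical
    helper encoding the chapter's rule of thumb; cap_deg = 6.0 is an
    assumption within the stated 5-6 degree range."""
    return min(max(femoral_valgus_deg, 0.0), cap_deg)
```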

FIGURE 25.6 Measurement of the femoral mechanical angle with the computer.

25.3.5 Ligament balance

There are two ways to proceed: either by working from reducibility tests of the deformity (valgus and varus stress near extension) or by following ligament balance management software. We prefer the first method, which allows the surgeon to consider and remain master of his or her decisions. We proceed in the following way: when the mechanical leg axis appears on the computer screen, before any removal of osteophytes, we apply manual force in varus and valgus, with the knee at 5-10 degrees of flexion, to assess the reducibility of the deformity and the gap in convexity. If the deformity is completely reducible, or even hyperreducible, we are certain that the ligaments will be balanced in extension and that it will not be necessary to release soft tissue in the concavity. The same is true if reducibility gives a hypocorrection of 2-3 degrees. If hypocorrection is greater than this, it will be necessary to allow for the progressive release of soft tissue with trial implants, after removing the osteophytes. However, it should be stated that a perfect balance does not necessarily mean a symmetrical gap between the medial and lateral sides, since it is known that, in a normal knee, the lateral compartment is more lax than the medial compartment. We therefore readily accept, in genu varum, a difference of 3 or 4 degrees more for the lateral compartment of the knee. As far as the management of gaps between extension and flexion is concerned, we never have an imbalance since, on the one hand, we commonly use a posterior cruciate ligament (PCL) retaining prosthesis, which is a good "keeper" of the gaps, and, on the other hand, the bone resection thickness is identical to the thickness of the implants. Thus, there is no reason for a balance that was adequate before the implantation of the prosthesis to change afterwards. Finally, the medial-lateral balance in flexion can be controlled without any distractor, since we believe that this is an artificial procedure which does not guarantee an adequate balance; indeed, creating tension between the two sides is subjective and difficult to reproduce from one surgeon to the next. To check this balance, it is sufficient, once the cutting guide for the chamfers has been applied at the distal femur, to raise the thigh using this supporting point, to pull manually along the axis of the knee flexed at 90 degrees, and to check the parallelism of the cutting guide with the cut of the tibial plateau (Fig. 25.7). In genu varum, parallelism is perfect in most cases and it is not necessary to release soft tissue. Otherwise, especially in genu valgum, it is necessary to release the medial or lateral collateral ligaments progressively.

25.3.6 Implanting the final prosthesis

The final prosthesis is cemented (or not, in the case of cementless prostheses) when the HKA angle is at 180 ± 3 degrees, the ligaments are well balanced, and the tracking of the patella and the range of motion are optimal. Finally, the approach is closed step by step according to the habits of each surgeon.

FIGURE 25.7 Checking of the ligament balancing in flexion at 90 degrees.

FIGURE 25.8 HKA angle displayed on the screen of the computer. HKA, Hip-knee-ankle.

25.4 Osteotomies for genu varum deformity

25.4.1 High tibial opening wedge osteotomy

The same principles apply: real-time acquisition of the hip, knee, and ankle (HKA) centers and of the anatomical landmarks at the level of the knee joint line (palpation of the medial and lateral epicondyles and the tip of the patella) and ankle. These allow the mechanical axis of the lower limb to be shown dynamically on the computer screen (Fig. 25.8), that is, the axis of the lower limb can be seen both pre- and postosteotomy, and it can be checked whether the preplanned correction has been established [6-8]. Generally, the procedure follows this sequence: the rigid body markers are fixed percutaneously at the level of the distal femur and proximal tibia, allowing acquisition of the centers of the hip, knee, and ankle. The lower limb mechanical axis then appears on the screen and can be compared with the preoperative radiological goniometry. The HTO is performed 3 cm below the level of the medial joint line, the level being confirmed by placing an intraarticular needle. The osteotomy is directed at the fibular head, keeping the saw as horizontal as possible to avoid fracturing the lateral tibial plateau. With the aid of two Pauwels osteotomes inserted along the track of the saw cut, the tibia is placed into valgus. These are then replaced by a metal spacer, which is inherently stable and allows the amount of correction to be checked. If there were 8 degrees of varus, one would try a 10-11 mm spacer and make sure that an appropriate hypercorrection is produced in real time on the computer screen. If this is insufficient, we try a thicker spacer, and the reverse if the correction is too great. The metallic spacer is then replaced with a bioabsorbable tricalcium phosphate wedge (Biosorb, B. Braun Aesculap, Boulogne, France) of the desired thickness, and the intervention is completed by plating the proximal tibia (Fig. 25.9) with a locking screw plate.

FIGURE 25.9 Opening wedge high tibial osteotomy filled by a tricalcium phosphate wedge (Biosorb) and fixed by a locking screw plate.

25.4.2 Double-level osteotomy

The first stage is essentially the same as that of an HTO: percutaneous insertion of the rigid body markers (high enough not to hamper the femoral osteotomy and low enough not to interfere with the tibial osteotomy), followed by acquisition of the hip center, the middle of the knee, and the tibio-tarsal joint in order to find the mechanical axis of the lower limb. The second stage consists of making the femoral closing osteotomy in the distal femur (in general a 5-6 degree alteration is made, although sometimes more in congenital femoral varus) and fixing it in position with a plate. A lateral approach with elevation of the vastus lateralis is chosen, the lateral arthrotomy allowing the tip of the trochlea to be located. The track of the osteotomy lies above the trochlea and is directed obliquely from above laterally to below on the medial femoral cortex. A wedge of bone is then excised from the distal femur with a 4-5 mm lateral base, corresponding to a 5-6 degree correction. The osteotomy is fixed with the plate after placing the femur into valgus manually. Once this stage is reached, the mechanical axis is rechecked so that the required correction at the level of the tibia can be calculated in order to achieve the preoperative objectives. The last stage is to perform the HTO exactly in the fashion described above (Fig. 25.10A and B). The definitive axis is then displayed on the computer screen.

FIGURE 25.10 (A) Double-level osteotomy performed for severe genu varum deformity: AP view. (B) Double-level osteotomy performed for severe genu varum deformity: lateral view. AP, anteroposterior.

25.5 Osteotomy for genu valgum deformity

The technique used is similar to that used in varus knees, as described above [9]. After intraoperative acquisition of the mechanical axis of the lower limb, the appropriate femoral varus osteotomy is carried out: either a medial closing or a lateral opening wedge. In some cases of excessively tight fascia lata, where the required lateral opening osteotomy exceeded 6-8 degrees, piecrust lengthening is performed on the iliotibial band; this contributes to easier recovery of knee flexion. Medial closing osteotomies were performed in our earliest cases and were secured with an AO T-shaped plate. Lateral opening osteotomy is currently performed; the opening is filled with a bioabsorbable tricalcium phosphate wedge (Biosorb, SBM, Lourdes, France) and secured with an AO locking plate or an OTIS-F locking plate (SBM, Lourdes, France) (Fig. 25.11). A double varus osteotomy of the femur and tibia is performed when both the medial distal femoral mechanical axis and the medial proximal tibial mechanical axis are in valgus, to avoid an oblique joint line (Fig. 25.12A). In these cases, a medial closing-wedge osteotomy of the tibia is performed first and fixed with an OTIS locking plate (SBM, Lourdes, France), and then a lateral opening-wedge varus osteotomy of the femur is carried out (Fig. 25.12B and C).
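The relationship between the desired angular correction and the spacer thickness tried during an opening wedge osteotomy is ordinary wedge trigonometry: the opening height h equals W·tan(θ), with W the distance from the hinge to the opening. A hedged sketch; W is patient-specific and not a value from this chapter, and in practice the navigation checks the achieved axis directly rather than relying on this formula:

```python
import math

def opening_height_mm(correction_deg, hinge_to_opening_mm):
    """Approximate opening height for an opening wedge osteotomy:
    h = W * tan(theta).  Textbook trigonometry offered only to show
    the order of magnitude; hinge_to_opening_mm is a patient-specific
    assumption, not a value from the chapter."""
    return hinge_to_opening_mm * math.tan(math.radians(correction_deg))
```

For example, with W = 60 mm, a 10 degree correction (8 degrees of varus plus a couple of degrees of hypercorrection) gives roughly 10.6 mm, of the same order as the 10-11 mm spacer mentioned above.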

25.6 Uni knee arthroplasty

FIGURE 25.11 Opening wedge distal femoral osteotomy for genu valgum deformity filled by a tricalcium phosphate wedge and fixed by an AO locking screw plate.

FIGURE 25.12 (A) Genu valgum with the femoral mechanical axis in valgus as well as the proximal tibial mechanical axis. (B) Double-level osteotomy to correct the deformity (tibial closing wedge and femoral opening wedge): AP view. (C) Double-level osteotomy to correct the deformity: lateral view. AP, anteroposterior.

FIGURE 25.13 Palpation of the intercondylar eminence for UKA. UKA, Uni knee arthroplasty.

The rigid body markers are set as for TKA. A 7-9 cm medial parapatellar skin incision is performed, its length depending on the patient's BMI and soft-tissue elasticity. Then a subvastus or midvastus intraarticular approach is used to access the knee [10,11]. The next step involves data collection of the knee: the middle of the intercondylar notch, the middle of the tibial spines (Fig. 25.13), the middle of the medial tibial plateau, the most distal point of the femoral condyle, and finally, the medial and lateral malleoli and the middle of the ankle joint. The second step is the registration of the hip kinematic center, using a hip circumduction movement, and of the knee kinematic center, using flexion-extension and the rotation axis of the knee in 90 degrees of flexion. At the end of the registration, the femoro-tibial mechanical axis (FTMA) is displayed on the monitor screen and the surgeon then starts to navigate the tibial bone resection. Before proceeding to tibial cutting guide navigation, one checks the correlation between the FTMA displayed on the computer monitor and the preoperative HKA angle measured on X-ray, and also assesses the reducibility of the leg deformity. If the deformity is hypercorrectable (too much valgus), this is a contraindication for mobile-bearing uni knee arthroplasty (UKA) due to the risk of hypercorrection. Indeed, if the leg stays in varus, the MCL will remain slack and the mobile bearing surface is at risk of dislocation. To avoid this, a larger polyethylene insert would fill the gap, but this would then increase the load-stress on the other side of the joint. The tracked tibial cutting jig is positioned in front of the medial tibial plateau, then navigated and fixed using three to four threaded pins (Fig. 25.14). Using the graphic user interface, the tibial jig is placed at between 2 and 3 degrees of varus and between 3 and 5 degrees of slope, with a bone resection ranging between 4 and 8 mm (never more than 8 mm, in order to avoid any subsidence of the tibial plateau implant related to weak cancellous bone). However, the less varus, the more bone resection, and vice versa, according to the indications for UKA [12]. Once these three parameters (coronal, sagittal, and height) are satisfactory, the tibial cut is performed using an oscillating saw for the horizontal cut and a reciprocating saw for the vertical cut. At this stage, the navigation is finished, and the femoral cut is performed using a tibial spacer placed between the resected tibial plateau bony surface and the distal medial femoral condyle with the knee in extension. The distal femoral jig is slotted onto the tibial spacer until it reaches the femur, ensuring that the tibial plateau fits perfectly onto the resected bony surface (Fig. 25.15). Two threaded pins are used to stabilize the femoral jig and the distal femoral cut is performed using an oscillating saw. Then the knee is flexed to 90 degrees and, with the use of several templates, the best-fitting size is chosen for the femoral condyle. Posterior and chamfer cuts are performed using the suitable template. The tibial and femoral trial implants are tested. The surgeon verifies the leg alignment, which should be within 177 ± 2 degrees, that is, a slight hypocorrection. When using a mobile-bearing tibial plateau, a "safe laxity" of around 1 degree is tolerated. If the "safe laxity" is more than 2 degrees, the choice will then be to use a fixed-bearing tibial plateau. Once satisfactory alignment and position are obtained, the final implants are cemented, and the postimplant FTMA is controlled and recorded before closing.
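The navigated targets above, 2-3 degrees of varus, 3-5 degrees of posterior slope, and 4-8 mm of resection, amount to simple range checks of the kind a graphic user interface can surface. A hypothetical sketch, not the Orthopilot software:

```python
def check_tibial_jig(varus_deg, slope_deg, resection_mm):
    """Check navigated UKA tibial-cut parameters against the targets
    stated in the text: 2-3 degrees of varus, 3-5 degrees of posterior
    slope, 4-8 mm of resection (never more than 8 mm, to avoid
    subsidence into weak cancellous bone).  Hypothetical helper;
    returns an empty list when all three parameters are within target."""
    issues = []
    if not 2.0 <= varus_deg <= 3.0:
        issues.append("varus outside the 2-3 degree target")
    if not 3.0 <= slope_deg <= 5.0:
        issues.append("slope outside the 3-5 degree target")
    if not 4.0 <= resection_mm <= 8.0:
        issues.append("resection outside the 4-8 mm target")
    return issues
```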

25.7 Uni knee arthroplasty to total knee arthroplasty revision

FIGURE 25.14 Fixation of the tibial cutting guide for UKA. UKA, Uni knee arthroplasty.

FIGURE 25.15 Insertion of the distal cutting guide for UKA. UKA, Uni knee arthroplasty.

In cases of UKA-to-TKA revision, the navigation is the same as in primary TKA. The implants are not removed, and the acquisitions are made with the implants in place, even when there is loosening or sinking of the implants (Fig. 25.16). This works because the navigation system is not based on bone morphing but on the palpation of several critical points, especially the lateral tibial plateau in the case of revision of a medial UKA, or the medial plateau in the case of revision of a lateral one. Once the lower limb angle is obtained, the various cuts are made using the cutting guides, positioned with the help of the navigator. The tibial cut rarely presents problems, except where there is major bone loss under the tibial component, which requires either a bone graft or the use of metal wedges. Loss of bone substance after removal of the implant is rarely a problem in the femur, except in rare cases where the femoral component has subsided into the cancellous bone of the condyle (Fig. 25.17). After removing the implant, it is important to ensure that the posterior cut is compensated for when implanting the primary prosthesis, by adjusting the external rotation of the femoral implant using the navigation system. The rest is carried out in the same way as a primary TKA [11].

FIGURE 25.16 Loosening of a UKA revised by computer-assisted TKA. TKA, Total knee arthroplasty; UKA, uni knee arthroplasty.

25.8 Results

25.8.1 Total knee arthroplasty

The first results were published in 2001 to validate the device [4]. Two other papers were published in 2017 presenting the results after more than 10 years of follow-up [13,14]. These results are very interesting regarding the HKA angle, which was within 180 ± 3 degrees in more than 90% of cases (n = 208) for severe preoperative varus deformities [13] and in 92.3% (n = 243) for all types of osteoarthritis [14]. The mean IKS scores were, respectively, 180 and 189.5 points. Moreover, the survival rate of the prostheses at more than 10 years of follow-up was 99.2% for all types of osteoarthritis and 99.3% for severe genu varum deformities.
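As an illustration of how such alignment outcomes are tabulated, the sketch below computes the share of postoperative HKA angles falling within the 180 ± 3 degree window; the angle values are hypothetical illustration data, not measurements from the series cited above.

```python
def share_within_target(hka_angles, target=180.0, tolerance=3.0):
    """Fraction of knees whose HKA angle lies within target ± tolerance."""
    hits = sum(1 for a in hka_angles if abs(a - target) <= tolerance)
    return hits / len(hka_angles)

# Hypothetical postoperative HKA angles (degrees) for ten knees.
angles = [179.0, 181.5, 178.0, 183.0, 180.0, 176.5, 182.0, 179.5, 184.5, 180.5]
print(f"{share_within_target(angles):.0%} within 180 ± 3 degrees")  # 80%
```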

25.8.2 Uni knee arthroplasty and revision to total knee arthroplasty

The results were published in 2017 [11]. In cases of genu varum (n = 79), preoperative objectives were attained in 88.5% of cases for the HKA angle, with four cases (5.1%) under 175 degrees and five cases (6.1%) over 179 degrees; in 92.4% of cases for the mechanical tibial axis, with three cases over 90 degrees and three cases under 84 degrees; and in 95% of cases for the tibial slope. In cases of genu valgum (n = 19), preoperative objectives were attained in 84% of cases for the HKA angle and in 92% of cases for posterior tibial slope.


FIGURE 25.17 Navigated TKA for the case of Fig. 25.16. Note the screws to fill femoral bone loss. TKA, Total knee arthroplasty.

25.8.3 Osteotomies

25.8.3.1 High tibial osteotomy

In a study comparing navigation to the conventional technique [6], two groups of 28 patients were included. The preoperative aim (184 ± 2 degrees) was attained in 27 of 28 cases in the navigated group (96%), compared to 20 of 28 in the conventional group (71%). This difference was statistically significant in favor of the computer-navigated cases. Not only was the objective achieved, but a wide dispersion of results (especially overcorrection) was also avoided.
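The significance claim for 27/28 versus 20/28 can be checked with a two-sided Fisher exact test; a minimal pure-Python version is sketched below (a generic check, not necessarily the statistical test the authors used).

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sums the hypergeometric probabilities of all tables whose probability
    is no greater than that of the observed table."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, row1)

    def p(k):  # hypergeometric probability of k "successes" in row 1
        return comb(col1, k) * comb(n - col1, row1 - k) / denom

    p_obs = p(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    return sum(p(k) for k in range(lo, hi + 1) if p(k) <= p_obs + 1e-12)

# Navigated group: 27 of 28 within target; conventional group: 20 of 28.
p_value = fisher_exact_two_sided(27, 1, 20, 8)
print(f"p = {p_value:.3f}")  # below the usual 0.05 threshold
```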

25.8.3.2 Double-level osteotomy

In an article published in 2011 [8], based on 42 cases, we found not only that the patients had very good functional results but also that the preoperative goal was reached in 92.7% of cases.

25.8.3.3 Osteotomies for genu valgum

These results were published in 2014 [9]. No complications other than a transient paralysis of the common fibular nerve were observed. Twenty-three patients (25 of 29 knees) were reviewed at a mean follow-up of 51 months. Twenty-two patients were satisfied or very satisfied. The preoperative goal was achieved in 86.2% of cases (25/29) for the HKA angle.

In the revision series, preoperative objectives were achieved in 92.4% of cases for the HKA angle. In another article, in which we compared computer-assisted revision to conventional revision [15], we found that this rate with the conventional technique was 87.5%. This slight difference in favor of the computer-assisted group was not statistically significant.

25.9 Discussion

Computer navigation in knee replacement surgery has reached maturity, but it should be recognized that it has not achieved the degree of development it deserves. The reasons for this lack of enthusiasm are multiple: the complexity of


FIGURE 25.18 Osteoarthritis after malunion of the right femur (plate in place for more than 20 years in an 82-year-old female patient). (A) Preoperative X-ray: AP view. (B) Preoperative X-ray: lateral view. (C) Postoperative long-leg X-ray. (D) Postoperative lateral view. AP, anteroposterior.

certain navigation systems which emerged in the early 2000s [16], which added over 30 minutes to the operating time; the cost of the equipment at the time, when it had to be bought or rented; the lack of long-term studies proving increased survival of navigated implants; and a certain confusion regarding the objectives of navigation, which is not just a tool for implanting prostheses at 180 ± 3 degrees but, according to the axis chosen, whether anatomical or kinematic [17], a means of achieving increased precision and greater reproducibility. It is important to recognize that there are close to zero complications linked to navigation, far fewer than with traditional techniques in terms of fat embolism and postoperative bleeding [18,19]. Also, the learning curve is fast, and navigation is a remarkable tool for teaching knee replacement surgery [20]. The major advantage of navigation is its usefulness in major knee deformities, whether varus or valgus. In this context, it allows preoperative objectives to be met in over 90% of cases [13] and, thanks to its evaluation of the reducibility of the deformity [5], it considerably reduces the major release rate, which was only 9.5% in a recent series [13]. It is also of particular interest for evaluating sagittal ligament balance in the knee, as it is always difficult intraoperatively to evaluate visually the degree of flexum or residual recurvatum, which navigation can quantify precisely, in degrees. Finally, and this is certainly one of its major benefits, navigation allows knee replacement to be carried out in excellent conditions in cases of gonarthrosis with femoral or tibial bone malunion, and especially where nonremovable osteosynthesis hardware (particularly femoral) is in place [21–23]. The presence of this type of hardware (plates or nails) makes it impossible to use an intramedullary rod, which makes surgery riskier (Fig. 25.18A–D). Regarding UKA navigation and osteotomies, these are probably the best indications for navigation.
Indeed, the long-term results are related to accurate coronal alignment. For UKA, the best coronal alignment leaves 2–5 degrees of undercorrection, which is not easy to achieve with a conventional technique. In cases of overcorrection or excessive undercorrection, the risk of failure is much higher. The same applies to osteotomies for varus or valgus deformity. For genu varum, the best results are obtained with an overcorrection of 3–6 degrees. Without navigation this goal is not easy to reach, and in cases of excessive under- or overcorrection the risk of poor results is not negligible. This is probably the major reason for the lack of acceptance of this technique.
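The alignment targets quoted in this paragraph can be expressed as simple acceptance windows; the sketch below classifies a planned correction against them (the window bounds are those stated in the text, while the function and dictionary names are ours, purely for illustration).

```python
# Target windows quoted in the text: UKA aims for 2-5 degrees of residual
# undercorrection; osteotomy for genu varum aims for 3-6 degrees of
# overcorrection. "degrees" is the planned correction relative to neutral.
WINDOWS = {
    "uka_undercorrection": (2.0, 5.0),
    "osteotomy_overcorrection": (3.0, 6.0),
}

def classify(procedure, degrees):
    """Classify a planned correction against the target window."""
    lo, hi = WINDOWS[procedure]
    if degrees < lo:
        return "undercorrected"
    if degrees > hi:
        return "overcorrected"
    return "within target"

print(classify("uka_undercorrection", 3.5))       # within target
print(classify("osteotomy_overcorrection", 7.0))  # overcorrected
```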

25.10 Conclusion

Computer-assisted knee surgery is a fantastic tool that allows the preoperative goal to be reached in a large majority of cases. When the goal is well defined, the superiority of navigation over conventional techniques has been demonstrated over more than 20 years of experience with Orthopilot. Navigation allows knee replacement to be carried out in excellent conditions in cases of gonarthrosis with femoral or tibial bone malunion, and especially where nonremovable


osteosynthesis hardware (particularly femoral) is in place. Regarding UKA navigation and osteotomies, these are probably the best indications for navigation, since the long-term results are related to accurate coronal alignment.

References


[1] Lavallée S. Gestes médico-chirurgicaux assistés par ordinateur: application à la neurochirurgie stéréotaxique [Thèse, génie biologique et médical]. Grenoble, France; 1989.
[2] Leitner F, Picard F, Minfelde R, Schultz HJ, Cinquin P, Saragaglia D. Computer-assisted knee surgical total replacement. In: Lecture notes in computer science: CVRMed-MRCAS'97. Berlin-Heidelberg: Springer Verlag; 1997. p. 629–38.
[3] Picard F, Leitner F, Saragaglia D, Cinquin P. Mise en place d'une prothèse totale du genou assistée par ordinateur: à propos de 7 implantations sur cadavre. Rev Chir Orthop Reparatrice Appar Mot 1997;83(Suppl. II):31.
[4] Saragaglia D, Picard F, Chaussard C, Montbarbon E, Leitner F, Cinquin P. Computer-assisted knee arthroplasty: comparison with a conventional procedure. Results of 50 cases in a prospective randomized study. Rev Chir Orthop Reparatrice Appar Mot 2001;87:18–28.
[5] Saragaglia D, Chaussard C, Rubens-Duval B. Navigation as a predictor of soft tissue release during 90 cases of computer-assisted total knee arthroplasty. Orthopedics 2006;29:S137–8.
[6] Saragaglia D, Roberts J. Navigated osteotomies around the knee in 170 patients with osteoarthritis secondary to genu varum. Orthopedics 2005;28(Suppl 10):S1269–74.
[7] Saragaglia D, Mercier N, Colle PE. Computer-assisted osteotomies for genu varum deformity: which osteotomy for which varus? Int Orthop 2010;34:185–90.
[8] Saragaglia D, Blaysat M, Mercier N, Grimaldi M. Results of forty two computer-assisted double level osteotomies for severe genu varum deformity. Int Orthop 2012;36:999–1003. Available from: https://doi.org/10.1007/s00264-011-1363-y.
[9] Saragaglia D, Chedal-Bornu B. Computer-assisted osteotomy for valgus knees: medium-term results of 29 cases. Orthop Traumatol Surg Res 2014;100:527–30. Available from: https://doi.org/10.1016/j.otsr.2014.04.002. Epub 2014 Jul 30.
[10] Saragaglia D, Picard F, Refaie R. Navigation of the tibial plateau alone appears to be sufficient in computer-assisted unicompartmental knee arthroplasty. Int Orthop 2012;36:2479–83. Available from: https://doi.org/10.1007/s00264-012-1679-2.
[11] Saragaglia D, Marques Da Silva B, Dijoux P, Cognault J, Gaillot J, Pailhé R. Computerised navigation of unicondylar knee prostheses: from primary implantation to revision to total knee arthroplasty. Int Orthop 2017;41:293–9. Available from: https://doi.org/10.1007/s00264-016-3293-1. Epub 2016 Sep 28.
[12] Hernigou P, Deschamps G. Alignment influences wear in the knee after medial unicompartmental arthroplasty. Clin Orthop Relat Res 2004;423:161–5.
[13] Saragaglia D, Sigwalt L, Gaillot J, Morin V, Rubens-Duval B, Pailhé R. Results with eight and a half years average follow-up on two hundred and eight e-Motion FP knee prostheses, fitted using computer navigation for knee osteoarthritis in patients with over ten degrees genu varum. Int Orthop 2017. Available from: https://doi.org/10.1007/s00264-017-3618-8.
[14] Saragaglia D, Seurat O, Pailhé R, Rubens Duval B. Computer-assisted TKA: long-term outcomes at a minimum follow-up of 10 years of 129 e-motion FP mobile bearing prostheses. EPiC Ser Health Sci 2017;1:322–4.
[15] Saragaglia D, Cognault J, Refaie R, Rubens-Duval B, Mader R, Rouchy RC, et al. Computer navigation for revision of unicompartmental knee replacements to total knee replacements: the results of a case-control study of forty six knees comparing computer navigated and conventional surgery. Int Orthop 2015;39:1779–84. Available from: https://doi.org/10.1007/s00264-015-2838-z. Epub 2015 Jul 2.
[16] Stindel E, Briard JL, Merloz P, Plaweski S, Dubrana F, Lefevre C, et al. Bone morphing: 3D morphological data for total knee arthroplasty. Comput Aided Surg 2002;7:156–68.
[17] Howell SM, Papadopoulos S, Kuznik K, Ghaly LR, Hull ML. Does varus alignment adversely affect implant survival and function six years after kinematically aligned total knee arthroplasty? Int Orthop 2015;39:2117–24. Available from: https://doi.org/10.1007/s00264-015-2743-5.
[18] Kalairajah Y, Cossey AJ, Verrall GM, Ludbrook G, Spriggins AJ. Are systemic emboli reduced in computer-assisted knee surgery? A prospective, randomised, clinical trial. J Bone Joint Surg Br 2006;88:198–202.
[19] McConnell J, Dillon J, Kinninmonth A, Sarungi M, Picard F. Blood loss following total knee replacement is reduced when using computer-assisted versus standard methods. Acta Orthop Belg 2012;78:75–8.
[20] Jenny J-Y, Picard F. Learning navigation - learning with navigation. A review. SICOT J 2017;3:39. Available from: https://doi.org/10.1051/sicotj/2017025.
[21] Mullaji A, Shetty GM. Computer-assisted total knee arthroplasty for arthritis with extra-articular deformity. J Arthroplasty 2009;24:1164–9.
[22] Kim KK, Heo YM, Won YY, Lee WS. Navigation-assisted total knee arthroplasty for the knee retaining femoral intramedullary nail, and distal femoral plate and screws. Clin Orthop Surg 2011;3:77–80. Available from: https://doi.org/10.4055/cios.2011.3.1.77.
[23] Tigani D, Masetti G, Sabbioni G, Ben Ayad R, Filanti M, Fosco M. Computer-assisted surgery as indication of choice: total knee arthroplasty in case of retained hardware or extra-articular deformity. Int Orthop 2012;36:1379–85. Available from: https://doi.org/10.1007/s00264-011-1476-3.

26 NAVIO Surgical System—Handheld Robotics

Riddhit Mitra and Branislav Jaramaz
Smith & Nephew, Pittsburgh, Pennsylvania, United States

ABSTRACT

Robotics-assisted arthroplasty has gained increasing popularity as orthopedic surgeons aim to increase the accuracy and precision of implant positioning. With advances in computer-generated anatomy data through image-free data collection, surgeons have the ability to better predict and influence surgical outcomes. Based on planned implant position and soft-tissue considerations, robotics-assisted systems can provide surgeons with planning tools to make informed decisions for knee replacement specific to the needs of the patient, and with intelligent tools to implement those decisions. This is achieved by customizing the surgical cuts rather than prosthesis designs, while staying within clinically acceptable boundaries. Postoperative alignment of knee implants has been shown to influence patient outcomes in terms of implant longevity and functionality. The use of robotics in orthopedic surgery has helped to minimize human error, in turn reducing implant wear and theoretically leading to longer prosthesis survivorship. This chapter provides a framework for the surgical techniques for using the NAVIO surgical system to perform partial and total knee arthroplasty (TKA). The NAVIO system supports unicompartmental knee arthroplasty, patellofemoral knee arthroplasty, and bicruciate-retaining, cruciate-retaining, or bicruciate-sacrificing TKA.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00026-8 © 2020 Elsevier Inc. All rights reserved.


26.1 Introduction

Although conventional knee arthroplasty is considered a successful intervention for end-stage osteoarthritis, some patients still experience reduced functionality and require revision procedures, at rates ranging from 4.9% to 19.6% over a 10-year period for partial and total knee replacement [1,2]. Similarly, in the space of unicondylar or partial knee replacement, which is considered a relevant option for younger, active patients with early stages of arthritis, successful results and durability of knee arthroplasty are affected by a variety of factors, including appropriate surgical indications, implant design, component alignment and fixation, and soft-tissue balance [3,4]. Accurate alignment of the tibial component using conventional techniques has been difficult to achieve [5–7]. Outliers beyond 2 degrees of the desired alignment may occur in as many as 40%–60% of cases using conventional methods, and the range of component alignment varies considerably, even in the hands of skilled knee surgeons [8]. Similarly, for total knee replacement, outliers beyond 2 degrees of the desired alignment may occur in as many as 15% of cases in the coronal plane, with up to 40% unsatisfactory alignment in the sagittal plane [9]. Since the 1970s, implant and instrument designs have evolved along two principal branches. Anatomic implant designs, which attempt to reconstruct the patient's anatomy and preserve the cruciate ligaments, are limited by expensive manufacturing and challenging execution. Functional designs, on the other hand, with a focus on measured resection and gap balancing techniques, attempt to restore joint function by optimizing the use of a serially produced implant design. While implant designs are continually evolving, there is still room for improvement in achieving repeatable functional outcomes and natural proprioception.
Modern implants tend to blend the two principles, combining anatomic designs with a functional approach to resection for improved functional outcomes. However, there remains room for optimization by improving the precision of alignment and ligament balancing [10]. While navigation and patient-specific blocks have aimed to help achieve accurate alignment, studies have shown that 15%–20% of cases may fall outside the range of ± 3 degrees of the desired outcome [11]. Robotics-assisted systems have challenged traditional instruments as a method to decrease mechanical alignment outliers, optimize soft-tissue balancing, and restore normal knee kinematics [12–15]. Robotics-assisted surgery has been available for nearly 25 years. Current robotics-assisted systems use various navigation principles augmented with robotic bone preparation technology, allowing the surgeon to conduct a UKA (unicompartmental knee arthroplasty) or TKA (total knee arthroplasty) based on preoperative 3D images or image-free intraoperative planning [16]. While most current robotics-assisted orthopedic cutting systems are autonomous robots or haptic arm-based instruments, NAVIO (Smith & Nephew, Pittsburgh, PA, United States) is a next-generation robotics-assisted system that uses handheld miniaturized robotics-assisted instrumentation that is freely moved by the surgeon but restricts bone cutting to the confines of the designated resection area of the patient's bone, and to the proper depth and orientation (Fig. 26.1).

26.2 The NAVIO surgical workflow

The following section provides an overview of the recommended techniques for using the NAVIO surgical system technology in clinical applications. The NAVIO surgical system is indicated for use in surgical knee procedures in which the use of stereotactic surgery may be appropriate, and where reference to rigid anatomical bony structures can be determined. The NAVIO technique description for knee replacement surgery is divided into the following steps:

1. Patient and system setup: This section covers the details of patient and system setup needed for a handheld robotics-assisted surgery.
2. Registration: The intraoperative registration steps to define anatomical and soft-tissue information for the system are explained in this section.
3. Prosthesis planning: This section captures the ability of the system to plan the implant on the defined patient's anatomy based on registration.
4. Robotics-assisted bone cutting: The modes of robotic-assisted control and execution are defined in this section.
5. Trial reduction: Postbone preparation, implant trialing, and ligament balancing assessment are highlighted in this section.
6. Cement and close: This section defines the specifics of cementation and closing.

NAVIO Surgical System—Handheld Robotics Chapter | 26


26.2.1 Patient and system setup

The NAVIO computer system is positioned close to the operating table to allow the surgeon to easily interact with the system, while maintaining complete control of the procedure. In addition to the surgeon-controlled touch screen, the surgeon can navigate through the different stages of operation using foot-pedal controls. During setup, the NAVIO handheld robotic instrument is assembled and configured according to the surgeon’s preference with regard to the robotic mode of control. After incision, all peripheral osteophytes are removed, to reliably assess the patient’s anatomy as well as joint stability. For cruciate retaining (CR) or posterior stabilized designs, it is recommended to release the anterior and posterior cruciate ligament, respectively, depending on the design of the prosthesis component to be implanted, and the disease condition of the patient’s knee (Figs. 26.2 and 26.3).

26.2.1.1 Bone tracking hardware

For attachment of tracking arrays to the femur and tibia, NAVIO utilizes a two-pin bicortical fixation system. To place the tibia tracker, the first bone screw is percutaneously placed inferior to the tibial tubercle on the medial side of the tibial crest, to avoid stress risers in the tibia. The bone screw should be drilled slowly into the tibia, perpendicular to the bony surface, stopping once the opposing cortex has been engaged. A tissue protector is used to mark the position of the second bone screw, inferior to the initial placement, and the second screw is engaged with the bone. Similarly, for the femur, the bone screws are percutaneously placed about four to five finger breadths superior to the patella. The femur and tibia tracker frames are clamped to these screws and oriented with the reflective markers toward the camera,

26. NAVIO Surgical System

FIGURE 26.1 Handheld robotic-assisted tool and the NAVIO surgical system.


FIGURE 26.2 Typical OR setups. OR, Operating room.

FIGURE 26.3 Bicortical engagement of bone screws for rigid fixation of tracking arrays.

to maximize the operative field of view. A camera orientation stage in the system guides the surgeon to the best position of the camera and trackers, allowing manipulation of the patient's leg through the range of motion (ROM) and ensuring that the trackers stay visible throughout the surgical procedure. Small checkpoint pins are also placed in the femur and tibia. They are used throughout the procedure to determine whether the bone tracking frames have moved, ensuring system integrity throughout the procedure (Fig. 26.4).
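The checkpoint logic described above amounts to re-probing a fixed bony point and comparing its position, expressed in the bone tracker's coordinate frame, against the value stored at setup. A simplified sketch follows; the drift threshold and function names are illustrative assumptions, not NAVIO's actual specification.

```python
import math

DRIFT_THRESHOLD_MM = 0.5  # illustrative tolerance, not the system's actual value

def checkpoint_drift(stored_mm, reprobed_mm):
    """Euclidean distance between the stored and re-probed checkpoint
    positions, both expressed in the bone tracker's local frame (mm)."""
    return math.dist(stored_mm, reprobed_mm)

def tracker_moved(stored_mm, reprobed_mm, threshold=DRIFT_THRESHOLD_MM):
    # If the tracker frame shifted on the bone, the same physical point
    # re-probed later maps to a different local coordinate.
    return checkpoint_drift(stored_mm, reprobed_mm) > threshold

stored = (12.0, -4.5, 30.2)
print(tracker_moved(stored, (12.1, -4.4, 30.3)))  # small probing noise: False
print(tracker_moved(stored, (14.0, -4.5, 30.2)))  # 2 mm shift: True
```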

26.2.2 Registration—image-free technology

The NAVIO surgical system does not require the use of any preoperative data, and the entire anatomy registration and planning workflow is performed intraoperatively (Fig. 26.5). NAVIO relies on image-free localization of anatomic landmarks and surface “painting” to construct a virtual representation of the patient’s anatomy. This step ensures that the true surface of the patient anatomy is captured, as the system registers the articulating surfaces and defects, for best assessment of prosthesis placement during planning. The first step in registration is to use the point probe to identify the most prominent points on the medial and lateral malleoli in order to register the ankle center. The next step, hip center calculation, follows the femoral tracker array through circular movements of the hip and calculates the center of rotation. The femur should be slowly pivoted at the hip until all sectors of the graphic on the screen have turned green. Then the leg is placed in full extension, applying a slight compressive force on the tibia, to calculate the patient’s varus/valgus deformity and capture any existing flexion contracture. This collection is followed by the preoperative knee motion collection step, which allows the user to record normal flexion motion. The leg is moved through a normal ROM to maximum flexion, while keeping the knee joint in contact, making sure to collect all possible sectors. Then, constant and consistent varus and/or valgus stress is applied


FIGURE 26.4 Tibial tracker attachment on the patient’s bone, with the reflective markers facing the camera.


FIGURE 26.5 Hip center collection of the patient’s anatomy to determine the center of the femoral head.

to the collateral ligaments to collect soft-tissue laxity data throughout the patient’s knee flexion. The system graphically depicts medial and/or lateral compartment space, depending on the surgery (uni or total). These data are used to identify how much laxity exists in the current soft-tissue structure. This information is unique to the patient and can be used during the planning stage. This allows the surgeon to plan for the best approach when it comes to bony resections and soft-tissue releases, with virtual implementation of anatomic or functional principles of bone preparation, before making any cuts (Figs. 26.6 and 26.7). With a focus on rigid anatomy next, the femoral condyle is registered using landmark points. Using the point probe, the surgeon collects the knee center that with the hip center determines the mechanical axis of the femur. Additional landmark points in the uni and TKA applications allow the surgeon to inform the NAVIO system of basic anatomy, with which the system can provide a starting selection of implant size and position. At this stage, the user further


FIGURE 26.6 Patient ligament laxity collection, to assess medial and lateral joint space. This stage can be accessed before making cuts, as well as after bone cuts are made.

FIGURE 26.7 Image-free patient femur anatomy mapping.

digitizes the femoral condyle by moving the probe over the entire surface while holding down the foot pedal. In the total knee application, the rotational reference of the femur or tibia can additionally be fine-tuned from the previously defined rotation axes (posterior condylar, transepicondylar, or anteroposterior axis) in the free surface collection stage, giving the surgeon the flexibility to make the most informed decision based on the specific patient anatomy. Following successful femoral registration, tibial landmarks are collected, including the knee center, which, in conjunction with the ankle center, defines the mechanical axis. The last registration step, tibial condyle surface mapping, offers visualization of the mapped surface and previously collected tibial mechanical and landmark points. Similar to the femur, the rotational axis of the tibia can be fine-tuned by the surgeon at this stage, customizing it to the patient's anatomy (Figs. 26.8 and 26.9).
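Hip center calculation from a pivoting motion is commonly implemented as a least-squares sphere fit to the tracked positions, and the mechanical axes then follow from the hip, knee, and ankle centers. The sketch below shows one standard algebraic formulation of these generic methods; it is not necessarily NAVIO's implementation, and the synthetic data (center location, 450 mm lever arm) are illustrative assumptions.

```python
import numpy as np

def fit_sphere_center(points):
    """Least-squares sphere fit: each point p satisfies
    |p|^2 = 2 c.p + (r^2 - |c|^2), which is linear in the unknowns (c, k).
    Returns the fitted center c."""
    p = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]  # center; sol[3] = r^2 - |c|^2

def hka_angle(hip, knee, ankle):
    """HKA angle (degrees): angle at the knee between the femoral
    mechanical axis (knee -> hip) and the tibial one (knee -> ankle)."""
    u = np.asarray(hip, float) - np.asarray(knee, float)
    v = np.asarray(ankle, float) - np.asarray(knee, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Synthetic pivot data: femoral tracker positions on a sphere around a
# hip center at (10, 20, 30) with a 450 mm lever arm (illustrative values).
rng = np.random.default_rng(0)
center_true = np.array([10.0, 20.0, 30.0])
theta = rng.uniform(0, 0.6, 200)  # small cone of pivoting motion
phi = rng.uniform(0, 2 * np.pi, 200)
pts = center_true + 450.0 * np.c_[np.sin(theta) * np.cos(phi),
                                  np.sin(theta) * np.sin(phi),
                                  np.cos(theta)]
print(np.allclose(fit_sphere_center(pts), center_true, atol=0.1))  # True
print(hka_angle([0, 450, 0], [0, 0, 0], [0, -400, 0]))  # collinear: 180.0
```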

26.2.3 Prosthesis planning

The implant planning stage provides the user a virtual reconstruction of the patient’s femoral and tibial anatomy, soft-tissue ligament tension, and joint balance. There are three stages during planning: (1) initial sizing and placement, (2) gap planning, and (3) cut guide placement. The surgeon has multiple options of visualizing the anatomy at this stage, with the choice of solid bone and implant graphics, or a “virtual” CT slice mode, called the cross-section view.


FIGURE 26.8 Fine tuning of the rotational axis by taking into consideration transepicondylar axis, Whiteside’s line as well as posterior condylar axis, specific to the patient’s knee.

FIGURE 26.9 Image-free patient tibia anatomy mapping.

With coronal, transverse, and sagittal screens, the surgeon can assess the position of the implant components on the bone in all three dimensions. Using the cross-sections, the surgeon first confirms that the component size provides adequate coverage on the digitized femur bone surface. The transition of the implant component on the anatomy is verified and adjusted in the sagittal view screen, confirming good anteroposterior implant component fit and transition from bone to implant on the terminal edges. In order to assess size coverage, implant anterior transition, and the bone resection plan, the user can toggle on the virtual cut view mode to visualize the implant component on the bone surface. Rotation of the component on the anatomy is confirmed in the transverse view, comparing diseased bone in the virtual reconstruction of the anatomy with the build-back due to the position of the implant. The coronal view is used to assess implant alignment and distal resection within the bounds of surgical principles when compared to the patient's mechanical axis (Figs. 26.10 and 26.11).


FIGURE 26.10 Initial femur implant planning with consideration of sizing and anatomical fit on the patient’s bone.

FIGURE 26.11 Initial tibia implant planning with consideration of sizing and anatomical fit on the patient’s bone. Image shows bicruciate retaining knee implant.

For the tibial component, the NAVIO software will attempt to provide a starting size and initial placement utilizing the landmarks and tibia “paint” collection. With similar view screens and options, the surgeon can assess the depth of resection and alignment, component rotation, and posterior slope with respect to the mechanical axis. The tibial component defaults to the thinnest poly insert, but thicker inserts can be selected by changing the poly component during the planning stages.


The second stage of implant planning allows the user to dial in soft-tissue laxity for the patient in extension and flexion, based on the soft-tissue input from the prior ligament balancing collection. There are four interactive views for translating and rotating the components with respect to the patient's virtualized joint. The goal of this stage is to optimize extension and flexion laxity balance with no overlap (or tightness) in either the medial or lateral condyle. The surgeon can choose to perform anterior cruciate ligament (ACL) release for a CR procedure, or ACL and posterior cruciate ligament release for a bicruciate sacrificing procedure, and collateral ligament release, then recollect laxity information by clicking the "Recollect Joint Laxity" button in order to reflect what the joint space will actually look like after the bone cuts are made. The user can manipulate and fine-tune the position and orientation of the implant components in this stage such that the resulting gaps are "balanced" in both extension and flexion. While the extension gaps are affected by changing the distal femur resection and varus/valgus cut adjustments within the bounds of acceptable surgical principles, balancing of the flexion gap in the medial and lateral compartments can be performed by rotating the femur component internally or externally. Adjustments to femoral component rotation should be carefully considered relative to prior parameters such as anterior notching for TKA, and implant fit on the condyle for a UKA. Adjustments to femoral flexion should also be weighed against prior considerations regarding anterior fit and bone transition (Figs. 26.12 and 26.13).
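The cause-and-effect relationships described here (distal resection drives the extension gap; femoral rotation trades the medial against the lateral flexion gap) can be captured in a toy planning model. The structure and numbers below are purely illustrative assumptions, not NAVIO's planning math.

```python
import math
from dataclasses import dataclass

@dataclass
class GapPlan:
    """Toy model: medial/lateral gaps (mm) in extension and flexion."""
    ext_medial: float
    ext_lateral: float
    flex_medial: float
    flex_lateral: float
    condyle_halfwidth: float = 25.0  # assumed mm from component center to each condyle

    def add_distal_resection(self, mm):
        # Resecting more distal femur opens both extension gaps equally.
        self.ext_medial += mm
        self.ext_lateral += mm

    def rotate_femur_external(self, degrees):
        # External rotation of the femoral component resects more posterior
        # medial condyle, opening the medial flexion gap and closing the
        # lateral one (small-angle lever-arm approximation).
        delta = self.condyle_halfwidth * math.tan(math.radians(degrees))
        self.flex_medial += delta
        self.flex_lateral -= delta

plan = GapPlan(ext_medial=8.0, ext_lateral=9.0, flex_medial=7.0, flex_lateral=9.0)
plan.add_distal_resection(1.0)   # extension gaps become 9.0 / 10.0
plan.rotate_femur_external(2.0)  # shifts ~0.87 mm from lateral to medial flexion gap
print(round(plan.flex_medial, 2), round(plan.flex_lateral, 2))  # 7.87 8.13
```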

FIGURE 26.12 Ligament balance planning with the virtual components in place, before the bone cuts are performed. This stage can also be accessed after ligament releases or bone cuts for fine-tuning the implant placement plan. The image shows a bicruciate sacrificing implant.

FIGURE 26.13 Full range of motion ligament balance planning with the virtual components in place, before the bone cuts are performed for a UKA surgery. UKA, Unicompartmental knee arthroplasty.


FIGURE 26.14 Full range of motion ligament balance planning with the virtual components in place, before the bone cuts are performed. This stage can also be accessed after ligament releases or bone cuts for fine-tuning implant placement plan. The image shows bicruciate retaining implant.

The NAVIO system also shows the full ROM laxity based on the planned implant positions, to assess mid-flexion stability and joint space. This stage therefore gives the surgeon the ability to virtually construct the patient's final ligament balance based on bone resections as well as soft-tissue laxity definitions. This allows the bone cuts to be customized for the patient, based on the prosthesis design, the soft-tissue balance, and the amount of bony resection, for the surgical output desired for the patient. Lastly, during the cut guide placement stage, the surgeon can fine-tune the position of the cut guides on the specific patient anatomy. The femur cut guide is placed to ensure that all locking features on the femur cut guide assembly have purchase into the bone surface; similarly, the locking features on the tibia cut guide assembly are confirmed to have purchase into the bone surface (Fig. 26.14).

26.2.4 Robotic-assisted bone cutting

In the area of orthopedic robotics, robotic systems can be classified as active, semiactive, or passive. Active systems, such as TSolution One (THINK Surgical, Fremont, CA), are completely autonomous: the cutting tool is free from human control and the robot completes the required bone cuts according to the plan. In the semiactive segment, the surgeon and the robotic tool share control of the operation: the robotics provides active assistance and guidance during fine manipulations, while the surgeon remains in overall control of the procedure. Examples of such systems are NAVIO (Smith and Nephew, Pittsburgh, PA) and Mako (Stryker, Kalamazoo, MI). Passive systems guide surgeons with assistive information during the procedure (navigation systems) or may provide robotically controlled positioning tools (such as OMNIBotics, OMNIlife science, Inc., Raynham, MA), but do not perform any action themselves. Semiactive robotics prevents the surgeon from cutting bone outside the areas designated by the surgical plan. Haptic systems attach the cutting tool to the end of a robotic arm, which then prevents the surgeon from moving outside the designated space. NAVIO employs a different principle: the tool can be moved freely in space, but the cutting action is disabled when the tool is outside the designated space (Figs. 26.15 and 26.16).

NAVIO Surgical System—Handheld Robotics Chapter | 26


FIGURE 26.16 NAVIO surgical screen depicting the locking features to be created on the patient’s bone with a color map.


FIGURE 26.15 NAVIO handheld robotics using exposure control, which retracts the bur into a safety guard once the planned depth and orientation of the cut are reached on the anatomy.


NAVIO supports two modes of robotic control. With the spatial boundaries of bone preparation defined during planning, the handheld robotics disables bone cutting whenever the cutting end moves outside the planned boundaries. In exposure control, the system prevents cutting by retracting the operational bur within a physical safety guard, preventing bone from being overcut. Alternatively, the bur can be used in speed control mode, where, as the bur approaches the perimeter of the cutting limits, the speed of the bur slows down, ultimately stopping as the final required depth and orientation of the cut planes are reached. For total knee replacement, the NAVIO system supports a hybrid approach, using both burs and saws for complete bone preparation. For unicompartmental knee replacement, the system uses the handheld robotic-assisted bur for the entire bone preparation. The NAVIO system also supports bicruciate-retaining TKA implants, where the femur can be prepared using the hybrid approach while the tibia is completed using the robotics-assisted handpiece. In total knee replacement, the robotic speed control mode is used to prepare locking features on the patient's bone, to secure the femur and tibia guides in positions consistent with the surgical plan: in accordance with the implant placement plan, the robotic-assisted handpiece creates features on the patient's bones that lock the cutting guides in place. The role of robotics in total knee replacement is thus to implement the implant placement plan through precise placement of rigid, fixed cutting guides on the bone. When preparing the locking features on the bone, the handheld instrument ensures that the alignment and depth of feature preparation are controlled with the bur. This allows a single, accurate positional fixation of the cut guide on the bone, which then controls the position of the final cuts for the prosthesis (Fig. 26.17).
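A minimal sketch of such a speed-control law, assuming a simple linear ramp-down inside an invented 2 mm slow-down band (the actual NAVIO control law is not published):

```python
# Hypothetical sketch of a speed-control law for a handheld bur: full speed
# far from the planned cut boundary, a linear ramp-down inside a slow-down
# band, and a hard stop at (or past) the boundary. Distances in mm; the
# 2 mm band is an assumption for illustration only.

def bur_speed_fraction(distance_to_boundary, slow_band=2.0):
    if distance_to_boundary <= 0.0:          # at or past the planned limit
        return 0.0                           # cutting disabled
    if distance_to_boundary >= slow_band:
        return 1.0                           # full speed, well inside the volume
    return distance_to_boundary / slow_band  # proportional ramp-down
```

Exposure control would instead map the same distance signal to the bur's mechanical extension beyond the guard, retracting it fully at the boundary.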
The recommended saw blade thickness for the implant system is 1.35 mm. Utilizing the crosshair visualization and tool's-eye view modes, the handpiece tool is aligned to the cut target and the bone preparation is executed under speed control. As the bur removes bone, the system provides live visual feedback by removing the color overlay on the virtual bone model until the target surface is reached. The bur automatically shuts off once the depth of preparation has been reached, or if the bur goes out of alignment. Once the locking features are prepared, the femoral distal cut guide is placed on the anterior features that have been prepared on the bone and its position is "locked" using the stabilized block. The distal cut guide assembly is then secured on the bone surface using 1/8-in. speed pins. Before engaging the saw, a virtual confirmation tool is used to assess the position of the cut guide against the NAVIO plan. Using a recommended saw, the distal cut is executed on the patient's bone. Based on the implant size plan, a NAVIO drill guide adapter is attached to the distal cut guide. This adapter ensures that the rotation of the anteroposterior (AP) cut block is set according to the implant plan made in NAVIO. Based on the femur implant size chosen in the plan, holes are drilled through the drill guide adapter. The appropriate AP cut guide is inserted into the drilled holes as per the planned implant size and pinned into position. The virtual confirmation tool can be used to ensure that the AP cut guide is placed in its intended position (Figs. 26.18 and 26.19). Similarly for the tibia, the robotics-assisted cutting tool is used to prepare features that lock the tibia cut guide onto the bone surface. Under speed control in the cut zone, the handpiece is aligned utilizing the crosshair and tool's-eye view.
While engaging the drill foot pedal, the bur is slowly plunged into the bone surface until the system has removed all of the color overlay and the target surface is reached. Once the burred fixation feature depths are reached, the bur automatically shuts off, as in the femur preparation. The tibia cut guide is placed on the prepared bone features and its position is confirmed with the virtual confirmation tool against the planned prosthesis position. The tibia cut guide is secured on the bone surface using 1/8-in.-diameter headless speed pins. The saw is then engaged to complete the final tibial preparation. Once the cut is completed, the virtual confirmation tool can once again be used to gauge the accuracy of the saw preparation. This tool can be used both before and after sawing, to visualize the actual cut prepared. If any errors occur during saw cut execution and bone is left behind, the handheld robotics can be used: with the bur in exposure or speed control mode, the cuts can be fine-tuned to achieve accurate execution of the prosthesis placement plan. For unicompartmental resections, the handheld robotics-assisted tool is used to completely prepare both the femur and tibia resections. For bicruciate-retaining tibia implants, the handheld tool is similarly used to prepare bone in accordance with the plan (Fig. 26.20).

26.2.5 Trial reduction

After completing all bone cuts and adjustments to the final surfaces, the incision is cleaned and dried thoroughly. With appropriate-sized trial components, the leg is taken through its ROM, which displays the achieved balance of the knee,


FIGURE 26.17 Hybrid total knee execution where handheld robotics prepares “locking” features on the patient anatomy, as per the surgical plan, and utilizes “locking” reusable cut guides for saw preparation of bone cuts.

FIGURE 26.18 Confirmation of saw cut before execution, in accordance with the prosthesis plan.


FIGURE 26.19 Hybrid total knee tibia execution where handheld robotics prepares “locking” features on patient anatomy, as per surgical plan, and utilizes “locking” reusable cut guides for saw preparation of bone cuts.

FIGURE 26.20 Robotic-assisted burring to fine tune saw cuts, in order to achieve cut accuracy for final implant placement.

against the virtual plan created by the system before the bone resections were made. Holding the leg in extension allows the surgeon to confirm the achieved long-leg mechanical alignment. A postoperative stressed-gap assessment screen allows the user to assess the postoperative gap throughout flexion in both the medial and lateral compartments. After the dynamic ROM test, final preparation of the fixation features for implantation of the final components is done using the appropriate instruments and tools. The NAVIO planning and bone removal stages can easily be accessed from this step if any adjustment is needed. The robotics-assisted bur can be utilized in exposure or speed mode for recutting. The component plan can also be flexibly adjusted based on the trial evaluation, and the new bone cuts can be prepared robotically with the handheld bur to achieve the desired balance in the joint. When the final results are acceptable per the dynamic ROM test, the NAVIO system is shut down and cleaned for use in the next surgical case.

26.2.6 Cement and close

To better anchor the cement, it is recommended to prepare additional anchor holes in both the tibia and femur. First, the bone should be prepared with pulse lavage and dried. Then, a thin layer of cement is applied to the inner surfaces of the components and to the prepared bone surfaces. With the knee flexed, the tibial and femoral components are inserted and seated using the appropriate tools. Excess cement is carefully removed after implant placement. Finally, the bearing component is inserted, and the joint incision is closed.


26.3 Conclusion

The NAVIO surgical system represents the next generation of robotics-assisted technology in orthopedics. It combines the benefits of CT-free anatomic localization and planning with the flexibility of handheld robotics. This approach optimizes the use of the surgeon's skills while maintaining the precision and accuracy required for knee orthopedics. For an optimal outcome, knee arthroplasty procedures require that soft-tissue considerations be taken into account during planning. Systems that rely on preoperative CT imaging often ignore this information in the interest of easier preoperative planning, or allow the plan to be modified based on intraoperatively collected soft-tissue information. In addition, as CT scans do not image cartilage, the articulating surface cannot be accurately assessed from a CT image. NAVIO collects the bone landmarks and articulating surface information, as well as the ligament laxity and ROM information, at the time of surgery, thus creating a reliable framework for planning. Compared to haptic technologies or active robotics, the concept of minimizing the footprint with handheld smart instruments, combined with intraoperative CT-free planning and dynamic gap balancing, establishes the NAVIO system as an ergonomic and efficient robotics-assisted solution for orthopedic reconstruction.


27

Development of an Active Soft-Tissue Balancing System for Robotic-Assisted Total Knee Arthroplasty

Sami Shalhoub¹, Christopher Plaskos¹, Alex Todorov², Jeffrey M. Lawrence³ and John M. Keggi⁴,⁵

¹OMNI, Raynham, MA, United States; ²Cobot, United States; ³Gundersen Health System, Viroqua, WI, United States; ⁴Orthopaedics New England, Middlebury, CT, United States; ⁵Connecticut Joint Replacement Institute, Hartford, CT, United States

ABSTRACT Total knee arthroplasty is an ideal surgical application for robotic assistance because accurate soft-tissue balance and implant positioning are critical for achieving superior clinical outcomes. OMNI has developed and commercialized the world’s first robotic-assisted system that integrates robotic ligament tensioning with bone resection planning to predict and achieve a well-balanced and well-aligned knee replacement. A novel robotic ligament tensioning tool, called the BalanceBot, is inserted into the knee to quantify the soft-tissue envelope throughout the range of knee motion. Predictive Balancing then allows the surgeon to predict postoperative ligament balance based on a virtual 3D implant plan before making bone cuts. The bone anatomy is reconstructed intraoperatively based on image-free 3D BoneMorphing technology, which eliminates the need for preoperative 3D imaging modalities and their associated cost, time, and radiation exposure burdens. A miniature semiactive bone-mounted robotic cutting guide called the OMNIBot is used to guide the resections according to the surgeon’s plan. The BalanceBot is then used to assess the final soft-tissue tension and balance achieved, closing the loop on the surgeon’s plan and minimizing the amount of ligament releases required to achieve a balanced knee. In this chapter we describe the evolution of the development of the system, with a specific focus on the initial development, validation, and clinical application of the BalanceBot. Early clinical results demonstrate a high level of patient satisfaction and warrant further clinical use and studies to evaluate the impact on longer term patient outcomes. Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00027-X © 2020 Elsevier Inc. All rights reserved.


27.1 Introduction

Total knee arthroplasty (TKA) is one of the most common orthopedic surgical procedures performed today, with over 600,000 procedures performed annually in the United States alone. Despite the overall success of the procedure for relieving knee pain and restoring knee function, up to 20% of patients have been reported to be unsatisfied with the results of their surgery [1]. Several factors have been associated with patient satisfaction, including gender, age, preoperative pain and joint function levels, expectations, and psychological factors [2–4]. However, among the strongest factors associated with satisfaction are the degree of improvement in knee function and pain relief following TKA [5]. Therefore, technologies that further improve pain relief and knee joint function in TKA should in theory significantly improve patient satisfaction. Additionally, next to infection, soft-tissue tension in the form of excessive laxity or stiffness accounts for the most frequent indication for early revision after TKA [6]. Better attention to the soft-tissue envelope may therefore also play a significant role in patient satisfaction. Current challenges with conventional TKA include the inability to quantify, plan, and consistently achieve a well-balanced knee that is stable throughout the range of motion. Traditional methods rely on making bone resections using fixed, manually positioned cutting jigs that reference the bone anatomy with limited accuracy. After the bone cuts are made, trial implant components are inserted and the balance of the knee is assessed manually, primarily using the surgeon's feel and experience and at times with the aid of manual devices such as laminar spreaders, static spacer blocks, or mechanical distractors. Ligament releases are then performed where the knee feels tight in flexion or in extension, by stabbing or "pie-crusting" ligaments with a scalpel or by releasing them from where they attach to the bone.
This is known to be a highly subjective and variable process, however, and it depends on the experience and individual preference of the surgeon [7]. Moreover, performing unnecessary ligament releases may compromise proprioception and the afferent nerve fibers in the tissue, which are thought to help regulate knee joint stability during daily activities through muscle activation. Injuring ligaments by performing releases may therefore affect how natural the joint feels to the patient, as well as knee sensation and ultimately patient satisfaction [8]. Moreover, current computer and robotic technologies do not integrate soft-tissue tension information in a reproducible and quantitative manner when planning knee implant component placement. Today's robotic systems rely on surgeons manually applying stress to the limb to distract the joint to assess knee gaps and ligament tension [9], which is not reproducible across surgeons or over time. Other technologies use passive sensors to measure loads in the knee once the implants are in place, to correct for imbalance after the fact [10]. However, in this scenario surgeons must release soft tissues or recut bone to achieve balance, since the soft tissues were not considered during the implant planning phase. Thus most manual and robotic systems today rely on surgeon feel and experience to assess and achieve soft-tissue tensioning and balance. The OMNI BalanceBot system allows the surgeon to set a predefined tension on the soft tissues in tension control mode, and to plan the knee implant alignment and predict knee soft-tissue balance throughout the range of motion according to the measured tensions and the patient's individual anatomy. Using a gap balancing technique after navigating the tibial cut allows the surgeon to perform this technique prior to making any femoral cuts.
This minimizes the amount of soft-tissue releases required, which is believed to improve patient satisfaction and knee function and reduce postoperative knee stiffness and manipulations under anesthesia [11]. In this chapter we review the development and technical features of the OMNIBotics system, including a brief overview of BoneMorphing and the OMNIBot (previously called the iBlock) miniature robotic cutting guide. The BalanceBot (previously called the Active Spacer) robotic ligament tensioning tool is then discussed in detail, including the initial prototype design and design requirements, proof of concept, and product design engineering for commercialization. The verification and validation activities performed and regulatory clearance obtained are discussed, followed by the surgical workflow and results of the cadaver labs and early clinical results. Finally, a perspective on the technology and its contribution to solving the clinical problem of balancing in TKA is provided.

27.2 The OMNIBotics system

27.2.1 System overview

The OMNIBotics system includes the BalanceBot robotic ligament balancing tool, the OMNIBot miniature robotic bone-cutting guide, and BoneMorphing 3D statistical shape modeling for anatomic modeling and implant planning (Fig. 27.1). The system employs 3D optical tracking technology to track the position of the patient's femur and tibia in space, as well as the position of surgical instruments and robotic cutting guides relative to the patient's bones. The clinical goals of the OMNIBotics system are to aid the surgeon in achieving a well-aligned and well-balanced knee replacement, with optimal fit of the implants to the patient's individual bone morphology and to the patient's unique ligament laxity profile, so as to minimize the need for performing ligament releases. This reduces the trauma inflicted on the knee soft tissues and the risk of overreleasing ligaments, which can create iatrogenic injury and instability.

27.2.2 BoneMorphing/shape modeling

The OMNIBotics system uses a patented bone model registration process in which initial generic shape models that are not specific to the patient are deformed to match a cloud of points acquired intraoperatively, directly on the patient's bone surfaces, with the system pointer (Fig. 27.2A and B). The process starts by first acquiring the hip, ankle, and knee centers to provide an initial rough alignment of the generic models as well as the patient's preoperative mechanical axis. Clouds of points are then acquired directly on the patient's bone surface using the system's on-screen prompts and help images (Fig. 27.2A). Points are acquired in accessible zones that are key to computing the optimal fit of the femoral and tibial implants, including the distal and posterior femoral condylar regions, the medial and lateral sides, and the anterior femoral cortex. The initial generic model is then deformed to the acquired points using a proprietary octree-spline deformation algorithm that is highly accurate and robust to a wide array of anatomical variations [12]. The final model can then be checked in all regions in real time and should be accurate to within ±1 mm in all digitized areas (Fig. 27.2B). The point cloud acquisition process takes about a minute to perform and the deformation process takes just a few seconds, yielding a final 3D reconstruction that is accurate and complete and that can be used to visualize and plan implant placement and size, with virtual bone resection contours overlaid directly on the patient's individual anatomy (Fig. 27.2D). The process has proven to be robust in tens of thousands of surgeries over a wide range of patient anatomies and pathologies, and has previously been demonstrated to be more accurate and repeatable than manual landmark digitization in multiuser studies [13].
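The octree-spline deformation itself is proprietary, but the rigid pre-alignment step common to registration pipelines of this kind can be illustrated with the standard Kabsch method. This is a much-simplified stand-in, not the BoneMorphing algorithm, and it assumes known point correspondences.

```python
import numpy as np

# Much-simplified illustration: rigid (Kabsch) alignment of intraoperatively
# digitized points to corresponding generic-model points. BoneMorphing then
# *deforms* the model (proprietary octree-spline method), which this sketch
# does not attempt to reproduce.

def rigid_align(model_pts, digitized_pts):
    """Return rotation R and translation t mapping model_pts onto digitized_pts."""
    cm = model_pts.mean(axis=0)
    cd = digitized_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (model_pts - cm).T @ (digitized_pts - cd)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cm
    return R, t
```

After such an alignment, the residual point-to-model distances drive the deformable step; a residual check in all digitized regions corresponds to the ±1 mm model verification described above.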


FIGURE 27.1 The OMNIBotics system includes the BalanceBot ligament balancer (left), the OMNIBot bone-cutting robot (bottom center), and a Station with Predictive Balancing software.


FIGURE 27.2 Screenshots from the ART software: (A) and (B) 3D BoneMorphing acquisitions and model check screens; (C) ligament balancing acquisitions before making femoral resections; (D) femoral planning with predictive gaps; and (E) final kinematic assessment.


27.2.3 OMNIBot miniature robotic cutting guide

The OMNIBot is a miniature bone-mounted robotic guide that mounts onto the medial side of the distal femur within the incision (Fig. 27.3A). It precisely positions a single cutting guide for all five resections of the distal femur according to the surgeon's plan, using a single cutting guide for all implant sizes. We elected to design a miniature bone-mounted robotic system to minimize the overall disruption to the OR and surgical workflow, and the amount of space occupied around the OR table. Bone-mounted robotic systems have the implicit advantage of following the motion of the bone during the procedure, so tracking of the bone is not required during the bone-cutting process. Since optical line of sight for motion tracking is not required during cutting, surgical assistants can stand on either side of the table to assist in the procedure. Most commercially available robotic systems for TKA, however, consist of large and expensive floor-mounted robotic arms that crowd the operating theater, leaving little room for the physician and their surgical assistants [14]. For these reasons, performing bilateral knee arthroplasties with such systems can be prohibitive. The OMNIBot employs precision rotary drive components, including harmonic drives and small, electronically commutated precision DC motors, to achieve a high degree of positioning accuracy. The robot offers both low and high torque positioning control modes. Low torque control is used for positioning the saw-guide from one cut to the next for safe movement around the patient, with resistance monitoring incorporated for collision detection during movement. High torque position control is used to maintain the saw-guide in position while the surgeon is performing the resection, to achieve a stable and accurate cut. Internal sensors monitor the guide's position during the resection and provide the user with feedback if the saw-guide deviates from the target cutting plane during cutting.
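A sketch of the kind of geometric check such a monitor might perform, comparing the current saw-guide plane with the planned cut plane (hypothetical function names and tolerances; plane normals are assumed to be unit vectors):

```python
import math

# Hypothetical deviation check: angular error between the saw-guide plane
# normal and the target cut-plane normal, plus the offset of the guide's
# reference point from the target plane. Units: mm and degrees; the 0.5
# tolerances are invented for illustration, not OMNIBot specifications.

def plane_deviation(guide_normal, guide_point, target_normal, target_point):
    dot = sum(a * b for a, b in zip(guide_normal, target_normal))
    dot = max(-1.0, min(1.0, dot))           # guard acos against rounding
    angle_deg = math.degrees(math.acos(dot))
    # Signed distance of the guide reference point from the target plane.
    offset_mm = sum(n * (p - q)
                    for n, p, q in zip(target_normal, guide_point, target_point))
    return angle_deg, offset_mm

def within_tolerance(angle_deg, offset_mm, max_angle=0.5, max_offset=0.5):
    return angle_deg <= max_angle and abs(offset_mm) <= max_offset
```

A controller could run such a check each control cycle and warn the user, or switch to high-torque hold, when the deviation leaves tolerance.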
This design has been shown to produce cuts that are more accurate than conventional, navigated cutting blocks in bilateral cadaveric studies [15]. The system also includes a patented "fit adjustment" feature that allows the surgeon to intraoperatively adjust the amount of press-fit (interference fit) at the implant–bone interface. The tightness of the fit of the implant can be tailored according to surgeon preference or to the patient's bone quality, which can be evaluated intraoperatively (Fig. 27.3B). The anterior and posterior femoral resections can be adjusted in increments of 0.25 mm either before or after making the initial distal cut, to accommodate hard bone (typically a line-to-line fit) or softer bone, with up to 0.75 mm of additional press-fit available per resection. To validate this feature, robotic positioning accuracy measurements were performed on a bench test setup (Fig. 27.3C) and accuracy was shown to be within −0.04 ± 0.14 mm (mean ± SD) for the overall dimension between the anterior and posterior resections. Resections were also performed on synthetic bones and compared to cuts made with conventional blocks [16]. The average error in the anteroposterior (AP) dimension between the targeted and measured cuts was significantly smaller with the OMNIBot than with the conventional blocks (−0.14 ± 0.13 vs 0.7 ± 0.52 mm, P = .021).

27.2.4 BalanceBot system development

The BalanceBot is a miniature robotic ligament balancing device with two linear actuators that can be independently controlled to regulate either the space (gap) or the ligament tension (force) in the medial and lateral compartments of the knee. The device can be switched between the following two control modes using the system application software: (1) tension mode, in which servos move the actuator positions based on force measurements made by integrated force sensors to achieve a targeted ligament tension in the knee; and (2) gap or insert mode, which is a position control loop that integrates deflection compensation to account for any bending in the device and its attachments under high loads.
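The two control modes can be caricatured as follows. This is a hypothetical sketch with invented gains and compliance values; the real firmware would use a properly tuned, safety-limited servo loop.

```python
# Hypothetical control sketch for the two BalanceBot modes described above.
# Tension mode: a proportional servo nudges each actuator until the measured
# compartment force reaches the target. Gap mode: the commanded position is
# corrected for estimated bending under load (deflection compensation).
# Gains, step limits, and compliance values below are invented.

def tension_mode_step(position_mm, measured_force_n, target_force_n,
                      gain_mm_per_n=0.01, max_step_mm=0.1):
    error = target_force_n - measured_force_n
    step = max(-max_step_mm, min(max_step_mm, gain_mm_per_n * error))
    return position_mm + step            # new actuator setpoint (mm)

def gap_mode_setpoint(target_gap_mm, measured_force_n,
                      compliance_mm_per_n=0.002):
    # Extend by the estimated deflection so the *achieved* gap, with the
    # device and its attachments bending under load, matches the target.
    return target_gap_mm + compliance_mm_per_n * measured_force_n
```

Running `tension_mode_step` independently for the medial and lateral actuators is what lets the device hold, say, 80 N in each compartment while the surgeon flexes the knee.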

27.2.5 Initial prototype design requirements

Based on the clinical requirements, the BalanceBot had to generate and sustain a defined amount of force, and for safety and usability it had to allow for back drivability of each of the axes. This required precise calculation of the gear ratios for each axis and selection of a transmission type that complied with those stringent requirements. The following equation was used to model the driving torque:

T = (Fa · l) / (2πη)

where T is the driving torque (N m), Fa is the thrust force (N), l is the screw lead (m), and η is the efficiency. We chose a planetary-type gearhead, which induces minimal friction, and a high-precision ball screw directly coupled with the DC servo motor rotor. The ball screw lead was also chosen such that it would allow for back drivability:

Tb = (F · P · η2) / (2π)

where Tb is the back-driving torque (N m), F is the axial load (N), P is the screw lead (m), and η2 is the reverse efficiency (0.8–0.9 for ball screws). Another important design aspect was determining the deflection of some of the critical system components and optimizing the design to minimize it, by employing different design approaches related to the construction of the individual components. Fig. 27.4 illustrates an example of finite element analysis for one of the BalanceBot's levers.

FIGURE 27.3 (A) The OMNIBot robotic cutting guide. (B) The software allows for adjustment of the anterior and posterior resections in increments of 0.25 mm at their most proximal point. The solid and broken lines represent the default (line-to-line) and adjusted cut locations, respectively. (C) Bench test setup showing how the robot positioning accuracy and bone cut accuracy were assessed with the optical comparator measurement system [11].

FIGURE 27.4 Engineering analysis of the BalanceBot system included FEA modeling of the system components to optimize rigidity, and friction and power modeling analysis of the robotic drive and transmission components. FEA, Finite element analysis.
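A quick worked example of the two torque formulas, with illustrative values that are not actual BalanceBot parameters:

```python
import math

# Worked example of the driving- and back-driving-torque formulas above.
# The 300 N load, 2 mm ball-screw lead, and efficiencies are illustrative
# assumptions, not BalanceBot design values.

def driving_torque(thrust_n, lead_m, efficiency):
    # T = Fa * l / (2 * pi * eta)
    return thrust_n * lead_m / (2 * math.pi * efficiency)

def back_driving_torque(axial_load_n, lead_m, reverse_efficiency):
    # Tb = F * P * eta2 / (2 * pi)
    return axial_load_n * lead_m * reverse_efficiency / (2 * math.pi)

T = driving_torque(300.0, 0.002, 0.9)          # ≈ 0.106 N·m
Tb = back_driving_torque(300.0, 0.002, 0.85)   # ≈ 0.081 N·m
```

Note that the forward efficiency divides the driving torque while the reverse efficiency multiplies the back-driving torque, so with these sample numbers the axis can be back-driven with less torque than it takes to drive it, which is the property the design targets.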

27.2.6 Proof of concept

In order to establish the feasibility of the BalanceBot concept, an initial prototype was developed and evaluated in two early cadaver labs involving seven orthopedic surgeons and seven knee specimens (Fig. 27.5) [17]. The objectives of these tests were to (1) establish the repeatability of acquiring the knee gaps dynamically throughout a range of knee motion; (2) determine the range of forces that surgeons typically apply to the knee during a manual varus/valgus stability assessment, to refine the system requirements; and (3) determine the amount of ligament tension that feels "ideal" to a surgeon when evaluating the stability of the knee. We also evaluated the usability of the initial concept device over the entire TKA procedure. The repeatability of the gap acquisitions throughout flexion ranged from 0.4 to 1.4 mm SD depending on the specimen, with the majority of variations being within 0.8 mm SD. This indicated that the dynamic gap acquisition process was repeatable. Peak forces measured during a manual blinded varus/valgus stability test varied significantly between the different users, ranging from 70 to 310 N in extension and from 50 to 215 N in flexion. The loads applied were also different from medial to lateral (Fig. 27.5), and they depended on the amount of pretension applied to the ligaments. This was indicative of the large intersurgeon variability in applying loads based on surgeon "feel," and supported the need to provide surgeons with an objective ligament tensioning tool for TKA. In the clinical assessment of ideal tension, each surgeon in a group of five surgeons rated 50 N as "slightly loose" and 80 N as "ideal" in extension, while 50 N was rated "ideal" at 90 degrees of flexion. Each surgeon in a group of three surgeons rated 80 N as "ideal" in both extension and flexion. These data gave us an important starting point as to what tension levels the system should target.

Development of an Active Soft-Tissue Balancing System for Robotic-Assisted Total Knee Arthroplasty Chapter | 27

[Figure 27.5 callouts, panels (A) and (B): metal bellows; independent medial and lateral femoral paddles; removable tibial insert augments; Maxon spindle drive (planetary gearhead, ball screw); ball-type slider; Maxon motor; active dual motor unit; clearance for reduced patella; removable tibial baseplate; encoder; force sensor.]

FIGURE 27.5 Evolution of the BalanceBot design: (A) and (B) CAD models of the initial and second-generation prototype designs; (C) initial prototype software that allowed measurement of the stress–strain curve of the ligaments to characterize the tissue properties; (D) and (E) initial and second-generation prototypes being tested in cadaver specimens; and (F) final version used in surgery.

27.2.7

Engineering for product commercialization

Based on the user feedback obtained during our initial cadaver labs with our surgeon team, we redesigned the BalanceBot to be smaller and lighter, reducing the weight by approximately 1 kg. The force requirements were also reduced, allowing for the reduction in size and weight and increasing the safety of the system by minimizing any inadvertent damage to ligaments and soft-tissue structures. Additionally, the system was integrated into the OMNIBotics knee application software, which allowed comprehensive knee implant planning with predictive balance curves throughout the range of flexion (Fig. 27.2D). The design of the system’s electrical architecture and control box was also refined to meet the EN 60601-1 standard for “Medical electrical equipment: General requirements for basic safety and essential performance,” as well as to comply with electromagnetic compatibility requirements for emissions and immunity. The system was then ready to undergo final verification and validation testing before submission for CE marking and FDA clearance.

27.2.8

Verification, validation, and regulatory clearances

Verification and validation are an important part of a device development cycle. They also provide essential data and pave the way for regulatory clearance. The system was tested in two cadaveric labs to ensure that it did not raise any safety concerns and that it met the established performance requirements, usability, and human factors specifications. The tests were developed using the international standards EN 60601-1-6 (2007) and IEC 62366 (2007) and the FDA guideline “Medical Device Use-Safety: Incorporating Human Factors Engineering into Risk Management.” The two cadaveric labs included six surgeons and seven cadaveric specimens. Throughout the workflow, participating surgeons were asked questions regarding the system’s usability and safety, including questions related to the graphical user interface, color scheme, and device operation. The functional requirements, which describe the system’s intended behavior and performance, were also verified in these labs. These requirements include functions such as the ability of the spacer to apply controlled gaps between the femur and tibia, its fit within a standard-incision surgical approach, and

27. The OMNIBotics System

This tension was in the range of what other studies have reported for applied ligament tension.
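The repeatability figures reported for the gap acquisitions (0.4–1.4 mm SD) are per-specimen sample standard deviations. A minimal sketch of that computation, using made-up measurements rather than the study's data:

```python
from statistics import stdev

# Hypothetical repeated medial-gap acquisitions (mm) for one specimen at one
# flexion angle -- illustrative values only, not the chapter's data.
acquisitions = [6.1, 6.4, 5.9, 6.3, 6.2]

sd = stdev(acquisitions)  # sample standard deviation (n - 1 denominator)
print(f"Gap repeatability: {sd:.2f} mm SD")
```

Repeating this per specimen and per flexion angle yields the 0.4–1.4 mm spread the text describes.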


Handbook of Robotic and Image-Guided Surgery


confirmation that it can be held easily in the user’s hand. The cadaveric labs were also used to validate that the risk control measures implemented were capable of minimizing the potential failures and hazards identified in the system risk analysis. These included BalanceBot load limit considerations and visual feedback warnings. The addition of the BalanceBot to the OMNIBotics system required a new FDA 510(k) submission for regulatory clearance. We used the prior FDA-cleared OMNIBotics system and digital sensor devices as the substantially equivalent


devices. Performance bench tests and software and hardware specifications were submitted in addition to the cadaver validation and verification labs. A subsequent submission addressed the FDA’s request for additional information, such as material biocompatibility and system cybersecurity. Upon review of the submission with the additional information, the OMNIBotics system with the BalanceBot received 510(k) clearance for clinical use in September 2017. The first surgeries in the United States were subsequently performed by the surgeon coauthors (JML and JMK) in October 2017 with successful outcomes (Fig. 27.6).

FIGURE 27.6 (A) Initial cadaver experiments were performed to determine the amount of force applied to the knee during a manual varus/valgus knee stress test and (B) load versus time plots showing the forces measured during the stress tests (solid lines). Forces varied significantly between users, between the medial and the lateral sides, and with the amount of pretension applied to the ligaments.


27.2.9

Surgical workflow

To use the OMNIBotics system in surgery, the surgeon starts by exposing the knee with a standard anterior midaxial incision and medial parapatellar arthrotomy (Fig. 27.7A). The initial exposure involves a limited release of the deep medial collateral ligament from the proximal tibia to the extent necessary for removal of the tibial articular surface. Similarly, a limited lateral release is required. These releases, along with cruciate resection and osteophyte removal, are considered universal as part of the standard exposure. In the case of marked deformity, the surgeon may elect to extend the medial and/or lateral releases during the initial dissection. In the case of flexion contracture, a posterior capsule release may be performed early as well, but may not be completed until the femoral cuts are made. Tibial and femoral tracking arrays are then rigidly fixed onto their respective bones using bone pins and cancellous bone screws (Fig. 27.7B). The tibial and femoral pins are placed medial to lateral in the coronal plane. The femoral pins reside within the standard exposure and avoid any muscle tethering. The tibial pins are placed percutaneously. The tibio-femoral coordinate system is established by digitizing various anatomical landmarks on the femur and the tibia. The bone morphing process described in Section 27.2.2 is used to develop the patient-specific 3D bone model.

FIGURE 27.7 Intraoperative photo sequence illustrating the surgical workflow: (A) initial exposure through a standard anterior incision; (B) array positioning on the femur and tibia; (C) tibial resection with the Nanoblock; (D) the BalanceBot is inserted in the knee; (E) BalanceBot applying constant tension in the knee to quantify the soft-tissue envelope before cuts; (F) robotic femoral resections with the OMNIBot; (G) distal femoral resection validation with the system cut probe; and (H) active trialing with the BalanceBot in place of the tibial insert.


27.2.10

Cadaver labs and clinical results

The cadaveric labs used for validation and verification also helped provide a better understanding of the native and postoperative gaps and provided an assessment of the system’s prediction accuracy. The results showed that the native gaps are neither equal nor symmetric through the range of flexion. Both the medial and lateral gaps were tightest in extension. The gaps increased with flexion until 20 degrees, at which point they remained consistent until 90 degrees. The medial gaps were tighter than the lateral gaps throughout the entire range of motion. The native gaps measured in the cadavers were similar to those measured in other studies [19]. Postoperatively, on the other hand, the gaps were equal and symmetric at 0 and 90 degrees of flexion, with an increase of 1.93 mm of laxity in midflexion. When comparing the predicted gaps to the measured postoperative gaps, the average difference was approximately 1 mm and the root mean square (RMS) error was 1.6 and 1.7 mm for the medial and lateral gaps, respectively [20]. These results further validated the ability of the BalanceBot to measure preoperative gaps and accurately predict the postoperative gaps based on a specific implant alignment. Early clinical results have been similar to those from the cadaveric labs. The accuracy of the prediction algorithm was assessed in a study of 88 robotic TKAs [21]. Comparing the predicted gaps to the measured


A tibia-first, gap-balancing technique takes full advantage of the system’s capabilities, which can also be useful for kinematic alignment strategies and with measured resection techniques, if preferred. The tibia is first resected using a navigated adjustable cutting block (Nanoblock, Fig. 27.7C). The tibial cut is validated, and soft-tissue releases, if necessary, are performed at this time. The BalanceBot is next inserted into the joint space (Fig. 27.7D and E). The load to be applied by the BalanceBot in flexion and extension is set using the OMNIBotics system (Fig. 27.2C). The loads are evaluated by the surgeon, can be adjusted to the surgeon’s stability preference, and can be customized for a particular patient. The BalanceBot can be used in two ways. Tension mode distributes the set load uniformly throughout the range of motion, with the arms moving based on the space available to maintain a constant force. This generates a gap profile unique to the patient and the force selected. Insert mode locks the arms in place regardless of the space available, so the device can act as a spacer block. This gives the surgeon a sense of what the knee will feel like at full extension and 90 degrees of flexion with the selected force. Gap laxity data in this mode are shown on the computer screen and can be used to decide on the most appropriate force for a patient. The preoperative gap profiles are measured using the navigation system in tension mode as the limb is manually taken through the range of flexion with the patella reduced (Fig. 27.7E). After the initial acquisition of the gap profiles, if a pattern of deformity is present that is not fully correctable with the virtual femoral cut planning, the surgeon can perform appropriate releases and reacquire the gap profiles. The resulting measurement of tibio-femoral kinematics is used in a virtual gap algorithm to predict the postoperative gap profile for the planned implant alignment.
The femoral implant varus–valgus alignment, internal–external rotation, flexion–extension, and distal and posterior cut depths can be adjusted virtually on the planning page to achieve a balanced gap in both flexion and extension (Fig. 27.2D). Once the preferred femoral implant position is determined, the OMNIBot robotic cutting guide is assembled and then used to perform the planned femoral cuts (Fig. 27.7F). Prior to inserting the femoral trial component, the distal and anterior femoral resections are validated using the system cut controller (Fig. 27.7G). The femoral trial is placed, and then the BalanceBot is reinserted into the joint space (Fig. 27.7H). Separate medial and lateral inserts that match the femoral trial are attached to the BalanceBot before it is inserted into the joint. Using the BalanceBot, the postoperative gaps are measured in tension mode under the same loading profile used for the preoperative gap acquisition to generate a gap profile throughout the range of motion (Fig. 27.2E). The BalanceBot is then switched to insert mode, and a force profile is generated that plots the balance of the pressures in the medial and lateral compartments at 10-degree intervals. Finally, a trial tibia and polyethylene component are inserted, the knee is taken through a range of motion, and stability is assessed manually. The postoperative gap and force profiles and the manual feel of the trial components are all used to determine the insert thickness and whether any additional soft-tissue releases or bone recuts are required. The surgeon has the option of removing the arrays after definitive trialing or performing an additional data acquisition after cementation of the final implants. Throughout the procedure, screenshots are automatically saved along with key data points and graphs, which are output to a USB key that the surgeon can transfer to the office or to the patient’s hospital record.
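Tension mode, as described, servoes the paddle positions to hold a set distraction force while the knee is flexed. A minimal proportional-control sketch of that idea follows; the gain, the linear joint-stiffness model, and all numbers are hypothetical and are not the BalanceBot's actual control law:

```python
def tension_mode_step(position_mm, measured_force_n, target_force_n, gain_mm_per_n=0.01):
    """One control update: move the paddle to reduce the force error.
    If the measured force is below the target, extend (distract more);
    if it is above, retract. Purely illustrative gain and units."""
    error = target_force_n - measured_force_n
    return position_mm + gain_mm_per_n * error

# Simulate the joint as a stiff spring: force grows ~50 N per mm of distraction.
stiffness = 50.0  # N/mm, hypothetical tissue stiffness
pos = 0.0         # paddle distraction (mm)
for _ in range(200):
    force = stiffness * pos
    pos = tension_mode_step(pos, force, target_force_n=80.0)

print(f"Settled position: {pos:.2f} mm, force: {stiffness * pos:.1f} N")
```

Under this toy model the loop settles where spring force equals the 80 N target; in tension mode the settled position at each flexion angle is exactly the gap profile the system records.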
An assessment of the learning curve for a transition to the OMNIBotics System showed that the first seven cases required an additional 25 minutes of operative time after which subsequent cases returned to the expected case duration [18].


postoperative gaps resulted in an RMS error of 1.3 and 1.5 mm for the medial and lateral sides, respectively. Medial-to-lateral balance through the flexion range was also measured in the study. Depending on the flexion angle, 90%–95% of the knees were balanced within 2 mm, with a maximum imbalance of less than 3.3 mm across all knees and flexion angles. An ongoing multicenter study is investigating the effects of the joint balance achieved by using the BalanceBot on patient-reported outcome measures (PROMs). The study aims to report on the short- and long-term effects on PROMs by collecting the Knee injury and Osteoarthritis Outcome Score (KOOS), the University of California, Los Angeles (UCLA) activity scale, and patient satisfaction data. Our early clinical results with the Predictive Balancing technique indicate 92% and 95% patient satisfaction rates at the 3- and 6-month timepoints, respectively, which is higher than other studies that report 73%–79% patient satisfaction rates for the same timepoints [2,22,23].
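The two accuracy metrics quoted above, RMS error between predicted and measured gaps and the fraction of knees balanced within a tolerance, can be computed as in this sketch. The gap values are illustrative and are not the study's data:

```python
import math

def rms_error(predicted, measured):
    """Root mean square difference between predicted and measured gaps (mm)."""
    diffs = [p - m for p, m in zip(predicted, measured)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def fraction_balanced(medial_gaps, lateral_gaps, threshold_mm=2.0):
    """Fraction of knees whose medial-lateral gap difference is within threshold."""
    imbalances = [abs(m - l) for m, l in zip(medial_gaps, lateral_gaps)]
    return sum(i <= threshold_mm for i in imbalances) / len(imbalances)

# Hypothetical gaps (mm) for five knees at one flexion angle.
predicted_medial = [7.0, 8.5, 8.0, 8.2, 6.9]
measured_medial  = [6.5, 8.0, 9.0, 8.0, 7.1]
measured_lateral = [7.5, 6.4, 5.5, 7.0, 9.1]

print(f"RMS error: {rms_error(predicted_medial, measured_medial):.2f} mm")
print(f"Balanced within 2 mm: {fraction_balanced(measured_medial, measured_lateral):.0%}")
```

In the studies cited, the same two summaries are reported per flexion angle rather than at a single angle.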

27.3

Discussion and conclusion

The interest in robotics in the field of orthopedics has increased significantly in the last two decades. This has resulted in numerous systems reaching the market or currently in development, such as the Stryker MAKO, Smith & Nephew Navio, Zimmer Biomet ROSA, DePuy Synthes Orthotaxy, and OMNI OMNIBotics. These systems share a similar overall architecture: the patient’s bones are first registered using a 3D motion-sensing system. After registering the bones, the surgeon manipulates the leg to assess the joint alignment and laxity. Based on this subjective laxity test, the surgeon plans the bony cuts with the intent of achieving balanced postoperative soft-tissue tension. Soft-tissue releases are typically performed if the postoperative gaps are deemed unbalanced. Robotic-assisted TKA has demonstrated an improvement in the accuracy of bone cuts and implant alignment [12,13,15]; however, ligament balancing remains difficult to achieve using current tools and instruments. Developing a new tool that helps surgeons optimize their surgical plan and objectively assess and achieve balanced soft tissues could lead to a significant improvement in PROMs. The BalanceBot was developed to address this void in existing robotic systems. The intent of the BalanceBot was to better assess the state of the joint preoperatively and to help the surgeon achieve reproducible postoperative results. During the clinical phase, it was demonstrated that the data acquired from the BalanceBot allowed surgeons to predict the postoperative gap balance with an RMS error of less than 2 mm prior to making any femoral cuts [16]. The ability to predict the postoperative gap balance resulted in a medial-to-lateral gap balance within 2 mm in over 90% of patients [17]. The BalanceBot has shown a significant improvement in medial-to-lateral balance compared to manual instruments in both flexion (97% vs 57%) and extension (96% vs 52%) [24].
Short- and long-term follow-up studies are currently underway to assess the effect of the improvement in soft-tissue balance on patient-reported outcomes. The development of a robotic tensioner allows surgeons to know what to expect from the planned femoral resection before making the bone cuts, and to objectively assess soft-tissue balance afterward without having to rely on feel and experience. The data from the BalanceBot can help the surgeon optimize femoral component placement to achieve a desired gap or knee laxity profile throughout the entire range of flexion while minimizing the need for soft-tissue releases. Combined with the accuracy of the robotic-assisted cutting block, the BalanceBot can precisely predict and achieve postoperative balance. The development and introduction of the BalanceBot is a notable milestone in refining gap-balancing TKA and a likely step toward improved patient outcomes.

Acknowledgments
We wish to thank the OMNI team and our surgeon collaborators who contributed to the development, validation, and clinical evaluation and trials of the BalanceBot system. These include Dr. Leonid Dabuzhsky, Dr. Paramjeet Gill, Dr. Jan Koenig, Dr. Amber Randall, Dr. Jeffrey DeClaire, Dr. Brett Fritsch, Dr. Wayne Moschetti, Dr. David S. Jevsevar, and Dr. Sam Sydney; and Fred Leger, Christian Joly, Marty Nichols, George Cipolletti, and several others from the OMNI team.

References
[1] Bourne RB, Chesworth BM, Davis AM, et al. Patient satisfaction after total knee arthroplasty: who is satisfied and who is not? Clin Orthop Relat Res 2009;468(1):57.
[2] Van Onsem S, Van Der Straeten C, Arnout N, Deprez P, Van Damme G, Victor J. A new prediction model for patient satisfaction after total knee arthroplasty. J Arthroplasty 2016;31(12):2660–2667.e1.
[3] Jain D, Nguyen LL, Bendich I, Nguyen LL, Lewis CG, Huddleston JI, et al. Higher patient expectations predict higher patient-reported outcomes, but not satisfaction, in total knee arthroplasty patients: a prospective multicenter study. J Arthroplasty 2017;32(9S):S166–70.
[4] Maratt JD, Lee YY, Lyman S, Westrich GH. Predictors of satisfaction following total knee arthroplasty. J Arthroplasty 2015;30(7):1142–5.


[5] Gunaratne R, Pratt DN, Banda J, Fick DP, Khan RJK, Robertson BW. Patient dissatisfaction following total knee arthroplasty: a systematic review of the literature. J Arthroplasty 2017;32(12):3854–60.
[6] Le DH, Goodman SB, Maloney WJ, Huddleston JI. Current modes of failure in TKA: infection, instability, and stiffness predominate. Clin Orthop Relat Res 2014;472(7):2197–200.
[7] Elmallah RK, Mistry JB, Cherian JJ, Chughtai M, Bhave A, Roche MW, et al. Can we really “feel” a balanced total knee arthroplasty? J Arthroplasty 2016;31(9 Suppl.):102–5.
[8] Delport HP, Vander Sloten J, Bellemans J. New possible pathways in improving outcome and patient satisfaction after TKA. Acta Orthop Belg 2013;79(3):250–4.
[9] Kayani B, Konan S, Tahmassebi J, Pietrzak JRT, Haddad FS. Robotic-arm assisted total knee arthroplasty is associated with improved early functional recovery and reduced time to hospital discharge compared with conventional jig-based total knee arthroplasty—a prospective cohort study. Bone Joint J 2018;100-B:930–7.
[10] Camarata DA. Soft tissue balance in total knee arthroplasty with a force sensor. Orthop Clin North Am 2014;45:175–84.
[11] Dabuzhsky L, Neuhauser-Daley K, Plaskos C. Post-operative manipulation rates in robotic-assisted TKA using a gap referencing technique. Bone Joint J 2017;99-B:S387.
[12] Szeliski R, Lavallee S. Matching 3-D anatomical surfaces with non-rigid deformations using octree-splines. Int J Comput Vision 1996;18(2):171–86.
[13] Perrin N, Stindel E, Roux C. BoneMorphing versus freehand localization of anatomical landmarks: consequences for the reproducibility of implant positioning in total knee arthroplasty. Comput Aided Surg 2005;10(5–6):301–9.
[14] Mont MA, Khlopas A, Chughtai M, Newman JM, Deren M, Sultan AA. Value proposition of robotic total knee arthroplasty: what can robotic technology deliver in 2018 and beyond? Expert Rev Med Devices 2018;15(9):619–30.
[15] Koulalis D, O’Loughlin PF, Plaskos C, Kendoff D, Cross MB, Pearle AD. Sequential versus automated cutting guides in computer-assisted total knee arthroplasty. Knee 2011;18(6):436–42.
[16] Ponder C, Plaskos C, Cheal E. Press-fit total knee arthroplasty with a robotic-cutting guide: proof of concept and initial clinical experience. Bone Joint J 2013;95(S-28):61.
[17] Plaskos C, Todorov A, Joly C, Dabuzhsky L, Gill P, Jevsevar D, et al. A novel active spacer system for ligament balancing in robotic-assisted knee arthroplasty—concept feasibility and early cadaver results. In: 14th Annual meeting of the international society for computer assisted orthopaedic surgery. Osaka, Japan; June 2016.
[18] Keggi JM, Plaskos C. Learning curve and early patient satisfaction of robotic-assisted total knee arthroplasty. In: International society for technology in arthroplasty (ISTA). Boston, MA; October 2016.
[19] Roth JD, Howell SM, Hull ML. Native knee laxities at 0, 45, and 90 degrees of flexion and their relationship to the goal of the gap-balancing alignment method of total knee arthroplasty. J Bone Joint Surg Am 2015;97(20):1678–84.
[20] Shalhoub S, Moschetti WE, Dabuzhsky L, Jevsevar DS, Keggi JM, Plaskos C. Laxity profiles in the native and replaced knee—application to robotic-assisted gap-balancing total knee arthroplasty. J Arthroplasty 2018;33(9):3043–8.
[21] Shalhoub S, Randall AL, Lawrence JM, Keggi JM, Declaire JH, Plaskos C. Can we predict laxity in robotic TKA using pre-operative force-controlled laxity measurements? In: International society for technology in arthroplasty (ISTA). London, UK; October 2018.
[22] Turcot K, Sagawa Jr Y, Fritschy D, Hoffmeyer P, Suvà D, Armand S. How gait and clinical outcomes contribute to patients’ satisfaction three months following a total knee arthroplasty. J Arthroplasty 2013;28(8):1297–300.
[23] Vissers MM, de Groot IB, Reijman M, Bussmann JB, Stam HJ, Verhaar JA. Functional capacity and actual daily activity do not contribute to patient satisfaction after total knee arthroplasty. BMC Musculoskelet Disord 2010;11:121.
[24] Joseph J, Simpson PM, Whitehouse SL, English HW, Donnelly WJ. The use of navigation to achieve soft tissue balance in total knee arthroplasty—a randomised clinical study. Knee 2013;20(6):401–6.

28

Unicompartmental Knee Replacement Utilizing Robotics

Michael J. Maggitti1, Alexander H. Jinnah2 and Riyaz H. Jinnah1,2

1Southeastern Regional Medical Center, Lumberton, NC, United States
2Wake Forest School of Medicine, Winston-Salem, NC, United States

ABSTRACT
Unicompartmental knee arthroplasty is a procedure that has historically demonstrated varying results. The primary issue identified was component malpositioning. The introduction of robotics into the surgical armamentarium over the past decade has enabled reproducible, accurate component positioning with these implants and has therefore led to a resurgence of this procedure. The available robotic systems have evolved over the past decade to allow precise execution of the surgeon’s preoperative plan by minimizing aberrant bony resection and allowing dynamic intraoperative ligamentous balancing. Herein, we describe our experience using the MAKO RIO system; a step-by-step instruction guide is provided within this chapter. The short-to-medium-term outcomes reported (revision rates) are comparable to historical rates seen for total knee arthroplasties within the literature. Long-term outcomes are not yet available for these implants; therefore, future research must be performed to identify long-term revision rates and complications.
Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00028-1
© 2020 Elsevier Inc. All rights reserved.


28.1

Introduction

Unicompartmental knee arthroplasty (UKA) in one form or another has been part of the orthopedic surgical armamentarium since the 1950s. The first implants were metallic tibial components designed by Duncan C. McKeever; the intermediate- and long-term results of these components were less than satisfactory and were noted to have a high complication rate. As a result, these initial components were abandoned. Early bicompartmental designs included the Polycentric total knee arthroplasty (TKA), designed by Dr. Frank Gunston, which consisted of metallic femoral runners engaging ultrahigh-molecular-weight polyethylene (UHMWPE) condylar tracks, and Sir John Charnley’s Load Angle Inlay total knee design, which consisted of convex UHMWPE femoral components articulating with flat metallic tibial plateau inserts. Both designs again failed to achieve intermediate-to-long-term survival and success [1]. The true legacy of modern UKA began in 1970 with the development of the Marmor Modular Knee, designed by Dr. Leonard Marmor. The initial design consisted of an unconstrained metallic single-peg femoral component articulating with an all-polyethylene tibial inlay component [2,3]. Dr. Marmor later modified the design of the tibial component with metal backing in order to resist polyethylene deformation and early failure [4]. An early study by Laskin reviewing 89 patients revealed striking postoperative relief of pain following Marmor modular UKA in both osteoarthritic and rheumatoid arthritis patients. This improvement was maintained over 2 years in nearly all patients, and implant loosening or tibial subsidence was not identified; the deficiency of this study was the lack of long-term follow-up [5]. Insall reviewed 32 UKAs over 5–7 years and found discouraging results, with only one knee rated excellent, seven good, 14 fair, and 10 poor. The medial compartment was replaced in 25 patients and the lateral compartment in seven.
Seven of the unicondylar knees were converted to a bicondylar prosthesis during the study. A majority of the patients were women (25/30), and the patella was removed at the time of arthroplasty in 18 knees for associated patellofemoral arthritis (PFA). Knees in preoperative varus averaging 8 degrees (range 5–17 degrees) were corrected with surgery to an average of 4 degrees valgus (range 0–15 degrees). Valgus knees preoperatively averaging 21 degrees (range 18–32 degrees) were corrected postoperatively to an average of 8 degrees (range 5–10 degrees). Insall questioned the design of the prosthesis, given that the asymmetrical frontal curve of both components made placement critical to success. He also opined that the decision to remove the patella was unwise. Failures due to progressive arthritis in the contralateral compartment were noted, implying mechanical causes, that is, malalignment [6]. Insall did not identify the advantages of UKA, such as decreased blood loss and preservation of preoperative motion; in addition, operative time can be shortened, and with conservative bone resection, conversion to another prosthesis is fairly straightforward if the unicondylar prosthesis fails. Kolstad reported in 1982 a series of 70 knees, of which 52 presented with rheumatoid arthritis and 18 with osteoarthritis (OA). The Marmor UKA was performed through a medial parapatellar incision. The observation period averaged 45 months (36–55 months). Only five knees in the rheumatoid arthritis group and one knee in the OA group deteriorated sufficiently to require reoperation [7]. Murray reported the results of a long-term study of the Oxford mobile-bearing UKA for medial unicompartmental arthritis. A total of 143 knees were operated on over a 10-year period (1982–92). The mean follow-up was 7.6 years and, remarkably, survivorship was 97%.
There were five revisions reported: two for progression of OA in the lateral compartment; one for component loosening; one for periprosthetic infection; and one for a painful prosthesis without identifiable cause [8].

28.2

Challenges with manual surgery

As in all orthopedic surgical procedures, the goal of every surgeon is to execute the preoperative plan with the utmost precision in the operative setting. The challenges faced in performing UKA are not uncommon in arthroplasty surgery, yet they are unique in that this procedure is typically performed through a limited incision, restricting anatomic visualization, and is reliant upon external cutting guides. Whiteside stressed that alignment, ligament balance, and implant fixation were the three most important keys to success in any knee arthroplasty. Outcome studies have shown him to be prophetic in his assessment. Ligament balance in unicondylar arthroplasty seldom requires extensive ligament release but is dependent on the sizing and position of the implants and on careful osteophyte excision. Avoiding overresection of the tibial surface is essential to ensure adequate support of the implant [9].

28.2.1

Limb alignment/component positioning

With the use of extramedullary and intramedullary instrumentation, the potential for either overcorrection or undercorrection of limb alignment exists [10]. This failure of alignment can be the result of a malpositioned tibial component, a

Unicompartmental Knee Replacement Utilizing Robotics Chapter | 28


malpositioned femoral component, or a combination of the two. Overcorrection will result in increased load-bearing in the contralateral compartment, resulting in accelerated degeneration [11,12]. Undercorrection will result in excessive loading of the UHMWPE, increasing early wear and loosening [11]. Specific parameters are required for successful component placement. The tibial component must be perpendicular to the tibial axis in the coronal plane [13]. With respect to the tibial slope, it should mirror the native slope in order to decrease the likelihood of anterior cruciate ligament (ACL) attenuation and attritional rupture. Failing to recreate the native posterior tibial slope reduces femoral rollback and knee flexion and increases load stress on the tibial component when the slope is decreased, whereas excessive posterior slope results in flexion instability [14,15]. Positioning of the tibial component is critical, since medial or lateral overhang will result in soft-tissue impingement (medial collateral ligament or iliotibial band), and translation of the tibial component into the tibial spine will result in ACL impingement. The femoral component must be positioned perpendicular to the tibial component in the coronal plane in order to avoid edge loading of the UHMWPE tibial component. Balanced flexion and extension gaps are paramount to proper joint alignment and rotation. The femoral component must not impinge on patellofemoral flexion and must be inset to prevent this error in placement [16]. With respect to lateral UKA, the arthroplasty surgeon must take into account the nuances of lateral compartment component placement. The lateral compartment is smaller than the medial compartment, and there is increased native laxity in the lateral compartment. The lateral femoral condyle is smaller than the medial femoral condyle.
The lateral tibial plateau is anatomically flat and slightly convex in comparison to the larger, concave medial tibial plateau. There is also the inherent screw-home mechanism, unique to the lateral compartment, that provides stability and is required for native knee kinematics. As a result, the surgical technique must take into account minimized resection of the lateral tibial plateau, along with mild internal rotation of the vertical tibial cut coupled with mild external rotation of the femoral component. One must avoid overstuffing the lateral compartment given its inherent physiologic laxity. Posterior-lateral overhang should be avoided to prevent popliteal tendon impingement. All of these factors play a key role in successful lateral UKA [17–19].
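The coronal and sagittal alignment targets above (a tibial cut perpendicular to the tibial axis, with the native posterior slope) can be expressed numerically. The sketch below recovers varus/valgus and posterior-slope angles from a planned cut-plane normal, assuming a simplified tibial coordinate frame (x lateral, y anterior, z proximal along the mechanical axis); the frame convention and numbers are illustrative, not any navigation system's actual convention:

```python
import math

def cut_plane_angles(normal):
    """Varus/valgus and posterior-slope angles (degrees) of a tibial cut plane.
    `normal` is the plane's unit normal in a tibial frame where
    x = lateral, y = anterior, z = proximal (mechanical axis).
    A cut exactly perpendicular to the mechanical axis has normal (0, 0, 1)."""
    nx, ny, nz = normal
    varus_valgus = math.degrees(math.atan2(nx, nz))      # coronal-plane tilt
    posterior_slope = math.degrees(math.atan2(ny, nz))   # sagittal-plane tilt
    return varus_valgus, posterior_slope

# A plan with 0 degrees of coronal tilt and ~5 degrees of slope (illustrative).
n = (0.0, math.sin(math.radians(5)), math.cos(math.radians(5)))
vv, slope = cut_plane_angles(n)
print(f"varus/valgus: {vv:.1f} deg, posterior slope: {slope:.1f} deg")
```

Checking a planned cut against the text's targets then reduces to asserting that the coronal tilt is near zero and the slope matches the measured native slope.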

28.3 Robotic surgery experience

One cannot overemphasize proper surgical technique, yet proper patient selection is equally important to long-term success. In 1989 Kozinn and Scott detailed recommended indications for unicondylar arthroplasty. The procedure was considered an alternative for patients with OA limited to either the medial or lateral compartment, as opposed to the alternative surgery of proximal tibial osteotomy. Their recommendations were that the patient should be greater than 60 years old and have a low activity demand. An appropriate patient would not be obese, with a target weight below 82 kg. Candidates were not considered appropriate if their level of physical activity was high, such as a manual laborer or the athletically inclined. Preoperative pain assessment required the patient to exhibit activity-related pain which could be isolated by the "one finger test" of Bert [20]. Diffuse pain or pain at rest may indicate a more involved arthritic process or an inflammatory component. Preoperative requirements included range of motion of at least 90 degrees with less than a 5-degree flexion contracture, along with angular deformity of less than 15 degrees (10 degrees varus to 15 degrees valgus). The deformity should be evaluated by stress radiographs preoperatively and definitively by correctability following osteophyte removal at the time of surgery. The final decision regarding implantation of a unicondylar arthroplasty must be made following thorough examination of the patellofemoral joint, the contralateral compartment, and the integrity of the cruciate ligaments. Subchondral bone exposure in either the patellofemoral compartment or the contralateral compartment would require conversion to a TKA. Contraindications included inflammatory arthropathy, age less than 60 years, high patient activity level, pain at rest, patellofemoral pain, and a deficient anterior cruciate ligament [21].
Since that initial presentation, the indications for unicondylar arthroplasty have been modified as a result of multiple studies showing improved survivorship in patients younger than 60 years old, in patients with increased activity levels, and in obese patients. Recently, Berend et al. published a consensus statement on indications and contraindications for medial unicondylar arthroplasty. The indications as listed are anterior medial OA (grade IV with eburnated bone), osteonecrosis with adequate bone stock, an intact lateral joint, coronal deformity less than 10 degrees, and sagittal deformity less than 15 degrees. Obesity was identified as less of a concern and is no longer a contraindication; the use of metal-backed tibial component designs has resulted in the modification of this parameter. Younger age is acceptable given the improved safety profile and higher satisfaction versus TKA in the younger age group. Chondrocalcinosis is not a contraindication unless the patient has recurrent bouts of synovitis. Absence of the ACL is a relative contraindication, with an extended indication in an elderly patient where the wear pattern is primarily anterior medial; in that setting the surgical technique is altered to decrease posterior tibial slope, or in a younger individual ACL reconstruction may be considered. Patellofemoral disease is still a controversial issue, with some arthroplasty surgeons ignoring the joint completely, while many would proceed with a unicondylar arthroplasty with only medial patellar facet disease. Current contraindications are a fixed deformity of 10–15 degrees in any plane, systemic inflammatory disease, prior high tibial osteotomy, full-thickness lateral patellofemoral disease, and the relative contraindication of ACL deficiency [22–24]. The advantages of unicondylar arthroplasty stem from the fact that there is less soft-tissue dissection, resulting in less blood loss as well as improved postoperative range of motion due to decreased tissue disruption. With decreased surgical exposure and smaller bone cuts, postoperative pain management is easier and managed with nonnarcotic multimodal analgesics. There are reduced surgical costs and, with added operative experience, decreased operative time. There is a shorter postoperative recovery time, and many procedures are now being performed on an outpatient basis. Improved rehabilitation time and return to activities of daily living and work have been documented, and this may be attributed to maintenance of proprioception from native ligament integrity [16,20,23]. These advantages are further expanded upon with the introduction of robotic-assisted UKA. The development of robotic-assisted systems occurred over several years with the introduction of the active, autonomous ROBODOC. This system fell out of favor due to technical complications [25,26]. As a result of the weaknesses in the original active robotic system, further robotic development led to the introduction of passive and haptic robotic systems.
The Robotic Arm Interactive Orthopedic System (RIO) is an example of a commercially available, tactile robotic system which requires active surgeon participation [26,27]. Passive systems are marketed by various companies and use multiple arrays and cameras to provide intraoperative feedback to the surgeon. No direct action is taken by a passive system, in contradistinction to the RIO, where the robot will disengage active bone resection if placed outside of the predetermined operative field [27]. In order to prepare for a robotic-assisted UKA, a preoperative CT scan of the knee is obtained along with spot images of the hip and ankle joints. This information is loaded into the proprietary software system and allows for preoperative planning of the bone cuts and sizing of the femoral and tibial components, as well as a determination of femoral and tibial bone quality to ascertain the acceptability of proceeding with a UKA. Preoperative X-rays, including stress views, further inform the surgeon as to the appropriateness of the surgical plan. By manipulating the software, precise bone cuts and implant placement are recommended in a virtual scenario. The advantages of robotic-assisted UKA include consistent placement of the components through a mini-incision exposure. Dynamic testing of the trial components allows verification of ligament balancing. Appropriate tracking and placement of the components, decreased soft-tissue dissection and exposure due to robotic precision through the mobile operative window, less bone resection from the tibia, and reduced blood loss are all advantages of the robotic procedure. There is easier conversion to a TKA secondary to less initial bone resection, increased postoperative range of motion, and a shorter postoperative recovery and rehabilitation time. Finally, with the surgical procedure being simplified by the use of the robot, a less experienced surgeon may achieve consistent, precise results [28,29].
The most obvious drawback to robotic-assisted UKA is the capital expenditure required for placement of the robot and the proprietary software necessary to direct the robotic arm and to complete the preoperative plan and intraoperative surgical modifications. In August 2010, the Mako system cost $793,000 for the robotic platform and $148,000 for the partial knee application (PKA) software [30]. Moschetti documented that, as of 2016, the cost of the robot was $934,728 over 5 years, with an additional 10% per year for years 2–5 for the associated service contract. Costs were converted into present-value dollars assuming a 3% discount rate, giving a total present discounted cost of the robotic system of $1.362 million. Moschetti detailed that robotic-assisted UKA was cost effective if case volume exceeded 94 cases/year, if the robotic UKA 2-year failure rate was below 1.2%, and if robotic-assisted UKA provided an improved failure rate beyond the 2-year period when compared to conventional UKA [31]. The Mako system is limited to use with the Mako RESTORIS fixed-bearing UKA; no other commercial fixed-bearing or mobile-bearing UKA can be used with the Mako RIO system. The learning curve for the operating surgeon and the operating room staff, along with the overall operative setup of the robot and patient positioning, will impact the enthusiasm and dedication of the participants as well as the implementation of the robotic UKA procedure.
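The present-value arithmetic behind such a cost analysis can be sketched in a few lines. This is a simplified illustration, not Moschetti's actual model: the function name `present_cost`, the assumption that the service contract is billed as 10% of the purchase price at the start of years 2–5, and the 3% discounting are ours, and the result is of the same order as, but not identical to, the published $1.362 million.

```python
def present_cost(purchase, service_rate, years, discount):
    """Present value of a robot purchased up front plus an annual service
    contract (a fraction of the purchase price) paid in years 2..years.
    Simplified illustration of the discounting described in the text."""
    pv = purchase  # capital outlay at time zero is not discounted
    for year in range(2, years + 1):
        payment = service_rate * purchase
        pv += payment / (1 + discount) ** (year - 1)
    return pv

# Figures from the text: $934,728 robot, 10% service in years 2-5, 3% discount
total = present_cost(934_728, 0.10, 5, 0.03)
# note: a simplified model; the published total is $1.362 million
print(f"${total:,.0f}")
```

Dividing such a total by annual case volume is what drives the break-even threshold (94 cases/year) reported by Moschetti.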

28.4 Preoperative preparation/operative setup/surgical technique

28.4.1 Indications for use—RESTORIS partial knee application

The RESTORIS PKA for use with the RIO is intended to assist the surgeon in providing software-defined spatial boundaries for orientation and reference information to anatomical structures during orthopedic procedures.

Unicompartmental Knee Replacement Utilizing Robotics Chapter | 28


The RESTORIS PKA for use with the RIO is indicated for use in surgical knee procedures, in which the use of stereotactic surgery may be appropriate, and where reference to rigid anatomical bony structures can be identified relative to a CT-based model of the anatomy. These procedures include unicondylar knee replacement and/or patellofemoral knee replacement.

28.4.1.1 Preoperative
• A patient is selected and an implant system chosen.
• Cleaning and sterilization of instrumentation is completed.
• A patient CT scan is taken using the Mako PKA CT Scanning Protocol (PN 200004).
• The patient CT data are loaded and processed to create the patient bone model.
• A patient-specific operative plan for the chosen implant system is created by the Mako Product Specialist and reviewed by the surgeon.

28.4.1.2 Intraoperative
• The operative plan is loaded into the application software.
• The patient is positioned.
• Patient anatomy and RIO position are registered in the application software.
• Patient leg kinematic data are collected.
• Intraoperative implant planning is performed.
• The RIO is positioned and registered (if not performed earlier).
• The RIO and bone registration are checked for accuracy.
• Bone is resected per the operative implant plan.
• Trials are placed; alignment, stability, and range of motion are assessed.
• Patient leg kinematic analysis is reassessed.
• Final implants are cemented.
• The RIO system's positional accuracy is defined as follows: cutting tool tip accuracy of ±2 mm/±2 degrees; bone registration accuracy of ±2 mm/±2 degrees.

28.4.2 Patient positioning

28.4.2.1 Single-leg procedures
Place the patient in a supine position. The patient must be positioned such that: (1) the operative leg can be moved through a full range of motion while visible by the camera and (2) the knee can be accessed by the end of the robotic arm. Before draping, either the patient or robotic arm can be moved to achieve the desired patient-RIO effective positioning. The following leg holder (knee positioner) in Fig. 28.1 is available.

28.4.2.2 Securing the leg and IMP De Mayo knee positioner
1. Prepare the leg per standard sterile technique. Locate the sterile boot for the De Mayo knee positioner. Place the patient's leg in the sterile boot.
2. A sterile staff member or sterile leg hoist should keep the leg safely elevated. Locate the leg holder base and place it at the appropriate location on the patient bed. The end of the base should extend 3–4 in. past the patient's foot when extended. The two fixation posts on the base should be in the up position at this step.
3. Locate the leg holder attachment block. Orient the block such that the handle is facing down and adjustment knobs point up (Fig. 28.2). Attach to the bed rail, but do not secure.
4. Adjust the tension of the attachment block with the circular knob on the top of the block. The tension should be such that when the lever is tightened, the block is securely fastened to the bed. Turn the lever to lock the block.
5. Lower the posts from the leg holder base into the slots of the block (Fig. 28.3).
6. Loosen the smaller silver release on the leg holder base and attach the sphere of the sterile boot (attached to the patient) into the base. Fasten the smaller lever to secure (Fig. 28.4).


FIGURE 28.1 Leg holder: IMP De Mayo knee positioner.

FIGURE 28.2 How to locate the leg holder attachment block.

FIGURE 28.3 How to put the posts into the slots of the leg holder block.


FIGURE 28.4 How to secure the leg holder.

28.4.2.3 Bone pin insertion (tibia only)
1. Using a scalpel, make one incision through the skin and fascia a minimum of 10 cm (approximately four finger breadths) inferior to the tibial tubercle and 1–1.5 cm medial to the tibial crest.
2. The second incision can be completed by using either of the following methods:
   a. make the second stab incision approximately 15 mm distal to the previous incision or
   b. place the most proximal sleeve of the array stabilizer through the first incision and make an incision where the distal sleeve rests on the skin.
3. Ensure that the arrays are aligned such that both the tibial and femoral arrays are in the same plane and parallel to the camera.
4. Fully seat the array stabilizer through both incisions so that the barrels are on the bone surface.
5. Drive one of the bone pins through the first cortex and pierce the second cortex.
6. While holding the array stabilizer in place, drive the second bone pin through the first cortex and pierce the second cortex.

28.4.2.4 Bone pin insertion (femur only)
1. Flex the knee to greater than 90 degrees to elongate the quadriceps muscles.
2. Using a scalpel, make one incision through the skin and the fascia a minimum of 10 cm (approximately four finger breadths) proximal to the superior edge of the patella and 30–35 degrees lateral of the midline.
3. The second incision can be completed by using either of the following methods:
   a. make the second stab incision approximately 15 mm proximal to the previous incision or
   b. place the most distal sleeve of the array stabilizer through the first incision and make an incision where the proximal sleeve rests on the skin.
4. Fully seat the array stabilizer through both incisions so that the barrels are on the bone surface.
5. Drive one of the bone pins through the first cortex and pierce the second cortex.
6. While holding the array stabilizer in place, drive the second bone pin through the first cortex and pierce the second cortex.

28.4.2.5 Array assembly (femur and tibia)
1. Loosely assemble the array adapter and 2-pin clamp.
2. Sliding the clamp over the bone pins, seat the clamp against the top of the array stabilizer. Orient the assembly such that the clamp's screw points away from the camera and the array adapter's screw points away from the incision.
3. Attach the femoral array to the array adapter (Fig. 28.5).


FIGURE 28.5 Femoral array attached to the adapter.

FIGURE 28.6 Operating room configuration.

28.4.2.6 Base array placement and orientation—surgeon/Mako product specialist
1. Position the base array such that it lies in the same sagittal plane as the knee center and has an unobstructed line of sight to the camera.
2. Tighten the larger attachment knob on the RIO tracker arm.
3. Orient the base array such that the "Knee Center/Incision" arrow points down.
4. Run the patient's leg through a full range of motion to confirm that the base array will not interfere with array tracking during the procedure.

28.4.2.7 Operating room configuration
The arrays and camera should be in parallel planes, with the arrays in the field of view of the camera as indicated in Fig. 28.6.

28.4.2.8 Patient time out page
The Pre/Intra-Op page allows the final confirmation of patient information and acceptance of the preoperative plan before registration and resection begin. The "Agree" button must be selected to proceed.

FIGURE 28.7 Bone registration—with the femoral array visible to the camera, pivot the leg about the hip joint in an expanding spiral motion (but avoiding the limits of the hip range of motion) until the data collection progress on screen reaches 100%.

28.4.3 Bone registration

28.4.3.1 Patient landmarks—Mako product specialist
1. Navigate to the Patient Landmarks page.
2. With "Hip Center" selected, select the "Capture" button on the Patient Landmarks page. The progress bar on the bottom of the main window will become active.
3. With the femoral array visible to the camera, pivot the leg about the hip joint in an expanding spiral motion (but avoiding the limits of the hip range of motion) until the data collection progress on screen reaches 100% (Fig. 28.7). An accuracy measurement will be displayed. If the measurement is within the tolerance, the user may proceed to the next landmark.

28.4.3.2 Capturing remaining landmarks—surgeon
Hold the tip of the blunt probe (green) at the indicated anatomical location on the correct side with both the probe and the corresponding bone array visible to the camera (Fig. 28.8).

28.4.3.3 Bone registration: femur and tibia—Mako product specialist/surgeon
1. Navigate to the Bone Registration page and select "Femur."
2. Select "Start." A progress bar will be displayed at the bottom of the main window.
3. Flexing/extending the knee as appropriate for point access, locate the tip of the blue probe (consistent with the color of the active point) on the bone as indicated. Use the end of the sharp probe to reach the bone surface through cartilage.
4. Initiate point collection by selecting "Capture." Hold the probe steady for each point until confirmation of point acquisition is heard (Fig. 28.9).

28.4.3.4 Registration verification—Mako product specialist/surgeon
1. Touch the tip of the blue probe (corresponding to the color of the large spheres) to the point on the bone corresponding to one of the larger spheres displayed on screen.


FIGURE 28.8 Capturing remaining landmarks—hold the tip of the blunt probe (green) at the indicated anatomical location on the correct side with both the probe and the corresponding bone array visible to the camera.

FIGURE 28.9 Bone registration: femur and tibia. Hold the probe steady for each point until confirmation of point acquisition is heard.

2. Press “Verify.” The software will display an error value (depending on the set surgeon preference) of the measured distance between the probe tip and the registered bone model. If the error value for that point is below the tolerance, a sound will play and the blue point will turn white, signifying that the bone registration at the point is accurate. If the blue point turns red, the accuracy has not been verified and the error value was found to be above the tolerance. Repeat for all six verification points.


FIGURE 28.10 A visual representation of the tightness or looseness of the knee at captured pose angles.

3. After femur registration verification, select “Tibia” in the side window and repeat the previous steps for the tibia. Located to the right of the model are guides displaying the probe distance to bone in mm and a 2D CT slice of the bone contour.
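The per-point verification logic described above can be summarized in a brief sketch. The function name `verify_point`, the nearest-point error metric, and the 2 mm tolerance are illustrative assumptions on our part; the actual system compares the probe tip against the registered surface model using a surgeon-set tolerance preference.

```python
import math

def verify_point(probe_tip, model_points, tolerance_mm=2.0):
    """Illustrative verification check (a simplification of the system's
    point-to-surface error metric): distance from the probe tip to the
    nearest registered bone-model point, compared against a tolerance."""
    error = min(math.dist(probe_tip, p) for p in model_points)
    # Below tolerance: the point is shown white and a sound plays;
    # above tolerance: the point is shown red and must be re-verified.
    return error, ("white" if error <= tolerance_mm else "red")

# Hypothetical registered model points (mm) and a probe touch 1 mm off the surface
model = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
error, status = verify_point((0.0, 0.0, 1.0), model)
```

In the sketch, as in the workflow above, an out-of-tolerance point does not halt the case; it simply flags that bone registration at that location must be repeated.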

28.4.3.5 Implant planning
• Gap settings—Provides the ability to set pose angle ranges and desired gap values for extension, midflexion, and flexion.
• Quick fit—Places the femoral component on the femur in the sagittal plane using inputs from the captured poses from joint balancing and the desired gap values entered in gap settings or surgeon preferences. The implant location and graph are automatically updated.
• Map point/surface—Allows a single point to be mapped corresponding to the tip of the sharp or blunt probe. All mapped points also display an estimate of the cartilage surface, contouring to the bone at the probed offset thickness (represented as a yellow line and surface in the 2D and 3D views).
• Clear point/surface—Removes the most recent mapped point and can be used to remove all mapped points.
• Map cartilage—Collects probe tip points continuously when selected. Once the points have been collected, a cartilage surface is created (represented as a yellow line and surface in the 2D and 3D views).
• Information box—Displays operative side, tight/loose gap distance at the pose selected, and distance from the femoral component anterior tip to the bone.
• Graph—Displays a visual representation of the tightness or looseness of the knee at captured pose angles. The distance between planned components is displayed as a bar for each pose captured from the joint balancing step. Each bar represents knee tightness or gap/looseness at that pose. The purpose of the graph is to combine in one display the effects of implant placement for all captured poses. For example, if five poses were captured during joint balancing, five gap numbers would be calculated, and five bars would be displayed on the graph (Fig. 28.10).

28.4.4 Bone preparation

28.4.4.1 Checkpoints
The checkpoint is a required confirmation of system accuracy before burring commences. By placing the probe in the designated, previously set checkpoint, and the cutting tool in the end effector array, the system can allow safe and accurate resection (Fig. 28.11).

28.4.4.2 Bone preparation page layout
Main window
• Displays a 3D representation of bone.
• Clicking on any of the four preset views will reorient the bone.
• Allows the user to adjust the zoom of the bone model (Fig. 28.12).


FIGURE 28.11 Bone preparation. The checkpoint is a required confirmation of system accuracy before burring commences.

FIGURE 28.12 Bone preparation page layout. Allows the user to adjust the zoom of the bone model.

28.4.4.3 Visualization and stereotactic boundaries The stereotactic boundaries guide resection by restricting the burr position. It is, however, possible to exert force and overcome the stereotactic walls. A number of software features have been put in place to prevent this occurrence and mitigate possible issues.


Beeps
If the user exerts force against the stereotactic boundaries in burring mode, the system will emit audible beeps, indicating a situation which, if continued, may result in inaccurate resection. If beeps are heard, do not continue to exert force on the robotic arm.

Burr shutoff
If the user exerts excessive pressure against the stereotactic boundaries, the burr motor will automatically disengage to prevent inaccurate resection.

Bone model color
As described previously, the green volume on the 3D bone model represents the resection region. The stereotactic boundary is a volume smaller than the green volume. It is possible for the user to exert a force on the robotic arm which places the burr tip outside of the green volume but still allows the burr to continue spinning. To account for this region, the software surrounds the green volume in a 1 mm thick cushion, visualized as a white surface. Under the white surface, at the limit of where the system allows the burr to safely travel, is a red surface. The red surface represents locations on the bone where the burr has gone more than 1 mm outside of the planned resection. Consequently, the user should avoid creating red surfaces on the bone model by responding to the visual and audible cues listed above. Though the surface appears red, the "Burr Shutoff" feature will prevent the user from creating an inaccurate resection. The presence of a red surface is an indication that the user attempted to push beyond the stereotactic boundaries but does not necessarily represent an unacceptable or excessively inaccurate resection.
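The green/white/red feedback described above amounts to classifying how far the burr tip has strayed beyond the planned resection volume. A minimal sketch, in which the function name and the zero-overshoot convention are our assumptions rather than Stryker's implementation:

```python
def boundary_zone(overshoot_mm):
    """Classify burr-tip position by overshoot beyond the planned
    resection volume (illustrative reading of the text's description)."""
    if overshoot_mm <= 0.0:
        return "green"  # within the planned resection volume
    if overshoot_mm <= 1.0:
        return "white"  # inside the 1 mm cushion surrounding the plan
    return "red"        # more than 1 mm beyond the planned resection
```

The 1 mm white cushion is the only numeric threshold stated in the text; everything else here is structural illustration.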

28.4.4.4 Mode: approach
Approach mode assists the user in guiding the burr into the desired location for burring. From "Free Mode," and after the checkpoint for the burring step has passed, the system will proceed to "Approach Mode." A yellow wireframe representing the stereotactic boundary will appear in the main window, overlaid on the 3D model of the bone (Fig. 28.13).

28.4.4.5 CT view
CT view allows the user to use the tip of the active burr or tracked probe to investigate real-time correlation of resection positions relative to the preoperative plan, the CT data, and the patient anatomy. The CT view can be accessed in any bone preparation screen by selecting the "CT View" toggle button on the side window. When active, the CT View screen will display a transverse, sagittal, coronal, and 3D view of the CT data and implant model with a live update of any active probe or burr. The CT view can remain active in "Free Mode," "Approach Mode," or "Burring Mode" (without allowing the burr to spin). To exit "CT View," toggle the "CT View" button. The various cursor displays and sizes are outlined in Fig. 28.14.

FIGURE 28.13 Approach mode assists the user in guiding the burr into the desired location for burring.

FIGURE 28.14 CT view allows the user to use the tip of the active burr or tracked probe to investigate real-time correlation of resection positions relative to the preoperative plan, the CT data, and the patient anatomy.

28.4.5 Kinematic analysis
During either component trial evaluation or final implant evaluation, the kinematic analysis page provides the surgeon the ability to quantify postresection limb alignment in extension, joint gaps, and kinematic range of motion (Fig. 28.15). This page also enables the user to compare postresection limb alignment and joint gaps to preresection values. This step is available, but not required, for completing the Mako procedure.

28.4.6 Case completion—archive and exit
Once the user has reached the last page of the workflow, the application will display a "Patient Time Out" to remind the healthcare professional to remove the mechanical checkpoints. Remove the femoral and tibial checkpoints using the Checkpoint Driver tool. Acknowledge the reminder to remove checkpoints before selecting the "Archive and Exit" button [32].

28.5 Discussion

The Mako System has recently been introduced in Australia and, as a result, can be compared with existing unicompartmental implants presently being utilized by the Australian orthopedic community. The 2017 Annual Report of the National Joint Replacement Registry by the Australian Orthopaedic Association identified the Restoris MCK (Stryker) as increasing in usage from its initial introduction in 2015 and becoming the third most popular UKA by 2016. Early data revealed a revision rate of 0.8% at 1 year (5 of 752), an improvement over all other unicompartmental components. Long-term follow-up will be necessary to validate the advantage of the Mako System in this cohort. The most common reasons for revision of all UKAs were loosening (39%), progression of disease (31.3%), and pain (8.9%). There was no difference in the rate of revision for lateral versus medial UKA [33].

FIGURE 28.15 The kinematic analysis page provides the surgeon the ability to quantify postresection limb alignment in extension, joint gaps, and kinematic range of motion.

Blyth et al. determined that robotic arm assisted surgery (Mako System) resulted in improved early pain scores and early function scores in some patient-reported outcome measures. A total of 139 patients were randomized into a robotic arm assisted surgery group (70 patients) and a manual surgery group (69 patients), and 90.5% of patients completed 12-month follow-up. Preoperative pain levels were not significantly different between the two groups. However, from the first postoperative day through week eight postoperatively, the median pain scores for the Mako System group were 55.4% lower than those observed in the manual surgery group. This difference had resolved by 3 months and 1 year postoperatively [34]. Given the coordinated effort by orthopedic surgeons to minimize narcotic usage and decrease narcotic abuse and addiction, the potential to lower postoperative pain in the early postoperative period would reduce narcotic usage and therefore decrease patient exposure to these medications. In a multicenter study of midterm survivorship of Mako System UKA, Kleeblad et al. found 97% survivorship at a minimum of 5 years postoperatively. A total of 384 patients (432 knees) were included at a mean follow-up of 5.7 years, and 91% of patients were either very satisfied or satisfied with their knee function. The primary outcome was conversion to a TKA; failures comprised 11 conversions to TKA and two UKA revisions, corresponding to a survival rate of 97% and an annual revision rate (ARR) of 0.52. Modes of failure requiring revision were aseptic loosening (54%), unexplained pain (31%), and progression of OA (8%). When comparing ARRs by age, younger patients (<59 years) had the highest revision rates. When comparing ARRs by BMI, the rates rose with increasing BMI, with the highest revision rate in the BMI group greater than 35 kg/m2. The conclusion identified early promising survival rates closely resembling TKA outcomes, yet longer term follow-up will be necessary to compare survivorship and satisfaction of Mako System UKA with conventional UKA and TKA [35]. In a prospective, randomized, single-blinded controlled trial, Bell et al. evaluated the accuracy of component positioning in UKA, comparing robotic-assisted (Mako) and conventional implantation techniques. All patients underwent CT scans 3 months postoperatively using a standard protocol. It was noted that robotic assistance resulted in significantly lower median component implantation errors in all three femoral and tibial component parameters (sagittal, coronal, and axial) [36]. This improved accuracy has been reported in prior case studies [37]. Recently, Blyth et al. identified improved functional outcomes as a result of accurate positioning and improved soft-tissue balancing using robotically assisted UKA. The study compared 139 patients undergoing medial UKA randomized to either manual traditional surgical cutting guides or robotic-assisted surgery [34].
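The ARR figure can be reproduced from the cohort numbers reported above, assuming the registry-style convention of revisions per 100 observed component-years with mean follow-up as the exposure estimate (both the convention and the function name are our assumptions):

```python
def annual_revision_rate(revisions, knees, mean_follow_up_years):
    """Revisions per 100 observed component-years, the registry-style
    convention assumed here for the annual revision rate (ARR)."""
    component_years = knees * mean_follow_up_years
    return 100.0 * revisions / component_years

# Kleeblad et al. cohort: 13 failures (11 TKA conversions + 2 UKA revisions),
# 432 knees followed for a mean of 5.7 years
arr = annual_revision_rate(11 + 2, 432, 5.7)
print(f"ARR = {arr:.2f} per 100 component-years")  # ≈ 0.53
```

This lands very close to the reported 0.52; the small difference presumably reflects exact per-knee follow-up times rather than the mean used in this sketch.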


The use of navigation and robotic assistance in orthopedic surgery continues to increase, and their applications are expanding (Figs. 28.16–28.19). Current applications include UKA, patellofemoral arthroplasty (PFA), TKA, total hip arthroplasty (THA), and spine surgery, but future development of navigation and robotic-assisted systems may include revision TKA and THA as well as other surgical procedures. There is little doubt that robotic technology is here to stay, and the orthopedic community is beginning to embrace it. Trends are now moving toward miniaturization, and once enough progress is made in this direction it will become a routine part of our armamentarium. However, long-term clinical outcomes of contemporary robotic systems for UKA and TKA are not available. The short- to midterm survivorship of robotic-assisted UKA using the Mako robotic arm has been comparable to historical rates for TKA [38,39]. Further long-term results are needed to validate the relationship between improved accuracy of component placement and survivorship.

FIGURE 28.16 Preoperative: AP and lateral views, respectively. AP, anteroposterior.

FIGURE 28.17 Three-year postoperative: AP and lateral views, respectively. AP, anteroposterior.


FIGURE 28.18 Preoperative: AP and lateral views, respectively. AP, anteroposterior.

FIGURE 28.19 Postoperative: AP and lateral views, respectively. AP, anteroposterior.

28. Mako Robotic Arm

References

[1] SM K. The origins and adaptations of UHMWPE for knee replacement. In: Biomaterials handbook. Marcel Dekker, Inc.; 1995.
[2] Marmor L. The modular knee. Clin Orthop Relat Res 1973;94:242.
[3] Marmor L. Surgical insertion of the modular knee. RN 1973;36(9):OR1-6.
[4] Ryd L, Lindstrand A, Stenstrom A, Selvik G. Cold flow reduced by metal backing. An in vivo roentgen stereophotogrammetric analysis of unicompartmental tibial components. Acta Orthop Scand 1990;61(1):21-5.
[5] Laskin RS. Unicompartmental tibiofemoral resurfacing arthroplasty. J Bone Joint Surg Am 1978;60(2):182-5.
[6] Insall J, Aglietti P. A five to seven-year follow-up of unicondylar arthroplasty. J Bone Joint Surg Am 1980;62(8):1329-37.
[7] Kolstad K, Wigren A. Marmor knee arthroplasty. Clinical results and complications during an observation period of at least 3 years. Acta Orthop Scand 1982;53(4):651-61.
[8] Murray DW, Goodfellow JW, O'Connor JJ. The Oxford medial unicompartmental arthroplasty: a ten-year survival study. J Bone Joint Surg Br 1998;80(6):983-9.
[9] Whiteside LA. Making your next unicompartmental knee arthroplasty last: three keys to success. J Arthroplasty 2005;20(4 Suppl. 2):2-3.
[10] Barrett WP, Scott RD. Revision of failed unicondylar unicompartmental knee arthroplasty. J Bone Joint Surg Am 1987;69(9):1328-35.
[11] Hernigou P, Deschamps G. Alignment influences wear in the knee after medial unicompartmental arthroplasty. Clin Orthop Relat Res 2004;423:161-5.


[12] Ridgeway SR, McAuley JP, Ammeen DJ, Engh GA. The effect of alignment of the knee on the outcome of unicompartmental knee replacement. J Bone Joint Surg Br 2002;84(3):351-5.
[13] Sawatari T, Tsumura H, Iesaka K, Furushiro Y, Torisu T. Three-dimensional finite element analysis of unicompartmental knee arthroplasty—the influence of tibial component inclination. J Orthop Res 2005;23(3):549-54.
[14] Dejour H, Bonnin M. Tibial translation after anterior cruciate ligament rupture. Two radiological tests compared. J Bone Joint Surg Br 1994;76(5):745-9.
[15] Shoemaker SC, Markolf KL, Finerman GA. In vitro stability of the implanted total condylar prosthesis. Effects of joint load and of sectioning the posterior cruciate ligament. J Bone Joint Surg Am 1982;64(8):1201-13.
[16] Borus T, Thornhill T. Unicompartmental knee arthroplasty. J Am Acad Orthop Surg 2008;16(1):9-18.
[17] Ollivier M, Abdel MP, Parratte S, Argenson JN. Lateral unicondylar knee arthroplasty (UKA): contemporary indications, surgical technique, and results. Int Orthop 2014;38(2):449-55.
[18] Xing Z, Katz J, Jiranek W. Unicompartmental knee arthroplasty: factors influencing the outcome. J Knee Surg 2012;25(5):369-73.
[19] Jiranek W, editor. Lateral compartment UKA. In: Specialty day, AAOS annual meeting, New Orleans, LA; 2018.
[20] Bert JM. Unicompartmental knee replacement. Orthop Clin North Am 2005;36(4):513-22.
[21] Kozinn SSR. Current concepts review: unicondylar knee arthroplasty. J Bone Joint Surg Am 1989;71(1):145-50.
[22] Berend KR, Berend ME, Dalury DF, Argenson JN, Dodd CA, Scott RD. Consensus statement on indications and contraindications for medial unicompartmental knee arthroplasty. J Surg Orthop Adv 2015;24(4):252-6.
[23] Bert JM. 10-year survivorship of metal-backed, unicompartmental arthroplasty. J Arthroplasty 1998;13(8):901-5.
[24] Berend KR, Lombardi Jr. AV, Morris MJ, Hurst JM, Kavolus JJ. Does preoperative patellofemoral joint state affect medial unicompartmental arthroplasty survival? Orthopedics 2011;34(9):e494-6.
[25] Davies BL, Rodriguez y Baena FM, Barrett AR, Gomes MP, Harris SJ, Jakopec M, et al. Robotic control in knee joint replacement surgery. Proc Inst Mech Eng, H: J Eng Med 2007;221(1):71-80.
[26] Lang JE, Mannava S, Floyd AJ, Goddard MS, Smith BP, Mofidi A, et al. Robotic systems in orthopaedic surgery. J Bone Joint Surg Br 2011;93(10):1296-9.
[27] Beasley R. Medical robots: current systems and research directions. J Robot 2012;2012:14.
[28] Plate JF, Mofidi A, Mannava S, Smith BP, Lang JE, Poehling GG, et al. Achieving accurate ligament balancing using robotic-assisted unicompartmental knee arthroplasty. Adv Orthop 2013;2013:837167.
[29] Cossey AJ, Spriggins AJ. The use of computer-assisted surgical navigation to prevent malalignment in unicompartmental knee arthroplasty. J Arthroplasty 2005;20(1):29-34.
[30] Lang JE, Mannava S, Jinnah RH. Specialty update: general orthopaedics: robotic systems in orthopaedic surgery. J Bone Joint Surg Am 2011;93B:1296-9.
[31] Moschetti WE, Konopka JF, Rubash HE, Genuario JW. Can robot-assisted unicompartmental knee arthroplasty be cost-effective? A Markov decision analysis. J Arthroplasty 2016;31(4):759-65.
[32] Stryker. MAKO PKA application user guide. PN210712 Rev 00; 2015.
[33] National Joint Registry. Hip, knee and shoulder arthroplasty annual report. Australian Orthopaedic Association; 2017.
[34] Blyth MJG, Anthony I, Rowe P, Banger MS, MacLean A, Jones B. Robotic arm-assisted versus conventional unicompartmental knee arthroplasty: exploratory secondary analysis of a randomised controlled trial. Bone Joint Res 2017;6(11):631-9.
[35] Kleeblad LJ, Borus TA, Coon TM, Dounchis J, Nguyen JT, Pearle AD. Midterm survivorship and patient satisfaction of robotic-arm-assisted medial unicompartmental knee arthroplasty: a multicenter study. J Arthroplasty 2018;33(6):1719-26.
[36] Bell SW, Anthony I, Jones B, MacLean A, Rowe P, Blyth M. Improved accuracy of component positioning with robotic-assisted unicompartmental knee arthroplasty: data from a prospective, randomized controlled study. J Bone Joint Surg Am 2016;98(8):627-35.
[37] Pearle AD, van der List JP, Lee L, Coon TM, Borus TA, Roche MW. Survivorship and patient satisfaction of robotic-assisted medial unicompartmental knee arthroplasty at a minimum two-year follow-up. Knee 2017;24(2):419-28.
[38] Conditt MA, Coon T, Roche M, Pearle A, Borus T, Buechel F, et al. Two year survivorship of robotically guided unicompartmental knee arthroplasty. J Bone Joint Surg Br 2013;95:294.
[39] Plate JF, Augart MA, Seyler TM, Bracey DN, Hoggard A, Akbar M, et al. Obesity has no effect on outcomes following unicompartmental knee arthroplasty. Knee Surg Sports Traumatol Arthrosc 2017;25(3):645-51.

Further reading

Lonner JH, John TK, Conditt MA. Robotic arm-assisted UKA improves tibial component alignment: a pilot study. Clin Orthop Relat Res 2010;468(1):141-6.
Roche M, O'Loughlin PF, Kendoff D, Musahl V, Pearle AD. Robotic arm-assisted unicompartmental knee arthroplasty: preoperative planning and surgical technique. Am J Orthop 2009;38(2 Suppl.):10-15.

29. Robotic and Image-Guided Knee Arthroscopy

Liao Wu1,2, Anjali Jaiprakash1,2, Ajay K. Pandey1,2, Davide Fontanarosa1, Yaqub Jonmohamadi1, Maria Antico1, Mario Strydom1, Andrew Razjigaev1,2, Fumio Sasazawa1, Jonathan Roberts1,2 and Ross Crawford1,2,3

1 Queensland University of Technology, Brisbane, QLD, Australia
2 Australian Centre for Robotic Vision, Brisbane, QLD, Australia
3 Prince Charles Hospital, Brisbane, QLD, Australia

ABSTRACT
Knee arthroscopy is a well-established minimally invasive procedure for the diagnosis and treatment of knee joint disorders and injuries, with more than 4 million cases annually, costing the global healthcare system over US$15 billion. The complexities associated with arthroscopic procedures dictate relatively long learning curves for surgeons, with the potential not only to cause unintended damage during surgery but also to cause postsurgical complications. Advances in robotics and imaging technologies can reduce these shortcomings, alleviating some of the health-access and workforce stressors on the health system. In this chapter, we discuss several key platform technologies that form a complete system to assist in knee arthroscopy. The system consists of four components: (1) steerable robotic tools, (2) autonomous leg manipulators, (3) novel stereo cameras for intraknee perception, and (4) three-dimensional/four-dimensional ultrasound imaging for tissue and tool tracking. As a platform technology, the system is applicable to other minimally invasive surgeries such as hip arthroscopy and intraabdominal surgery, and to any surgical site that can be accessed with a continuum robot and imaged with stereo vision, ultrasound, or a combination of both.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00029-3 © 2020 Elsevier Inc. All rights reserved.

29.1 Introduction

Knee arthroscopy is the most common orthopedic procedure in the world, with more than 4 million cases per year, and costs the global healthcare system over US$15 billion annually. It is a type of minimally invasive surgery (MIS). Forty years after its clinical introduction, knee arthroscopy is a well-established diagnostic and therapeutic procedure in which a camera (the arthroscope) and a surgical tool are introduced into the knee joint through small incisions in the skin. The arthroscope and instruments are placed in the "soft spot" on either side of the patella tendon just below the patella. Each instrument is pushed into the knee to gain access, and cartilage inside the knee can be damaged at this stage if care is not taken. Once inside the knee, the arthroscope generates real-time images that are displayed on a screen. The surgeon then inspects the entire knee, looking for any unsuspected abnormalities before addressing the pathology detected by preoperative imaging. This inspection phase can involve complex manipulation of the limb to allow the surgeon access to different areas of the knee [1]; in patients with large, muscular legs, this phase of the operation can be particularly difficult.

Despite several clinical advantages, the arthroscopic technique faces long-standing challenges: (1) physically demanding ergonomics during patient manipulation, (2) lack of depth perception, (3) limited field of view (FOV), and (4) counterintuitive hand-eye coordination between the scope and surgical instruments. These challenges make knee arthroscopy a complex procedure. Like any new skill, there is a learning process to reach full competency; a recent publication from Oxford demonstrated that 170 cases were necessary to reach baseline competency [2]. Even experienced surgeons feel that harm can be caused in many procedures.

In a recent paper on surgeons' attitudes to knee arthroscopy, it was noted that unintended damage to the knee is common [3]. In this study, half the surgeons said unintentional damage to articular cartilage (the tissue that covers the ends of the bones that make up the joint) occurred in at least one in 10 procedures. About a third (34.4%) felt that the damage rate was at least one in five procedures, and about 7.5% of the surgeons said such damage occurred in every procedure carried out [3].

The ergonomics of arthroscopy can make the procedure physically demanding. During a knee arthroscopy the surgeon needs to control the leg, the camera, and a surgical tool, all while watching a remote screen (Fig. 29.1). In the same survey, 59% of surgeons reported that they found the procedure physically challenging, and more than a fifth (22.6%) said they had experienced physical pain after performing a knee arthroscopy [3].

Arthroscopic tools, and the arthroscope itself, are usually straight and metallic, and the arthroscope has sharp edges. Moving a straight instrument inside a curved knee can make it difficult to navigate the environment without striking cartilage. Being able to use curved instruments with robotic assistance would be a significant advantage.

FIGURE 29.1 A typical scenario in a knee arthroscopy. Here the surgeon is reaching backward for a tool while looking sideways at a screen, holding the arthroscope, and controlling the leg with his body. This illustrates some of the complexity of the procedure that can be reduced with improved technology.

Robotic and Image-Guided Knee Arthroscopy Chapter | 29


FIGURE 29.2 Proposed autonomous robotic knee arthroscopy system consisting of steerable robotic tools, autonomous leg manipulators, miniature stereo cameras, and 3D/4D ultrasound imaging systems. 3D, three-dimensional; 4D, four-dimensional.

Arthroscopic cameras are angled at 30 or 70 degrees from the line of sight. Adjusting to a 30-degree angle is relatively straightforward, but many surgeons find it difficult to use the 70-degree camera. A robot-assisted system would not have this limitation and could in fact view at any angle.

Robotics started to focus on solving medical challenges in the 1940s but only began to grow out of industrial systems in the early 1980s [4]. Today, medical robots are complex systems customized for specific procedures, using multiple sensors to measure, track, align, and understand patient and environmental parameters. Their role is to improve the safety, success, and consistency of unusually involved surgeries [5]. Over the past few decades, robots have grown in precision and complexity for a wide variety of medical applications, including orthopedic surgery [6]. Surgical support through robotics is developing fast [7] because of the increased demand for noninvasive surgery that stretches surgeon capabilities to the limit. It is evident that research into new techniques and technologies benefits the medical community as a whole [8]. Robot-assisted surgery has significant advantages for both novice and skilled surgeons in delivering a more precise operation; improvements include, but are not limited to, reduced unintended trauma and shorter recovery times for patients [9].

In this chapter, we discuss four novel technologies that could assist surgeons in performing arthroscopy and improve outcomes: steerable robotic tools, autonomous leg manipulators, miniature stereo cameras, and three-/four-dimensional (3D/4D) ultrasound (US) imaging systems. We describe how each technology may assist surgeons in performing arthroscopy with enlarged accessibility, improved ergonomics, enhanced guidance, and better precision in the near future. We also discuss the possibility of a fully autonomous robotic and image-guided system in the longer term (Fig. 29.2) that could be supervised by surgeons with their clinical expertise, while minimizing the effects of human limitations such as hand tremor, fatigue, and limited precision. Though the chapter focuses on knee arthroscopy, we consider many of these concepts platform technologies that could eventually assist in other forms of arthroscopy, in laparoscopy (keyhole abdominal surgery), and ultimately in many procedures where access can be achieved by keyhole techniques.

29.2 Steerable robotic tools for arthroscopy

29.2.1 Why steerable robotic tools are necessary for arthroscopy

One of the main reasons behind the unintended cartilage damage that occurs commonly in arthroscopy is the rigidity of the tools, including arthroscopes and other instruments, such as graspers and punches, used in the current procedure. While the rigidity of the tools provides good force transmission that is beneficial to some operations, it also means less accessibility and less dexterity given the confined space of the joint and the keyhole surgery setting. In the current approach, surgeons usually make two to three portals for the tools to cover the surgical areas, and change the portals for each tool frequently during the procedure; as a consequence, the chance of damage to the cartilage is significantly increased. In addition, arthroscopes are often beveled at the tip to enlarge the range of view, which leads to a high probability of chopping the cartilage. Moreover, the lack of dexterity makes manipulating the tools through small portals under limited vision extremely challenging, leading to a long learning curve and a high occurrence of unintended damage to patients [3].

Steerable robotic tools are excellent candidates to replace the current rigid designs. By making the tools steerable, their accessibility and dexterity can be tremendously improved. For example, by adding a rotational degree of freedom (DoF) to the tip of the tool, the surgeon is able to change the approach direction of the tool to the surgical target without moving the main shaft. As a result, the area that can be accessed by the tool through a single port is enlarged, and the need to change ports is decreased. The addition of DoFs also makes it possible to bypass some obstacles, and thus holds the potential to extend the application of this keyhole procedure to broader indications.

Robotics, or more broadly mechatronics, is another technology that is revolutionizing the instruments used in surgery. Compared to purely mechanical, manually manipulated instruments, robotic instruments have better maneuverability, as more DoFs can be actuated simultaneously than could be directly handled by human hands. They are also more intelligent, since more sensors can be integrated and processed during the operation to monitor the process and assist in decision-making.
As a consequence, robotic tools are increasingly adopted in surgery. This also applies to arthroscopy, where the tools need greater maneuverability and better sensing to increase accessibility, safety, and ease of use. In this section, we introduce some prototypes of steerable robotic tools for arthroscopy. Some of the prototypes were designed for knees and some for other joints such as hips, but they share many characteristics, so we discuss their mechanical design, user interface, sensing, and evaluation together in the following sections.

29.2.2 Mechanical design

Mechanical design is the key to endowing arthroscopic tools with steerability. There are generally three challenges in the mechanical design of steerable arthroscopic tools:

1. Size. Restricted by the keyhole surgery setting and the confined space inside the knee joint, arthroscopic tools must be very small. This is especially challenging when multiple DoFs are to be added to increase dexterity. Current arthroscopic tools usually have a shaft diameter of less than 5 mm, and steerable tools should be designed with comparable sizes.
2. Dexterity. To make the tools steerable and dexterous, complex structures and mechanisms need to be integrated into the mechanical design. There is generally a tradeoff between the dexterity of the tool and the compactness with which it can be made.
3. Force transmission ability. In some arthroscopic operations, the tools exert forces on the hard or soft tissues inside the knee, so a good force transmission ability is necessary. This is, however, very challenging when the tools are made steerable, and a compromise design is needed where both force transmission and steerability are desired.

To address these challenges, researchers have proposed different mechanisms; some of those proposed for steerable robotic arthroscopic tools are depicted in Fig. 29.3. Traditional serial-link robots have three structural components: links, joints, and motors. Links are the main body of a robot and are connected by joints; joints are actuatable mechanisms that can move the links they connect; motors are actuators that drive the motion of the joints. Steerable tools for arthroscopy usually do not have the same structures, but we can map their components to those of serial-link robots by mimicking their functions.
In this way, the mechanical structures of the prototypes shown in Fig. 29.3 can be summarized as in Table 29.1, which also summarizes their characterization. As discussed previously, size, dexterity, and force transmission ability are three important factors for steerable arthroscopic tools; in Table 29.1, these factors are embodied by the diameter of the tool, the number of DoFs, and the applied force, respectively. Generally, most of the prototypes can be made smaller than 6 mm in diameter. The prototype in Ref. [10] was slightly larger than the others, with a diameter of 8 mm; however, according to the authors, it could be reduced to 4 mm, which is plausible considering the simplicity of the design.


FIGURE 29.3 Mechanisms for constructing steerable arthroscopic tools: (A) SMA-based [10]; (B) hinged-joint-based [11]; (C) lobed-feature-based [12]; (D) notched-tube-based [13]; (E) tube-and-slider-based [14]; and (F) spine-and-hinge-based [15]. SMA, shape-memory alloy.

TABLE 29.1 Mechanical structure (links, joints, motors) and characterization (diameter, DoF, applied force) of the prototypes shown in Fig. 29.3.

Prototype | Links | Joints | Motors | Diameter (mm) | DoF | Applied force
A [10] | Plastic disks | Special arrangement of SMA wires | SMA wires | 8 | 1 | 1 N
B [11] | Disks and spines | Hinges | Cables | 4.2 | 1 | At least 1 N
C [12] | Disks and spines | Lobed features | Tendons | 4.2 | 1 | At least 3 N
D [13] | Two nested tubes | Asymmetric notches on the tubes | Cables | 5.99 | 1 | At least 1 N
E [14] | A distal link and two proximal tubes | A hinge composed of two sliders | Rotation of outer tube | 5 | 1 | Axial 100 N; lateral 20 N
F [15] | Disks and a central spine | Space between disks and deformation of spine | Cables | 3.6 | 3 | Unknown

In terms of dexterity, most designs endow the device with only one bending DoF. Since the device is handheld, it naturally has four additional DoFs (three rotations and one translation) provided by the motion of the hand; in view of this, one DoF at the tip is sufficient for some operations inside the joint. The prototype in Ref. [15] added another bending DoF to the proximal part of the tool and a pivoting DoF to the distal tip. The additional DoFs enable the device to be bent in a different plane without changing the orientation of the handle, and the approach direction of the tip to be adjusted without affecting the bending segment.

Most cable-driven mechanisms, which form the main part of the steerable mechanisms, apply forces of less than 3 N. This is sufficient for examination purposes, when the mechanism is used to deliver a camera to the surgical site for inspection; however, for other operations such as cutting, a greater force transmission capability is desired. Unlike the other prototypes, the design in Ref. [14] does not rely on the pulling of cables but rather uses a special mechanism to transform the rotation of a tube into the translation of a hinge at the tip. As a consequence, the force it can exert increases significantly. The tradeoff, however, is a lack of flexibility that may cause damage to the cartilage.

29.2.3 Human–robot interaction

For handheld devices, the design of the interface between the surgeon and the device is critical for final adoption. Due to the added DoFs, surgeons must provide more inputs when manipulating steerable devices than conventional ones. A good interface should let surgeons maneuver the device intuitively and naturally, with little attention devoted to controlling the device itself. Three types of interfaces have been proposed for steerable arthroscopic tools: purely mechanical, mechatronic, and a combination of both. In Ref. [14], the steerable arthroscopic cutter is controlled by a purely mechanical interface composed of a wheel for steering the tip and a handle for actuating the cutter. The advantages of purely mechanical interfaces include simplicity of implementation and ease of sterilization; the disadvantages include difficulty in controlling multiple DoFs simultaneously and in integrating additional sensing. More prototypes therefore adopt mechatronic interfaces, as in Refs. [10,13,15]. Usually, joysticks and buttons relay control signals from the surgeon to onboard processors, which actuate the associated motors. Some designs [11,12] provide both options, switchable by the surgeon, so that the best interface can be selected for the task at hand; the redundancy is also beneficial for coping with an unexpected failure of one of the interfaces.
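As a concrete illustration of such a mechatronic interface, the sketch below maps a normalized joystick deflection to a cable-velocity command with a deadband and saturation, a common pattern for rate-control inputs. This is a minimal sketch; the function name and the numeric defaults (deadband, maximum cable speed) are illustrative assumptions, not values taken from any of the cited prototypes.

```python
import math

def joystick_to_cable_rate(x, deadband=0.08, vmax=0.004):
    """Map a normalized joystick deflection x in [-1, 1] to a cable velocity command.

    deadband : ignore small deflections to reject stick drift (dimensionless)
    vmax     : cable speed commanded at full deflection [m/s]
    """
    if abs(x) < deadband:
        return 0.0
    # Rescale the remaining range so the command ramps smoothly from 0 to vmax
    s = (abs(x) - deadband) / (1.0 - deadband)
    return math.copysign(s * vmax, x)
```

In a control loop, the returned rate would be integrated by the motor controller into a cable displacement, so holding the stick steady bends the tip at a constant speed.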

29.2.4 Modeling

Modeling of steerable arthroscopic tools usually includes kinematics modeling and mechanics modeling. The former develops the mapping between the position of the actuators and the position of the tip (position kinematics), or between the motion of the actuators and the motion of the tip (velocity kinematics). The latter investigates how the force applied at the tip relates to the force generated by the actuators.

Kinematics modeling is useful for mechatronic control. If the position kinematics is derived, the surgeon can directly control the absolute position of the tip; if the velocity kinematics is obtained, the surgeon can control incremental motions of the tip from any state. Examples of kinematics modeling can be found in Refs. [10,13,15]. Generally, the position kinematics can be derived from the geometry of the mechanism, and the velocity kinematics can be obtained by differentiating the position kinematics. However, due to the involvement of elastic elements, the accuracy of such models is usually inferior to that of rigid robotic counterparts.

Mechanics modeling is helpful for safety monitoring. As arthroscopic tools usually make contact with the tissues inside the joint when performing operations, it is beneficial to know how much force is being exerted by the tip. This force is generally difficult to measure directly; an indirect approach is to measure the force experienced by the actuators (such as tension in the cables) and then estimate the force at the tip through the mechanics model. Some preliminary work has been done in Refs. [10,14], but how to efficiently and accurately monitor the force output remains an open research question.
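To make the position-kinematics idea concrete, the sketch below implements the widely used constant-curvature model for a single cable-driven bending segment: the cable displacement sets the bending angle, and the tip position follows from the resulting circular arc. This is a generic textbook model, not the kinematics of any specific prototype cited above; the segment length and cable offset defaults are illustrative assumptions.

```python
import math

def tip_position(delta_l, seg_len=0.02, cable_offset=0.0015):
    """Constant-curvature position kinematics of one cable-driven bending segment.

    delta_l      : cable displacement at the actuator [m]
    seg_len      : arc length of the bending segment [m]
    cable_offset : distance of the cable from the neutral axis [m]
    Returns (x, z): tip position in the bending plane [m].
    """
    theta = delta_l / cable_offset          # total bending angle [rad]
    if abs(theta) < 1e-9:                   # straight configuration
        return 0.0, seg_len
    kappa = theta / seg_len                 # curvature [1/m]
    x = (1.0 - math.cos(theta)) / kappa     # lateral deflection of the tip
    z = math.sin(theta) / kappa             # axial reach of the tip
    return x, z
```

Differentiating this mapping with respect to `delta_l` yields the velocity kinematics referred to in the text; in practice, friction and the elasticity of the cable degrade the accuracy of both.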

29.2.5 Sensing

One of the advantages of robotic arthroscopic tools over traditional ones is the capability of integrating multimodality sensors. Six types of sensors can be used to guide the operation of robotic arthroscopic tools. 1. Encoder. The motion of the actuators, such as the cables, can be recorded by the encoders that are installed with the motors for the actuation. Since many motors have in-built encoders, these are the most convenient sensors to be integrated. Knowing the motion of the actuators is a prerequisite for kinematics modeling and control.


2. Strain gauge. Strain gauges are sensors that measure the force applied to them based on the deformation it causes. By coupling them with the actuators, such as the cables, it is possible to sense the forces the actuators exert; with the mechanics model, the force output at the tip of the device can then be estimated. These sensors can be used for safety monitoring to prevent overload. 3. Electromagnetic (EM) tracking system. An EM tracking system consists of two components: a field generator that generates a modulated EM field, and a coil whose current reacts to the EM field it is placed in. By attaching EM coil sensors to the tip of the device and placing them within the field of the generator, the position and orientation of the sensors can be accurately measured with respect to the coordinate frame of the generator. The EM tracking system provides an effective method to sense the spatial position, as well as the shape, of the device when it is inserted into the human body [16]. The disadvantage, however, is its incompatibility with ferromagnetic materials. 4. Optical tracking system. An optical tracking system also comprises two parts: a camera system and a set of markers. The markers can be rigidly bound to the devices and tracked by the camera system in real time. Due to the size of the markers and the occlusion problem, these markers can usually only be used outside the human body, such as on the handle of the device. Although the optical tracking system cannot directly measure the tip of a steerable device, it is useful for providing the position and orientation of the base of the device, which can be used to assist in mechatronic control, estimation of the tip position, and so on. 5. Inertial measurement unit (IMU).
While the optical tracking system can measure the absolute position and orientation of the device, the IMU sensor is capable of giving the relative rotation and translation of the device with respect to an initial state by means of integration. When the initial state is calibrated, the absolute information can also be recovered. Compared with the optical tracking system, the IMU sensor is much cheaper and is not hampered by the occlusion problem. However, the accuracy of the IMU is not competitive and it often suffers from the drift problem. 6. Camera. For arthroscopic operations, the arthroscope can not only provide direct visual feedback to the surgeons, but also be used as a sensor for the control of the arthroscope and other tools. Visual servoing techniques can be employed to automatically move the arthroscope or the tools to the desired position, or assist in the surgeon’s operation, as demonstrated in Ref. [17]. Advanced developments in the fabrication of cameras combined with computer vision techniques are enabling more and more sophisticated sensing within the joints, as is discussed later.
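The complementary strengths of the IMU (high rate, but drifting) and the optical tracker (absolute, but occlusion-prone) suggest fusing them. The sketch below is a minimal complementary filter for a single orientation angle, illustrating how the absolute measurement bounds the gyro's drift; the function name, sample rate, and blending gain are illustrative assumptions, not part of any cited system.

```python
def complementary_filter(gyro_rates, optical_angles, dt=0.01, alpha=0.98):
    """Fuse a drift-prone gyro rate with an absolute (but occlusion-prone) optical angle.

    gyro_rates     : angular-rate samples from the IMU [rad/s]
    optical_angles : absolute angle samples from the optical tracker [rad];
                     None when the markers are occluded
    dt             : sample period [s]
    alpha          : blending gain; closer to 1 trusts the gyro more
    Returns the fused angle estimate after processing all samples [rad].
    """
    est = optical_angles[0] if optical_angles[0] is not None else 0.0
    for rate, opt in zip(gyro_rates, optical_angles):
        est += rate * dt                    # integrate the gyro (drifts over time)
        if opt is not None:                 # pull toward the absolute measurement
            est = alpha * est + (1.0 - alpha) * opt
    return est
```

With a biased gyro and a stationary target, pure integration drifts without bound, whereas the filtered estimate settles at a small bounded offset whenever optical measurements are available.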

29.2.6 Evaluation

Evaluation of robotic steerable arthroscopes can be performed in two ways. The first, quantitative, way is to evaluate the individual performance of the device through separate experiments; examples include tests of the kinematics model [10,11,13,15] and validation of the force output [11-14]. The second, qualitative, way is to validate the designs via phantom, cadaver, or animal experiments. These experiments provide closer-to-reality simulations for the devices and can expose problems that remain latent in laboratory experiments. In Ref. [12], a cadaver study was carried out to validate the developed robotic steerable arthroscope, and it was found that "the quality of the onboard camera and light source were significantly inferior to the rigid endoscope and standard arthroscopes." Feedback from surgeons can also be gathered in these experiments: in Ref. [14], questionnaires were given to the surgeons who participated in the cadaver experiments, and it was confirmed that the steerability increased the reachability of the tools compared with their rigid counterparts in real surgery.

FIGURE 29.4 Cadaver experiment to evaluate the steerable arthroscope compared with the standard arthroscope.

Fig. 29.4 shows a cadaver experiment conducted by the Medical and Healthcare Robotics group of the Australian Centre for Robotic Vision, Queensland University of Technology. In this experiment, a steerable arthroscope integrated with a tiny camera and a lead was inserted into a cadaver knee through a standard trocar. At the same time, a conventional straight rigid arthroscope was inserted into the knee from another portal. It can be seen that the quality of the image from the camera on the steerable arthroscope is comparable to that from the conventional arthroscope. Without sacrificing imaging quality, the robotic steerable arthroscope holds the potential to make arthroscopy easier and safer by incorporating multimodality sensing and intelligent control of the additional DoFs.

29.3 Leg manipulators for knee arthroscopy

29.3.1 Leg manipulation systems

The process of knee arthroscopy can be decomposed into three main stages: (1) inserting the arthroscope, (2) navigating to the affected location, and (3) removing damaged cartilage. To enable navigation of the arthroscope, the patient's leg is manipulated by the surgeon (Fig. 29.5) to create a gap (the instrument gap) [18] at a specific point in the knee joint. Automating leg movement has significant advantages for both novice and skilled surgeons in reducing their workload. The knee joint's complexity and anatomic structure cause arthroscopic surgery to depend largely on the surgeon's skill level. The joint has six DoFs, allowing it to move in various directions; Zavatsky used the "Oxford rig" to prove that the ankle and hip systems combine with knee movement to enable this six-DoF motion [19]. To provide access for surgical instruments during an arthroscopy, surgeons physically manipulate the patient's knee by bending, lifting, twisting, and rotating the leg. Moustris et al. [20] suggest learning from expert surgeon actions, where the surgeon's insight into how to manipulate the leg during surgery informs future robotic arthroscopic procedures. Tarwala and Dorr [21] note that current state-of-the-art robotic orthopedic technologies, such as the MAKO RIO surgical arm, are limited to robotic partial knee replacement, with the potential to be used for total hip and knee replacement and for tunnel placement during anterior cruciate ligament (ACL) reconstruction surgery. However, these systems still rely on the surgeon to manually manipulate the patient's leg during the procedure. To support a level of automation for knee surgery, a range of manual devices are found in operating theaters today, such as the DeMayo, Stryker, and SPIDER2 [7] leg manipulators, with which the surgeon remains in control by manually moving the limb.

The main limitation of these devices, however, is that each position change requires the surgeon to stop the surgery and move the leg manually, as automation is not incorporated. Manipulators such as the SPIDER2 are also used for surgery on other limbs (such as the shoulder), while the Stryker leg manipulator is mainly used for partial knee replacement. For knee arthroscopy, most surgeons today opt to maneuver the patient's leg themselves in the traditional manner because of the low benefit and disruptive nature of these systems.

FIGURE 29.5 Leg manipulation during an arthroscopy.

Robotic and Image-Guided Knee Arthroscopy Chapter | 29

29.3.2 Knee gap detection for leg manipulation

29.4 Miniature stereo cameras for medical robotics and intraknee perception

Visual information plays a significant role in our everyday life: an estimated 80% of the information we receive comes from visual inputs. Vision undoubtedly improves our ability to make decisions, and the vision system plays a correspondingly important role in MIS. Current arthroscopes use a rigid rod-lens geometry to visualize the surgical area of interest, and surgeons perform surgery from the two-dimensional (2D) images these arthroscopes provide. However, the current technology lacks depth perception, and for autonomous robotic arthroscopy access to depth information is an essential element. Autonomous robotic navigation relies heavily on stereo vision, and considerable research effort is underway in designing smart robots for security and surveillance operations in unstructured environments. The technological challenges of visualizing the interior of the human body present new opportunities for designing advanced vision systems, and 3D reconstruction of complex joints such as the knee cavity will benefit arthroscopic surgeries. As a first step toward the realization of better medical imaging systems for medical diagnosis and therapy, miniature stereo vision cameras have been developed to bring depth perception to robotic surgical procedures.

29.4.1 Complementary metal-oxide semiconductor sensors for knee arthroscopy

Human vision is the most sophisticated and powerful vision solution to observe the environment and extract location information. Akin to the human visual system, in robotics stereo vision forms a reliable depth perception technique for


Although progress has been made in autonomous surgical maneuvers, optical coherence tomography guidance, and motion compensation, which are revolutionizing robotic laparoscopic surgery [22], research has overlooked these technological advantages in the context of knee arthroscopy [23]. There are currently no robotic or automated technologies used for meniscus or cartilage surgery in knee arthroscopy. The reason for this omission is the confined space within the knee joint and the intricate maneuvers of the leg that are required to access the joint and perform the surgery [24]. Other causes include the low level of standardization of routines, limitations of surgical tools, inadequate procedures, and postsurgery issues such as iatrogenic vascular lesions. This highlights the significant complexity of developing robotic solutions for arthroscopic knee surgery. However, it also presents an opportunity for the integration and adoption of existing and new technologies to deliver the vast benefits of robotic surgery to knee arthroscopy patients, as detailed by Bartoli et al. for laparoscopic surgery [9]. To perform precision robotic surgery, it is necessary as a first step to automate today's leg manipulation systems so that the patient's leg and knee can be moved safely and in a controlled manner. As an initial step toward the development of robotic arthroscopy, it is essential to detect the instrument gap [18] for feedback to a robotic system. Strydom et al. reviewed, tested, and analyzed segmentation algorithms for their suitability to detect knee joint features such as the instrument gap. Arthroscopy videos recorded during cadaver experiments were used to create 10 × 100 image sets, as detailed in Fig. 29.6, against which the segmentation algorithms were tested. Three segmentation algorithms were implemented and examined for their suitability to segment the instrument gap.
It was found that the Chan–Vese level-set active contour algorithm is easy to initialize, has a high average accuracy, and is robust across all image sets. Given its ability to exploit shape information, the level-set active contour could be a strong option for segmenting the instrument gap if its performance can be optimized. The watershed algorithm performed only sporadically well across the image sets and needed to be tuned for each set; it is not well suited to segmenting the instrument gap. The Otsu adaptive thresholding algorithm performed quickly and accurately across the image range, as seen in Fig. 29.7 for two of the data sets [18]. Fig. 29.7D and H are difference images, color coded to highlight the true positives (green), true negatives (gray), false positives (red), and false negatives (blue) from the segmentation results. Overall, the Otsu algorithm outperformed the watershed and level-set algorithms in segmenting the instrument gap. Future development of machine-learning algorithms may improve on these methods, especially for complex areas of the knee. However, for analysis of selected areas of the knee joint with fast processing, the Otsu algorithm is effective and accurate. Analyzing the knee gap is one step toward robotic leg manipulation to support automated surgery. Future research will need to include a range of computer vision, kinematic, and sensor solutions to enable decisions on how, when, and in which direction to move the leg to support the surgeon at a specific point in time.
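Of the three algorithms compared above, Otsu's criterion is compact enough to sketch directly: it picks the threshold that maximizes the between-class variance of the gray-level histogram. The following NumPy sketch illustrates the principle only; it is not the implementation evaluated by Strydom et al., and the synthetic frame, its intensity values, and the function names are illustrative assumptions.

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the gray-level histogram."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2.0

    weighted = hist * centers
    w0 = np.cumsum(hist)                      # pixel count at or below each bin
    w1 = np.cumsum(hist[::-1])[::-1]          # pixel count at or above each bin
    mu0 = np.cumsum(weighted) / np.maximum(w0, 1e-12)
    mu1 = np.cumsum(weighted[::-1])[::-1] / np.maximum(w1, 1e-12)

    # Candidate split between bin i (background) and bin i+1 (foreground)
    sigma_b = w0[:-1] * w1[1:] * (mu0[:-1] - mu1[1:]) ** 2
    return edges[np.argmax(sigma_b) + 1]

# Synthetic arthroscopic frame: a dark "instrument gap" region against
# brighter tissue (the intensity values are arbitrary illustrative choices)
rng = np.random.default_rng(0)
frame = rng.normal(200.0, 10.0, (64, 64))
frame[20:40, 20:40] = rng.normal(30.0, 10.0, (20, 20))

t = otsu_threshold(frame)
gap_mask = frame < t  # True where the dark gap is
```

The resulting binary mask plays the role of the segmentation output that Fig. 29.7 compares against hand-marked ground truth.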

FIGURE 29.6 Segmentation data sets.

FIGURE 29.7 Otsu algorithm. Top row: L3 (A) Frame 2 of L3 arthroscope video, (B) L3 marked up image, (C) Otsu L3 mask, (D) L3 SAD output. Bottom row: L5 (E) Frame 2 of L5 arthroscope video, (F) L5 marked up image, (G) Otsu L5 mask, and (H) L5 SAD output.


29.4.2 Emerging sensor technology for medical robotics

Current image sensors have many attractive features, but their mechanical rigidity limits their application in devices that require flexible packaging (e.g., within a soft tubing). CMOS-based image sensors are traditionally planar and lack mechanical flexibility due to the strong nature of covalent bonding. The most advanced nonplanar inorganic electronic systems were recently demonstrated by Ko et al., who realized silicon-based hemispherical image sensors with a wide FOV (akin to the human eye) by employing elastomeric interconnects [27]. The development of mechanically flexible image sensors is particularly important for the aberration-free conformal imaging with a wide FOV required in medical and security applications. Mechanical flexibility is therefore an important attribute in choosing the light-sensing material for the next generation of image sensors [28]. The weak van der Waals interactions among neighboring molecules in organic semiconductors enable intrinsic flexibility at the molecular scale, making this imaging technology particularly suitable for medical and soft robotics. Organic semiconductors offer cheaper processing methods; the fabrication of devices that are light, flexible, and manufacturable in large (or small) sizes; and the tuning of photophysical and optoelectronic properties. Lighting plays an important role in the color constancy and the quality of 3D reconstruction that an imaging system can produce. A theoretical approach to circumvent this vision impairment of CMOS image sensors for machine vision was presented by Finlayson, in which a combination of narrow spectral responses and a logarithmic pixel was proposed to create an image that is invariant to changes in lighting conditions [29]. Along these lines, the practical feasibility of employing a set of organic absorbers for producing color with high purity using this approach was


successful navigation of robots in unknown and unstructured environments [25]. The stereo vision technique uses two cameras observing a scene from different locations, which produce different image locations for the same objects; the disparity and the baseline of the system are used for distance estimation and 3D reconstruction of the scene. It is not surprising that modern imaging technologies now form an integral part of minimally invasive surgical procedures. However, the development of miniature cameras that can reach and see inside the tight spaces of complex body joints, such as the knee, is highly desired for surgical planning, image-guided surgery, surgery simulation, and disease monitoring. Arthroscopic procedures such as meniscal repair and ACL reconstruction require extreme care, and a 3D reconstruction using stereo vision can serve a robotic system as its primary sensor for collision avoidance and mapping of the knee cavity. There are four key steps by which an image sensor captures an image: first, photons emitted by a light source (in natural and artificially illuminated scenes) are absorbed by the photoactive material constituting the pixels, resulting in the generation of electron–hole pairs; second, the electrons and holes are driven by an external electric field toward opposite electrodes, where they are extracted, resulting in signal charge that is collected and accumulated at each pixel; third, the accumulated charge from each pixel in the 2D array is read out, and the process by which this occurs gives rise to the different image sensors on the current market, which include charge-coupled devices, charge-injection devices, complementary metal-oxide semiconductor (CMOS) image sensors, and image pickup tubes; finally, the detected charges are converted into digital data, which are processed to produce the final color image [26]. The current market for CMOS image sensors is nearly $14 billion.
Market trends predict an ever-increasing demand for CMOS-based image sensors in the next few years, with new applications in medical, scientific, automotive, and industrial contexts accounting for a significantly larger share of the market (up to 25% by 2022). This expansion is driven by the need for increasingly smaller and cheaper electronic devices with more functionality. Emerging alternatives to CMOS technology based on organic semiconductors and quantum dots are expected to broaden the digital imaging market further. Significant progress has already been made, to the point that organic photodetectors (OPDs) complement CMOS technology in changing the landscape of the image sensor market. Organic-semiconductor-based imaging systems are particularly suitable for flexible designs in a soft packaging form, and it is not difficult to envisage them making their way into medical and robotic applications. Bringing real-time situational awareness to medical robotics will require multimodal sensing, and for robot-assisted MIS the current imaging system has to change. There is a growing demand for a next generation of arthroscopes that can provide volumetric representation in real time. Different components and characteristics of the image sensor have to be modified to facilitate their use in medical robotics. The areas that require modification include (1) image processing software to augment medical imaging with multimodal image fusion, (2) an image sensor architecture that enables 3D imaging, (3) fast frame rates with an active pixel arrangement, (4) the integration of "intelligence" at the chip level, and (5) multispectral imaging pixels able to see through blood and occlusions. Importantly, the type of photodetector material and the color separation system associated with medical imaging play an important role in defining the quality of the final image.
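The disparity-and-baseline principle described above reduces to block matching along epipolar lines followed by pinhole triangulation (Z = fB/d). The sketch below is a toy illustration, not the processing pipeline of any particular arthroscope; the focal length and baseline values, the synthetic image pair, and the function names are all hypothetical.

```python
import numpy as np

def match_along_row(left, right, row, col, win=7, max_disp=32):
    """Disparity of left-image pixel (row, col) by a sum-of-absolute-
    differences (SAD) search along the same row of the right image."""
    r = win // 2
    patch = left[row - r:row + r + 1, col - r:col + r + 1]
    costs = []
    for d in range(max_disp + 1):
        c = col - d  # in the right image the match shifts left by d pixels
        if c - r < 0:
            break
        cand = right[row - r:row + r + 1, c - r:c + r + 1]
        costs.append(np.abs(patch - cand).sum())
    return int(np.argmin(costs))

def depth_from_disparity(disp_px, focal_px, baseline_mm):
    """Pinhole triangulation: Z = f * B / d."""
    return focal_px * baseline_mm / disp_px

# Synthetic rectified pair: the right view equals the left view shifted
# 6 pixels to the left, i.e., a uniform true disparity of 6
rng = np.random.default_rng(1)
left = rng.random((60, 80))
right = np.roll(left, -6, axis=1)

d = match_along_row(left, right, row=30, col=40)
z = depth_from_disparity(d, focal_px=500.0, baseline_mm=4.0)  # hypothetical f, B
```

Repeating the search for every pixel yields the dense disparity maps discussed later for the stereo arthroscope prototypes.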


Handbook of Robotic and Image-Guided Surgery

FIGURE 29.8 Prototypes of the stereo cameras and their feasibility as an imaging system for knee arthroscopy. The NanEye stereo module is shown in (A). Its miniature dimensions are ideal for arthroscopy; however, it suffers from low image resolution and unreliable performance. The second prototype (B) is based on pairing two µC103A cameras.

reported [30,31]. Assuming Gaussian profiles for the sensing material and a full width at half maximum (FWHM) of <100 nm, it was reported that a combination of four sensors is capable of producing high-purity color information in an image. The work further identified organic chromophores that closely match the ideal Gaussian profiles required for producing high color quality in image sensors: four narrow absorbers separately absorbing the blue, green, yellow-orange, and red colors of a scene were reported. Further development of this concept has led to demonstrations of narrow-spectrum OPDs suitable for producing high color purity in the blue and green regions [32]. Motivated by this work, researchers at Samsung have developed a narrow-spectrum sensor for sensing the green color [33]. Further developments in the material design and processing of spectrally selective organic semiconductor pixels would be beneficial for producing miniature cameras with the desired color purity.
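The Gaussian-absorber assumption quoted above is straightforward to reproduce numerically. The sketch below generates four narrow Gaussian spectral responses whose FWHM is below 100 nm; the chosen width and the center wavelengths for the blue, green, yellow-orange, and red absorbers are illustrative assumptions, not values taken from Refs. [30,31].

```python
import numpy as np

FWHM_NM = 80.0  # assumed width, below the <100 nm figure quoted in the text
SIGMA = FWHM_NM / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> standard deviation

# Illustrative center wavelengths (nm) for the four narrow absorbers
CENTERS = {"blue": 460.0, "green": 530.0, "yellow-orange": 590.0, "red": 650.0}

wavelengths = np.linspace(380.0, 780.0, 2001)  # visible range, 0.2 nm steps

def gaussian_response(center_nm):
    """Normalized Gaussian spectral sensitivity of one absorber."""
    return np.exp(-0.5 * ((wavelengths - center_nm) / SIGMA) ** 2)

responses = {name: gaussian_response(c) for name, c in CENTERS.items()}
```

Measuring the width of any of these curves at half of their unit peak recovers the assumed 80 nm FWHM.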

29.4.3 Miniature stereo cameras for knee arthroscopy

In this section, we describe the feasibility of miniature stereo vision cameras for robotic knee arthroscopy. Three-dimensional reconstruction using stereo image pairs requires the identification of image pixels that correspond to the same point in the physical scene observed by the two cameras. Two arthroscope prototypes have been developed at Queensland University of Technology toward this aim: (1) a NanEye stereo camera (each sensor with 250 × 250 pixels) mounted on a 3D-printed head and (2) a pair of µC103A cameras (each with 400 × 400 pixels) assembled as a stereo pair and mounted on a 3D-printed head. Images of the arthroscope prototypes are shown in Fig. 29.8. In comparison to prototype 1, substantial improvements have been achieved with prototype 2 in terms of image resolution and the reliability of the camera during recordings. However, there is room for further improvement in the physical characteristics of the camera, such as adding LED illuminators and adopting a multibaseline stereo arrangement.

29.4.3.1 Validation of stereo imaging in knee arthroscopy

Fig. 29.9 shows a set of stereo images acquired by arthroscope prototype 1. It is evident that, despite the stereo camera having an image resolution of only 250 × 250, the amount of detail present in these images exceeds that provided by traditional oblique-view arthroscopes. An example of a stereo-rectified image is also shown, together with its disparity map. This demonstrates the potential for direct use of CMOS-based stereo image sensors in MIS. The stereo vision concept was developed further with prototype 2. This prototype offers better resolution and provides detailed images, as can be seen from the set of images shown in Fig. 29.10, which were acquired from the same phantom knee. The robustness of this arthroscope for MIS was further verified in cadavers. Experiments in this direction have shown some very exciting results: internal features and their associated textures were successfully captured, and depth maps were created for every frame in real time. The depth map is an essential part of creating 3D surfaces of the internal knee structures. Fig. 29.10 also shows the corresponding depth maps for stereo images taken from a phantom knee and a cadaver knee. The images acquired from the cadaver knee further validate the use of stereo cameras in knee arthroscopy. The computed depth map shows the parts of the femur and ACL that are close to the camera, with features up to 25 mm captured in good detail. It is also evident that prototype 2 is unable to fully reconstruct the knee anatomy, as there are gaps in the depth map. Much of this is due to the lack of texture, and we also found that lighting conditions play an important role. For robotic knee arthroscopy it is apparent that more than one imaging mode is required to address situations where the main imaging system cannot provide detailed depth information. The use of 3D US is investigated as an additional imaging mode for obtaining volumetric information in real time to complement the stereo


FIGURE 29.9 The stereo images and their corresponding depth maps from a phantom knee using stereo endoscope prototype 1. Although the left and right images carry sufficient details, the reconstructed disparity map lacked features.

FIGURE 29.10 The stereo images and their corresponding depth maps from a phantom knee (A), and a cadaver knee (B). In (A) the meniscus is the main knee structure visible in the stereo images, whereas in (B) the ACL is clearly visible. ACL, Anterior cruciate ligament.


camera-based imaging system. The suitability and potential of 3D US for robotic knee arthroscopy are discussed in the next section.

29.5 Ultrasound-guided knee arthroscopy

29.5.1 Ultrasound-based navigation

US is an imaging modality based on the use of high-frequency compressional sound waves. At tissue interfaces the wave fronts are partially reflected back to the US transducer, and from the time of flight it is possible to calculate the depth of the structures. Several lines of view are typically scanned sequentially and used to reconstruct a volume. Modern systems can scan several full volumes per second, making the modality suitable for real-time or quasi-real-time applications [34]; some recently introduced technologies even allow refresh rates of thousands of hertz [35]. US scanning has peculiar characteristics that make it interesting for intraoperative autonomous surgery applications: it is the only real-time volumetric imaging modality currently clinically available in operating theaters; moreover, it is noninvasive, provides superior soft-tissue contrast and high resolution, and is cost-effective compared with other modalities such as CT and MRI. Advanced modalities such as elastography [36] or ultrasonic backscatter analysis [37] potentially allow tissue typing/characterization, which can be used to inform the surgeon or the autonomous robotic system about the distribution of the different tissues in the operating region. Navigation using US has been successfully explored in several medical fields; for example, US guidance before or during radiation therapy treatment is currently a clinical standard [38,39]. In robotic needle procedures and MIS, US guidance is becoming increasingly common, either alone or in combination with other modalities, such as MRI, CT, or vision systems. US is used to track both internal tissue distributions and tools, such as arthroscopes [40], catheters [41], or biopsy [42]/brachytherapy [43] needles.
Although presently there is no clinically available system using US guidance for arthroscopic procedures in the knee, several studies in the literature have investigated procedures which could be adapted to (autonomous) arthroscopic applications, in particular for the identification and tracking of knee tissues and of surgical tools with characteristics similar to arthroscopes.
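The time-of-flight relation mentioned above is depth = c·t/2, where the factor of 2 accounts for the round trip of the pulse and c is conventionally taken as about 1540 m/s in soft tissue. A minimal sketch (the function name and the example echo time are illustrative):

```python
import numpy as np

C_TISSUE = 1540.0  # m/s, conventional average speed of sound in soft tissue

def echo_depth_mm(time_of_flight_us):
    """Depth of a reflecting interface from pulse-echo time of flight.

    The pulse travels to the interface and back, hence the division by 2.
    """
    t_s = np.asarray(time_of_flight_us, dtype=float) * 1e-6  # us -> s
    return C_TISSUE * t_s / 2.0 * 1e3                        # m -> mm

depth = echo_depth_mm(52.0)  # an echo arriving after 52 us lies at ~40 mm
```

Applying this conversion along each scanned line of view, sample by sample, is what builds up the B-mode image and, line by line, the reconstructed volume.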

29.5.2 Ultrasound for the knee

29.5.2.1 Automatic and semiautomatic segmentation and tracking

In a clinical workflow where (autonomous) robotic arthroscopy for the knee is implemented, the first possible application of US imaging is the identification and segmentation of the structures of interest. This imaging modality makes it possible to recognize and contour most of the tissues in this region, such as tendons [44], ligaments [45], menisci [46], nerves [46,47], and cartilage [48,49] (see Fig. 29.11 for an example of a US-based map of hard-tissue knee structures). Vessels (like the popliteal artery) can be segmented and tracked dynamically using duplex US [50,51]. Bony structures cannot be fully imaged, due to the physical limitations of this imaging modality: since US reflection and

FIGURE 29.11 On the left, sagittal 2D US B-mode scan of the knee through the patellar tendon. On the right, identification and segmentation of the main hard-tissue structures. 2D, Two-dimensional; US, ultrasound.


transmission coefficients at an interface between tissues depend on the difference between the relative acoustic impedance values, at the muscle–bone interface, where the difference is large, almost all of the signal is reflected back. Still, it is possible to image the proximal surface of the bone and use registration techniques (e.g., with preoperative MRI or CT) to reconstruct and track in real time the correct position, orientation, and shape of the whole structures [52]. To facilitate the surgeon's work and provide a more familiar navigation tool, US can be used as a real-time source of information for navigating (rigidly, or in principle also elastically) in another volumetric imaging modality, a technique also referred to as "virtual sonography" [45].
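The impedance-mismatch argument above corresponds to the standard intensity reflection coefficient for normal incidence at a planar interface, R = ((Z2 − Z1)/(Z2 + Z1))². The sketch below uses typical textbook impedance values, which are illustrative rather than taken from this chapter.

```python
def reflection_coefficient(z1, z2):
    """Intensity reflection coefficient at a planar interface (normal
    incidence), given the acoustic impedances of the two media."""
    return ((z2 - z1) / (z2 + z1)) ** 2

# Typical impedances in MRayl (illustrative): fat ~1.35, muscle ~1.7, bone ~7.8
r_muscle_bone = reflection_coefficient(1.7, 7.8)  # large mismatch: strong echo
r_fat_muscle = reflection_coefficient(1.35, 1.7)  # small mismatch: mostly transmitted
```

The roughly 40% of intensity reflected at the muscle–bone interface, versus about 1% at a soft-tissue interface, is why only the proximal bone surface appears in the scan.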

29.5.2.2 Ultrasound-guided interventions

29.5.2.3 Ultrasound-guided robotic procedures

US-guided anesthetic procedures, such as nerve blocks, could benefit from the introduction of robotics, as other surgical procedures have. In the knee area, in order to guarantee stable needle insertion and a faster learning curve, Hemmerling et al. [60,61] developed a robotic system (Magellan) in which the needle is held and placed under US guidance by a robotic arm, remotely controlled by a joystick. To select the point for needle insertion, the combined information from a camera and from a manually operated 2D US system is visualized and used by the surgeon. In computer-assisted orthopedic surgery (CAOS) [62], robots are usually an essential component of the workflow. A successful example of a commercial CAOS product for the knee is the Mako Total Knee System (Stryker, 2825 Airview Boulevard, Kalamazoo, Michigan, United States) [63] (see Fig. 29.12). Currently, most systems use CT for preoperative images and fluoroscopic imaging during the operation. This exposes the patients and the operators to an unwanted X-ray dose. Moreover, to coregister the structures intraoperatively with the preoperative plan, invasive tracking devices are typically used, possibly resulting in pain and longer recovery times; this also limits their use to larger, nonmobile bones. The reference markers are fixed directly to the bones (pelvis, femur, and tibia) through very small incisions (1–2 cm). Patients seldom complain about these small incisions, because the main wounds for joint replacement are larger and more painful; however, the fact remains that more incisions (albeit small) are needed and extra holes are created in healthy parts of the bones. In elderly patients, the markers might loosen during surgery because of porotic bone, in which case the system does not work correctly.
US has been proposed to significantly reduce these issues, potentially providing submillimetric accuracy and robust registration procedures that do not require segmentation [64]. Mozes et al. tested, on phantoms and cadavers, an optically tracked A-mode US probe as a variable-length pointer to localize bony structures in real time in absolute (room) coordinates and register them with preoperative scans [65]. Some studies even suggest the possibility of avoiding preoperative CT scans altogether: using 3D US imaging, the surface of the femur, for example, can be reconstructed and registered to a generic, accurately segmented CT scan. The resulting contours can be used for real-time navigation during surgery [66].


Interventional US guidance is currently used in the knee mainly for injections, in particular when accurate positioning of the needle is required. Clinical indications include voluminous and painful cysts that need in situ injections; tendinopathies/bursopathies that are difficult to reach; and complex cases, such as when synovitis or prostheses are present and joint aspirations are prescribed [53]. For these procedures, the use of US has reportedly transformed the outcomes completely, significantly increasing accuracy and safety. Research has shown that there is also potential for needle placement guidance in the posterior cruciate ligament (PCL): some authors have investigated the aspiration of ganglion cysts [54], for example, but probably the most interesting application is the injection of advanced regenerative treatments such as cells, growth factors, or scaffolds [55], as this would open new scenarios for the role of orthobiologic agents in the treatment of injuries to this ligament. US has also been used to guide Baker's cyst aspiration, with significant clinical improvements in osteoarthritis patients [56]. Procedures for the patellar tendon using US guidance are also reported in the literature. In an application in sports medicine, Maffulli et al. describe a procedure to inject sclerosing agents into the vascularization site in patients with recalcitrant tendinopathy [57]. In this study, the authors show that US is a valuable and precise tool not only for identifying the interface between the tendon and the Hoffa body, but also for following the distribution of the fluid in real time and, with color Doppler, assessing the neovascularization state of the tendons to understand how effective the injection was. To diagnose and treat tendinosis of the patellar tendon, Kanaan's group proposed using US to identify sonographic features that correlate with the disease and to guide percutaneous tendon fenestration [58].
They concluded that this workflow produced clinical improvement or no change in all patients. Hirahara et al. [59] described a technique using US for percutaneous reconstruction of the anterolateral ligament (ALL), where sonography was used to accurately identify the origin and the insertion of the ALL.


FIGURE 29.12 Medial unicompartmental knee arthroplasty using the MAKO RIO robotic system (Stryker) (Dr. William A. Leone, The Leone Center for Orthopedic Care at Holy Cross Hospital, Fort Lauderdale, Florida, United States).

29.5.3 Ultrasound guidance and tissue characterization for knee arthroscopy

There is a clear indication from clinical institutions that only the automation of interventions, at least the most standard ones, can be sustainable in the near future in terms of efficiency, minimization of error occurrence, and cost-effectiveness. Arthroscopic procedures are among these types of intervention but, in particular for the knee, specific issues must be addressed before fully autonomous applications can be envisioned. First, the robots need to be aware of the position of all the relevant structures (targets, organs at risk, and tools) at all times, with the level of accuracy and precision required by the specific application. It must also be noted that imaging of soft tissues (which is essential in robotic surgery to identify, segment, and track the structures of interest) has not been thoroughly investigated in the literature, because most applications studied have focused on bony structures. Currently, internal vision systems such as microcameras are typically employed; these, however, cannot provide any volumetric or spatially localized information. Volumetric X-ray systems (like CT) are not real time but time integrated, deliver extra unwanted radiation to the operators, have poor contrast and resolution, and are incompatible with normal operating conditions. The only other clinically available option at present is (open) MRI scanning, which has several remarkable disadvantages: it is hardly compatible with standard arthroscopic workflows, due to the high-intensity magnetic fields required and the limited operating space, and it is extremely costly. US, by contrast, is a portable, cheap, harmless, real-time volumetric imaging modality with advanced tissue characterization capabilities, superior soft-tissue contrast, and virtually unlimited resolution possibilities. Moreover, it has already been proven as a reliable image guidance tool in other applications with similar requirements, in the radiotherapy space [38,39].
These requirements include the ability to distinguish tissues automatically; automatic segmentation of tissues and tools; automatic tracking of tissues and tools; and real-time volumetric navigation (possibly elastically registered to another modality, such as a preoperative MRI or CT, for the convenience of physicians). Most arthroscopic procedures in the knee are performed from the anterior side through the parapatellar portals (the soft spots on the sides of the patella) and only involve the anterior region of the joint. It is therefore natural to assume that the best option is to scan the knee anteriorly during the operation, using a 4D probe with adequate resolution (e.g., in the range 5–13 MHz, for optimal penetration and resolution). US can identify and track the cartilages, the ligaments, the meniscus, the bony structures, the tendons, possible pathological scar tissue, and any liquid present (inflammation/joint fluid/saline solution). It is also advisable to perform a posterior preoperative scan to segment the arteries, nerves, PCL, posterior part of the bones, posterior part of the cartilages, muscles, and tendons. The latter scan can be used to fix safety margins, in particular for vessels and nerves, although it is very rare for these structures to enter the operating region. The only structures usually assessed during arthroscopy that cannot be visualized with US from the anterior view are the PCL and the patellar cartilage. While the PCL can generally be identified from the popliteal fossa, the patellar cartilage cannot be imaged since it is located beyond the patella [67]. It has been reported in the literature that US cannot demonstrate meniscal and ligamentous lesions [56]. Instead, several authors propose US (possibly combined

Robotic and Image-Guided Knee Arthroscopy Chapter | 29

with preoperative images) to find possible areas of pathology [68-72]. This information could be used to guide the robot to the diseased tissue. In combination with the US-based map of the tissues, a full set of active constraints could reduce damage to normal tissue and shorten the overall time required for the procedure. This scenario presents some serious challenges. Traditionally, US has been regarded as a highly operator-dependent modality, so for autonomous applications a certain level of automatic image interpretation must be introduced; there is already some promising literature in this direction. For optimal US scanning, good coupling between the scanned surface and the surface of the US probe should be maintained throughout the whole procedure. Unfortunately, the knee has a complex, unevenly curved shape that does not match the curvature of a standard 4D US probe (see Fig. 29.13). Moreover, the leg can be moved during surgery, so the coupling must be maintained dynamically. In addition, the different tissues inside the joint might require different probe orientations, and possibly specific flexion angles, for optimal visualization, which in principle might not be optimal for ensuring arthroscope access to those structures. However, the US guidelines presently available for the knee focus on visualizing each structure independently, with the most commonly used 2D probes. Further studies are needed to evaluate whether 3D/4D volumetric US can overcome these limitations. Another parameter to take into account is the FOV of the probe: scanning from the anterior side means imaging through the patellar tendon, which offers only a small contact surface and thus significantly reduces the accessible FOV. Furthermore, several structures of interest in the knee might lie beyond the air gaps on the sides (Fig. 29.13).

Finally, the arthroscopic tools themselves must be considered in designing the navigation system, because it is necessary to provide them with enough space to access the joint (without letting the US probe lose coupling) and to move dynamically within it. It then becomes apparent how important it is to select the most appropriate US probe, ideally with a small footprint, in order to minimize these problems. Adopting novel fill-in gel pads that fill the whole volume between the probe and the knee is another possible complementary approach to mitigate the issues described.

FIGURE 29.13 On the left, the probe held in position on the knee; in yellow, the extension of the US probe surface and, in red, the actual surface of contact. On the right, the resulting 3D image, showing, respectively, the expected (yellow) and actual (red) field of view generated. 3D, Three-dimensional; US, ultrasound.

29.6 Toward a fully autonomous robotic and image-guided system for intraarticular arthroscopy

So far we have introduced four technologies that may change the way arthroscopies are performed, offering enlarged accessibility, improved ergonomics, enhanced guidance, and better precision. We have described how each individual technology may assist surgeons in performing arthroscopy in the near future; in the longer term, these technologies have the potential to be combined into a fully autonomous robotic and image-guided system. Such a system could be supervised by surgeons, drawing on their clinical expertise, while minimizing the effects of human limitations such as hand tremor, fatigue, and limited precision. In this section, we discuss some of the ways these technologies may interlink toward the development of a fully autonomous robotic and image-guided system for intraarticular arthroscopy.


Handbook of Robotic and Image-Guided Surgery

29.6.1 Sensor fusion of camera image and ultrasound guidance

The oblique viewing design of current arthroscopes drastically reduces the FOV; the use of miniature cameras has been shown to circumvent this problem. The development of arthroscopes with stereo vision will bring further advances: surgeons will benefit from an increased FOV as well as real-time depth perception. Such improvements to vision systems will enhance visualization of the surgical space and reduce the long learning curves associated with knee arthroscopy. A limitation of this approach, in particular when robotic applications are investigated, is that microcameras cannot provide detailed volumetric information and that the information is not spatially localized in absolute coordinates. The ideal vision system for knee arthroscopy would therefore necessarily be a multimodal imaging system. As discussed in the previous sections, 4D US imaging is currently the only imaging modality with all the characteristics needed to augment camera-based vision systems. Together, a multimodal imaging system, consisting of a set of stereo cameras fused with the volumetric view of a 4D US probe, can provide rich, quantitative surface visualization of the surgical space that is accurately spatially localized. The fusion between the two modalities can be performed in several different ways. For example, it is possible to identify the same structures in both and overlap them using image processing techniques. It may still be challenging, though, to recognize the essentially 2D surfaces provided by the cameras within 3D US volumes. Another, more realistic, approach is to localize and track the metallic tools using US and then align the view and orientation of the camera depth maps with the position and orientation of the tools in the US scans [73]. Several examples in the literature prove the feasibility of this strategy (although mainly in 2D); nevertheless, there are no established algorithms for reliable 3D tracking of surgical instruments.
Moreover, to provide the robotic system with absolute coordinates for the structures of interest, the US probe itself must be accurately spatially localized. Multiple solutions are available, for example, optical systems such as the NDI Polaris (NDI, Waterloo, Ontario, Canada) or electromagnetic systems such as the NDI Aurora. It must be noted that, for each of these solutions, the geometric errors introduced along the localization chain must be carefully evaluated to define the final precision and accuracy of the surgical system. Working in tandem, such imaging systems will allow surgeons to fix safety margins around sensitive vessels and nerves. Integration of this technology with robotic platforms will bring further improvements, enforcing even greater precision and accuracy. Adding a certain level of artificial intelligence to the camera and US imaging modes would bring further advantages. Of course, successful clinical validation is paramount before such imaging systems are widely deployed.
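The localization chain described above can be written as a composition of homogeneous transforms, which also makes explicit where each calibration error enters. The frame names and numeric values below are purely illustrative, not an actual tracker calibration:

```python
import numpy as np

# Sketch of a localization chain: US image -> probe markers -> tracker -> robot base.
# All poses and offsets below are hypothetical, for illustration only.

def make_transform(rotation_deg_z: float, translation_xyz) -> np.ndarray:
    """4x4 homogeneous transform: rotation about z followed by a translation."""
    t = np.radians(rotation_deg_z)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
    T[:3, 3] = translation_xyz
    return T

# Hypothetical calibrations/measurements (millimetres):
T_base_tracker = make_transform(0.0, [500.0, 0.0, 300.0])    # tracker in robot frame
T_tracker_probe = make_transform(30.0, [120.0, 40.0, -80.0])  # tracked probe pose
T_probe_image = make_transform(0.0, [0.0, 0.0, 15.0])         # image-origin calibration

# A target segmented in the US volume, in homogeneous image coordinates:
p_image = np.array([10.0, 5.0, 42.0, 1.0])

# Composing the chain gives the target in absolute robot-base coordinates:
p_base = T_base_tracker @ T_tracker_probe @ T_probe_image @ p_image
print(p_base[:3])
```

Each matrix in the product carries its own calibration uncertainty, so the error of `p_base` is (roughly) the accumulation of the errors of every link, which is why the chain must be evaluated end to end.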

29.6.2 Leg manipulation for better imaging and image-guided leg manipulation

The joint space inside the knee is so confined that leg manipulation is indispensable in arthroscopy, both to make space for the arthroscope and tools and to expose certain tissues to the imaging system. There is therefore a link between leg manipulation and imaging: on the one hand, imaging quality calls for a good leg manipulation strategy during the procedure; on the other, leg manipulation can be guided by the real-time images acquired from the arthroscope, the US system, or both. To automate leg manipulation, it is essential to understand how the motion of the manipulator relates to the image feedback. Understanding the anatomic structure of the leg, especially its kinematic model, will help establish the mapping between the leg manipulator and the imaging. In Section 29.3 we described knee gap detection and segmentation, which could facilitate autonomous leg manipulation. In addition, the techniques enabling the automation of leg manipulation are shared with those of the steerable robotic tools and are discussed in the following section.
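As a toy illustration of such a mapping, consider a planar two-segment leg model with the hip fixed at the origin: given the knee flexion angle requested by the imaging system, forward kinematics yields the ankle position the manipulator must command. The segment lengths are assumed values, not patient data:

```python
import math

# Planar sagittal-plane leg model (hip at the origin). Illustrative only.
THIGH_M = 0.45   # hip-to-knee segment length (assumed)
SHANK_M = 0.43   # knee-to-ankle segment length (assumed)

def ankle_position(hip_angle_rad: float, knee_flexion_rad: float):
    """Forward kinematics: ankle (x, y) for a hip angle and a knee flexion."""
    knee_x = THIGH_M * math.cos(hip_angle_rad)
    knee_y = THIGH_M * math.sin(hip_angle_rad)
    shank_angle = hip_angle_rad - knee_flexion_rad  # flexion folds the shank back
    ankle_x = knee_x + SHANK_M * math.cos(shank_angle)
    ankle_y = knee_y + SHANK_M * math.sin(shank_angle)
    return ankle_x, ankle_y

# If imaging requests 30 degrees of knee flexion with the thigh horizontal,
# the leg manipulator must drive the ankle to:
print(ankle_position(0.0, math.radians(30.0)))
```

The real problem is of course 3D and includes soft-tissue constraints, but the same idea (a kinematic model mapping manipulator commands to the pose the images require) carries over.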

29.6.3 Vision-guided operation with steerable robotic tools

Autonomous operation with steerable robotic tools becomes possible when abundant sensing feedback is available. In Section 29.2 we discussed how multimodal sensors may be fitted to the handheld robotic tools to make the operation more situation-aware. In particular, the camera image from the arthroscope provides the most direct information on the surgical site and the interaction between the tools and the tissues. At the same time, the US imaging system is able to track the tools with respect to the anatomy of the joint in real time. The fusion of the two sensors, as discussed above, will provide a detailed description of the situation and thus enable autonomous motions of the steerable robotic tools. Visual servoing is a technique that can make vision-guided automation possible. It is essentially a control algorithm that takes images, either RGB images from the camera [74] or US images [75], as feedback. There are two frameworks for visual servoing: position based and image based. Position-based visual servoing first estimates the positions of the relevant components, such as the robot and the target, and then feeds these positions to a conventional closed-loop control algorithm. Image-based visual servoing directly establishes the relationship between the motion of the robot and the change in the image via a matrix called the image Jacobian, and updates this Jacobian as the robot moves toward the target until an ideal configuration is reached. Whichever framework is used, the quality of the acquired images significantly affects the precision and robustness of the visual servoing implementation. Therefore the development of an autonomous robotic system for arthroscopy will rest on the advancement of high-quality imaging systems.

Learning-based methods are another approach toward the development of a fully autonomous system. In recent years, significant advances have been made in machine learning and artificial intelligence, which have started to impact medicine. For autonomous surgical robotic systems, techniques such as deep learning may be used for situation understanding and decision-making. However, this requires a huge amount of data to train a neural network, and the acquisition of such data is extremely difficult in the context of surgical applications. Reinforcement learning does not require labeled training data, but it still suffers from sample inefficiency and the tricky design of reward functions, leading to problems in robustness, accuracy, and predictability. Nevertheless, given the rapid progress in machine learning and artificial intelligence and its potential to revolutionize medicine, the development of fully autonomous robotic and image-guided systems for arthroscopy remains promising.
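A minimal numeric sketch of the image-based framework: the velocity command is the negative pseudoinverse of the image Jacobian applied to the feature error, scaled by a gain. Here the Jacobian is constant and hand-picked for a toy 2D system; real implementations estimate and update it online:

```python
import numpy as np

# Toy image-based visual servoing (IBVS) loop with a constant image Jacobian.
J = np.array([[2.0, 0.0],    # feature change per unit robot motion (illustrative)
              [0.0, 1.5]])
LAMBDA = 0.5                 # control gain

def servo_step(feature: np.ndarray, target: np.ndarray) -> np.ndarray:
    """One IBVS update: v = -lambda * pinv(J) @ (feature - target)."""
    error = feature - target
    return -LAMBDA * np.linalg.pinv(J) @ error

feature = np.array([10.0, -4.0])   # current feature position (pixels)
target = np.array([0.0, 0.0])      # desired feature position

for _ in range(20):
    v = servo_step(feature, target)
    feature = feature + J @ v      # simulated image response to the motion
print(np.round(feature, 3))        # the feature error shrinks toward zero
```

With this linear toy model the error contracts by a factor of (1 - lambda) per step, which makes the gain-versus-robustness tradeoff mentioned above easy to see.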

29.7 Discussion

In this chapter we have described knee arthroscopy, discussed some of the current difficulties with the procedure, and outlined some potential solutions as the procedure becomes more automated. The solutions we propose may never reach clinical practice, as we cannot predict whether a different strategy may prove more effective. Furthermore, the technologies will not all evolve at the same rate; some may be utilized and others not. That is the inevitable fate of new technologies. While we do not claim to be describing the future of arthroscopy, we believe that the technologies described will be increasingly utilized in surgical procedures and in other interventional fields such as cardiology. Technological advances have been so rapid in recent years that changes to the way knee arthroscopies and other procedures are performed are inevitable. However, change will inevitably be slower in the medical field than in others, due to issues such as regulation and safety as well as resistance from patients and the medical profession. The barriers to the introduction of medical devices are great. Developing even some of the technologies that we describe would be incredibly expensive and probably achievable only by large multinational corporations. Regulatory approval can be difficult to gain for new technology, as the standards of effectiveness and safety that must be met are high. Finally, patients and healthcare workers need to be convinced of the safety of new technologies. Though this chapter focuses on knee arthroscopy, all the technologies described are platform technologies; that is, they can be used in procedures beyond knee arthroscopy. Though pioneered in the knee, arthroscopy is now performed in many joints, including the hip, shoulder, wrist, and ankle, to name a few. Arthroscopy in these joints can be even more difficult than in the knee, and the technologies described here offer a clear opportunity to help.

Access can be particularly difficult in the hip joint, and curved steerable cameras and instruments will be of even greater value when navigating around the spherical femoral head. Outside arthroscopy there are many other procedures where the technology described could be utilized, including inside the abdominal cavity (laparoscopy), the chest (bronchoscopy), or the bowel, as just three examples. Finally, new fields could be opened up by variations of the technologies described, greatly expanding the role of minimally invasive, semiautonomous, and ultimately autonomous surgery.

29.8 Conclusion

Knee arthroscopy is a widespread, complex procedure with a long learning curve that can cause unintended harm to both patients and surgeons. The use of autonomous technologies such as steerable instruments and leg manipulators, along with improved camera technology and imaging modalities such as US, will make the operation safer, more predictable, and easier.


References

[1] Phillips BB. General principles of arthroscopy. Campbell's Oper Orthop 2003;3:2497-514.
[2] Price AJ, Erturan G, Akhtar K, Judge A, Alvand A, Rees JL. Evidence-based surgical training in orthopaedics: how many arthroscopies of the knee are needed to achieve consultant level performance? Bone Joint J 2015;97-B:1309-15.
[3] Jaiprakash A, O'Callaghan WB, Whitehouse SL, Pandey A, Wu L, Roberts J, et al. Orthopaedic surgeon attitudes towards current limitations and the potential for robotic and technological innovation in arthroscopic surgery. J Orthop Surg 2017;25:2309499016684993.
[4] Catani F, Zaffagnini S. Knee surgery using computer assisted surgery and robotics. Springer Science & Business Media; 2013.
[5] Abolmaesumi P, Fichtinger G, Peters TM, Sakuma I, Yang G-Z. Introduction to special section on surgical robotics. IEEE Trans Biomed Eng 2013;60:887-91.
[6] Jabero M, Sarment DP. Advanced surgical guidance technology: a review. Implant Dent 2006;15:135-42.
[7] Gomes P. Medical robotics: minimally invasive surgery. Elsevier; 2012.
[8] Camarillo DB, Krummel TM, Salisbury Jr. JK. Robotic technology in surgery: past, present, and future. Am J Surg 2004;188:2S-15S.
[9] Bartoli A, Collins T, Bourdel N, Canis M. Computer assisted minimally invasive surgery: is medical computer vision the answer to improving laparosurgery? Med Hypotheses 2012;79:858-63.
[10] Dario P, Paggetti C, Troisfontaine N, Papa E, Ciucci T, Carrozza MC, et al. A miniature steerable end-effector for application in an integrated system for computer-assisted arthroscopy. In: Proceedings of the international conference on robotics and automation; 1997. https://doi.org/10.1109/robot.1997.614364.
[11] Dario P, Carrozza MC, Marcacci M, D'Attanasio S, Magnami B, Tonet O, et al. A novel mechatronic tool for computer-assisted arthroscopy. IEEE Trans Inf Technol Biomed 2000;4:15-29.
[12] Payne CJ, Gras G, Hughes M, Nathwani D, Yang G-Z. A hand-held flexible mechatronic device for arthroscopy. In: 2015 IEEE/RSJ international conference on intelligent robots and systems (IROS); 2015. https://doi.org/10.1109/iros.2015.7353466.
[13] Kutzer MDM, Segreti SM, Brown CY, Armand M, Taylor RH, Mears SC. Design of a new cable-driven manipulator with a large open lumen: preliminary applications in the minimally-invasive removal of osteolysis. In: 2011 IEEE international conference on robotics and automation; 2011. https://doi.org/10.1109/icra.2011.5980285.
[14] Horeman T, Schilder F, Aguirre M, Kerkhoffs GMMJ, Tuijthof GJM. Design and preliminary evaluation of a stiff steerable cutter for arthroscopic procedures. J Med Device 2015;9:044503.
[15] Paul L, Chant T, Crawford R, Roberts J, Wu L. Prototype development of a hand-held steerable tool for hip arthroscopy. In: 2017 IEEE international conference on robotics and biomimetics (ROBIO); 2017. https://doi.org/10.1109/robio.2017.8324546.
[16] Wu L, Song S, Wu K, Lim CM, Ren H. Development of a compact continuum tubular robotic system for nasopharyngeal biopsy. Med Biol Eng Comput 2017;55:403-17.
[17] Wu L, Wu K, Ren H. Towards hybrid control of a flexible curvilinear surgical robot with visual/haptic guidance. In: 2016 IEEE/RSJ international conference on intelligent robots and systems (IROS); 2016. p. 501-7.
[18] Strydom M, Jaiprakash A, Crawford R, Peynot T, Roberts JM. Towards robotic arthroscopy: "Instrument gap" segmentation. Australian Robotics & Automation Association; 2016.
[19] Zavatsky AB. A kinematic-freedom analysis of a flexed-knee-stance testing rig. J Biomech 1997;30:277-80.
[20] Moustris GP, Hiridis SC, Deliparaschos KM, Konstantinidis KM. Evolution of autonomous and semi-autonomous robotic surgical systems: a review of the literature. Int J Med Robot 2011;7:375-92.
[21] Tarwala R, Dorr LD. Robotic assisted total hip arthroplasty using the MAKO platform. Curr Rev Musculoskelet Med 2011;4:151-6.
[22] Uecker DR, Lee C, Wang YF, Wang Y. Automated instrument tracking in robotically assisted laparoscopic surgery. J Image Guid Surg 1995;1:308-25.
[23] Picard F, DiGioia AM, Moody J, Martinek V, Fu FH, Rytel M, et al. Accuracy in tunnel placement for ACL reconstruction. Comparison of traditional arthroscopic and computer-assisted navigation techniques. Comput Aided Surg 2001;6:279-89.
[24] Ward BD, Lubowitz JH. Basic knee arthroscopy part 3: diagnostic arthroscopy. Arthrosc Tech 2013;2:e503-5.
[25] Murray D, Little JJ. Using real-time stereo vision for mobile robot navigation. Auton Robots 2000;8:161-71.
[26] Jansen-van Vuuren RD, Armin A, Pandey AK, Burn PL, Meredith P. Organic photodiodes: the future of full color detection and image sensing. Adv Mater 2016;28:4766-802.
[27] Ko HC, Stoykovich MP, Song J, Malyarchuk V, Choi WM, Yu C-J, et al. A hemispherical electronic eye camera based on compressible silicon optoelectronics. Nature 2008;454:748-53.
[28] Forrest SR. The path to ubiquitous and low-cost organic electronic appliances on plastic. Nature 2004;428:911-18.
[29] Finlayson GD, Hordley SD. Color constancy at a pixel. J Opt Soc Am 2001;18:253.
[30] Ratnasingam S, Collins S. Study of the photodetector characteristics of a camera for color constancy in natural scenes. J Opt Soc Am A Opt Image Sci Vis 2010;27:286-94.
[31] Jansen van Vuuren R, van Vuuren RJ, Johnstone KD, Ratnasingam S, Barcena H, Deakin PC, et al. Determining the absorption tolerance of single chromophore photodiodes for machine vision. Appl Phys Lett 2010;96:253303.
[32] Pandey AK, Johnstone KD, Burn PL, Samuel IDW. Solution-processed pentathiophene dendrimer based photodetectors for digital cameras. Sens Actuators B Chem 2014;196:245-51.
[33] Lim S-J, Leem D-S, Park K-B, Kim K-S, Sul S, Na K, et al. Organic-on-silicon complementary metal-oxide-semiconductor colour image sensors. Sci Rep 2015;5:7708.


[34] Bushberg JT, Anthony Seibert J, Leidholdt EM, Boone JM, Goldschmidt EJ. The essential physics of medical imaging. Med Phys 2003;30:1936.
[35] Montaldo G, Tanter M, Bercoff J, Benech N, Fink M. Coherent plane-wave compounding for very high frame rate ultrasonography and transient elastography. IEEE Trans Ultrason Ferroelectr Freq Control 2009;56:489-506.
[36] Bamber J, Cosgrove D, Dietrich C, Fromageau J, Bojunga J, Calliada F, et al. EFSUMB guidelines and recommendations on the clinical use of ultrasound elastography. Part 1: Basic principles and technology. Ultraschall in der Medizin Eur J Ultrasound 2013;34:169-84.
[37] Lizzi FL, Feleppa EJ, Kaisar Alam S, Deng CX. Ultrasonic spectrum analysis for tissue evaluation. Pattern Recognit Lett 2003;24:637-58.
[38] Fontanarosa D, van der Meer S, Bamber J, Harris E, O'Shea T, Verhaegen F. Review of ultrasound image guidance in external beam radiotherapy: I. Treatment planning and inter-fraction motion management. Phys Med Biol 2015;60:R77-114.
[39] O'Shea T, Bamber J, Fontanarosa D, van der Meer S, Verhaegen F, Harris E. Review of ultrasound image guidance in external beam radiotherapy part II: intra-fraction motion management and novel applications. Phys Med Biol 2016;61:R90-137.
[40] Tyryshkin K, Mousavi P, Beek M, Ellis RE, Pichora DR, Abolmaesumi P. A navigation system for shoulder arthroscopic surgery. Proc Inst Mech Eng H 2007;221:801-12.
[41] Brattain LJ, Loschak PM, Tschabrunn CM, Anter E, Howe RD. Instrument tracking and visualization for ultrasound catheter guided procedures. Lect Notes Comput Sci 2014;2014:41-50.
[42] Bluvol N, Shaikh A, Kornecki A, Del Rey Fernandez D, Downey D, Fenster A. A needle guidance system for biopsy and therapy using two-dimensional ultrasound. Med Phys 2008;35:617-28.
[43] Banerjee S, Kataria T, Gupta D, Goyal S, Bisht SS, Basu T, et al. Use of ultrasound in image-guided high-dose-rate brachytherapy: enumerations and arguments. J Contemp Brachyther 2017;2:146-50.
[44] Wong-On M, Til-Pérez L, Balius R. Evaluation of MRI-US fusion technology in sports-related musculoskeletal injuries. Adv Ther 2015;32:580-94.
[45] Oshima T, Nakase J, Numata H, Takata Y, Tsuchiya H. Ultrasonography imaging of the anterolateral ligament using real-time virtual sonography. Knee 2016;23:198-202.
[46] Faisal A, Ng S-C, Goh S-L, George J, Supriyanto E, Lai KW. Multiple LREK active contours for knee meniscus ultrasound image segmentation. IEEE Trans Med Imaging 2015;34:2162-71.
[47] Giraldo JJ, Alvarez MA, Orozco AA. Peripheral nerve segmentation using Nonparametric Bayesian Hierarchical Clustering. In: 2015 37th Annual international conference of the IEEE engineering in medicine and biology society (EMBC); 2015. https://doi.org/10.1109/embc.2015.7319048.
[48] Faisal A, Ng S-C, Goh S-L, Lai KW. Knee cartilage segmentation and thickness computation from ultrasound images. Med Biol Eng Comput 2017;56:657-69.
[49] Faisal A, Ng S-C, Goh S-L, Lai KW. Knee cartilage ultrasound image segmentation using locally statistical level set method. In: IFMBE proceedings; 2017. p. 275-81.
[50] Guerrero J, Salcudean SE, McEwen JA, Masri BA, Nicolaou S. Real-time vessel segmentation and tracking for ultrasound imaging applications. IEEE Trans Med Imaging 2007;26:1079-90.
[51] Shetty AA, Tindall AJ, Qureshi F, Divekar M, Fernando KWK. The effect of knee flexion on the popliteal artery and its surgical significance. J Bone Joint Surg Br 2003;85-B:218-22.
[52] Wein W, Karamalis A, Baumgartner A, Navab N. Automatic bone detection and soft tissue aware ultrasound CT registration for computer-aided orthopedic surgery. Int J Comput Assist Radiol Surg 2015;10:971-9.
[53] Morvan G, Vuillemin V, Guerini H. Interventional musculoskeletal ultrasonography of the lower limb. Diagn Interv Imaging 2012;93:652-64.
[54] DeFriend DE, Schranz PJ, Silver DAT. Ultrasound-guided aspiration of posterior cruciate ligament ganglion cysts. Skeletal Radiol 2001;30:411-14.
[55] Hackel JG, Khan U, Loveland DM, Smith J. Sonographically guided posterior cruciate ligament injections: technique and validation. PM&R 2016;8:249-53.
[56] Köroğlu M, Çallıoğlu M, Eriş HN, Kayan M, Çetin M, Yener M, et al. Ultrasound guided percutaneous treatment and follow-up of Baker's cyst in knee osteoarthritis. Eur J Radiol 2012;81:3466-71.
[57] Maffulli N, Del Buono A, Oliva F, Testa V, Capasso G, Maffulli G. High-volume image-guided injection for recalcitrant patellar tendinopathy in athletes. Clin J Sport Med 2016;26:12-16.
[58] Kanaan Y, Jacobson JA, Jamadar D, Housner J, Caoili EM. Sonographically guided patellar tendon fenestration: prognostic value of preprocedure sonographic findings. J Ultrasound Med 2013;32:771-7.
[59] Hirahara AM, Andersen WJ. Ultrasound-guided percutaneous reconstruction of the anterolateral ligament: surgical technique and case report. Am J Orthop 2016;45:418-60.
[60] Hemmerling TM, Taddei R, Wehbe M, Cyr S, Zaouter C, Morse J. First robotic ultrasound-guided nerve blocks in humans using the Magellan system. Anesth Analg 2013;116:491-4.
[61] Morse J, Terrasini N, Wehbe M, Philippona C, Zaouter C, Cyr S, et al. Comparison of success rates, learning curves, and inter-subject performance variability of robot-assisted and manual ultrasound-guided nerve block needle guidance in simulation. Br J Anaesth 2014;112:1092-7.
[62] Hernandez D, Garimella R, Eltorai AEM, Daniels AH. Computer-assisted orthopaedic surgery. Orthop Surg 2017;9:152-8.
[63] Gilmour A, MacLean AD, Rowe PJ, Banger MS, Donnelly I, Jones BG, et al. Robotic-arm assisted vs conventional unicompartmental knee arthroplasty. The 2-year clinical outcomes of a randomized controlled trial. J Arthroplasty 2018;33:S109-15.


[64] Chen TK, Abolmaesumi P, Pichora DR, Ellis RE. A system for ultrasound-guided computer-assisted orthopaedic surgery. Comput Aided Surg 2005;10:281-92.
[65] Mozes A, Chang T-C, Arata L, Zhao W. Three-dimensional A-mode ultrasound calibration and registration for robotic orthopaedic knee surgery. Int J Med Robot 2009. Available from: https://doi.org/10.1002/rcs.294.
[66] Barratt DC, Chan CSK, Edwards PJ, Penney GP, Slomczykowski M, Carter TJ, et al. Instantiation and registration of statistical shape models of the femur and pelvis using 3D ultrasound imaging. Med Image Anal 2008;12:358-74.
[67] Friedman L, Finlay K, Jurriaans E. Ultrasound of the knee. Skeletal Radiol 2001;30:361-77.
[68] Paczesny Ł, Kruczyński J. Ultrasound of the knee. Semin Ultrasound CT MRI 2011;32:114-24.
[69] Fuchs S, Chylarecki C. Sonographic evaluation of ACL rupture signs compared to arthroscopic findings in acutely injured knees. Ultrasound Med Biol 2002;28:149-54.
[70] Sohn C, Casser HR, Swobodnik W. Ultrasound criteria of a meniscus lesion [Die Sonogr Kriter einer Meniskuslasion]. 1990;11(2):86-90. Available from: http://ovidsp.ovid.com/ovidweb.cgi?T=JS&PAGE=reference&D=med3&NEWS=N&AN=2192453 [accessed 26.08.18].
[71] De Flaviis L, Scaglione P, Nessi R, Albisetti W. Ultrasound in degenerative cystic meniscal disease of the knee. Skeletal Radiol 1990;19:441-5.
[72] Cook C, Stannard J, Vaughn G, Wilson N, Roller B, Stoker A, et al. MRI versus ultrasonography to assess meniscal abnormalities in acute knees. J Knee Surg 2014;27:319-24.
[73] Yang L, Wang J, Kobayashi E, Ando T, Yamashita H, Sakuma I, et al. Image mapping of untracked free-hand endoscopic views to an ultrasound image-constructed 3D placenta model. Int J Med Robot 2014;11:223-34.
[74] Azizian M, Khoshnam M, Najmaei N, Patel RV. Visual servoing in medical robotics: a survey. Part I: endoscopic and direct vision imaging: techniques and applications. Int J Med Robot 2014;10:263-74.
[75] Azizian M, Najmaei N, Khoshnam M, Patel R. Visual servoing in medical robotics: a survey. Part II: tomographic imaging modalities: techniques and applications. Int J Med Robot 2015;11:67-79.

30 Robossis: Orthopedic Surgical Robot

Mohammad H. Abedin-Nasab1,2 and Marzieh S. Saeedi-Hosseiny1,2

1 Robossis, Glassboro, NJ, United States
2 Rowan University, Glassboro, NJ, United States

ABSTRACT
Alignment of femur fractures requires high precision in the presence of a large traction force, which makes the procedure very arduous. The difficulties are mainly due to the bone's elongated anatomy and its strong counteracting muscles, which necessitate a high traction force to be exerted by the surgeon and the medical team. Robossis eliminates the need for the surgeon to apply this force and significantly improves the precision of the procedure. The platform balances accuracy, payload, and workspace for the surgeon, resulting in more efficient, successful surgeries. Experimental tests on a phantom reveal that the mechanism is well capable of applying the desired reduction steps against large muscular payloads with high accuracy.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00030-X © 2020 Elsevier Inc. All rights reserved.

30.1 Introduction

Nearly one-third of all fractures occur in the lower limb, and this ratio is growing due to the aging of the population [1,2]. A large proportion of the lower limb fractures seen in trauma centers are those associated with long bones [3]. Currently, a fracture of the femur is treated first through reduction, followed by either an external or internal fixation system, commonly plates or intramedullary rods. With strides to reduce invasiveness, the intramedullary rod is the preferred means of fixation for most femoral fractures [4-6]. However, using conventional surgical techniques, accurate reduction of long-bone fractures is difficult to achieve. High radiation exposure, to both the patient and the operating team, and malalignment of the bony fragments frequently occur; the latter can dramatically affect the course of healing, leading to nonunion complications. Also, soft-tissue damage due to large manipulation forces and repeated reduction attempts is not uncommon [7-9]. Unfortunately, the potential problems of this trial-and-error approach are widely accepted as limitations of current medical practice, for lack of a better alternative. The difficulties involved in the reduction of long-bone fractures are mainly due to the bones' elongated anatomy and strong counteracting muscles [5,6,10,11]. The fracture ends are usually far from the distal and proximal joint attachments and free to move with very limited restrictions. Also, a large traction force commonly needs to be exerted by the surgeon and medical staff to pull the bone fragments apart. This raises the danger of physical overshoot, resulting in unnecessary soft-tissue strain during the manipulation process. A fracture table can partially facilitate this procedure by applying longitudinal traction from the distal end of the injured limb. However, it is an indirect reduction aid with limited capability for force and position control.
In particular, it cannot help positioning of the fragments in six degrees of freedom (DoFs) [2,7]. The aforementioned challenge can be effectively addressed with the use of a new technology platform that helps the surgeon with preoperative planning and execution of the surgery. Robot-assisted surgeries are gaining popularity and have been successfully commercialized to perform hip and knee arthroplasty and laparoscopy. Robots used for fracture reduction must be precise, capable of applying large amounts of force, provide six DoF control, and should minimize radiation exposure to the patient and operating staff [5,10,12,13]. To address this current issue, we propose to use a robotic fracture reduction system that will function based on preoperative planning conducted with the use of X-ray images of the fractured bones. Robot-assisted fracture reduction of long bones is a new and actively researched technique. Between 2004 and 2008, different classes of the serial industrial robotic arms were studied for fracture reduction [5,7,8,10 12]. As an important line of work, several researchers studied the Gough Stewart parallel robot for closed reduction of long-bone diaphyseal fractures [3,13 19]. The Gough Stewart platform is a well-known parallel mechanism with six DoFs. It consists of two rigid bodies connected with six extensible legs, with many variants introduced in the literature for different applications [20 29]. However, its small workspace has prevented its successful application to fracture reduction. We have developed a novel wide-open three-legged parallel mechanism to create a larger workspace and increase performance in this application. In general, a lower number of legs induce less interference between the kinematic chains, enlarging the workspace of the robot. For fracture reduction applications, it can provide a further advantage as the surgeon would have a better vision and access to the wounds on the patient’s leg. 
More importantly, if the legs are configured nonsymmetrically on one side of the base and moving platforms, the resulting open architecture facilitates positioning the patient's leg inside the mechanism. This robot prototype has an open-ring structure, which enables it to easily embrace and manipulate column-shaped objects, such as long bones, while also providing free range of motion for a variety of surgical maneuvers. The rest of this chapter is organized as follows. In Section 30.2, the design and structure of a novel surgical robot, Robossis, are discussed. In Section 30.3, the characteristics of the proposed mechanism are compared with those of the well-known Gough–Stewart platform in terms of performance indices, workspace, singularity, and dynamic analysis. In Section 30.4, experimental tests on a fully functional prototype of the robot are described. Finally, Section 30.5 offers a general conclusion on the feasibility of using the system in computer-assisted long-bone fracture reduction procedures, followed by a description of our future work.

30.2 Robot structure

The structural design of Robossis is a unique configuration that retains the structural advantages of a parallel robot design, namely stability, precision, and load-carrying capacity, while also providing an open and more accessible workspace for surgeons. The Gough–Stewart platform, a well-known parallel mechanism with six DoFs [30,31], consists of two rigid bodies connected by six extensible legs. Although it has been introduced in the literature for a variety of applications, it is not feasible as a fracture reduction robot [32]. The Robossis design alters the Gough–Stewart platform by reducing the number of legs from six to three or four, where each leg is now composed of three joints. This is done by replacing one rotary joint of each passive universal joint with an active joint. The rigid structure of Robossis consists of two-thirds rings, which enable the robot to fit over the surgical area of the patient. Measurements taken during reduction maneuvers of femoral shaft fractures with single-axis force sensors have shown that surgical forces can reach around 250 N [7]. Maeda et al. [33] have also reported that the average torque required during reduction of femoral fractures is 3.2 N·m. Using the input–output force transmission results, the maximum required actuator forces and torques are found to be near 280 N and 33 N·m. Considering these functional requirements and the anthropometric data of human legs, a fully functional prototype of the surgical robot was developed, as shown in Fig. 30.1. The larger ring remains fixed, while the smaller ring moves with six DoFs. The middle leg seen on the fixed ring can be adjusted depending on the surgeon's preference. Each leg is composed of three joints: universal, prismatic, and spherical. A rotary and a linear actuator are used to actuate each leg. The rotary actuators, whose shafts are attached to the lower parts of the linear actuators through universal joints, are placed on a semicircle on the fixed platform. The spherical joints connect the upper parts of the linear actuators to the moving platform [32,34–40]. Each of the three legs is equipped with a rotary actuator located on the base platform, consisting of a stepper motor followed by a high-ratio gearbox that amplifies the shaft torque. The gearbox shaft is connected to the lower part of the leg by a revolute joint made of a miniature needle bearing. Linear actuation at each leg is provided by a ball screw powered by a stepper motor.
Finally, the spherical joints connect the upper parts of the legs to the moving platform. Fig. 30.2 illustrates the robot setup for a patient in the lateral position. A stand was also designed to fit over the surgical table. This provides a valuable advantage in fracture reduction procedures, because the surgeon gains better vision of and access to the patient's leg. The robot's open-ring structure enables it to easily embrace and manipulate long bones, while also providing free range of motion for varying surgical maneuvers [32,34–37]. Robossis also features a precise gripping mechanism for adhering to the fractured bone fragments. The mechanisms are attached to both rings and hold half-pins that grip the bone fragments; they can exert the heavy forces required to reduce both femur and hip fractures. These gripping mechanisms can be repositioned on the ring depending on the surgical approach. Robossis thus accommodates multiple surgical approaches through its adjustable top leg, gripping mechanisms, and half-pins. Using its own imaging technology, Robossis will realign the bone fragments into correct anatomical alignment and hold them in position for the duration of the surgery. Its imaging technology creates new avenues of preoperative planning and intraoperative image guidance through its unique software.
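The leg geometry described above follows the standard UPS (universal, prismatic, spherical) pattern, so the prismatic part of the inverse kinematics can be sketched with the classic closure equation l_i = |p + R·b_i − a_i|. The sketch below is a minimal illustration, not the Robossis implementation: the attachment angles are hypothetical, and the solution for each leg's rotary-actuator angle (derived in [32]) is omitted.

```python
import numpy as np

def leg_lengths(p, rpy, base_pts, platform_pts):
    """Prismatic leg lengths of a parallel platform: l_i = |p + R(rpy) b_i - a_i|.
    p: (3,) moving-ring center; rpy: roll, pitch, yaw in radians;
    base_pts (a_i) and platform_pts (b_i): (n, 3) leg anchors."""
    r, q, y = rpy
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(q), 0, np.sin(q)], [0, 1, 0], [-np.sin(q), 0, np.cos(q)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    return np.linalg.norm(p + platform_pts @ (Rz @ Ry @ Rx).T - base_pts, axis=1)

# Anchors on a semicircle (the wide-open layout); ring radii from Section 30.3,
# attachment angles are illustrative assumptions.
g, h = 0.156, 0.153
ang = np.deg2rad([30.0, 90.0, 150.0])
base = np.c_[g * np.cos(ang), g * np.sin(ang), np.zeros(3)]
plat = np.c_[h * np.cos(ang), h * np.sin(ang), np.zeros(3)]
lengths = leg_lengths(np.array([0.0, 0.0, 0.30]), (0.0, 0.0, 0.0), base, plat)
print(lengths)   # all three legs ~0.300 m at this centered pose
```

Solving this closure equation for each commanded pose is what turns the surgeon's six-DoF jog commands into actuator set points.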

30. Robossis Surgical Robot

FIGURE 30.1 Robossis shown gripping a fractured femur saw bone.


Handbook of Robotic and Image-Guided Surgery

FIGURE 30.2 Illustration of the robot for long-bone fracture reduction application. The patient is in a lateral position. The robot has six degrees of freedom. It has a full-frontal open surface which provides a large operational field for the surgeon.

FIGURE 30.3 Control panel of the robot. The user can adjust the motion parameters from point 1 to point 2, including the time interval, translational and rotational step sizes, and so on. The robot is manipulated by pushing the plus or minus button for any of the six translational or rotational movements.

A graphical user interface (Fig. 30.3) was developed to assist the user during the operation. In the graphical interface, the user can easily adjust the motion parameters from point 1 to point 2, including the time interval and the translational and rotational step sizes. The user can then manipulate the robot by pushing the plus or minus button for any of the six translational or rotational movements. The surgeon manipulates the robot from a workstation, which includes a control panel and a visual aid. The user can also control Robossis using a master controller. The force feedback generated by the master controller provides a smooth trajectory and prevents the user from moving Robossis outside its surgical workspace (Fig. 30.4).
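The workspace-limiting force feedback can be illustrated with a minimal virtual-fixture sketch: no force inside the surgical workspace, and a spring force pushing the master back once it crosses the boundary. The spherical boundary, radius, and stiffness below are illustrative assumptions, not Robossis parameters.

```python
import numpy as np

def boundary_force(pos, center, radius, k=500.0):
    """Virtual-fixture force: zero inside a spherical workspace, and a spring
    force (stiffness k, in N/m) pushing back toward the boundary outside it.
    The sphere is a stand-in for the robot's true surgical workspace."""
    d = np.asarray(pos, float) - center
    dist = np.linalg.norm(d)
    if dist <= radius or dist == 0.0:
        return np.zeros(3)
    return -k * (dist - radius) * d / dist   # directed back toward the center

center = np.zeros(3)
print(boundary_force([0.05, 0.0, 0.0], center, radius=0.1))  # inside: no force
print(boundary_force([0.15, 0.0, 0.0], center, radius=0.1))  # 5 cm outside: ~25 N back
```

In a real haptic loop this force would be rendered on the master device at the servo rate, so the user feels a wall rather than a hard stop.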

30.3 Comparison with the Gough–Stewart platform

In comparison with the well-known Gough–Stewart platform, one rotary joint of each passive universal joint is replaced with an active joint, reducing the number of legs from six to three. This change makes the mechanism lighter, since the rotary actuators rest on the fixed platform, allowing higher accelerations due to smaller inertial effects. More importantly, in the proposed parallel manipulator the legs are configured nonsymmetrically on a semicircle on the base and moving platforms. This frontally wide-open architecture enables the mechanism to embrace and manipulate column-shaped objects. The applications of such a mechanism are versatile, from fracture reduction of long bones in surgical robotics to column-climbing robots in industrial robotics. To investigate the kinematic performance of the proposed mechanism, it has been compared with the well-known Gough–Stewart platform, as well as with its redundant four-legged counterpart (Fig. 30.5). The responses of the mechanisms are compared in several aspects, including workspace, singularity analysis, dynamic analysis, and load-carrying capacity. The following parameters were used for all the simulations. The radii of the fixed and moving platforms are, respectively, g = 0.156 m and h = 0.153 m. The half-lengths of the lower and upper parts of the legs are, respectively, e1 = 0.135 m and e2 = 0.032 m. The masses of the lower and upper parts of the legs are, respectively, m1 = 1.615 kg and m2 = 0.147 kg. The mass of the moving platform is mp = 0.681 kg.

30.3.1 Workspace

Assuming the kinematic constraints d_min = e1 and d_max = 2(e1 + e2), with the spherical joint rotations bounded by 50°, we are interested in determining the volume of space that each mechanism can reach. The results, illustrated in Fig. 30.6, indicate that the three- and four-legged mechanisms have much larger workspaces than the Gough–Stewart platform; the workspace of the three-legged mechanism is 91% larger than that of the Gough–Stewart. The reason is that in six-legged Stewart-like UPS mechanisms the workspace is constructed by the intersection of six spheres, whereas in the three- and four-legged UPS mechanisms it is constructed by the intersection of only three or four spheres. Assuming similar dimensions for the mechanisms, a larger workspace is therefore expected for the three- and four-legged designs. On the other hand, as seen in the figure, adding one leg to the basic three-legged mechanism reduces the workspace by about 2%. The quality of the workspaces, however, is not the same: although the redundant mechanism has a slightly smaller workspace, it contains far fewer singular configurations than the nonredundant mechanism and requires lower actuator forces and torques.

30.3.2 Singularity analysis

Singularity analysis of parallel manipulators poses significantly more complicated problems than that of serial mechanisms. We use the inclusive orientation workspace, in which the moving platform at every position on a fixed surface is rotated through every possible orientation, to determine whether a configuration is singular [37]. Fig. 30.7 illustrates the results obtained for the mechanisms under study. The moving platform was rotated simultaneously in three directions according to the roll–pitch–yaw Euler angles; if the mechanism did not encounter any singular configuration over the 20° rotations, the position was assumed to be nonsingular. Because more singular areas occur at lower z values, we chose z = 0.1 m for the singularity analysis. As seen in Fig. 30.7, the Gough–Stewart and three-legged mechanisms exhibit many singular configurations, whereas the four-legged mechanism encounters none. The nonsingular areas for the four-legged, three-legged, and Gough–Stewart mechanisms are 3894, 707, and 633 cm², respectively. These results demonstrate the great effect of a simple redundancy: adding one leg to the three-legged mechanism removes a large number of singular configurations.

30.3.3 Singularity effects on actuator forces and torques

To analyze the singularity effects on the dynamic responses of the mechanisms, we solved the inverse dynamic problem for different paths. In all the simulations, the orientation of the moving platform is fixed at 12° for each of the three Euler angles. The nonsingular area for this orientation at z = 0.1 m is plotted in the top plots of Fig. 30.8.

FIGURE 30.4 Multiple views (total view, top view, and two side views) of the surgical workspace. Graphical representation of the position of the master robot inside the workspace. Force feedback on the master robot prevents the user from moving the Robossis outside the boundaries of the workspace.


FIGURE 30.5 Schematics of the three- and four-legged nonredundant and redundant mechanisms and the Gough–Stewart platforms. The moving platform of each mechanism has six degrees of freedom. The mechanisms differ in the structure, number, and attachment positions of their legs. The three- and four-legged mechanisms have two active joints in each leg (one rotary and one linear), while the Gough–Stewart platforms have only a linear actuator in each leg. The leg structure and attachment angles (γi's) of the target mechanisms and Gough–Stewart platforms are shown. The four- and three-legged mechanisms use the same leg attachment angles on their moving and fixed platforms, whereas these angles differ in the Gough–Stewart platform.

FIGURE 30.7 Results of the singularity analysis on the z = 0.1 m plane for the mechanisms under study. At each position the moving platform is rotated 20° simultaneously around the three rotation axes.


FIGURE 30.6 Workspaces of the mechanisms under the kinematic constraints of minimum and maximum actuator lengths and bounded spherical joint rotations. The three- and four-legged mechanisms have much larger workspaces than the Gough–Stewart platform.
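The sphere-intersection argument behind Fig. 30.6 can be reproduced with a small Monte-Carlo sketch: a point belongs to the constant-orientation workspace when every leg length falls within [d_min, d_max], so adding legs can only shrink the feasible set. The anchor angles and the platform twist below are illustrative assumptions, not the geometries studied in the chapter.

```python
import numpy as np

g, h, e1, e2 = 0.156, 0.153, 0.135, 0.032      # radii and leg half-lengths from the text
d_min, d_max = e1, 2 * (e1 + e2)               # leg-length limits from the workspace study

def ring(angles_deg, radius, twist_deg=0.0):
    a = np.deg2rad(np.asarray(angles_deg, float) + twist_deg)
    return np.c_[radius * np.cos(a), radius * np.sin(a), np.zeros(len(a))]

def mc_volume(angles_deg, n=100_000):
    """Monte-Carlo estimate of the constant-orientation position workspace:
    platform centers p with every leg length |p + b_i - a_i| in [d_min, d_max].
    The generator is reseeded so each leg count sees identical samples."""
    rng = np.random.default_rng(0)
    a = ring(angles_deg, g)                     # fixed-ring anchors
    b = ring(angles_deg, h, twist_deg=-15.0)    # moving-ring anchors (illustrative twist)
    p = rng.uniform([-0.2, -0.2, 0.0], [0.2, 0.2, 0.45], size=(n, 3))
    lengths = np.linalg.norm(p[:, None, :] + b - a, axis=2)
    ok = ((lengths >= d_min) & (lengths <= d_max)).all(axis=1)
    return ok.mean() * (0.4 * 0.4 * 0.45)       # hit fraction x sampling-box volume

three = [30, 90, 150]                           # hypothetical semicircular layout
four, six = three + [210], three + [210, 260, 310]
v3, v4, v6 = mc_volume(three), mc_volume(four), mc_volume(six)
print(f"3 legs: {v3*1e3:.1f} L   4 legs: {v4*1e3:.1f} L   6 legs: {v6*1e3:.1f} L")
```

Because the anchor sets are nested here, each added leg only adds constraints, so the estimated volumes are necessarily nonincreasing, mirroring the trend reported in the chapter.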


FIGURE 30.8 Singularity effects on the dynamic responses of the mechanisms. In all the simulations, the orientation of the moving platform is fixed at 12° for each of the three Euler angles. The center of the moving platform follows the three paths shown. The time history of the power consumption for the three paths is plotted. Path 3 crosses the singularity line for both the three-legged and Gough–Stewart mechanisms, while the four-legged mechanism enjoys a singularity-free plane. As expected, the power consumption tends to infinity when the singularity line is crossed, at points 2 and 3 for the three-legged mechanism and at points 5 and 6 for the Gough–Stewart. The unique performance of the four-legged mechanism is evident in all paths. The time history of the actuator forces and torques for path 3 is also shown.


FIGURE 30.9 Dynamic performance index of the mechanisms at z = 0.3 m.

As predicted from Fig. 30.5, the four-legged mechanism has a singularity-free plane, whereas the three-legged and Gough–Stewart mechanisms encounter some singular points. Next, we defined three paths, shown in Fig. 30.8, for the center of the moving platform to follow. The time history of the power consumption for the three paths is plotted in the figure. For path 1, the three-legged mechanism has a somewhat lower mean power consumption than the four-legged mechanism, and a significantly lower one than the Gough–Stewart; none of the mechanisms encounters any singular points on this path. Path 2 has a higher amplitude than path 1, and the three-legged mechanism approaches the singular area at point 1 in the figure, producing a jump in its power consumption plot. Notably, the mean power consumption of the three-legged mechanism is still lower than that of the Gough–Stewart, even though the Gough–Stewart does not come close to any singularity on path 2. Path 3 crosses the singularity line for both the three-legged and Gough–Stewart mechanisms, while the four-legged mechanism enjoys a singularity-free plane. As expected, the power consumption tends to infinity when the singularity line is crossed, at points 2 and 3 for the three-legged mechanism and at points 5 and 6 for the Gough–Stewart. Point 4 is tangent to the singularity line, where a peak occurs in the power consumption plot. The unique performance of the four-legged mechanism is evident in path 3. The time history of the actuator forces and torques for path 3 is also shown in the figure.
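The blow-up of actuator effort at a singularity follows directly from the static force transmission f = J^(−T)·w: as the Jacobian loses rank, some actuator forces grow without bound. A toy two-DoF sketch (hypothetical Jacobian, not the Robossis model) makes this visible.

```python
import numpy as np

def actuator_forces(J, wrench):
    """Static force transmission: an external wrench w maps to actuator
    forces f = J^{-T} w, so |f| blows up as the Jacobian becomes singular."""
    return np.linalg.solve(J.T, wrench)

w = np.array([100.0, 50.0])                # toy planar wrench [N]
for eps in (1.0, 0.1, 0.01, 0.001):        # eps -> 0 drives the pose singular
    J = np.array([[1.0, 0.0],
                  [1.0, eps]])             # rows become nearly parallel as eps -> 0
    print(f"eps={eps:<6} |f| = {np.linalg.norm(actuator_forces(J, w)):12.1f} N")
```

Since actuator power is force times joint velocity, the same divergence appears in the power-consumption plots of Fig. 30.8 whenever a path touches the singularity line.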

30.3.4 Dynamic performance index

The dynamic performance index is defined as a criterion for measuring the dynamic isotropy of a manipulator, 1/(||J^(−T)·M|| × ||(J^(−T)·M)^(−1)||) [41], where J is the Jacobian matrix and M denotes the mass matrix. Fig. 30.9 shows the dynamic performance index of the three mechanisms at z = 0.3 m. As seen in the figure, the four-legged mechanism has the best overall performance, while the three-legged mechanism performs best near the center.
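Since 1/(||A|| × ||A^(−1)||) is the reciprocal condition number of A = J^(−T)·M, the index is straightforward to evaluate numerically. A minimal sketch with toy matrices:

```python
import numpy as np

def dynamic_performance_index(J, M):
    """DPI = 1 / (||A|| * ||A^-1||) with A = J^{-T} M, i.e. the reciprocal
    condition number of A; 1 means dynamic isotropy, values near 0 mean the
    configuration is close to singular."""
    A = np.linalg.solve(J.T, M)            # J^{-T} M without forming the inverse
    return 1.0 / (np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2))

M = np.diag([2.0, 2.0])                                    # toy mass matrix
print(dynamic_performance_index(np.eye(2), M))             # isotropic case
print(dynamic_performance_index(np.array([[1.0, 0.0],
                                          [1.0, 0.05]]), M))  # near-singular case
```

Sweeping this scalar over a grid of platform positions is how a map like Fig. 30.9 is produced.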

30.3.5 Dynamic load carrying capacity

Dynamic load carrying capacity (DLCC) is the maximum load that a manipulator can lift repeatedly in its fully extended configuration, while the dynamics of both the load and the manipulator itself are considered [42,43]. Because the mechanism must exert large forces and torques, knowing the maximum load-carrying capacity is important to ensure high accuracy and proper robotic function [44]. The load-carrying capacity was analyzed at each point along the designated trajectory, path 1 of Fig. 30.8, to determine the maximum load. As seen in Fig. 30.10, the four-legged mechanism has significantly better DLCC values than the other mechanisms. In conclusion, the above results indicate that, from a design point of view, replacing the passive universal joints of the Stewart platforms with active joints allows the number of legs to be reduced from six to three or four. This makes the mechanism lighter, since the rotary actuators rest on the fixed platform, which allows higher accelerations to be achieved due to smaller inertial effects.
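One common way to compute a DLCC-style bound is to bisect on the payload mass, accepting a mass only if the worst-case actuator force along the path stays within the actuator limit. The sketch below reuses the 280 N actuator limit quoted in Section 30.2, but the force model is a stand-in, not the Robossis dynamics:

```python
import numpy as np

F_MAX = 280.0                                    # actuator force limit [N], Section 30.2

def required_forces(payload_kg, s):
    """Stand-in model (not the Robossis dynamics): actuator force along the
    path parameter s grows affinely with the carried payload."""
    base = 60.0 + 40.0 * np.sin(2 * np.pi * s)   # robot's own dynamic load
    return base + 9.81 * payload_kg * (1.0 + 0.5 * np.cos(2 * np.pi * s))

def dlcc(path=np.linspace(0.0, 1.0, 201), lo=0.0, hi=100.0, tol=1e-3):
    """Bisect for the largest payload whose worst-case actuator force stays
    within F_MAX over the whole path (feasibility is monotone in the mass)."""
    feasible = lambda m: max(required_forces(m, s) for s in path) <= F_MAX
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo

print(f"DLCC under the toy model: {dlcc():.2f} kg")
```

Replacing `required_forces` with the mechanism's inverse dynamics turns this into the per-point analysis used for Fig. 30.10.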

30.4 Experimental testing

30.4.1 Trajectory tracking

In the first step, trajectory tracking experiments were performed to verify the robot's kinematic model and assess the effects of friction and backlash. In the tests, a desired trajectory was defined for the end-effector to follow.


FIGURE 30.10 Dynamic load carrying capacity (DLCC) of the three mechanisms along path 1 of Fig. 30.8.

FIGURE 30.11 Experimental data (red) in comparison with the predefined motion (black). The square size is 170 mm by 170 mm.

Then the corresponding joint trajectory for each actuator was computed using the inverse kinematics equations, generating the position data of the actuators. The experiments produced reasonably acceptable outcomes, shown in Fig. 30.11. The robot was capable of moving the end-effector along a relatively large square path (170 mm × 170 mm, i.e., an 85 mm half-width) centered on a predefined virtual point (see Fig. 30.11). The maximum deviation from the reference trajectory was 3%, which is satisfactory. This error is mainly due to clearance in the joints and could be reduced by using more precise passive joints.
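The deviation figure can be reproduced by comparing a measured path against the reference square and taking the worst point-to-path distance. In this sketch the "measured" data are synthetic noise added to the reference, and the percentage is assumed to be relative to the 170 mm side length:

```python
import numpy as np

def square_path(half=0.085, n_side=50):
    """Reference square of side 2*half [m], traversed counterclockwise."""
    t = np.linspace(-half, half, n_side)
    sides = [np.c_[t, np.full(n_side, -half)],
             np.c_[np.full(n_side, half), t],
             np.c_[t[::-1], np.full(n_side, half)],
             np.c_[np.full(n_side, -half), t[::-1]]]
    return np.vstack(sides)

def max_deviation_pct(measured, reference, scale):
    """Worst distance from each measured sample to the nearest reference
    sample, as a percentage of a chosen scale (here the 170 mm side)."""
    d = np.linalg.norm(measured[:, None, :] - reference[None, :, :], axis=2)
    return 100.0 * d.min(axis=1).max() / scale

ref = square_path()
rng = np.random.default_rng(1)
measured = ref + rng.normal(0.0, 0.0008, ref.shape)   # synthetic 0.8 mm tracking noise
print(f"max deviation = {max_deviation_pct(measured, ref, 0.170):.2f}% of side length")
```

With real tracker samples in place of the synthetic noise, this is the metric behind the quoted 3% figure.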

30.4.2 Surgical workspace

The main goal of the workspace testing was to use the tracking software to follow the markers located on the rings of Robossis and determine its experimental workspace. During the experiments, an optical stereoscopic vision system was used to track the robot's end-effector. The setup included two 3D cameras that detect a set of markers attached to the moving platform, giving its position and orientation with respect to a fixed Cartesian coordinate system defined by another set of markers attached to the frame. For this test, the markers were placed at specific locations and orientations on the fixed and moving rings of Robossis, with the two infrared cameras placed above Robossis at a given angle. To test the experimental workspace of Robossis, a rigid body was created in the software to track and collect data. The control panel for Robossis is designed to move the robot in six DoFs (x, y, z, alpha, beta, and gamma). It also includes the step ratio, rotation angle, and time interval, which can be adjusted to the user's needs to increase accuracy and precision; the control panel's main purpose is to make it easier for the user to manipulate a fracture. In the experimental tests, the workspace of the robot was measured by running its actuators in different directions to the end of the moving range and recording the end-effector position with the optical tracker. The experimental results yield a workspace large enough for surgical maneuvers. The trajectory tracking experiments verify the robot's kinematic model and assess the effects of friction and backlash. The distribution of the workspace deviation from the theoretical calculations can be seen in Fig. 30.12. The average displacement error is 0.324 mm, which is highly acceptable.
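Recovering the moving ring's pose from the marker sets is a standard least-squares rigid registration (the Kabsch/SVD method); the tracker's own algorithm is not described here, so the sketch below, with a hypothetical marker layout, is only illustrative:

```python
import numpy as np

def rigid_pose(model_pts, observed_pts):
    """Least-squares rigid transform (Kabsch): returns R, t such that
    observed ~= model @ R.T + t. Both inputs are (n, 3) marker sets."""
    mc, oc = model_pts.mean(0), observed_pts.mean(0)
    H = (model_pts - mc).T @ (observed_pts - oc)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    return R, oc - R @ mc

# Hypothetical marker geometry on the moving ring, then a known test motion
markers = np.array([[0.05, 0, 0], [0, 0.05, 0], [-0.05, 0, 0], [0, 0, 0.03]])
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.01, -0.02, 0.30])
R_est, t_est = rigid_pose(markers, markers @ R_true.T + t_true)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))   # True True
```

Running this fit on the fixed-ring markers as well expresses the moving ring's pose in the frame-defined coordinate system, which is exactly what the workspace measurements require.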

30.4.3 Force testing

FIGURE 30.12 Theoretical workspace (red lines) compared with experimental boundaries (circles). The experiments verify the theoretical workspace with high accuracy.

A force upward of 400 N is required to reduce femur fractures. This immense force is created by the contraction of the leg muscles, which are pretensioned around the femur. To successfully complete a reduction of the femur, the bone must be manipulated and returned to its correct anatomical position, all while a large force is being exerted. Robossis aims to make the reduction easier by allowing the surgeon to effortlessly manipulate the bone fragments using the robot, with a greater degree of precision during the manipulation. To demonstrate Robossis' capability to reduce femoral fractures, force testing must be performed. Force testing verifies that Robossis can exert the large force required at all positions throughout its workspace, and that it can exert this force accurately to ensure a good reduction. The ability of Robossis to exert this force is one of its greatest value propositions, making verification of the forces an important aspect of the system. To measure the force Robossis can exert, there must be a resistive force for the robot to act against and a gauge to read the force the robot is exerting. In surgery, this resistance comes from the contraction of the muscles surrounding the femur; for this experiment, however, springs were used as an analog for the muscles. To simulate these forces, a force-testing rig was established. The rig can be seen in Fig. 30.1 and consists of two springs and two force gauges connected to the proximal and distal half-pins of Robossis. Two force gauges are required, one for each set of half-pins. The force gauges had to be small enough to fit inside the rings of Robossis without completely blocking the view of the reduction; they also had to be digital and able to read over 400 N. The springs run from the distal half-pin to a force gauge connected to the proximal half-pin, exerting a compressive force in the z-direction of Robossis. Since the vast majority of the force exerted by the leg muscles is compressive and in the z-direction, the springs in this arrangement are a good analog. For initial force testing, two conditions were chosen. The first was a set of springs exerting a force of approximately 100 N in the home position of Robossis (275 mm in the z-direction); the second, a set of springs exerting approximately 200 N in the same position. These conditions were chosen because they correspond to 25% and 50% of the maximum force required to conduct a femur reduction, and thus served as a good starting point. It is also important that the springs exert these forces at the home position, so that testing can be conducted at positions short of the home position; in addition, the increase in force as the femur fragments are manipulated and spread apart (away from their anatomical position) is accurately represented by the springs. To create these two conditions, two sets of springs were required. The springs have an initial length of 6 in., meaning they are extended approximately 2 in. at the home position. Spring constants of 6 and 12 lb/in. were calculated for the 100 and 200 N tests, respectively.
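The quoted spring constants are consistent with the force targets if the two springs act in parallel over the roughly 2 in. stretch, an assumption made explicit in this unit-conversion sketch:

```python
# Spring sizing for the force-testing rig. Assumption (not stated explicitly in
# the text): the two identical springs act in parallel, which is how the quoted
# 6 and 12 lb/in. constants reproduce the 100 N and 200 N targets.
LB_TO_N = 4.44822     # pounds-force to newtons
IN_TO_M = 0.0254      # inches to meters

def rig_force(k_lb_per_in, stretch_in, n_springs=2):
    """Total restoring force [N] of n identical parallel springs of stiffness
    k [lb/in.] stretched by stretch_in [in.]."""
    k_si = k_lb_per_in * LB_TO_N / IN_TO_M       # stiffness in N/m
    return n_springs * k_si * stretch_in * IN_TO_M

print(f"k = 6 lb/in.  -> {rig_force(6, 2):.0f} N")    # ~100 N condition
print(f"k = 12 lb/in. -> {rig_force(12, 2):.0f} N")   # ~200 N condition
```

The small overshoot above the nominal 100/200 N targets reflects rounding in the chosen catalog spring constants.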
The force gauge was connected to the proximal half-pin by sliding the half-pin through one of the holes on the force gauge normally used to accept a measuring hook. On either side of the force gauge, a small Taylor spatial frame post was attached; these two posts prevent the force gauge from sliding along the half-pin. The spring was then hooked to this clasp and to a larger post attached to the distal half-pin. Fig. 30.13 shows how these components were attached for preliminary force testing. Preliminary data demonstrate Robossis' capability to accurately and precisely realign a spiral femoral fracture on a sawbone. As testing continues, with fracture types measured and classified with the help of the AO Foundation, numerous fracture types will be studied to ensure Robossis' seamless integration into the operating room regardless of fracture type. Beyond fracture type and muscle simulation, testing with cadaver bones is needed to accurately understand how the mechanical properties of bone will interact with Robossis.

FIGURE 30.13 High-stiffness rubber bands that simulate the effects of muscular tissues are added to the system. The desired movements were accomplished with high precision, resulting in a complete fracture reduction.

30.5 Conclusion and future work

Reduction is a crucial step in the surgical treatment of bone fractures to achieve anatomical alignment and facilitate healing. A six-DoF three-legged parallel mechanism was proposed for computer-assisted fracture reduction of long bones. The feasibility of the system for practical use was evaluated in experimental tests on a phantom, which included a plastic model of a midshaft broken femur and high-stiffness rubber bands that simulated the effects of muscular tissues. During the experiments, the system proved capable of applying the desired reduction steps against the large muscular payloads with high accuracy. Moreover, an optical tracker was used to measure the workspace of the robot by running its actuators in different directions within the moving range; the results indicated a workspace adequate for fracture reduction maneuvers. Trajectory tracking experiments were also performed to verify the robot's kinematic model and assess the effects of friction and backlash. It was concluded that the system has the potential to be used clinically to improve the quality of fracture reduction without repetitive manipulations and to reduce the radiation exposure of the operating staff and patients. Having successfully evaluated the function of the prototype and improved its design, we intend to transition to testing the new robot design on cadavers, and ultimately to clinical testing. The major goal is to compare how use of the robot affects operating times, rates of nonunion, and radiation exposure relative to current surgical methods. Development of the control panel will occur simultaneously, using surgeons' feedback to make it as surgeon-friendly as possible. We also expect to develop a master robot through which the surgeon directly controls Robossis.
The other aspect of this project is the development of unique imaging software that uses data from current imaging modalities, such as CT and fluoroscopy, to create a 3D replication of the fracture and a predetermined path for its reduction. Completion of this phase would deliver software to analyze, display, and plan the reduction of femoral shaft fractures. Future research will concentrate on path planning, so that the reduction occurs along a physiological path, reduces the risk of additional breakage at the fracture site, and does not violate the workspace of the robot. Navigation feedback will be incorporated into a surgeon console to provide visual feedback.
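A first cut at such a reduction path is simple pose interpolation between the displaced and the planned anatomical fragment pose: linear in position and spherical-linear (slerp) in orientation. This sketch is a generic illustration, not the planned Robossis software:

```python
import numpy as np

def slerp(q0, q1, u):
    """Spherical linear interpolation between unit quaternions q0, q1 at u in [0, 1]."""
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:                       # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                    # nearly parallel: fall back to lerp
        q = q0 + u * (q1 - q0)
        return q / np.linalg.norm(q)
    th = np.arccos(dot)
    return (np.sin((1 - u) * th) * q0 + np.sin(u * th) * q1) / np.sin(th)

def reduction_path(p0, q0, p1, q1, steps=10):
    """Discrete pose sequence from the displaced fragment pose (p0, q0)
    to the planned anatomical pose (p1, q1)."""
    return [(p0 + u * (p1 - p0), slerp(q0, q1, u)) for u in np.linspace(0, 1, steps)]

# Toy example: 20 mm translation plus a 30 degree axial rotation
q_id = np.array([1.0, 0.0, 0.0, 0.0])                          # w, x, y, z
q_rot = np.array([np.cos(np.pi / 12), 0, 0, np.sin(np.pi / 12)])  # 30 deg about z
path = reduction_path(np.zeros(3), q_id, np.array([0.0, 0.0, 0.02]), q_rot)
print(len(path), path[-1][0], path[-1][1])
```

A physiological planner would additionally weight each intermediate pose against soft-tissue strain and the robot's workspace limits rather than interpolating uniformly.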

References


[1] Kaye JA, Jick H. Epidemiology of lower limb fractures in general practice in the United Kingdom. Inj Prev 2004;10(6):368 74. [2] Hung SS, Lee MY. Functional assessment of a surgical robot for reduction of lower limb fractures. Int J Med Robot Comput Assist Surg 2010;6(4):413 21. [3] Tang P, Hu L, Du H, Gong M, Zhang L. Novel 3D hexapod computer-assisted orthopaedic surgery system for closed diaphyseal fracture reduction. Int J Med Robot Comput Assist Surg 2012;8(1):17 24. [4] Wolinsky PR, McCarty E, Shyr Y, Johnson K. Reamed intramedullary nailing of the femur: 551 cases. J Trauma Acute Care Surg 1999;46 (3):392 9. [5] Gosling T, Westphal R, Hufner T, Faulstich J, Kfuri M, Wahl F, et al. Robot-assisted fracture reduction: a preliminary study in the femur shaft. Med Biol Eng Comput 2005;43(1):115 20. [6] Buschbaum J, Fremd R, Pohlemann T, Kristen A. Computer-assisted fracture reduction: a new approach for repositioning femoral fractures and planning reduction paths. Int J Comput Assist Radiol Surg 2015;10(2):149 59. [7] Fu¨chtmeier B, Egersdoerfer S, Mai R, Hente R, Dragoi D, Monkman G, et al. Reduction of femoral shaft fractures in vitro by a new developed reduction robot system ‘RepoRobo’. Injury 2004;35:S-A113 9. [8] Oszwald M, Ruan Z, Westphal R, O’loughlin PF, Kendoff D, Hufner T, et al. A rat model for evaluating physiological responses to femoral shaft fracture reduction using a surgical robot. J Orthop Res 2008;26(12):1656 9. [9] Hawi N, Haentjes J, Suero EM, Liodakis E, Krettek C, Stu¨big T, et al. Navigated femoral shaft fracture treatment: current status. Technol Health Care 2012;20(1):65 71. [10] Westphal R, Winkelbach S, Go¨sling T, Hu¨fner T, Faulstich J, Martin P, et al. A surgical telemanipulator for femur shaft fracture reduction. Int J Med Robot Comput Assist Surg 2006;2(3):238 50. [11] Westphal R, Winkelbach S, Wahl F, Go¨sling T, Oszwald M, Hu¨fner T, et al. Robot-assisted long bone fracture reduction. Int J Robot Res 2009;28(10):1259 78. 
[12] Oszwald M, Westphal R, Bredow J, Calafi A, Hufner T, Wahl F, et al. Robot-assisted fracture reduction using three-dimensional intraoperative fracture visualization: an experimental study on human cadaver femora. J Orthop Res 2010;28(9):1240 4. [13] Li C, Wang T, Hu L, Zhang L, Du H, Wang L, et al. Accuracy analysis of a robot system for closed diaphyseal fracture reduction. Int J Adv Robot Syst 2014;11(10):169. [14] Seide K, Faschingbauer M, Wenzl ME, Weinrich N, Juergens C. A hexapod robot external fixator for computer assisted fracture reduction and deformity correction. Int J Med Robot Comput Assist Surg 2004;1(1):64 9.

528

Handbook of Robotic and Image-Guided Surgery


31 EOS Imaging: Low-Dose Imaging and Three-Dimensional Value Along the Entire Patient Care Pathway for Spine and Lower Limb Pathologies

Jamie Milas, Mathieu Garayt, Elena Oriot and Joe Hobeika
EOS Imaging, Paris, France

ABSTRACT EOS is a full imaging solution for orthopedics combining safe imaging, complete and precise image-based data, and surgical planning based on the patient-specific three-dimensional (3D) anatomy. The EOS system provides low-dose, full-body, stereo-radiographic images of patients in a functional (standing, sitting, bending) position. In a few seconds, the EOS exam produces two simultaneous frontal and lateral low-dose images of the whole body or of an anatomical segment. Dose reduction is in the range of 50%-85% with respect to digital and computed radiography. Through the sterEOS workstation or via online 3D services, a patient-specific 3D model of the spine, pelvis, and lower limb can then be developed from EOS biplanar images. The 3D model embeds numerous anatomical landmarks that allow the fully automated measurement of more than 100 key clinical parameters for the diagnosis, surgical planning, or control of spine, hip, and knee pathologies. The imaging solution is completed with online surgical applications that provide 3D preoperative planning and control based on the 3D patient-specific model. hipEOS and kneeEOS are applications for total hip and total knee replacement that allow surgeons to plan the size and position of the implants, as well as bone resection levels, according to the patient's anatomy and functional positions (standing, sitting, etc.). spineEOS is an application for surgical planning and control for patients suffering from degenerative or deformative spine conditions. In all online 3D applications, the clinical parameters resulting from the planning are automatically calculated and displayed in 3D in real time in order to evaluate the adequacy of the implant position and the restoration of patient alignment in the sagittal, coronal, and axial planes.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00031-1 © 2020 Elsevier Inc. All rights reserved.


31.1 Introduction

The EOS platform is a unique combination of low-dose 2D/3D imaging technology, software, and services that add value at each step of the musculoskeletal patient care pathway. From diagnosis to long-term follow-up, an EOS exam provides full-body 2D/3D images and patient-specific data sets to plan and control surgeries based on 3D anatomical models of the patient in a weight-bearing position. Section 31.2 provides a technological description of the EOS acquisition system and its clinical benefits, such as the ability to provide frontal and lateral full-body images at very low dose, without magnification or stitching, in a single acquisition. This imaging system combines numerous benefits required for orthopedic imaging. Section 31.3 is dedicated to sterEOS. The sterEOS workstation creates full-length 3D models of a patient's skeleton and calculates more than 100 clinical parameters for spine, pelvis, and lower limb orientation for diagnosis, preoperative planning, postoperative analysis, and long-term follow-up. Section 31.4 covers the EOSapps: online 3D surgical planning software solutions based on EOS biplanar stereo-radiographic images and 3D data sets for the spine, hip, and knee.

31.2 EOS acquisition system

The EOS system is designed around a vertically traveling arm supporting two image acquisition systems mounted at right angles. Each acquisition system comprises an X-ray tube and a linear detector.

31.2.1 System description

Fig. 31.1 shows a potential EOS setup including all necessary components. The patient area corresponds to the specified radiation area. The gantry contains two X-ray tubes, two X-ray detectors, and all the mechanical and electrical components for acquiring radiographic images of the patient. The generator cabinet contains two generators that supply high voltage to each of the two X-ray tubes (frontal and lateral). The electrical cabinet enables the operator to power the system up or down, as well as the computer controlling it. A laser-positioning device is also included; it is installed in the EOS room, facing the gantry, to assist in the display and definition of the acquisition area. EOS is delivered with a removable platform used to elevate the patient in the cabin to a height that enables most of the population to have a full-body scan, from the top of the head to the feet. However, the access steps can be removed and the platform taken out for very tall patients or those who are too unstable to step up. Patient stabilization accessories are available in the EOS cabin (Fig. 31.2):

- a posture stabilization device, to be used for spine or whole-body examinations;
- a stabilization bar, to be used for examinations of the lower limbs.

FIGURE 31.1 Typical EOS room.

EOS Imaging Chapter | 31


FIGURE 31.2 Patient stabilization accessories.

FIGURE 31.3 Slot-scanning technology principle.

31.2.2 Benefits of slot-scanning weight-bearing technology

Linear slot-scanning radiography is a method of taking X-ray images in which the X-ray source and detector move along the patient, or region of interest, during image acquisition. For the EOS imaging system, the detector aperture is limited to 500 µm. This contrasts with conventional or digital radiography systems, where a static source and detector are used with a limited field of view. The result is the ability to perform full-body imaging (localized images as well), to obtain X-ray images without magnification, and to minimize the radiation dose (Fig. 31.3).

1. Full-body imaging capability
Continuous acquisition results in a very large field of view. With such a technology, the user only needs to define the start and stop positions of the field of view. The result is images up to 44.8 cm (17.3 in.) × 180 cm (71 in.). Stitching is never required, nor are there any of the related distortions that could lead to misinterpretations.


The posture stabilization device stabilizes the patient by proprioceptive contact at the head for examinations of the spine and the full body, in the antero-posterior (AP) or postero-anterior (PA) position. The radiolucent stabilization bar helps the patient stand still during the acquisition of the lower limbs, particularly in the AP position.


FIGURE 31.4 Cobb angle variation between the weight-bearing and prone position.

This full-body capacity, combined with the stereo-radiographic approach, results in a more efficient workflow versus standard flat-panel imaging. The average exam time for EOS is about 4 minutes for an entire spine, including frontal and lateral views. This represents nearly half the time an exam would take on a standard flat-panel system [1].

2. Functional position
The EOS vertical slot scanning also allows patients to be imaged in a functional position. Long-axis imaging can be performed either in a weight-bearing or seated position with a stool or the dedicated EOS radiolucent chair. This benefit is key to assessing musculoskeletal pathologies. As depicted in Fig. 31.4, clinical parameters can be very different between weight-bearing EOS exams and the prone position normally used with computerized tomography (CT) scans. Combined with the full-body capability, clinicians can assess the sagittal balance of their patients and potential compensation mechanisms. This is also an input for surgical planning applications.

3. 1:1 scale
With flat-panel technologies, the X-ray geometry is a cone beam. That means that the resulting anatomy is artificially larger than reality due to the projection. In addition, the magnification of anatomy near the tube differs from that of anatomy further away from the tube. This nonlinear magnification effect is particularly significant with obese patients, resulting in overestimated measurements. It is possible to place a radio-opaque ball of known size on the patient to correct the magnification a posteriori, but this is neither a convenient workflow nor perfectly accurate. Slot-scan technology results in a fan-beam geometry. Therefore there is no vertical magnification regardless of the patient position, which guarantees that the vertical component of any measurement will always be accurate. There is still inherent magnification in the horizontal component. However, thanks to the biplanar capability, the patient position can be used to automatically correct the horizontal magnification without any workflow ramifications. This allows clinicians to obtain reliable measurements and make better informed clinical decisions (Table 31.1).

4. Low-dose imaging
One inherent benefit of the slot-scanning technology is scatter removal, which can translate into either improved image quality or dose reduction. When an X-ray beam enters a patient's body, a large portion of the photons engage in Compton interactions and produce scattered radiation. Some of this scattered radiation leaves the body and exposes the image receptor, reducing image contrast. In most radiographic and fluoroscopic procedures, the major portion of the X-ray beam leaving the patient's body is scattered radiation, which significantly reduces contrast. Scattered radiation leaves the patient's body in a direction different from that of the primary beam. With slot-scan technology, the detector aligned in front of the source captures only the primary signal, since the aperture is only 500 µm; deviated scattered radiation does not even enter the detector with this inherent detector collimation. Scatter is rejected without the need for an antiscatter grid that would cause a primary-beam drop. The superiority of the EOS detection technology results from the combined suppression of scattered radiation, direct conversion, and subsequent tunable amplification of X-rays. These features lead to an improved signal-to-noise ratio and dynamic range, which in turn allows diagnostic images at a lower dose without compromising image quality. The result is an 85% dose reduction compared to computed radiography with equivalent or better image quality [2] and a 50% dose reduction compared to digital radiography [1].
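The fan-beam magnification correction described above can be illustrated numerically. This is a minimal sketch under a simplified geometry with hypothetical distances (not EOS's actual calibration): the horizontal magnification equals the source-to-detector distance over the source-to-object distance, and the lateral image of a biplanar pair reveals the object's depth for the frontal view.

```python
def horizontal_magnification(source_detector_mm: float, source_object_mm: float) -> float:
    """Horizontal magnification of a fan beam: detector distance over object distance."""
    return source_detector_mm / source_object_mm

def corrected_width(measured_mm: float, source_detector_mm: float, object_depth_mm: float) -> float:
    # In a biplanar setup the lateral image reveals the anatomy's depth
    # (its distance from the frontal source), so the frontal horizontal
    # magnification can be corrected without a calibration marker.
    return measured_mm / horizontal_magnification(source_detector_mm, object_depth_mm)

# Hypothetical geometry: detector 1300 mm from the source, anatomy at 1000 mm.
true_width = corrected_width(130.0, 1300.0, 1000.0)  # -> 100.0 mm
# Vertical measurements need no such correction: fan-beam magnification is 1 vertically.
```

With these illustrative distances, a 130 mm span on the frontal detector corresponds to a true width of 100 mm, while any vertical measurement is already at 1:1 scale.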


TABLE 31.1 Magnification-free images: illustration of the varying zoom factor depending on patient positioning. Columns compare CR/DR, EOS (before correction), and EOS (after correction).

31.3 Patient-specific three-dimensional models

31.3.1 Modeling technology

From EOS low-dose weight-bearing images, 3D patient-specific models may be created on the sterEOS 3D workstation. The sterEOS software uses the simultaneity and orthogonality of the frontal and lateral images to generate a 3D model of the patient's bone envelope. The operator begins by identifying several anatomical structures on the image (Fig. 31.5A). The software uses the position of these landmarks to provide an initial model (Fig. 31.5B). The contours of this first model are projected on the radiograph and fitted, both manually and, depending on the area concerned, by automatic detection of the bone contours on the images. When the contours projected from the 3D model correspond to the radiologic bone edges, the modeling process is complete (Fig. 31.5C). 3D clinical parameters are automatically calculated from the model. Today, the software can be used to model the thoracic and lumbar spine as well as the lower limbs (femur and tibia). Pelvis 3D orientation can also be modeled. sterEOS offers five modeling workflows, each of them leading to the computation of a specific data set of clinical parameters. Once a workflow is completed, a customizable patient report including 2D images, 3D models, and accurate clinical data can be exported. This report provides valuable information to surgeons throughout the patient care pathway to make accurate diagnoses, preoperative plans, postoperative assessments, and follow-up evaluations over time.

1. Identification of anatomical landmarks
The identification of anatomical landmarks in both 2D registered images allows software algorithms to determine the 3D coordinates of these landmarks in 3D space and to automatically calculate the distances and angles between them. This set of 3D measurement tools is based on the well-established paradigm that the location of a landmark in 3D space can be directly derived from the knowledge of its location within two matched orthogonal images (Fig. 31.6).


CR, Computed radiography; DR, digital radiography.


FIGURE 31.5 3D modeling method. (A) Anatomical structures identification; (B) Initial 3D model; (C) Adjusted 3D model.

FIGURE 31.6 Illustration of 3D modeling of diaphysis axis (green line), where Af (Al) and Bf (Bl) are the frontal (lateral) projections of its extremities A and B, respectively.

For example, the femoral axis length is defined as the distance between two points identified on the lateral and frontal views. Given the simultaneous acquisition of frontal and lateral images with the EOS system, sterEOS algorithms can automatically compute the 3D coordinates of the identified points. Once the points are located in 3D space, the algorithm can compute the actual distance between them. The same paradigm applies to the measurement of angles in 3D space (Fig. 31.7).

2. 3D modeling
sterEOS uses the anatomical landmarks already defined in 3D space on the two registered X-ray images, as well as additional information gained from an a priori 3D model of the vertebrae or lower limb bones and from 3D model contour adjustments made by the operator. The core algorithm in sterEOS is based on three elements: a simplified parametric model (an approximation of the external 3D shape of the bone with simple generic volumes), statistical inference rules obtained from statistical databases, and a morpho-realistic parametric model derived from a CT acquisition. The morpho-realistic parametric model is personalized to each patient using the predictors identified by the operator, that is, the anatomical landmarks obtained during the first step of the process described earlier. This leads to an automatic model initialization. Then, the personalized morpho-realistic parametric model is projected on the native X-ray image and the operator may manually deform the model to fit the anatomic bone contours on the image.
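The paradigm of recovering a 3D landmark from two matched orthogonal views can be sketched as follows. This is an idealized parallel-projection toy model, not the sterEOS algorithm itself, which works with the system's calibrated acquisition geometry:

```python
import math

def landmark_3d(frontal, lateral):
    """Recover (x, y, z) from matched orthogonal projections.

    Idealized convention: the frontal view provides (x, z) and the
    lateral view provides (y, z); the shared vertical coordinate z
    should agree between the two views, so it is averaged here.
    """
    (x, z_f), (y, z_l) = frontal, lateral
    return (x, y, (z_f + z_l) / 2.0)

# A segment length (e.g., a diaphysis axis as in Fig. 31.6) is then the
# 3D distance between its two reconstructed extremities.
a = landmark_3d(frontal=(0.0, 0.0), lateral=(0.0, 0.0))
b = landmark_3d(frontal=(30.0, 120.0), lateral=(40.0, 120.0))
length = math.dist(a, b)  # -> 130.0
```

Note how the true 3D length (130.0) exceeds either single-view projected length, which is exactly the 2D measurement bias discussed later for flexed limbs.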


FIGURE 31.7 Pelvis 3D orientation modeling in sterEOS. The operator identifies five anatomical landmarks: the sacral plate on the lateral view, the acetabula on both views, and the sacroiliac joints on the frontal view.

FIGURE 31.8 sterEOS patient frame. (A) Patient’s frontal plane is the vertical plane containing the line connecting the centers of the acetabula. (B) Patient’s sagittal plane is the vertical plane passing through the midpoint of the acetabular axis.

31.3.2 Pelvis

Pelvis 3D orientation can be modeled in sterEOS by proceeding through the steps detailed in Fig. 31.7. The landmarks on the pelvis lead to the computation of the patient frame (Fig. 31.8). Anatomical landmarks are projected in this frame, allowing the 3D clinical parameters to be independent from the pelvis axial rotation of the patient during the acquisition.
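The frame construction can be illustrated with a small sketch. This is a plausible reading of Fig. 31.8, not sterEOS's actual implementation: the frame's x axis follows the acetabular axis projected to the horizontal plane, z is vertical, and landmarks expressed in this frame become invariant to the patient's axial rotation in the cabin.

```python
import numpy as np

def patient_frame(acet_left, acet_right):
    """Orthonormal patient frame from the two acetabular centers (assumed convention)."""
    l, r = np.asarray(acet_left, float), np.asarray(acet_right, float)
    origin = (l + r) / 2.0                      # midpoint of the acetabular axis
    x = l - r
    x[2] = 0.0                                  # keep the frontal axis horizontal
    x /= np.linalg.norm(x)
    z = np.array([0.0, 0.0, 1.0])               # gravity defines the vertical
    y = np.cross(z, x)                          # antero-posterior axis
    return origin, np.column_stack([x, y, z])

def in_patient_frame(point, origin, rot):
    """Express a landmark in the patient frame."""
    return rot.T @ (np.asarray(point, float) - origin)
```

Rotating the whole scene about the vertical axis leaves a landmark's patient-frame coordinates unchanged, which is the stated independence of the 3D clinical parameters from pelvic axial rotation during acquisition.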

31.3.3 Lower limbs

Femur and tibia can be modeled in 3D by proceeding through the steps detailed in Fig. 31.9. The operator is requested to identify the femoral condyles on both views. This allows the computation of the femoral frame (Fig. 31.10). Projection of anatomical landmarks in this frame allows lengths and angles to be independent from the axial rotation of the knee during the acquisition. Measuring lower limb parameters in 3D with sterEOS avoids the measurement bias that exists when performing 2D measurements. As an example, complete hip/knee/ankle images from a traditional X-ray system do not permit a valid comparison of the length of lower limbs in patients with genu flexum (knee that cannot be fully extended) or genu recurvatum (back-knee). Fig. 31.11 shows an EOS image of the legs: the posttraumatic unequal length, assessed by a 2D measurement of 10 mm, is actually 26 mm. The difference between the standard 2D measurement and the 3D sterEOS measurement is explained by the flexion of the left leg. Only a 3D measurement allows an exact assessment of bone length in these cases. A second example occurs while measuring the mechanical femorotibial angle, or hip-knee-ankle (HKA), which is an essential parameter for surgical treatment of gonarthritis [3,4]. It is conventionally measured in 2D on an X-ray of the entire leg (HKA image). The result is correct only if both knees remain in a frontal position. With sterEOS,


FIGURE 31.9 sterEOS femur and tibia 3D modeling.

FIGURE 31.10 sterEOS femoral frame. Frontal femoral plane is defined by the center of the femoral head and the transcondylar line (axis connecting the center of the condyles) and the sagittal femoral plane is the orthogonal plane to the frontal femoral plane.

FIGURE 31.11 2D/3D EOS examination of the legs. Courtesy: Pr Hauger, Pellegrin Hospital, Bordeaux.


however, the HKA angle can be calculated secondarily in the frontal femoral plane, even if the knee was not perfectly positioned during acquisition. Fig. 31.12, for example, shows a patient with excess internal femoral rotation associated with bilateral genu flexum. On a conventional HKA image, the combination of these two conditions creates the impression of genu valgum, measurable at 10 degrees on the left, although the valgus calculated by the sterEOS software is actually less than 1 degree. A third example is the femoral offset, which is described in the literature as an important parameter for planning hip arthroplasty [5,6]. Comparing standard 2D offset measurements to CT measurements for 223 patients, Sariali et al. [7] assessed an 8% error for femoral offset on frontal radiographs. This can be explained by the natural anteversion of the femoral neck, which complicates the interpretation of measurements taken from frontal radiographs centered on the hips. The planning of total hip arthroplasties (THAs) is based on such images and is significantly affected by measurement errors associated with the phenomenon of radiographic projection. sterEOS 3D parameters are totally independent of the spatial orientation of the femoral neck (Fig. 31.13).
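The projection bias behind these examples is easy to reproduce numerically. A toy calculation with hypothetical lengths (not the actual measurements from the cases in the figures): a segment flexed in the sagittal plane foreshortens on a frontal 2D image by the cosine of the flexion angle.

```python
import math

def frontal_projection(length_mm: float, sagittal_flexion_deg: float) -> float:
    """Apparent length of a limb segment on a frontal 2D image when the
    segment is flexed in the sagittal plane (parallel-projection sketch)."""
    return length_mm * math.cos(math.radians(sagittal_flexion_deg))

# Hypothetical case: two femurs of identical 3D length, one knee held in
# 20 degrees of flexion during acquisition.
extended = frontal_projection(430.0, 0.0)     # 430.0 mm
flexed = frontal_projection(430.0, 20.0)      # ~404.1 mm
apparent_discrepancy = extended - flexed      # ~26 mm, purely projectional
```

Even with perfectly equal legs, a modest flexion produces an apparent length discrepancy of a couple of centimeters on a 2D frontal image, which is why only a 3D measurement gives an exact assessment of bone length in genu flexum or recurvatum.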

FIGURE 31.12 (A) EOS radiography of the legs as a whole and standard measurement of the valgus. (B) 3D modeling of the lower limbs visualized as positioned during acquisition. (C) 3D modeling of the lower limbs with the knee seen frontally; the valgus calculated on the front of the knee is nearly 0 degrees. Courtesy: Pediatric Orthopedics Department, Motor Skills Analysis Laboratory, Timone Children’s University Hospital Center, Marseille.

FIGURE 31.13 Measurement of femoral offset: (A) standard measurement distorted by anteversion of the femoral neck; (B) 3D sterEOS measurements. Courtesy: Pr Nizard, Hôpital Lariboisière, Paris.


31.3.4 Spine

The spine can be modeled in 3D from T1 (first thoracic vertebra) to L5 (last lumbar vertebra) by proceeding through the steps detailed in Fig. 31.14. Once the workflow is completed, users can manipulate the bone-surface 3D model and visualize it from any point of view, as well as consult the 3D parameters. When the pelvic 3D orientation is modeled in addition to the spine, vertebral rotations are computed in the patient plane, making them independent of pelvic axial rotation during the acquisition. EOS has been shown to provide accurate and reliable 3D measurements versus CT [7], even in severe spine deformities such as scoliosis with a Cobb angle greater than 50 degrees [7]. Three-dimensional measurements are key to understanding and planning spine surgery: as an example, 2D measurements of thoracic kyphosis underestimate the loss of kyphosis in patients with idiopathic scoliosis [8]. sterEOS spine 3D modeling also provides a quality-control tool to assess surgical treatment efficacy [9]. Computation of axial rotation from low-dose weight-bearing images is of high interest to the orthopedic community compared to existing technologies, that is, standard X-rays, which allow only 2D measurements, and CT, which delivers a high radiation dose and only allows 3D computation in a supine position. In addition to scoliosis assessment, sterEOS spine 3D modeling is also useful for adult spine deformity patients thanks to reliable transverse-plane parameters that help assess the 3D and rotational aspects, which are important factors to consider [10].

sterEOS also offers tools for analyzing postural disorders. Spinopelvic parameters can be compared to reference values, as shown in Fig. 31.15. This provides a fast way to assess a patient's balance pre- and postoperatively. As early as 1991, Itoi et al. [11] insisted on the importance of studying postural balance as a whole, especially in degenerative diseases. There is now evidence showing the relationship between sagittal balance and clinical outcomes in the surgical treatment of degenerative spinal diseases [12,13]. sterEOS allows the computation of spinopelvic sagittal parameters as well as evaluation of the pelvic and lower limb compensation mechanisms [14].
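Some of the spinopelvic sagittal parameters mentioned above are linked by simple geometric identities that any postural-assessment tool can exploit. A short sketch using standard relations from the sagittal-balance literature (not sterEOS internals; the flag threshold is illustrative only):

```python
def pelvic_incidence(pelvic_tilt_deg: float, sacral_slope_deg: float) -> float:
    """PI = PT + SS: pelvic incidence is an anatomical constant equal to the
    sum of the position-dependent pelvic tilt and sacral slope."""
    return pelvic_tilt_deg + sacral_slope_deg

def pi_ll_mismatch(pelvic_incidence_deg: float, lumbar_lordosis_deg: float) -> float:
    # PI - LL mismatch, a common screening value for sagittal malalignment;
    # the ~10-degree flag used below is an illustrative threshold only.
    return pelvic_incidence_deg - lumbar_lordosis_deg

pi = pelvic_incidence(pelvic_tilt_deg=15.0, sacral_slope_deg=40.0)   # 55.0 degrees
flagged = pi_ll_mismatch(pi, lumbar_lordosis_deg=40.0) > 10.0        # mismatch of 15 degrees
```

Because PI is position-independent while PT and SS trade off against each other, comparing measured PT against the expected range for a given PI is one simple way such tools surface compensation mechanisms.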

31.4 Preoperative surgical planning solutions and intraoperative execution

With the 3D data set created using the sterEOS software, preoperative 3D surgical planning can be performed. This section focuses on how the EOS 3D data set can be used to prepare a spine surgery, a THA, and a total knee arthroplasty (TKA). It highlights the importance of preoperative planning to anticipate difficulties that may be encountered intraoperatively.

31.4.1 spineEOS

1. Technology description
spineEOS is online 3D preoperative surgical planning software intended for adults suffering from degenerative or deformative spine conditions, as well as for pediatric patients with adolescent idiopathic scoliosis. The planning uses a 3D model and data set of the spine and the pelvis obtained from two EOS images: a frontal and a lateral low-dose X-ray acquired in a weight-bearing position. The planning in spineEOS is based on a patient-specific biomechanical finite element model allowing simulation of the spine correction according to a target defined by the surgeon. Based on the preoperative 3D model of the spine, the surgeon defines the targeted postoperative lordosis, kyphosis, and pelvic tilt (PT); pediatric and adult reference values for sagittal balance are integrated to guide the surgical plan, which includes:
a. selecting the vertebral levels to be instrumented;
b. assessing the spine rigidity (bending or suspension EOS images can be used for this estimate);
c. planning various potential pedicle subtraction osteotomies (PSOs) and Ponte osteotomies;
d. placing intervertebral body cages (defining the size and position of the cages in 3D).
In addition to the surgeon's input, the biomechanical model integrates mechanical relationships between the vertebrae, ligaments, and disks, leading to computation of the expected postoperative 3D spinal alignment (Figs. 31.16 and 31.17). Moreover, spineEOS provides the 3D shape and length of the rods to be used intraoperatively. A 3D-printed patient-specific template may be used in the operating room (OR) to bend the rods according to the plan. Furthermore, knowing that the rods deform when implanted due to the mechanical resistance of the spine, spineEOS also automatically computes the required overbending of the rods in order to achieve the correct

FIGURE 31.14 sterEOS spine 3D modeling.


FIGURE 31.15 sterEOS postural assessment: clinical parameters and reference values.

FIGURE 31.16 spineEOS interface showing on the left side the frontal view of the preoperative spine for an adolescent idiopathic scoliosis (AIS) case and on the right side the frontal correction simulated by the biomechanical finite element model.

postoperative shape (Fig. 31.18). These overbent rods are calculated using a biomechanical finite element model that takes into account the material used (titanium, cobalt-chrome, stainless steel, etc.) as well as the shape and diameter of the selected rods.

2. Clinical need being addressed
spineEOS is intended to help in the treatment of adults suffering from degenerative or deformative spine conditions and pediatric patients with adolescent idiopathic scoliosis. In the United States, complex spine and thoracolumbar fusion surgery together represent approximately 400,000 patients per year.

3. How does this technology improve upon current standards of care?
The deformation of the spine is a 3D deformity that needs to be analyzed in 3D. Furthermore, in the specific case of degenerative spine pathologies, it is extremely important to assess the patient's sagittal balance, including compensatory mechanisms such as PT and knee flexion.


FIGURE 31.17 spineEOS interface showing on the left side the sagittal view of the preoperative spine for an adolescent idiopathic scoliosis (AIS) case and on the right side the sagittal correction simulated by the biomechanical finite element model.

FIGURE 31.18 Display of the 3D overbent rod based on the final postoperative result of the correction.

Current spine planning tools are based on 2D conventional X-ray images and do not take into account 3D deformation, resulting in inaccurate planning and rod shape and length estimation. Additionally, due to the limited field of view of 2D conventional X-ray images, the planning tools cannot assess a patient's global sagittal balance without the need to stitch the images. Moreover, complex 3D mechanical interactions occur during surgery that cannot be accounted for by 2D planning without biomechanical information.


FIGURE 31.19 Rod implant bent according to a 3D-printed template to match the preoperative planning.

spineEOS is based on low-dose stereo-radiographic full-body images and 3D models of the spine and pelvis. It allows planning in 3D based on clinical and biomechanical parameters not available in 2D planning. In addition, because it is based on full-body images, it allows surgeons to plan degenerative cases while considering compensatory mechanisms. spineEOS helps surgeons optimize surgical strategies, including which correction to achieve, which levels to instrument, which rods to implant (curves, length, material, etc.), which osteotomies to perform, and which intervertebral body cages to implant, thanks to the unique finite element biomechanical simulation of the spine and rods. Moreover, surgeons can rely on a simulation that suggests how to overbend the rods to anticipate intraoperative biomechanical deformation.

4. How can the spine preoperative planning be considered for intraoperative execution?
The preoperative planning allows the surgeon to understand the patient's anatomy and define the target to achieve intraoperatively. Several planned parameters can be used directly by the surgeon intraoperatively, such as the levels to be instrumented, the number and sizes of intervertebral body cages to be implanted, and the number and sizes of osteotomies to be executed (PSO, Ponte, etc.). These steps will help restore the patient's spine to a condition close to the target defined in the preoperative plan. In addition, the planned 3D rod information can be used by bending the rods to match a 3D-printed template preoperatively (Fig. 31.19) or any other support tool used to visualize the planned curves of the rods. Once the rods are implanted, the spine will have the same alignment as planned. Other techniques using intraoperative imaging, augmented reality, or other innovative technologies may also be used in the near future to execute the planned spinal alignment.
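As a deliberately reduced illustration of one planned output, a first estimate of the rod stock length can be taken as the arc length of a polyline through the planned screw-head positions. This is a sketch only: the actual spineEOS rod shape, length, and overbending come from the finite element simulation described above, and the positions below are hypothetical.

```python
import math

def rod_length_mm(screw_heads) -> float:
    """Arc length of the polyline through planned screw-head positions (mm)."""
    return sum(math.dist(a, b) for a, b in zip(screw_heads, screw_heads[1:]))

# Hypothetical planned positions along a short instrumented segment.
heads = [(0.0, 0.0, 0.0), (0.0, 30.0, 0.0), (0.0, 30.0, 40.0)]
length = rod_length_mm(heads)  # 70.0 mm
```

A polyline underestimates the length of a smoothly curved rod; a planning tool would fit a curve through the positions before measuring, but the arc-length idea is the same.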

31.4.2 hipEOS

Although THA surgery is a very common procedure, it remains a challenging surgery. The number of patients having a THA is rising, controlling costs is becoming increasingly essential, and anticipating issues during surgery to avoid postoperative complications is becoming more important than ever. Around 2% of patients experience dislocation after a THA [15], around 7% report dissatisfaction [16], and almost 8% of all THA litigations are due to leg length discrepancies [17]. Identifying preoperative risks and minimizing postoperative complications are crucial for improving results and patient-reported outcome measures. 1. Technology description and clinical needs being addressed hipEOS online 3D surgical planning software is used to plan primary, THAs. The planning starts by analyzing the EOS 2D/3D data set in standing and seated positions to understand the patient’s functional anatomy and the pelvis mobility between the two positions. The PT is automatically measured on the EOS 3D models, in both the standing and seated positions, and allows for an assessment of the pelvic mobility and makes it easier to identify patients at risk. For example, in Figs. 31.20 and 31.21, the patient has a lumbar fusion preventing him from adapting the position of his pelvis when going from a standing to a seated position. These types of patients are those that surgeons would like to identify preoperatively to anticipate how best to orient their implants while planning. Another example of patients who are at high risk are those with a significant pelvic mobility between the standing and seated positions. It is also important to consider substantial changes in the PT while determining the correct type, size, and orientation of the implant components. The planning with hipEOS continues with an automatic proposal for the size and position of the stem and cup based on the surgeon’s preferred implant manufacturer and the patient’s unique 3D anatomy. 
The patient-specific plan can be adjusted by the surgeon with immediate feedback on how changes affect relevant clinical parameters such as femoral offset and torsion, leg length discrepancy, and rotations (Fig. 31.22). The third step in the planning is an implant range of motion (ROM) analysis in 3D, which checks whether the configuration of implant sizes and positions provides an optimal level of mobility for the patient. This

EOS Imaging Chapter | 31

543

FIGURE 31.20 Example of a patient with lumbar fusion and a pelvic tilt (PT) of 22 degrees in a standing position (Pr. Lazennec—La Pitié, Paris).

31. EOS Imaging

FIGURE 31.21 The same patient from Fig. 31.20 with a pelvic tilt (PT) of 12 degrees in a seated position (Pr. Lazennec—La Pitié, Paris).


Handbook of Robotic and Image-Guided Surgery

FIGURE 31.22 hipEOS interface allowing the planning of the implant sizes and positions and automatically calculating 3D clinical parameters in a weight-bearing position.

FIGURE 31.23 Implant ROM (range of motion) analysis in a standing position: implant impingement detected in a flexed position.

ROM analysis can be done in a standing (Fig. 31.23), seated, and theoretical seated position. This functional analysis allows the surgeon to personalize implant orientations and sizes based on the patient's mobility.
2. How does this technology improve upon current standards of care?
Until now, hip preoperative planning was done in 2D using standard X-rays or in 3D using CT scan images. The first practice clearly does not address the need to control the planning parameters in 3D, and it delivers a higher radiation dose than EOS: the mean radiation dose for an EOS lower limb exam is half that of conventional radiography [18], while EOS maintains excellent accuracy [18,19] and reproducibility [20]. The second practice, based on a CT scan, offers 3D information but has two main limitations:
a. CT images lack the required functional parameters: an increasing number of publications are now demonstrating that weight-bearing PT differs from supine PT [15] (Fig. 31.24).
b. CT images deliver a drastically higher dose of radiation [21].
Thanks to the full-body, weight-bearing standing and seated 2D/3D low-dose EOS images, hipEOS can be used to anticipate the results of the surgical strategy on implant ROM, leg length discrepancies, femoral offset, and torsion, which are key criteria for successful THAs.


FIGURE 31.24 A plan optimized in the supine position (based on a computed tomography (CT) scan) can be inappropriate for the functional, standing position of the patient and increase the impingement risk.

FIGURE 31.25 Inclination of the cup in the coronal plane.

FIGURE 31.26 Anteversion of the cup in the transverse plane.

31.4.3 kneeEOS

Due to the aging population, the number of TKAs performed annually continues to increase. Despite significant efforts, progress is still needed to improve outcomes, as 20% of patients are not satisfied with their surgical results [22,23]. Two important factors that may help to improve these outcomes are:

- achieving correct coronal alignment [24], which also impacts implant longevity [25]; and
- selecting the appropriate size of implants [26].


3. How can the hip preoperative planning be considered for intraoperative execution?
Several intraoperative systems, such as navigation or robotics, exist and allow the surgeon to control the execution of the surgery, including femoral neck resection, acetabular milling, and implant positioning. This execution is performed in 3D, so it needs to be driven by 3D planning parameters. For example, the orientation of the cup is one of the parameters that can be planned in 3D with hipEOS and executed with an optical navigation system. The cup orientation is defined by the inclination (Fig. 31.25) and the anteversion (Fig. 31.26) of the acetabular cup plane. To execute the cup positioning in 3D, these angles also need to be calculated and planned in 3D. In hipEOS, the planning can be done in a functional frame that considers the standing and seated orientations of the pelvis, and the angles can then be recalculated in an anatomical frame such as the anterior pelvic plane. This anatomical frame can be used by the navigation system to define a working frame consistent with the planning, allowing the cup orientation to be executed according to the hipEOS preoperative functional plan. Other parameters can be executed on the same principle. Optical navigation is cited here only as an example; new, simpler tracking techniques and robotics are becoming more and more common, and innovative solutions such as augmented reality are also being developed. Combining a preoperative plan with intraoperative solutions helps to define the appropriate target before the surgery starts.
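To make the two cup angles concrete, the sketch below computes radiographic inclination and anteversion from a cup axis expressed in an anterior-pelvic-plane coordinate frame, using Murray's radiographic definitions. The axis conventions (x lateral, y anterior, z cranial) are assumptions for illustration; this is not the hipEOS implementation:

```python
import numpy as np

def cup_orientation(axis_app):
    """Radiographic inclination and anteversion (deg) of the acetabular cup,
    given the cup axis in an assumed anterior pelvic plane frame
    (x: lateral, y: anterior, z: cranial). Murray's radiographic definitions:
    anteversion is the tilt of the axis out of the coronal (x-z) plane;
    inclination is the angle of its coronal projection from the z axis."""
    a = np.asarray(axis_app, float)
    a = a / np.linalg.norm(a)
    anteversion = np.degrees(np.arcsin(a[1]))
    inclination = np.degrees(np.arctan2(abs(a[0]), abs(a[2])))
    return inclination, anteversion

# Round trip: build an axis from 40 deg inclination / 15 deg anteversion.
ri, av = np.radians(40.0), np.radians(15.0)
axis = [np.cos(av) * np.sin(ri), np.sin(av), np.cos(av) * np.cos(ri)]
print(cup_orientation(axis))   # approximately (40.0, 15.0)
```

The round trip shows why the anatomical frame matters: the same physical cup axis yields different angle values depending on the frame it is expressed in, which is exactly the functional-to-anatomical conversion described above.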


The following section describes how kneeEOS helps to assess these parameters accurately using EOS 2D/3D low-dose images. The kneeEOS software is also useful for patient engagement: the surgeon can explain the plan with a 3D visualization of the anatomy and implants, which is easy to understand, and patients are often more satisfied knowing that a customized plan was prepared for their unique anatomy.
1. Technology description and clinical need being addressed
kneeEOS is an online 3D surgical planning software for primary TKA that uses a 2D/3D patient data set obtained from weight-bearing, low-dose EOS images. An initial proposal for the size and position of the implants is provided automatically, including femoral and tibial resection levels, without complex radiological calibration protocols or additional CT exams (Fig. 31.27). The surgeon can modify the plan with immediate feedback on how the changes affect relevant clinical parameters in 3D, such as leg alignment (the hip-knee-ankle, or HKA, angle) and knee rotations (varus/valgus, flexion/extension, internal/external rotation) in a functional position. Thanks to the full-body, weight-bearing EOS 3D images, kneeEOS can be used to anticipate the consequences of the prosthesis placement on leg alignment and knee rotation (Fig. 31.28), two key criteria for successful TKAs.
2. How does this technology improve upon current standards of care?
Until now, knee preoperative planning was done in 2D using standard X-rays of the knee, without considering the rest of the leg, or in 3D using supine CT scan images. The alignment calculated in 2D is therefore biased by projection, and the alignment calculated in 3D is biased by the supine position of the patient. Fig. 31.29 illustrates the value of assessing lower limb alignment with weight-bearing 2D/3D EOS images: the HKA angle of the same patient can indicate a knee valgus on the 2D measurement (left image in Fig. 31.29), while the 3D reconstruction shows a knee varus (right image in Fig. 31.29). This can directly impact the surgeon's decision on the correction to apply during the surgery, so having the right weight-bearing information is important to optimize surgical decisions. In addition, predicting knee implant size within one size before surgery has been shown to reduce costs and improve OR efficiency, owing to the reduced number of instruments and trays required in the operating room [27]. A retrospective pilot study on 21 patients showed that kneeEOS predicted the correct knee implant size within one size in 100% of cases [28]. By planning the required implant sizes before surgery, kneeEOS may help to reduce costs and increase efficiency, particularly for unusual sizes, and allows complications to be anticipated and prepared for in advance to minimize time and expense in the operating room.
3. How can knee preoperative planning be considered for intraoperative execution?
As with THA, several intraoperative systems such as navigation and robotics exist for TKA and allow the surgeon to control the execution of the surgery, including femoral and tibial resection levels and implant orientations. Using 3D execution

FIGURE 31.27 kneeEOS interface showing the planning of the resection levels and implant rotations.


FIGURE 31.28 kneeEOS interface showing the implant positions in 3D and the limb alignment.
FIGURE 31.29 EOS 3D weight-bearing reconstructions allow accurate calculation of limb alignment and rotations, unaffected by knee flexion, unlike 2D X-rays.

31.5 Conclusion

The EOS platform is a unique combination of low-dose 2D/3D imaging technology, software, and services that add value to each step of the musculoskeletal patient care pathway. From diagnosis to long-term follow-up, EOS exams provide full-body 2D/3D images and patient-specific data sets to plan and control surgeries based on 3D anatomical models of the patient in a weight-bearing position.

The EOS system is designed around a vertically traveling arm supporting two image acquisition systems mounted at right angles. This unique biplanar design and linear, vertical scanning technique acquires frontal and lateral images of patients simultaneously in either a standing or seated position. The 3D models and accurate 2D/3D patient-specific data sets generated by the sterEOS software are then used by the EOSapps, an online suite of 3D surgical planning solutions for the spine, hip, and knee. The EOSapps offer an initial proposal for the size and position of implants, based on a patient's unique 3D anatomy and the surgeon's preferred components, while visualizing the postoperative results in real time, including the impact on key clinical parameters.

The spineEOS software is used to plan spinal deformity or degenerative spine surgeries. It provides a 3D visualization of the patient's spine in its current state as well as a literature-based, optimal correction of the anatomy in 3D. The correction can be modified by simulating osteotomies, selecting and positioning cages, and accurately planning the length,


techniques requires 3D planning parameters, and is particularly effective when the weight-bearing, lower limb alignment is taken into consideration.
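The 2D-versus-3D alignment bias discussed for Fig. 31.29 can be reproduced with a few lines of vector geometry. The sketch below is a toy model with assumed coordinate conventions (x lateral, y anterior, z up) and invented limb vectors; it is not how kneeEOS computes alignment, but it shows how a knee that is truly in varus can project as roughly neutral on a malrotated, flexed 2D radiograph:

```python
import numpy as np

def rot_x(deg):  # rotation about the mediolateral axis (flexion/extension)
    a = np.radians(deg); c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(deg):  # rotation about the vertical axis (limb malrotation)
    a = np.radians(deg); c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def coronal_angle(axis):
    """Signed frontal-plane angle (deg) of a downward-pointing limb axis
    from vertical; with these toy conventions a negative femur-to-tibia
    difference reads as varus."""
    x, _, z = axis
    return np.degrees(np.arctan2(x, -z))

femur = np.array([3.0, 0.0, -40.0])                  # hip centre -> knee centre
tibia = rot_x(25.0) @ np.array([-3.0, 0.0, -40.0])   # tibial axis, knee flexed 25 deg

# True frontal-plane deviation in the patient's own coronal plane: varus.
true_dev = coronal_angle(tibia) - coronal_angle(femur)          # about -9 deg

# A 2D radiograph projects onto the detector plane; with 20 deg of axial
# malrotation the same flexed knee appears roughly neutral.
R = rot_z(-20.0)
apparent_dev = coronal_angle(R @ tibia) - coronal_angle(R @ femur)  # about +0.6 deg
print(round(true_dev, 1), round(apparent_dev, 1))
```

A 3D reconstruction measures the angle in the patient's anatomical coronal plane regardless of how the limb was rotated during acquisition, which is the advantage the text attributes to the EOS weight-bearing 3D models.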


width, and shape of overbent spinal rods in 3D. With the full-body, weight-bearing 2D/3D EOS images, spineEOS displays the anticipated curve after correction, with immediate feedback on how changes to the planning affect the important clinical parameters. Overbent rods can be prepared before entering the OR thanks to 3D-printed templates that correspond to the planning.

The hipEOS software provides a 3D visualization of a patient's full lower limbs or full body in a standing and/or seated position, as well as a proposal for the size and position of the implants based on the patient's 3D anatomy. Before finalizing the plan, surgeons can confirm that there are no leg length discrepancies and that key clinical parameters fall within the current recommendations based on scientific publications. A ROM simulation may also be performed to detect whether there is a risk of implant impingement. If there are any concerns with the planning, surgeons can make changes before entering the OR and avoid surprises.

Finally, EOS provides functional, 3D, weight-bearing information about your patients at a low dose of radiation. Using this information, the kneeEOS online software allows you to create a 3D patient-specific plan that helps to better anticipate surgical difficulties that may be encountered intraoperatively. Using the EOSapps for 3D surgical planning in a functional position contributes to improving outcomes when intraoperative systems such as optical navigation, robotics, or augmented reality are used, and also provides the data required to prepare 3D-printed patient-specific instruments.

References
[1] Dietrich TJ, Pfirrmann CW, Schwab A, Pankalla K, Buck FM. Comparison of radiation dose, workflow, patient comfort and financial break-even of standard digital radiography and a novel biplanar low-dose X-ray system for upright full-length lower limb and whole spine radiography. Skeletal Radiol 2013;42(7):959–67.
[2] Deschênes S, Charron G, Beaudoin G, Labelle H, Dubois J, Miron MC, et al. Diagnostic imaging of spinal deformities: reducing patients radiation dose with a new slot-scanning X-ray imager. Spine 2010;35(9):989–94.
[3] Desmé D, Galand-Desmé S, Besse JL, Henner J, Moyen B, Lerat JL. Axial lower limb alignment and knee geometry in patients with osteoarthritis of the knee. Rev Chir Orthop Reparatrice Appar Mot 2006;92(7):673–9.
[4] Cerejo R, Dunlop DD, Cahue S, Channin D, Song J, Sharma L. The influence of alignment on risk of knee osteoarthritis progression according to baseline stage of disease. Arthritis Rheum 2002;46(10):2632–6.
[5] Sakalkale DP, Sharkey PF, Eng K, Hozack WJ, Rothman RH. Effect of femoral component offset on polyethylene wear in total hip arthroplasty. Clin Orthop Relat Res (1976–2007) 2001;388:125–34.
[6] Patel AB, Wagle RR, Usrey MM, Thompson MT, Incavo SJ, Noble PC. Guidelines for implant placement to minimize impingement during activities of daily living after total hip arthroplasty. J Arthroplasty 2010;25(8):1275–81.
[7] Sariali E, Mouttet A, Pasquier G, Durante E. Three-dimensional hip anatomy in osteoarthritis: analysis of the femoral offset. J Arthroplasty 2009;24(6):990–7.
[8] Newton PO, Fujimori T, Doan J, Reighard FG, Bastrom TP, Misaghi A. Defining the "three-dimensional sagittal plane" in thoracic adolescent idiopathic scoliosis. JBJS 2015;97(20):1694–701.
[9] Ilharreborde B, Sebag G, Skalli W, Mazda K. Adolescent idiopathic scoliosis treated with posteromedial translation: radiologic evaluation with a 3D low-dose system. Eur Spine J 2013;22(11):2382–91.
[10] Ferrero E, Lafage R, Vira S, Rohan PY, Oren J, Delsole E, et al. Three-dimensional reconstruction using stereoradiography for evaluating adult spinal deformity: a reproducibility study. Eur Spine J 2017;26(8):2112–20.
[11] Itoi E. Roentgenographic analysis of posture in spinal osteoporotics. Spine (Phila Pa 1976) 1991;16(7):750–6.
[12] Le Huec JC, Faundez A, Dominguez D, Hoffmeyer P, Aunoble S. Evidence showing the relationship between sagittal balance and clinical outcomes in surgical treatment of degenerative spinal diseases: a literature review. Int Orthop 2015;39(1):87–95.
[13] Fechtenbaum J, Etcheto A, Kolta S, Feydy A, Roux C, Briot K. Sagittal balance of the spine in patients with osteoporotic vertebral fractures. Osteoporos Int 2016;27(2):559–67.
[14] Ferrero E, Liabaud B, Challier V, Lafage R, Diebo BG, Vira S, et al. Role of pelvic translation and lower-extremity compensation to maintain gravity line position in spinal deformity. J Neurosurg: Spine 2016;24(3):436–46.
[15] Buckland AJ, Puvanesarajah V, Vigdorchik J, Schwarzkopf R, Jain A, Klineberg EO, et al. Dislocation of a primary total hip arthroplasty is more common in patients with a lumbar spinal fusion. Bone Joint J 2017;99(5):585–91.
[16] Anakwe RE, Jenkins PJ, Moran M. Predicting dissatisfaction after total hip arthroplasty: a study of 850 patients. J Arthroplasty 2011;26(2):209–13.
[17] Upadhyay A, York S, Macaulay W, McGrory B, Robbennolt J, Bal BS. Medical malpractice in hip and knee arthroplasty. J Arthroplasty 2007;22(6):2–7.
[18] Escott BG, Ravi B, Weathermon AC, Acharya J, Gordon CL, Babyn PS, et al. EOS low-dose radiography: a reliable and accurate upright assessment of lower-limb lengths. JBJS 2013;95(23):e183.
[19] Meijer MF, Velleman T, Boerboom AL, Bulstra SK, Otten E, Stevens M, et al. The validity of a new low-dose stereoradiography system to perform 2D and 3D knee prosthetic alignment measurements. PLoS One 2016;11(1):e0146187.


[20] Guenoun B, Zadegan F, Aim F, Hannouche D, Nizard R. Reliability of a new method for lower-extremity measurements based on stereoradiographic three-dimensional reconstruction. Orthop Traumatol: Surg Res 2012;98(5):506–13.
[21] Delin C, Silvera S, Bassinet C, Thelen P, Rehel JL, Legmann P, et al. Ionizing radiation doses during lower limb torsion and anteversion measurements by EOS stereoradiography and computed tomography. Eur J Radiol 2014;83(2):371–7.
[22] Beswick AD, Wylde V, Gooberman-Hill R, Blom A, Dieppe P. What proportion of patients report long-term pain after total hip or knee replacement for osteoarthritis? A systematic review of prospective studies in unselected patients. BMJ Open 2012;2(1):e000435.
[23] Gunaratne R, Pratt DN, Banda J, Fick DP, Khan RJ, Robertson BW. Patient dissatisfaction following total knee arthroplasty: a systematic review of the literature. J Arthroplasty 2017;32(12):3854–60.
[24] Matsuda S, Kawahara S, Okazaki K, Tashiro Y, Iwamoto Y. Postoperative alignment and ROM affect patient satisfaction after TKA. Clin Orthop Relat Res 2013;471(1):127–33.
[25] Liu HX, Shang P, Ying XZ, Zhang Y. Shorter survival rate in varus-aligned knees after total knee arthroplasty. Knee Surg Sports Traumatol Arthrosc 2016;24(8):2663–71.
[26] Shervin D, Pratt K, Healey T, Nguyen S, Mihalko WM, El-Othmani MM, et al. Anterior knee pain following primary total knee arthroplasty. World J Orthop 2015;6(10):795.
[27] McLawhorn AS, Carroll KM, Blevins JL, DeNegre ST, Mayman DJ, Jerabek SA. Template-directed instrumentation reduces cost and improves efficiency for total knee arthroplasty: an economic decision analysis and pilot study. J Arthroplasty 2015;30(10):1699–704.
[28] Nizard. kneeEOS 3D TKA planning: experience at the Lariboisière hospital, SOFCOT; 2015.


32 Machine-Vision Image-Guided Surgery for Spinal and Cranial Procedures

Zahra Faraji-Dana1, Adrian L.D. Mariampillai1, Beau A. Standish1, Victor X.D. Yang2 and Michael K.K. Leung1

1 7D Surgical Inc., North York, ON, Canada
2 Sunnybrook Health Sciences Centre, Toronto, ON, Canada

ABSTRACT
The 7D Surgical Machine-vision Image-Guided Surgery (MvIGS) system provides surgical guidance for spinal and cranial procedures where high navigation accuracy is required. By leveraging machine vision technologies, the 7D Surgical MvIGS system allows surgeons to quickly achieve image registration and start navigation without the need for intraoperative radiation-emitting devices or laborious traditional point matching techniques. The system uses all-optical, nonionizing structured light to acquire a three-dimensional (3D) surface scan of the patient. Advanced machine vision algorithms are then used to register the 3D surface to a preoperative scan of the patient. This approach reduces the need for intraoperative X-rays, significantly reducing the surgeon's, the staff's, and the patient's exposure to radiation. By leveraging the high-resolution optical surface data acquired intraoperatively from the patient and machine vision algorithms, the 7D Surgical MvIGS system significantly reduces the steps required to set up and operate an IGS system, leading to an unprecedentedly fast workflow which we call Flash Registration. Fewer required user interactions with the system also allow for a short learning curve. These innovations have resulted in an IGS system that is more accessible to a broader user base, while providing a radiation-free surgical environment for surgeons, hospital staff, and patients.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00032-3
© 2020 Elsevier Inc. All rights reserved.


32.1 Overview of image-guided surgery technology

The intraoperative navigational techniques commonly referred to as image-guided surgery (IGS) use preoperative or intraoperative imaging to guide a surgical procedure. They typically involve real-time tracking of surgical instruments, which are shown in multiplanar views in relation to the patient's anatomy. IGS can be particularly helpful when the anatomy of interest is unexposed or only partially exposed, and it allows for the accurately guided positioning of surgical instruments or placement of implants into the patient's anatomy. IGS technology can improve the accuracy of surgical procedures, thereby improving the surgeon's confidence and the patient's safety. Even experienced surgeons who have a thorough knowledge of human anatomy can benefit from the information provided by IGS during surgery [1]. IGS technology should be viewed as complementary to, and not a replacement for, the surgeon's experience and judgment. The benefits of IGS with respect to the safety and accuracy of a variety of spinal and cranial procedures have been well documented in the medical literature, and thus the utilization of image guidance continues to grow throughout a broad range of surgical procedures. This chapter provides details about the 7D Surgical Machine-vision IGS (MvIGS) system with a focus on spinal and cranial procedures.

32.1.1 Importance of navigation in spinal and cranial procedures

The inclusion of navigation technologies in cranial procedures has become the standard of care, as the brain is a very delicate organ and utmost care is required while interacting with this tissue. For example, in a tumor resection procedure, it is important that as much of the tumor as possible is removed without damaging healthy brain tissue. This is a challenging task, especially in cases where areas responsible for critical brain function are adjacent to the tumor. In these cases, IGS allows the surgeon to accurately plan an entry point and trajectory, and to locate the intracranial lesion for resection or biopsy [2]. For spinal procedures, however, the use of navigation is not as widespread. Many surgeons still operate freehand or rely on traditional fluoroscopy technologies. As a result, breach of screws outside the intended trajectory occurs in 12%–40% of screw placements (Fig. 32.1) [3–6], potentially causing acute neurovascular injury and, in the longer term, mechanical construct failure, whose complications may require costly revision surgeries [7–9]. IGS allows the surgeon to visualize unexposed or partially exposed structures, such as the pedicle region of a vertebra, and to avoid potential damage to sensitive nearby structures such as the spinal cord, nerves, and vascular structures. IGS has improved the accuracy of screw placement at all levels of the spine and has reduced breach rates to under 10% [5,10–14].

32.1.2 Evolution of image-guided surgery systems

To guide a surgical procedure, C-arm fluoroscopy can be used to visualize an instrument's position within an anatomical site, either continuously or in snapshots. A variety of procedures can be performed using fluoroscopy. The main disadvantage of fluoroscopy, however, is the exposure of patients, surgeons, and operating room staff to ionizing radiation, particularly when fluoroscopy is used continuously. Furthermore, a single fluoroscopy image can only acquire one anatomical plane at a time; for other planes, the C-arm must be repositioned. A C-arm machine is also quite unwieldy and can constrain access to the surgical field. Lastly, the C-arm does not offer navigation in the axial plane, which is very beneficial for implant placement during spine surgery. IGS was developed to address the shortcomings of conventional intraoperative navigation and to optimize the accuracy and safety of surgical procedures.

FIGURE 32.1 The importance of navigation for pedicle screw placement. The screws are placed very poorly and have likely caused damage to the nerves and spinal cord and other sensitive organs nearby.

Machine-Vision Image-Guided Surgery for Spinal and Cranial Procedures Chapter | 32

553

Early IGS was performed by matching skin surface markers placed on the patient during fluoroscopic imaging to the same markers as they appeared on the patient during the surgical procedure. These markers introduce significant registration inaccuracy, especially when relative movement occurs between the mobile skin and the underlying bony anatomy [15,16]. IGS accuracy improved when anatomical landmarks were used as registration markers [17–19]. These systems can be categorized into two broad classes: intraoperative and preoperative IGS. These two classes of IGS systems and their differences are discussed below.

32.1.3 Intraoperative image-guided surgery systems

32.1.3.1 Intraoperative fluoroscopy-based image-guided surgery systems

Fluoroscopy-based IGS systems, also referred to as "virtual fluoroscopy," combine computer-aided surgical technology with C-arm fluoroscopy [20,21]. Several manufacturers provide virtual fluoroscopy systems. Despite differences in the exact hardware and software among the various systems, they generally share the same basic components and functions. They typically include a C-arm fluoroscope situated in the operating room, a calibration target that attaches to the C-arm, a reference array, a tracking system, and various customized surgical instruments such as screwdrivers, awls, probes, and pointers. The reference array and surgical tools are visible to the tracking system and can be tracked in real time. This is made possible by attaching either light-emitting diodes (LEDs), known as "active arrays," or reflective spheres, referred to as "passive arrays." The position and orientation of these arrays in three-dimensional (3D) space is measured by the tracking system and transferred to a computer workstation, which acts as the primary user interface of the system. Intraoperative fluoroscopic images of the patient are obtained and automatically transferred to the computer for processing. The system automatically calibrates the fluoroscopic images from the spatial information and at least one projection. The calibration process starts by measuring the relative position of the C-arm and the patient with a tracking system that can track the location of the reference array and the calibration target. The computer then links the spatial measurements to the obtained fluoroscopic images. The position of the instrument is displayed in reference to the acquired fluoroscopic images in different views without any surgeon-derived registration steps.

After the registration, the optical tracking system tracks the position of the anatomy via the reference array, and hence maintains the registration accuracy even if the patient or the optical tracking system moves, provided the reference array remains fixed to the anatomy. It is therefore important that the reference array is firmly attached to the anatomy. Augmenting a traditional C-arm with computer-aided IGS technology strengthens the advantages of fluoroscopy and minimizes its disadvantages. Standard fluoroscopy provides real-time intraoperative visualization of the anatomy, but its major disadvantage is the amount of ionizing radiation exposure [22,23] to patients as well as to surgeons and surgical staff. Moreover, images can be obtained in only one plane at a time. Virtual fluoroscopy reduces the need for C-arm repositioning because the system can use multiple saved images and effectively acts as a multiplanar imaging unit. Additionally, the surgical team can stand away from the operative field while the fluoroscopic images are acquired, minimizing occupational ionizing radiation exposure. Although occupational radiation exposure is significantly reduced compared to conventional fluoroscopy, there is still some exposure to surgical staff and surgeons. The virtual fluoroscopy system enables two-dimensional navigation in the sagittal and coronal planes, yet does not provide navigation in the axial plane. Moreover, poor fluoroscopy image quality, and hence poor navigational accuracy, occurs when imaging patients with a high body-mass index, or in the thoracic spine or cervicothoracic junction, due to the decreased ability of X-rays to penetrate the chest wall and/or shoulders. Image quality is also affected by distortion from image intensifiers. For these reasons, widespread adoption of this IGS approach is limited [24].

32. 7D Surgical MvIGS

32.1.3.2 Intraoperative three-dimensional image-guided surgery systems

Analogous to the virtual fluoroscopy systems, several manufacturers have taken the approach of combining IGS technologies with 3D intraoperative imaging equipment. A standard setup consists of a computer system, an intraoperative imaging device, several customized surgical instruments that are tracked, a tool tracking system, and a reference array that is attached to the patient. Example devices include the Medtronic O-arm and Ziehm 3D C-arm [25,26]. The 3D intraoperative imaging equipment integrates with the IGS system through a calibration process that either involves touching the imaging device at known locations with a tracked pointer, imaging a calibration device, or tracking the imaging device itself during 3D imaging. The result of this calibration is the localization of the 3D image volume in the tool tracking system, which also tracks the reference array and the customized surgical instruments.


Therefore, the calibration process defines the 3D image volume relative to the reference array and enables the display of customized surgical instruments in relation to the patient's anatomy. A chief advantage of 3D IGS systems is navigation in the axial plane, which greatly assists the surgeon in visualizing the anatomy. Additionally, compared to fluoroscopy-based IGS systems, 3D IGS systems result in a lower radiation dose and faster imaging. Furthermore, multiplane imaging reduces the need to manually reposition the imaging device at different angles, yielding time savings in the operating room. Another major advantage of intraoperative 3D IGS systems is that the patient's images in the operative position are transferred automatically to the IGS system, and the surgeon can obtain an updated scan whenever needed by rescanning the patient in the operative position. However, the process of initially setting up the 3D intraoperative imaging device is time consuming, taking anywhere between 15 and 30 minutes. Reacquisitions require repositioning of the 3D intraoperative imaging device, which is also a time-consuming process and can interrupt the surgical workflow. Additional staff are also required to operate these systems. Despite the benefits, the need for ionizing radiation impacts the patient and the operating room staff. Another disadvantage of intraoperative IGS systems is the high cost of purchasing a dedicated intraoperative 3D imaging system in addition to an IGS system. Moreover, the long setup time of the system in the operating room (e.g., draping) and ergonomic issues (e.g., the size and positioning of the system) can hinder the surgical workflow.
More importantly, the intraoperative image is a snapshot in time of the patient relative to a reference array; if the reference array is accidentally bumped during the operation, another scan must be acquired to restore the correct correspondence between the patient and the reference array. Similarly, intervertebral motion over the course of spine surgery can invalidate the correspondence between the reference array and vertebrae that are not directly attached to it. Navigation accuracy at these vertebrae tends to worsen with distance from the reference array. Thus, large motions adversely impact navigation accuracy, necessitating a new 3D intraoperative scan.

32.1.4 Preoperative image-guided surgery systems

Alternatively, several IGS manufacturers have taken the approach of utilizing preoperative images combined with the same equipment mentioned in Section 32.1.3.1. A scan of the patient's relevant anatomy is obtained before the surgery. The scan data are transferred to the computer workstation via a network connection or physical data transfer interfaces such as universal serial bus drives or optical disks. The image data are reformatted by the computer workstation into standard radiological views, such as sagittal, coronal, and axial views, providing an opportunity for surgical planning. For example, in spinal procedures, the size of structures such as the width of pedicles can be measured, and a virtual implant of the desired size can be placed along the planned trajectory. This allows the surgeon to confidently direct the screws into this predefined position even in complex surgical procedures.

IGS is made possible by registration of the preoperative image to the intraoperative anatomy of the patient. This registration typically involves selecting multiple points (>3) that are distinct anatomical landmarks on the preoperative image as well as on the patient's anatomy. A trackable probe is employed to touch the anatomical points in the surgical field that correspond to those selected on the preoperative images. This method is called paired-point matching. Paired-point matching can be augmented with surface matching, a process in which additional points are selected on the patient's exposed surface anatomy.

All preoperative IGS systems have inaccuracies as a result of the mismatch between the preoperative image and the current state of the surgical anatomy. These errors can arise from the registration process, the quality of the preoperative images, and surgical instrument tracking error. A mean error of less than 2.0 mm is generally considered clinically acceptable for spinal and cranial surgeries.
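A minimal sketch of the rigid paired-point step is the Kabsch/Umeyama least-squares fit, the standard textbook algorithm for this kind of landmark registration (vendor implementations differ and typically add surface matching on top):

```python
import numpy as np

def paired_point_register(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    preoperative landmarks `src` onto intraoperative landmarks `dst`.
    Both are (N, 3) arrays with N >= 3 non-collinear points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def registration_rms_error(src, dst, R, t):
    """RMS residual of the fit; clinically, a mean error under about 2 mm
    is typically required before navigation proceeds."""
    res = (np.asarray(src, float) @ R.T + t) - np.asarray(dst, float)
    return float(np.sqrt((res ** 2).sum(axis=1).mean()))
```

For noiseless synthetic landmarks the residual is numerically zero; with real probe touches it reflects the registration error that the surgeon checks against the 2.0 mm threshold before navigating.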
The navigation accuracy must be confirmed before any surgical navigation is attempted. It can be validated by placing a tracked probe tip on several different exposed anatomical points and observing the virtual probe on the preoperative images. If the locations of the virtual and real probes do not match, the registration process should be repeated. Once the navigation accuracy is validated, anatomy previously hidden from the direct surgical line of sight can be visualized on the computer monitor. During surgical navigation, a surgical instrument’s tip and trajectory are visualized on the preoperative image of the anatomy in multiple planes, such as axial, sagittal, or coronal views (Fig. 32.2). This enables the surgeon to follow the surgical plan by emulating the planned entry point and trajectories or, alternatively, to modify the plan as deemed necessary intraoperatively. Every step of the procedure can benefit from the IGS system, which helps the surgeon avoid injury to sensitive organs and improves the safety of the patient.

Machine-Vision Image-Guided Surgery for Spinal and Cranial Procedures Chapter | 32


FIGURE 32.2 Preoperative IGS system for pedicle screw implant trajectory planning. The trajectory of the screw pathway is displayed in axial, sagittal, and coronal views. IGS, Image-guided surgery.

FIGURE 32.3 7D Surgical MvIGS system, including an integrated projection system, stereo machine vision cameras, and infrared tool tracking system integrated into a surgical light. MvIGS, Machine-vision Image-Guided Surgery.

32. 7D Surgical MvIGS

32.2 Motivation and benefits of the Machine-vision Image-Guided Surgery system

The 7D Surgical MvIGS system (Fig. 32.3) is the first and only IGS system based on machine vision technologies; it offers a simple and fast workflow, requires no intraoperative radiation, and is cost-effective compared to similar classes of devices. IGS allows surgeons to visualize deeply situated anatomy during surgical procedures, which can reduce procedural complication rates and improve patient care [27]. However, widespread use of IGS has been limited by several barriers to adoption, including (1) complex workflow and long learning curve, (2) extended surgical time due to workflow disruptions, (3) line-of-sight issues for optical trackers, (4) requiring nonsterile user assistance, (5) needing intraoperative ionizing radiation, and (6) large device footprint. The 7D Surgical MvIGS system, approved for spinal and cranial surgical procedures, aims to address these barriers by employing 3D optical imaging technologies and machine vision algorithms. These barriers and the 7D Surgical MvIGS system’s solutions are discussed in detail below.

32.2.1 Complex workflow and long learning curve

Intraoperative IGS systems require a lengthy setup, which includes maneuvering the imaging device in and out of the desired field of view, adjusting the table height, and taking steps to maintain sterility of the surgical field. In addition, the tool tracking system must have line of sight to both the intraoperative scanner and the patient before images can be acquired. Registration is a necessary process for utilizing preoperative images for surgical guidance. Traditional paired-point matching registration based on anatomic landmarks has a relatively long learning curve that can result in surgeon frustration and longer operation times, especially for new users. The main difficulty is that it puts the onus on the surgeon to accurately match preselected fiducial points with their respective anatomical locations. Often, preselected fiducials may not be reachable during surgery, due to obscured or removed anatomy, or they may be incorrectly selected, due to a lack of distinct features recognizable by the surgeon. In these scenarios, a nonsterile user is needed to work with the surgeon intraoperatively to adjust and/or select new fiducials. Complex anatomy can further exacerbate this problem and, as a result, add significant time to the registration process and ultimately to the surgical procedure. A significant learning curve thus exists for surgeons to become efficient at this workflow. Instead of requiring the user to accurately pick paired points, the 7D Surgical MvIGS system relaxes this requirement by capturing a high-resolution 3D surface image of the patient. This 3D surface image, in combination with machine vision algorithms, greatly reduces the finesse required by traditional paired-point matching approaches while using thousands of automatically generated points based on the exposed anatomy.
The primary benefit is that the surgeon only needs to approximately indicate to the system which anatomical location they are interested in (e.g., which spinal level); accurately matching preselected fiducials is not necessary. The entire registration workflow, which includes optical surface acquisition and patient region identification, takes less than 20 seconds. After the registration is complete, the surgeon is asked to verify it by touching the anatomy with the tip of the customized trackable surgical instrument and comparing the virtual position of the tool on the system monitor to the location of the tool on the patient’s anatomy. By automating the surface digitization and registration processes, valuable operating time is saved and the learning curve is significantly reduced.

32.2.2 Extended surgical time due to workflow disruptions

A common challenge with intraoperative IGS systems is dealing with variation in spine alignment as rods and screws are inserted, due to the nonrigid anatomy of the spine. The spine level to which the reference array is attached is always tracked reliably (provided that the reference array is rigidly attached), but the other spine levels can move relative to the reference array, introducing a mismatch between the registered medical image and the patient’s spine. To overcome this, the spine can be imaged again in the new alignment, which involves a lengthy setup process (15–30 minutes) using current intraoperative IGS. In addition, rescanning exposes the patient to additional ionizing radiation. In spinal surgeries involving multiple vertebrae, it is recommended that registration be performed segmentally to reduce navigation errors associated with intervertebral motion. Here, the benefit of the 7D Surgical MvIGS system is amplified. Traditional preoperative IGS systems deal with this issue by performing multiple consecutive registrations, each requiring movement of the reference array to the spine level being operated on. This process is also quite lengthy (5–7 minutes per spine level). The 7D Surgical MvIGS system, however, can perform registrations quickly. Rather than spending 15–30 minutes acquiring new images in the case of an intraoperative IGS system, or 5–7 minutes per registration in the case of current preoperative IGS systems, the surgeon can simply attach the 7D Surgical reference array to the desired spine level, indicate the spine level of interest using a tracked instrument, and engage the system through a foot pedal click to perform a Flash Registration. The system digitizes the surface of the operative anatomy and automatically updates the registration. Since registration can be performed quickly, the surgeon’s resistance to performing multiple registrations, one at each spinal level, is reduced.
With this technology, the surgeon no longer has to worry about potential mismatches between the registered medical image and the patient’s spine introduced by intervertebral motion. Additionally, all IGS systems rely on a reference array that is localized by the tool tracking system. This reference array must be fixed to the anatomy of interest, and the customized surgical instruments are navigated with respect to it. The tool tracking system follows the anatomy via the reference array, and hence registration accuracy is maintained even if the patient or camera moves, as long as the reference array remains fixed to the anatomy. However, an accidental change in the position of the reference array will introduce navigation inaccuracies, which worsen with increased distance from the reference array, disrupting the workflow. This is a well-recognized limitation of IGS systems. If the reference array is bumped, a new registration must be performed. In intraoperative IGS systems, this means repositioning the patient to be scanned and/or setting up the intraoperative scanner again and reimaging and reregistering the patient’s anatomy. This process can take 15–30 minutes of valuable operating time and can add to user frustration. In addition, rescanning exposes the patient and surgical team to additional ionizing radiation. In preoperative IGS systems, a new registration involves repicking points on anatomical landmarks and verifying the quality of the registration. In either case, significant time costs are incurred to return to a state of accurate navigation. The 7D Surgical MvIGS system and its Flash Fix technology allow near-instantaneous correction of the patient registration using machine vision. Rather than spending anywhere between 15 and 30 minutes to fix a registration as with existing intraoperative IGS systems, the surgeon simply engages the system through a foot pedal click to enact Flash Fix. The system then digitizes the surface of the operative anatomy, identifies what has spatially changed in the surgical environment since the last registration, and updates the patient registration automatically. The surgeon no longer has to wonder whether the reference array was bumped, or worry about adding extra time to reregister the patient if it was.
With the Flash Fix technology, the registration can be updated at any time with minimal impact on the surgical workflow. Lastly, a crucial benefit of both the multilevel Flash Registration and the Flash Fix technology is that the patient and operating room staff are not exposed to additional ionizing radiation. These processes take only a few seconds, and IGS navigation can continue seamlessly.

32.2.3 Line-of-sight issues for optical trackers

Instruments can be guided with electromagnetic (EM) tracking systems, which localize small EM field sensors in an EM field of known geometry, allowing an instrument’s position and orientation to be tracked. However, EM systems are susceptible to distortion from nearby metal sources and have limited accuracy compared to optical tracking systems [28]. Because of these limitations, optical tracking systems are now widely adopted in IGS systems. These systems rely on arrangements of optical tracking cameras (usually two) positioned away from the surgical table and looking at the operative field. Although they offer accurate navigation, their line-of-sight issues have plagued IGS systems. If at any time the cameras cannot see the arrays on the customized surgical instruments because their line of sight is blocked, tracking is interrupted, which can disturb navigation and frustrate surgeons. In addition, the surgeon does not have direct control over these cameras, as they are not sterile; instructions must be relayed to operating room staff to adjust the cameras and improve the line of sight to the tracked instruments. To overcome this, the 7D Surgical MvIGS system embeds the optical tracking system in an onboard overhead surgical light, which can be adjusted via a sterile light handle. Surgeons are used to pointing the light source toward the site of operation, so as the embedded surgical light is adjusted, the surgeon is in effect also updating the position of the cameras. This greatly improves the trackability of the surgical instruments. The 7D Surgical tools are designed to leverage this optimization by having their planar tracking arrays facing upward, while preserving the surgeon’s line of sight down to the surgical field (see Fig. 32.4).

FIGURE 32.4 Examples of 7D Surgical’s trackable surgical tools: (A) spine reference array, (B) awl, (C) pedicle probe, (D) cranial reference array, and (E) pointer.

32.2.4 Requiring nonsterile user assistance

In practice, almost all IGS systems require a clinical specialist to help with the hardware setup and the operation of the IGS system throughout the clinical workflow. This not only adds cost to the operation but can also create frustration for surgeons and add time to the surgery, especially when the surgeon needs to deviate from the preplanned workflow. For example, as mentioned in Section 32.2.2, when the reference array is bumped, significant effort and time are required to restore navigation accuracy. In contrast, the 7D Surgical MvIGS system gives surgeons full control over the workflow as well as the system hardware while remaining sterile. The surgeon can progress through the entire workflow to navigation by interacting with a foot pedal, and in the case of a reference array bump, Flash Fix quickly restores navigational accuracy. Additionally, as explained in Section 32.2.3, the surgeon can control the placement of the onboard surgical light and optical tracking cameras through a sterile handle. If intraoperative IGS or traditional fluoroscopy is used, a technician must be present to operate the radiation machine; the 7D Surgical MvIGS system operates based on preoperative medical images and hence does not require additional personnel to operate an ionizing radiation device, yielding additional time and cost savings.

32.2.5 Exposure to intraoperative ionizing radiation

When traditional intraoperative IGS is used, the patient, surgical staff, and surgeon are exposed to ionizing radiation. To minimize occupational radiation exposure, the staff and surgeons must wear lead aprons. However, wearing these heavy aprons over prolonged periods has been shown to cause orthopedic and ergonomic problems, particularly those related to the spine [29,30]. In addition, lead aprons do not provide full coverage of the body, nor do they fully block the radiation; in fact, standard 0.5 mm lead aprons block just over one-third of the scattered radiation [31]. Therefore, surgeons and staff receive a significant dosage of ionizing radiation despite wearing heavy lead aprons. Other staff are accustomed to simply leaving the operating room during a scan. As these ionizing radiation-emitting devices have been used consistently for spine procedures over the last 20–30 years, operating room staff members have experienced significant deleterious effects, including an increased chance of developing several types of cancer. Mastrangelo et al. found a statistically significant increase in solid malignant tumors among orthopedic surgeons compared to other healthcare workers in the same facility [32]. There has also been an increase in the incidence of radiation-induced cataracts among surgical staff and surgeons, and hence many surgeons have started wearing shielding goggles to decrease this risk [33]. These factors strongly indicate that the radiation dosage for spinal surgeons should be reduced through the use of radiation-free technologies. The 7D Surgical MvIGS system removes the need to use radiation for navigation in the operating room. Instead, the patient’s preoperative image is loaded into the system and the operative anatomy of the patient is digitized using nonionizing visible light.
The two surfaces are then automatically registered, enabling the surgeon to accurately navigate on the preoperative images of the spine while providing a radiation-free surgical environment for surgeons, staff, and patients. Finally, by having the patient imaged in the controlled setting of the radiology department, radiation exposure can be further reduced for the patient through the use of size- and weight-specific imaging protocols, with better image quality [34]. Such individualized scans are difficult to achieve using intraoperative CT or 3D fluoroscopy technologies.

32.2.6 Large device footprint

Traditional fluoroscopy and intraoperative IGS machines are unwieldy, and maneuvering them through small operating rooms and the narrow hallways of hospitals is cumbersome. Additional personnel are also required to operate the machines during surgery, further cluttering small operating rooms. The 7D Surgical MvIGS system can be easily wheeled through the hospital and is designed to roll through standard entrance doors without the additional floor support or retrofitting required by some of the larger intraoperative imaging technologies. As the 7D Surgical MvIGS system is controlled by the surgeon or operating room staff, additional personnel such as clinical specialists are not required, reducing the number of people in the operating room.


FIGURE 32.5 Hardware overview of the 7D Surgical MvIGS system. MvIGS, Machine-vision Image-Guided Surgery.

32.3 Technical aspects of the 7D Surgical Machine-vision Image-Guided Surgery system

32.3.1 Hardware components

The 7D Surgical MvIGS system is designed and manufactured in Ontario, Canada. Fig. 32.5 shows the main components of the system: (1) surgical arm with extension arm, (2) yoke, (3) system head, (4) surgeon monitor, (5) operator monitor, (6) workstation (computer, keyboard, mouse, and software), (7) system cart, (8) castor locking pedal, (9) foot pedal, and (10) connector for the removable sterile light handle. The surgical arm and yoke allow the surgeon to aim the system head toward the desired volume, both to utilize the built-in surgical lights and to direct the optical cameras for tool tracking and 3D surface scanning. Joints allow both the surgeon and operator monitors to be adjusted to user preferences. These components also allow the cart to be folded into a more compact configuration for transport and storage. The system head consists of a camera gantry integrated with a surgical-grade LED light source. The camera gantry comprises a proprietary machine-vision camera system and projector, aiming lasers, and an infrared (IR) optical tracking system. The 7D Surgical MvIGS system utilizes the Polaris Vicra (Northern Digital, Ontario, Canada), which consists of two cameras with IR filters, surrounded by IR LEDs that illuminate the tracking volume [35]. The projector and the machine-vision camera system are employed to digitize the operative surface intraoperatively. The aiming lasers are used for positioning the system head at the correct working distance and orientation. The surgeon monitor and the operator monitor are mirrored; having the same content on both displays helps coordinate the surgeon and the staff during the surgery. The system cart includes four castors for mobility. The castor locking pedal at the front of the cart can be used to lock, unlock, or set the directional lock for the castors. The directional lock confines the cart to forward motion to facilitate transport (e.g., in long corridors), whereas unlocked castors allow movement in all directions. The foot pedal and the sterile light handle allow the surgeon to interact with the 7D Surgical MvIGS system while remaining sterile. The foot pedal has three buttons whose functions are context-sensitive.

32.3.2 The Machine-vision Image-Guided Surgery system workflow

The 7D Surgical MvIGS system is designed to provide an easy and fast workflow for surgeons. The system has five key steps: (1) load data, (2) adjust 3D model, (3) select anatomy of interest, (4) register, and (5) navigate. Initially, the patient’s preoperative CT or magnetic resonance (MR) image is loaded into the system. A 3D model of the preoperative image is then created based on a threshold value applied to the intensity of the image volume pixels or voxels. This 3D model represents the anatomical site that the 7D Surgical MvIGS system will register to. Optionally, cropping tools are available to remove undesired regions. The surgeon then selects a minimum of three regions on the 3D model to demarcate the anatomy of interest. For example, in spinal surgeries, the anatomy of interest could be a vertebral level; in cranial surgeries, it could be the face of the patient (skin) or the cranium (bone). During registration, the system digitizes the patient using visible light. This is performed using the technique of structured light [36], referred to here as a Flash of light, and is an integral part of the registration process. To perform Flash Registration, the system head is aimed directly at the patient anatomy to be registered; the aiming laser system embedded in the system head assists with the aiming. 3D surface acquisition follows, yielding hundreds of thousands of points that accurately represent the visible surface anatomy. Using a tracked tool, the surgeon selects the corresponding regions from the preoperative 3D model. Alternatively, they can be acquired using a mouse by clicking on corresponding regions of the acquired structured light point cloud of the surface, also referred to as the digitized surface. The availability of the digitized surface thus gives the user the option of a completely contactless registration workflow. Additionally, existing technologies require corresponding points to be picked to within a few millimeters, whereas 7D Surgical’s approach only requires approximate identification, made possible by leveraging machine vision technology. The system then automatically registers the digitized surface to the preoperative dataset of the patient. This overall process is referred to as Flash Registration and reduces the length of existing workflows to seconds. Fig. 32.6 shows an example visualization of Flash Registration that digitizes a human lumbar spine phantom and automatically aligns its current spatial position to a preoperative dataset (e.g., CT).

FIGURE 32.6 7D Surgical’s Flash Registration: proprietary 3D imaging method and software algorithms automate the registration process. (Right image) 3D intraoperatively digitized surface of the spine; (left image) 3D rendering of the preoperative CT. The yellow highlighted region on the left defines the target anatomy. Shown in this figure is a lumbar spine model covered with red Play-Doh, which mimics the presence of soft tissue. 3D, Three-dimensional.

Once the preoperative image and the intraoperative digitized surface are registered, the surgeon validates the registration accuracy using a tracked pointed tool, touching known anatomical landmarks and sliding the tip of the tool along the surface of the patient while verifying that the expected anatomy appears on the IGS navigation monitor. In the navigation stage, the surgical tools are visualized in relation to the patient’s anatomy in various radiological (e.g., sagittal, axial, coronal, inline) and 3D views. These visualizations of the patient’s anatomy in relation to the surgeon’s tools are used during the procedure to guide tissue resection or implant insertion. A photograph of the 7D Surgical MvIGS system being used by the surgeon to navigate the patient’s anatomy is shown in Fig. 32.7. In addition to standard navigation views, the 7D Surgical cranial software enables simultaneous registration of multiple modalities, such as MR and CT. The workflow consists of independently registering multiple preoperative datasets. Once completed, the registrations can be linked during navigation; no preoperative image fusion is required. An example of multimodal navigation for cranial surgeries is shown in Fig. 32.8.
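The first modeling step of the workflow, thresholding the preoperative volume's intensities, can be sketched generically. This is not 7D Surgical's implementation; the threshold and voxel spacing below are illustrative assumptions, and a real system would extract a triangulated isosurface rather than a raw voxel point cloud.

```python
import numpy as np

def model_points_from_volume(volume, spacing, threshold):
    """Return (M, 3) physical coordinates of voxels at/above threshold.

    volume: 3D array of image intensities (e.g., CT Hounsfield units)
    spacing: per-axis voxel size in mm
    threshold: intensity cutoff separating bone from soft tissue
    (an assumed, illustrative parameter; the cutoff depends on the
    scanner and the tissue of interest).
    """
    idx = np.argwhere(volume >= threshold)     # voxel indices above cutoff
    return idx * np.asarray(spacing, float)    # convert to millimeters
```

The resulting point set stands in for the "3D model" the text describes; cropping would simply discard points outside a user-drawn region.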


FIGURE 32.7 The Machine-vision IGS system registers the spatial position of the patient in the operating room, while also tracking the movement of surgical tools as they relate to the patient’s anatomy in a spinal procedure. IGS, Image-guided surgery.

FIGURE 32.8 Screenshot from the cranial software of the 7D Surgical MvIGS system during navigation, linking registrations performed separately to the CT and MR datasets. MR, Magnetic resonance; MvIGS, Machine-vision Image-Guided Surgery.
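Linking independently acquired registrations, as in the multimodal cranial workflow, is plain rigid-transform composition: if both the CT and MR datasets are registered to the same patient coordinate frame, a CT-to-MR mapping follows without fusing the images. This is a generic sketch of that composition, not the vendor's software; the function names are hypothetical.

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a 3x3 rotation R and translation t into a 4x4 rigid transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def link_registrations(T_ct_to_patient, T_mr_to_patient):
    """Map CT coordinates into MR coordinates by composing the two
    independent patient registrations:
    T_ct_to_mr = inv(T_mr_to_patient) @ T_ct_to_patient."""
    return np.linalg.inv(T_mr_to_patient) @ T_ct_to_patient
```

A point navigated on the CT can then be displayed at the corresponding MR location by applying the linked transform to its homogeneous coordinates.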

32.3.3 Flash Registration

At any time during the procedure, the surgeon can use the foot pedal to initiate a Flash Registration, which acquires an intraoperative 3D digitized surface of the patient’s anatomy based on structured light imaging, followed by registration. The registration process is based on the iterative closest point algorithm [37], which iteratively computes a transformation by reducing a distance metric between the intraoperative 3D digitized surface scan and the preoperative 3D model. In a cranial procedure, the surgeon may Flash to the patient’s face to plan the skin flap, and Flash again directly to the skull to confirm the location of the bone flap. Due to the density and accuracy of the 3D patient surface points collected by the 7D Surgical Machine-vision cameras, registering directly to any cranial anatomy is fast, efficient, and accurate. In spinal procedures, if the reference array is accidentally bumped, Flash Registration can be performed without the need to repick points: the 3D surface scan acquired when the registration was first performed is compared with the current scan, in which the reference array is in a different orientation. This process completes in a matter of seconds and restores navigation accuracy. We refer to this feature as Flash Fix.
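The iterative closest point idea referenced above can be sketched minimally: repeatedly pair each surface point with its nearest model point, then solve the resulting paired-point problem with an SVD-based rigid fit. This is a generic textbook sketch (brute-force nearest neighbors, no outlier rejection), not 7D Surgical's proprietary implementation.

```python
import numpy as np

def rigid_fit(src, dst):
    # SVD-based (Kabsch) least-squares rigid transform src -> dst
    sc, dc = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dc - R @ sc

def icp(surface, model, iters=30):
    """Align `surface` (digitized point cloud) to `model` points.

    Each iteration pairs every surface point with its nearest model
    point (brute force here; real systems use spatial search trees),
    then solves for the rigid transform minimizing summed squared
    distances. Returns the accumulated (R, t).
    """
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur = surface.copy()
    for _ in range(iters):
        # nearest-neighbor correspondences
        d2 = ((cur[:, None, :] - model[None, :, :]) ** 2).sum(-1)
        matched = model[d2.argmin(1)]
        R, t = rigid_fit(cur, matched)
        cur = (R @ cur.T).T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

ICP converges to a local minimum, which is why the system needs only an approximate indication of the anatomy of interest to initialize, rather than exact paired points.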



32.4 Clinical case studies with the 7D Surgical Machine-vision Image-Guided Surgery system

The 7D Surgical MvIGS system has been used in a wide range of clinical cases by numerous surgeons. Here we present a few spinal and cranial clinical case studies that have employed the 7D Surgical MvIGS system.

32.4.1 Revision instrumented posterior lumbar fusion L3–L5

In this example, the patient is a 60-year-old female who had previously undergone a lumbar laminectomy for symptomatic lumbar stenosis. Significant improvement in her symptoms was observed postoperatively. Unfortunately, she presented again with a history of worsening lower back pain radiating into the bilateral buttocks and thighs. Her symptoms were exacerbated by walking and relieved by sitting. Imaging disclosed significant scar tissue in the region of her previous laminectomy and recurrent lumbar stenosis (Fig. 32.9). A large mass of scar tissue from the previous laminectomy was spared during the initial approach. The surgical site was digitized intraoperatively in 230 ms using 7D Surgical’s Flash Registration technology (Fig. 32.10). The surgeon identified the anatomical areas to be used for registration (Fig. 32.11) by selecting preplanned level definition points with the 7D Surgical awl and foot pedal. This anatomical information was then automatically registered to the preoperative CT via 7D Surgical’s proprietary machine-vision algorithms. The segmental regions used for registration included the spinous process and lamina of L5, along with the transverse processes of L4. As shown in Fig. 32.12, each green point represents an individual digital fiducial (1430 in total) that the technology identifies automatically to make a successful match to the patient. A total of three vertebral levels were navigated and six screws were instrumented during this case. The patient has done extremely well in the postoperative period: she has had complete resolution of her preoperative symptoms and was back to her normal level of function within 6 weeks of surgery. Compared to manipulating the C-arm in and out of the field for anterior–posterior and lateral imaging during cases without the 7D Surgical MvIGS system, significant surgical time was saved during the procedure.
Maintaining sterility in the field requires extensive clinical experience as the C-arm is maneuvered in and out of the field and the table height is adjusted to accommodate its positioning. The 7D Surgical MvIGS system eliminates the need for any of these maneuvers. Additionally, it provides the surgeon with optimal overhead lighting to perform the procedure. Lastly, due to the patient’s small pedicles and the previous laminectomy, proper and accurate placement of the six pedicle screws would have been challenging without the use of the 7D Surgical MvIGS system (Fig. 32.13). The surgeon of this case commented: “The process of intraoperative registration is quick, user-friendly, and accurate. The entire registration process can be done by the operating surgeon within a few seconds. Preoperative loading of the CT scan can be done at any point before the surgical procedure. Additionally, reregistration is a simple, straightforward process, should it be necessary during the procedure. From a surgical perspective, the pedicle screw placement workflow is not altered by use of the 7D Surgical MvIGS system, while the total surgical time is reduced due to the reduction of time spent placing and adjusting the C-arm for fluoroscopic imaging.”

FIGURE 32.9 A 3D volumetric view of the preoperative CT of a 60-year-old female showing the previous laminectomy. 3D, Three-dimensional.

FIGURE 32.10 Surgical incision showing scar tissue from previous laminectomy.

FIGURE 32.11 Regions corresponding to the anatomy of interest (in yellow) are selected in the preoperative image for registration.

FIGURE 32.12 Points acquired using 7D Surgical’s Flash imaging technology are automatically selected for registration (in green). Scar tissues (in gray) are not used.

FIGURE 32.13 The axial and sagittal views shown during navigation on the 7D Surgical MvIGS system that are used to guide the trajectory of pedicle screws. Measurement tools enable live implant sizing. MvIGS, Machine-vision Image-Guided Surgery.

32.4.2 Revision instrumented posterior lumbar fusion L4–S1

A 60-year-old male presented with a history of posterior L4–S1 decompression and fusion 1 year prior. Pseudarthrosis at L5–S1 then occurred postprocedure. Posterior reexploration with placement of S2 pedicle screws (due to a left S1 screw fracture) was planned, along with an anterior lumbar interbody fusion at L5–S1. Bony union was confirmed on CT at L4–L5. The patient suffered from refractory back pain and bilateral leg numbness and tingling, but no radiating pain; he failed many months of conservative therapy for his symptomatic nonunion before considering repeat surgery. The patient’s preoperative CT contained heavy image artifacts due to existing pedicle screws at L4–S1. Registration was performed on the remaining spinous processes of those levels, where the scan was free of artifacts and the anatomy free of scar tissue. The surgeon was able to sterilely position the 7D Surgical Machine-vision cameras in an intuitive fashion, similar to how he would aim the built-in surgical lighting, and perform the Flash Registration process via the system’s foot pedal. The 7D Surgical MvIGS system digitized the patient and used 1558 points for registration, in a total workflow time of 32 seconds (Figs. 32.14 and 32.15). Accuracy was maintained throughout the procedure, allowing for cannulation and placement of three new pedicle screws at S1 and S2. The pedicle screw at L5 was placed adjacent to an existing screw (Fig. 32.16). The patient had an uncomplicated hospital course but was readmitted for 24 hours for nausea and emesis on postoperative day 4; no specific source was found. At 2 weeks, he had decreased his narcotic requirement by half, and at 2 months, his back pain was significantly reduced (although not resolved completely) as he began formal lumbar stretching and core strengthening.

Machine-Vision Image-Guided Surgery for Spinal and Cranial Procedures Chapter | 32


FIGURE 32.14 A 3D model of a tracked tool is overlaid on top of the 3D surface point cloud of the patient’s anatomy during registration, enabled by the 7D Surgical MvIGS system, which fuses information from the 3D surface imaging system and the tool tracker. This assists the surgeon in identifying anatomical targets during the registration process. 3D, Three-dimensional; MvIGS, Machine-vision Image-Guided Surgery.

FIGURE 32.15 The 1558 points (shown in green) were used to register the 3D surface point cloud of the patient to the preoperative CT. Points not used for registration, potentially due to the presence of soft tissue on the bone, are shown in gray. Beige regions show the anatomical registration target. 3D, Three-dimensional.


Handbook of Robotic and Image-Guided Surgery

FIGURE 32.16 Navigation views as displayed on the 7D Surgical MvIGS system. In this surgical case, the system has enabled the accurate placement of a new pedicle screw adjacent to an existing screw. MvIGS, Machine-vision Image-Guided Surgery.

32.4.3 Cervical fusion with radiofrequency ablation and vertebroplasty

A 54-year-old male with a history of esophageal cancer presented with a burst fracture of his C7 vertebral body along with posterior projection into the spinal canal. He experienced significant mechanical cervical spinal pain that worsened with movement, in addition to radiculopathy involving both arms. The decision was made to perform a posterior-approach fusion of C5–T2 with bilateral C7 transpedicular radiofrequency (RF) tumor ablation and vertebroplasty. Prior to the procedure, the patient’s preoperative CT was reviewed. The surgeon exposed the patient’s C5–T2, sterilely positioned the cameras, and initiated the Flash Registration process via the foot pedal. Using the 7D Surgical awl, he then collected points on the patient’s anatomy similar to those defined on the preoperative CT (Fig. 32.17). The system performed Flash Registration to C6 in less than a second, colocalizing 1123 points between the digitized surface and the preoperative CT scan (Fig. 32.18). For this registration, the surgeon had a total workflow time of 68 seconds. Soon after instrumenting C5 using this registration, the reference array was accidentally hit. The 7D Surgical Flash Fix was used to simultaneously digitize the patient and correct the registration instantly, allowing the surgeon to continue operating. This entire registration correction took 4 seconds, without the need for any intraoperative ionizing radiation. To maintain accuracy throughout the construct, T1–T2 was reregistered in only 47 seconds, and the pedicles were cannulated using a drill with a tracked guide. One last registration was performed on the highly mobile C7 vertebra. To stabilize the fracture, the C7 registration was used to create bilateral pilot holes for the insertion of the RF ablation (RFA) and vertebroplasty needles. The remaining steps of the RFA and vertebroplasty were performed under fluoroscopic guidance as per the standard of care.
The cannulation of the pedicles allowed for easy placement of eight screws and was greatly assisted by the 7D Surgical MvIGS system, while minimizing operating time and radiation (Figs. 32.19 and 32.20). The patient did very well postoperatively and was ready for discharge on postoperative day 1, with an Eastern Cooperative Oncology Group score of 1, indicating that the patient was able to carry out work of a light or sedentary nature [38]. Additionally, the patient’s American Spinal Injury Association impairment score was E, which indicates normal sensory and motor function [39].

32.4.4 Cervical fusion

A patient who required cervical fusion was identified. The cervical levels to be instrumented were exposed, and an intraoperative CT was acquired. The CT dataset was loaded into the 7D Surgical MvIGS system. The surgeon aimed the stereo machine-vision cameras at the anatomy of interest using the sterile light handle and attached the 7D Surgical Reference Array to C6. Flash Registration was initiated using the foot pedal, instantly digitizing the patient’s exposed anatomy (Fig. 32.21). The surgeon then identified three regions on C6 using the 7D Surgical awl, and with one additional foot pedal press, 622 unique points were registered in 1.5 seconds (Fig. 32.22). From initiating the Flash to the completed registration, the complete 7D Surgical workflow took the surgeon 21 seconds. This registration was used to instrument C5 and C6. Following a second registration to C3 and C4, the surgeon cannulated the lateral masses of these levels. The navigation screenshots are shown in Fig. 32.23. When the surgeon moved on to C2, he determined that the spine had changed position since the initial CT scan due to movement during instrumentation of the previous vertebral levels. Using 7D Surgical’s machine-vision-powered Flash Registration, the surgeon was able to reregister to C2 using the original CT scan, without the need for any further radiation, in under 1 minute. C2 was then instrumented with the 7D Surgical awl, pedicle probe, and a hand-held drill. Overall, the 7D Surgical MvIGS system demonstrated its ability to support a multilevel cervical procedure while maintaining excellent navigational accuracy and keeping registration time to a minimum (Fig. 32.24). In this case, the average registration workflow time was 48 seconds, and an average of 608 points were used for each registration. The total radiation exposure was significantly reduced, and the estimated time saving was about 20 minutes by eliminating the need for additional intraoperative CT scans.

FIGURE 32.17 A 3D model of a tracked tool is overlaid on top of the 3D surface point cloud of the patient’s anatomy during registration. 3D, Three-dimensional.

FIGURE 32.18 The 1123 points selected for registration are shown in green.

FIGURE 32.19 The axial and sagittal views during navigation in the thoracic region as displayed on the 7D Surgical MvIGS system. MvIGS, Machine-vision Image-Guided Surgery.

FIGURE 32.20 Postoperative CT showing successful screw placement in C5–C6 and T1–T2.

FIGURE 32.21 A 3D surface point cloud inside an incision in the cervical region acquired using the 7D Surgical Machine-vision cameras. 3D, Three-dimensional.

FIGURE 32.22 The 622 points selected for registration are shown in green.

FIGURE 32.23 The axial and sagittal views during navigation of C4 as displayed on the 7D Surgical MvIGS system. MvIGS, Machine-vision Image-Guided Surgery.

32.4.5 Left temporal open biopsy

Prior to the procedure, the patient’s preoperative CT and MR images were loaded into the 7D Surgical MvIGS system. To build the 3D model, a threshold was applied to the MR data such that the skin surface was visible, and a different threshold was applied to the CT data to create the skull surface. The patient was positioned in the skull clamp in an extreme lateral position for access to the region of interest. The 7D Surgical cranial reference array was rigidly attached to the skull clamp using the starburst connector. The 7D Surgical MvIGS system head was aimed at the patient’s face and the site was instantly digitized (Fig. 32.25). Four regions were picked virtually on the MR and on the digitized patient surface, without the need to contact the patient, using the 7D Surgical MvIGS system user console. The 7D Surgical MvIGS system then registered thousands of points in under 2 seconds using the patient’s anatomy, shown in green (Fig. 32.26). After the patient was draped, the surgeon used this registration to plan the appropriate location for the skin flap. In order to minimize the craniotomy size, the surgeon chose to perform a second registration by aiming the system to Flash directly at the exposed bone (Fig. 32.27). Virtual regions were picked on the CT and on the intraoperative digitized surface, as shown in blue. Of the 300,000 points collected instantly using Flash, the 7D Surgical MvIGS system was able to detect which points to use for registration. Using the Linked Registration feature of the 7D Surgical cranial software, the registered CT and MR datasets were shown simultaneously on the monitor, overlaid on each other; no preoperative image fusion was required. Using both registered modalities, the surgeon was able to plan the craniotomy location and biopsy approach using the 7D Surgical Probe intraoperatively (Fig. 32.28).

FIGURE 32.24 Postoperative CT showing eight bilateral screws placed in C2, C4, C5, and C6.

FIGURE 32.25 Corresponding regions identified on the MR image and intraoperative digitized patient surface. MR, Magnetic resonance.

FIGURE 32.26 Green dots represent registered points between preoperative MR images and the intraoperative digitized patient surface. MR, Magnetic resonance.

FIGURE 32.27 Corresponding regions identified on the patient CT (left) and the intraoperative digitized patient surface point cloud (middle); the region in green contains points selected for registration, whereas the points in orange are not used. The beige region belongs to the preoperative CT image (right).

FIGURE 32.28 Navigation of the 7D Surgical pointer, which is used to plan the biopsy trajectory.

32.5 Future of 7D Surgical Machine-vision Image-Guided Surgery system

7D Surgical’s powerful suite of optical imaging capabilities and machine-vision algorithms provides the groundwork for solutions to many critical spinal and cranial IGS navigation problems. Below we present one challenge and how 7D Surgical’s technology can potentially improve the current standard of care. It is important to note that, at the time of writing this chapter, the following application of the 7D Surgical MvIGS system has not achieved regulatory approval in any jurisdiction and is in the research and development phase.

32.5.1 Multilevel registration for spine deformity procedures

A common challenge in spinal surgery is progressively measuring the shape of the spine intraoperatively as rods and screws are inserted and the spinal alignment changes. This capability is especially valuable for spine deformity surgeries, where a considerable change in spinal anatomy may be needed to reduce the patient’s morbidity. For example, to correct a deformity such as scoliosis, it may be desirable to reduce the angle between adjacent vertebrae to within a threshold range [40]. Current technologies require ionizing radiation to provide feedback to the surgeon on whether an ideal spinal alignment has been achieved, and even then the feedback is in most cases qualitative. Fluoroscopy images provide only 2D planar views, which can obscure certain anatomy. Some intraoperative X-ray imaging devices can provide 3D information; however, their lengthy setup time and workflow mean they are not commonly used to incrementally measure the alignment of the spine. Scanners that do not include a mechanized scan table are also limited to approximately four vertebral levels, and complicated deformity cases often involve many more. In addition, intraoperative X-ray imaging devices expose patients and surgical staff to harmful radiation. There is therefore a need to provide surgeons with fast and accurate intraoperative feedback for complex multilevel spinal deformity cases. By leveraging data from an intraoperative 3D digitized surface, 7D Surgical’s Flash Align technology can independently register multiple vertebrae at once. After registration, each vertebra can be tracked independently in three dimensions, enabling measurement of the shape of the surgically exposed spine as it lies on the surgical table. A 3D model of the spine can also be reconstructed, as shown in Fig. 32.29.
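At its core, this kind of surface-to-CT registration is a rigid point-set alignment problem, classically formulated as the iterative closest point (ICP) algorithm [37]. As an illustrative sketch only (this is generic textbook math, not 7D Surgical’s proprietary implementation), the closed-form step that ICP repeats, fitting a least-squares rotation and translation to matched point pairs, can be written as:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding 3D points.
    Returns a rotation R (3x3) and translation t (3,) minimizing
    sum ||R @ src[i] + t - dst[i]||^2 (the Kabsch/Procrustes solution).
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in degenerate configurations.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```

A full ICP loop would alternate this step with nearest-neighbor matching between the digitized surface points and the CT-derived bone surface, rejecting poorly matching points, conceptually similar to the gray soft-tissue points excluded from registration in the figures.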
This simultaneous segmental registration approach overcomes one of the key challenges of preoperative-image-based IGS systems: the discrepancy between the shape of the spine in the preoperative image and the patient’s position on the surgical table. This discrepancy arises from differences in the patient’s orientation between the preoperative scan (usually performed supine) and the surgical table (usually prone), and also from forces and corrections applied during surgery. With existing IGS systems, a common practice is for the surgeon to register multiple spinal levels at once for a multilevel surgery. In essence, they assume that the intervertebral motion between adjacent spinal levels is minimal and are willing to trade navigation accuracy for a shorter registration workflow. The 7D Surgical MvIGS system can perform multilevel segmental registration with negligible workflow impediment, enabling accurate navigation from the preoperative image despite changes in the shape of the spine during surgery. At any time during the procedure, the surgeon can simply press the foot pedal to initiate Flash Align, which triggers an intraoperative structured light acquisition of the surgical area. Without additional user actions, the system then identifies and registers all vertebrae based on the acquired structured light image and displays the updated position of each vertebra. Knowing the individual position of each vertebra enables the 7D Surgical MvIGS system to generate metrics that help the surgeon decide whether additional correction of the spine is required. These metrics include the Cobb angles, derotation angles, and sagittal balance [41,42]. 7D Surgical’s Flash Align has the potential to improve surgical outcomes by providing intraoperative feedback on the correction to surgeons, thereby reducing the need for revision surgeries, whose rates can be between 3.9% and 12.9% [43,44]. It also has the potential to mitigate radiation exposure for navigation by eliminating the need for intraoperative CT and fluoroscopy to provide spine alignment updates. This radiation reduction is especially important as many spinal deformity patients are young girls, and a link between early radiation exposure in adolescent women and an increased risk of breast cancer has been demonstrated [45].

FIGURE 32.29 Segmented virtual vertebrae models as displayed on the 7D Surgical MvIGS system software. In this example, a curved spine phantom for which a preoperative image had been acquired is straightened. The preoperative image is shown in opaque gray, and the updated intraoperative position of the spine is shown in the segmented colored models. MvIGS, Machine-vision Image-Guided Surgery.
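Among these alignment metrics, the coronal Cobb angle is geometrically simple: it is the angle between the endplate lines of the two most tilted vertebrae in the coronal plane. A minimal illustrative computation (not the 7D Surgical software):

```python
import math

def cobb_angle_deg(upper_endplate, lower_endplate):
    """Cobb angle (degrees) between two vertebral endplate directions.

    Each argument is a 2D (dx, dy) direction along an endplate in the
    coronal plane. The Cobb angle is the angle between the two endplate
    lines (lines, not rays, hence the fold to <= 90 degrees).
    """
    ux, uy = upper_endplate
    lx, ly = lower_endplate
    dot = ux * lx + uy * ly
    norm = math.hypot(ux, uy) * math.hypot(lx, ly)
    ang = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return min(ang, 180.0 - ang)  # lines have no direction

# Endplates tilted +20 deg and -25 deg from horizontal -> Cobb = 45 deg.
a = cobb_angle_deg((math.cos(math.radians(20)), math.sin(math.radians(20))),
                   (math.cos(math.radians(-25)), math.sin(math.radians(-25))))
```

In practice the endplate directions would be derived from the individually tracked vertebral models rather than supplied by hand.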
Furthermore, this capability could reduce procedural time by instantly updating the virtual vertebral models to reflect the patient’s anatomical position during the procedure, rather than requiring a rescan, which is a lengthy process with existing intraoperative imaging systems. This in turn can reduce the patient’s anesthesia time. Additionally, consistent live feedback improves the surgeon’s confidence in achieving the preoperative plan.

32.6 Conclusion

Using advanced optical surface digitization technologies, robust image registration, and cutting-edge machine-vision algorithms, the 7D Surgical MvIGS system performs registration of intraoperative anatomy to preoperative MR or CT images considerably faster than current IGS navigation systems. This is achieved without dependence on intraoperative ionizing imaging technologies, which greatly reduces radiation to the patient, surgeon, and operating room staff. Integration of the tool tracking system into the overhead surgical lighting unit provides improved instrument tracking and alleviates line-of-sight issues. We believe these innovations eliminate many of the restrictions that have traditionally led some surgeons to forgo navigation in favor of freehand or fluoroscopy-based approaches. A significant number of clinical cases, involving surgeons at major research centers, teaching hospitals, and private practice clinics, have demonstrated the short learning curve, with novice users becoming skilled operators after only a couple of cases. Additionally, high-accuracy navigation, substantial operating room time savings, and elimination of ionizing radiation exposure to the patient and surgical staff are the salient advantages of 7D Surgical’s innovative technology.

References


[1] Holly LT, Foley KT. Intraoperative spinal navigation. Spine 2003;28(15S):S54–61. Available from: https://doi.org/10.1097/01.BRS.0000076899.78522.D9.
[2] Widmann G. Image-guided surgery and medical robotics in the cranial area. Biomed Imaging Interv J 2007;3(1):1–9. Available from: https://doi.org/10.2349/biij.3.1.e11.
[3] Bydon M, Mathios D, Macki M, De la Garza-Ramos R, Aygun N, Sciubba D, et al. Accuracy of C2 pedicle screw placement using the anatomic freehand technique. Clin Neurol Neurosurg 2014;125:24–7.
[4] Castro W, Halm H, Jerosch J, Malms J, Steinbeck J, Blasius S. Accuracy of pedicle screw placement in lumbar vertebrae. Spine 1996;21(11):1320–4.
[5] Rajasekaran S, Vidyadhara S, Ramesh P, Shetty A. Randomized clinical study to compare the accuracy of navigated and nonnavigated thoracic pedicle screws in deformity correction surgeries. Spine 2007;32(2):56–64.
[6] Wang Y, Xie J, Yang Z, Zhao Z, Zhang Y, Li T, et al. Computed tomography assessment of lateral pedicle wall perforation by free-hand subaxial cervical pedicle screw placement. Arch Orthop Trauma Surg 2013;133(7):901–9.
[7] Dea N, Fisher C, Batke J, Strelzow J, Mendelsohn D, Paquette S, et al. Economic evaluation comparing intraoperative cone beam CT-based navigation and conventional fluoroscopy for the placement of spinal pedicle screws: a patient-level data cost-effectiveness analysis. Spine J 2016;16(1):23–31.
[8] Verma R, Krishan S, Haendlmayer K, Mohsen A. Functional outcome of computer-assisted spinal pedicle screw placement: a systematic review and meta-analysis of 23 studies including 5,992 pedicle screws. Eur Spine J 2010;19(3):370–5.
[9] Watkins R, Gupta A, Watkins R. Cost-effectiveness of image-guided spine surgery. Open Orthop J 2010;4:228–33.
[10] Arand M, Schempf M, Fleiter T, Kinzl L, Gebhard F. Qualitative and quantitative accuracy of CAOS in a standardized in vitro spine model. Clin Orthop Relat Res 2006;450:118–28.
[11] Du J, Fan Y, Wu Q, Wang D, Zhang J, Hao D. Accuracy of pedicle screw insertion among 3 image-guided navigation systems: systematic review and meta-analysis. World Neurosurg 2018;109:24–30.
[12] Lee G, Massicotte E, Raja Rampersaud Y. Clinical accuracy of cervicothoracic pedicle screw placement. J Spinal Disord Tech 2007;20(1):25–32.
[13] Mason A, Paulsen R, Babuska J, Rajpal S, Burneikiene S, Nelson E, et al. The accuracy of pedicle screw placement using intraoperative image guidance systems. J Neurosurg Spine 2014;20(2):196–203.
[14] Nottmeier E, Seemer W, Young P. Placement of thoracolumbar pedicle screws using three-dimensional image guidance: experience in a large patient cohort. J Neurosurg Spine 2009;10(1):33–9.
[15] Brodwater B, Roberts D, Nakajima T, Friets E, Strohbehn J. Extracranial application of the frameless stereotactic operating microscope: experience with lumbar spine. Neurosurgery 1993;32:209–13.
[16] Roessler K, Ungersboeck K, Dietrich W, Aichholzer M, Hittmeir K, Matula C, et al. Frameless stereotactic guided neurosurgery: clinical experience with an infrared based pointer device navigation system. Acta Neurochir (Wien) 1997;139(6):551–9.
[17] Foley K, Smith M. Image-guided spine surgery. Neurosurg Clin N Am 1996;7(2):171–86.
[18] Kalfas I, Kormos D, Murphy M, McKenzie R, Barnett G, Bell G, et al. Application of frameless stereotaxy to pedicle screw fixation of the spine. J Neurosurg 1995;83(4):641–7.
[19] Nolte L, Zamorano L, Jiang Z, Wang Q, Langlotz F, Berlemann U. Image-guided insertion of transpedicular screws. A laboratory set-up. Spine 1995;20(4):497–500.
[20] Foley K, Simon D, Rampersaud Y. Virtual fluoroscopy: computer-assisted fluoroscopic navigation. Spine 2001;26(4):347–51.
[21] Yang C-D, Chen Y-W, Tseng C-S, Ho H-J, Wu C-C, Wang K-W. Non-invasive, fluoroscopy-based, image-guided surgery reduces radiation exposure for vertebral compression fractures: a preliminary survey. Formos J Surg 2012;45(1):12–19.
[22] Koo KSH, Reis J, Manchester J, Chaudry G, Dillon B. Effects of mechanical complications on radiation exposure during fluoroscopically guided gastrojejunostomy exchange in the pediatric population. Dysphagia 2018;33(2):251–7.
[23] Srinivasan D, Than KD, Wang A, La Marca F, Wang P, Schermerhorn T, et al. Radiation safety and spine surgery: systematic review of exposure limits and methods to minimize radiation exposure. World Neurosurg 2014;82(6):1337–43.
[24] Gutiérrez LF, Ozturk C, McVeigh ER, Lederman RJ. A practical global distortion correction method for an image intensifier based x-ray fluoroscopy system. Med Phys 2009;35(3):997–1007.
[25] Richter M, Mattes T, Cakir B. Computer-assisted posterior instrumentation of the cervical and cervico-thoracic spine. Eur Spine J 2004;13(1):50–9.
[26] Silbermann J, Allam FR, Reichert T, Koeppert H, Gutberlet M. Computer tomography assessment of pedicle screw placement in lumbar and sacral spine: comparison between free-hand and O-arm based navigation techniques. Eur Spine J 2011;20(6):875–81.
[27] Jakubovic R, Guha D, Gupta S, Lu M, Jivraj J, Standish B, et al. High speed, high density intraoperative 3D optical topographical imaging with efficient registration to MRI and CT for craniospinal surgical navigation. Sci Rep 2018;8(1):14894.
[28] Birkfellner W, Hummel J, Wilson E, Cleary K. Tracking devices. In: Image-guided interventions. Boston, MA: Springer; 2008.


[29] Alexandre D, Prieto M, Beaumont F, Taiar R, Polidori G. Wearing lead aprons in surgical operating rooms: ergonomic injuries evidenced by infrared thermography. J Surg Res 2017;209:227–33.
[30] Goldstein J, Balter S, Cowley M, Hodgson J, Klein L. Occupational hazards of interventional cardiologists: prevalence of orthopedic health problems in contemporary practice. Catheter Cardiovasc Interv 2004;63(4):407–11.
[31] Hyun S, Kim K, Jahng T, Kim H. Efficiency of lead aprons in blocking radiation – how protective are they? Heliyon 2016;2(5). Available from: https://doi.org/10.1016/j.heliyon.2016.e00117.
[32] Mastrangelo G, Fedeli U, Fadda E, Giovanazzi A, Scoizzato L, Saia B. Increased cancer risk among surgeons in an orthopaedic hospital. Occup Med 2005;55(6):498–500.
[33] Harstall R, Heini PF, Mini RL, Orler R. Radiation exposure to the surgeon during fluoroscopically assisted percutaneous vertebroplasty. Spine 2005;30(16):1893–8.
[34] Donnelly LF, Emery KH, Brody AS, Laor T, Gylys-Morin VM, Anton CG, et al. Minimizing radiation dose for pediatric body applications of single-detector helical CT: strategies at a large children’s hospital. AJR Am J Roentgenol 2001;176(2):303–6.
[35] Frantz DD, Leis SE, Kirsch S, Schilling C. System for determining spatial position and/or orientation of one or more objects. U.S. patent 6,288,785; 1999.
[36] Geng J. Structured-light 3D surface imaging: a tutorial. Adv Opt Photonics 2011;3(2):128–60.
[37] Besl P, McKay N. A method for registration of 3-D shapes. IEEE Trans Pattern Anal Mach Intell 1992;14(2):239–56.
[38] Oken MM, Creech RH, Tormey DC, Horton J, Davis TE, McFadden ET, et al. Toxicity and response criteria of the Eastern Cooperative Oncology Group. Am J Clin Oncol 1982;5:649–55.
[39] Roberts TT, Leonard GR, Cepela DJ. Classifications in brief: American Spinal Injury Association (ASIA) impairment scale. Clin Orthop Relat Res 2017;475(5):1499–504.
[40] Majdouline Y, Aubin C-E, Robitaille M, Sarwark JF, Labelle H. Scoliosis correction objectives in adolescent idiopathic scoliosis. J Pediatr Orthop 2007;27(7):775–81.
[41] Ames CP, Smith JS, Scheer JK, Bess S, Bederman SS, Deviren V, et al. Impact of spinopelvic alignment on decision making in deformity surgery in adults. J Neurosurg Spine 2012;16(6):547–64. Available from: https://doi.org/10.3171/2012.2.SPINE11320.
[42] Berthonnaud E, Dimnet J, Roussouly P, Labelle H. Analysis of the sagittal balance of the spine and pelvis using shape and orientation parameters. J Spinal Disord Tech 2005;18(1):40–7.
[43] Luhmann S, Lenke L, Bridwell K, Schootman M. Revision surgery after primary spine fusion for idiopathic scoliosis. Spine 2009;34(20):2191–7.
[44] Richards B, Hasley B, Casey V. Repeat surgical interventions following “definitive” instrumentation and fusion for idiopathic scoliosis. Spine 2006;31:3018–26.
[45] Hoffman D, Lonstein J, Morin M, Visscher W, Harris B, Boice J. Breast cancer in women with scoliosis exposed to multiple diagnostic X rays. J Natl Cancer Inst 1989;81(17):1307–12.

33

Three-Dimensional Image-Guided Techniques for Minimally Invasive Surgery

Zhencheng Fan, Longfei Ma, Zhuxiu Liao, Xinran Zhang and Hongen Liao
Tsinghua University, Beijing, China

ABSTRACT Minimally invasive surgery (MIS) has attracted significant interest in current medicine for its lower operative pain and complication rates, smaller incisions, and faster recovery times. Image-guided surgery (IGS) is a general term for any surgical procedure that uses indirect vision to realize MIS. However, in current IGS, most three-dimensional (3D) medical images used for surgical navigation are displayed on 2D plane monitors, which lack intuitive 3D spatial information. Moreover, surgeons easily suffer from eye–hand coordination problems because the 2D navigation screen is away from the surgical area. This chapter focuses on 3D medical image rendering and display as well as augmented reality (AR)-based 3D image-guided techniques. Among the various 3D image rendering and display techniques, integral videography (IV) is promising due to its full parallax, continuous viewing points, autostereoscopic displayed images, and simple implementation without dedicated glasses or tracking devices. An AR-based 3D image-guided system using the IV technique can present intuitive “see-through” surgical scenes to improve eye–hand coordination in diagnosis, surgical planning, operation, and telemedicine. With the assistance of IV image-guided techniques, MIS with enhanced safety, efficiency, and accuracy can be utilized widely for improving the quality of life.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00033-5 © 2020 Elsevier Inc. All rights reserved.


33.1 Introduction

Compared with traditional “open” surgery, minimally invasive surgery (MIS) can provide patients with better postoperative outcomes and faster recovery [1]. With the assistance of energy-based surgical devices and high-definition cameras, many MIS approaches, such as radiology and endoscopy, have been used in various surgical procedures. Image-guided techniques play important roles in MIS procedures, because human anatomy can only be observed through medical images. However, there are specific challenges in MIS that need to be overcome. In most MIS, image navigation information of the operative field is commonly visualized on a 2D screen, which results in a hand–eye coordination problem and impaired depth perception: surgeons can barely determine the correct direction and position of tools relative to the anatomical structure. Augmented reality (AR), the merging of real and virtual data, has been used to solve this problem, and AR technologies have been investigated in a wide variety of surgical procedures. Three-dimensional (3D) images for AR are generated from medical imaging data; volume rendering and surface rendering are the most common 3D reconstruction methods [2]. Several AR display technologies, including video-based, see-through, and projection-based displays, have been used for AR-assisted navigation by creating a dynamically fused virtual and real image in real time. Video-based AR display technology is commonly used in robotic, endoscopic, and laparoscopic procedures [3]. In a video-based display, the virtual 3D images and the live surgical field are simultaneously overlaid and displayed on an external video display or a head-mounted display (HMD) [4,5]. The see-through display and the projection-based display enable surgeons to directly observe the real surgical field while simultaneously perceiving the virtual 3D images [6,7]. Precise depth perception is essential to AR-assisted navigation.
It enables surgeons to directly observe the 3D anatomical structures with depth information, supporting accurate and safe surgery for patients. A 3D autostereoscopic image with precise depth perception can be generated using an integral videography (IV) technique [7]. An IV-based AR navigation scene can be realized through an autostereoscopic image overlay method. This 3D IV image overlay system can provide a larger viewing angle and more comfortable visual perception for multiple observers. The 3D IV image overlay system has been developed and evaluated for neurosurgery, oral and maxillofacial surgery (OMS), spinal surgery, etc. [8–10]. The application of IV technology to clinical processes would help surgeons to perform surgical procedures with improved operation efficiency and a reduced physical burden. To ensure the correct position of the overlaid 3D image relative to the real object, accurate patient–3D image registration and real-time target tracking are necessary. In this chapter, we give an overview of the main 3D image-guided techniques for MIS and discuss applications in planning, diagnosis, and operation.
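The registration and tracking requirements just described can be summarized as a chain of rigid transforms: a tracked tool pose composed with the patient-to-image registration maps tool-space points into the preoperative image. A generic homogeneous-coordinates sketch (the frame names are illustrative, not from this chapter):

```python
import numpy as np

def to_image_space(T_image_patient, T_patient_tool, p_tool):
    """Map a tool-space point into preoperative image space.

    T_image_patient: 4x4 registration transform (patient frame -> image frame).
    T_patient_tool:  4x4 tracked tool pose (tool frame -> patient frame).
    p_tool:          (3,) point in the tool's local frame (e.g., its tip).
    """
    p_h = np.append(p_tool, 1.0)                  # homogeneous coordinates
    return (T_image_patient @ T_patient_tool @ p_h)[:3]
```

In an AR overlay system, the same chain runs in reverse to project image-space anatomy into the display or camera frame for fusion with the live view.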

33.2 Three-dimensional image-guided surgery system

With the assistance of a 3D image-guided system, which provides abundant medical images, surgeons can perform safer and less invasive procedures.

33.2.1 Three-dimensional image acquisition and display

Generally, in current image-guided surgery (IGS), displayed 3D medical images mainly consists of preoperative or intraoperative images, and 3D medical images displayed in 2D plane monitors lack intuitive 3D spatial information resulting in surgeons barely being able to distinguish the spatial relationship among critical targets. Therefore 3D display techniques, which can provide intuitive 3D medical information acquired preoperatively or intraoperatively, are of importance in IGS.

33.2.1.1 Three-dimensional image acquisition

Depending on the acquisition method and clinical demand, 3D medical image acquisition can be divided into preoperative and intraoperative acquisition. The main techniques in preoperative medical image acquisition include computed tomography (CT), magnetic resonance imaging (MRI), and nuclear medicine functional imaging techniques such as positron emission tomography (PET) and single-photon emission CT (SPECT). Cross-sectional images in CT are generated from X-ray measurements taken from different angles and have high contrast resolution. MRI uses strong magnetic fields and field gradients to generate images of organs; it does not involve X-rays and has better soft-tissue resolution. PET and SPECT, the two main functional imaging techniques, are utilized to observe metabolic processes by detecting pairs of gamma rays. After preoperative acquisition, medical image processing, including segmentation and visualization, can be performed for further use in IGS.

During surgery, the patient’s status may change, and intraoperative medical images are vital for diagnosis and operation; thus ultrasound (US), endoscopy, etc. are widely applied. US is a real-time diagnostic imaging technique using high-frequency sound waves, while endoscopy is utilized to examine the interior of a hollow organ or body cavity. Visible-light cameras, such as single or stereo cameras, are also widely applied in surgery to capture and reconstruct the patient’s anatomy. By utilizing monocular cues, such as silhouettes, shading, and texture, 3D images can be reconstructed from images captured by a single camera. Mimicking human binocular vision, a stereo camera acquires images with different parallax from which 3D images can be reconstructed. Depending on the application scope, different intraoperative 3D image acquisition devices and reconstruction methods can be chosen.
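For a rectified stereo camera of the kind mentioned above, depth recovery reduces to similar triangles: Z = fB/d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity of a matched feature between the two views. A minimal sketch with illustrative numbers (not parameters of any particular surgical camera):

```python
def stereo_depth_m(focal_px, baseline_m, disparity_px):
    """Depth Z = f * B / d for a rectified stereo camera pair.

    focal_px:     focal length in pixels.
    baseline_m:   distance between the two camera centers, in meters.
    disparity_px: horizontal offset of the matched feature, in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative values: 1400 px focal length, 5 cm baseline, 70 px disparity.
z = stereo_depth_m(1400.0, 0.05, 70.0)  # -> 1.0 m
```

The hard part in practice is finding the disparity reliably (stereo matching), after which this formula converts each match into a 3D point.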

33.2.1.2 Three-dimensional stereoscopic and autostereoscopic displays

Compared with conventional 2D displays, 3D displays provide several advantages, including more realistic and richer spatial information and better visual perception [11,12]. In IGS especially, 3D displays can provide intuitive 3D spatial medical images that help surgeons operate accurately and safely. 3D displays are mainly divided into stereoscopic displays and autostereoscopic displays [13]. Stereoscopic displays require supplementary instruments, such as dedicated glasses or tracking devices, to display 3D images reconstructed from two parallax images. With these techniques, individuals may easily suffer visual and physical fatigue due to the convergence–accommodation conflict and the extra instruments. Moreover, the observed 3D images are usually inaccurate owing to subjective factors, such as variation in crystalline lens accommodation and interpupillary distance. In contrast, 3D autostereoscopic displays free viewers from supplementary instruments, so that multiple observers can view 3D images with the naked eye as if they existed in space. Parallax barrier-based displays, lenticular-based displays, integral photography (IP)-based displays, and holography are common 3D autostereoscopic displays. Parallax barrier-based and lenticular-based 3D autostereoscopic displays allow each eye to observe a different set of pixels and provide 3D images with lateral parallax [14]. Holography reconstructs 3D images from interference patterns based on the principle of interference [15], but its system configuration is complicated and the reconstructed images lack full color. Among 3D autostereoscopic displays, IV technology offers full parallax, full color, a compact system configuration, and simple implementation (Fig. 33.1) [16,17]. With the development of computer graphics (CG) techniques, the elemental image (EI) array (EIA) in IV can be rendered by simulating light rays on the computer. The rendering records light rays emitted from the simulated 3D image through a virtual microconvex lens array (MLA) onto the EIA. By the reversibility of ray tracing, light rays emitted from the EIA are then modulated by the real MLA to reconstruct the 3D image. Considering the surgical demands of real-time visualization, full-color display, and simple implementation, IP is a promising 3D autostereoscopic display technique in IGS.

FIGURE 33.1 IV-based 3D image rendering and display [8]. 3D, Three-dimensional; IV, integral videography.

33.2.2 Augmented reality based three-dimensional image-guided surgery system

During surgery, surgeons can easily suffer from an eye–hand coordination problem because the 2D navigation display is away from the surgical scene. To solve this problem, AR-based 3D image-guided techniques have been proposed for intuitive IGS.

33.2.2.1 Augmented reality based three-dimensional image-guided techniques

AR-based 3D image-guided systems mainly fall into two categories: binocular-based 3D AR systems and 3D autostereoscopic AR systems [18,19]. With the assistance of supplementary instruments, such as dedicated glasses, a binocular-based 3D AR system provides two images with parallax and reconstructs 3D images simulating binocular vision. Head-mounted displays (HMDs) and AR surgical microscopes are the two main binocular-based 3D AR systems applied in surgery. HMD devices can be worn to see fused information combining the real scene and computer-generated images [5]. The display units in HMDs include half-silvered mirrors, cathode ray tubes, and liquid crystal displays, and a tracking system is utilized to display the correct computer-generated images. The heavy hardware of an HMD is not suitable for flexible movement. An AR surgical microscope is usually utilized in MIS to provide a magnified AR scene; it also provides 3D images reconstructed from two parallax images. The overlaid scene in a binocular-based 3D AR system can be observed by only one surgeon, and observers can suffer visual and physical fatigue when using these techniques. Moreover, the reconstructed 3D image has inaccurate spatial information because it is easily affected by the observer, making it unsuitable for accurate AR in surgery. To provide surgeons an intuitive AR scene with accurate 3D medical images, a 3D autostereoscopic AR system is required. Parallax barrier-based and lenticular-based 3D autostereoscopic AR systems only provide 3D medical images with a single parallax. As a promising 3D autostereoscopic AR system, the IV-based 3D AR technique can present intuitive real-time "see-through" images with full color and full parallax [8]. By looking through a half-silvered mirror, surgeons observe reflected 3D autostereoscopic images overlaid onto the real surgical scene.
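The EIA rendering principle described in Section 33.2.1.2 — recording rays from a virtual 3D object through each microlens center onto the display plane — can be sketched with a simple pinhole-lens model. All dimensions and the function name below are illustrative, not the chapter's actual implementation:

```python
import numpy as np

def render_eia(points, lens_pitch=1.0, n_lens=8, gap=2.0, px_per_lens=16):
    """Point ray-tracing sketch for an elemental image array (EIA).

    Each 3D point (X, Y, Z), Z > 0 in front of the lens array, is projected
    through every microlens center onto the display plane `gap` behind the
    array; a hit is kept only if it lands inside that lens's own elemental
    image, mimicking the optical barrier between lenslets.
    """
    res = n_lens * px_per_lens
    eia = np.zeros((res, res))
    for X, Y, Z in points:
        for i in range(n_lens):          # lens row
            for j in range(n_lens):      # lens column
                cx = (j + 0.5) * lens_pitch   # lens center
                cy = (i + 0.5) * lens_pitch
                # extend the ray through the lens center back to the display
                x = cx - (X - cx) * gap / Z
                y = cy - (Y - cy) * gap / Z
                u = int(x / lens_pitch * px_per_lens)
                v = int(y / lens_pitch * px_per_lens)
                # keep the hit only inside this lens's elemental image
                if j * px_per_lens <= u < (j + 1) * px_per_lens and \
                   i * px_per_lens <= v < (i + 1) * px_per_lens:
                    eia[v, u] = 1.0
    return eia

eia = render_eia([(4.0, 4.0, 20.0)])  # one point near the optical axis
print(eia.sum() > 0)                   # several elemental images record it
```

By the reversibility of ray tracing that the chapter describes, displaying this EIA behind a real MLA with the same pitch and gap reconstructs the point in space.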

33.2.2.2 Three-dimensional integral videography image overlay for image guidance

Among various AR-based 3D image-guided systems, the 3D IV image overlay system can present intuitive "see-through" surgical scenes to solve the eye–hand coordination problem (Fig. 33.2) [8]. Compared with other modes of IGS, the 3D IV image overlay system has distinct merits, such as full color, full parallax, and a compact configuration. It mainly consists of three parts: 3D autostereoscopic display, 3D image overlay, and real-time tracking of targets such as patients and surgical instruments. The 3D autostereoscopic images are rendered and displayed based on the IV technique. The main computer-generation algorithms for the EIA include the point ray-tracing rendering algorithm [7], the real-time pickup method [20], and multiview rendering algorithms [21,22]. Moreover, depending on actual demands, surface rendering-based or volume rendering-based algorithms can be chosen for generating EIAs. In the display process, 2D EIAs are displayed on a high-resolution 2D flat display with an MLA attached. By utilizing a half-silvered mirror, the displayed 3D autostereoscopic images can be merged onto the patient in situ. Moreover, to guarantee the spatial accuracy of the merged 3D autostereoscopic images, accurate patient–3D image registration methods are applied. One acceptable registration method utilizes optical tracking systems (OTSs) and tracking tools with fiducial markers. By detecting the spatial positions of anatomic points in the displayed 3D autostereoscopic images and the corresponding positions of those points on the actual patient, the transformation between the patient and the 3D images is calculated, so the reflected images can be superimposed at the correct position. Surgeons can then observe fused images with accurate spatial position when looking through the half-silvered mirror.
Moreover, the position and orientation of the surgical instruments are tracked by the OTS in real time, so the instrument information in the IGS can be updated to guide surgeons to operate accurately. To apply AR scenes in microsurgery, an enhanced 3D IV image overlay system with high resolution and a large viewing angle was proposed [23,24]. Compared with a conventional 3D IV image overlay system, this system combines the IV technique with an optical see-through microscopic device. It mainly consists of four components: (1) a high-definition IV display with a dedicated optical image enhancement module; (2) a microscopic AR device, which provides a magnified surgical scene; (3) a spatial tracking system for surgical instruments; and (4) a workstation for image analysis and processing.

FIGURE 33.2 3D IV image overlay system [8]. 3D, Three-dimensional; IV, integral videography.

As two vital components of the enhanced 3D IV image overlay system, the dedicated


optical image enhancement module provides autostereoscopic images with better resolution, reaching the submillimeter level, and an enlarged viewing angle, while the optical magnifier module arranged between the surgical scene and the observer provides a magnified surgical scene. With the assistance of the enhanced 3D IV image overlay system, surgeons can observe high-quality 3D images merged onto the magnified surgical scene with the naked eye.
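For orientation, a commonly used first-order estimate in integral imaging relates the viewing angle to the lens pitch p and the lens–EIA gap g as θ ≈ 2·arctan(p / 2g), which makes the resolution/viewing-angle tradeoff the enhanced system addresses concrete. A sketch under that paraxial assumption (values illustrative):

```python
import numpy as np

def iv_viewing_angle_deg(lens_pitch_mm, gap_mm):
    """First-order full viewing angle of an integral-imaging display:
    rays from one elemental image leave its lens within
    theta = 2 * arctan(pitch / (2 * gap))."""
    return np.degrees(2.0 * np.arctan(lens_pitch_mm / (2.0 * gap_mm)))

# A 1 mm pitch lens array 3 mm above its elemental images gives
# roughly a 19-degree viewing cone; halving the gap widens it.
angle = iv_viewing_angle_deg(1.0, 3.0)
print(round(angle, 1))  # 18.9
```

Enlarging the pitch widens the angle but spends more display pixels per lens, lowering lateral resolution — hence the dedicated optical enhancement module described above.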

33.3 Related techniques in three-dimensional image-guided surgery systems

33.3.1 Intraoperative patient three-dimensional image registration

FIGURE 33.3 (A) Contour-based registration [9]; (B) US-assisted registration [10]; (C) CNN-based 2D US and 3D CT registration [29]. 3D, Threedimensional; CNN, Convolutional neural network; CT, computed tomography; US, ultrasound.


To superimpose the spatially projected 3D image onto the patient in situ through a half-silvered mirror, intraoperative patient 3D image registration must first be performed. Registration is critical, and registration accuracy directly influences the accuracy of the intervention in MIS. Registration can be divided into rigid and nonrigid registration according to the geometric transformation. Rigid registration finds the six-degree-of-freedom transformation between corresponding feature points on the source image and the target image, whereas nonrigid registration is a nonlinear transformation. Rigid registration is commonly used in IGS for rigid anatomical structures, as in orthopedics, neurosurgery, and oral and maxillofacial surgery (OMS). Several registration methods have been used. Paired-point registration uses a set of corresponding anatomic or fiducial markers to compute the rigid transformation between the preoperative 3D image and the intraoperative patient [8]. To spare the surgeon from indicating corresponding points manually, surface-based registration matches the intraoperative surface to the preoperative surface. Wang et al. presented an IV-based AR navigation system for OMS (Fig. 33.3A) [9], proposing a real-time contour-matching method for patient–3D image registration: the 3D coordinates of the teeth's sharp edges were captured using a stereo camera and registered with the preoperative 3D contour extracted from CT data. The measured image overlay error of the proposed AR system was 0.71 mm. However, there is no obvious surface contour in spinal surgery, and registration error may also be caused by soft-tissue deformation. Intraoperative US is a noninvasive method for anatomical acquisition inside the body, and Ma et al. proposed a US-assisted registration method using rigid anatomical landmarks inside the body (Fig. 33.3B) [10].
The feasibility of US-assisted registration was evaluated by performing an AR-guided drilling experiment using an agar phantom and a sheep cadaver. Deep learning has also developed rapidly in medical image analysis [25–27]. Chen et al. presented a novel 3D feature-enhanced network for automatic femur segmentation, achieving a high Dice similarity coefficient of 96.88% while taking 0.93 seconds on average to segment a CT volume [28]. Chen et al. also proposed a novel 2D US and 3D CT registration method based on convolutional neural network (CNN) classification of US images (Fig. 33.3C) [29]. Rough image registration is first performed by applying the CNN approach, and local registration refinement is then completed using a new orientation code mutual information metric. In total, 50 registration trials between US images and CT data of multiple vertebrae (L2–L4) were performed, and the experimental results showed a mean target registration error of 2.3 mm, which meets the clinical requirement.
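The paired-point rigid registration described above has a closed-form least-squares solution via SVD (Arun-style), and the target registration error (TRE) reported for such systems measures residual misalignment at points not used for the fit. A self-contained sketch on synthetic fiducials (all data simulated, not from the cited studies):

```python
import numpy as np

def paired_point_registration(src, dst):
    """Closed-form least-squares rigid transform (R, t) with dst ≈ R @ src + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def target_registration_error(R, t, targets, truth):
    """Mean distance (e.g., in mm) between registered targets and ground truth."""
    return np.linalg.norm(targets @ R.T + t - truth, axis=1).mean()

# Synthetic check: recover a known rotation about z plus a translation.
rng = np.random.default_rng(0)
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([10.0, -5.0, 2.0])
fiducials = rng.random((6, 3)) * 100.0
R, t = paired_point_registration(fiducials, fiducials @ R_true.T + t_true)
targets = rng.random((4, 3)) * 100.0
tre = target_registration_error(R, t, targets, targets @ R_true.T + t_true)
print(tre < 1e-6)  # exact recovery on noise-free fiducials
```

With noisy fiducials the fit is still the least-squares optimum, but the TRE grows with fiducial localization error and with distance of the target from the fiducial centroid.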


FIGURE 33.4 (A) Multiple viewpoint-based rendering algorithm for IV-based 3D autostereoscopic displays. (B) Projection geometry of the LBR algorithm [34]. 3D, Three-dimensional; IV, integral videography; LBR, lens-based rendering.

33.3.2 Real-time accurate three-dimensional image rendering

Real-time, accurate 3D image display is in high demand in surgical applications. With accurate 3D images that are updated in real time, surgeons can acquire real-time spatial information to operate accurately. The main cause of inaccurate 3D display is the disparity between the optical apparatus setup and the rendering model. For IV-based 3D autostereoscopic displays utilized in surgery, rendered images are mainly displayed on a flat-panel display with an MLA attached. Two factors are critical for accurate 3D display: the alignment between the EIA and the MLA, and the medium between them. The misalignment between the MLA and EIA is tiny, making the rotational and translational alignment difficult to measure directly; therefore a quantitative and systematic alignment method is proposed, comprising a moiré-fringe calibration method for rotational alignment [30,31] and a calibration method for translational alignment that encodes pixels periodically [32]. Moreover, the thickness and refractive index of the medium greatly affect the direction of ray tracing, which further influences the spatial accuracy of the displayed 3D images. Generally, the actual thickness of the medium, which mainly consists of the apparatus over the EIA in a flat-panel display, is difficult to know in advance, but it can be calculated through a quantitative evaluation method using a dedicated marker. Furthermore, taking the actual optical parameters into account, especially the thickness and refractive index of the medium, an enhanced 3D rendering algorithm can be designed for accurate 3D display [32,33]. In IV-based 3D autostereoscopic displays, super-multiview IV displays attract increasing attention because of benefits such as overcoming the convergence–accommodation conflict (Fig. 33.4A).
Unlike conventional single-viewport rendering, super-multiview IP rendering is time consuming. Conventional IV-based 3D image rendering algorithms consist of two stages: a multiview capture process and an image synthesis process. These processes must exploit new graphics processing unit features and be realized in a programmable pipeline, so older graphics cards cannot support them, and they suffer from low rendering speed. To achieve real-time 3D image rendering, a novel real-time, high-quality 3D autostereoscopic image rendering algorithm, called the lens-based rendering method, was proposed [34]. In this algorithm, high-quality EIs are captured lens by lens, frontally and backward (Fig. 33.4B). The algorithm utilizes both the fixed and the programmable graphics pipeline to accelerate rendering and exploit interperspective antialiasing, displaying 3D images in real time with higher frame rates and better image quality; no pixel resampling or view interpolation occurs to degrade the image quality. Experiments using a Leap Motion and an IV-based 3D autostereoscopic display were carried out to evaluate the real-time performance. The acquisition frame rate of the Leap Motion is about 140 FPS, and the rendering rate of the proposed method reached approximately 50 FPS, whereas that of the conventional multiple-cluster ray rendering algorithm only reached about 10 FPS.
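The medium-thickness correction discussed in the previous paragraph can be illustrated with the paraxial "apparent depth" rule: a slab of thickness t and refractive index n between the EIA and the MLA bends rays as if the gap were t / n. This is a simplified first-order model, not the chapter's full calibration procedure:

```python
import numpy as np

def effective_gap(thickness_mm, refractive_index):
    """Paraxial apparent-depth rule: a medium of thickness t and index n
    between the EIA and the MLA acts like an air gap of t / n."""
    return thickness_mm / refractive_index

def trace_to_lens(x_pixel_mm, lens_center_mm, gap_mm):
    """Tangent of the ray angle from a display pixel through its lens
    center, for a given (effective) gap."""
    return (lens_center_mm - x_pixel_mm) / gap_mm

# Ignoring a 3 mm glass layer (n = 1.5) versus correcting for it:
naive = trace_to_lens(0.2, 0.5, 3.0)
corrected = trace_to_lens(0.2, 0.5, effective_gap(3.0, 1.5))
print(naive, corrected)  # 0.1 vs 0.15: a 50% angular error if uncorrected
```

Rendering the EIA against the effective gap rather than the physical one is what keeps the reconstructed points at their intended spatial positions.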

33.4 Applications

33.4.1 Three-dimensional image-guided planning and operation

3D autostereoscopic images, composed of preoperative and intraoperative medical images, are important for helping surgeons obtain a better understanding of the anatomical structure and make surgical plans. For instance, a 3D interactive surgical visualization system using mobile spatial information acquisition and an IV-based 3D autostereoscopic display was proposed (Fig. 33.5A) [35]. The 3D spatial information of the intraoperative target is captured by


the mobile device and reconstructed using 3D point reconstruction, dimension-reduced triangulation, and projection-weighted mapping algorithms. To fuse the reconstructed intraoperative 3D image with the preoperative medical image, a point cloud selection-based warm-start iterative closest point algorithm was also presented. The fused 3D medical images are displayed intuitively based on IV techniques for observation, and by adjusting the acquisition viewpoint of the mobile spatial information acquisition device, the fused 3D medical images corresponding to the required viewpoint can be displayed. The proposed system can be utilized widely, especially for telemedicine, surgical education, surgical planning, and navigation. Combined with AR techniques, an AR-based 3D IGS system can resolve the eye–hand coordination problem and provide intuitive see-through scenes for operation. The 3D IV image overlay has been applied in neurosurgery, orthopedic surgery, etc. Liao et al. developed a 3D IV image overlay system, especially for open MRI-guided surgery [21]. With the proposed system and its tracking and registration method, the 3D autostereoscopic images were superimposed accurately onto the patient, and surgical instruments could be observed directly and intuitively when inserted into organs. Phantom and animal experiments evaluating the system showed that it enables highly accurate registration and has practical value in neurosurgery and other medical fields. Ma et al. applied the 3D IV image overlay system in spine surgery (Fig. 33.5B) [10]. To handle the deformation of soft tissue, a US-assisted registration method was proposed, and the feasibility of the system was again evaluated by phantom and animal experiments. Considering the limited spatial resolution of conventional 3D IV image overlay and the surgical scene in microsurgery, Zhang et al. [24] proposed an enhanced 3D IV image overlay system using a dedicated optical module. Compared with a conventional 3D IV image overlay system, a high-quality 3D autostereoscopic image was merged onto the actual surgical scene, which was magnified for accurate operation. Experimental results showed that the resolution and viewing angle can be adjusted flexibly by changing the optical parameters.

FIGURE 33.5 (A) Fused medical images for surgical planning [35]. (B) 3D IV image overlay result in spine surgery [10]. 3D, Three-dimensional; IV, integral videography.

33.4.2 Robot-assisted operation

Image-guided techniques can also be applied in dedicated robotic systems to provide a stable operation platform and enhance the capabilities of surgeons performing accurate surgery. Liao et al. described an image-guided system that integrates endoscopic image mosaics with 3D ultrasound images, especially for assisting intrauterine laser photocoagulation treatment (Fig. 33.6A) [36]. In conventional endoscopic laser photocoagulation treatment for twins, surgeons have difficulty identifying vessel anastomoses due to the limited view and shortage of surrounding information. By fusing endoscopic image mosaics with a 3D ultrasound-image model, the proposed visualization system improves the efficiency of planning and guidance. Moreover, existing surgical procedures often lack effective assistive devices or manipulators for operating accurately, safely, and reliably, so structural and functional design for robot-assisted operation is crucial. A novel manipulator was proposed for stabilizing the fetus and preventing it from free-floating in endoscopic intrauterine surgery (Fig. 33.6B) [37]. The fetus-supporting manipulator consists of a flexible bending and curving mechanism to reach the target, as well as a soft balloon-based stabilizer to guarantee that the manipulator can be inserted into the uterus through a small incision. Under ultrasound guidance, the curving mechanism with the balloon-based stabilizer can be operated accurately during surgery.
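The warm-start ICP fusion mentioned in Section 33.4.1 alternates two steps — match each point to its current nearest neighbor, then solve the closed-form rigid fit on the matches — starting from an initial pose guess rather than identity. A brute-force sketch (illustrative only; O(N²) matching, synthetic data):

```python
import numpy as np

def rigid_fit(src, dst):
    """Closed-form least-squares rigid transform with dst ≈ R @ src + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, R0=None, t0=None, iters=20):
    """Point-to-point ICP; (R0, t0) is the warm-start pose estimate."""
    R = np.eye(3) if R0 is None else R0
    t = np.zeros(3) if t0 is None else t0
    for _ in range(iters):
        moved = src @ R.T + t
        # brute-force nearest neighbor in dst for every transformed src point
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[d2.argmin(axis=1)]
        R, t = rigid_fit(src, matched)
    return R, t

# Demo: a small known shift of a clean cloud is recovered exactly.
rng = np.random.default_rng(1)
cloud = rng.random((30, 3)) * 100.0
t_true = np.array([0.5, -0.3, 0.2])
R, t = icp(cloud, cloud + t_true)
print(np.allclose(R, np.eye(3), atol=1e-6), np.allclose(t, t_true, atol=1e-6))
```

ICP only converges to the nearest local optimum, which is exactly why a warm start (e.g., from a coarse selection-based alignment, as in the cited system) matters for intraoperative point clouds.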


FIGURE 33.6 (A) Endoscope image mapped on the 3D ultrasound model [36]. (B) Linkage bending mechanism, curving mechanism, and balloon-based stabilizer [37]. 3D, Three-dimensional.

FIGURE 33.7 Tumor detection and precise automatic ablation system [38].

33.4.3 Integration of diagnosis and treatment in minimally invasive surgery

The integration of diagnostic and therapeutic techniques is important for avoiding inaccurate preoperative diagnosis and simplifying the entire surgical process, and image-guided operation provides intuitive image guidance within such integration. For instance, during neurosurgical treatment, brain tissue is deformed by cerebrospinal fluid leakage and surgical interventions, a phenomenon called "brain shift"; thus an integration of lesion diagnosis and treatment with an accurate image guidance system is urgently needed. Liao et al. developed an integrated diagnosis and therapeutic system, especially for precision neurosurgery (Fig. 33.7) [38]. The proposed system combines tumor detection and precise automatic ablation. The tumor is detected using 5-aminolevulinic acid-induced protoporphyrin IX fluorescence, which accumulates in pathological lesions and emits red fluorescence under blue light illumination. The robotic laser ablation device consists of a microlaser and an automatic focusing and robotic mechanism. Using the detected fluorescence information, the microlaser is moved automatically to ablate the target area. Experimental results showed that the proposed system can identify the lesion with fluorescence, and the lesion can be ablated automatically by the robotic laser ablation device under the guidance of the navigation system. Using high-precision spectral analysis, the accuracy of the fluorescence measurement of the tumor is improved; moreover, the integration of fluorescence-based lesion diagnosis and ablation treatment improves the efficiency of tumor removal. Combining spectral-domain optical coherence tomography with laser ablation can further enhance the diagnosis and treatment of diseased lesions [39,40].
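The detect-then-ablate loop can be caricatured as: segment the lesion by thresholding the red-fluorescence channel, then steer the laser toward the segmented region (here, simply its centroid). This is a hypothetical illustration of the principle, not the cited system's actual spectral-analysis pipeline:

```python
import numpy as np

def ablation_target(red_channel, threshold=0.5):
    """Hypothetical targeting step: threshold the red-fluorescence image,
    return the lesion mask and its centroid (row, col) for steering the
    laser; centroid is None if nothing fluoresces above threshold."""
    mask = red_channel > threshold
    if not mask.any():
        return mask, None
    rows, cols = np.nonzero(mask)
    return mask, (rows.mean(), cols.mean())

img = np.zeros((64, 64))
img[20:30, 40:50] = 0.9            # simulated fluorescent lesion patch
mask, center = ablation_target(img)
print(center)  # (24.5, 44.5)
```

A real system would scan the laser over the whole segmented area rather than a single centroid, and would verify residual fluorescence after each ablation pass.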

33.5 Discussion and conclusion

3D IGS plays a vital role in MIS by providing intuitive 3D autostereoscopic medical images and a see-through surgical scene. According to demand, preoperative and intraoperative medical images can be acquired by different acquisition devices. 3D autostereoscopic displays are helpful in IGS because they provide 3D spatial images without supplementary instruments and free individuals from physical and visual fatigue. As a promising method, the IV-based 3D autostereoscopic display can be widely applied owing to its merits of full parallax, full color, compact system configuration, and simple implementation. With the development of CG, various rendering algorithms have been proposed to render the EIA. To improve eye–hand coordination, an AR-based 3D image-guided system can provide fused images; with the guidance of the 3D IV image overlay, surgeons can observe a merged surgical scene in which the invasiveness of surgery is reduced and surgical accuracy improved. Research on displaying 3D images with high image quality, and on further medical applications, is still ongoing. The main techniques in 3D image-guided systems include patient image registration using dedicated tracking tools [41] and spatial tracking of patients [42], as well as rendering accurate 3D images in real time and providing an accurate AR interface. 3D image-guided techniques can also be applied in dedicated robotic systems, such as endoscopes and manipulators, to provide a stable operation platform and enhance the capabilities of surgeons to perform accurate surgery; they are also useful for the integration of diagnosis and treatment in MIS. For further application in surgery, the image quality of the IV-based 3D autostereoscopic display, such as the spatial resolution and displayed depth, needs to be improved through novel optical apparatus setups and 3D image rendering algorithms, and more experiments and evaluations of 3D image-guided systems should be performed to establish clinical efficiency and usability.
In conclusion, the 3D image-guided system benefits MIS through less operative pain, fewer complications, smaller incisions, and faster recovery times. With the assistance of 3D image-guided techniques, MIS with enhanced safety, efficiency, and accuracy can be widely adopted to improve quality of life.

References


[1] Marescaux J, Diana M. Next step in minimally invasive surgery: hybrid image-guided surgery. J Pediatr Surg 2015;50(1):30–6.
[2] Tang R, Ma L, Rong Z, Li M, Zeng J, Wang X, et al. Augmented reality technology for preoperative planning and intraoperative navigation during hepatobiliary surgery: a review of current methods. Hepatobiliary Pancreat Dis Int 2018;17:101–12.
[3] Diana M, Marescaux J. Robotic surgery. Br J Surg 2015;102(2):e15–28.
[4] Tang R, Ma L, Xiang C, Wang X, Li A, Liao H, et al. Augmented reality navigation in open surgery for hilar cholangiocarcinoma resection with hemihepatectomy using video-based in situ three-dimensional anatomical modeling: a case report. Medicine 2017;96(37):e8083.
[5] Cakmakci O, Rolland J. Head-worn displays: a review. J Disp Technol 2006;2(3):199–216.
[6] Wen R, Tay WL, Nguyen BP. Hand gesture guided robot-assisted surgery based on a direct augmented reality interface. Comput Methods Programs Biomed 2014;116(2):68–80.
[7] Liao H, Hata N, Nakajima S, Iwahara M, Sakuma I, Dohi T. Surgical navigation by autostereoscopic image overlay of integral videography. IEEE Trans Inf Technol Biomed 2004;8(2):114–21.
[8] Liao H, Inomata T, Sakuma I, Dohi T. 3-D augmented reality for MRI-guided surgery using integral videography autostereoscopic image overlay. IEEE Trans Biomed Eng 2010;57(6):1476–86.
[9] Wang J, Suenaga H, Hoshi K, Yang L, Kobayashi E, Sakuma I, et al. Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery. IEEE Trans Biomed Eng 2014;61(4):1295–304.
[10] Ma L, Zhao Z, Chen F, Zhang B, Fu L, Liao H. Augmented reality surgical navigation with ultrasound-assisted registration for pedicle screw placement: a pilot study. Int J Comput Assist Radiol Surg 2017;12(12):2205–15.
[11] Javidi B, Okano F, Son JY. Three-dimensional imaging, visualization, and display. Springer; 2009.
[12] Okoshi T. Three-dimensional imaging techniques. Elsevier; 2012.
[13] Sexton I, Surman P. Stereoscopic and autostereoscopic display systems. IEEE Signal Proc Mag 1999;16(3):85–99.
[14] Ives HE. Camera for making parallax panoramagrams. U.S. patent 2039648; 1936.
[15] Yamaguchi I, Zhang T. Phase-shifting digital holography. Opt Lett 1997;22(16):1268.
[16] Lippmann MG. Épreuves réversibles donnant la sensation du relief. J Phys 1908;7:821–5.
[17] Stern A, Javidi B. Three-dimensional image sensing, visualization, and processing using integral imaging. Proc IEEE 2006;94(3):591–607.
[18] Sauer F, Vogt S, Khamene A. Augmented reality. In: Image-guided interventions. Springer US; 2008. p. 81–119.
[19] Lamata P, Ali W, Cano A, Cornella J, Declerck J, Elle OJ, et al. Augmented reality for minimally invasive surgery: overview and some recent advances. Intech; 2010.
[20] Okano F, Hoshino H, Arai J, Yuyama I. Real-time pickup method for a three-dimensional image based on integral photography. Appl Opt 1997;36(7):1598–603.
[21] Liao H, Nomura K, Dohi T. Long visualization depth autostereoscopic display using light field rendering based integral videography. In: IEEE virtual reality conference; 2006. p. 314.
[22] Liao H, Dohi T, Nomura K. Autostereoscopic 3D display with long visualization depth using referential viewing area-based integral photography. IEEE Trans Vis Comput Graph 2011;17(11):1690–701.
[23] Zhang X, Chen G, Liao H. A high-accuracy surgical augmented reality system using enhanced integral videography image overlay. In: Conf Proc IEEE Eng Med Biol Soc; 2015. p. 4210.
[24] Zhang X, Chen G, Liao H. High quality see-through surgical guidance system using enhanced 3D autostereoscopic augmented reality. IEEE Trans Biomed Eng 2017;64(8):1815–25.
[25] Litjens G, Kooi T, Bejnordi BE, Setio A, Ciompi F, Ghafoorian M, et al. A survey on deep learning in medical image analysis. Med Image Anal 2017;42(9):60–88.
[26] Goceri E, Goceri N. Deep learning in medical image analysis: recent advances and future trends. In: Int. conferences computer graphics, visualization, computer vision and image processing; 2017. p. 305–11.
[27] Milletari F, Navab N, Ahmadi SA. V-Net: fully convolutional neural networks for volumetric medical image segmentation. In: Fourth international conference on 3D vision; 2016. p. 565–71.
[28] Chen F, Liu J, Zhao Z, Zhu M, Liao H. 3D feature-enhanced network for automatic femur segmentation. IEEE Trans Inf Technol Biomed 2019;23(1):243–52.
[29] Chen F, Wu D, Liao H. Registration of CT and ultrasound images of the spine with neural network and orientation code mutual information. In: International conference on medical imaging and virtual reality; 2016. p. 292–301.
[30] Hutley MC, Hunt R, Stevens RF, Savander P. The moiré magnifier. Pure Appl Opt 1994;3(2):133.
[31] Hirsch M, Lanman D, Wetzstein G, Raskar R. Construction and calibration of optically efficient LCD-based multi-layer light field displays. J Phys Conf Ser 2013;415(1):2071.
[32] Fan Z, Zhang S, Weng Y, Chen G, Liao H. 3D quantitative evaluation system for autostereoscopic display. J Disp Technol 2016;12(10):1185–96.
[33] Fan Z, Chen G, Xia Y, Huang T, Liao H. Accurate 3D autostereoscopic display using optimized parameters through quantitative calibration. J Opt Soc Am A 2017;34(5):804.
[34] Chen G, Ma C, Fan Z, Cui X, Liao H. Real-time lens based rendering algorithm for super-multiview integral photography without image resampling. IEEE Trans Vis Comput Graph 2018;24(9):2600–9.
[35] Fan Z, Weng Y, Chen G, Liao H. 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display. J Biomed Inform 2017;71:154–64.
[36] Liao H, Tsuzuki M, Mochizuki T, Kobayashi E, Chiba T, Sakuma I. Fast image mapping of endoscopic image mosaics with three-dimensional ultrasound image for intrauterine fetal surgery. Minim Invasiv Ther 2009;18(6):332–40.
[37] Liao H, Suzuki H, Matsumiya K, Masamune K, Dohi T, Chiba T. Fetus support manipulator with flexible balloon-based stabilizer for endoscopic intrauterine surgery. In: International conference on medical image computing & computer-assisted intervention, vol. 9; 2006. p. 412–9.
[38] Liao H, Noguchi M, Maruyama T, Muragaki Y, Kobayashi E, Iseki H, et al. An integrated diagnosis and therapeutic system using intraoperative 5-aminolevulinic-acid-induced fluorescence guided robotic laser ablation for precision neurosurgery. Med Image Anal 2012;16(3):754–66.
[39] Fan Y, Zhang B, Chang W, Zhang X, Liao H. A novel integration of spectral-domain optical-coherence-tomography and laser-ablation system for precision treatment. Int J Comput Assist Radiol Surg 2017;5:1–13.
[40] Fan Y, Xia Y, Zhang X, Sun Y, Tang J, Zhang L, et al. Optical coherence tomography for precision brain imaging, neurosurgical guidance and theranostics: a review. Biosci Trends 2018;12(1):12–23.
[41] Fan Z, Chen G, Wang J, Liao H. Spatial position measurement system for surgical navigation using 3-D image marker-based tracking tools with compact volume. IEEE Trans Biomed Eng 2018;65(2):378–89.
[42] Ma C, Chen G, Liao H. Automatic fast-registration surgical navigation system using depth camera and integral videography 3D image overlay. In: International conference on medical imaging and virtual reality; 2016. p. 392–403.

34 G

Prospective Techniques for Magnetic Resonance Imaging Guided Robot-Assisted Stereotactic Neurosurgery

Ziyan Guo1, Martin Chun-Wing Leong1, Hao Su2, Ka-Wai Kwok1, Danny Tat-Ming Chan3 and Wai-Sang Poon3

1 The University of Hong Kong, Hong Kong
2 City University of New York, New York City, NY, United States
3 The Chinese University of Hong Kong, Hong Kong

ABSTRACT
Stereotactic neurosurgery involves techniques that locate targets within the brain using an external coordinate system. With the advancement of magnetic resonance imaging (MRI), numerous studies on frameless stereotaxy and MRI-guided/verified techniques have been reported to improve the workflow and surgical outcomes. Intraoperative (intraop) MRI guidance in frameless techniques is an appealing approach that can simplify the workflow by reducing coregistration errors across imaging modalities and by monitoring surgical progress. Manually operated platforms have thus emerged for MRI-guided frameless procedures. However, these procedures can still be complicated and time-consuming because of their intensive manual operation. To further simplify the procedure and enhance accuracy, robotics has been introduced. In this chapter, we review state-of-the-art intraop MRI-guided robotic platforms for stereotactic neurosurgery. To improve surgical workflow and achieve greater clinical penetration, three key enabling techniques are discussed with emphasis on their current status, limitations, and future trends.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00034-7
© 2020 Elsevier Inc. All rights reserved.

34.1 Background

Stereotactic neurosurgery requires locating targets of interest within the brain using an external coordinate system [1]. Stereotactic approaches have been adopted in a wide variety of procedures, such as biopsy, ablation, catheter placement, stereo-electroencephalography, and deep brain stimulation (DBS) [2–4]. The current stereotaxy workflow incorporates three key stages: (1) preoperative (preop) planning, which provides a roadmap to the interventionists prior to the operation; (2) immediate planning, which involves registering the three-dimensional (3D) coordinates of a stereotactic frame onto the preop image; and (3) intraoperative (intraop) refinement, which involves setting up the system for intervention.

Preop planning involves high-resolution tomography, such as computed tomography (CT) and magnetic resonance (MR) imaging (MRI). These imaging modalities offer the image accuracy crucial for precise target/lesion localization. In particular, MRI (e.g., gadolinium-enhanced MR images) is advantageous for visualizing deep brain structures in the treatment of functional disorders. Special MRI sequences can also be used to pinpoint the target/lesion location on the 3D roadmap. After locating the required regions of interest, immediate planning involves realignment of the frame through image fusion and registration. Such realignment is crucial to obtain a consistent coordinate system between the frame and the preop roadmap. Finally, intraop refinement involves essential procedures such as the creation of a burr hole for dural puncture. For DBS, microelectrode recording (MER) and macrostimulation are also involved in this stage for physiological validation.

Instrument manipulation for stereotactic neurosurgery remains a major challenge, even though the standard workflow has been established for decades. Satisfying the demanding precision requirements while minimizing invasiveness is the key to a successful operation.
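At its core, the frame-to-image registration in stage (2) is a paired-point rigid registration problem. As a rough illustration (a minimal sketch, not the implementation of any commercial navigation system), the rotation and translation can be recovered from corresponding fiducial positions with the SVD-based Kabsch method:

```python
import numpy as np

def register_rigid(frame_pts, image_pts):
    """Paired-point rigid registration (Kabsch/SVD).

    Finds rotation R and translation t mapping frame coordinates to
    image coordinates in a least-squares sense. Both inputs are (N, 3)
    arrays of corresponding fiducial positions.
    """
    fc, ic = frame_pts.mean(axis=0), image_pts.mean(axis=0)
    H = (frame_pts - fc).T @ (image_pts - ic)  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper rotation (reflection) if the determinant is -1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = ic - R @ fc
    return R, t

def fiducial_registration_error(R, t, frame_pts, image_pts):
    """RMS residual distance of the fiducials after registration (FRE)."""
    mapped = frame_pts @ R.T + t
    return float(np.sqrt(np.mean(np.sum((mapped - image_pts) ** 2, axis=1))))
```

The residual FRE is commonly reported as a sanity check, although it is only an indirect indicator of the error at the actual surgical target.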
Imprecise positioning of instruments results in trajectory deviation and targeting error, which significantly increases the risk of hemorrhage. Although image fusion (registration) is performed at the immediate planning stage, it cannot compensate for the dynamically changing conditions during surgery. In particular, the unavoidable brain shift/deformation after craniotomy can alter the position of the critical/target regions of the brain. Many other factors can also cause brain shift, such as instrument manipulation, anesthesia, changes in intracranial pressure, postural/gravitational forces, tissue removal, and pharmaceutical effects. Given the multiplexed causes of brain shift, relying solely on the preop images as a roadmap is undesirable; continuous updates are required. The incorporation of advanced real-time visualization is therefore crucial for precise instrument manipulation and brain shift compensation.

Advances in intraop imaging techniques, particularly intraop MRI, simplify the perplexing workflow of stereotactic neurosurgery. MRI possesses several advantages over other modalities (e.g., CT or ultrasound) thanks to its high sensitivity to intracranial physiological/pathological changes and its capability of visualizing soft tissues in high contrast without radiation. To date, MR images can be acquired swiftly through advanced radiofrequency excitation sequences (e.g., the fast imaging with low angle shot sequence can achieve a temporal resolution of 20–30 ms [5]). These imaging techniques, which permit real-time guidance on soft-tissue deformation, are supported by many current MRI facilities. With increasing real-time MRI availability, there is sufficient support for MRI-guided robots to find their way into more complex surgical procedures. These MRI robots are capable of delivering more precise treatment through accurate image guidance; device implantation and tissue ablation are timely examples.
In this review, we discuss state-of-the-art apparatus and MR-safe/conditional robots for stereotactic neurosurgery, as well as the key enabling techniques, with emphasis on their current status, limitations, and future trends.

34.2 Clinical motivations for magnetic resonance imaging guided robotic stereotaxy

Computer-aided navigation systems have enabled intraop guidance based on preop images since the 1990s (Fig. 34.1) [6]. The advancement of intraop navigation techniques enables frameless stereotaxy, which utilizes fiducial landmarks to replace the rigid frame for registration and transformation of the frame-of-reference. With the fiducial markers/contours providing real-time positional information on the imaged brain and the surgical instrument, the accuracy [7–10], diagnostic yield, morbidity, and mortality rate [11] of frameless stereotaxy are now comparable to those of its frame-based counterpart. In addition, frameless stereotactic neurosurgery is associated with reduced anesthetic time and fewer complications [12]. Although frameless instrument guidance has been achieved, intraop continual visualization of the surgical process remains a challenge [13–15]. Conventional DBS, for example, utilizes MER and fluoroscopy/CT images concurrently to confirm the placement location of the electrodes. The patient, however, is required to stay awake


FIGURE 34.1 Key milestones of stereotactic devices for image-guided neurosurgery.

throughout the surgery under local anesthesia so that the interventionists can assess the corresponding symptoms [16]. Coregistering fluoroscopy images with the preop roadmap is susceptible to registration errors. In this light, intraop MRI is the preferred imaging modality thanks to its sensitivity to intracranial pathology and its high-contrast soft-tissue images. The resultant 3D images can provide the surgical navigation system with clear visualization, allowing precise guidance of the instrument to the target tissue in real time. By incorporating MRI guidance into the frameless stereotaxy technique, the multiplexed workflow of DBS can be further streamlined by conducting general anesthesia and verification in situ with MR images [17]. The patient need not stay awake to respond to the interventionists, and the instrument position can be pinpointed throughout the surgical process [18].

Manually operated stereotactic platforms have been developed for MRI-guided neurosurgeries, such as the NexFrame (Medtronic, Inc., United States) and the SmartFrame (ClearPoint, MRI Interventions, Inc., United States) [24] systems. Notably, the ClearPoint system (Fig. 34.2A) has been deployed for several therapeutic approaches including electrode placement [25], focal ablation [26], and direct drug delivery [27]. These MR-safe/conditional platforms have been validated through a number of clinical trials [16,28,29]. In particular, a clinical study on frameless DBS approaches involving 27 patients with movement disorders [17] clearly indicated that frameless DBS under MRI guidance can reduce procedural time without sacrificing surgical accuracy. However, patients have to be moved in and out of the scanner's isocenter for imaging updates and manual instrument adjustment. Such a requirement not only increases the operation time, but also demands advanced peripherals such as a compatible anesthesia system. These challenges have directed increasing attention toward developing intraop manipulators and further translating robotics technology into neurosurgery. Robots can be superior to humans in certain areas, especially for intensive, tedious tasks that demand high precision. A compact robot capable of operating inside the confined MRI bore also mitigates the disruptive requirement of frequent patient transfer to/from the isocenter. The increasing demand for MRI-guided robotic platforms for stereotactic neurosurgeries can also be inferred from the rising number of recent reports [30–32], in which the clinical benefits of such platforms are extensively discussed.


34.3 Significant platforms for magnetic resonance imaging guided stereotactic neurosurgery


FIGURE 34.2 Significant MRI-guided stereotactic neurosurgical systems. (A) ClearPoint system by MRI Interventions, Inc., United States. Two frames (SmartFrame) are mounted to the skull bilaterally and manually aligned to the predefined trajectories [18,19]; (B) MRI-compatible surgical assist robot by AIST-MITI, Japan and BWH, Harvard Medical School, United States [20]; (C) NeuroArm/SYMBIS by Deerfield Imaging, United States [21]; (D) NeuroBlate system by Monteris Medical, Inc., United States [22]; (E) an MRI-guided stereotactic robot for deep brain stimulation, developed by Worcester Polytechnic Institute, United States [23]. MRI, Magnetic resonance imaging.

Fig. 34.2B illustrates the early models of MR conditional robots, such as those presented by Masamune et al. [33] and Chinzei et al. [34]. These early robot prototypes have a large footprint in the operating room, and are mostly based on low-field, open-bore interventional MRI (iMRI) scanners (e.g., Signa SP 0.5T, GE Medical Systems, United States). The robot reported by Chinzei et al. [34,35] is the first robotic platform integrated with an optically linked frameless stereotactic system. These early robot models have a few key disadvantages. Images obtained from specialized iMRI scanners often have impaired quality because the scanners have a low base magnetic field, and any metal-containing robot components can further degrade the already suboptimal image quality. Furthermore, most of these early robotic systems lack tele-operating capabilities, and any manual operation of the system may be hindered by the confined iMRI workspace.

Fig. 34.2C illustrates NeuroArm/SYMBIS (Deerfield Imaging, United States), an MR-compatible robotic system for tele-operative microsurgery and stereotactic brain biopsy [36,37]. It consists of two 7 + 1 degrees of freedom (DoFs) manipulators and is able to operate with a maximum load of 0.5 kg. It also features a moderate force output and movement speed of 10 N and 0.5–5 mm/s, respectively [38]. These manipulators are semiactively controlled from a remote workstation integrated with a hand-tremor filter and movement scaling. Stereotaxy can be conducted within the magnet bore using a single MRI-compatible robotic arm. This robot arm is fabricated with MR-compatible materials such as titanium, polyetheretherketone, and polyoxymethylene [21]. To provide a constant frame-of-reference for effective robot control, the arm is directly attached to the magnet bore.

Fig. 34.2D shows the Monteris stereotactic platform, which is capable of tele-operating a two-DoF robotic device for laser ablation.
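As a loose illustration of how tremor filtering and motion scaling can work in principle (the actual control algorithms of systems such as NeuroArm are not public at this level of detail, so the filter choice and parameters below are assumptions), a master hand trajectory can be low-pass filtered with an exponential moving average and its displacements scaled down before being commanded to the slave manipulator:

```python
import numpy as np

def filter_and_scale(master_positions, alpha=0.2, scale=0.5):
    """Exponential moving-average tremor filter plus motion scaling.

    master_positions: (N, 3) array of hand positions sampled over time.
    alpha: EMA smoothing factor (smaller -> stronger tremor suppression).
    scale: scaling factor applied to displacements from the first sample.
    Returns the commanded slave displacements relative to the start pose.
    """
    m = np.asarray(master_positions, dtype=float)
    filtered = np.empty_like(m)
    filtered[0] = m[0]
    for k in range(1, len(m)):
        # Low-pass filter: blend new sample with the previous estimate
        filtered[k] = alpha * m[k] + (1 - alpha) * filtered[k - 1]
    return scale * (filtered - filtered[0])
```

With alpha = 0.2, high-frequency tremor is attenuated to roughly a third of its amplitude, at the cost of a small lag behind intentional motion.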
The NeuroBlate laser probe is oriented by a separate, disposable MRI-compatible stereotactic frame (AXiiiS stereotactic miniframe) that is also visible on the MR images. This stereotactic frame consists of three translatable legs and a ball socket, allowing the instrument to engage the treatment target from any angle. The laser fiber for ablation is oriented and driven by piezoelectric motors. Utilizing real-time MRI and thermometry data, the surgeon can monitor and update the probe position and ablation profile accordingly [39]. However, if multiple ablations are required, the patient may have to be transferred back to the operating theater for probe removal, frame realignment, and possibly a new craniotomy.

Finally, Fig. 34.2E illustrates a recent research prototype developed by Fischer et al. [23,40], designed specifically to place DBS leads under MRI guidance. The system features six DoFs driven by piezoelectric motors and mimics the functionality and kinematic structure of a conventional stereotactic frame (e.g., the Leksell frame). It has been demonstrated that simultaneous robotic manipulation and imaging do not affect the usability of the images for visualization and navigation. The robot could reach targets in a static phantom model with an accuracy of 1.37 ± 0.06 mm in tip position and 0.79 ± 0.41 degrees in insertion angle [40].
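Accuracy figures of this kind are typically obtained by comparing the planned and measured needle poses. A minimal sketch of the two error metrics (assumed definitions: Euclidean distance between tip positions, and the angle between the trajectory direction vectors):

```python
import numpy as np

def targeting_errors(planned_tip, planned_dir, actual_tip, actual_dir):
    """Tip-position error (mm) and insertion-angle error (degrees)
    between a planned and an actual needle trajectory."""
    tip_err = float(np.linalg.norm(np.asarray(actual_tip, dtype=float)
                                   - np.asarray(planned_tip, dtype=float)))
    u = np.asarray(planned_dir, dtype=float)
    v = np.asarray(actual_dir, dtype=float)
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    # Clip guards against round-off pushing the dot product outside [-1, 1]
    ang_err = float(np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))))
    return tip_err, ang_err
```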

34.4 Key enabling technologies for magnetic resonance imaging guided robotic systems

The goal of MRI-guided stereotactic neurosurgical platforms is to achieve higher accuracy and effectiveness with an optimized surgical workflow. Although many MRI-guided robotic systems have been developed (as listed in Table 34.1), only a few of them are available on the market, and achieving widespread clinical use remains an ambitious objective. Furthermore, adopting MRI-guided robotic systems introduces additional costs. In particular, the expense of occupying the MRI suite for a prolonged time can be substantial, let alone that of the MRI-compatible instruments [12]. These associated costs can be detrimental to the application of MRI-compatible robotics in health care [58]. Streamlining the surgical workflow may be the way to enable widespread application. Here we propose three key enabling technologies for high-performance intraop MRI-guided robotic platforms, which would simplify the workflow (as illustrated in Fig. 34.3) and potentially reduce surgical costs.

TABLE 34.1 Existing robotic systems for magnetic resonance (MR) imaging (MRI)-guided neurosurgery.

Emerging platforms | Degrees of freedom | Number of end effectors | Actuator (a) | Accuracy | HMI | Features | Key references
NeuroArm/SYMBIS (Deerfield Imaging, United States) | 7 + 1 | 2 | E | Submillimeter | O | Tele-operated microsurgery and stereotaxy; only one manipulator can fit into the magnet bore; haptic feedback; 3D image reconstruction for navigation. Phase: FDA approved, commercial | Sutherland et al. [41]; Louw et al. [42]; Motkoski et al. [21]
NeuroBlate (Monteris Medical, Inc., United States) | 2 | 1 | E | 1.57 ± 0.21 mm | O | Laser ablation; patient under general anesthesia; continuous MR thermography acquisition. Phase: FDA approved, commercial | Mohammadi et al. [44]; Manijila et al. [43]
Pneumatic MRI-compatible needle driver (Vanderbilt University, United States) | 2 | 1 | P | 1.11 mm | – | Transforamenal ablation; precurved concentric tube; 3T closed-bore MRI scanner. Phase: clinical trial | Comber et al. [45,46]
MRI-guided surgical manipulator (AIST-MITI, Japan and BWH, Harvard University, United States) | 5 | 1 | E | 0.17 mm/0.17 degree | – | Navigation and axisymmetric tool placement; 0.5T open MRI scanner; pointing device only. Phase: in vivo test with a swine brain | Chinzei et al. [47]; Koseki et al. [48]
MRI-compatible stereotactic neurosurgery robot (Worcester Polytechnic Institute, United States) | 7 | 1 | E | 1.37 ± 0.06 mm | – | Needle-based neural interventions; mounted at the MRI table; SNR reduction in imaging less than 10.3%. Phase: research prototype | Li et al. [23]; Nycz et al. [40]
Mesoscale neurosurgery robot (Georgia Institute of Technology, United States) | (b) | 1 | (c) | About 1 mm | – | Tumor resection, hemorrhage evacuation; skull-mounted. Phase: research prototype | Ho et al. [49]; Kim et al. [50]; Cheng et al. [51]
MR safe bilateral stereotactic robot (The University of Hong Kong, Hong Kong) | 8 | 2 | H | 1.73 ± 0.75 mm | – | Bilateral stereotactic neurosurgery; skull-mounted; MR safe/induces minimal imaging interference. Phase: research prototype | Guo et al. [52]
Multi-imager-compatible needle-guide robot (Johns Hopkins University, United States) | 3 | 1 | P | 1.55 ± 0.81 mm | – | General needle-based interventions; table-mounted; iMRIS. Phase: research prototype | Jun et al. [53]
MRI-compatible needle insertion manipulator (University of Tokyo, Japan) | 6 | 1 | E | 3.0 mm | – | Needle placement; 0.5T MRI scanner. Phase: research prototype | Masamune et al. [33]; Miyata et al. [54]
Endoscope manipulator (AIST, Japan) | 4 | 1 | E | About 0.12 mm/0.04 degree | – | Endoscope manipulation for transnasal neurosurgery; vertical field open MRI; large imaging noise caused by ultrasonic motors. Phase: research prototype | Koseki et al. [55]
Tele-robotic system for MRI-guided neurosurgery (California State University, United States and University of Toronto, Canada) | 7 | 1 | P/H | – | O | Brain biopsy; 1.5T MRI scanner; mounted at the surgical table. Phase: research prototype | Raoufi et al. [56]
Open-MRI compatible robot (Beihang University, China) | 5 | 1 | E | – | – | Biopsy and brachytherapy; 0.3T iMRIS. Phase: research prototype | Hong et al. [57]

3D, Three-dimensional; FDA, Food and Drug Administration; HMI, human–machine interface; iMRIS, intraoperative MRI scanner.
(a) Actuator: E, nonmagnetic electric actuator, such as piezoelectric motor or ultrasonic motor; P, pneumatic actuator; H, hydraulic actuator.
(b) A flexible continuum robot, of which the degrees of freedom depend on the number of segments.
(c) Shape memory alloy spring-based actuators remotely driving the manipulator via pulling tendons.

FIGURE 34.3 Workflow of (A) conventional stereotactic neurosurgery; (B) MRI-guided robotic neurosurgery. In the robot-assisted procedure, errors can be mitigated by the guidance of real-time MRI and closed-loop control of robotic manipulation. MRI, Magnetic resonance imaging.
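The benefit of closing the loop in Fig. 34.3B can be illustrated with a toy example: a proportional controller repeatedly corrects the instrument position using noisy position measurements standing in for real-time MRI feedback (the gain, noise level, and step count below are arbitrary illustrative values, not those of any actual system):

```python
import numpy as np

def servo_to_target(target, start, gain=0.5, noise_std=0.2, steps=20, seed=0):
    """Toy closed-loop positioning: each iteration takes a noisy position
    measurement (standing in for intraop imaging feedback) and commands a
    corrective motion proportional to the measured error."""
    rng = np.random.default_rng(seed)
    pos = np.array(start, dtype=float)
    tgt = np.asarray(target, dtype=float)
    for _ in range(steps):
        measured = pos + rng.normal(0.0, noise_std, size=pos.shape)  # feedback
        pos = pos + gain * (tgt - measured)                          # correction
    return pos
```

Even with measurement noise, the residual error converges toward the noise floor instead of accumulating, which is the essential argument for closed-loop robotic manipulation under imaging guidance.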

34.4.1 Nonrigid image registration

Mismatch between the preop and intraop images can lead to considerable confusion in target localization. Such a mismatch can arise from various sources: (1) differences in patient positioning between scanning and surgery (e.g., supine versus prone); (2) the lead time between scanning and surgery; (3) the number of sampled fiducial points for registration; and (4) intrinsic error in image fusion. Image registration mitigates such misalignment, thus enabling precise localization of the preoperatively segmented critical/target regions on the intraop images. With the target location pinpointed on the rapidly acquired intraop image, the surgical plan can be established/updated accordingly. To date, many commercial navigation systems employ only rigid registration to realign the two sets of images. However, rigid registration cannot compensate for image discrepancies resulting from actual brain deformation and MR image distortion. For example, it cannot tackle the severe misalignment (~10–30 mm [59]) caused by brain shift after craniotomy (Fig. 34.4). This large-scale


FIGURE 34.4 (Upper row) Brain deformation before and after the craniotomy [60]; (lower row) geometric distortion in diffusion images [71].

brain deformation inevitably makes the surgical plan inconsistent with the actual anatomy during the procedure. Nonrigid image registration has been proposed to mitigate such misalignment. In particular, biomechanical finite-element-based registration schemes have been developed specifically to estimate and predict the extent of brain shift in different regions; a relative stiffness model of the intracranial structures has to be constructed to deduce the deformation caused by gravity [60–62].

Apart from the nonlinear image discrepancy due to tissue deformation, spatial distortion of MR images also hampers the accuracy of MRI-guided stereotactic surgery [63]. The causes of MR distortion are manifold and difficult to quantify. Beyond base (static) field inhomogeneity, chemical shift, and susceptibility artifacts, the nonlinearity of the gradient field contributes most to such distortion. It has been reported that spatial distortions can reach 25 mm at the perimeter of an uncorrected 1.5T MR image, and that the error still remains within the 1% range (typically ~4 mm) even after standard gradient calibration using a grid phantom [64,65]. This error is significant given the supreme accuracy required in stereotaxy. Worse still, the distortion may be aggravated by the higher magnetic field inhomogeneity present in 3T MRI scanners [63]. The combined effect of these variables often results in very complex and nondeterministic image distortion, particularly affecting images obtained by advanced excitation sequences. For example, the echo-planar imaging sequence used in the acquisition of diffusion-weighted images is vulnerable to susceptibility-induced distortions, resulting in heavy distortion at tissue margins where the magnetic susceptibility changes rapidly in 3D space (Fig. 34.4) [66]. Such gradient field nonlinearity sets back gradient-based excitation sequences despite their widespread usefulness.
Nonrigid registration schemes can correct the distortion in gradient-based images while retaining the useful anatomical information. This can be achieved by registering the distorted image to a standard MR image (e.g., a T2 turbo spin echo image, which exhibits little image distortion) obtained at the same imaging instance. As a result, the 3D image correspondence obtained by nonrigid registration can reliably restore misalignment caused by image distortion. Recent research demonstrated that a significant (>10%) accuracy improvement was achieved by resolving such misalignment [67]. However, the complex computation involved in nonrigid registration schemes impedes their use in the intraop scenario. This motivates the development of high-performance image registration schemes using scalable computation architectures such as graphical processing units, field-programmable gate arrays, or computation clusters. Recent works [68–70] have demonstrated substantial computational speed-up, in which the registration process can be accomplished within seconds even for a large 3D image dataset (~27 M voxels).
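The correction step itself amounts to resampling the distorted image through a dense displacement field, which a nonrigid registration algorithm must first estimate. A toy 2D sketch of that resampling step (the field here is known and constant purely for illustration):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, disp, order=3):
    """Resample a 2D image through a per-pixel displacement field.

    disp has shape (2, H, W): disp[0]/disp[1] are row/column
    displacements, so output(r, c) = image(r + disp[0], c + disp[1]).
    """
    rows, cols = np.indices(image.shape).astype(float)
    coords = np.array([rows + disp[0], cols + disp[1]])
    return map_coordinates(image, coords, order=order, mode='nearest')
```

For a smooth field, applying the negated displacements approximately (for a constant shift, exactly) undoes the distortion. Estimating the field from a distorted/undistorted image pair is the hard, computationally expensive part discussed above.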

34.4.2 Magnetic resonance-based tracking

Real-time tracking enables in situ positional feedback of stereotactic instruments inside the MRI scanner bore. Not only does it provide the feedback data to close the control loop of a robot, it also allows the operator to visualize the instrument position/configuration with respect to the brain roadmap. A sufficient number of tracked markers are required to pinpoint the instrument in the image coordinates [72]. However, real-time positional tracking of the instrument inside the MRI scanner is challenging for several reasons: (1) conventional instruments can either be invisible or create serious susceptibility artifacts on MR images; (2) the restricted space of the scanner bore and complicated electromagnetic (EM) shielding limit the use of external tracking devices, for example, stereo-optical cameras; and (3) image reconstruction is time-consuming (e.g., 9.440 seconds are required to acquire a slice of a T2-weighted MR image with a field of view of 220 × 220 mm [73]). Only a few 2D images can be obtained; it is therefore hard to localize multiple marker points on an image domain in a relatively large 3D space.

Passive tracking (Fig. 34.5, upper row) is the most commonly used method, in which passive markers are incorporated into the stereotactic instruments and made directly visible in MR images by changing the contrast. No additional hardware is necessary. The markers are either filled with paramagnetic agents (e.g., gadolinium compounds) that produce positive MR image contrast, or made of diamagnetic materials (e.g., ceramic) that generate negative contrast. These markers can be shaped into spheres, tubes, or other structures for ease of recognition on the images. Passive tracking is simple and safe, and can be performed under various MR field strengths without inducing any heating. However, passive markers may fail to be localized when they are in close proximity or out of the imaging slice [74].
Thus, the configuration of the marker system needs to be specially designed for ready identification [75]. In addition, the localization of passive markers is difficult to perform automatically and in real time: their visualization relies on 2D image reconstruction, which is time-consuming and may not be reliable since MR images are intrinsically distorted [76]. To tackle these challenges, much research attention has recently been given to MR-based active tracking techniques (Fig. 34.5, lower row). Active markers are small coils serving as antennas, individually connected to the MRI scanner receivers, that actively respond to the MR gradient field along the three principal directions. Without the need for


FIGURE 34.5 (Upper row) An MRI-visible guide oriented by a stereotactic device to align with the planned trajectory. A ceramic mandrel is inserted subsequently after the alignment and its tip position is validated in the MR image [84,85]; (middle row) a 5 F catheter embedded with a semiactive marker at the tip. This marker is a resonant circuit and can be controlled by optical fiber. The two MR images show that the marker produces no signal enhancement in the detuned state and an intense signal spot in the tuned state [86]; (lower row) active markers mounted at a brachytherapy catheter. A 3D TSE sequence is adopted to generate a high-resolution MR image (resolution: 0.6 × 0.6 × 0.6 mm3) [83]. 3D, Three-dimensional; MRI, magnetic resonance imaging; TSE, turbo spin echo.


image reconstruction, the markers can be rapidly localized using a 1D projection technique [77]. This localization is automatic, since each marker can be independently identified through its own receiving channel [78,79]. The obtained coordinates may then be used immediately to adjust the subsequent scanning plane [80]. Specific MR sequences are designed to incorporate and interleave both tracking and imaging. Delicate heat control is also needed because of the resonating RF waves and the storage of electrical energy in the conductive structure [81]. For this reason, a semiactive tracking system may be preferable, in which no electrical wire connects the coil marker and the MRI scanner; this resolves the potential problem of heat generated by the wires. The marker unit acts as an RF receiver to pick up the MR gradient signal, as well as an inductor resonating with the signal transmitted to the MRI scanner receiver [82]. The resonance frequency of this coil marker needs fine tuning to match scanners of different field strengths (63.8 and 123.5 MHz for 1.5 and 3T MRI scanners, respectively); the 1.5T scanner is more popular in clinical practice, while 3T provides images with lower noise and a faster acquisition time.

We can foresee that such MR-based tracking coils could be implemented in stereotactic neurosurgery to realize real-time instrument tracking. Promising results have been reported for an MR active tracking system for intraop MRI-guided brachytherapy, in which three active microcoil markers (1.5 × 8 mm2, Fig. 34.5) are mounted on a Ø1.6 mm brachytherapy stylet [83]. Because the tracking and imaging share the same coordinate system, the stylet configuration can be virtually augmented on the MR images in situ. High-resolution (0.6 × 0.6 × 0.6 mm3) stylet localization at a high sampling rate (40 Hz) and low latency (<1.5 ms) could be achieved.
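The 1D projection idea can be sketched as follows: with the readout gradient applied along each principal axis in turn, the coil produces a sharp peak in the corresponding 1D signal, and its coordinate along that axis is recovered from the peak position. Here the peak is located by a simple intensity centroid; the signal shapes and parameters are illustrative assumptions, not any scanner's actual readout:

```python
import numpy as np

def localize_coil(projections, fov_mm):
    """Localize one active microcoil from three 1D projection readouts.

    projections: (3, N) magnitude signals acquired with the readout
    gradient applied along x, y, z in turn; the coil appears as a sharp
    peak whose position encodes its coordinate along each axis.
    fov_mm: field of view spanned by the N samples (centered at 0).
    Returns the (x, y, z) estimate in mm via peak centroids.
    """
    p = np.asarray(projections, dtype=float)
    n = p.shape[1]
    axis_mm = (np.arange(n) - (n - 1) / 2) * (fov_mm / n)  # sample positions
    coords = []
    for proj in p:
        w = proj - proj.min()                  # remove the baseline offset
        coords.append(float(np.sum(axis_mm * w) / np.sum(w)))
    return np.array(coords)
```

Because only three 1D readouts are needed rather than a full 2D image, this style of localization can run at the high rates quoted above.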

34.4.3 Magnetic resonance imaging compatible actuation

The actuator is another key component of a robot. Its performance also determines surgical safety and accuracy, particularly for instrument manipulation in stereotactic surgery, which involves precise coordination of at least three DoFs and demands an average accuracy of 2–3 mm. Conventional high-performance actuators mostly contain magnets and are driven by EM power. However, the use of ferromagnetic materials is forbidden under a strong magnetic field. This provides a strong incentive to develop motors that are safe and compatible with the MRI environment. Piezoelectric motors actuated by high-frequency electric current have been extensively applied in iMRI applications [87–89]. Such motors are usually small (e.g., 40.5 × 25.7 × 12.7 mm3 for the Nanomotion motor shown in Fig. 34.6, upper row) and can provide fine movement at the nanoscale. However, the motion range and speed of these motors are limited and insufficient for some long-stroke DoFs (e.g., inserting an ablation catheter for tumors located in the deep brain area) without

FIGURE 34.6 Exemplary MRI-compatible robotic systems driven by different motors. (Upper row) NeuroArm manipulator (in red frame) driven by ultrasonic piezoelectric motors (in yellow frame). The manipulator is mounted onto an extension board for stereotaxy [96,97]. Careful EM shielding is required for the motors and controller box placed inside the MRI room to ensure safety and minimal interference with the imaging. A system setup diagram of the robot integrated with ultrasonic motors is shown on the right. (Lower row) Prostate robot (in red frame) driven by pneumatic stepper motors (in yellow frame) [91,98]. The controller box can be placed in the control room and connected with the motors via air hoses. A system setup diagram of the robot integrated with pneumatic motors is shown on the right. EM, Electromagnetic; MRI, magnetic resonance imaging.


additional mechanisms. EM interference is inevitably induced by the high-frequency electrical signal, and tailor-made EM shielding of the motor and its electronic drivers may degrade the motor's compactness [88,90]. Moreover, imaging quality can be degraded by the presence of electric current when the motors operate inside the scanner bore during image acquisition, affecting the visualization of small targets (e.g., DBS targets with diameters of approximately 4–12 mm).

In this light, intrinsically MR-safe motors driven by other energy sources, for example, pressurized air/water flow, are preferable, as minimal EM interference is generated by fluid-driven actuation [91–94]. Fig. 34.6 (lower row) shows a general setup of a pneumatically actuated MRI robot. Long transmission air pipes (e.g., 10 m) connect the robot and its control box, which are placed in the MRI and control rooms, respectively. Pressurized air at 0.2–0.4 MPa can be supplied from the medical air system commonly available in hospital rooms. However, the high-frequency air pulses may generate unwanted noise and vibration in the operating room, and the compressibility of air results in limited torque/force output and low-stiffness transmission, making it difficult for the positional accuracy to reach the millimeter level required in stereotaxy [95].

In contrast, incompressible liquids (e.g., water, oil) in hydraulic motors offer relatively accurate, responsive, and steady mechanical transmission, and can typically render large output power. A master–slave design is usually adopted in hydraulic systems: the master unit, driven by electric motors, is placed in the control room; the slave unit, made of MR-safe materials, works near or inside the MRI scanner bore, with power transmitted from the master unit via long hydraulic tubes. In such a hydraulic system, meticulous sealing of all the connectors is required to prevent liquid leakage.
This may pose difficulties in setting up the robot, for example, when disconnecting and reconnecting the hydraulic tubes through the waveguide (with diameter of BØ100 mm) between the MRI room and the control room.
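The accuracy gap between pneumatic and hydraulic transmission comes down to the compressibility of the working fluid. As a rough back-of-the-envelope illustration (not from the chapter; values are assumptions), the axial stiffness of a trapped fluid column in a cylinder is k = βA/L, where β is the fluid's bulk modulus, A the bore area, and L the line length:

```python
# Rough comparison of transmission stiffness for a trapped fluid column:
# k = beta * A / L, with beta the bulk modulus. Illustrative values only.
import math

def column_stiffness(bulk_modulus_pa, bore_diameter_m, line_length_m):
    """Axial stiffness (N/m) of a trapped fluid column of length L."""
    area = math.pi * (bore_diameter_m / 2) ** 2
    return bulk_modulus_pa * area / line_length_m

bore = 0.010      # assumed 10-mm bore
length = 10.0     # 10-m transmission line, as in the setup described above

# Isothermal bulk modulus of air at ~0.3 MPa is roughly the absolute
# pressure itself; water is ~2.2 GPa.
k_air = column_stiffness(0.3e6, bore, length)
k_water = column_stiffness(2.2e9, bore, length)

print(f"air:   {k_air:.2f} N/m")
print(f"water: {k_water:.0f} N/m")
print(f"water column is ~{k_water / k_air:.0f}x stiffer")
```

Under these assumptions the water column is several thousand times stiffer than the air column, which is consistent with the chapter's point that pneumatic lines struggle to deliver millimeter-level positioning while hydraulic lines do not.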

34.5 Conclusion

In this review, we have given an overview of the emerging robotic platforms for MRI-guided stereotactic neurosurgery. These neurosurgical systems allow for enhanced dexterity, stability, and accuracy beyond manual operation. However, few of them are in wide clinical use. This may be because optimized surgical workflows and outcomes have yet to compensate for the high cost of using MRI and MRI-compatible instruments/robots. To tackle this challenge, three key enabling technologies have been presented in this chapter, namely nonrigid image registration, MR-based positional tracking, and MR-safe actuation. All these technological developments will ultimately serve to exploit the available information and augment the surgeon's capabilities by providing enhanced visualization and manipulation. Continued efforts to incorporate these techniques and evaluate their clinical benefits would be of great value.

References

[1] Galloway R, Maciunas RJ. Stereotactic neurosurgery. Crit Rev Biomed Eng 1989;18(3):181–205.
[2] Spiegel EA, Wycis HT, Marks M, Lee AJ. Stereotaxic apparatus for operations on the human brain. Science 1947;106(2754):349–50.
[3] Henderson JM, Holloway KL, Gaede SE, Rosenow JM. The application accuracy of a skull-mounted trajectory guide system for image-guided functional neurosurgery. Comput Aided Surg 2004;9(4):155–60.
[4] Gonzalez-Martinez J, Vadera S, Mullin J, Enatsu R, Alexopoulos AV, Patwardhan R, et al. Robot-assisted stereotactic laser ablation in medically intractable epilepsy: operative technique. Oper Neurosurg 2014;10(2):167–73.
[5] Uecker M, Zhang S, Voit D, Karaus A, Merboldt KD, Frahm J, et al. Real-time MRI at a resolution of 20 ms. NMR Biomed 2010;23(8):986–94.
[6] Mert A, Gan LS, Knosp E, Sutherland GR, Wolfsberger S. Advanced cranial navigation. Neurosurgery 2013;72(Suppl. 1):A43–53.
[7] Holloway KL, Gaede SE, Starr PA, Rosenow JM, Ramakrishnan V, Henderson JM. Frameless stereotaxy using bone fiducial markers for deep brain stimulation. J Neurosurg 2005;103(3):404–13.
[8] Henderson JM. Frameless localization for functional neurosurgical procedures: a preliminary accuracy study. Stereotact Funct Neurosurg 2004;82(4):135–41.
[9] Maciunas RJ, Fitzpatrick JM, Galloway RL, Allen GS. Beyond stereotaxy: extreme levels of application accuracy are provided by implantable fiducial markers for interactive image-guided neurosurgery. Interactive image-guided neurosurgery. Am Assoc Neurol Surg 1993; ISBN: 1879284154.
[10] Maciunas RJ, Galloway Jr RL, Latimer JW. The application accuracy of stereotactic frames. Neurosurgery 1994;35(4):682–95.
[11] Dammers R, Haitsma IK, Schouten JW, Kros JM, Avezaat CJ, Vincent AJ. Safety and efficacy of frameless and frame-based intracranial biopsy techniques. Acta Neurochir (Wien) 2008;150(1):23.
[12] Dorward NL, Paleologos TS, Alberti O, Thomas DG. The advantages of frameless stereotactic biopsy over frame-based biopsy. Br J Neurosurg 2002;16(2):110–18.


[13] Lunsford DL, Parrish R, Albright L. Intraoperative imaging with a therapeutic computed tomographic scanner. Neurosurgery 1984;15(4):559–61.
[14] Black PM, Moriarty T, Alexander E, Stieg P, Woodard EJ, Gleason PL, et al. Development and implementation of intraoperative magnetic resonance imaging and its neurosurgical applications. Neurosurgery 1997;41(4):831–45.
[15] Hadani M, Spiegelman R, Feldman Z, Berkenstadt H, Ram Z. Novel, compact, intraoperative magnetic resonance imaging-guided system for conventional neurosurgical operating rooms. Neurosurgery 2001;48(4):799–809.
[16] Foltynie T, Zrinzo L, Martinez-Torres I, Tripoliti E, Petersen E, Holl E, et al. MRI-guided STN DBS in Parkinson's disease without microelectrode recording: efficacy and safety. J Neurol Neurosurg Psychiatry 2011;82(4):358–63.
[17] Southwell DG, Narvid JA, Martin AJ, Qasim SE, Starr PA, Larson PS. Comparison of deep brain stimulation lead targeting accuracy and procedure duration between 1.5- and 3-tesla interventional magnetic resonance imaging systems: an initial 12-month experience. Stereotact Funct Neurosurg 2016;94(2):102–7.
[18] Chabardes S, Isnard S, Castrioto A, Oddoux M, Fraix V, Carlucci L, et al. Surgical implantation of STN-DBS leads using intraoperative MRI guidance: technique, accuracy, and clinical benefit at 1-year follow-up. Acta Neurochir (Wien) 2015;157(4):729–37.
[19] Starr PA, Markun LC, Larson PS, Volz MM, Martin AJ, Ostrem JL. Interventional MRI guided deep brain stimulation in pediatric dystonia: first experience with the ClearPoint system. J Neurosurg Pediatr 2014;14(4):400–8.
[20] Chinzei K, Hata N, Jolesz FA, Kikinis R. MR compatible surgical assist robot: system integration and preliminary feasibility study. In: International conference on medical image computing and computer-assisted intervention. Berlin, Heidelberg: Springer; 2000. p. 921–30.
[21] Motkoski JW, Sutherland GR. Why robots entered neurosurgery. Exp Neurosurg Anim Models 2016;85–105.
[22] Golby AJ. Image-guided neurosurgery. Elsevier Science; 2015.
[23] Li G, Su H, Cole GA, Shang W, Harrington K, Camilo A, et al. Robotic system for MRI-guided stereotactic neurosurgery. IEEE Trans Biomed Eng 2015;62(4):1077.
[24] Larson P, Starr PA, Ostrem JL, Galifianakis N, San Luciano Palenzuela M, Martin A. 203 application accuracy of a second generation interventional MRI stereotactic platform: initial experience in 101 DBS electrode implantations. Neurosurgery 2013;60(CN_Suppl. 1):187.
[25] Sidiropoulos C, Rammo R, Merker B, Mahajan A, LeWitt P, Kaminski P, et al. Intraoperative MRI for deep brain stimulation lead placement in Parkinson's disease: 1 year motor and neuropsychological outcomes. J Neurol 2016;263(6):1226–31.
[26] Drane DL, Loring DW, Voets NL, Price M, Ojemann JG, Willie JT, et al. Better object recognition and naming outcome with MRI-guided stereotactic laser amygdalohippocampotomy for temporal lobe epilepsy. Epilepsia 2015;56(1):101–13.
[27] Chittiboina P, Heiss JD, Lonser RR. Accuracy of direct magnetic resonance imaging-guided placement of drug infusion cannulae. J Neurosurg 2015;122(5):1173–9.
[28] Ashkan K, Blomstedt P, Zrinzo L, Tisch S, Yousry T, Limousin-Dowsey P, et al. Variability of the subthalamic nucleus: the case for direct MRI guided targeting. Br J Neurosurg 2007;21(2):197–200.
[29] Patel NK, Plaha P, Gill SS. Magnetic resonance imaging-directed method for functional neurosurgery using implantable guide tubes. Oper Neurosurg 2007;61(Suppl. 5):ONS358–66.
[30] Kaouk JH, Goel RK, Haber GP, Crouzet S, Stein RJ. Robotic single-port transumbilical surgery in humans: initial report. BJU Int 2009;103(3):366–9.
[31] Devito DP, Kaplan L, Dietl R, Pfeiffer M, Horne D, Silberstein B, et al. Clinical acceptance and accuracy assessment of spinal implants guided with SpineAssist surgical robot: retrospective study. Spine 2010;35(24):2109–15.
[32] Antoniou GA, Riga CV, Mayer EK, Cheshire NJ, Bicknell CD. Clinical applications of robotic technology in vascular and endovascular surgery. J Vasc Surg 2011;53(2):493–9.
[33] Masamune K, Kobayashi E, Masutani Y, Suzuki M, Dohi T, Iseki H, et al. Development of an MRI-compatible needle insertion manipulator for stereotactic neurosurgery. J Image Guid Surg 1995;1(4):242–8.
[34] Chinzei K, Kikinis R, Jolesz FA. MR compatibility of mechatronic devices: design criteria. In: International conference on medical image computing and computer-assisted intervention. Berlin, Heidelberg: Springer; 1999. p. 1020–30.
[35] Lewin JS, Metzger A, Selman WR. Intraoperative magnetic resonance image guidance in neurosurgery. J Magn Reson Imaging 2000;12(4):512–24.
[36] Sutherland GR, Maddahi Y, Gan LS, Lama S, Zareinia K. Robotics in the neurosurgical treatment of glioma. Surg Neurol Int 2015;6(Suppl. 1):S1–8.
[37] Sutherland GR, McBeth PB, Louw DF. NeuroArm: an MR compatible robot for microsurgery. International congress series. Elsevier; 2003.
[38] Faria C, Erlhagen W, Rito M, De Momi E, Ferrigno G, Bicho E. Review of robotic technology for stereotactic neurosurgery. IEEE Rev Biomed Eng 2015;8:125–37.
[39] Hawasli AH, Ray WZ, Murphy RK, Dacey Jr RG, Leuthardt EC. Magnetic resonance imaging-guided focused laser interstitial thermal therapy for subinsular metastatic adenocarcinoma: technical case report. Oper Neurosurg 2011;70(Suppl. 2):332–7.
[40] Nycz CJ, Gondokaryono R, Carvalho P, Patel N, Wartenberg M, Pilitsis JG, et al. Mechanical validation of an MRI compatible stereotactic neurosurgery robot in preparation for pre-clinical trials. In: Intelligent robots and systems (IROS), 2017 IEEE/RSJ international conference on. IEEE; 2017. p. 1677–84.
[41] Sutherland GR, McBeth PB, Louw DF. NeuroArm: an MR compatible robot for microsurgery. Int Congr Ser 2003;1256:504–8.


[42] Louw DF, Fielding T, McBeth PB, Gregoris D, Newhook P, Sutherland GR. Surgical robotics: a review and neurosurgical prototype development. Neurosurgery 2004;54(3):525–37.
[43] Manjila S, Knudson KE, Johnson Jr C, Sloan AE. Monteris AXiiiS stereotactic miniframe for intracranial biopsy: precision, feasibility, and ease of use. Oper Neurosurg 2015;12(2):119–27.
[44] Mohammadi AM, Hawasli AH, Rodriguez A, Schroeder JL, Laxton AW, Elson P, et al. The role of laser interstitial thermal therapy in enhancing progression-free survival of difficult-to-access high-grade gliomas: a multicenter study. Cancer Med 2014;3(4):971–9.
[45] Comber DB, Pitt EB, Gilbert HB, Powelson MW, Matijevich E, Neimat JS, et al. Optimization of curvilinear needle trajectories for transforamenal hippocampotomy. Oper Neurosurg 2016;13(1):15–22.
[46] Comber DB, Slightam JE, Gervasi VR, Neimat JS, Barth EJ. Design, additive manufacture, and control of a pneumatic MR-compatible needle driver. IEEE Trans Robot 2016;32(1):138–49.
[47] Chinzei K, Miller K. Towards MRI guided surgical manipulator. Med Sci Monit 2001;7(1):153–63.
[48] Koseki Y, Kikinis R, Jolesz FA, Chinzei K. Precise evaluation of positioning repeatability of MR-compatible manipulator inside MRI. In: International conference on medical image computing and computer-assisted intervention. Berlin, Heidelberg: Springer; 2004. p. 192–9.
[49] Ho M, Kim Y, Cheng SS, Gullapalli R, Desai JP. Design, development, and evaluation of an MRI-guided SMA spring-actuated neurosurgical robot. Int J Robot Res 2015;34(8):1147–63.
[50] Kim Y, Cheng SS, Diakite M, Gullapalli RP, Simard JM, Desai JP. Toward the development of a flexible mesoscale MRI-compatible neurosurgical continuum robot. IEEE Trans Robot 2017;33(6):1386–97.
[51] Cheng SS, Kim Y, Desai JP. New actuation mechanism for actively cooled SMA springs in a neurosurgical robot. IEEE Trans Robot 2017;33:986–93.
[52] Guo Z, Dong Z, Lee KH, Cheung CL, Fu HC, Ho JD, et al. Compact design of a hydraulic driving robot for intra-operative MRI-guided bilateral stereotactic neurosurgery. IEEE Robot Autom Lett 2018;3(3):2515–22.
[53] Jun C, Lim S, Wolinsky JP, Garzon-Muvdi T, Petrisor D, Cleary K, et al. MR safe robot assisted needle access of the brain: preclinical study. J Med Robot Res 2018;3(01):1850003.
[54] Miyata N, Kobayashi E, Kim D, Masamune K, Sakuma I, Yahagi N, et al. Micro-grasping forceps manipulator for MR-guided neurosurgery. In: International conference on medical image computing and computer-assisted intervention. Berlin, Heidelberg: Springer; 2002. p. 107–13.
[55] Koseki Y, Washio T, Chinzei K, Iseki H. Endoscope manipulator for trans-nasal neurosurgery, optimized for and compatible to vertical field open MRI. In: International conference on medical image computing and computer-assisted intervention. Berlin, Heidelberg: Springer; 2002. p. 114–21.
[56] Raoufi C, Goldenberg AA, Kucharczyk W. Design and control of a novel hydraulically/pneumatically actuated robotic system for MRI-guided neurosurgery. J Biomed Sci Eng 2008;1(01):68.
[57] Hong Z, Yun C, Zhao L, Wang Y. Design and optimization analysis of open-MRI compatible robot for neurosurgery. In: Bioinformatics and biomedical engineering, 2008. ICBBE 2008. The 2nd international conference on. IEEE; 2008. p. 1773–6.
[58] Mattei TA, Rodriguez AH, Sambhara D, Mendel E. Current state-of-the-art and future perspectives of robotic technology in neurosurgery. Neurosurg Rev 2014;37(3):357–66.
[59] Nimsky C, Ganslandt O, Hastreiter P, Fahlbusch R. Intraoperative compensation for brain shift. Surg Neurol 2001;56(6):357–64.
[60] Archip N, Clatz O, Whalen S, Kacher D, Fedorov A, Kot A, et al. Non-rigid alignment of pre-operative MRI, fMRI, and DT-MRI with intra-operative MRI for enhanced visualization and navigation in image-guided neurosurgery. Neuroimage 2007;35(2):609–24.
[61] Škrinjar O, Nabavi A, Duncan J. Model-driven brain shift compensation. Med Image Anal 2002;6(4):361–73.
[62] Hu J, Jin X, Lee JB, Zhang L, Chaudhary V, Guthikonda M, et al. Intraoperative brain shift prediction using a 3D inhomogeneous patient-specific finite element model. J Neurosurg 2007;106(1):164–9.
[63] Tavares WM, Tustumi F, da Costa Leite C, Gamarra LF, Amaro Jr E, et al. An image correction protocol to reduce distortion for 3-T stereotactic MRI. Neurosurgery 2013;74(1):121–7.
[64] Baldwin LN, Wachowicz K, Thomas SD, Rivest R, Fallone BG. Characterization, prediction, and correction of geometric distortion in MR images. Med Phys 2007;34(2):388–99.
[65] Mallozzi R. Geometric distortion in MRI. The Phantom Laboratory, Inc.; 2015.
[66] Walker A, Liney G, Metcalfe P, Holloway L. MRI distortion: considerations for MRI based radiotherapy treatment planning. Australas Phys Eng Sci Med 2014;37(1):103–13.
[67] Murgasova M, Estrin GL, Rutherford M, Rueckert D, Hajnal J. Distortion correction in fetal EPI using non-rigid registration with Laplacian constraint. In: Biomedical imaging (ISBI), 2016 IEEE 13th international symposium on. IEEE; 2016. p. 1372–5.
[68] Kwok KW, Chow GC, Chau TC, Chen Y, Zhang SH, Luk W, et al. FPGA-based acceleration of MRI registration: an enabling technique for improving MRI-guided cardiac therapy. J Cardiovasc Magn Reson 2014;16(1):W11.
[69] Kwok KW, Chen Y, Chau TC, Luk W, Nilsson KR, Schmidt EJ, et al. MRI-based visual and haptic catheter feedback: simulating a novel system's contribution to efficient and safe MRI-guided cardiac electrophysiology procedures. J Cardiovasc Magn Reson 2014;16(1):O50.
[70] Gu X, Pan H, Liang Y, Castillo R, Yang D, Choi D, et al. Implementation and evaluation of various demons deformable image registration algorithms on a GPU. Phys Med Biol 2009;55(1):207.
[71] Bhushan C, Haldar JP, Choi S, Joshi AA, Shattuck DW, Leahy RM. Co-registration and distortion correction of diffusion and anatomical images based on inverse contrast normalization. Neuroimage 2015;115:269–80.
[72] Moche M, Trampel R, Kahn T, Busse H. Navigation concepts for MR image-guided interventions. J Magn Reson Imaging 2008;27(2):276–91.


[73] Chavhan GB, Babyn PS, Thomas B, Shroff MM, Haacke EM. Principles, techniques, and applications of T2*-based MR imaging and its special applications. Radiographics 2009;29(5):1433–49.
[74] Elayaperumal S, Plata JC, Holbrook AB, Park YL, Pauly KB, Daniel BL, et al. Autonomous real-time interventional scan plane control with a 3-D shape-sensing needle. IEEE Trans Med Imaging 2014;33(11):2128–39.
[75] Strother SC, Anderson JR, Xu XL, Liow JS, Bonar DC, Rottenberg DA. Quantitative comparisons of image registration techniques based on high-resolution MRI of the brain. J Comput Assist Tomogr 1994;18(6):954–62.
[76] Wang D, Strugnell W, Cowin G, Doddrell DM, Slaughter R. Geometric distortion in clinical MRI systems: Part I: Evaluation using a 3D phantom. Magn Reson Imaging 2004;22(9):1211–21.
[77] Dumoulin C, Souza S, Darrow R. Real-time position monitoring of invasive devices using magnetic resonance. Magn Reson Med 1993;29(3):411–15.
[78] Werner R, Krueger S, Winkel A, Albrecht C, Schaeffter T, Heller M, et al. MR-guided breast biopsy using an active marker: a phantom study. J Magn Reson Imaging 2006;24(1):235–41.
[79] Zimmermann H, Müller S, Gutmann B, Bardenheuer H, Melzer A, Umathum R, et al. Targeted-HASTE imaging with automated device tracking for MR-guided needle interventions in closed-bore MR systems. Magn Reson Med 2006;56(3):481–8.
[80] Coutts GA, Gilderdale DJ, Chui M, Kasuboski L, Desouza NM. Integrated and interactive position tracking and imaging of interventional tools and internal devices using small fiducial receiver coils. Magn Reson Med 1998;40(6):908–13.
[81] Konings MK, Bartels LW, Smits HF, Bakker CJ. Heating around intravascular guidewires by resonating RF waves. J Magn Reson Imaging 2000;12(1):79–85.
[82] Rube MA, Holbrook AB, Cox BF, Houston JG, Melzer A. Wireless MR tracking of interventional devices using phase-field dithering and projection reconstruction. Magn Reson Imaging 2014;32(6):693–701.
[83] Wang W, Dumoulin CL, Viswanathan AN, Tse ZT, Mehrtash A, Loew W, et al. Real-time active MR-tracking of metallic stylets in MR-guided radiation therapy. Magn Reson Med 2015;73(5):1803–11.
[84] Richardson RM, Kells AP, Martin AJ, Larson PS, Starr PA, Piferi PG, et al. Novel platform for MRI-guided convection-enhanced delivery of therapeutics: preclinical validation in nonhuman primate brain. Stereotact Funct Neurosurg 2011;89(3):141–51.
[85] Truwit C, Martin AJ, Hall WA. MRI guidance of minimally invasive cranial applications. Interventional magnetic resonance imaging. Springer; 2011. p. 97–112.
[86] Weiss S, Kuehne T, Brinkert F, Krombach G, Katoh M, Schaeffter T, et al. In vivo safe catheter visualization and slice tracking using an optically detunable resonant marker. Magn Reson Med 2004;52(4):860–8.
[87] Krieger A, Song SE, Cho NB, Iordachita II, Guion P, Fichtinger G, et al. Development and evaluation of an actuated MRI-compatible robotic system for MRI-guided prostate intervention. IEEE/ASME Trans Mechatron 2013;18(1):273–84.
[88] El Bannan K, Chronik BA, Salisbury SP. Development of an MRI-compatible, compact, rotary-linear piezoworm actuator. J Med Device 2015;9(1):014501.
[89] Wang Y, Cole GA, Su H, Pilitsis JG, Fischer GS. MRI compatibility evaluation of a piezoelectric actuator system for a neural interventional robot. In: Engineering in medicine and biology society, 2009. EMBC 2009. Annual international conference of the IEEE. IEEE; 2009. p. 6072–5.
[90] Su H, Cardona DC, Shang W, Camilo A, Cole GA, Rucker DC, et al. A MRI-guided concentric tube continuum robot with piezoelectric actuation: a feasibility study. In: Robotics and automation (ICRA), 2012 IEEE international conference on. IEEE; 2012. p. 1939–45.
[91] Stoianovici D, Patriciu A, Petrisor D, Mazilu D, Kavoussi L. A new type of motor: pneumatic step motor. IEEE/ASME Trans Mechatron 2007;12(1):98–106.
[92] Sajima H, Kamiuchi H, Kuwana K, Dohi T, Masamune K. MR-safe pneumatic rotation stepping actuator. J Robot Mechatron 2012;24(5):820–7.
[93] Chen Y, Mershon CD, Tse ZT. A 10-mm MR-conditional unidirectional pneumatic stepper motor. IEEE/ASME Trans Mechatron 2015;20(2):782–8.
[94] Chen Y, Kwok KW, Tse ZTH. An MR-conditional high-torque pneumatic stepper motor for MRI-guided and robot-assisted intervention. Ann Biomed Eng 2014;42:1823–33.
[95] Su H, Cole GA, Fischer GS. High-field MRI-compatible needle placement robots for prostate interventions: pneumatic and piezoelectric approaches. Adv Robot Virtual Real 2012;3–32.
[96] Sutherland GR, Latour I, Greer AD, Fielding T, Feil G, Newhook P. An image-guided magnetic resonance-compatible surgical robot. Neurosurgery 2008;62(2):286–93.
[97] Sutherland GR, Latour I, Greer AD. Integrating an image-guided robot with intraoperative MRI. IEEE Eng Med Biol Mag 2008;27(3):59–65.
[98] Stoianovici D, Kim C, Petrisor D, Jun C, Lim S, Ball MW, et al. MR safe robot, FDA clearance, safety and feasibility of prostate biopsy clinical trial. IEEE/ASME Trans Mechatron 2017;22(1):115–26.

35

RONNA G4—Robotic Neuronavigation: A Novel Robotic Navigation Device for Stereotactic Neurosurgery

Bojan Jerbić¹, Marko Švaco¹, Darko Chudy²,³, Bojan Šekoranja¹, Filip Šuligoj¹, Josip Vidaković¹, Domagoj Dlaka², Nikola Vitez¹, Ivan Župančić¹, Luka Drobilo¹, Marija Turković¹, Adrian Žgaljić¹, Marin Kajtazi¹ and Ivan Stiperski¹

¹Faculty of Mechanical Engineering and Naval Architecture, UNIZAG FAMENA, University of Zagreb, Zagreb, Croatia
²Department of Neurosurgery, University Hospital Dubrava, Zagreb, Croatia
³School of Medicine, Croatian Institute for Brain Research, University of Zagreb, Zagreb, Croatia

ABSTRACT
RONNA G4 (RONNA) is a robotic neuronavigation system based on articulated robotic arms and is intended for minimally invasive stereotactic procedures such as biopsies, stereoelectroencephalography, epilepsy surgery, deep brain stimulation, and tumor resection. RONNA can be configured as a single- or dual-arm system: the single-arm system is intended for stereotactic neuronavigation and serves as a navigation assistant to the surgeon, while the dual-arm configuration additionally performs autonomous invasive tasks such as bone drilling and probe or needle insertion. RONNA is characterized by a fully automated patient registration procedure, robot position planning, accurate instrument guidance, and autonomous bone drilling. A novel localization method combining machine vision and mathematical estimation was developed, as well as a novel point-pairing correspondence algorithm and a multiobjective cost function for the optimization of robot placement. RONNA provides surgical tool positioning within the patient's intracranial space and robotic assistance in drilling operations, distinguished by high accuracy in comparison to existing robotic and other neuronavigation systems. The clinical application of RONNA in stereotactic neurosurgical procedures shortens operation time, lowers procedure invasiveness, and enables faster patient recovery and better utilization of hospital operational resources. Since 2016, RONNA has been undergoing clinical trials at the University Hospital Dubrava.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00035-9. © 2020 Elsevier Inc. All rights reserved.


Abbreviations

BLOB  binary large objects
CMM   coordinate measurement machine
CT    computed tomography
FLE   fiducial localization error
ICP   iterative closest point
JDMS  joint displacement minimization strategy
MR    magnetic resonance
MSER  maximally stable extremal regions
NOS   neutral orientation strategy
OCS   orientation correction strategy
OR    operating room
OTS   optical tracking system
RCD   ray circle detector
TCP   tool center point
TRE   target registration error

35.1 State of the art in robotic neuronavigation

With the advance of neurosurgery and the development of advanced surgical procedures, the expectations placed on stereotactic equipment have increased, and novel solutions are sought to bypass the restrictions imposed by the technologies currently in use. The most commonly used stereotactic systems today are based either on stereotactic frames or on vision systems using retroreflective infrared markers mounted on special adapters. While these systems differ greatly in principle, both rely on vision (the human eye or stereovision cameras) and manual adjustment for positioning, which leaves much room for improvement with regard to positioning precision. Since the primary goal of stereotactic neurosurgery is precise and reliable positioning of neurosurgical instruments (drills, electrodes, probes, and others) on the target points and trajectories defined by the surgeon, positioning accuracy is of utmost importance for neurosurgical robotic systems [1–4]. In the last two decades, robotic and surgical technologies have developed rapidly. Scientific papers [5–7] present an overview, the historical development, and state-of-the-art applications of robotic technology in surgical procedures and in neurosurgery. The main challenges for robotic systems in surgical procedures are geometric accuracy and repeatability, safety, complex three-dimensional (3D) path programmability, automation of the registration procedure, simple practical usage, and fast system adaptation based on multiple sources of sensor data. One of the main obstacles to widespread robotization of neurosurgical procedures is the total cost of robotic systems, which is still very high. These high costs can be attributed to time- and resource-consuming tasks such as robot development and small-series production, but neurosurgical robotic systems can also be designed around commercial robot arms.
Standard robots come in a wide range of kinematic configurations and can meet the specifications required for a wide variety of applications in neurosurgery. The first application of a robot in medicine was also in the field of neurosurgery, with the successful use of the PUMA 200 industrial robot in a frame-based configuration for a brain biopsy procedure in 1985 [8]. Table 35.1 gives an overview of standard industrial robots implemented as part of commercial or research neuronavigation robot systems since the year 2000. The main benefits of implementing industrial robots in medical robotic systems are lower research costs and a lower overall price of the system, since the development of the robotic arm has already been done by the robot manufacturer. Since 2016, four innovative robotic neuronavigation systems have been developed based on standard industrial robots from KUKA [12], Stäubli [10], and Universal Robots [13,14] (details are given in Table 35.1). These systems are not included in the current state-of-the-art literature surveys and review papers [1,5,26,27], which demonstrates the very rapid development of the field of robotic neuronavigation.

35.2 RONNA—robotic neuronavigation

35.2.1 Historical development of the RONNA system

When work on RONNA started in early 2010, the goal was to reduce the load that sensitive and challenging neurosurgical procedures impose on the surgeon. The research teams at the Faculty of Mechanical Engineering and Naval Architecture (FSB), Zagreb (Croatia), and the University Hospital Dubrava (KBD), Zagreb (Croatia), began experimenting with the possibilities of robot application in minimally invasive neurosurgical procedures. Through long-term collaboration, various system designs, localization methods, and procedures were considered. The development was divided into four groups of achieved milestones, marking the four generations of the RONNA system.


TABLE 35.1 Overview of industrial robots used for neuronavigation since the year 2000 [9].

System (project) | Selected papers | Robot manufacturer | Model | RR* (mm) | Payload (kg)
ROSA Spine | Lefranc and Peltier [10]; Chenin et al. [11] | Stäubli | TX60L | ±0.030 | 2
Aqrate | Patel (2016) [12] | KUKA | KR6 R700 | ±0.030 | 6
TIRobot | Tian et al. [13]; Tian [14] | Universal Robots | UR5 | ±0.100 | 5
not specified | Faria et al. [15] | Yaskawa Motoman | MH5 | ±0.020 | 5
Active project | Beretta et al. [16] | KUKA | LWR4+ | ±0.100 | 7
RONNA | Jerbić et al. [17]; Švaco et al. [9] | KUKA | KR6 R900 | ±0.030 | 6
ROSA Brain | Lefranc et al. [18]; González-Martínez et al. [19] | Mitsubishi | RV3SB | ±0.020 | 3
ROBOCAST | Comparetti et al. [20] | Adept | Viper s1300 | ±0.070 | 5
OrthoMIT | Tovar-Arriaga et al. [21] | KUKA/DLR | LWR3 | ±0.150 | 14
Pathfinder | Deacon et al. [22]; Eljamel [23] | Adept | Viper s1300 | ±0.070 | 5
RobaCKa | Eggers et al. [24] | Stäubli | RX90 | ±0.025 | 6
CASPAR | Burkart et al. [25] | Stäubli | RX90 | ±0.025 | 6

RR*, Robot repeatability.
Reproduced with permission from Švaco M, Šekoranja B, Šuligoj F, Vidaković J, Jerbić B, Chudy D. A novel robotic neuronavigation system: RONNA G3. Strojniški vestnik—J Mech Eng 2017. doi:10.5545/sv-jme.2017.4649.

Fig. 35.1 illustrates the evolution of the system design and localization markers through the four generations. The first iterations of the system served as a proof of concept, where various approaches to the problem were tested and evaluated. Experiments involved research into system setup, robot positioning and sizing, localization methods, etc. After the initial trials, a dual-arm configuration was selected, in which one robot serves as an assistant for stereotactic navigation and the other performs invasive operations such as bone drilling and probe or needle insertion. The first generation of the system used a positioning system located on the robotic arm, consisting of a camera, a laser distance sensor, and a reference localization plate. The second generation, an outcome of the initial experiments, was a refined system in which the best concepts from the first generation were implemented and improved. At this stage the KUKA Agilus KR6 R900 sixx was chosen as the navigational assistant robot, the localization method was modified, and a stereovision system for locating the spherical features of the reference marker was developed and implemented. With this generation of the system, preclinical trials on phantoms were started in 2012, through which the system's robustness, accuracy, and reliability were verified. The third generation was characterized by the implementation of the mobile platform, which contains all the necessary electrical equipment and makes RONNA a complete system. Along with the platform, a new type of marker was applied. This generation also marked a significant milestone, as the first brain biopsy with the RONNA G3 system was performed in May 2016 [3]. Since then, RONNA has been in regular use at the University Hospital Dubrava (KBD) in Zagreb, with the fourth generation bringing further developments, as described in the following sections.

FIGURE 35.1 The historical development of the RONNA G4 system through four generations: RONNA G1 (2010), RONNA G2 (2011), RONNA G3 (2016), and RONNA G4 (2017).

35.2.2 RONNA G4 system—the fourth generation

Developed for frameless stereotactic navigation, the RONNA robotic neuronavigation system is based on standard industrial robots; the basic version consists of three main components: a robotic arm placed on a universal mobile platform, a planning system, and a navigation system. The extended version of RONNA (shown in Figs. 35.2 and 35.3) consists of two robotic arms mounted on specially designed universal mobile platforms, a global optical tracking system (OTS; Polaris Spectra, NDI—Northern Digital Inc., Ontario, Canada), and a control and planning software interface. The robots are equipped with surgical tools (guides, grippers, drill, etc.). A localization feature based on freely distributed fiducial markers is used for patient registration, while a stereovision system (RONNAstereo) is used for patient localization in the physical space. A specific characteristic of RONNA with respect to current state-of-the-art robotic neurosurgical solutions [1,5,9,17,26,27] is an additional mobile platform equipped with a compliant and sensitive robotic arm, which makes it a dual-arm robotic system (master and assistant). The robots used in the current fourth generation (Fig. 35.3) are standard six degrees-of-freedom (DoF) KUKA Agilus KR6 R900 sixx revolute robots. This enables full flexibility in positioning and reorientation around the operative trajectories, each defined by five parameters (three translations and two rotations).

FIGURE 35.2 The RONNA G3 system (render) with components: (1) master robot, (2) assistant robot, (3) universal mobile platform, (4) optical tracking system, and (5) control and planning software interface.
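The chapter does not disclose RONNA's own registration algorithm, but once fiducial correspondences between image space and robot space are established, the rigid transform between them is conventionally recovered with a least-squares fit (the standard Kabsch/Horn SVD solution). A minimal NumPy sketch of that generic step, with synthetic data:

```python
# Least-squares rigid registration between matched fiducial point sets
# (standard Kabsch/Horn SVD solution; a generic sketch, not RONNA's code).
import numpy as np

def rigid_register(image_pts, robot_pts):
    """Return R (3x3), t (3,) such that robot_pts ~= image_pts @ R.T + t."""
    P = np.asarray(image_pts, float)
    Q = np.asarray(robot_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: rotate/translate four fiducials and recover the pose.
rng = np.random.default_rng(0)
fiducials = rng.uniform(-50, 50, size=(4, 3))        # mm, image space
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([100.0, -20.0, 5.0])
measured = fiducials @ R_true.T + t_true             # robot space

R, t = rigid_register(fiducials, measured)
fre = np.linalg.norm(fiducials @ R.T + t - measured, axis=1).mean()
print(f"mean residual (FRE-like): {fre:.2e} mm")
```

With noise-free synthetic points the residual is at machine precision; with real, noisy fiducial localizations the same residual becomes the fiducial registration error, which is related to but distinct from the target registration error (TRE) listed in the abbreviations.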

RONNA G4—Robotic Neuronavigation Chapter | 35

603

FIGURE 35.3 The current version of the RONNA G4 system with two Kuka Agilus robots.

Due to their application in the operating environment, the system design and functional requirements in neurosurgical robotics are much more demanding than in conventional robotics, for example, in industrial applications. The robotic system must be compact enough to fit in the operating room (OR) and avoid interfering with regular procedures performed by the medical staff, while also meeting complex requirements regarding spatial working ability. Therefore, the entire setup was designed using computer-aided design software, enabling modeling and simulation [28] of the various trajectories and instruments involved in neurosurgical procedures, as well as of the requirements regarding the location of the system in the OR and its relation to other equipment and medical staff. Whether the system is used as a single robot for stereotactic navigation or in a dual-arm mode, enabling more invasive procedures, depends on the type of surgery at hand and the assessment of the surgeon in charge. In both cases, the patient is under anesthesia with the head fixed in a head holder, for example, a Mayfield clamp, a stereotactic frame, or a device with similar functionality. The master robot is used to accurately guide the surgical instruments (drill, needle, or any other instrument) to the desired orientation and target point (thus defining the operation trajectory), after which the neurosurgeon or the assistant robot performs instrument insertion. When only the master robot is used, the system serves the purpose of a stereotactic navigation instrument (guide), while the extended version of RONNA, which uses both robotic arms, is intended for automated robotic bone-drilling applications and manipulation of surgical instruments. The assistant robot inserts the instrument through the tool guide positioned on a planned trajectory pointing toward the target point. In addition, the assistant robot is planned for assisting the surgeon through intuitive human-robot collaboration [29].

35.3 RONNA surgical workflow

The RONNA clinical procedure is composed of three phases: the preoperative phase, the preparation phase, and the operation phase. Preclinical trials and laboratory experiments are presented in detail in the following papers [9,17,30]. In the preoperative phase, bone-attached screws are fixed to the patient's head and the patient is scanned with a CT (computed tomography) scanner. After scanning, the images are imported into the operation planning software (RONNAplan), where trajectories are defined and the fiducial reference markers are localized in image space. Manual localization of fiducial markers is also possible, but it has shown drawbacks in the past, such as insufficient localization accuracy and long duration, as well as the added possibility of human error. To overcome these drawbacks, an automated algorithm for accurate localization of spherical fiducials in image space was developed [31], outputting a set of localized fiducial points $\{x_i\}$. Operating trajectories, consisting of target and entry points, are planned the day before or just before the surgery using the RONNA plug-in developed for Osirix MD (Pixmeo Sàrl, Switzerland). The developed plug-in can be used on any laptop or desktop computer running the Osirix MD software. Using this software, the neurosurgeon can make the operative plan separately from the workstation implemented in RONNA (i.e., in his office) and transfer the plan via USB, CD, or through the network infrastructure (PACS server) to the robot workstation. Using an intuitive and straightforward user interface, the operator can control intraoperative robot movements, such as commanding a robot to a drilling or biopsy position. The planning software also enables measurement of postoperative entry and target point errors by utilizing state-of-the-art image fusion algorithms. RONNAplan enables visualization of the patient's anatomy and

35. RONNA G4 Robotic System


FIGURE 35.4 Trajectory definition in the RONNAplan planning software is shown with the freely distributed markers attached to the patient during clinical trials.

planning of an $i = 1, \dots, a$ number of operation trajectories in the coordinate system of the CT scanner (denoted as CT). Each operation trajectory is composed of two points: an entry point $en_i$, denoted as the translation vector ${}^{CT}_{en_i}t$ and defined on the surface of the patient's skull, and the surgery target point $tr_i$, denoted as ${}^{CT}_{tr_i}t$. An example of a planned trajectory is shown in Fig. 35.4. The generated surgical plan can be automatically transferred to the robot control software after the planning phase has been completed. Patient registration implies determining a spatial transformation between the coordinate systems of the medical diagnostic images and the patient in the OR. In the patient registration process, freely distributed individual spherical fiducials mounted on bone screws are used, as shown in Fig. 35.5. In the second step, the preparation phase of the RONNA procedure, the patient is brought to the OR and the robot is positioned near the patient. The global OTS is used for coarse global positioning of the robot with respect to the patient in the global localization phase of the procedure, enabling automatization of the registration procedure. For solving the rigid point-based registration problem, a correspondence between the physical space and image space fiducials must be established. In the third and final step, the operation phase, the robot arm is mounted with the stereovision localization device RONNAstereo for accurate physical space localization. RONNAstereo houses two industrial cameras (IDS uEye SE) with infrared sensors and a ring of infrared light-emitting diodes placed in front of the lens of each camera. The virtual tool center point (TCP) of RONNAstereo is calibrated so that it corresponds with the TCP of a calibrated surgical tool. Stereovision images are processed using a machine vision algorithm that actively determines the position of a localized spherical fiducial with respect to the robot's TCP.
Relocalization with RONNAstereo ensures better precision than if it depended only on the OTS coordinates. After registration, RONNAstereo is physically replaced with a surgical instrument, which can then be moved by the robot to any trajectory planned by the surgeon in the preoperative phase. A complete RONNA surgical workflow is shown in Figs. 35.6 and 35.7.
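Since rotation about the instrument's long axis is free, each planned trajectory is fully determined by its entry and target points. A minimal sketch (hypothetical helper names, not RONNA code) of deriving the tool-axis direction and insertion depth from such a point pair:

```python
import numpy as np

def trajectory_from_points(entry, target):
    """Derive the tool-axis unit vector and insertion depth (mm) from a
    planned entry/target point pair, both given in the CT frame."""
    entry = np.asarray(entry, dtype=float)
    target = np.asarray(target, dtype=float)
    axis = target - entry                # points from skull surface to target
    depth = float(np.linalg.norm(axis))  # insertion depth in mm
    if depth == 0.0:
        raise ValueError("entry and target points coincide")
    return axis / depth, depth

# Example: entry on the skull surface, target 60 mm below it along -z
direction, depth = trajectory_from_points([10.0, 0.0, 0.0], [10.0, 0.0, -60.0])
```

The unit vector and depth together carry the five meaningful parameters of the trajectory (three translations of the target and two rotations of the axis).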

35.4 Automatic patient localization and registration

Patient localization defines the process of acquiring the coordinates of the patient’s head in the image space [CT or magnetic resonance (MR)] or in physical space (in the OR using RONNAstereo). Registration is a general term describing the process of developing a spatial mapping between two sets of data. In one of the more accepted definitions the term registration implies the aligning of two images of the same environment or object, which can be taken from different viewpoints, with different devices, and at different times [32]. The registration problem is challenging because point


FIGURE 35.5 Freely distributed fiducial markers composed of: a retroreflective sphere (A), that is, a fiducial marker, a removable base (B), and a self-drilling and self-tapping screw (C). Reproduced with permission from Šuligoj F, Jerbić B, Švaco M, Šekoranja B. Fully automated point-based robotic neurosurgical patient registration procedure. Int J Simul Model 2018;17:458–71. doi: 10.2507/IJSIMM17(3)442.

FIGURE 35.6 Steps in the RONNA surgical workflow of a brain biopsy. (A) Bone fiducials are attached to the patient's cranial bone under local anesthesia. (B) The patient is preoperatively scanned on a CT scanner. (C) The entry and target points are planned in the operating room prior to surgery. (D) Automatic patient localization is performed using the RONNAstereo stereovision system. (E) Sterile draping of the robot arm. CT, Computed tomography.


correspondences between the two point sets are often unknown a priori. In that case, the registration problem is also known as the simultaneous pose and correspondence problem [33]. A rigid registration emerges from the definition of a rigid body: a rigid body is a collection of particles moving in such a way that the relative distances between the particles do not change [34]. Hence, a rigid registration is defined as a transformation that does not change the distance between any two points on the moving body; typically, such a transformation consists of a translation and a rotation. A rigid body transformation in 3D is defined by six parameters, three translations and three rotations [35], based on reference points whose positions in physical space are determined by the localization procedure. Reference points for the "image-to-physical" registration process in neurosurgery are also called fiducial points (RONNA uses retroreflective markers on bone screws as fiducial points). Errors which occur in the localization procedure, both in the image and in the physical space, reduce the alignment accuracy in the registration process and have a negative effect on the registration accuracy. Since bone screws are utilized in RONNA operating procedures, the registration of the patient from image to physical space is considered a rigid registration.
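The distance-preserving property of such a six-parameter transformation can be checked numerically; the sketch below (plain NumPy, illustrative only) builds a homogeneous rigid transform from three rotations and three translations and verifies that inter-point distances survive it:

```python
import numpy as np

def rigid_transform(rx, ry, rz, tx, ty, tz):
    """4x4 homogeneous rigid transform from three rotation angles (rad,
    composed as Rz @ Ry @ Rx) and three translations."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

T = rigid_transform(0.3, -0.2, 0.5, 10.0, -4.0, 7.0)
pts = np.array([[0.0, 0.0, 0.0], [30.0, 0.0, 0.0], [0.0, 40.0, 0.0]])
moved = (T[:3, :3] @ pts.T).T + T[:3, 3]
# distances between any two points are unchanged by a rigid transform
```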

606

Handbook of Robotic and Image-Guided Surgery

FIGURE 35.7 Steps in the RONNA surgical workflow of a brain biopsy. (A) A sterile tool guide is attached to the robot. (B) The robot is positioned along the planned trajectory; the skin incision and drilling of the burr hole are performed by the neurosurgeon. (C) After drilling the burr hole, electrocoagulation of the dura and cortical entry of the biopsy needle are performed, and the biopsy needle is advanced. Finally, a staged biopsy is performed. (D) The tumor tissue samples are ejected into a Petri dish. The operative procedure is completed.

The correspondence problem is a fundamental task found in applications such as image and point cloud alignment, optical flow estimation, 3D reconstruction, and stereo vision [36,37]. A solution to the correspondence problem is finding correct point-to-point or pixel-to-pixel correspondences between models or images. Any device that generates spatial output data generates positioning errors. This means there is a large probability that, due to the errors in the acquired data, there will be incorrect correspondences between the input points. The incorrect correspondences are called outliers and if they are not removed they will impair the accuracy of the estimated transformation. As elaborated in Ref. [38], current 3D correspondence techniques are much less accurate than those of their 2D counterparts because of a higher rate of outliers.

35.4.1 Robotic navigation and point-pair correspondence

RONNA utilizes a fully automatic patient registration procedure when the robot arm and the patient are in the line of sight of the OTS. This solution simplifies the localization procedure for the medical personnel and shortens the duration of the operation. To achieve automatic registration in the OR, the OTS is used for localization of the fiducial markers placed on the patient and the robot. At this phase, there are two separate point sets: the first set of $i = 1, \dots, n$ points $\{x_i\}$ are the positions of the fiducial markers in the image space (CT), while the second set of $j = 1, \dots, m$ points $\{y_j\}$ are the coordinates of the fiducial markers attached to the patient in the physical space captured by the OTS (coordinate system OTS). Every point $x_i$ or $y_j$ will be denoted as a translation vector ${}^{CT}_{x_i}t$ or ${}^{OTS}_{y_j}t$, depending on the context. To solve the rigid point-based registration problem, the correspondence between $\{x_i\}$ and $\{y_j\}$ must be established. To address the problem of using fiducial markers placed at unknown distances to one another, a novel correspondence algorithm was developed [39]. The algorithm uses a similarity matrix, together with the known positional mean error and standard deviations of the input data from the OTS and the CT scanner with the localization algorithm, to validate successful point pairing and to remove potential outliers. The correspondence algorithm is presented in detail in the paper by Šuligoj et al. [39]. Once at least three corresponding point pairs have been matched, the problem is reduced to calculating the 3 × 3 rotation matrix P and the 3 × 1 translation vector t that align the corresponding l points (l can be smaller than m and n if the point set includes outliers) from $\{x_i\}$ and $\{y_j\}$ in a way which minimizes the root-mean-square (RMS) distance between the points:

$$ d^2 = \frac{1}{l} \sum_{i=1}^{l} \left\| y_i - (P x_i + t) \right\|^2 \qquad (35.1) $$
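The idea behind distance-based correspondence can be illustrated with a much-simplified sketch: because a rigid motion preserves inter-point distances, each fiducial can be identified by its sorted distances to the other fiducials. The function below is illustrative only; the actual RONNA algorithm [39] additionally uses a similarity matrix built from the known error statistics of the OTS and CT localization to validate pairings and reject outliers.

```python
import numpy as np

def match_by_distances(X, Y, tol=1.0):
    """Pair points between two equally sized sets related by an unknown
    rigid transform, using each point's sorted distances to all other
    points as a signature (rigid motion preserves these distances)."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)

    def signatures(P):
        D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
        return [np.sort(np.delete(row, i)) for i, row in enumerate(D)]

    sx, sy = signatures(X), signatures(Y)
    return [(i, j) for i, si in enumerate(sx) for j, sj in enumerate(sy)
            if np.all(np.abs(si - sj) < tol)]
```

With distinct inter-fiducial distances, the returned index pairs recover the correct correspondence regardless of the order in which the OTS reports the markers.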


Typically, because of the localization errors, d cannot be zero. An example of a method for rigid point-based registration is the least-squares fitting of two 3D point sets [40]. The final goal of the registration procedure is to find the transformation between the image space and the physical space for the OTS:

$$ {}^{CT}_{OTS}T = \begin{bmatrix} {}^{CT}_{OTS}P & {}^{CT}_{OTS}t \\ 0 & 1 \end{bmatrix} \qquad (35.2) $$

or for the robot:

$$ {}^{CT}_{R}T = \begin{bmatrix} {}^{CT}_{R}P & {}^{CT}_{R}t \\ 0 & 1 \end{bmatrix} \qquad (35.3) $$
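Given matched point pairs, the rotation P and translation t minimizing the RMS distance of Eq. (35.1) have a closed-form solution via the singular value decomposition, in the spirit of the least-squares fitting method of Ref. [40]. A compact sketch (a generic SVD-based solver, not the RONNA implementation):

```python
import numpy as np

def fit_rigid(X, Y):
    """Least-squares rigid registration: find rotation P and translation t
    minimizing sum ||y_i - (P x_i + t)||^2, via SVD of the 3x3
    cross-covariance of the centered point sets."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    cx, cy = X.mean(axis=0), Y.mean(axis=0)
    H = (X - cx).T @ (Y - cy)                            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard reflections
    P = Vt.T @ D @ U.T
    t = cy - P @ cx
    return P, t
```

For noiseless, correctly paired points the recovered P and t reproduce the true transformation exactly; with noisy localizations they minimize the residual d of Eq. (35.1).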

When point correspondence between the CT and OTS coordinate systems is established, the position of the patient in the robot coordinate system R can be calculated. As can be seen in Fig. 35.8, the OTS, which detects the position and calculates the coordinates of the fiducial markers, is used to attain the position and orientation of the dynamic reference frame M mounted on the robot tool (${}^{OTS}_{M}T$), as well as the positions of the freely distributed fiducial markers $y_i$ attached to the patient. M is retrieved in the OTS as ${}^{OTS}_{M}T$, that is, the position and orientation of the predefined configuration of individual fiducials attached to the robot tool. Positions of the patient fiducial markers in the coordinate system of the OTS are defined using the translation vectors denoted as ${}^{OTS}_{y_i}t$. The connection between the position and orientation of the fiducial marker attached to the robot tool and the TCP of the robot is defined by the transformation ${}^{TCP}_{M}T$. The transformation ${}^{OTS}_{TCP}T$ is calculated as

$$ {}^{OTS}_{TCP}T = {}^{OTS}_{M}T \cdot {}^{TCP}_{M}T \qquad (35.4) $$

The position and orientation of the precalibrated TCP in the robot base coordinate system (${}^{R}_{TCP}T$) are acquired from the robot controller. The translational and rotational parts of the transformation ${}^{TCP}_{M}T$ are determined by means of tool calibration. The translation ${}^{TCP}_{M}t$ is first calculated by moving the TCP of the robot arm through several configurations around the same point in space [41]. The orientation is calculated by moving the TCP of the robot to three points and by creating a new coordinate system TEMP in space, which is shared by the robot (${}^{R}_{TEMP}T$) and the OTS (${}^{OTS}_{TEMP}T$). For calculating the rotation matrix ${}^{M}_{TCP}P$, the orientation of the marker captured by the OTS and the orientation of the TCP in the robot coordinate system are recorded at the same time:

$$ {}^{M}_{TCP}P = \left({}^{OTS}_{M}P\right)^{-1} \cdot {}^{OTS}_{TEMP}P \cdot \left({}^{R}_{TEMP}P\right)^{-1} \cdot {}^{R}_{TCP}P \qquad (35.5) $$

When ${}^{TCP}_{M}T$ is known, the value of ${}^{OTS}_{TCP}T$ is calculated using Eq. (35.4), which determines the coordinates of the fiducial markers in the TCP coordinate system:

$$ {}^{TCP}_{y_i}t = \left({}^{OTS}_{TCP}T\right)^{-1} \cdot {}^{OTS}_{y_i}t \qquad (35.6) $$

The coordinates of the fiducial markers in the coordinate system R are further calculated as

$$ {}^{R}_{y_i}t = {}^{R}_{TCP}T \cdot {}^{TCP}_{y_i}t \qquad (35.7) $$

on the assumption that the robot joint positions used for calculating ${}^{R}_{TCP}T$ are identical to those used when capturing ${}^{OTS}_{M}T$ and ${}^{OTS}_{y_i}t$. Therefore, when the coordinates of the fiducial markers in the robot base coordinate system are known, the patient position in relation to the robot is also known.

FIGURE 35.8 Coordinate systems and transformations used for achieving an automatic patient registration procedure. Reproduced with permission from Šuligoj F, Jerbić B, Švaco M, Šekoranja B. Fully automated point-based robotic neurosurgical patient registration procedure. Int J Simul Model 2018;17:458–71. doi: 10.2507/IJSIMM17(3)442.
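Eqs. (35.6) and (35.7) are plain compositions of homogeneous transforms; a sketch with hypothetical numeric values (in practice, the transforms would come from the OTS measurements and the robot controller):

```python
import numpy as np

def hom(P, t):
    """Assemble a 4x4 homogeneous transform from rotation P and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = P, t
    return T

def fiducial_in_robot_base(T_R_TCP, T_OTS_TCP, p_OTS):
    """Map one fiducial position from the OTS frame to the robot base frame R,
    following Eqs. (35.6) and (35.7)."""
    p = np.append(np.asarray(p_OTS, float), 1.0)
    p_TCP = np.linalg.inv(T_OTS_TCP) @ p   # Eq. (35.6): TCP-frame coordinates
    return (T_R_TCP @ p_TCP)[:3]           # Eq. (35.7): robot-base coordinates

# Hypothetical example: TCP rotated 90 degrees about z in the OTS frame,
# TCP located 100 mm along x in the robot base frame
Rz90 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
T_OTS_TCP = hom(Rz90, [1.0, 2.0, 3.0])
T_R_TCP = hom(np.eye(3), [100.0, 0.0, 0.0])
p_R = fiducial_in_robot_base(T_R_TCP, T_OTS_TCP, [1.0, 12.0, 3.0])
```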

35.4.2 Automated marker localization

In medical robotics, patient localization is defined as the process of determining the exact position or coordinates of the patient in the image space or the physical space. Finding point pairs between two sets of points, $x_i$ and $y_j$, that after transformation have an RMS distance of zero between them would mean that both inputs have zero positioning errors and no outlier points. In actual situations, the positioning errors from input devices are a consequence of environmental signal noise, errors produced by the discretization of the input signal, and the resolution of the device itself. Errors which occur in the localization procedure reduce the alignment accuracy in the registration process and have a negative effect on the registration accuracy. It should be noted that the positions of the individual points are localized with an error composed of the resolution of the input devices and the localization error, which can be written as in Ref. [42]:

$$ x_i = \hat{x}_i + e_{x_i} \qquad (35.8) $$

$$ y_j = \hat{y}_j + e_{y_j} \qquad (35.9) $$

where $\hat{x}_i$ and $\hat{y}_j$ are the true point coordinates and $x_i$ and $y_j$ are the coordinates from the patient images and the patient in the OR, containing their respective errors $e_{x_i}$ and $e_{y_j}$.

35.4.2.1 Automatic localization in image space

The primary motivation for the development of the image space localization algorithm was to improve the accuracy and robustness of image space localization, previously performed manually by human operators [31]. An iterative clustering method was developed for circle grouping. Due to the visually cluttered environment in 2D CT images, many false-positive circles can be detected. Verified clusters are used for the calculation of sphere centers. Euclidean distance filters are used in the clustering phase, as well as for the elimination of potential false-positive results. Two methods for estimating spherical fiducial centers within the detected clusters are implemented: RANSAC Linefit and Spherefit, as shown in Fig. 35.9. The robustness, accuracy, reliability, and processing time of the algorithm for the automated localization of fiducial markers are being verified in ongoing clinical trials. The performance of the localization algorithm was evaluated in comparison with four skilled human operators. The measurements were based on 12 patient and eight laboratory phantom CT scans. The localization error of the algorithm was smaller than that of the human readings by 49.29% according to the ground truth estimation and by 45.91% according to the intramodal estimation.

FIGURE 35.9 Automatic localization of spherical fiducials in image space.
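The Spherefit estimate mentioned above can be illustrated with a standard algebraic least-squares sphere fit (a generic method, not necessarily the exact RONNA implementation), which recovers a sphere center and radius from clustered circle or edge points:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: returns (center, radius).
    Rewrites ||p - c||^2 = r^2 as the linear system
    2 c·p + (r^2 - ||c||^2) = ||p||^2 and solves it for c and the
    auxiliary scalar k = r^2 - ||c||^2."""
    P = np.asarray(points, float)
    A = np.hstack([2 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    radius = float(np.sqrt(k + center @ center))
    return center, radius
```

Four or more non-coplanar points determine the sphere; with the many points of a CT cluster, the least-squares solution averages out the per-slice detection noise.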


35.4.2.2 Automatic localization in physical space—RONNAstereo

To achieve accurate localization in physical space, RONNAstereo is used in combination with a series of specifically developed computer vision algorithms. Retroreflective medical-grade navigation marker spheres attached to bone screws are used for patient localization. When illuminated, the markers reflect infrared light back to the cameras, while the background remains dark. Grayscale images are obtained and preprocessed, and the spheres are detected. As every cross-section of a sphere is a circle, an algorithm for detecting circles is used. The process of automatic localization in physical space can be divided into three main phases, described in detail in the remainder of this section:

1. Image preprocessing;
2. Automatic brightness adjustment;
3. Circle detection.

1. Image preprocessing. Before circle detection, image preprocessing is performed to remove noise and reflections which could obstruct circle detection. A Gaussian blur is used to soften the sphere edges, which often contain irregularities due to the rough surface of the marker. In the next step, the grayscale image is transformed into a binary image through thresholding. Small particles are then removed using erosion and dilation. The remaining binary objects are filtered by size, shape, and distance from the center: objects with an area of less than 1000 pixels are removed, as are all objects whose sides differ in length by more than 5%. Object radii are then computed, and all objects with a radius of less than 50 or more than 80 pixels are removed.

2. Automatic brightness adjustment. Brightness adjustment is a critical step in image processing because it enables stable and accurate circle detection. In the localization phase, the robot is guided by the OTS for coarse localization. When RONNAstereo is positioned in front of each marker, a signal is sent to the localization software to find circles in both images so that the robot's correction can be calculated. The lower and upper brightness limits are determined as the brightness levels at which circle detection becomes inaccurate: when brightness is too low, irregularities appear on the sphere edges, and when the image is too bright, the sphere edges appear larger. If a circle is not found in one or both images, the camera parameters are changed so that the brightness is decreased by 10%. This step is repeated until a circle is found or the lowest brightness limit is reached; when the lower limit is reached, the parameters are set to the upper brightness limit and the step is repeated.

3. Circle detection. To select the best possible circle detection algorithm, five different circle detection algorithms were validated. Three algorithms are based on finding circular edge points and fitting a circle through the detected points: (1) the Hough transform [43], (2) circle detection based on learning automata [44], and (3) the ray circle detector (RCD) algorithm, which uses radially spreading rays to find the edges of a circle. Two algorithms are based on finding the center of gravity of a binary region: (4) detection and filtering of binary large objects (BLOBs) and (5) maximally stable extremal regions (MSER) [45].

The upper limit for algorithm execution time is set to 30 ms to enable real-time detection. The algorithms which did not satisfy this condition were circle detection based on learning automata, which took over 30 seconds, and the MSER algorithm, which took 153 ms on average to execute. To evaluate circle detection accuracy, a marker sphere is placed in the center of both cameras and the distance between the detected circle center and the image center is measured for each detection algorithm. Distances are measured in pixels and in millimeters, where millimeters per pixel are calculated using the known sphere diameter in millimeters and the sphere's size in the image in pixels. Fig. 35.10 shows the comparison between the RCD, HOUGH, BLOB, and MSER algorithms; the yellow square represents the processed area of the image. Due to its slow execution, circle detection based on learning automata is not included in the comparison. A comparison of the remaining four algorithms is shown in Table 35.2. The MSER algorithm proved the most accurate but, due to its slow execution time, was unable to work in real time and was not implemented for patient localization. The HOUGH algorithm was less accurate than the RCD and BLOB algorithms. Although the RCD and BLOB algorithms had similar accuracy in the testing environment, the BLOB algorithm was more stable under changing light conditions and was therefore the one implemented.

FIGURE 35.10 Comparison of circle detection algorithms in images recorded during surgery.

TABLE 35.2 Comparison of circle detection algorithms.

Algorithm | Left image detection error (pixel) | Right image detection error (pixel) | Left image detection error (mm) | Right image detection error (mm) | Average execution time (ms)
RCD   | 1.20 | 1.41 | 0.08 | 0.10 | 3.42
HOUGH | 1.59 | 2.24 | 0.11 | 0.16 | 1.51
BLOB  | 1.35 | 1.00 | 0.09 | 0.07 | 1.43
MSER  | 1.01 | 0.00 | 0.07 | 0.00 | 153.43

BLOB, Binary large objects; MSER, maximally stable extremal regions; RCD, ray circle detector.

35.5 Optimal robot positioning with respect to the patient

Since RONNA provides both single- and dual-arm configuration possibilities, a novel positioning algorithm for dynamic robot placement with respect to operation trajectories was developed. The algorithm is used to determine the correct positioning of single or multiple robot platforms with respect to the patient [46]. The positioning algorithm is used to calculate the optimal positions of both robots for mutual target operation points by ensuring high dexterity for each robot and preventing collision between the collaborating robots. It validates configurations in joint space based on a multiobjective cost function composed of three criteria: the condition number (c), joint limit avoidance (JLA), and a collision avoidance criterion (d). Fast convergence of the position optimization algorithm is facilitated by realistic initial conditions determined from the RONNA reachability maps, generated by simulating characteristic operation trajectories in the robot workspace.

35.5.1 Dexterity evaluation

Robot dexterity in trajectories defined for surgical procedures is evaluated based on the c and JLA parameters. When the robot posture is near a singularity, uncontrolled high-speed motions of the robot joints are possible. To avoid unwanted singularity-related issues, the condition number c is used as a measure of distance from singular positions. A high condition number indicates the vicinity of a singularity, while a low value indicates a robot posture with high dexterity, far from any singularities. Hence, the application of the condition number ensures that singular positions are avoided. The condition number is calculated based on the robot's normalized Jacobian matrix ($J_N$). The robot's characteristic length described in Ref. [47] was calculated and used as a normalization factor. The normalization was performed using the form presented in Dumas et al. [48], and the condition number in the normalized form is then calculated as

$$ c = c\left(J_N\right) \qquad (35.10) $$

The JLA parameter [49] evaluates robot posture through the robot joint positions, where a posture with all the joints in the middle of their working range is considered optimal. This assures that the robot joints are far from the boundaries of their working range and prevents the robot from moving into configurations unreachable due to joint boundaries. The JLA parameter for a robot joint configuration is calculated as

$$ JLA = \bar{k} + k_\sigma \qquad (35.11) $$

where $\bar{k}$ is the mean and $k_\sigma$ the standard deviation of the m × n values of the coefficient k. The coefficient k is defined as

$$ k_{ij} = \left( \frac{\Delta q_{ij}}{\Delta q_{i\,max}} \right)^2, \qquad i = 1, \dots, n; \; j = 1, \dots, m \qquad (35.12) $$

where the variables are:

- $\Delta q_{ij}$, the deviation of the ith joint variable from the center of its range, at the jth point in space;
- $\Delta q_{i\,max}$, the maximum permissible deviation of the ith joint variable;
- m, the number of points in space; and
- n, the number of robot joints.

For a six-DoF revolute robot, n = 6 and m = 2 × the number of trajectories. Furthermore, we have introduced a novel fuzzy JLA function which proved better suited for this specific application [50].
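The JLA criterion of Eqs. (35.11) and (35.12) reduces to a few lines of array arithmetic; in the sketch below the joint ranges and configurations are hypothetical:

```python
import numpy as np

def jla(q, q_center, q_max_dev):
    """Joint limit avoidance, Eqs. (35.11)-(35.12): k_ij is the squared
    normalized deviation of joint i from its range center at point j;
    JLA is the mean of all k_ij plus their standard deviation."""
    q = np.asarray(q, float)               # shape (m points, n joints)
    k = ((q - q_center) / q_max_dev) ** 2  # k_ij, Eq. (35.12)
    return k.mean() + k.std()              # Eq. (35.11)

# Hypothetical 6-DoF robot evaluated at m = 4 points (joint angles in rad)
centers = np.zeros(6)
max_dev = np.full(6, np.radians(170.0))
best = jla(np.zeros((4, 6)), centers, max_dev)                  # all centered
worse = jla(np.full((4, 6), np.radians(85.0)), centers, max_dev)
```

A posture with every joint at the middle of its range scores 0, the optimum; half-range deviations score 0.25, and the penalty grows quadratically toward the joint limits.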

35.5.2 RONNA reachability maps

To ensure fast convergence of the position optimization algorithm, realistic initial conditions, determined from the RONNA reachability maps, need to be used. The reachability maps are generated through offline simulation of characteristic trajectories for over 1000 target points in the robot workspace. Characteristic trajectories were randomly generated from a spherical cone representing the patient's intracranial space [50]. The end-effector coordinate ranges (represented in the robot base coordinate system as x, y, z), spherical cone workspace angles (α, β), and robot end-effector roll angles (γ) used for the simulation are shown in Table 35.3. The x, y, and z coordinates were incremented by 50 mm in each iteration, while the angles were incremented by 5 degrees. For every target point (x, y, and z coordinates), 91 trajectories with all possible combinations of α and β were simulated, forming a characteristic spherical cone. The vertex of the spherical cone represents the target point, while the entry points depend on the values of α and β. Each trajectory was additionally simulated with 19 different roll angles (γ), resulting in a total of two million robot configurations. Fig. 35.11 depicts the schematic view of the α and β angles with respect to the patient. The reachability parameter (RP) is then calculated for every target point as

$$ RP = \frac{\text{Number of feasible trajectories}}{\text{Number of attempted trajectories}} \times 100 \qquad (35.13) $$

RP values can be represented as 2D (xy) colored maps for different values of the z coordinate. Since the height of the RONNA platform is fixed, positioning is done in the xy plane. To determine the initial x and y coordinates for the position optimization algorithm, a mean value of RP was determined for every xy coordinate pair over all applied z coordinates. A reachability map with mean values of RP is shown in Fig. 35.12. The highest value of RP on the shown reachability map is at x = 500 mm and y = 0 mm; thus those values are chosen as the initial conditions for the position optimization algorithm.
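The reachability parameter of Eq. (35.13) and the per-(x, y) averaging over z described above can be sketched as follows (the grid values are hypothetical):

```python
import numpy as np

def reachability(feasible, attempted):
    """Eq. (35.13): percentage of feasible trajectories at one target point."""
    return feasible / attempted * 100.0

# rp_grid[z, x, y] holds RP values; a hypothetical 2x2 map with 3 z layers
rp_grid = np.array([[[80.0, 40.0], [60.0, 20.0]],
                    [[90.0, 50.0], [70.0, 30.0]],
                    [[70.0, 30.0], [50.0, 10.0]]])
mean_map = rp_grid.mean(axis=0)   # mean reachability map over all z layers
best = np.unravel_index(np.argmax(mean_map), mean_map.shape)
```

The (x, y) cell with the highest mean RP (here `best`) plays the role of the initial condition handed to the position optimization algorithm.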

TABLE 35.3 Simulation boundaries.

x (mm) | y (mm) | z (mm) | α (degrees) | β (degrees) | γ (degrees)
450 to 850 | −300 to 300 | −150 to 300 | −10 to 80 | −90 to 90 | −90 to 90


FIGURE 35.11 Schematic view of α and β angles with respect to the patient.

FIGURE 35.12 Mean reachability map.

35.5.3 Single robot position planning algorithm

The single robot position planning algorithm utilizes the previously described c and JLA parameters, which are used to maximize robot dexterity. Input parameters for the optimization algorithm are the patient's head height with respect to the robot base (z coordinate), the head orientation in the robot base coordinate frame, and the operation trajectories defined by the surgeon. The planar position (x and y coordinates) and roll angle (γ) of the robot end effector are optimized to ensure high robot dexterity. Minimization of the objective function Q is implemented via the Nelder–Mead algorithm for solving unconstrained nonlinear problems. The objective function is given as

$$ \min_{x, y, \gamma \in \mathbb{R}} Q, \qquad Q = 0.35\,c_{avg} + 0.65\,JLA \qquad (35.14) $$

The weight factors for the $c_{avg}$ and JLA parameters of the objective function are based on simulations. Since c does not incorporate the number of target points in its definition, it must be obtained for every target point (j) individually, after which the $c_j$ values are averaged over all target points ($c_{avg}$) for every iteration of the optimization. The JLA parameter evaluates the robot joint angles for all target points through the mean and standard deviation of the joint angles, expressed by the coefficient k. The effect of the c and JLA parameters on robot posture is shown in Fig. 35.13A and B. Furthermore, task-planning algorithms for increasing robot autonomy are also a topic of research [51,52].
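The minimization of Eq. (35.14) can be sketched with SciPy's Nelder–Mead implementation. The two cost terms below are simple convex stand-ins for illustration; in RONNA, $c_{avg}$ and JLA are evaluated from the robot's normalized Jacobian and joint angles over all planned trajectories:

```python
import numpy as np
from scipy.optimize import minimize

def objective(params):
    """Q = 0.35*c_avg + 0.65*JLA, Eq. (35.14). Both terms here are
    hypothetical smooth surrogates of the base position (x, y) and the
    end-effector roll angle gamma."""
    x, y, gamma = params
    c_avg = (x - 0.5) ** 2 + (y - 0.1) ** 2 + 1.0   # stand-in condition number
    jla = gamma ** 2                                 # stand-in joint-limit term
    return 0.35 * c_avg + 0.65 * jla

res = minimize(objective, x0=[0.0, 0.0, 0.5], method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-9})
```

Nelder–Mead needs no gradients, which fits this problem: in the real system each objective evaluation involves solving the robot's inverse kinematics for every trajectory, so derivatives are not available in closed form.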


FIGURE 35.13 Effect of parameters on robot posture: (A) condition number, (B) joint limit avoidance, and (C) collision-free operation of robots with optimized roll angles.

35.5.4 Position planning for collaborating robots

Since dexterity maximization does not evaluate the interrelation of multiple robots in space, collaborating robots require an additional position optimization parameter. The additional parameter assures collision avoidance between robots operating in the same points in space. Since the geometry of the neurosurgical instruments used does not constrain the roll angle (γ) of the robot end effector, it can be altered to achieve a maximum spatial distance between the robots' (n − 1)th joints. The additional parameter, named the separation distance (e) [46], is calculated as

$$ e = \left( d\left( J_{5,r1}, J_{5,r2} \right) \right)^{-1} \qquad (35.15) $$

Fig. 35.13C shows the two robots operating at the same spatial point with roll angles generated by the optimization algorithm, preventing collision due to the annotated distance (d) between the marked robot joints. The goal of Eq. (35.15) is to minimize the value of the optimization parameter e and thus maximize the distance between the two robot joints. A new objective function Q2, used for optimizing the two robots' relative position, as well as their position with respect to the patient, is defined as

Q2 = 0.1·(c_avg,r1 + c_avg,r2) + 0.6·(JLA_r1 + JLA_r2) + 0.3·e,   min_(x_r1, y_r1, x_r2, y_r2, γ_r1, γ_r2 ∈ R) Q2   (35.16)

The input parameters of the optimization algorithm are the same as when only one robot is used, while additional optimization parameters (x_r2, y_r2, γ_r2), used for positioning the second robot, are added. The weight factors of the objective function parameters are determined based on simulations and provide good optimization results. The optimization algorithm, combined with the presented objective function, assures safe operation without collision between the robots or contact between the robots and the patient.

35.5.5 Robot positioning in physical space

After determining the optimal robot positions, mobile platform positioning with respect to the patient is determined based on the feedback loop provided through the OTS. The OTS gives real-time position feedback by recording markers placed on the robots and the patient, as shown in Section 35.4. Real-time visualization of the two robots' positions and the patient is provided to help the hospital personnel guide the mobile platforms to the calculated optimal positions with regard to the patient. With the online positioning algorithm and real-time position visualization, RONNA can be positioned in the OR very quickly, shortening the overall preparation time for the surgery. An example of robot positioning is shown in Fig. 35.14.

FIGURE 35.14 Positioning of robots in physical space.

35.5.6 Robot localization strategies

Localization strategies are defined through robot approach angles, orientations, and movements during fiducial marker localization in physical space, as well as motions for positioning onto target points [53]. Robots with six or more DoF can approach targets in their workspace with an unlimited number of different orientations, resulting in different joint configurations. Furthermore, each trajectory can be reached by the robot in numerous configurations, that is, orientations around the longitudinal tool axis [50], as shown in Fig. 35.15. With the ability to approach fiducial markers from different angles during localization and the ability to change the angle around the x-axis while positioned in the targeted trajectory, RONNA can implement different robot localization strategies. The goal of robot localization strategies is to decrease errors in patient registration and instrument positioning with respect to their designated target pose. Each strategy uses a set F which contains n localization poses, F = {F_1, …, F_n}, and a set T which contains m target poses, T = {T_1, …, T_m}. As shown in Fig. 35.15A, the position of the robot end-effector flange is determined by six joint states θ. Connected to the end-effector flange is the tool, whose end pose relative to the flange is fixed and defined using a Cartesian translation vector (x, y, z) and a series of three rotations (α, β, γ) according to the ZYX convention. Combining the transformation from the robot base to the end-effector flange with the tool transformation, the robot tool pose is given as the transformation R-TCP (Fig. 35.8). Three localization strategies were developed: the neutral orientation strategy (NOS), the orientation correction strategy (OCS), and the joint displacement minimization strategy (JDMS) [53]. To evaluate positioning performance using the localization strategies, laboratory phantom measurements were performed using different numbers of fiducial markers in the registration procedure. When three, four, and five fiducial markers were used, the application error for NOS was 1.571 ± 0.256, 1.397 ± 0.283, and 1.327 ± 0.274 mm, and for OCS 0.429 ± 0.133, 0.284 ± 0.068, and 0.260 ± 0.076 mm, respectively. The application error for JDMS was 0.493 ± 0.176 mm for four and 0.369 ± 0.160 mm for five fiducial markers.

FIGURE 35.15 Parameters defining robot pose: (A) robot with mounted RONNAstereo; (B) robot with a mounted surgical instrument which uses the same virtual TCP as RONNAstereo. TCP, tool center point. Reproduced with permission from Šuligoj F, Jerbić B, Šekoranja B, Vidaković J, Švaco M. Influence of the localization strategy on the accuracy of a neurosurgical robot system. Trans FAMENA 2018;42:27–38. doi:10.21278/TOF.42203.
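The fixed tool transformation described above (a translation followed by ZYX Euler rotations) can be sketched as follows. The numeric flange pose and tool length are invented for illustration, and the ZYX convention is assumed here to mean R = Rz(α)·Ry(β)·Rx(γ):

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def tool_transform(x, y, z, alpha, beta, gamma):
    """4x4 homogeneous transform of the tool end relative to the robot
    flange: translation (x, y, z) and ZYX Euler rotations, taken here
    as R = Rz(alpha) @ Ry(beta) @ Rx(gamma)."""
    T = np.eye(4)
    T[:3, :3] = rot_z(alpha) @ rot_y(beta) @ rot_x(gamma)
    T[:3, 3] = [x, y, z]
    return T

# Illustrative flange pose (values invented): composing it with the
# fixed tool transform yields the robot tool pose R-TCP.
T_flange = np.eye(4)
T_flange[:3, 3] = [500.0, 0.0, 600.0]                     # mm
T_tool = tool_transform(0.0, 0.0, 120.0, 0.0, 0.0, 0.0)   # 120 mm tool
T_r_tcp = T_flange @ T_tool
```

Composing the flange pose with the fixed tool transform in this order is what makes the TCP stay fixed in space while the tool orientation around the longitudinal axis changes.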

35.6 Autonomous robotic bone drilling

During drilling procedures, mechanical work is applied through a rotating cutting tool, causing plastic deformation and shear failure in the drilled area of the bone. During this process, due to bone deformation and friction, a significant portion of the mechanical energy is converted into thermal energy, causing a transient rise in the temperature of the adjacent bone and soft tissues above normal physiological levels.


The majority of the literature investigating bone drilling uses well-established models previously developed for drilling metal, since an assumption was made that bone behaves similarly to metal when being machined [54]. It should be noted that there are significant differences between drilling metal and drilling bone, one of the most obvious being their respective material properties: metals are dense and ductile, whereas bone tends to be slightly porous and brittle. Also, during bone drilling, temperatures rising above 47°C can cause irreversible osteonecrosis [55]. When drilling bone, two-thirds of the energy is transformed into heat due to friction between the drill bit and the bone, resulting in a significant increase in temperature, with temperatures of only 42°C causing metabolic disturbances in bone tissue. The magnitude of the temperature rise is determined by a number of factors, including drill bit cutting geometry and diameter, rotational speed, feed rate, axial thrust force, initial drill bit temperature, and the use of internal or external cooling. During drilling, the possibility of a surgeon's mistake or negligence in detecting the breakthrough of the cortical bone is always present, and the use of a blunt drill bit may result in breaking through the meninges [56]. Also, an inexperienced surgeon can drill with too much axial force at a slow rotational speed of the drill bit, leading to an increase in bone temperature. To prevent the possibility of human error during drilling, a fully automated drilling procedure was developed for the RONNA assistant robot.

35.6.1 Automated drilling operation

The robotic drilling system RONNAdrill consists of three main parts, as shown in Fig. 35.16. For autonomous drilling, positional data from the OTS are used, and the relative position of the assistant (drilling) robot with respect to the master robot is calculated. The assistant robot positions the medical drill bit 15 mm above the instrument guide, as shown in Fig. 35.16B. Because the global positioning given by the OTS is coarse, the drill bit position with respect to the guide is compensated by a position correction algorithm, which minimizes forces and torques during the insertion of the surgical drill bit and corrects the relative positioning. After the relative position of the drill bit and guide has been successfully corrected, constant axial force regulation is applied. The drilling algorithm utilizes a proportional-integral (PI) force controller and an algorithm to predict the drilling parameters: drilling feed, rotational speed, and axial force. During the drilling procedure a fuzzy algorithm detects bone breakthrough, using the torque imposed on the surgical drill bit and force feedback sensor measurements [57]. As an added safety feature for breakthrough detection, the bone thickness, automatically determined from the CT scan and the planned trajectories, is also used. Immediately after drill breakthrough (Fig. 35.16C) the robot stops the drill and retracts it from the drilling position to a position 50 mm from the guide surface.
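The published system uses a fuzzy algorithm over torque and force signals [57]; as a loose sketch only, breakthrough can be approximated by watching for a simultaneous collapse of both signals (the drop ratio below is an invented threshold, not a RONNA parameter):

```python
def breakthrough_detected(torque_hist, force_hist, drop_ratio=0.5):
    """Crude stand-in for the fuzzy breakthrough detector: flag a
    breakthrough when both the drill torque and the axial force fall
    below drop_ratio times their running maximum. The threshold is
    illustrative only, not taken from the RONNA system."""
    if len(torque_hist) < 2 or len(force_hist) < 2:
        return False
    return (torque_hist[-1] < drop_ratio * max(torque_hist)
            and force_hist[-1] < drop_ratio * max(force_hist))
```

In the real system the decision would be fused with the CT-derived bone thickness along the planned trajectory, so a sensor glitch alone cannot trigger a premature stop.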

FIGURE 35.16 (A) RONNA robotic drilling system: 1. KUKA Agilus KR 900 robot; 2. Medtronic Midas Rex medical drill; 3. ATI Net F/T Gamma SI-65-5 force-torque sensor. (B) Drill positioning. (C) Drill bit entry and drilling operation.

35.6.2 Force controller


To maintain a constant axial force [58] and prevent an undesirable increase in bone temperature, as well as internal drill bit clogging, basic closed-loop regulation is used. The control circuit is shown in Fig. 35.17, where f_d(t) is the desired force, x_d(t) the desired position, f_c(t) the contact force, g_c(s) denotes the transfer function of the PI regulator, and h(s) is the second-order transfer function of the robot controller, with K_0 being the proportional contact compliance.


FIGURE 35.17 Basic control loop force regulation system.

The force controller utilizes a PI [59,60] regulation control law given by the equation

x_d(t) = −k_fp·Δf(t) − k_fi·∫_0^t Δf(τ) dτ,   (35.17)

where k_fp and k_fi are the proportional and integral control gains, while the force tracking error is defined as

Δf(t) = f_c(t) − f_d(t).   (35.18)

Control gains are determined using standard control design techniques by setting the poles of the closed-loop controller. The controller was tuned to a quasiaperiodic step response characterized by an overshoot of approximately 6% (damping ratio ζ = 0.71, k_fi = 20, k_fp = 0.12).
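A discrete-time version of this PI force law might look as follows. The gains are the chapter's values (k_fp = 0.12, k_fi = 20), while the sample time, the contact stiffness K0, and the simplified contact model f_c = K0·x_d are illustrative assumptions, not parameters of the RONNA system:

```python
class PIForceController:
    """Discrete-time form of the PI force law of Eq. (35.17):
    x_d = -k_fp * df - k_fi * integral(df), with df = f_c - f_d
    as in Eq. (35.18). Gains are the chapter's; dt is assumed."""
    def __init__(self, k_fp=0.12, k_fi=20.0, dt=0.001):
        self.k_fp, self.k_fi, self.dt = k_fp, k_fi, dt
        self.integral = 0.0

    def update(self, f_contact, f_desired):
        df = f_contact - f_desired           # force tracking error
        self.integral += df * self.dt        # rectangular integration
        return -self.k_fp * df - self.k_fi * self.integral

# Toy closed loop: the contact force is modeled as proportional to the
# commanded position, f_c = K0 * x_d (K0 invented for illustration).
K0, f_d = 5.0, 20.0                          # compliance, target force (N)
ctrl = PIForceController()
x_d = 0.0
for _ in range(2000):
    f_c = K0 * x_d
    x_d = ctrl.update(f_c, f_d)
```

The integral term drives the force error to zero in steady state, which is what keeps the axial drilling force at the 20 N reference regardless of the (unknown) contact compliance.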

35.6.3 Experimental results

Experimental results were obtained by drilling a homogeneous plastic material with a thickness of 7 mm. To prepare for the procedure, the drill bit distance from the drilling sample is set to 15 mm and the reference force to 20 N. After the start of the operation, the tool is moved at a limited speed until the drill bit touches the drilling sample and the drilling initializes. All parameters are monitored throughout the procedure, and when the hole is drilled through, the bone breakthrough detection algorithm uses the torque in the surgical drill bit and the force feedback parameters to stop the drilling operation. Experimental results are shown in Fig. 35.18. After trials on a plastic specimen, the next step is validating the autonomous drilling procedure experimentally on real human cranial bones. Maintaining a constant axial force during the procedure will prevent an unwanted increase in bone temperature, thus eliminating the risk of irreversible osteonecrosis.

FIGURE 35.18 Experimental results of bone breakthrough detection.

35.7 Error analysis of a neurosurgical robotic system

The positioning accuracy of a neurosurgical robot can be defined as the distance between the planned targets in the image space, defined by the surgeon, and the actual positions reached with the surgical instrument attached to the robot. Neurosurgical robot accuracy can be explained through three different aspects: robot intrinsic accuracy, registration accuracy, and application accuracy. The most relevant measure for surgeons and patients is the overall positioning accuracy, that is, the application accuracy. Significant factors influencing registration accuracy are the fiducial marker type, the fiducial spatial distribution, the number of fiducials used, and the accuracy of the localization method. In the study by Wang and Song [61] the idea of improving target registration accuracy through an optimized distribution of fiducial points is proposed for situations when the planned target trajectory is known in advance. The study provides a practical approach for the surgeon to arrange the fiducial markers in a way which reduces target registration errors (TREs). Similar research also presents different approaches and performance metrics which can be used for fiducial marker placement planning [62,63]. Fitzpatrick showed that a greater fiducial spread leads to greater registration accuracy [64]. Concerning the number of fiducial markers, the study by Perwög et al. [65] showed that a higher number of fiducials used in registration had a positive influence on the accuracy of computer-assisted navigation. The number of fiducials is one of the factors that greatly influence the accuracy of a robotic system as well. Two main approaches are used in robot localization: with the localization device independent of the robot, simultaneously acquiring the position of both the robot and the patient, or with the sensor itself mounted on the robot arm. In RONNA both approaches are utilized: the OTS for global localization and RONNAstereo, mounted on the robot arm, for precise patient localization. When the sensor is mounted on a robot, the robot's positioning performance becomes an important factor in overall system accuracy and is commonly described with repeatability and absolute accuracy. Positional repeatability, a prerequisite for absolute positioning accuracy, is defined as the ability of the robot to return repeatedly to the same arm configuration and end-effector position/orientation. While this is an important characteristic for a robot, in medical robotics absolute accuracy is a more important measure, since the robot is sent to arbitrary target positions and orientations. Absolute positioning accuracy is therefore defined as the ability of a robot to move to the desired position in 3D space with respect to a reference frame [66,67]. It is important to note that, in comparison to repeatability, absolute accuracy error is usually an order of magnitude larger [68].

Positioning errors of a neurosurgical robotic system are manifested in physical space, and the robot's intrinsic accuracy and registration accuracy are the two major factors responsible for the overall application accuracy of a neurosurgical robotic system. The analysis of the RONNA positioning errors follows that of Liu et al. [69]. The application error of the robot system, e_app, regardless of the reference coordinate system, is the distance between the true position of the target, p_true, and the actual position the robot tool has reached, p_reached. RONNAstereo, attached to the robot flange as shown in Fig. 35.19, is used for localizing the fiducial markers and measuring the application error. The magnitude of the robot system application error is given as the sum of the registration error e_reg and the robot intrinsic error e_intr:

e_app = e_reg + e_intr   (35.19)

The point p_reg is the point which is transformed from the image space to the physical space in the registration procedure and can be defined as

p_reg = p_true + e_reg,   (35.20)

where e_reg is the TRE: the distance between the planned image target location and the physical target location after registration. The magnitude of the TRE, or e_reg, depends on the number of fiducial markers, the spatial configuration of the fiducial markers, the location of the target points, and the fiducial localization error (FLE). FLE is present in both the image and the physical space. In image space, it is a result of noise produced by imaging artifacts, the resolution of the reconstructed images produced by the CT or MRI scanner, and the accuracy of the localization method. In physical space, on the other hand, FLE is a consequence of robot positioning errors during localization, errors in RONNAstereo calibration, the resolution of the two cameras, and the algorithm used for calculating the centers of the fiducial markers. The deviation of the position p_reg to which the robot is sent from the position p_reached that the tooltip actually reaches is a result of the robot intrinsic error e_intr:

p_reached = p_reg + e_intr   (35.21)
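To see how FLE propagates into the TRE (e_reg), a small simulation can help. The fiducial layout, noise level, and patient pose below are all invented, and the registration is a standard SVD-based (Kabsch) rigid fit rather than the RONNA implementation:

```python
import numpy as np

def rigid_register(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B,
    computed with the SVD-based (Kabsch) method."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # reflection-safe rotation
    return R, cb - R @ ca

rng = np.random.default_rng(0)
fiducials = rng.uniform(-50, 50, (5, 3))      # image-space fiducials (mm)
target = np.array([10.0, -20.0, 30.0])        # planned target point (mm)

# Hypothetical ground-truth patient pose in physical space.
ang = 0.3
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([100.0, 50.0, -30.0])

# FLE corrupts the measured fiducial positions; registering the noisy
# points leaves a residual error at the target, which is the TRE.
fle = rng.normal(0.0, 0.2, fiducials.shape)   # 0.2 mm FLE, illustrative
measured = fiducials @ R_true.T + t_true + fle
R, t = rigid_register(fiducials, measured)
tre = np.linalg.norm((R @ target + t) - (R_true @ target + t_true))
```

Rerunning the fit with more fiducials or a wider fiducial spread shrinks the TRE on average, which mirrors the findings of [64,65] cited above.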


FIGURE 35.19 Detailed overview of RONNA positioning errors. Reproduced with permission from Šuligoj F, Jerbić B, Šekoranja B, Vidaković J, Švaco M. Influence of the localization strategy on the accuracy of a neurosurgical robot system. Trans FAMENA 2018;42:27–38. doi:10.21278/TOF.42203.

35.7.1 RONNA kinematic and nonkinematic calibration

Although modern robots are well known for their reliability [66], and RONNA utilizes much of the same dependable technology proven in industrial robotics to bring improvements to the field of medicine, it also inherits some of that technology's imperfections. Because the robot is subject to various deviations (machining tolerances, assembly deviations, component limitations, etc.) and to the laws of physics (gravity, material elasticity, thermal dilatations, etc.), the actual robot differs from the original mathematical model, which causes positional errors. One of the challenges in modern robotics, especially for serial-link revolute robots [70], is also complementary to the needs and aims of stereotactic neurosurgery: achieving highly accurate absolute positioning. To counter the aforementioned discrepancies and reduce positional deviations, robot calibration is performed [71], which adjusts the theoretical model's kinematic parameters (joint offsets, i.e., deviations from the robot zero axis position, link dimensions, and angle deviations) or nonkinematic parameters (gravity, gear stiffness, gear ratio, backlash, thermal deviations, gear wear, etc.) [72]. To perform the calibration, a measurement setup is needed to record the end-effector position and/or orientation in multiple different postures. Two approaches are generally used: open loop, where a separate measurement system is applied to measure positions, and closed loop, where only the robot's internal sensors are used for measurement. Throughout the research in this field a wide variety of measuring systems have been used and many different techniques have been developed for robot calibration [73,74]. The results obtained through measurement are then run through optimization algorithms [75–77] to correct the robot parameters and reduce the deviations between the end-effector position and the robot kinematic model.

Various calibration methods have been developed so far, and a special method is currently being developed specifically for calibrating the KUKA Agilus robot for its application in RONNA. This method will use the RONNAstereo dual-camera tool, which is already an integral part of the patient localization system and can also be applied for robot/tool calibration as well as for periodic (semi)automatic precision checks in the hospital where the system is used, without any extra equipment. Since this method is not yet complete, another calibration method (implemented in the commercial product RoboDK) is used to calibrate the robot using a laser tracker, so the influence of the calibrated parameters on the overall system precision can be examined. A calibration model generally consists of four standard key parts, as defined by Chen-Gang et al. [78]: a kinematic model, a measurement setup, an optimization method, and a validation setup. To give better insight into the calibration problem, a calibration procedure based on the working principles of the method implemented in RoboDK will be explained in the following sections. Since this is a commercial product, the exact mechanism is not revealed. A diagram of the calibration process is presented in Fig. 35.20.

FIGURE 35.20 Diagram of the calibration process.

35.7.1.1 Kinematic model

A kinematic model is a mathematical description of the robot: its functional dimensions and DoF. It describes the robot's workspace, its positional capabilities, and its constraints. The notation most often used for describing robot kinematics, and the one used in this method, is the modified Denavit-Hartenberg (MDH) notation. It forms a transformation matrix through a set of four parameters, describing how each subsequent joint relates to the previous one using a series of transformations around and along the x and z axes for each of the joints J_i:

T_i^(i−1) = Rot(x, α_i) · Trans(a_i, 0, 0) · Rot(z, θ_i) · Trans(0, 0, d_i)   (35.22)

Following the MDH transformation procedure along the kinematic chain of the KUKA Agilus KR6 R900 sixx robot, a kinematic parameter set is formed for each of the robot's links/joints. These transformations are combined into a 4 × 4 transformation matrix for every J_i:

T_i^(i−1) = T(J_i) =
[ cθ_i        −sθ_i        0       a_i      ]
[ sθ_i·cα_i   cθ_i·cα_i   −sα_i   −d_i·sα_i ]
[ sθ_i·sα_i   cθ_i·sα_i    cα_i    d_i·cα_i ]
[ 0            0            0       1       ],
where sθ_i = sin θ_i, cθ_i = cos θ_i, sα_i = sin α_i, cα_i = cos α_i.   (35.23)

These matrices are sequentially multiplied, resulting in a transformation matrix T_6^0 defining the transformation of all six robot axes for the given geometrical parameters and joint angles. To fully define the robot's kinematic chain, the robot base reference frame (T_base) and the robot flange reference frame (T_tool) are added at the beginning and end of the kinematic chain, describing the robot base and tool transformations. Multiplying the solution with the base and tool transformation matrices, the complete kinematic chain is defined as the matrix T_robot:

T_robot = T_base · T_6^0 · T_tool = T_base · T_1^0 · T_2^1 · T_3^2 · T_4^3 · T_5^4 · T_6^5 · T_tool   (35.24)

Since the precision of a robot is affected not only by the precision of its build but also by physical influences, nonkinematic calibration of several influencing factors is sometimes also performed. In this case only the stiffness of the joints is considered, accounting for the mass of the robot links and the attached tool causing additional torsional momentum on the joints under the influence of gravity. A mathematical model describing this influence is formed in the robot model, and the stiffness parameter s = [s_1, s_2, …, s_6]^T, set to zero by default, is added to the parameter table. Combining the kinematic and nonkinematic parameters, the complete robot model is defined as presented in Table 35.4.


TABLE 35.4 KUKA Agilus KR6 R900 sixx modified Denavit-Hartenberg parameters with added stiffness.

i | α_i (°) | a_i (mm) | θ_i (°)           | d_i (mm) | s_i
1 |    0    |    0     | −θ_1 + Δθ_1       |   400    |  0
2 |  −90    |   25     |  θ_2 + Δθ_2       |     0    |  0
3 |    0    |  455     |  θ_3 − 90 + Δθ_3  |     0    |  0
4 |  −90    |   35     | −θ_4 + Δθ_4       |   420    |  0
5 |   90    |    0     |  θ_5 + Δθ_5       |     0    |  0
6 |  −90    |    0     | −θ_6 + 180 + Δθ_6 |    80    |  0
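A minimal sketch of forward kinematics built from Eq. (35.22) and the nominal parameters of Table 35.4 (with all Δθ_i and stiffness terms set to zero) could read:

```python
import numpy as np

def mdh(alpha, a, theta, d):
    """Modified DH link transform of Eq. (35.22):
    Rot(x, alpha) · Trans(a, 0, 0) · Rot(z, theta) · Trans(0, 0, d)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([[ct,      -st,      0.0,  a],
                     [st * ca,  ct * ca, -sa, -d * sa],
                     [st * sa,  ct * sa,  ca,  d * ca],
                     [0.0,      0.0,      0.0, 1.0]])

def fk(q):
    """Nominal base-to-flange forward kinematics of the KUKA Agilus
    KR6 R900, built from Table 35.4 (all delta-theta and stiffness
    terms set to zero); q holds the six joint angles in radians."""
    r = np.deg2rad
    rows = [(0.0,    0.0, -q[0],          400.0),
            (-90.0, 25.0,  q[1],            0.0),
            (0.0,  455.0,  q[2] - r(90),    0.0),
            (-90.0, 35.0, -q[3],          420.0),
            (90.0,   0.0,  q[4],            0.0),
            (-90.0,  0.0, -q[5] + r(180),  80.0)]
    T = np.eye(4)
    for alpha, a, theta, d in rows:
        T = T @ mdh(r(alpha), a, theta, d)
    return T
```

At the all-zero joint configuration this model places the flange at (980, 0, 435) mm in the base frame, consistent with the link dimensions listed in the table (25 + 455 + 420 + 80 mm horizontally; 400 + 35 mm vertically).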

FIGURE 35.21 Robot calibration measurement setup: robot and laser tracker.

35.7.1.2 Measurement setup

A crucial prerequisite for robot calibration is a suitable measurement system that provides the user with adequate data regarding the actual robot position. Ideally, the absolute position of the robot tool with respect to its base frame is acquired; however, acquiring this type of data often requires expensive measurement systems. Alternatively, less expensive measurement systems can be used that combine devices such as cameras, dial gauges, or probes with precisely measured reference models or simple reference spheres. Calibration methods using additional external measuring systems (besides the robot's internal sensors) are called open-loop calibrations. Laser trackers are the measurement system most commonly used for robot calibration: they are portable, very precise, simple to use, and enable absolute measurement of the entire robot workspace [79–81]; however, they are also very expensive and often available only in large and/or specialized companies. For the initial calibrations, as performed in the procedure described in this text, a laser tracker will be used for an open-loop calibration to measure absolute positions of the TCP, t_meas,j = [x_m, y_m, z_m]^T, with j being the number of the robot configuration. Joint states measured internally by the encoders (θ_1, θ_2, …, θ_6) will also be recorded for each of these robot configurations and saved as inputs for the parameter optimization procedure. The measurement setup is shown in Fig. 35.21.

A different approach will be taken with RONNAstereo. Although the stereo cameras are also an external measurement system, their task is to measure the deviation of the TCP from the reference marker and provide the robot with the position corrections necessary to ensure the TCP is positioned in the same location for every tool orientation. This way the cameras are used to constrain the TCP position rather than to measure it, so the procedure is considered a closed-loop calibration: only internal joint measurement information is collected. RONNAstereo is used along with a calibration phantom equipped with retroreflective reference markers to record robot joint values in multiple tool orientations around each of the reference markers, while the camera correction ensures the TCP stays centered on the marker with a deviation of under 0.05 mm for each measurement. The reference marker positions (i.e., their relative distances) were precisely measured on a coordinate measuring machine, and this information, combined with the recorded joint data and the initial parameters, will be used as input for the parameter fitting algorithm, which is yet to be implemented.

35.7.1.3 Optimization method

After acquiring a sufficient number of pose measurements, an optimization algorithm is applied to perform parameter correction. This algorithm iteratively adjusts the parameters using the measured poses to identify the parameter set which achieves the lowest deviation from the measurement data and is therefore closest to describing the actual robot parameters. For optimization, an analytic linear least squares estimation is presented, as shown by Nubiola and Bonev [82]. The direct kinematic function of the robot (a function of the geometric parameters, the stiffness parameter, and the joint rotations) is used to describe how the end-effector position is affected by the parameters [83]:

T_robot = [ R  t_TCP ; 0  1 ]   (35.25)

p_i = [α_i, a_i, Δθ_i, d_i, s_i] for i = 1…6

p = [p_1, p_2, …, p_6]^T   (35.26)

q = [θ_1, θ_2, …, θ_6]^T   (35.27)

For each robot configuration j of the total of k configurations measured using the laser tracker, the t_TCP,j value is calculated with the initial robot parameters p and the joint vector q_j defining each configuration:

t_TCP,j = [x_j, y_j, z_j]^T = f(p, q_j)   (35.28)

Applying the laser tracker measurement data for the recorded joint states, the TCP deviation ΔTCP_j = [Δx_j, Δy_j, Δz_j]^T is calculated as the difference between t_meas,j and t_TCP,j for each of the k measured configurations, forming the ΔTCP vector:

ΔTCP = [ΔTCP_1, ΔTCP_2, …, ΔTCP_k]^T   (35.29)

Using the vector f_TCP, containing the kinematics formulation for the x, y, and z coordinates, and the parameter vector p, with all the parameters to be optimized, the Jacobian matrix is calculated:

f_TCP,j = [x_j(p, q_j), y_j(p, q_j), z_j(p, q_j)]^T

f_TCP = [f_TCP,1, f_TCP,2, …, f_TCP,k]^T   (35.30)

J = f(f_TCP, p)

Multiplying the Jacobian pseudoinverse J^+ with the deviation data ΔTCP, a set of parameter corrections Δp_l in vector form is obtained. After each step these corrections are added to the parameters in the kinematic model, altering them gradually with each iteration l and applying the alterations to the calculations in the next iteration, until the overall value of ΔTCP is minimized below the defined deviation:

J^+ = (J^T J)^(−1) J^T   (35.31)

Δp_l = J^+(f_TCP, p_l)·ΔTCP_l   (35.32)

p_(l+1) = p_l + Δp_l   (35.33)

35.7.1.4 Validation

To evaluate the parameter optimization results, validation needs to be performed by applying the parameters to the robot. The optimized parameters are used to calculate position corrections, that is, joint angle corrections, for the desired tool position, which is then measured and compared to a reference value. If applicable, both the position and orientation of the tool are measured to acquire complete pose information, but due to measurement equipment limitations most often only the TCP position is measured. Whether local or global workspace measurement is performed for open-loop calibration, or only the joint encoders are used in a closed-loop method, validation throughout the entire robot workspace is advised where applicable. The measurement device of choice for this task, due to its precision and flexibility, is the laser tracker, capable of measuring tool positions (semi)automatically throughout the entire robot workspace, thereby enabling a thorough examination of the calibrated parameters. For further reading on this topic, the works of Nubiola and Bonev [82], Joubair and Bonev [84], and Gaudreault et al. [85] are recommended, as well as the work of Chen-Gang et al. [78] for an overview of the field in recent years. Although robot precision and proper calibration are very important for ensuring reliable positioning for surgical procedures, calibration is still only a single element of the system. Its characteristics need to be evaluated separately to examine the influence kinematic and nonkinematic errors have on robot positioning (and to eliminate them as much as possible), but the final measure of precision is given in phantom trials, and even more so in the results of live surgeries.
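The parameter-correction loop of Eqs. (35.28)–(35.33) can be illustrated on a toy problem. Here a planar two-link arm stands in for the full MDH model, with the link lengths as the only parameters and a numeric Jacobian in place of the analytic one; all values are invented for illustration:

```python
import numpy as np

def f_tcp(p, q):
    """Toy stand-in for the direct kinematic function of Eq. (35.28):
    a planar two-link arm with parameters p = [L1, L2] and joint
    angles q = [t1, t2]."""
    L1, L2 = p
    t1, t2 = q
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                     L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

# "Measured" TCP positions, generated with the true (unknown) lengths.
p_true = np.array([455.0, 420.0])
configs = [np.array(q) for q in [(0.1, 0.4), (0.8, -0.5),
                                 (1.2, 0.9), (-0.4, 1.1)]]
measured = [f_tcp(p_true, q) for q in configs]

p = np.array([450.0, 425.0])    # imperfect nominal parameters
eps = 1e-6
for _ in range(10):             # iterate Eqs. (35.29)-(35.33)
    # stacked deviation vector, Eq. (35.29)
    delta = np.concatenate([m - f_tcp(p, q)
                            for m, q in zip(measured, configs)])
    # numeric Jacobian of the stacked TCP vector w.r.t. the parameters
    J = np.zeros((delta.size, p.size))
    for i in range(p.size):
        dp = np.zeros(p.size)
        dp[i] = eps
        J[:, i] = np.concatenate([(f_tcp(p + dp, q) - f_tcp(p, q)) / eps
                                  for q in configs])
    p = p + np.linalg.pinv(J) @ delta   # dp = J+ * dTCP;  p <- p + dp
```

The pseudoinverse update is exactly the least-squares step of Eqs. (35.31)–(35.33); for the real robot the parameter vector holds all MDH and stiffness terms and the measurements come from the laser tracker.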

35.8 Future development and challenges

There are numerous advantages to introducing robotic technology into the field of neurosurgery. Endurance is the first benefit of robotics, given the results of several studies demonstrating how surgeons suffer from muscle fatigue during operations as a result of the procedure duration and the need to hold surgical instruments at specific angles [86]. By using a robotic arm to steady the surgical instrument, the problems of fatigue and hand tremor can be eliminated. Robots are also able to extend the visual and manual dexterity of neurosurgeons beyond their limits, as pointed out by Eljamel [23], since they are capable of working through very narrow and long surgical corridors, an ability extremely beneficial for neurosurgical procedures. RONNA G4 is a novel neuronavigation robotic system, intended as a sophisticated tool to be used by neurosurgeons for intraoperative planning and accurate frameless neuronavigation. Some of the advantages of frameless over frame-based techniques are the ease of use, less patient discomfort, and more flexible preoperative planning, with the ability to separate the imaging from the surgical procedure, providing ample time for detailed image analysis and trajectory planning. One of the primary goals in the further development of RONNA is markerless patient localization. During the surgery, a 3D scan of the patient's head, fixed in the OR, will be obtained and correlated with a preoperative reference image for registration. Registration in this case is difficult because the form of the human face is not rigid, the skin has variable thickness, and the face is often swollen and/or deformed when the patient is lying on the operating table. For this reason a solution that can achieve submillimeter accuracy using markerless localization has still not been developed [18,87–89]. The algorithm most often used for registration between two point clouds is the iterative closest point (ICP) algorithm.

Although widely used, there is a concern that, in this specific case, ICP would not produce satisfactory results due to differences between the two point clouds being compared. The goal is to develop an algorithm capable of learning which parts of the human face can be used for registration and which should be left out due to susceptibility to large deformations. An approach using neural networks is planned for solving this challenging problem. In cooperation with the University Hospital Dubrava, sets of scans made on real patients are being gathered for use in developing and validating the algorithm. The device used for head feature scanning is a highly accurate stereovision camera (IDS Ensenso N35) with a z-depth resolution of 0.14 mm.
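A minimal point-to-point ICP, of the kind discussed above, can be sketched as follows. The synthetic grid cloud and small rigid displacement are illustrative, and a real facial-registration pipeline would need robust matching and outlier rejection:

```python
import numpy as np

def best_fit(A, B):
    """Rigid transform (R, t) minimizing ||R A + t - B|| (Kabsch/SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=20):
    """Plain point-to-point ICP: match each source point to its nearest
    destination point (brute force), solve the best rigid fit for the
    matched pairs, apply it, and repeat. Returns the accumulated
    transform mapping src onto dst."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        idx = np.argmin(d2, axis=1)           # nearest-neighbor matches
        R, t = best_fit(cur, dst[idx])
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

# Synthetic example: a regular grid "reference" cloud and a slightly
# rotated and translated copy of it (values invented for illustration).
dst = 0.5 * np.array([[i, j, k] for i in range(4)
                      for j in range(4) for k in range(4)], dtype=float)
ang = 0.02
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
src = dst @ R_true.T + np.array([0.01, -0.005, 0.008])
R_est, t_est = icp(src, dst)
```

Because ICP only trusts nearest-neighbor correspondences, nonrigid facial deformation violates its core assumption, which is exactly the concern raised above and the motivation for a learned region-weighting approach.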

Acknowledgments

The authors would like to acknowledge the support of the Croatian Scientific Foundation through the research project ACRON (a new concept of Applied Cognitive RObotics in clinical Neuroscience) and the European Regional Development Fund through the project CRTA (Regional Center of Excellence for Robotic Technology). The authors would also like to acknowledge Fadi Almahariq, M.D., and Dominik Romic, M.D., for their help in the clinical trials.

References

[1] Faria C, Erlhagen W, Rito M, De Momi E, Ferrigno G, Bicho E. Review of robotic technology for stereotactic neurosurgery. IEEE Rev Biomed Eng 2015;8:125–37. Available from: https://doi.org/10.1109/RBME.2015.2428305.

RONNA G4—Robotic Neuronavigation Chapter | 35

[2] Marcus HJ, Seneci CA, Payne CJ, Nandi D, Darzi A, Yang G-Z. Robotics in keyhole transcranial endoscope-assisted microsurgery: a critical review of existing systems and proposed specifications for new robotic platforms. Oper Neurosurg 2014;10:84–96. Available from: https://doi.org/10.1227/NEU.0000000000000123.
[3] Dlaka D, Švaco M, Chudy D, Jerbić B, Sekoranja B, Šuligoj F, et al. Brain biopsy performed with the RONNA G3 system: a case study on using a novel robotic navigation device for stereotactic neurosurgery. Int J Med Rob Comput Assisted Surg 2017;17. Available from: https://doi.org/10.1002/rcs.1884.
[4] Cardinale F, Cossu M, Castana L, Casaceli G, Schiariti MP, Miserocchi A, et al. Stereoelectroencephalography: surgical methodology, safety, and stereotactic application accuracy in 500 procedures. Neurosurgery 2013;72:353–66. Available from: https://doi.org/10.1227/NEU.0b013e31827d1161.
[5] Smith JA, Jivraj J, Wong R, Yang V. 30 years of neurosurgical robots: review and trends for manipulators and associated navigational systems. Ann Biomed Eng 2016;44:836–46. Available from: https://doi.org/10.1007/s10439-015-1475-4.
[6] Gomes P. Surgical robotics: reviewing the past, analysing the present, imagining the future. Rob Comput Integr Manuf 2011;27:261–6. Available from: https://doi.org/10.1016/j.rcim.2010.06.009.
[7] Mattei TA, Rodriguez AH, Sambhara D, Mendel E. Current state-of-the-art and future perspectives of robotic technology in neurosurgery. Neurosurg Rev 2014;37:357–66. Available from: https://doi.org/10.1007/s10143-014-0540-z.
[8] Kwoh YS, Hou J, Jonckheere EA, Hayati S. A robot with improved absolute positioning accuracy for CT guided stereotactic brain surgery. IEEE Trans Biomed Eng 1988;35:153–60. Available from: https://doi.org/10.1109/10.1354.
[9] Švaco M, Sekoranja B, Šuligoj F, Vidaković J, Jerbić B, Chudy D. A novel robotic neuronavigation system: RONNA G3. Strojniški vestnik—J Mech Eng 2017;63. Available from: https://doi.org/10.5545/sv-jme.2017.4649.
[10] Lefranc M, Peltier J. Evaluation of the ROSA Spine robot for minimally invasive surgical procedures. Expert Rev Med Devices 2016;13:899–906. Available from: https://doi.org/10.1080/17434440.2016.1236680.
[11] Chenin L, Peltier J, Lefranc M. Minimally invasive transforaminal lumbar interbody fusion with the ROSA Spine robot and intraoperative flat-panel CT guidance. Acta Neurochir (Wien) 2016;158:1125–8. Available from: https://doi.org/10.1007/s00701-016-2799-z.
[12] Patel D. Dr. Patel performs groundbreaking robotic surgery in Switzerland. Available from: http://www.thespinehealthinstitute.com/news-room/health-blog-news/dr-patel-performs-groundbreaking-robotic-surgery-in-switzerland. Accessed on 03.06.2016.
[13] Tian W, Wang H, Liu Y. Robot-assisted anterior odontoid screw fixation: a case report. Orthop Surg 2016;8:400–4. Available from: https://doi.org/10.1111/os.12266.
[14] Tian W. Robot-assisted posterior C1–2 transarticular screw fixation for atlantoaxial instability: a case report. Spine 2016;41:B2–5. Available from: https://doi.org/10.1097/BRS.0000000000001674.
[15] Faria C, Vale C, Rito M, Erlhagen W, Bicho E. A simple control approach for stereotactic neurosurgery using a robotic manipulator. In: Garrido P, Soares F, Moreira AP, editors. CONTROLO 2016. Cham: Springer International Publishing; 2017. p. 397–408.
[16] Beretta E, De Momi E, Rodriguez y Baena F, Ferrigno G. Adaptive hands-on control for reaching and targeting tasks in surgery. Int J Adv Rob Syst 2015;12:50. Available from: https://doi.org/10.5772/60130.
[17] Jerbić B, Nikolić G, Chudy D, Švaco M, Sekoranja B. Robotic application in neurosurgery using intelligent visual and haptic interaction. Int J Simul Model 2015;14:71–84. Available from: https://doi.org/10.2507/IJSIMM14(1)7.290.
[18] Lefranc M, Capel C, Pruvot AS, Fichten A, Desenclos C, Toussaint P, et al. The impact of the reference imaging modality, registration method and intraoperative flat-panel computed tomography on the accuracy of the ROSA stereotactic robot. Stereotact Funct Neurosurg 2014;92:242–50. Available from: https://doi.org/10.1159/000362936.
[19] González-Martínez J, Bulacio J, Thompson S, Gale J, Smithason S, Najm I, et al. Technique, results, and complications related to robot-assisted stereoelectroencephalography. Neurosurgery 2016;78:169–80. Available from: https://doi.org/10.1227/NEU.0000000000001034.
[20] Comparetti MD, Vaccarella A, Dyagilev I, Shoham M, Ferrigno G, De Momi E. Accurate multi-robot targeting for keyhole neurosurgery based on external sensor monitoring. Proc Inst Mech Eng, H: J Eng Med 2012;226:347–59. Available from: https://doi.org/10.1177/0954411912442120.
[21] Tovar-Arriaga S, Tita R, Pedraza-Ortega JC, Gorrostieta E, Kalender WA. Development of a robotic FD-CT-guided navigation system for needle placement: preliminary accuracy tests. Int J Med Rob Comput Assisted Surg 2011;7:225–36. Available from: https://doi.org/10.1002/rcs.393.
[22] Deacon G, Harwood A, Holdback J, Maiwand D, Pearce M, Reid I, et al. The Pathfinder image-guided surgical robot. Proc Inst Mech Eng, H: J Eng Med 2010;224:691–713. Available from: https://doi.org/10.1243/09544119JEIM617.
[23] Eljamel MS. Validation of the PathFinder neurosurgical robot using a phantom. Int J Med Rob Comput Assisted Surg 2007;3:372–7. Available from: https://doi.org/10.1002/rcs.153.
[24] Eggers G, Wirtz C, Korb W, Engel D, Schorr O, Kotrikova B, et al. Robot-assisted craniotomy. MIN—Minim Invasive Neurosurg 2005;48:154–8. Available from: https://doi.org/10.1055/s-2005-870908.
[25] Burkart A, Debski RE, McMahon PJ, Rudy T, Fu FH, Musahl V, et al. Precision of ACL tunnel placement using traditional and robotic techniques. Comput Aided Surg 2001;6:270–8. Available from: https://doi.org/10.1002/igs.10013.
[26] Kantelhardt S, Amr N, Giese A. Navigation and robot-aided surgery in the spine: historical review and state of the art. Rob Surg: Res Rev 2014;19. Available from: https://doi.org/10.2147/RSRR.S54390.
[27] Tan A, Ashrafian H, Scott AJ, Mason SE, Harling L, Athanasiou T, et al. Robotic surgery: disruptive innovation or unfulfilled promise? A systematic review and meta-analysis of the first 30 years. Surg Endosc 2016;30:4330–52. Available from: https://doi.org/10.1007/s00464-016-4752-x.
[28] Vidakovic J, Jerbic B, Suligoj F, Svaco M, Sekoranja B. Simulation for robotic stereotactic neurosurgery. In: Katalinic B, editor. DAAAM proceedings. 1st ed. DAAAM International Vienna; 2016. p. 562–8.


[29] Sekoranja B, Jerbić B, Šuligoj F. Virtual surface for human–robot interaction. Trans FAMENA 2015;39:53–64.
[30] Švaco M, Jerbić B, Stiperski I, Dlaka D, Vidaković J, Sekoranja B, et al. T-phantom: a new phantom design for neurosurgical robotics. Proceedings of the 27th DAAAM international symposium. Mostar, BiH: DAAAM International Vienna; 2016. p. 266–70.
[31] Šuligoj F, Švaco M, Jerbić B, Sekoranja B, Vidaković J. Automated marker localization in the planning phase of robotic neurosurgery. IEEE Access 2017;5:12265–74. Available from: https://doi.org/10.1109/ACCESS.2017.2718621.
[32] Brown LG. A survey of image registration techniques. ACM Comput Surveys 1992;24:325–76. Available from: https://doi.org/10.1145/146370.146374.
[33] Li H, Hartley R. The 3D-3D registration problem revisited. IEEE; 2007. p. 1–8.
[34] Heard WB. Rigid body mechanics: mathematics, physics and applications. Weinheim: Wiley-VCH; 2006.
[35] Tofts P. Quantitative MRI of the brain: measuring changes caused by disease. Chichester, West Sussex; Hoboken, NJ: Wiley; 2003.
[36] Enqvist O. Correspondence problems in geometric vision. Centre for Mathematical Sciences, Mathematics, Lund University; 2009.
[37] Zimmer HL. Correspondence problems in computer vision. Faculty of Mathematics and Computer Science, Saarland University; 2011.
[38] Parra Bustos A, Chin T-J. Guaranteed outlier removal for point cloud registration with correspondences. IEEE Trans Pattern Anal Mach Intell 2017. Available from: https://doi.org/10.1109/TPAMI.2017.2773482.
[39] Šuligoj F, Jerbić B, Švaco M, Sekoranja B. Fully automated point-based robotic neurosurgical patient registration procedure. Int J Simul Model 2018;17:458–71. Available from: https://doi.org/10.2507/IJSIMM17(3)442.
[40] Sorkine-Hornung O, Rabinovich M. Least-squares rigid motion using SVD. Available from: https://igl.ethz.ch/projects/ARAP/svd_rot.pdf. Accessed on 19.6.2018.
[41] Yin S, Ren Y, Zhu J, Yang S, Ye S. A vision-based self-calibration method for robotic visual inspection systems. Sensors 2013;13:16565–82. Available from: https://doi.org/10.3390/s131216565.
[42] Yaniv Z. Rigid registration. In: Peters T, Cleary K, editors. Image-guided interventions. Boston, MA: Springer; 2008. p. 159–92.
[43] Yuen HK, Princen J, Illingworth J, Kittler J. A comparative study of Hough transform methods for circle finding. Alvey Vision Club; 1989. p. 29.1–6.
[44] Cuevas E, Wario F, Zaldivar D, Pérez-Cisneros M. Circle detection on images using learning automata. IET Comput Vision 2012;6:121. Available from: https://doi.org/10.1049/iet-cvi.2010.0226.
[45] Matas J, Chum O, Urban M, Pajdla T. Robust wide-baseline stereo from maximally stable extremal regions. Image Vision Comput 2004;22:761–7. Available from: https://doi.org/10.1016/j.imavis.2004.02.006.
[46] Vidaković J, Jerbić B, Švaco M, Šuligoj F, Sekoranja B. Position planning for collaborating robots and its application in neurosurgery. Tehnički vjesnik—Tech Gazette 2017;24. Available from: https://doi.org/10.17559/TV-20170213110534.
[47] Khan WA, Angeles J. The kinetostatic optimization of robotic manipulators: the inverse and the direct problems. J Mech Des 2006;128:168. Available from: https://doi.org/10.1115/1.2120808.
[48] Dumas C, Caro S, Garnier S, Furet B. Joint stiffness identification of six-revolute industrial serial robots. Rob Comput Integr Manuf 2011;27:881–8. Available from: https://doi.org/10.1016/j.rcim.2011.02.003.
[49] Pamanes GJA, Zeghloul S. Optimal placement of robotic manipulators using multiple kinematic criteria. In: Robotics and automation. Proceedings, 1991 IEEE international conference on. IEEE; 1991. p. 933–8.
[50] Švaco M, Koren P, Jerbić B, Vidaković J, Sekoranja B, Šuligoj F. Validation of three KUKA Agilus robots for application in neurosurgery. In: Ferraresi C, Quaglia G, editors. Advances in service and industrial robotics. Cham: Springer International Publishing; 2018. p. 996–1006.
[51] Švaco M, Jerbić B, Sekoranja B. Task planning based on the interpretation of spatial structures. Tehnički vjesnik—Tech Gazette 2017;24. Available from: https://doi.org/10.17559/TV-20160118150332.
[52] Švaco M, Jerbić B, Šuligoj F. Autonomous robot learning model based on visual interpretation of spatial structures. Trans FAMENA 2014;38:13–28.
[53] Šuligoj F, Jerbić B, Sekoranja B, Vidaković J, Švaco M. Influence of the localization strategy on the accuracy of a neurosurgical robot system. Trans FAMENA 2018;42:27–38. Available from: https://doi.org/10.21278/TOF.42203.
[54] Lee J, Gozen BA, Ozdoganlar OB. Modeling and experimentation of bone drilling forces. J Biomech 2012;45:1076–83. Available from: https://doi.org/10.1016/j.jbiomech.2011.12.012.
[55] Augustin G, Davila S, Udilljak T, Staroveski T, Brezak D, Babic S. Temperature changes during cortical bone drilling with a newly designed step drill and an internally cooled drill. Int Orthop 2012;36:1449–56. Available from: https://doi.org/10.1007/s00264-012-1491-z.
[56] Alajmo G, Schlegel U, Gueorguiev B, Matthys R, Gautier E. Plunging when drilling: effect of using blunt drill bits. J Orthop Trauma 2012;26:482–7. Available from: https://doi.org/10.1097/BOT.0b013e3182336ec3.
[57] Lee W-Y, Shih C-L, Lee S-T. Force control and breakthrough detection of a bone-drilling system. IEEE/ASME Trans Mechatron 2004;9:20–9. Available from: https://doi.org/10.1109/TMECH.2004.823850.
[58] Švaco M, Vitez N, Sekoranja B, Šuligoj F. Tuning of parameters for robotic contouring based on the evaluation of force dissipation. Trans FAMENA 2018;42. Available from: https://doi.org/10.21278/TOF.42302.
[59] De Schutter J, Van Brussel H. Compliant robot motion II. A control approach based on external control loops. Int J Rob Res 1988;7:18–33. Available from: https://doi.org/10.1177/027836498800700402.
[60] Roy J, Whitcomb LL. Adaptive force control of position/velocity controlled robots: theory and experiment. IEEE Trans Rob Autom 2002;18:121–37. Available from: https://doi.org/10.1109/TRA.2002.999642.
[61] Wang M, Song Z. Improving target registration accuracy in image-guided neurosurgery by optimizing the distribution of fiducial points. Int J Med Rob Comput Assisted Surg 2009;5:26–31. Available from: https://doi.org/10.1002/rcs.227.


[62] Shamir RR, Joskowicz L, Shoshan Y. Fiducial optimization for minimal target registration error in image-guided neurosurgery. IEEE Trans Med Imaging 2012;31:725–37. Available from: https://doi.org/10.1109/TMI.2011.2175939.
[63] Franaszek M, Cheok GS. Selection of fiducial locations and performance metrics for point-based rigid-body registration. Precis Eng 2017;47:362–74. Available from: https://doi.org/10.1016/j.precisioneng.2016.09.010.
[64] Fitzpatrick JM. The role of registration in accurate surgical guidance. Proc Inst Mech Eng, H: J Eng Med 2010;224:607–22. Available from: https://doi.org/10.1243/09544119JEIM589.
[65] Perwög M, Bardosi Z, Freysinger W. Experimental validation of predicted application accuracies for computer-assisted (CAS) intraoperative navigation with paired-point registration. Int J Comput Assisted Radiol Surg 2017. Available from: https://doi.org/10.1007/s11548-017-1653-y.
[66] Shiakolas PS, Conrad KL, Yih TC. On the accuracy, repeatability, and degree of influence of kinematics parameters for industrial robots. Int J Model Simul 2002;22:245–54. Available from: https://doi.org/10.1080/02286203.2002.11442246.
[67] Nubiola A. Contribution to improving the accuracy of serial robots [PhD thesis]. Montreal: École de technologie supérieure; 2014. Available from: http://espace.etsmtl.ca/id/eprint/1432.
[68] Andrew Liou YH, Lin PP, Lindeke RR, Chiang HD. Tolerance specification of robot kinematic parameters using an experimental design technique—the Taguchi method. Rob Comput Integr Manuf 1993;10:199–207. Available from: https://doi.org/10.1016/0736-5845(93)90055-O.
[69] Liu J, Zhang Y, Li Z. Improving the positioning accuracy of a neurosurgical robot system. IEEE/ASME Trans Mechatron 2007;12:527–33. Available from: https://doi.org/10.1109/TMECH.2007.905694.
[70] Joubair A, Zhao LF, Bigras P, Bonev I. Absolute accuracy analysis and improvement of a hybrid 6-DOF medical robot. Ind Robot 2015;42:44–53. Available from: https://doi.org/10.1108/IR-09-2014-0396.
[71] Heinig M, Hofmann UG, Schlaefer A. Calibration of the motor-assisted robotic stereotaxy system: MARS. Int J Comput Assisted Radiol Surg 2012;7:911–20. Available from: https://doi.org/10.1007/s11548-012-0676-7.
[72] Roth Z, Mooring B, Ravani B. An overview of robot calibration. IEEE J Rob Autom 1987;3:377–85. Available from: https://doi.org/10.1109/JRA.1987.1087124.
[73] Meng Y, Zhuang H. Self-calibration of camera-equipped robot manipulators. Int J Rob Res 2001;20:909–21. Available from: https://doi.org/10.1177/02783640122068182.
[74] Boochs F, Schutze R, Simon C, Marzani F, Wirth H, Meier J. Increasing the accuracy of untaught robot positions by means of a multi-camera system. IEEE; 2010. p. 1–9.
[75] Nubiola A, Bonev IA. Absolute robot calibration with a single telescoping ballbar. Precis Eng 2014;38:472–80. Available from: https://doi.org/10.1016/j.precisioneng.2014.01.001.
[76] Ma L, Bazzoli P, Sammons PM, Landers RG, Bristow DA. Modeling and calibration of high-order joint-dependent kinematic errors for industrial robots. Rob Comput Integr Manuf 2018;50:153–67. Available from: https://doi.org/10.1016/j.rcim.2017.09.006.
[77] Švaco M, Sekoranja B, Šuligoj F, Jerbić B. Calibration of an industrial robot using a stereo vision system. Procedia Eng 2014;69:459–63. Available from: https://doi.org/10.1016/j.proeng.2014.03.012.
[78] Chen-Gang, Li-Tong, Chu-Ming, Xuan J-Q, Xu S-H. Review on kinematics calibration technology of serial robots. Int J Precis Eng Manuf 2014;15:1759–74. Available from: https://doi.org/10.1007/s12541-014-0528-1.
[79] Nubiola A, Slamani M, Joubair A, Bonev IA. Comparison of two calibration methods for a small industrial robot based on an optical CMM and a laser tracker. Robotica 2014;32:447–66. Available from: https://doi.org/10.1017/S0263574713000714.
[80] Wu Y, Klimchik A, Caro S, Furet B, Pashkevich A. Geometric calibration of industrial robots using enhanced partial pose measurements and design of experiments. Rob Comput Integr Manuf 2015;35:151–68. Available from: https://doi.org/10.1016/j.rcim.2015.03.007.
[81] Aguado S, Santolaria J, Aguilar J, Samper D, Velazquez J. Improving the accuracy of a machine tool with three linear axes using a laser tracker as measurement system. Procedia Eng 2015;132:756–63. Available from: https://doi.org/10.1016/j.proeng.2015.12.557.
[82] Nubiola A, Bonev IA. Absolute calibration of an ABB IRB 1600 robot using a laser tracker. Rob Comput Integr Manuf 2013;29:236–45. Available from: https://doi.org/10.1016/j.rcim.2012.06.004.
[83] Mooring B, Roth ZS, Driels MR. Fundamentals of manipulator calibration. New York: Wiley; 1991.
[84] Joubair A, Bonev IA. Non-kinematic calibration of a six-axis serial robot using planar constraints. Precis Eng 2015;40:325–33. Available from: https://doi.org/10.1016/j.precisioneng.2014.12.002.
[85] Gaudreault M, Joubair A, Bonev IA. Local and closed-loop calibration of an industrial serial robot using a new low-cost 3D measuring device. In: Robotics and automation (ICRA), 2016 IEEE international conference on. IEEE; 2016. p. 4312–9.
[86] Brodie J, Eljamel S. Evaluation of a neurosurgical robotic system to make accurate burr holes. Int J Med Rob Comput Assisted Surg 2011;7:101–6. Available from: https://doi.org/10.1002/rcs.376.
[87] Minchev G, Kronreif G, Martínez-Moreno M, Dorfer C, Micko A, Mert A, et al. A novel miniature robotic guidance device for stereotactic neurosurgical interventions: preliminary experience with the iSYS1 robot. J Neurosurg 2016. Available from: https://doi.org/10.3171/2016.1.JNS152005.
[88] Fan Y, Jiang D, Wang M, Song Z. A new markerless patient-to-image registration method using a portable 3D scanner. Med Phys 2014;41:101910. Available from: https://doi.org/10.1118/1.4895847.
[89] Brandmeir NJ, Savaliya S, Rohatgi P, Sather M. The comparative accuracy of the ROSA stereotactic robot across a wide range of clinical applications and registration techniques. J Rob Surg 2017. Available from: https://doi.org/10.1007/s11701-017-0712-2.


36
Robotic Retinal Surgery

Emmanuel Vander Poorten1, Cameron N. Riviere2, Jake J. Abbott3, Christos Bergeles4, M. Ali Nasseri5, Jin U. Kang6, Raphael Sznitman7, Koorosh Faridpooya8 and Iulian Iordachita9

1 Department of Mechanical Engineering, KU Leuven, Heverlee, Belgium
2 Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, United States
3 Department of Mechanical Engineering, University of Utah, Salt Lake City, UT, United States
4 School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
5 Ophthalmology Department, Technical University of Munich, Munich, Germany
6 Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, United States
7 ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
8 Eye Hospital Rotterdam, Rotterdam, Netherlands
9 Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, United States

ABSTRACT
Retinal surgery has long drawn the attention of engineers and clinicians, who identified a clear use case for robotics and assistive technology. In retinal surgery, precision is paramount. Skilled practitioners operate at the boundaries of human capability, dealing with minuscule anatomic structures that are both fragile and hard to discern. Surgical operations on the retina, a hair-thick multilayered structure that is an integral part of the central nervous system responsible for vision, have spurred the development of robotic systems that enhance perception, precision, and dexterity. This chapter provides an encompassing overview of the progress that has been made during the last two decades in terms of sensing, modeling, visualization, stabilization, and control. The chapter reports on recent breakthroughs with first-in-human experiences, as well as on new avenues that hold the potential to expand retinal surgery to techniques that would be infeasible or challenging without robotics.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00036-0
© 2020 Elsevier Inc. All rights reserved.


36.1 The clinical need

The retina is a “layer of nervous tissue that covers the inside of the back two-thirds of the eyeball, in which stimulation by light occurs, initiating the sensation of vision” and “is actually an extension of the brain, formed embryonically from neural tissue and connected to the brain proper by the optic nerve” [1]. Any damage to the retina may cause an irreversible and permanent visual field defect or even blindness. Key structures that are the subject of different surgical interventions are depicted in Fig. 36.1 and include the sclera, retinal vessels, scar tissue or epiretinal membranes (ERMs), and, recently, the retinal layers. A list of parameters and dimensions that characterize these structures is provided in Table 36.1. For comparison, the diameter of the average human hair is 50 μm, which highlights the micromanipulation challenges in retinal surgery.

Open-sky surgery is a less-than-desirable option when treating critically fragile structures within the eye, such as the retina. Surgeons instead approach the retina through a “key-hole” set-up, inserting slender instruments through small incisions in the sclera to operate at a micrometer scale on structures whose complexity rivals or exceeds that of the brain. Visualization occurs through a stereo operating microscope. The incision forms a fulcrum point, which complicates hand–eye coordination due to the inverted relationship between hand and instrument motion (Fig. 36.2). If the instrument is not pivoted exactly about the fulcrum point, a net force is applied to the sclera, which could damage the sclera or cause the eye to rotate in its socket. When the eye rotates, it becomes more difficult to reach a location on the retina precisely, as the target location changes dynamically. The surgeon uses the support of an armrest (elbow) and the patient’s head (wrists) to stabilize the hands. Lightweight instruments are maneuvered within the confined space between the patient’s head and the microscope. A wide-angle lens is often placed between the eye and the microscope, offering a larger view of the retina; this limits the work volume that is available.
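The inversion and scaling introduced by the scleral fulcrum can be captured by a simple lever model. The sketch below is an idealized small-angle approximation of our own (the function and numbers are illustrative, not taken from any cited system): lateral tip motion equals hand motion scaled by the ratio of shaft length inside the eye to shaft length outside it, with the sign flipped.

```python
def tip_displacement(hand_dx_mm, inside_mm, outside_mm):
    """Small-angle lever model of a rigid instrument pivoting about a fixed
    scleral fulcrum: a lateral hand motion of hand_dx_mm, applied outside_mm
    from the fulcrum, moves the tip (inside_mm past the fulcrum) in the
    opposite direction, scaled by inside_mm / outside_mm."""
    return -hand_dx_mm * inside_mm / outside_mm

# Example: with 24 mm of shaft inside the eye and 96 mm outside, a 1 mm
# lateral hand motion yields a 0.25 mm tip motion in the opposite direction.
print(tip_displacement(1.0, 24.0, 96.0))  # -> -0.25
```

The same ratio explains why pivoting exactly about the incision matters: hand motion that is inconsistent with this constraint loads the sclera instead of moving the tip.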

36.1.1 Human factors and technical challenges

Retinal microsurgery demands advanced surgical skills. The requirements for vision, depth perception, and fine motor control are high (Table 36.2), exceeding the fundamental physiological capability of many individuals [26–28]. A primary cause of tool positioning error is physiological tremor [29]. Even when microsurgical procedures are successfully performed in the presence of tremor, they require greater concentration and effort and are attended by greater risk. Patient movement is another important confounding factor. Among patients who snore under monitored anesthesia (approximately 16%), half have sudden head movements during surgery, leading to a higher risk of complications [30]. The challenges of retinal microsurgery are further exacerbated by the fact that in the majority of contact events, the forces encountered are below the tactile perception threshold of the surgeon [9]. This inability to detect surgically relevant forces leads to a lack of control over potentially injurious factors that result in complications.

FIGURE 36.1 A cross-section of a human eye. A cannula is placed 4 mm from the cornea limbus (the border of the cornea and the sclera), providing access to the intraocular space.


TABLE 36.1 Governing dimensions in retinal surgery.

Structure                      | Dimension                  | Comment/Sources
-------------------------------|----------------------------|---------------------------------
Human eye                      | 24.6 mm avg                | Axial length [2]
Human retina                   | 100–300 μm                 | Thickness [3]
Internal limiting membrane     | 0.5–2.5 μm, 1–3 μm         | Maximal at macula [4,5]
Epiretinal membrane            | 60 μm                      | Cellular preretinal layer [6]
Retinal vessel                 | 40–350 μm, 40–120 μm       | Branch to central [3,7]
Vessel puncture force          | 20 mN avg, 181 mN max      | Cadaver pig eye [8]
                               | 63% < 5 mN                 | Cadaver pig eye [9]
                               | 0.6–17.5 mN; 80% < 7 mN    | Cadaver pig eye [10]
                               | 2 mN avg, 1 mN std         | Fertilized chicken egg [11]
                               | 80% < 5 mN                 | Fertilized chicken egg [12]
Vessel dissection force        | 67 mN avg, 82 mN max       | Cadaver pig eye [8]
Peeling force                  | 8–12 mN, 15–45 mN          | ISM of chicken egg [11,13]
Damage during peeling          | From 5.1 mN                | Fertilized chicken egg [14]
                               | From 6.4 mN                | Rabbit [14]
Retina damage                  | 1788 Pa                    | 17.2 mN on 3.5 mm diameter [15]
Breathing frequency            | 3 Hz; 0.2 Hz               | Rat [16]; pig [17]
Breathing amplitude            | 50 μm; 300 μm              | Rat [16]; pig [17]
Heartbeat frequency            | 0.84 Hz; 2 Hz              | Rat [16]; pig [17]
Heartbeat amplitude            | 15 μm; 100 μm              | Rat [16]; pig [17]
Required positioning accuracy  | 10 μm                      | General [18,19]
                               | 25 μm                      | Subretinal injection [20]

ISM, Inner shell membrane.
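The retina-damage entry can be cross-checked directly from the reported measurement: 17.2 mN spread over a 3.5 mm diameter circular contact corresponds to roughly 1788 Pa.

```python
import math

force = 17.2e-3                     # N, probe force reported in [15]
area = math.pi * (3.5e-3 / 2) ** 2  # m^2, circular contact 3.5 mm in diameter
pressure = force / area             # Pa
print(round(pressure))              # -> 1788
```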

36.1.2 Motivation for robotic technology

Given the size and fragile nature of the structures involved, complication rates are not negligible [32–34]. Surgical steps that are considered too risky or even impossible may be facilitated through robotics. There is also an interest in automating repetitive tasks to reduce cognitive load and allow experts to focus on critical steps of a procedure. Ergonomics represents another area of potential innovation: one can reconsider the operating layout and optimize usability to reduce physical burdens. Some appealing characteristics of robotic technology for treating the retina include improved positioning accuracy through some combination of motion scaling and tremor reduction, the ability to keep an instrument steady and immobilized for a prolonged period of time, and the ability to save coordinates for future use.


Aside from the poor ergonomics of operating through a surgical microscope, which leads to an elevated risk of back and neck injuries, with an incidence of 30%–70% for neck pain and 40%–80% for back pain [31], this approach is associated with difficult hand–eye coordination. Without haptic feedback, the surgeon can rely only on visual feedback. However, the quality of that visual feedback is often insufficient. Surgeons expend considerable effort adjusting the optics and illumination to obtain the appropriate level of clarity, detail, and overview of the target scene. Depth perception is suboptimal: even with modern stereo microscopes, surgeons are sometimes unsure exactly when contact with the retina is established. Poor visualization due to factors such as corneal scars or intense vitreous hemorrhage can affect the outcome of retinal surgeries and increase the chance of complications.


FIGURE 36.2 Overall layout and view during retinal surgery. (Left) Retinal surgical scene using a surgical microscope, with the surgeon holding the vitrectome in the right hand and the light pipe in the left; (right) typical view during an ILM peeling. ILM, Internal limiting membrane.

TABLE 36.2 Human factors and technical limitations in retinal surgery.

Parameter             | Value                 | Comment/Sources
----------------------|-----------------------|--------------------------------------
Physiologic tremor    | 182 μm, 100 μm RMS    | Epiretinal membrane removal [21,22]
                      | 156 μm RMS            | Artificial eye model [23]
                      | 8–12 Hz               | Neurogenic tremor component [18]
Fulcrum motion        | Area up to 12.6 mm²   | During, e.g., manual vitrectomy [24]
Maximum velocity      | 0.7 m/s               | Epiretinal membrane removal [21]
Typical velocity      | 0.1–0.5 mm/s          | Epiretinal membrane peeling [25]
Maximum acceleration  | 30.1 m/s²             | Epiretinal membrane removal [21]
Manipulation forces   | <7.5 mN in 75%        | Ex vivo pig eye membrane [9]

The retina is neural tissue; even a small mistake can cause irreversible damage, including blindness. Through robotics, procedures that cannot be performed safely using conventional manual techniques due to limitations in precision, such as microcannulation and subretinal injection, may be considered. In current manual practice, surgeons can use only two instruments simultaneously, although three or more instruments would be helpful in complicated cases, such as delaminations. Robotics further facilitates integration with advanced tooling. Dedicated interfaces could help manage instruments with articulating end-effectors. User interfaces can be tailored to provide feedback from a broad range of sensors embedded in a new line of “intelligent” instruments. Robotic surgery may also enable operation with narrower instruments, which would decrease the size of scleral incisions and reduce damage to the sclera.


Taken in combination, the above characteristics could create a highly effective therapeutic system for performing advanced microsurgical procedures. Not only could the added functionality decrease complication rates, it could also speed up healing and shorten the duration of admission in the clinic. For robotics to be successful, the above arguments would need to outweigh the disadvantages of elevated operational cost and increased operation time that seem inevitable, based on today’s technology.

36.1.3 Main targeted interventions

The following retinal procedures have received considerable attention from researchers who identified opportunities for improvement by use of robotic technology.

36.1.3.1 Epiretinal membrane peeling

An ERM is an avascular, fibrocellular membrane, such as scar tissue, that may form on the inner surface of the retina and cause blurred and distorted central vision. The risk of ERM increases with age, primarily affecting people over age 50. ERM is mostly idiopathic and related to an abnormality of the vitreoretinal interface in conjunction with a posterior vitreous detachment. ERM can also be triggered by certain eye conditions such as a retinal tear, retinal detachment, and inflammation of the eye (uveitis). The prevalence of ERM is 2% in individuals under age 60 and 12% in those over age 70 [35]. Although it may remain asymptomatic, ERM often leads to reduced visual acuity and metamorphopsia, where straight lines appear wavy due to contraction forces acting over the macular region [36]. Treatment is surgical and is indicated only when the patient suffers from binocular metamorphopsia and a progressive decrease of visual acuity to less than 50%. The procedure involves pars plana vitrectomy, followed by removal (peeling) of the ERM, with or without peeling of the native internal limiting membrane (ILM) in order to decrease the recurrence of ERM afterwards [37].

36.1.3.2 Retinal vein cannulation Retinal vein occlusion is the second-most-prevalent vasculature-related eye disease [38]. A blood clot clogs the vein, which leads to a sudden halt in retinal perfusion. Since arterial inflow continues, hemorrhages develop and the retina may become ischemic, leading to retinal neural cell apoptosis. Depending on the thrombus location, one distinguishes between central retinal vein occlusion (CRVO) and branch retinal vein occlusion (BRVO), that is, when the thrombus resides in a smaller branch vein. BRVO can be asymptomatic but may lead to sudden painless legal blindness. Secondary macular edema can develop and cause metamorphopsia. Later, neovascularization can occur because of the ischemic retina and cause secondary glaucoma, retinal detachment, and vitreous hemorrhage [39]. There is no etiologic curative treatment at present. Among the few symptomatic treatments offered are injections, delivered directly into the eye, to prevent neovascularization. The injected medicine can help reduce the swelling of the macula. Steroids may also be injected to help treat the swelling and limit the damage to the occluded tissue. If CRVO is severe, ophthalmologists may apply panretinal photocoagulation, wherein a laser is used to make tiny burns in areas of the retina. This lowers the chance of intraocular bleeding and can prevent eye pressure from rising to sight-threatening levels.

36.1.3.3 Subretinal injection In procedures such as antivascularization treatment, drugs are commonly administered into the vitreous humor to slow down neovascularization. Although intravitreal injections are fairly simple, when targeting cells in subretinal spaces the dose that actually reaches those cells can be very small. Subretinal injection is an alternative in which drugs are injected directly into the closed subretinal space. Subretinal injection is regarded as the most effective delivery method for cell and gene therapy, including stem-cell therapy for degenerative vitreoretinal diseases such as retinitis pigmentosa, age-related macular degeneration, and Leber's congenital amaurosis [40], despite potentially leading more often to adverse events and possible complications [41].

36.1.4 Models used for replicating the anatomy

To support technology development for the abovementioned procedures, a variety of synthetic, in vitro, and in vivo models have been proposed over the past decade. Table 36.3 provides an overview of the most commonly used models and some indicative references to works where they are described or deployed. Due to the complexity of the human eye, different models are suited for each surgical intervention, with no single model satisfying all requirements. Despite the abundance of available models, research is still ongoing to further improve them. For example, for membrane peeling, Gupta et al. have been searching for representative in silico models [43]. For vein cannulation, the Rotterdam Eye Hospital has been developing an ex vivo perfused pig eye model that can be used to evaluate retinal motion or vessel coagulation [54]. A modified Rose Bengal method has been developed to create clotted vessels in live pigs for validating cannulation performance [55,56].

TABLE 36.3 Models used for simulating and testing retinal surgeries, including membrane peeling, vein cannulation, and injections.

| Model                   | Peeling     | Cannulation | Inj. | Comment                                                     |
|-------------------------|-------------|-------------|------|-------------------------------------------------------------|
| Synthetic membranes     | [25,42–44]  |             |      | Peeling of membrane                                         |
| Gelatin phantom         | [45]        |             |      | 10%; mimics tissue                                          |
| Soft cheese             | [20]        |             |      | Similar OCT response                                        |
| Rubber band             |             | [46]        |      | Simulates scleral contact [48]                              |
| Agar                    |             | [47]        |      |                                                             |
| Raw chicken egg         | [13,42,50]  |             |      | Vitreous humor                                              |
| Fertilized chicken egg  | [3,11,13]   | [3,11,51]   |      | Peeling ISM                                                 |
| Cadaver bovine eye      | [52]        | [7,8,53]    |      |                                                             |
| Cadaver pig eye         | [49]        | [49]        | [45] | Peeling ISM; w/o cornea, lens, vitreous; open-sky; 40–60 μm |
| Perfused pig eye        |             | [54]        |      | Closure of vessels                                          |
| In vivo pig eye         |             | [55,56]     |      | W/ lasering to form clots                                   |
| In vivo rabbit eye      |             | [8,57]      |      |                                                             |
| In vivo cat eye         |             | [8,58]      |      | Preretinal 60 μm vessels                                    |
| In vivo cat eye         |             | [58]        |      | Intraretinal vessels                                        |

ISM, Inner shell membrane; OCT, optical coherence tomography.

36.2 Visualization in retinal surgery

As force levels remain predominantly below human perceptual thresholds, haptic feedback is of no avail in current surgical practice. This section explains the basic technology that is available for visualization. Over the years, a broad range of medical imaging technologies have played crucial roles in imaging the retina preoperatively and during interventions. In the following, we describe some of the most important modalities related to robotic microsurgery, with an emphasis on the stereo microscope (Section 36.2.1), as it plays a central role in the link between the patient and the operating physician. The second part of this section (Section 36.2.2) introduces optical coherence tomography (OCT) as an imaging modality with rapidly increasing importance in retinal surgery.

36.2.1 Basic visualization through operative stereo microscopy

Operative microscopes are the primary tool to image the surgical site during retinal microsurgery and are fully integrated into the standard of care worldwide. A number of commercial vendors offer stereo microscopes (Zeiss, Leica Microsystems, Haag-Streit Surgical, Topcon Medical Systems), most of which provide high-quality magnified and illuminated viewing of the surgical area. The obtained image quality is the result of several components, briefly summarized in the following.

36.2.1.1 Stereo microscope At its core, a stereo microscope is composed of a binocular head mount that allows the operating clinician to view the surgical site via an optical system. Typically, the optical system consists of a set of lenses and prisms that connect to an objective lens that dictates the working distance to the viewing site. Critically, the stereo microscope relies on two optical feeds that allow the operating clinician to view the retina with depth perception. To modulate imaging magnification, different inbuilt lenses can be selected during the procedure by means of a control knob or pedal that comes with


FIGURE 36.3 Field of view from a microscope. Retina visualization with stereo microscope and two different zoom factors. Surgical tweezers are used to delicately interact with the retina.

the system. Most recent systems feature focal lengths of 150–200 mm, allowing crisp visualization of the posterior eye. Fig. 36.3 provides views of the retina at two different zoom factors. In addition, a secondary set of binoculars is often available by means of a beam splitter so that additional personnel can view the surgical procedure simultaneously. Physically, stereo microscopes are mounted on the ceiling or suspended via a floor-stand arm. They come with a dedicated foot pedal to control specific functionalities, including precise placement of the stereo microscope and changing of focus or zoom, with the benefit of giving the operating clinician maximal freedom with their hands.

36.2.1.2 Additional lenses In addition to the optical system in the stereo microscope, it is common to use an additional lens during procedures in order to provide a wider field of view (FOV) or improve visualization at dedicated locations of the retina. In practice the choice of this additional lens is based on the surgical task in question. We briefly discuss some of the choices common to retinal microsurgery. In practice, there are two types of additional lenses used: noncontact and contact lenses. As the name indicates, the difference lies in whether or not the lens touches the cornea. Noncontact lenses are typically attached to the microscope itself by means of an adapter that can be moved in and out of the viewing path manually. In contrast, contact lenses are placed in physical contact with the eye during dedicated portions of the procedure. These are typically handheld by an assistant while in use or directly sutured to the eye. Both types have their advantages: noncontact lenses are convenient as they do not require additional personnel or cause trauma to the eye, but they are not always properly aligned with the viewing region under consideration; conversely, handheld or sutured lenses provide improved viewing comfort but require an additional hand. In terms of visualization, additional lenses serve two important purposes. The first is to provide a wider FOV that can range up to 130 degrees of view (e.g., BIOM or Eibos). Such wide-angle lenses are common during vitrectomy procedures. In contrast, for procedures related to the macula, such as ILM or ERM peeling, lenses that provide smaller fields of view with greater resolution are often preferred. Perhaps the most popular of this kind is the Machemer lens, which provides a highly magnified 30 degree FOV.

36.2.1.3 Light sources In order to see the surgical site, light from the exterior must be directed onto the retina. A variety of options now exist to do so, and the use of multiple illumination types during a single procedure is common. However, an important risk factor and consequence of the illumination systems used is induced retinal phototoxicity. First reported in 1966 in patients having undergone cataract surgery, phototoxicity can be either thermal or photochemical in nature, resulting from excessive ultraviolet or blue light exposure. Reports indicate that roughly 7% of macular hole repair patients have experienced significant phototoxicity. As such, the operating clinician must always balance illumination quality against patient safety.


As a primary illumination system, an integrated light source is already available with the surgical system itself. This light source is coaxial with the microscope's optical system, so the illumination travels the same path as the viewing path, which reduces shadowing effects. Alternatively, endoilluminators are fiber-optic light pipes inserted through one of the trocars in the eye sclera. Two types of light sources are most common in surgical practice for such light pipes: xenon and halogen. Although both have the potential to induce phototoxicity, both are considered safe. Light pipes of this nature come in 20, 23, and 25 gauge sizes, providing a spectrum of pipe stiffnesses useful for eye manipulations during procedures. Today, such illumination pipes provide cone-like illumination of up to 40–80 degree angles depending on the system. Naturally, a consequence of the light pipe endoilluminator is that the operating physician is forced to use one hand to manipulate this light source during the procedure. While this can be effective to augment depth perception (via an instrument's projected shadow on the retina) or to improve illumination of specific retinal regions, chandelier illumination offers an alternative that gives the clinician freedom in both hands. Chandelier endoilluminators provide excellent wide-angle illumination in situations where bimanual surgical maneuvers are necessary.

36.2.1.4 Additional imaging Prior to the surgical intervention, an important aspect is to visualize what areas of the retina should be manipulated during an envisioned procedure. To do this, a variety of imaging devices and modalities are typically used in routine clinical care. These include but are not limited to:

- Color fundus imaging relies on a digital camera, with electronic control of focus and aperture, to image a 30–50 degree FOV of the retina. The technology dates back to the 1880s and can be used to capture over 140 degrees for peripheral imaging using additional lenses. Nowadays, acquiring color fundus images is an easy and relatively inexpensive way to diagnose, document, and monitor diseases affecting the retina. Variants of color fundus photography such as red-free imaging, which enhances the visibility of retinal vessels by removing red wavelengths, are also common.
- Fluorescein angiography is similar to color fundus photography except that it takes advantage of different filters and intravenous fluorescein injections to produce high-contrast images at the early stages of an angiogram. The camera light flashes, passed through an excitation filter and absorbed by the fluorescein, strongly highlight the perfused regions of the vasculature. This is recorded by the camera and helps depict the complete vasculature of the retina. Such imaging is extremely effective in identifying regions of the retina that have venous occlusions and other related pathologies.
- OCT is a fast and noninvasive imaging modality that can acquire micrometer-resolution three-dimensional (3D) scans of the anterior and posterior segments of the eye. Since its introduction in 1991, it has become one of the most widely used diagnostic techniques in ophthalmology. Today, OCT is used to diagnose and manage a variety of chronic eye conditions, as it provides high-resolution imaging and visualization of relevant biomarkers such as inter- or subretinal fluid buildup, retinal detachments, or pigment epithelium detachments. In addition, it enables careful measurement of retinal thickness, which can be important during retinal detachment or macular hole repair procedures. OCT angiography (OCT-A) can also be used to yield volumetric images of the vasculature, bypassing fluorescein injections. Similarly, Doppler OCT can be used to quantify blood perfusion. Given its strong clinical relevance and its pertinent role in the future of robotic retinal surgery, the following sections will describe OCT in detail.

36.2.2 Real-time optical coherence tomography for retinal surgery

Retinal surgery requires both visualization of and physical access to a limited space in order to perform surgical tasks on delicate tissue at the micrometer scale. When it comes to viewing critical parts of the surgical region and working with micrometer accuracy, excellent visibility and precise instrument manipulation are essential. Conventionally, visualization during microsurgery is realized by surgical microscopes, as shown in Fig. 36.2, which limits the surgeon's FOV and prevents perception of microstructures and tissue planes beneath the retinal surface. The right image in Fig. 36.2 and both sides of Fig. 36.3 show a typical microscope view of the retinal surface during ILM peeling. The entire thickness of the human retina, which consists of 12 layers, is only about 350 μm, and the ILM is as thin as 1–3 μm [5]. Therefore, even with the aid of advanced surgical microscope systems, such operations are extremely challenging and require rigorous, long-term training for retinal surgeons. So far, several well-developed imaging modalities such as magnetic resonance imaging, X-ray computed tomography, and ultrasound sonography have been utilized in image-guided interventions for various kinds of surgeries [59]. However, these conventional imaging modalities are not suitable for retinal surgery because their resolution is too low,


which prevents resolving the retinal microstructures. Their slow imaging speed is problematic as well. In recent years, OCT emerged as a popular intraoperative imaging modality for retinal surgery. OCT systems are now capable of achieving high-speed imaging in excess of 100 cross-sectional images per second, large imaging depths of a few millimeters, and micrometer-level transverse and longitudinal resolution [60,61]. Images such as those depicted in Fig. 36.4 are produced with OCT. OCT systems have evolved rapidly over the past 30 years, and currently there are many different types of commercial systems on the market. Below is a short description of each type.

- Time-domain (TD) OCT: TD OCT is the first variant of OCT. It achieves depth scanning (i.e., A-scan imaging) by physically translating the position of a reference plane as a function of the depth of the imaging layer that one wants to visualize. To detect the signal, a simple photodetector directly captures the intensity of the interference signal. Because the reference plane can be translated over a long distance using mechanical stages, a very long imaging depth, typically on the order of several centimeters to tens of centimeters, can be achieved. However, the typical A-scan speed is less than 1 kHz. Therefore the major drawbacks of TD OCT systems are slower scanning speed and low signal-to-noise ratio (SNR).
- Frequency-domain (FD) OCT: Unlike TD OCT, FD OCT systems perform spectral measurements, and the depth information is deduced by Fourier transforming the OCT spectral data. Since FD OCT does not need physical movement of the reference plane, it can be made high speed. Furthermore, the use of spectral measurements significantly improves the SNR compared to TD OCT [62,63]. FD OCT system characteristics are described in detail in the next section.
- Spectral-domain (SD) OCT: SD OCT is the original variant of FD OCT; it uses a spectrometer and a broadband light source to measure the OCT spectral interference signal. Most commercial OCT systems are of the SD OCT type and generally operate with A-scan speeds in the range of 70 Hz to 20 kHz. SD OCT systems exhibit significant improvements in SNR compared to TD OCT and allow high-speed OCT imaging, where the imaging speed depends on the speed of the line-scan camera used in the spectrometer.

FIGURE 36.4 Diagnostic imaging modalities. Fundus color photography (upper left), fluorescein angiography (upper right), and optical coherence tomography (lower) are preoperative imaging modalities commonly used before retinal interventions.

- Swept-source (SS) OCT: The latest development in OCT technology is SS OCT. It uses a wavelength-swept laser and a high-speed single photodetector to measure the OCT spectral interference signal. Typical commercial versions exhibit A-scan speeds in the range of 50–200 kHz. SS OCT systems are typically faster, exhibit larger imaging depth, and offer higher SNR compared to SD OCT. However, they are more expensive than SD OCT. For example, a typical SS OCT engine operating at 100 kHz would cost approximately 30,000 dollars, whereas a 70-kHz OCT spectrometer engine would be in the 10,000 dollar range.
- Intraoperative OCT (iOCT): iOCT generally refers to an FD OCT system integrated into a surgical microscope that allows OCT visualization during surgical procedures. Typical commercial iOCT systems provide real-time B-mode (i.e., cross-sectional) images. A postprocessed C-mode (i.e., volumetric) image can typically be generated in a few seconds. Several companies provide iOCT as an option for their high-end surgical microscope systems.
- Common-path (CP) OCT: CP OCT, unlike standard OCT systems that use a Michelson interferometer setup, does not have a separate reference arm [64,65]. Instead it uses the signal arm as the reference arm, and the reference signal is produced from the distal end of the signal arm. Therefore the signal and the reference beam mostly share the same beam path. This allows a much simpler system design, lower associated costs, the ability to use interchangeable probes, and the freedom to use an arbitrary probe arm length. CP OCT is also immune to polarization, dispersion effects, and fiber bending. This makes CP OCT systems ideal for endoscopic applications [64].
- Fourier domain CP OCT (FD CP OCT): FD CP OCT is the Fourier domain variant of CP OCT.

36.2.3 Principle of Fourier domain optical coherence tomography

FD OCT was first described by Fercher et al. in 1995 [66]. Over the past two decades [62,63,67–69] it has developed rapidly, and most commercial OCT systems are of this type. Compared to TD OCT, FD OCT has more than two orders of magnitude higher sensitivity and significantly faster imaging speed [62], with typical A-scan imaging speeds on the order of a few hundred kHz. There are two different types of FD OCT, as mentioned above: SD OCT, which uses a broadband light source and a dispersive spectrometer with a line-scan array detector, and SS OCT, which uses a narrowband wavelength-swept laser with a high-speed PIN detector. Fig. 36.5 shows the schematic layout and signal processing steps of a typical spectrometer-based FD OCT (i.e., SD OCT). The spectrometer in SD OCT uses a diffraction grating that disperses the broadband light, several collimating lenses, and a high-speed line-scan CCD or CMOS camera to detect the spectrum of the OCT signal.

FIGURE 36.5 A schematic of SD OCT. A typical layout of a Fourier domain OCT system based on a spectrometer (i.e., SD OCT) is shown schematically with simplified signal processing steps. OCT, Optical coherence tomography; SD OCT, spectral-domain optical coherence tomography.

The signal arriving at the line-scan camera is the combined interferogram of the light waves from different depths within the sample. The resultant signal spectrum I_D(k) can be written as [70]

I_D(k) = \frac{\rho}{4} S(k)\left[ R_R + \sum_{n=1}^{N} R_{Sn} \right]
       + \frac{\rho}{8} S(k)\left[ \sum_{n=1}^{N} \sqrt{R_R R_{Sn}} \cos\big(2k(z_R - z_{Sn})\big) \right]   (36.1)
       + \frac{\rho}{8} S(k)\left[ \sum_{m \neq n = 1}^{N} \sqrt{R_{Sm} R_{Sn}} \cos\big(2k(z_{Sm} - z_{Sn})\big) \right]

where k is the wavenumber, S(k) is the power spectrum of the light source, R_R is the power reflectivity of the reference mirror, and R_{Sn} is the power reflectivity of the nth layer of the sample. The depth profile or A-scan image of the sample can be obtained by taking the Fourier transform of the spectrum in Eq. (36.1). This results in a spatial-domain A-scan image which can be expressed as

i_D(z) = \frac{\rho}{8} \gamma(z)\left[ R_R + \sum_{n=1}^{N} R_{Sn} \right]   (DC terms)
       + \frac{\rho}{8} \gamma(z) \otimes \left[ \sum_{n=1}^{N} \sqrt{R_R R_{Sn}}\, \delta\big(z \pm 2(z_R - z_{Sn})\big) \right]   (cross-correlation terms)
       + \frac{\rho}{8} \gamma(z) \otimes \left[ \sum_{m \neq n = 1}^{N} \sqrt{R_{Sm} R_{Sn}}\, \delta\big(z \pm 2(z_{Sm} - z_{Sn})\big) \right]   (autocorrelation terms)

36.2.3.1 Axial resolution of spectral-domain optical coherence tomography The OCT light source, having a Gaussian spectral shape with a bandwidth Δλ in wavelength and Δk in wavenumber, can be described mathematically as

S(k) = \frac{1}{\Delta k \sqrt{\pi}}\, e^{-[(k - k_0)/\Delta k]^2}   (36.2)

where k_0 is the center wavenumber. It can be shown that its Fourier transform γ(z) is

\gamma(z) = e^{-z^2 \Delta k^2}   (36.3)


where γ(z) is the Fourier transform of S(k). The "DC terms" correspond to the spectrum of the light source. Usually this is the largest component of the detector signal, and it needs to be subtracted before A-scan images can be displayed. The "cross-correlation terms" form the desired OCT A-scan image. They contain several peaks whose locations are determined by the offset between the reference mirror position z_R and the target positions z_S. The amplitude of these peaks changes according to the light source power and the reflectivity of the reference and target positions within the sample. The last component, the "autocorrelation terms," comes from the interference of light between different reflectors within the target and results in a ghost image artifact. However, this component is usually located away from the desired signal, since the distances between the different reflectors within the sample are small. The OCT signal can be visualized as a depth-resolved 1D image (A-mode), a cross-sectional 2D image (B-mode), or a volumetric 3D image (C-mode), as schematically shown in Fig. 36.6. In most SD OCT systems, the signal is detected as a spectral modulation using a spectrometer that samples it uniformly in wavelength, as described in Eq. (36.1). This implies that the samples are nonuniform in the wavenumber domain, so applying the discrete Fourier transform or fast Fourier transform directly to such a signal will seriously degrade the imaging quality. Specific procedures, both in hardware and in software, have been developed to reconstruct the image from the nonuniformly sampled wavenumber-domain spectrum. Compared to hardware solutions, which usually complicate the design of the spectrometer and increase the cost, software solutions are usually much more flexible and cost-efficient.

There are two widely used software methods: the first is based on numerical interpolation, including various linear interpolations and cubic interpolation; the other uses the nonuniform discrete Fourier transform or the nonuniform fast Fourier transform.
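As a minimal sketch of the first software method, the following Python snippet (all parameter values are illustrative assumptions, not taken from the chapter) simulates a single-reflector interferogram sampled uniformly in wavelength, resamples it onto a uniform wavenumber grid by linear interpolation, and recovers the reflector depth with an FFT:

```python
import numpy as np

# Hypothetical spectrometer: N pixels sampled uniformly in WAVELENGTH
# (illustrative values; real systems differ).
N = 2048
lam = np.linspace(825e-9, 875e-9, N)   # wavelength grid (m)
k = 2 * np.pi / lam                    # wavenumber grid: nonuniform spacing

# Interferogram of a single reflector offset dz from the reference plane:
# a DC term plus the cos(2*k*dz) cross-correlation term of Eq. (36.1).
dz = 0.5e-3                            # 0.5 mm
spectrum = 1.0 + 0.5 * np.cos(2 * k * dz)

# Software method 1: linear interpolation onto a uniform wavenumber grid.
# np.interp requires ascending abscissae, so the k axis is reversed.
k_lin = np.linspace(k.min(), k.max(), N)
spec_lin = np.interp(k_lin, k[::-1], spectrum[::-1])

# Subtract the DC term, then Fourier transform to obtain the A-scan.
a_scan = np.abs(np.fft.rfft(spec_lin - spec_lin.mean()))

# FFT bin m maps to depth z = m * pi / (k-span); the peak sits near z = dz.
z = np.arange(len(a_scan)) * np.pi / (k_lin[-1] - k_lin[0])
z_peak = z[np.argmax(a_scan)]
```

Applying the FFT directly to the wavelength-sampled `spectrum`, without the resampling step, would smear the peak over several bins, which is exactly the image-quality degradation described above.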


FIGURE 36.6 OCT imaging modes. Three different scanning/imaging modes of OCT are schematically described: A-scan (1D), B-scan (2D), and C-scan (3D). OCT, Optical coherence tomography.

From Eq. (36.2) the A-scan signal is the convolution of γ(z) and the sample's structure function δ(z ± 2(z_R − z_S)). Thus, the axial resolution l_{axial} of SD OCT can be defined as the full width at half maximum (FWHM) of γ(z):

l_{axial} = \frac{2\sqrt{\ln 2}}{\Delta k} = \frac{2 \ln 2}{\pi} \frac{\lambda_0^2}{\Delta\lambda}   (36.4)

where λ_0 is the central wavelength of the light source. As can be seen, the axial resolution of OCT is determined by the bandwidth of the light source. Thus a broadband light source is usually used in SD OCT systems to achieve high-resolution imaging.
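Plugging representative numbers into Eq. (36.4) gives a feel for the scales involved; the source parameters below are assumed for illustration only:

```python
import math

# Illustrative source: 850 nm center wavelength, 50 nm bandwidth
# (assumed numbers, not taken from the chapter).
lam0 = 850e-9    # central wavelength (m)
dlam = 50e-9     # spectral bandwidth (m)

# Eq. (36.4): axial resolution set by the source bandwidth.
l_axial = (2 * math.log(2) / math.pi) * lam0**2 / dlam
print(f"axial resolution: {l_axial * 1e6:.2f} um")   # roughly 6.4 um
```

Halving the bandwidth in this sketch doubles `l_axial`, which is why broadband sources are preferred for high-resolution imaging.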

36.2.3.2 Lateral resolution of spectral-domain optical coherence tomography In SD OCT, the lateral resolution is defined as the FWHM of the point spread function (PSF) of the probe beam at the beam waist. Let NA denote the numerical aperture of the objective lens before the sample. The lateral resolution of SD OCT can then be expressed as

l_{lateral} = \frac{2\sqrt{\ln 2}}{\pi} \frac{\lambda_0}{NA}   (36.5)

36.2.3.3 Imaging depth of spectral-domain optical coherence tomography In SD OCT, the imaging depth is influenced by two factors. The first is the sample's scattering and absorption, which cause the light intensity to decrease exponentially with depth. The other factor is the spectrometer's spectral resolution, determined by the light bandwidth Δk and the number of pixels N in the line-scan camera. Based on the Shannon/Nyquist sampling theory, the maximum imaging depth of an SD OCT system limited by the resolution of the spectrometer is given by

z_{max} = \frac{N\pi}{2\Delta k}   (36.6)

Eq. (36.4) shows that the axial resolution of SD OCT is inversely proportional to the bandwidth of the light source. Thus spectral measurements with both high spectral resolution and large bandwidth are needed for SD OCT imaging that requires both large imaging depth and high axial resolution. This requires a large linear-array camera, which can be quite expensive. In addition, a slow sampling rate will increase the imaging time, which makes the imaging susceptible to motion artifacts. It also produces a large amount of data, which becomes a heavy burden on image storage and transfer.
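A short numeric sketch of Eq. (36.6), using an assumed 100 nm spectrometer span around 850 nm, shows how the camera pixel count N trades off against imaging depth:

```python
import math

# Illustrative spectrometer: total wavenumber bandwidth dk from an assumed
# 100 nm wavelength span around 850 nm (not values from the chapter).
lam_lo, lam_hi = 800e-9, 900e-9                     # wavelength span (m)
dk = 2 * math.pi / lam_lo - 2 * math.pi / lam_hi    # bandwidth (rad/m)

# Eq. (36.6): maximum imaging depth grows linearly with the pixel count N.
for N in (1024, 2048, 4096):
    z_max = N * math.pi / (2 * dk)
    print(f"N={N}: z_max = {z_max * 1e3:.2f} mm")
```

Doubling the pixel count doubles the depth range at fixed bandwidth, which is why deep-range, high-resolution SD OCT needs large (and expensive) line-scan cameras.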


36.2.3.4 Sensitivity of spectral-domain optical coherence tomography The sensitivity of an SD OCT system can be expressed as [71]

S_{SDOCT} = \frac{\left[ \frac{1}{N} \frac{\rho \eta T}{h \nu_0} P_0 \gamma_r \gamma_s R_r \right]^2}{ T \frac{\rho \eta}{h \nu_0} \frac{P_0}{N} \gamma_r R_r \left[ 1 + \frac{1 + \Pi^2}{2} \frac{\rho \eta}{h \nu_0} \frac{P_0}{N} \gamma_r R_r \frac{N}{\Delta \nu_{eff}} \right] + \sigma_{rec}^2 }   (36.7)

where N is the number of pixels obtained at the detector, ρ is the efficiency of the spectrometer, η denotes the quantum efficiency of the detector, T is the CCD/CMOS detector integration time, h is Planck's constant, ν_0 is the center frequency, P_0 is the output power of the source, and γ_r and γ_s are the parts of the input power that enter the spectrometer from the reference and sample arms, respectively. R_r is the power reflectivity of the reference mirror, Π is the polarization state of the source, Δν_eff is the effective spectral line width of the light source, and σ_rec is the RMS of the receiver noise. The three terms in the denominator of Eq. (36.7) have different meanings: the first is the shot noise, the second the excess noise, and the third the receiver noise.

36.2.4 High-speed optical coherence tomography using graphics processing unit (GPU) processing


Due to their fast imaging speed, OCT systems are suitable for use as clinical interventional imaging systems. To provide accurate and timely visualization, real-time image acquisition, reconstruction, and visualization are essential. However, in current ultrahigh-speed OCT technology, the reconstruction and visualization speeds (especially for 3D volume rendering) generally lag far behind the data acquisition speed. Therefore most high-speed 3D OCT systems work either in low-resolution modes or in a postprocessing mode, which limits their intraoperative surgical applications. To overcome this issue, several parallel processing methods have been implemented to accelerate the processing of FD OCT A-scan data. The technique adopted by most commercial systems is based on multicore CPU parallel processing. Such systems have been shown to achieve an 80,000 line/s processing rate on nonlinear-k polarization-sensitive OCT systems and 207,000 line/s on linear-k systems, both with 1024 points per A-scan [72,73]. Nevertheless, CPU-based processing is inadequate for real-time 3D video imaging; even a single 3D image display can take multiple seconds. To achieve ultrahigh-speed processing, GPGPU (general-purpose computing on graphics processing units) technology can accelerate both the reconstruction and the visualization of ultrahigh-speed OCT imaging [69,74,75]. The signal processing flow chart of the dual-GPU architecture is illustrated in Fig. 36.7, where three major threads are used for FD OCT raw data acquisition (Thread 1), GPU-accelerated FD OCT data processing (Thread 2), and GPU-based volume rendering (Thread 3). The three threads synchronize in pipeline mode, where Thread 1 triggers Thread 2 for every B-scan and Thread 2 triggers Thread 3 for every complete C-scan, as indicated by the dashed arrows. The solid arrows describe the main data stream and the hollow arrows indicate the internal data flow of the GPU.

Since the CUDA technology currently does not support direct data transfer between GPU memories, a C-scan buffer is placed in the host memory for the data relay [75]. Such a dual-GPU architecture separates the computing tasks of signal processing and visualization onto different GPUs, which has the following advantages: (1) Assigning different computing tasks to different GPUs makes the entire system more stable and consistent. In the real-time 4D imaging mode, volume rendering is only conducted when a complete C-scan is ready, while B-scan frame processing runs continuously. Therefore, if the signal processing and the visualization were performed on the same GPU, competition for GPU resources would arise whenever volume rendering starts while B-scan processing is still in progress, which could result in instability for both tasks. (2) It is more convenient to enhance system performance from a software engineering perspective. For example, the A-scan processing could be further accelerated and the PSF refined by improving the algorithm on GPU-1, while a more complex 3D image processing task, such as segmentation or target tracking, can be added on GPU-2. Fig. 36.8 provides an overview of a stereo microscope with iOCT as well as a pair of digital cameras, allowing simultaneous capture of both the stereo image pair and the iOCT images. The iOCT and the digital cameras share a large part of the optical path. This is convenient, as zoom adjustments will be reflected equally in the stereo camera and the iOCT scanner. While such and similar layouts offer powerful measurement tools for capturing the retina, the quality still depends on the state and alignment of all intervening media. The advanced instruments described next (Section 36.3) bypass these problems by directly measuring inside the patient's eye.
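The triggering scheme of the three-thread pipeline can be mimicked in miniature with ordinary CPU threads and queues; the stage bodies below are toy stand-ins (string transforms instead of FFT reconstruction and rendering), purely to illustrate the per-B-scan and per-C-scan triggering:

```python
import queue
import threading

# Toy pipeline dimensions (illustrative, not from the chapter).
B_PER_C = 4          # B-scans per complete C-scan
N_CSCANS = 2         # number of volumes to "acquire"

raw_q = queue.Queue()      # Thread 1 -> Thread 2, one item per B-scan
cscan_q = queue.Queue()    # Thread 2 -> Thread 3, host-memory C-scan relay
rendered = []              # sizes of the volumes "rendered" by Thread 3

def acquire():             # Thread 1: raw data acquisition
    for i in range(B_PER_C * N_CSCANS):
        raw_q.put(f"bscan-{i}")        # trigger Thread 2 per B-scan
    raw_q.put(None)                    # end-of-stream sentinel

def process():             # Thread 2: per-B-scan reconstruction
    volume = []
    while (item := raw_q.get()) is not None:
        volume.append(item.upper())    # stand-in for FFT reconstruction
        if len(volume) == B_PER_C:     # complete C-scan: trigger Thread 3
            cscan_q.put(volume)
            volume = []
    cscan_q.put(None)

def render():              # Thread 3: per-C-scan volume rendering
    while (vol := cscan_q.get()) is not None:
        rendered.append(len(vol))      # stand-in for volume rendering

threads = [threading.Thread(target=f) for f in (acquire, process, render)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# rendered == [4, 4]: two volumes, each built from four B-scans
```

The key property, as in the dual-GPU design, is that rendering runs only once per complete C-scan while B-scan processing runs continuously, so the two stages never contend for the same compute resource.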

Handbook of Robotic and Image-Guided Surgery

FIGURE 36.7 Signal processing flow chart of ultrahigh-speed OCT based on a dual-GPU architecture. Dashed arrows, thread triggering; solid arrows, main data stream; hollow arrows, internal data flow of the GPU. Here the graphics memory refers to global memory. GPU, Graphics processing unit; OCT, optical coherence tomography.

36.3 Advanced instrumentation

Over the last few decades advances in instrumentation have significantly altered retinal surgery practice. The development of pars-plana vitrectomy in the 1970s by Machemer formed a key milestone [76]. Kasner had discovered in 1962 that the vitreous humor, the clear gel-like structure that occupies the intraocular space between the lens and retina (Fig. 36.1), could be removed, amongst others providing unrestricted access to the retina [77,78]. Machemer developed pars-plana vitrectomy, a minimally invasive technique to remove the vitreous humor. In this technique a so-called vitrectome is introduced at a distance of 3–4 mm from the limbus, the place where the cornea and sclera meet. This region, the so-called pars plana, is free from major vascular structures. The retina typically starts 6–8 mm posterior to the limbus. There is thus little risk of retinal detachment when making an incision in the pars plana to create access to the intraocular space [77]. A vitrectome, a suction cutter, is then used to remove the vitreous humor, which can be replaced by a balanced salt solution. Often a three-port approach is adopted where, aside from the vitrectome, a second incision is made to connect a supply line that provides the salt solution at constant pressure. A third incision is used to pass a light-guide providing local illumination. Vitrectomy clears the path for other instruments to operate on the retina. Modern retinal instruments include retinal picks, forceps, diamond-dusted membrane scrapers, soft-tip aspiration cannulas, cauterization tools, coagulating laser fibers, chandeliers (illuminating fibers), and injection needles. There is a trend to miniaturize these instruments, with a particular focus on the diameter. In retinal surgery the instrument diameter is expressed in the Birmingham wire gauge (BWG) system. The BWG system is often simply termed Gauge and abbreviated as G; Table 36.4 shows the corresponding dimensions in millimeters.
When the diameter drops to 25 G (0.5 mm) the required incisions become self-sealing so that there

Robotic Retinal Surgery Chapter | 36

FIGURE 36.8 Layout of a stereo microscope with iOCT and digital cameras. Frontal view upon a commercial stereo microscope with mounts for digital cameras and iOCT. iOCT, Intraoperative optical coherence tomography.

TABLE 36.4 Lookup table—instrument dimensions from Birmingham wire gauge.

Gauge   20      21      22      23      24      25      26      27      28      29
mm      0.889   0.813   0.711   0.635   0.559   0.508   0.457   0.406   0.356   0.330

is no need to suture the incisions, and the risk of inflammation is reduced. However, 25 G instruments are more compliant and may bend (e.g., when trying to reposition the eye). Some retinal surgeons therefore prefer larger and stiffer 23 G (0.635 mm) instruments. Next to the more “traditional” instruments, various sophisticated instruments have been reported featuring integrated sensing capability (Sections 36.3.1 and 36.3.2), as listed in Table 36.5, or enhanced dexterity (Section 36.3.4). In contrast to methods relying on external measurement (Section 36.2) or actuation (Section 36.5), these instruments directly measure and act in the intraocular space, bypassing the complex optical path formed by the plurality of lenses and intervening media, and avoiding the effect of the scleral interface. Therefore they potentially allow a more precise acquisition of, understanding of, and control over the interaction with the retinal tissue.
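Table 36.4 is a pure lookup, which can be captured in a few lines; the values below are copied from the table and the helper name is our own.

```python
# Birmingham wire gauge -> outer diameter in mm (values from Table 36.4).
BWG_MM = {20: 0.889, 21: 0.813, 22: 0.711, 23: 0.635, 24: 0.559,
          25: 0.508, 26: 0.457, 27: 0.406, 28: 0.356, 29: 0.330}

def gauge_to_mm(gauge: int) -> float:
    """Convert an instrument gauge (G) to its outer diameter in millimeters."""
    try:
        return BWG_MM[gauge]
    except KeyError:
        raise ValueError(f"gauge {gauge} not in lookup table") from None

print(gauge_to_mm(25))  # -> 0.508 (the self-sealing 25 G size)
print(gauge_to_mm(23))  # -> 0.635 (the stiffer 23 G alternative)
```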

36.3.1 Force sensing

Fragile structures at the retina may get damaged when subjected to excessive forces. As these excessive forces often lie below human perceptual thresholds [9], this is not an imaginary problem. Gonenc et al. describe iatrogenic retinal breaks, vitreous hemorrhage, as well as subretinal hemorrhages following peeling procedures [42]. When too large a force is applied to a cannulated vessel, it may tear or be pierced. This may lead to serious bleeding, or to unintentional injection of a thrombolytic agent into subretinal layers, which would cause severe trauma. Over the years researchers have presented several sensors for measuring the force applied on retinal structures; Fig. 36.9 shows a timeline with some developments in this regard.

36.3.1.1 Retinal interaction forces

Gupta and Berkelman employed strain gauges glued upon or integrated in the instrument handle [9,79,80]. These early works provided a first insight into the governing interaction forces with the retina. Gupta showed that for 75% of the time interaction forces stayed below 7.5 mN, under human perceptual thresholds. Berkelman developed a three-degree-of-freedom (DoF) sensor based on a double cross-flexure beam design [79,80]. Aside from submillinewton precision, Berkelman’s sensor behaves isotropically in three DoFs. With a 12.5 mm outer diameter (O.D.) this sensor can only be integrated in the instrument handle. The sensor therefore not only picks up the interaction forces at the




TABLE 36.5 Overview of sensor-integrated vitreo-retinal instruments.

Measurand                   Technology (references)
Retinal interaction force   Strain gauge [9,79,80]; Fabry–Pérot [81,82]; FBG [13,14,17,42,83–87]
Scleral interaction force   FBG [46,88,89]
Proximity, depth            OCT [17,45,87,90,91]
Puncture                    Impedance [92]
Oxygen                      Photoluminescence [93]

FBG, Fiber Bragg grating; OCT, optical coherence tomography.

FIGURE 36.9 Force sensing for retinal surgery. Evolution of force sensing over recent years and integration of force-sensing technology in instruments for retinal surgery.

retina, but also forces that develop at the incision in the sclera. Since the latter are typically an order of magnitude larger [8], it is difficult to estimate the interaction forces at the retina. Researchers therefore sought to embed sensors in the shaft of the surgical instruments to measure directly in the intraocular space. The first intraocular force sensor, by Sun et al., employed fiber Bragg grating (FBG) optical fibers [11]. Fiber-optical sensors are attractive as they can be made very small, are immune to electrical noise, and are sterilizable [82]. Sun started with a single 160 μm FBG strain sensor. The sensor was glued in a square channel manufactured along the longitudinal axis of a 0.5 mm diameter titanium wire, mimicking 25 G ophthalmic instruments [11]. The sensitive part of the optical fiber, that is, a 1 cm long portion where the Bragg grating resides, was positioned near the distal instrument tip such that interaction forces at the sclera would not be picked up. A measurement resolution of 0.25 mN was reported. During experiments on fertilized chicken eggs, forces between 8 and 12 mN were found when peeling the inner shell membrane (ISM). Forces in the range of 1–3 mN were measured during vessel cannulation experiments. In a follow-up work a two-DoF version was developed by routing three fibers along the longitudinal axis of a 0.5 mm diameter instrument [13]. This sensor measures transverse force components perpendicular to the instrument axis. Through differential measurement the effect of temperature could be canceled out. Experiments were conducted on, respectively, a raw egg


membrane, a chorioallantoic membrane (CAM), which is an extraembryonic membrane of a fertilized chicken egg [3], and on a rabbit [14]. Peeling forces varied between 0.2–1.5 and 1.3–4.1 mN; for the rabbit the average minimum force to delaminate the hyaloid was 6.7 mN. Damage appeared in the CAM model when forces exceeded 5.1 mN, whereas retinal tears occurred in the rabbit from forces beyond 6.4 mN. Gonenc et al. used a similar design on a hook with the Micron, a handheld robotic instrument, to measure forces while peeling different membrane models [42]. He et al. introduced a three-DoF force-sensing pick which is also sensitive to axial forces; a superelastic Nitinol flexure makes the tool sensitive in the axial direction [85]. The RMS error was below 1 mN in all directions [85]. Several works introduced microforceps with integrated force sensing [84,85,94]. Gonenc et al. devised a method to ensure that grasping does not affect the force reading [94]. Kuru et al. developed a modular setup allowing exchange of the forceps within a nondisposable force-sensing tube [86]. Gijbels et al. developed a stainless steel cannulation needle with two-DoF FBG sensing and an 80-μm needle tip [10,83]. The sensor resolution was 0.2 mN. Repeatable force patterns were measured when cannulating synthetic vessels in a polydimethylsiloxane (PDMS, a silicon-based organic polymer) retina [95] and cadaver pig eyes [10]. Whereas the force sensor is only sensitive to transverse forces, typical cannulation needles, including those from Gijbels [83] and Gonenc et al. [96], have a distal tip that is bent at an angle close to 45 degrees to ease cannulation [12]. This angulation renders the sensor also sensitive to puncture forces, which are hypothesized to mainly occur in the direction of the needle tip. Gonenc et al. mounted a force-sensing microneedle on the Micron handheld robotic system and cannulated vessels on a CAM surface.
They reported cannulation forces rising on average from 8.8 to 9.33 mN for speeds increasing from 0.3 to 0.5 mm/s [96]. In Gijbels’ work cannulation forces ranged between 0.6 and 17.5 mN, but in 80% of the cases they were below 7 mN [10]. Whereas the majority of works involve sensors based on FBGs, a number of instruments have been presented that employ the Fabry–Pérot interferometry (FPI) measurement principle [81,82]. With FPI, light exiting an optical fiber is reflected back between reflective surfaces on both sides of a flexure body. Under load the flexure deforms, affecting the backscattered light. FPI is in general more affordable than FBG, but manufacturing precise flexures is challenging. A further challenge lies in making sure the instrument operates robustly despite the flexure. Liu et al. used FD CP OCT to interrogate the change in cavity length of a Fabry–Pérot cavity. By multiplexing three signals they constructed a 3D force sensor with a diameter below 1 mm on a retinal pick [82]. Fifanski et al. also built a retinal pick with force sensing based on FPI. This instrument has only one DoF but has a 0.6 mm O.D., close to current practice [81].
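The differential temperature compensation mentioned for the multi-fiber FBG designs can be illustrated numerically: a uniform thermal wavelength shift is common to all fibers and disappears when the per-fiber mean is subtracted, while bending strain due to a transverse force survives. The 120-degree fiber layout and the gain K below are assumptions for illustration, not the calibration of any cited sensor.

```python
import numpy as np

# Three FBG fibers spaced 120 degrees around the tool shaft (illustrative geometry).
angles = np.deg2rad([0.0, 120.0, 240.0])
K = 0.01  # wavelength shift per unit transverse force per fiber (nm/mN, assumed)

def transverse_force(dlambda):
    """Estimate (Fx, Fy) from three FBG wavelength shifts.
    Subtracting the mean removes common-mode effects such as temperature."""
    d = np.asarray(dlambda) - np.mean(dlambda)  # differential measurement
    # Bending strain of fiber i ~ K * (Fx*cos(a_i) + Fy*sin(a_i)); invert by least squares.
    A = K * np.column_stack([np.cos(angles), np.sin(angles)])
    fx, fy = np.linalg.lstsq(A, d, rcond=None)[0]
    return fx, fy

# Simulate 5 mN along x plus a uniform thermal drift of 0.2 nm on every fiber.
true_f = np.array([5.0, 0.0])
shifts = K * (np.cos(angles) * true_f[0] + np.sin(angles) * true_f[1]) + 0.2
fx, fy = transverse_force(shifts)
print(np.allclose([fx, fy], true_f))  # -> True: the thermal term cancels out
```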

36.3.1.2 Scleral interaction forces

An underappreciated limitation of current robotic systems is the lost perception of forces at the scleral interface where the tool enters the eye. During freehand manipulation, surgeons use this force to orient the eye or to pivot about the incision so as to limit stress and/or keep the eye steady. In robot-assisted retinal surgery the stiffness of the robotic system attenuates the user’s perception of the scleral forces [88]. This may induce undesirably large scleral forces with the potential for eye injury. For example, Bourla et al. reported excessive forces applied to the sclera due to misalignment of the remote-center-of-motion (RCM) of the da Vinci [97]. He et al. developed a multifunction force-sensing ophthalmic tool [46,88] that simultaneously measures and analyzes tool–tissue forces at the tool tip and at the tool–sclera contact point. A robot control framework based on variable admittance uses this sensory information to reduce the interaction force. He et al. reported large scleral forces exceeding 50 mN, and tool deflection complicating positioning accuracy, when insufficient care was paid to the scleral interaction forces; forces dropped to 3.4 mN otherwise [88]. Following the same control framework, a force-sensitive light-guide was developed by Horise et al. that can accommodate the patient’s eye motion. Such a smart light-guide could support bimanual surgery, as the microsurgeon can use his/her second hand to manipulate other instruments instead of the light-guide [89].

36.3.1.3 Force gradients

For some applications, such as detection of puncture or contact state, it is more robust to look at relative changes rather than absolute forces. For example, Gijbels et al. and Gonenc et al. look at the force transient to detect the puncture of a retinal vein [10,95,96]. A threshold of 2–3 mN/s was found to detect punctures with a 98% success rate [10]. In 1–2% of the cases a false-positive detection was made, for example, when the threshold was hit upon retraction. Double punctures, that is, where the vein is pierced through, were also successfully detected as they lead to two rapidly succeeding force transients.
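A minimal sketch of the transient-based puncture detection described in Section 36.3.1.3: flag an event when the force drops faster than a threshold in the reported range, with a refractory period so one puncture is not counted twice while two well-separated transients (a double puncture) still register. The threshold, sampling rate, and force trace below are illustrative, not the cited data.

```python
def detect_punctures(forces_mN, dt_s, threshold_mN_per_s=2.5, refractory_s=0.1):
    """Return event times where the force derivative drops faster than the
    threshold. A short refractory period avoids double-counting a single
    event while still resolving two rapidly succeeding transients."""
    events, last_t = [], float("-inf")
    for i in range(1, len(forces_mN)):
        rate = (forces_mN[i] - forces_mN[i - 1]) / dt_s
        t = i * dt_s
        if rate <= -threshold_mN_per_s and t - last_t >= refractory_s:
            events.append(t)
            last_t = t
    return events

# Force rises as the needle loads the vessel wall, then drops sharply at puncture.
dt = 0.05  # 20 Hz sampling (illustrative)
trace = [0.0, 1.0, 2.0, 3.0, 4.0, 4.5, 2.0, 1.8, 1.7, 1.7]  # mN
print(detect_punctures(trace, dt))  # one event, at the sharp 4.5 -> 2.0 drop
```

Note that a retraction can also produce a negative transient, which is exactly the false-positive mode reported in the text; in practice such detectors are gated by the phase of the procedure.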


36.3.2 Optical coherence tomography

Force or force gradients can help improve understanding of the current state, but offer little help in anticipating a pending contact or state transition, nor do they provide much insight into what is present below the surface. SD OCT systems (Section 36.2.3) achieve <5 μm axial resolution in tissue [98] and have imaging windows larger than 2–3 mm. As such they are considered very useful to enhance depth perception in retinal applications. Several researchers have developed surgical instruments with integrated optical fibers to make this imaging technology available at the instrument tip. The fibers may be directly connected to an OCT-engine, or, when using iOCT systems, they may be routed to the OCT-engine via an optical switch that allows rerouting the OCT signal either to the fiber or to the intraoperative scanner [99]. The single fiber is typically introduced in a slot along the longitudinal direction of the surgical instrument and inserted alongside the instrument into the eye. The single fiber can then be used to generate an axial OCT-scan or A-scan (Fig. 36.6) that provides information on the tissue and tissue layers in front of the OCT-beam, which radiates within a narrow cone from the OCT fiber. By making lateral scanning motions, the axial OCT-scan can be used to create B-scans or C-scans. Han et al. integrated a fiber-optic probe with FD CP OCT [100] into a modified 25 G hypodermic needle shaped as a retinal pick. They showed how the multiple layers of the neurosensory retina can be visualized through an A-scan and further reconstructed B- and C-scans from a rat cornea [91]. The fiber of Liu et al. [100] was also used by Yang et al. to generate B- and C-scans out of A-scans with the Micron, a handheld micromanipulator [101]. Balicki et al. embedded a fiber in a 0.5-mm retinal pick for peeling the ERM [90]. The instrument was designed so that the tool tip itself was also visible in the A-scan.
Through some basic filtering both the tool tip and the target surface could be extracted. In this layout registration is highly simplified, as the distance to the target is simply the distance between the observed tip and the observed anatomical structure, omitting the need to calibrate the absolute tip location. Song et al. integrated an OCT-fiber in a motorized microforceps, which assesses the motion of the forceps relative to the target. The fiber is glued along the outside, fixed to one “finger” of the forceps, so as to avoid artifacts when tissue is grasped [50]. Kang and Cheon [45] developed a CP OCT-guided microinjector based on a similar piezo-actuation stage to conduct subretinal injections at specific depths. Given the multilayer structure of the retina, simple peak detection algorithms may mistakenly provide the distance to a layer that differs from the retinal surface. More sophisticated algorithms were developed to take the multilayer structure into account, amongst others by Cheon et al., who proposed a shifted cross-correlation method in combination with a Kalman filter [52], and Borghesan et al., who compared algorithms based on an unscented Kalman filter to an algorithm based on the particle filter [102]. Recently, within EurEyeCase, an EU-funded project on robotic retinal surgery [103], the first human experiments (five subjects) with robot-assisted fiber-based OCT were conducted. A needle with an integrated OCT-fiber was moved toward the retina. The feasibility of installing a virtual bound at a safe distance from the retina was confirmed [104]. In the same project cannulation needles featuring combined force and OCT-sensing were developed [87] and tested in vivo on pig eyes [17]. In one of the explored configurations four grooves were made along the longitudinal axis of the instrument. In two grooves a pair of FBG optical fibers were inserted and glued. In one of the remaining grooves an OCT-fiber was glued.
The latter was used to estimate the distance from the tip of the cannulation needle to the surface. The OCT-fiber was retracted with respect to the cannulation needle such that, even during cannulation, the OCT-fiber tip remained at a certain distance from the surface, allowing estimation of the cannulation depth relative to the retinal surface.
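The tip-in-the-A-scan layout of Balicki et al. reduces distance estimation to peak picking: find the tip peak and the surface peak and multiply their separation by the axial resolution. The sketch below uses a deliberately naive two-peak search on a synthetic A-scan; the cited works use considerably more robust filtering and tracking.

```python
import numpy as np

def tip_to_surface_distance(ascan, axial_res_um, min_separation=20):
    """Estimate tip-to-surface distance from a single A-scan in which both the
    tool tip and the retinal surface appear as intensity peaks. No absolute
    tip calibration is needed: only the peak separation matters."""
    first = int(np.argmax(ascan))                 # strongest peak: tool tip
    rest = ascan.astype(float).copy()
    lo = max(0, first - min_separation)
    rest[lo:first + min_separation] = -np.inf     # mask around the first peak
    second = int(np.argmax(rest))                 # next peak: anatomical surface
    return abs(second - first) * axial_res_um

# Synthetic A-scan: tip reflection at depth bin 40, retinal surface at bin 150.
ascan = np.zeros(512)
ascan[40], ascan[150] = 1.0, 0.8
print(tip_to_surface_distance(ascan, axial_res_um=5.0))  # -> 550.0 (um)
```

With a real multilayer retina, the second peak may correspond to a deeper layer rather than the surface, which is exactly why the shifted cross-correlation and Kalman/particle-filter approaches cited above were developed.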

36.3.3 Impedance sensing

Several works have looked at electrical impedance sensing to measure different variables. In an early work Saito et al. used electrical conductivity for venipuncture detection in a rabbit [105]. A similar approach was followed by Schoevaerdts et al. [92] for eye surgery. The goal was to detect contact with a retinal vessel and a shift in impedance when puncturing the vessel. Similar to force sensing, a change in impedance was expected to occur when the sensor passed from a pure vitreous-like environment to contact with a vessel wall, and subsequently to contact with blood in a cannulated vessel. Experiments on ex vivo pig eyes showed a detection rate of 80%. The feasibility of detecting double punctures was also confirmed by Schoevaerdts. A side product of the impedance sensing was the possibility to detect air bubbles in the supply line through which the medicine is to be injected [106]. Given the small size and fragile nature of the targeted vessels, the presence of air in a supply line forms an important problem. The air may pass through the tiny lumen of a cannulation needle and end up in the targeted vessel. Due to the lower pressure in


the vessel (compared to the high pressure needed to push the drugs through the needle tip), the air could rapidly expand inside and potentially damage the targeted vessel itself.

36.3.4 Dexterous instruments

Whereas conventional retinal tools are generally straight, several instruments featuring distal dexterity have been designed to date [15,48,107–113] (see Fig. 36.10). These instruments enter the intraocular space in a straight fashion but can then be reconfigured, taking on a curved or bent shape inside the eye. Thanks to the distal dexterity, a larger part of the targeted region can be reached with reduced risk of colliding with the lens. Anatomic targets may also be reached under different angles. This in turn can help reduce the force that is applied at the sclera. Ikuta et al. [114] developed a microactive forceps 1.4 mm in diameter, with built-in fiberscope [110]. In Ikuta’s design a sophisticated interface allows bending the 5 mm distal segment over a range of 45 degrees while still allowing normal operation of the gripper. Wei et al. introduced a 550 μm preshaped superelastic NiTi tube. Restrained by a cannula 0.91 mm in diameter, the tube enters the eye in a straight fashion; the part that protrudes out of the cannula takes on its preshaped form once again. By regulating the relative displacement between the NiTi tube and cannula the bending angle is adjusted [113]. Hubschman et al. developed a micro-hand of which each finger is 4 mm long and 800 μm wide and consists of six pneumatically actuated phalanxes; each finger can bend 180 degrees and lift up to 5 mN of force [109]. A stent deployment unit (SDU) was a new development from Wei et al. [48]. The SDU consists of three concentric NiTi elements: an outer tube with a preshaped radius of 5 mm that bends at a specified angle when extended outside a stainless steel support tube, a stent-pushing element to deliver the stent, and a guidewire to puncture and create access to the vessel. With an outer tool diameter of 550 μm the instrument is compatible with modern dimensions. The 70-μm guidewire was used to successfully cannulate vessels in agar and in a CAM model.
The authors recommended stents smaller than the 200 μm employed, so as to be able to target smaller structures. Another example of an instrument with distal DoFs is the Intraocular Robotic Interventional Surgical System (IRISS). The IRISS has an O.D. of 0.9 mm but features two distal DoFs, each with a ±90 degree bending angle over an only 3-mm-long section [108,112]. Cao et al. developed an endoilluminator with two bending DoFs, driven by shape memory alloy. Despite good miniaturization potential, the reported endoilluminator was only a 10× scaled version of the targeted 25 G design. More recently Lin et al. introduced a miniature forceps mounted on a concentric tube compatible with 23 G vitreoretinal surgery [111]. The gripper is actuated by a NiTi pull wire, which is said not to interfere with the shape of the concentric tubes when actuated. The development of flexible instruments for retinal surgery is advocated, amongst others, by Bergeles et al., who computed a reduction of retinal forces


FIGURE 36.10 Dexterous vitreoretinal instruments. Overview of novel instruments featuring distal actuation capability.


in the order of 30% when moving flexible instruments through a model of the vitreous humor compared to steering rigid instruments through such an environment [15]. The importance of this work is to be seen in light of the growing interest in vitrectomy-less interventions [115].
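For the precurved-tube instruments above, the bending angle follows directly from the exposed length under the assumption that the protruding part fully recovers its preshaped curvature: theta = L/r. A small worked sketch using the 5 mm preshape radius mentioned for the SDU (the function name and the assumption of full curvature recovery are ours).

```python
import math

def bend_angle_deg(exposed_len_mm, preshape_radius_mm=5.0):
    """Bending angle of a precurved superelastic tube versus the length
    protruding from its straight constraining cannula: theta = L / r,
    assuming the exposed part fully recovers its preshaped curvature."""
    return math.degrees(exposed_len_mm / preshape_radius_mm)

for L in (1.0, 2.5, 5.0):
    print(L, "mm exposed ->", round(bend_angle_deg(L), 1), "deg")
# 5 mm of exposure on a 5 mm preshape radius gives one radian, i.e. ~57.3 deg.
```

Advancing and retracting the inner tube thus gives continuous control over the distal angle, which is the mechanism the text describes for adjusting the bend.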

36.4 Augmented reality

During an intervention surgeons tend to immerse themselves mentally into the intraocular space, favoring visual above nonvisual sensory channels. Under the assumption that visual feedback minimally distracts the natural flow of the operation, augmented reality has been explored extensively to convey additional contextual information. However, augmentation of information poses several specific challenges. First, the processing and rendering of the data have to be performed efficiently to provide timely feedback to the user. This is especially important for situations where the additional data directly provide surgical guidance, as in these cases any delay introduced by the visualization and augmentation would create lag and negatively affect surgical performance. Assuming the needle moves at 1 mm/s, every 10 ms of delay in the control loop introduces 10 μm of position error. Second, identification of the information to be augmented in each surgical step, as well as registration of multimodal images, is not straightforward. Third, visualization and rendering methods for 3D volumetric data are computationally highly demanding, especially when high visual fidelity is to be achieved. Advanced rendering methods which apply realistic illumination simulation produce high-quality results, such as the OCT volume in Fig. 36.11 rendered with Monte-Carlo volume ray-casting. These have been shown to improve the perception of important structures; however, it currently takes several seconds to generate such images, which makes them not directly applicable to real-time imaging. Therefore optimizing approaches for fast rendering and high-quality augmentation is an important research task. This section explains mosaicing, subsurface imaging, depth visualization, and overlaying of pre- and intraoperatively acquired data. Furthermore, a novel approach using auditory feedback as a form of augmentation will be discussed.
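The latency arithmetic behind the first challenge is worth making explicit, since it sets the rendering budget: position error equals tool speed times feedback delay. A two-line sketch (the helper name is ours):

```python
def lag_error_um(tool_speed_mm_s, delay_ms):
    """Position error introduced by feedback delay: error = speed x delay.
    Units work out directly: (mm/s) x (ms) = 1e-3 mm = um."""
    return tool_speed_mm_s * delay_ms

# The chapter's example: a needle moving at 1 mm/s with 10 ms of loop delay.
print(lag_error_um(1.0, 10.0))  # -> 10.0 (um)

# Budgeting other delays at the same tool speed:
for d in (5, 20, 50):
    print(d, "ms ->", lag_error_um(1.0, d), "um")
```

Seen this way, a rendering step that takes even 50 ms already consumes the entire precision budget of a micrometer-scale task.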

36.4.1 Mosaicing

The acquisition of high-resolution retinal images with a large FOV is challenging for technological, physiological, and economic reasons. The majority of imaging devices used in retinal applications, such as slit-lamp biomicroscopes, OCT machines, and ophthalmic microscopes, visualize only a small portion of the retina, complicating the task of localizing and identifying surgical targets and increasing treatment duration and patient discomfort. To optimize ophthalmic procedures, image processing and advanced visualization methods can assist in creating intraoperative retina maps for view expansion (Fig. 36.12). An example of such mosaicing methods, described in Ref. [116], is a combination of direct and feature-based methods, suitable for the textured nature of the human retina. The researchers in this work described three major enhancements to the original formulation. The first is a visual tracking method using local illumination compensation to cope with the challenging visualization conditions. The second is an efficient pixel selection scheme for increased computational efficiency. The third is an entropy-based mosaic update method to dynamically improve the retina map during exploration. To evaluate the performance of the proposed method, they conducted

FIGURE 36.11 OCT volume. High-quality volume rendering of an intraoperative OCT cube from a patient with macular foramen (left); and with surgical instrument (right). OCT, Optical coherence tomography.


FIGURE 36.12 Mosaicing. Mosaicing result obtained from a set of nine images [117].

several experiments on human subjects with a computer-assisted slit-lamp prototype. They also demonstrated the practical value of the system for photo-documentation, diagnosis, and intraoperative navigation.
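At the core of any such mosaicing method is frame-to-frame registration. As a hedged illustration (this is not the method of Ref. [116], which combines direct and feature-based registration with illumination compensation), plain phase correlation recovers the translation between two overlapping frames:

```python
import numpy as np

def estimate_shift(ref, mov):
    """Estimate the integer translation of `mov` relative to `ref` by phase
    correlation: the normalized cross-power spectrum transforms back to a
    sharp peak at the displacement."""
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(mov)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Map shifts larger than half the frame size to negative offsets.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# Two "frames" of a textured scene, the second shifted by (+5, -3) pixels.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
mov = np.roll(ref, shift=(5, -3), axis=(0, 1))
print(estimate_shift(ref, mov))  # -> (5, -3)
```

Once pairwise shifts are known, each incoming frame can be pasted into a growing canvas; the entropy-based update rule of the cited work decides when a canvas region should be overwritten by fresher, better-exposed pixels.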

36.4.2 Subsurface imaging

Recent ophthalmic imaging modalities such as microscope-integrated OCT enable intraoperative visualization of microstructural anatomies in subtissue domains. Conventional microscopic images are therefore subject to modification in order to integrate visualization of these new modalities. Augmenting iOCT data on the en-face images presented to the surgeon comes with challenges, including instrument localization and OCT-to-optical image registration (see Fig. 36.13). Studies


FIGURE 36.13 Screenshot of injection guidance application. (Left) Augmented view of the surgical scene, showing the camera view with the overlaid OCT scanning locations as well as the projected intersection point with the retinal pigment epithelium (RPE) layer. Current and last B-scan are marked with white and blue bars for illustrative purposes; (right) schematic view of the 3D relationships between B-scans (blue), current needle estimate (green), and intersection point with the target surface (red). These relationships cannot easily be inferred from a simple 2D microscope image. OCT, Optical coherence tomography.


describe robust segmentation methods to obtain the needle point cloud within the OCT volume and use retinal vessel structures for online registration of OCT and optical images of the retina [20]. Thanks to the infrared light source of the OCT and the use of geometrical features of the surgical instruments, the segmentation results are robust to illumination variation and specular reflection.

36.4.3 Depth visualization

One important weakness of conventional vitreoretinal surgery is the lack of intraocular depth perception. Currently, surgeons rely on their prior experience to approximate the depth from the shadow of their instrument on the retina. Recent studies show that modern intraoperative imaging modalities such as iOCT are able to provide accurate depth information. Augmented reality can therefore play an important role in intuitively visualizing this depth information.

36.4.4 Vessel enhancement

The morphology of blood vessels is an important indicator for most retinal diseases. The accuracy of blood vessel segmentation in retinal fundus images affects the quality of retinal image analysis and consequently the quality of diagnosis. Contrast enhancement is one of the crucial steps in any retinal blood vessel segmentation approach, and the reliability of the segmentation depends on the consistency of the contrast over the image. Bandara and Giragama [118] presented an assessment of the suitability of a recently introduced spatially adaptive contrast enhancement technique for enhancing retinal fundus images for blood vessel segmentation. The enhancement technique was integrated with a variant of the Tyler Coye algorithm, which had been improved with a Hough line-transform-based vessel reconstruction method. The proposed approach was evaluated on two public datasets, STARE [119,120] and DRIVE [121]. The assessment was done by comparing the segmentation performance with five widely used contrast enhancement techniques based on wavelet transforms, contrast-limited histogram equalization, local normalization, linear unsharp masking, and contourlet transforms. The results revealed that the assessed enhancement technique is well suited for the application and outperforms all compared techniques. In addition to retinal fundus images, OCT and OCTA are other imaging modalities that offer retinal vessel visualization. As discussed before, OCT is a noninvasive, high-resolution medical imaging modality that can resolve morphological features in biological tissue, including blood vessel structures, as small as individual cells, at imaging depths on the order of 1 mm below the tissue surface. An extension of OCT, named OCTA, is able to noninvasively image the vasculature of biological tissue by removing the imaging data corresponding to static tissue and emphasizing the regions that exhibit tissue motion.
OCTA has demonstrated great potential for characterization of vascular-related ocular diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy. Quantitative analysis of OCT and OCTA images, such as segmentation and thickness measurement of tissue layers, pattern analysis to identify regions of tissue where the morphology has been affected by a pathology from regions of healthy tissue, segmentation and sizing of blood and lymph vasculature, etc., has a significant clinical value as it can assist physicians with the diagnosis and treatment of various diseases. However, morphological features of interest in OCT and OCTA are masked or compromised by speckle noise, motion artifacts, and shadow artifacts generated by superficial blood vessels over deeper tissue layers due to scattering and absorption of light by the blood cells. Tan et al. [122] introduced a novel image-processing algorithm based on a modified Bayesian residual transform. Tan’s algorithm was developed for the enhancement of morphological and vascular features in OCT and OCTA images.
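As a minimal illustration of the contrast-enhancement step discussed above, global histogram equalization stretches a low-contrast fundus patch over the full intensity range. The compared techniques (e.g., contrast-limited histogram equalization, local normalization) are local and considerably more sophisticated; this sketch only shows the principle on synthetic data.

```python
import numpy as np

def equalize(img, levels=256):
    """Global histogram equalization: remap intensities through the
    normalized cumulative histogram so the output uses the full range."""
    hist, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())    # normalize to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)  # per-intensity mapping
    return lut[img]

# Low-contrast synthetic fundus patch: intensities squeezed into [100, 140).
rng = np.random.default_rng(1)
img = rng.integers(100, 140, size=(32, 32), dtype=np.uint8)
out = equalize(img)
print(img.min(), img.max(), "->", out.min(), out.max())
```

The printed ranges show the squeezed input being stretched toward the full 0–255 scale, which is exactly what a vessel segmentation stage needs before thresholding.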

36.4.5 Tool tracking

In order to integrate robotic manipulators in retinal microsurgery and to augment the clinician’s perceptual ability, a critical component is the capacity to accurately and reliably estimate the location of an instrument when in the FOV. As microscopes have video recording capabilities, a number of methods have focused on real-time visual tracking of instruments from image data. From an algorithmic point of view, a major challenge is that the instrument appearance is difficult to model over time. Initially, methods relied on knowing the instrument geometry to track the instrument [123,124]. Alternatively, visual servoing has been the basis of a number of methods that overcome the need to know the instrument structure beforehand [125,126]. Unfortunately, such methods have difficulties in dealing with prolonged tracking time and require failure-checking systems. More recent methods have leveraged machine learning to provide fast and robust solutions to the instrument tracking problem. This has ranged from using boosting methods

Robotic Retinal Surgery Chapter | 36

649

FIGURE 36.14 Modality-specific instrument tracking approaches. These tracking approaches are developed and combined to a robust and realtime multimodal instrument tracking approach. The tracked instrument is indicated by the yellow arrow.

FIGURE 36.15 One frame taken from Lumera 700 with integrated iOCT-Rescan 700 (Carl Zeiss AG). Ophthalmic operation in posterior segment on a patient with subretinal hemorrhage; the surgeon is precisely injecting rtPA in the subretinal domain. Surgeons see this side-by-side view of the optical image and OCT image intraoperatively. Yellow circles show the areas that need surgeons attention. (Left) One frame taken from Lumera 700 with integrated iOCT-Rescan 700 (Carl Zeiss AG) Ophthalmic operation in anterior segment; surgeon is performing DMEK operation. Surgeons see this side-by-side view of the optical image and OCT image intraoperatively. Yellow circles show areas that need the surgeon’s attention (right). DMEK, Descemet membrane endothelial keratoplasty; iOCT, intraoperative optical coherence tomography; OCT, optical coherence tomography.

36.4.6

Auditory augmentation

In data augmentation and perception, the visual modality is currently dominant. However, conveying all the available information in an operational environment through the same modality may risk overstimulation and a high cognitive load which could lead to inattentional blindness (see Fig. 36.15). In modern surgical rooms there are many visual displays. Sometimes their number is even higher than the number of surgeons and physicians in the room. Following all these monitors during a surgical procedure can be very difficult. Augmenting the cognitive field with additional perceptual modalities such as audio can sometimes offer a solution to this problem. Audio as a modality plays a substantial role in our perception and provides us with focused or complementary information in an intuitive fashion. Auditory display, and specifically sonification, aims at exploiting the potential of the human auditory system to expand and improve perception. This modality has been less exploited in

36. Robotic Retinal Surgery

[127,128] to random forests [129], as well as a variety of methods that update learned models dynamically to improve robustness [130]. Unsurprisingly however, recent use of deep learning methods has been shown to work extremely well in terms of 2D instrument pose localization, speed, and robustness [131,132]. Perhaps even more promising is the use of high-resolution OCT information at the 3D location of the instrument tip. Given new integrated iOCT capabilities, some preliminary results for tracking instruments with iOCT image data have been shown possible (see Fig. 36.14) and are promising [133,134]. Such combined multimodal instrument tracking approaches may be the key for precise intraocular tracking of surgical instruments. Without a doubt, this will have important relevance in robotic-assisted retinal microsurgery, as the iOCT has an axial resolution of 510 μm, which allows for precise depth information to be estimated and appears far better than pure stereo-based estimation [126].

650

Handbook of Robotic and Image-Guided Surgery

augmented reality and robotic applications so far. Sonification is the transformation of perceptualized data into nonspeech audio. The temporal and spatial advantages of the audio modality suggest sonification as an alternative or complement to visualization systems. The siren alarm of ambulances and parking guidance systems are the most known examples of sonification, which provide us, respectively, with the urgency and distance function in an intuitive way. The omnidirectional nature of sound relieves the surgeon from steadily looking at the monitors and switching between monitors and patient. The works in Refs. [135,136] suggest solutions for using sonification for surgical data augmentation and precise navigation. Sonification methods proposed for robotic vitreoretinal surgery give the surgeon aural feedback about the status of the operation. These studies investigate different digital audio effects on a music track to indicate the current anatomical area where the moving surgical tool is. Data regarding the corresponding area can be acquired from several sources, including OCT.
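As a toy illustration of the distance-to-audio principle described above (the function name, thresholds, and frequency range are our own assumptions, not parameters from Refs. [135,136]), a minimal sonification mapping might look like:

```python
def sonify_distance(distance_um, d_safe=500.0, d_contact=50.0,
                    f_low=220.0, f_high=880.0):
    """Map a tool-to-retina distance (micrometers) to a tone frequency (Hz).

    Beyond d_safe the tone stays at f_low; as the tip approaches the
    d_contact threshold, the pitch rises linearly to f_high, conveying
    proximity without requiring the surgeon to watch a monitor.
    """
    d = max(d_contact, min(distance_um, d_safe))
    proximity = (d_safe - d) / (d_safe - d_contact)  # 0 = safe, 1 = contact
    return f_low + proximity * (f_high - f_low)
```

In a real system the returned frequency would drive an audio synthesizer at the control-loop rate; here the mapping is a pure function so its behavior is easy to verify.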

36.5

State-of-the-art robotic systems

As described in Section 36.1, the requirements for fine motor control in retinal microsurgery are high, exceeding the fundamental physiological capability of many individuals. In order to enhance the capabilities of surgeons, a variety of robotic concepts have been explored that span the spectrum between conventional surgery and full robotic autonomy. Along that spectrum, the major approaches include handheld systems, which are completely ungrounded and maintain much of the essence of conventional surgery; cooperative-control systems, in which the surgeon's hand and the robot are both in direct contact with the surgical instrument; and master-slave teleoperation systems, in which the "master" human-input device is distinct from the "slave" surgical robotic manipulator. As technology continues to advance, all robotic systems have the potential to be partially automated, and all but handheld devices have the potential to be fully automated, although automation has not typically been the primary motivation for the development of robotic retinal-surgery platforms. Table 36.6 provides an overview of 21 distinct robotic retinal-surgery platforms that have been described to date. The majority of these systems are shown in Fig. 36.16. The fundamental working principles behind these systems, and their associated parameters, are explained below.

36.5.1

Common mechatronic concepts

In this section, we begin with an introduction to some of the actuation and mechanism design concepts that are common across multiple robotic retinal-surgery platforms.

36.5.1.1 Electric-motor actuation: impedance-type versus admittance-type

The electric motor is by far the most commonly used actuator in the design of surgical robots. However, even within electric-motor-based systems, varying the inertial and frictional properties—typically through the use of transmissions, such as gearing—can lead to drastically different robot dynamics, to the point of completely changing the input-output causality of the system. At one extreme, impedance-type robots use direct-drive motors with no (or little) gearing, or capstan-like cable transmissions, which results in joints with low friction and low inertia that are easily backdrivable when powered off. The pose of an impedance-type robot must be actively controlled, and gravity compensation is typically employed to prevent the robot from sagging under its own weight. It can be assumed that all external loads will disturb the robot from its desired pose to some degree, although feedback control mitigates these disturbances. From one perspective, impedance-type robots have inherent safety characteristics, in that their maximum force capability is fairly low, which limits the risk of harm to humans in direct contact. In the event of a power-loss failure, these systems can be removed rapidly from the scene because they can simply be backdriven by hand; however, they do not "fail safe" in the sense of remaining in their last commanded position. At the other extreme, admittance-type robots have a significant amount of gearing, reflected inertia, and nonlinear friction, making the joints substantially nonbackdrivable, even when powered off. An admittance-type robot holds its pose whenever it is not actively commanded to move, and requires a control system to move (whereas an impedance-type robot requires a control system to hold its pose).
An admittance-type robot exhibits high precision, and it can be assumed that, when interacting with soft tissue, environmental disturbances have a negligible effect on its pose. From one perspective, admittance-type robots are "fail safe" in the sense that, in the event of a power-loss failure, they remain in their last commanded position.


TABLE 36.6 Overview of systems for robotic retinal surgery.

Group                       | Config. | Actuation | RCM | References | Comments
Automatical Center of Lille | TO      | AT        | CT  | [137,138]  | RCM w/distal insertion
Beihang Univ.               | TO      | AT        | LI  | [139]      | RCM w/proximal insertion
Carnegie Mellon Univ.       | HH      | PZ        | --  | [140,141]  | Handheld 6-DoF parallel
Columbia/Vanderbilt         | TO      | AT        | --  | [113,142]  | Parallel + distal continuum
ETH Zurich                  | TO      | MA        | --  | [143,144]  | Untethered microrobot
Imperial College London     | TO      | AT        | --  | [111]      | Continuum
Johns Hopkins Univ.         | CC      | AT        | LI  | [145,146]  | RCM w/distal insertion
Johns Hopkins Univ.         | TO      | AT        | LI  | [108,147]  | RCM + distal continuum
Johns Hopkins Univ.         | HH      | PZ        | --  | [45,50]    | Handheld 1-DoF prismatic
King's/Moorfields Robot     | CC      | AT        | LI  | [148]      | RCM w/distal insertion
KU Leuven                   | CC      | IT        | LI  | [149,150]  | RCM w/proximal insertion
McGill Univ.                | TO      | IT        | --  | [151,152]  | Parallel macro-micro
NASA-JPL/MicroDexterity     | TO      | IT        | --  | [19,153]   | Serial, cable-driven
Northwestern Univ.          | TO      | AT        | --  | [154,155]  | Parallel
TU Eindhoven/Preceyes       | TO      | AT        | LI  | [156,157]  | RCM w/distal insertion
TU Munich                   | TO      | PZ        | --  | [158,159]  | Hybrid parallel-serial
UCLA                        | TO      | AT        | CT  | [112,160]  | RCM w/distal insertion
Univ. of Tokyo              | TO      | AT        | CT  | [161,162]  | RCM w/distal insertion
Univ. of Tokyo              | TO      | AT        | --  | [53,163]   | Parallel + distal rotation
Univ. of Utah               | TO      | PZ        | --  | [44]       | Serial
Univ. of Western Australia  | TO      | AT        | CT  | [164]      | RCM w/distal PZ insertion

AT, Admittance-type electric motor; CC, cooperatively controlled; CT, circular track; DoF, degree-of-freedom; HH, handheld; IT, impedance-type electric motor; LI, linkage-based; MA, magnetic; PZ, piezoelectric actuators; RCM, remote-center-of-motion; TO, teleoperation.

36.5.1.2 Piezoelectric actuation

Piezoelectric actuators exhibit a strain (i.e., they stretch) when a voltage is applied. These actuators are capable of extremely precise motions, typically measured in nanometers. In addition, motions can be commanded at high bandwidth. However, standard piezoelectric actuators are typically not capable of large motions. Piezoelectric stick-slip actuators utilize a piezoelectric element that stretches when a voltage is applied (e.g., by 1 μm), with a distal element that is moved by the piezoelectric element through static friction. When the piezoelectric element is rapidly retracted, the inertia of the distal element causes slipping relative to the piezoelectric element, resulting in a net displacement of the distal element. The result is an actuator that behaves like a stepper motor with extremely small steps, but with a stochastic step size. By taking multiple successive steps, large net motions are possible. Piezoelectric stick-slip actuators behave much like admittance-type actuators during normal operation, in that they are very precise and they maintain their position when not commanded to move. However, they can be easily backdriven when powered off by overcoming the static friction.
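The stepping behavior described above can be sketched in simulation. The toy model below is entirely hypothetical (not a real motor-driver API) and captures the two essential properties of a stick-slip axis: a stochastic per-step displacement, and the need for external position feedback to reach a target reliably:

```python
import random

class StickSlipAxis:
    """Toy stick-slip axis: each commanded step yields a net displacement
    of roughly one nominal step, with a random step-to-step variation."""

    def __init__(self, step_um=1.0, jitter=0.3, seed=7):
        self.pos_um = 0.0
        self.step_um = step_um
        self.jitter = jitter
        self.rng = random.Random(seed)

    def step(self, direction):
        # Slow extension sticks (carrying the stage); rapid retraction
        # slips, leaving a stochastic net displacement behind.
        scale = 1.0 + self.rng.uniform(-self.jitter, self.jitter)
        self.pos_um += direction * self.step_um * scale

def move_to(axis, target_um, read_encoder, tol_um=0.5, max_steps=100000):
    """Close the loop with an external encoder: keep stepping toward the
    target until the position error falls within tolerance."""
    for _ in range(max_steps):
        error = target_um - read_encoder()
        if abs(error) <= tol_um:
            return True
        axis.step(1 if error > 0 else -1)
    return False
```

For example, `move_to(axis, 250.0, lambda: axis.pos_um)` walks the stage roughly 250 steps to the target despite the unknown step size, which is why stick-slip stages are normally paired with a position sensor.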


A quick-release mechanism may need to be added if the instrument must be removed quickly from the patient's eye in an emergency. Admittance-type robots may have a very high maximum force capability, which represents an inherent safety risk when in direct contact with humans. Impedance-type and admittance-type robots can be viewed as two ends of a continuous spectrum, without definitive boundaries. For the purposes of this chapter, a robot that is difficult or impossible for a human to move when it is powered off is considered an admittance-type robot; otherwise, it is considered an impedance-type robot.


FIGURE 36.16 Representative examples of robotic retinal surgery platforms from the categories of (A) cooperatively controlled robots, (B) teleoperated systems, (C) handheld systems, and (D) microrobotic systems.


Other piezoelectric motors use various forms of “inchworm” strategies in which multiple piezoelectric actuators are successively stretched and relaxed in a sequence that results in a net stepping behavior. From a high-level control perspective, these piezoelectric motors behave much like piezoelectric stick-slip actuators. However, piezoelectric motors are able to generate larger forces and resist higher loads before being backdriven. Another example of a piezoelectric motor is the ultrasonic motor (SQL-RV-1.8 SQUIGGLE motor, New Scale Technologies, NY, United States) used in the Micron handheld robotic instrument [165], which uses a ring of piezoelectric elements to rotate a threaded rod, thus producing linear actuation with a range of motion that is limited only by the length of the threaded rod.

36.5.1.3 Remote-center-of-motion mechanisms

When the surgical instrument passes through a scleral trocar, it must be constrained to move with four DoFs in order to respect the constraint of the trocar; these comprise three-DoF rotation about the center of the trocar and one-DoF translation parallel to the shaft of the instrument (Fig. 36.17). Some retinal robots implement this kinematic constraint in software. Other robots use a dedicated RCM mechanism to implement the kinematic constraint mechanically. RCM mechanisms provide an additional layer of safety, in that no system failure can cause the robot to violate the kinematic constraint of the trocar and potentially harm the sclera. Two basic RCM designs have dominated the design of retinal robots. The most common is a linkage mechanism (e.g., a double parallelogram) preceded proximally by a rotary joint, with axes that intersect at a point (the location of the RCM). The second most common is a circular track preceded proximally by a rotary joint, with axes that intersect at a point. Both of these base mechanisms are typically succeeded distally by a rotary joint to rotate the instrument about its shaft and a prismatic joint to translate the instrument along its shaft (i.e., to insert/withdraw the instrument), which completes the four-DoF mechanism. However, recent innovations in linkage-based RCM mechanisms have eliminated the distal prismatic joint, simplifying the portion of the robot that is closest to the eye and microscope; the instrument-translation DoF is in this case enabled by a more complex proximal mechanism [139,150]. In order to rotate the eye in its orbit to image the complete retina (Fig. 36.17), RCM mechanisms must be preceded proximally by additional DoFs to move the location of the RCM point.
This is typically accomplished by a simple three-DoF Cartesian stage, which need not have the precision of the RCM mechanism, since its only function is eye rotation and it is not directly involved in the control of the instrument with respect to the retina. It must be noted that the inherent safety motivating the use of an RCM mechanism is somewhat reduced by the addition of this proximal positioning system, as its motion can easily violate the trocar constraint and should therefore be commanded with sufficient care.
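For robots that implement the RCM constraint in software, the four permitted DoFs can be computed from a desired tip velocity. The sketch below (helper functions and names are ours, for illustration only, not drawn from any cited system) decomposes a Cartesian tip velocity into an insertion rate along the shaft plus an angular velocity about the trocar point, so that the shaft always continues to pass through the trocar:

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def rcm_decompose(tip, trocar, v_des):
    """Split a desired tip velocity into (insertion_rate, omega) such that
    the instrument shaft keeps passing through the trocar point.

    With r = tip - trocar along the shaft, the component of v_des along r
    is produced by insertion, and the perpendicular component by rotating
    about the trocar with omega = (r x v_des) / |r|^2.
    """
    r = sub(tip, trocar)
    r2 = dot(r, r)
    axis = tuple(x / r2 ** 0.5 for x in r)          # unit shaft direction
    insertion_rate = dot(v_des, axis)               # translation along shaft
    omega = tuple(x / r2 for x in cross(r, v_des))  # rotation about trocar
    return insertion_rate, omega

def tip_velocity(insertion_rate, omega, tip, trocar):
    """Tip velocity produced by the two permitted motion components."""
    r = sub(tip, trocar)
    axis = tuple(x / dot(r, r) ** 0.5 for x in r)
    w_x_r = cross(omega, r)
    return tuple(w_x_r[i] + insertion_rate * axis[i] for i in range(3))
```

Recombining the two components reproduces the desired tip velocity exactly, which illustrates why the trocar constraint does not restrict tip positioning: it only restricts how the shaft may move at the entry point.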

FIGURE 36.17 Instrument motion DoFs divided into (left) four DoFs that do not alter the location of an optimal pivot point located centrally in the pars plana and (right) two DoFs that alter the orientation of the eye in its orbit by displacing the pars plana. DoF, Degree-of-freedom.


36.5.2

Handheld systems

The first class of retinal robotic systems that we consider comprises handheld devices in which mechatronic components intervene between the surgeon's hand and the tip of the instrument. Of all the systems that we consider, handheld systems are the closest to existing clinical practice and workflow, with the surgeon retaining a great deal of direct control over the instrument, including the ability to rapidly remove it from the eye. Because handheld devices are mechanically ungrounded, they are able to affect, but not fully control, the steady-state pose of the instrument. In this regard, handheld systems are robotic systems only in the broadest sense, and might better be described as "mechatronic" systems. They are best suited to compensating for small motions, particularly over time scales that are too fast for the surgeon to react. Unlike teleoperated systems, which can enhance accuracy by filtering error out of commands sent to the manipulator, handheld systems can reduce error only by means of active compensation. The handheld-system concept that has received the most attention is the Micron system from Carnegie Mellon University [140,141,166–168]. The Micron uses a six-DoF Stewart-platform (also known as hexapod) parallel mechanism driven by piezoelectric motors. The motion of the handle is tracked (e.g., optically [169] or electromagnetically [170,171]) using an external system. The control system attempts to extract the intentional motion of the operator and to cancel out all unintentional motion, including the tremor of the operator's hand. Researchers at Johns Hopkins University have created a class of "SMART" (sensorized micromanipulation-aided robotic-surgery tools) instruments that incorporate one-DoF motion control into a handheld instrument. SMART instruments incorporate a CP OCT fiber into the instrument to measure the distance between the instrument and the retina, and use a piezoelectric motor to move the instrument's end-effector prismatically.
This active DoF, which is directed normal to the retina during operation, is automatically controlled using real-time feedback from the OCT in an attempt to maintain a fixed distance between the instrument's end-effector and the retina, in spite of surgeon tremor. To date, the group has developed a microforceps [50] and a microinjector [45] based on this concept.
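The distance-keeping behavior can be illustrated with a toy one-dimensional servo loop. The gains, rates, and tremor model below are hypothetical assumptions for illustration, not published parameters of the SMART instruments:

```python
import math

def servo_velocity(distance_um, target_um=500.0, gain_per_s=5.0,
                   vmax_um_s=2000.0):
    """Proportional control on the OCT-measured tip-to-retina distance:
    positive output extends the tool toward the retina, and the command
    is saturated for safety."""
    v = gain_per_s * (distance_um - target_um)
    return max(-vmax_um_s, min(vmax_um_s, v))

def simulate(steps=2000, dt_s=0.001):
    """Toy simulation: a ~10 Hz hand-tremor velocity disturbance acts on
    the handle while the servo holds the tip near the 500 um target."""
    distance_um = 900.0
    for k in range(steps):
        tremor_um_s = 50.0 * math.sin(2 * math.pi * 10.0 * k * dt_s)
        v = servo_velocity(distance_um)
        # Extending the tool (positive v) reduces the measured distance.
        distance_um += (tremor_um_s - v) * dt_s
    return distance_um
```

In this sketch the loop drives the initial 400 μm error to within a few micrometers of the target despite the continuing disturbance, which is the essence of the automatic standoff control described above.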

36.5.3

Cooperative-control systems

The defining feature of cooperative control, sometimes referred to as “hands-on” cooperative control or “comanipulation,” is that both the surgeon’s hand and the robot cooperatively hold the surgical instrument. The resulting instrument motion is defined by input commands from both the surgeon and the robot. To a lesser extent the external environment will also affect instrument motion (mainly through bending of the thin instrument shaft). Cooperative-control systems retain much of the manipulation experience of a traditional surgery for the surgeon. Cooperative-control systems can also be used in a teleoperation configuration with only minor modifications, but the reverse is not true in general. The earliest example of a cooperative-control system for robotic retinal surgery is the Steady-Hand Eye Robot (SHER) [84,145,146] developed at the Johns Hopkins University. The SHER comprises a three-DoF Cartesian robot, followed distally by a linkage-based RCM mechanism, followed distally by a passive rotation joint for rotation about the instrument shaft. Because the robot does not include a dedicated prismatic actuator for instrument insertion/withdrawal, in general the RCM point at the trocar is implemented virtually, involving all DoF of the robot. However, the RCM mechanism was designed so that the mechanical RCM will correspond to the trocar (and virtual RCM) when the instrument’s end-effector is interacting with the macula; in that location, very little movement of the Cartesian robot is required. The SHER is an admittance-type robot. A force sensor integrated into the instrument handle measures the force applied by the user. This force is used as an input to control the velocity of the robot (i.e., admittance control), which in the simplest case is a linear relationship that creates the effect of virtual damping. The small forces conveyed by human hand tremor can be attenuated through filtering, leading to the “steady hand” designation. 
A similar admittance-type paradigm is currently being pursued at King’s College London and Moorfields Eye Hospital [148]. The system comprises a seven-DoF positioning robot (a six-DoF Stewart platform followed distally by a rotational actuator), followed distally by a linkage-based RCM mechanism, followed distally by a prismatic actuator for instrument insertion/withdrawal. The system from KU Leuven [10,149,150] is the only cooperative-control platform that follows the impedance paradigm: the robot is impedance-type, and the controller generates a retarding force that is proportional to the velocity of the device (i.e., impedance control), creating the effect of virtual damping. This system does not require embedding a force sensor in the operator’s handle. Using the impedance controller, it is also possible to mitigate unintentional and risky motions through the application of force to the surgeon-manipulated instrument. The KU Leuven system comprises a three-DoF Cartesian robot, followed distally by a linkage-based RCM mechanism, followed distally by a passive joint for rotation about the instrument shaft. The RCM mechanism is also responsible for instrument insertion/withdrawal.
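The two damping-based paradigms above can be contrasted in a few lines of code. This is a schematic illustration only: the gains, units, and filter are our own assumptions rather than parameters of the SHER or KU Leuven controllers:

```python
class LowPass:
    """Simple first-order IIR low-pass filter, usable to attenuate the
    small high-frequency forces conveyed by hand tremor."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha  # smoothing factor in (0, 1]
        self.y = 0.0

    def update(self, x):
        self.y += self.alpha * (x - self.y)
        return self.y

def admittance_velocity(handle_force_n, damping_ns_per_m=200.0):
    """Admittance control: commanded robot velocity proportional to the
    (filtered) force the surgeon applies to the instrument handle."""
    return handle_force_n / damping_ns_per_m

def impedance_force(velocity_m_s, damping_ns_per_m=200.0):
    """Impedance control: retarding force proportional to the measured
    device velocity (virtual damping); no handle force sensor needed."""
    return -damping_ns_per_m * velocity_m_s
```

In the admittance case, the filtered handle force commands robot velocity, so tremor components above the filter cutoff are strongly attenuated; in the impedance case, the robot simply pushes back against motion, damping fast, unintentional movements.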

36.5.4

Teleoperated systems

Teleoperated systems comprise two distinct robotic subsystems connected via a communication channel: a “slave” manipulator that mechanically manipulates the surgical instrument, and a “master” human-input device that is directly manipulated by the surgeon. The master device typically takes the form of a haptic interface, but other types of input devices (e.g., joysticks, 3D mice) have been used as well. Because there is not a direct physical connection between the master and slave, teleoperation systems provide more opportunities to substantially change the surgical experience of the surgeon, including position and force scaling, as well as other ergonomic improvements such as moving the surgeon away from the operating microscope to a surgeon console. In the following, we focus on the design of the slave manipulator, since master input devices are easily exchanged. The first teleoperated retinal-surgery robot was the stereotaxical microtelemanipulator for ocular surgery (SMOS), created by researchers at the Automatical Center of Lille [137,138]. The SMOS slave comprised a three-DoF Cartesian robot, followed distally by a circular-track RCM mechanism, followed distally by a prismatic actuator for instrument insertion/withdrawal, followed distally by a joint for rotation about the instrument shaft. In the years that followed, three different groups developed systems with very similar designs. The first was a group at the University of Western Australia [164]. Although the majority of their robot is driven by electric motors, the distal insertion/withdrawal stage uses a piezoelectric motor. The second was a group at the University of Tokyo [161]; it should be noted that this system was an evolution from an earlier prototype with a different circular-track-RCM design [162]. The third was a group at UCLA, with the IRISS [112,160]. IRISS had two key innovations over SMOS. 
The first was a tool-changer design that enabled two different instruments to be changed (automatically) in a given hand. The second innovation was the use of two circular-track RCM mechanisms, separated by the same distance as the left- and right-hand scleral trocars, mounted to a single base positioning unit, which enables a single slave robot to be used for bimanual manipulation (i.e., one slave "body" with two "arms"). All of the systems described above are admittance-type robots. Although linkage-based RCM mechanisms have dominated the designs of platforms based on cooperative control, they have received relatively little attention in the context of teleoperation. The system that is the most mature, and has received the most attention, is the PRECEYES Surgical System developed by a collaboration between TU Eindhoven and AMC Amsterdam and commercialized by Preceyes [156,157]. The slave is an admittance-type robot, comprising a three-DoF Cartesian robot, followed distally by the RCM mechanism, followed distally by a prismatic actuator for instrument insertion/withdrawal, followed distally by a joint for rotation about the instrument shaft. The slave is equipped with a quick-release instrument holder such that the instrument can be swiftly removed in case of an emergency. A further noteworthy feature is that the three-DoF proximal positioning stage is integrated into the patient's head-rest such that there is more knee space for the operator, who sits close to the patient and manipulates a bed-mounted master manipulator. Recently, a group from Beihang University [139] developed a system that is similar to the PRECEYES system, but they removed the most distal translation stage used for instrument insertion/withdrawal and modified the more proximal RCM mechanism to provide that DoF, similar to the system from KU Leuven (see Section 36.5.3). Four groups have developed solutions based on parallel robots, all of which implement the RCM in software.
The first such system was developed at McGill University [151,152]. It was based upon two three-DoF Cartesian robots that contacted a flexible element at two distinct locations; controlling the positions of the two three-DoF robots enabled control of the tip of the flexible element through elastic beam bending. Each of the six actuators was designed as a two-stage macro-/micro-actuator, with a high-precision piezoelectric element mounted on an impedance-type linear electric motor. The three "parallel" systems that followed were all based upon a six-DoF Stewart platform, and all were of the admittance type, including systems from Northwestern University [154,155], the University of Tokyo [53,163], and Columbia University [113]. The system from the University of Tokyo is similar to the Northwestern system, but also included an additional distal rotation DoF for rotation of the instrument about its shaft axis. The system from Columbia, the intraocular dexterity robot (IODR), is similar to the Northwestern system, but with a major innovation: it includes a distal two-DoF continuum device to add dexterity inside of the eye (i.e., distal to the RCM implemented at the scleral trocar). In the years that followed, other snake-like continuum devices have been developed that enable the instrument's end-effector to approach and manipulate the retina from arbitrary orientations. The system from Johns Hopkins University [108,147] is quite similar to the continuum device of the IODR, but is deployed from the SHER platform (see Section 36.5.3). The system from Imperial College London uses nested superelastic tubes, to be deployed from a unit located on the microscope [111]. Two systems have been developed based on piezoelectric stick-slip actuators. Both systems implement the RCM in software, and both systems exhibit compact designs motivated by the goal of mounting the slave manipulator on the


patient’s head. The first was the system from TU Munich [158,159], called iRAM!S (Robot-assisted Microscopic Manipulation for Vitreoretinal Ophthalmologic Surgery). iRAM!S uses “hybrid parallel-serial” kinematics comprising a serial chain of simple parallel mechanisms, leading to a compact design reminiscent of RCM mechanisms. The second is the system from the University of Utah [44], which uses a conventional six-DoF serial-chain kinematic structure (three-DoF Cartesian robot followed distally by a three-DoF spherical wrist) with the goal of eliminating uncontrolled and unsensed DoF in the kinematic chain. Finally, one system that stands out as being quite distinct from any other concept discussed above is the system developed as a collaboration between NASA-JPL and MicroDexterity [19,153]. In that system, the slave manipulator is a cable-driven impedance-type robot with serial-chain kinematics.

36.5.5

Untethered “microrobots”

A more exotic robotic approach has been pursued at ETH Zurich, where researchers have been developing untethered magnetic devices that can navigate from the sclera to the retina, driven wirelessly by applied magnetic fields, to deliver potent drugs [49,143,172,173]. Although the tiny untethered devices are referred to as "microrobots" for lack of a better term, the robotic intelligence in the system lies entirely in the external magnetic control system. The applied magnetic fields are generated by the OctoMag system, which comprises eight electromagnets designed to surround the patient's head without interfering with the operating microscope [144]. From the perspective of control, magnetic actuation shares many properties with other impedance-type robots, assuming the vitreous has been removed and replaced with a liquid solution. With magnetic microrobots, force control at the retina can be accomplished in an open-loop fashion, giving magnetic microrobots an inherent safety when compared to traditional robots. However, the forces that can be generated are also quite small, which complicates or even prohibits performing certain surgical procedures. An alternate concept has been explored for use with an intact vitreous, in which the microrobot takes the form of a miniature screw driven by magnetic torque [174,175]. The same field-generation and localization systems can be applied with this concept as well, but the nonholonomic motion of screws through soft tissue requires more sophisticated motion planning and control. Recently, researchers pushed the miniaturization envelope even further and demonstrated the navigation of micropropelled swimmers inside an intact porcine vitreous humor, with the results evaluated by OCT measurements [176]. The fact that the vitreous humor would not need to be removed is an appealing property warranting further investigation.
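The open-loop force control mentioned above follows from the physics of magnetic actuation: the force on a device with dipole moment m in a field B is F = grad(m . B), so commanding coil currents that realize a desired field gradient directly sets the applied force. A minimal numerical sketch (the moment and gradient values below are arbitrary illustrative numbers, not OctoMag parameters):

```python
def magnetic_force(m, G):
    """Force on a magnetic dipole, F_i = sum_j m_j * dB_j/dx_i, for a
    dipole moment m (A*m^2) and field-gradient matrix G (T/m) with
    G[j][i] = dB_j/dx_i. In a current-free region G is symmetric, so
    this equals grad(m . B)."""
    return tuple(sum(m[j] * G[j][i] for j in range(3)) for i in range(3))
```

With a locally linear field model B(p) = B0 + G p, the force reduces to G^T m, which is why a calibrated field-gradient map suffices to command forces without force feedback at the device.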

36.5.6

Clinical use cases

Recently, a number of first-in-human experiments have been reported. The teleoperated PRECEYES system (Section 36.5.4) was used to initiate a membrane-peeling procedure in six patients during a human trial at Oxford University [177]. The robot successfully lifted a flap of the ERM or the ILM away from the macular surface using a bevelled needle or pick. Subsequently, a subretinal injection was conducted successfully in three patients [178]. In the framework of EurEyeCase, PRECEYES has been used in another human trial at the Rotterdam Eye Hospital. Here, for the first time, a virtual fixture was implemented based on distance measurements acquired in real time from an OCT fiber [104], demonstrating the feasibility of in vivo use of OCT-integrated instruments. Contemporaneously, the cooperative-control system from KU Leuven (Section 36.5.3) was used for the first-in-human demonstration of robot-assisted vein cannulation [179]. Four patients with RVO were treated, demonstrating the feasibility of robot-assisted cannulation and injection of the thrombolytic agent ocriplasmin (ThromboGenics NV) to dissolve clots obstructing retinal veins.

36.5.7

General considerations with respect to safety and usability

Regardless of the surgical robot used, there is still a risk of human error, which may lead to iatrogenic trauma and blindness. For example, excessive tool pivoting around the entry incision may lead to astigmatism, wound leak, or hypotony. Accidental motions of the tool may still puncture the retina, cause bleeding, or even touch the intraocular lens and cause a cataract [180]. All of these risks remain present because the previously described robotic systems do not demonstrate "intelligence"; they merely replicate or scale down the motions of the commanding surgeon. Thus, robots can improve surgical dexterity but not necessarily surgical performance. Functionalizing the tools with force or pressure sensors, together with ophthalmic image processing, can improve the surgeon's perception and enable a link with artificial-intelligence algorithms toward further improving the success rate of interventions.

Robotic Retinal Surgery Chapter | 36

657

A typical step in retinal surgery is a rotation of the eye in its orbit to visualize different regions of the retina. This is accomplished by applying forces at the scleral trocars with the instrument shafts. When done bimanually, surgeons have force feedback to ensure that their hands are working together to accomplish the rotation, without putting undue stress on the sclera. When using more than one robotic manipulator in retinal surgery, whether in a cooperative or teleoperated paradigm, the control system must ensure that the robots work in a coordinated fashion. This kinematically constrained problem is solved in Ref. [142]. Further, all teleoperation systems and especially systems using curved and shape-changing instruments or untethered agents require retraining of the surgical personnel to get accustomed to this remote-manipulation paradigm, which may disrupt surgical workflow. Many of the master interfaces have been designed to make this transition as intuitive as possible, and are based on either recreating the kinematic constraints of handheld and cooperative-control systems (i.e., with the surgeon’s hand on the instrument handle outside of the eye) or on creating kinematics that effectively place the surgeon’s hand at the end-effector of the instrument inside the eye (with the kinematic constraint of the trocar explicitly implemented in the interface). However, recent work suggests that placing the surgeon’s hand at the end-effector of the instrument, but not explicitly presenting the kinematic constraints of the trocar to the user, may lead to improved performance, likely due to the improved ergonomics that it affords [181].

36.6 Closed-loop feedback and guidance

Benefiting from feedback from sensorized instruments (Section 36.3), it becomes possible to establish high-bandwidth feedback schemes that update in real time with the changing anatomy. This section describes different feedback and guidance schemes, including haptic feedback and other force-servoing schemes. Through sensor fusion, it becomes possible to implement multirate estimation schemes that mix information and measurements derived from preoperative or intraoperative imaging with local sensor measurements. Feedback and guidance schemes share commonalities across hardware configurations, but in the following the discussion is organized by category, describing feedback schemes tailored for handheld systems (Section 36.6.1), cooperative-control systems (Section 36.6.2), and finally teleoperation systems (Section 36.6.3).

36.6.1 Closed-loop control for handheld systems

36. Robotic Retinal Surgery

The handheld system Micron from Carnegie Mellon University (Section 36.5.2) tracks its own motion using a custom optical tracking system [169], performs filtering to determine the undesired component of motion [140], and deflects its own tip using high-bandwidth actuators in order to counteract the undesired movement of the tip [141]. Control of Micron is based on internal-model control, which provides an FD design technique that can handle underdamped dynamics, is robust under conditions of model error, and addresses time delay [182]. Due to the active nature of error compensation, performance is limited by time delay [140]. As a result, control implementation with Micron has frequently incorporated feedforward tremor suppression based on Kalman state estimation [183]. Besides the image-guided applications described in Section 36.7, in order to provide tremor compensation when image guidance is not used, the system incorporates a high-shelf filter with negative gain as a tremor-canceling filter, providing what may be thought of as relative motion scaling below 2 Hz, with full suppression above 2 Hz [140]. Previously, notch filtering of the neurogenic component of physiological tremor was implemented [167,184], but over time experimentation made clear that achieving significant accuracy enhancement for surgical tasks requires error suppression at frequencies considerably lower than had been foreseen, even at frequencies that overlap with voluntary movement. The controller can also be programmed to limit velocity, which may help to avoid tissue damage [185]. Micron has also been used in combination with force-sensing tools to enhance safety in tissue manipulation. Gonenc et al. [42] integrated a two-DoF force-sensing hook tool with an earlier-generation three-DoF Micron prototype for superior performance in membrane peeling operations.
By mapping the force information into auditory signals in real time, the forces could be kept below a safety threshold throughout the operation. Furthermore, Gonenc et al. mounted a force-sensing microneedle tool on Micron, enabling an assistive feedback mechanism for cannulating retinal veins more easily [96]. The implemented feedback mechanism informs the operator upon vessel puncture and prevents overshoot based on the time derivative of sensed tool-tip forces. In Ref. [94], a compact, lightweight, force-sensing microforceps module was integrated with Micron, and the existing tremor cancellation software was extended to inject microvibrations into the tool-tip trajectory when necessary to assist membrane delamination. Experiments on bandages and raw chicken eggs have revealed that controlled microvibrations ease the delamination of membranes. Automatic force-limiting control has also been demonstrated with the six-DoF Micron system for membrane peeling, using a parallel force/position control system [186].

An alternative handheld system developed for retinal surgery and highlighted in Section 36.5.2 is SMART from Johns Hopkins University [50]. SMART is a microsurgical forceps that can actively stabilize tool-tip motion along the tool axis by using fiber-optic OCT to measure the distance between the tool tip and the target tissue. The OCT signals are sent via feedback control to a piezoelectric motor that provides active tremor compensation during grasping and peeling functions. This closed-loop positioning function can be particularly useful for one-DoF motion stabilization when the target tissue and environment are delicate and undesired collisions need to be avoided.
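The kind of axial stabilization SMART performs can be illustrated with a minimal one-dimensional sketch: a proportional controller drives the tip toward a chosen stand-off distance measured by the fiber-optic OCT, with the actuator command saturated. Function names, gains, and the stand-off value are illustrative assumptions, not parameters from the cited work.

```python
def smart_axial_servo(oct_distance_um, target_standoff_um=350.0,
                      kp=0.8, max_step_um=20.0):
    """One iteration of a proportional stand-off servo.

    oct_distance_um: tip-to-tissue distance from the fiber-optic OCT A-scan.
    Returns the axial correction (um) to command to the piezo actuator;
    positive moves the tip toward the tissue.
    """
    error = oct_distance_um - target_standoff_um
    step = kp * error
    # Saturate the command to stay within the actuator's safe range.
    return max(-max_step_um, min(max_step_um, step))
```

A real implementation would run this at the A-scan acquisition rate and feed the saturated step to the piezoelectric actuator.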

36.6.2 Closed-loop control for cooperative-control systems

Cooperative control is a shared control scheme in which both the operator and the robot hold the surgical instrument (see Section 36.5.3). The force exerted by the operator guides the robot to comply with his/her movements. These robotic systems can be augmented with virtual fixtures [187] and can be fitted with smart instruments possessing various sensing modalities. By way of example, smart instruments with force-sensing capability may prove essential for safe interaction between the robot and the patient. The Johns Hopkins University team has developed a family of force-sensing instruments [13,82,84,85] with fiber-optic sensors integrated into the distal portion of the instrument that is typically located inside the eye. Auditory [188] and haptic [25] force-feedback mechanisms have demonstrated the potential value of regulating the tool-to-tissue forces. Initially, the JHU team employed cooperative-control methods that modulate the robot behavior based on operator input and/or tool-tip forces [25,189]. Later, they extended these methods to take into consideration the interaction forces between the tool shaft and sclera.

36.6.2.1 Robot control algorithms based on tool-tip force information

The earliest application of microforce sensing in cooperative robot control was proposed by Kumar et al. [189]. Balicki et al. [25] implemented this control scheme on the SHER as one of the available behaviors for assisting retinal surgery. Force-scaling cooperative control maps, or amplifies, the human-imperceptible forces sensed at the tool tip (Ft) to handle interaction forces (Fh) by modulating the robot velocity ẋ = α(Fh + γFt). Scaling factors of α = 1 and γ = 500 were chosen to map the 0–10 mN manipulation forces at the tool tip to input forces of 0–5 N at the handle. Furthermore, a force-limiting behavior was developed to increase maneuverability when low tip forces are present [25]. The method incorporates standard linear cooperative control with an additional velocity constraint that is inversely proportional to the tip force:

ẋ = −Vlim(Ft)  if αFh < −Vlim(Ft) and Ft < 0
ẋ = Vlim(Ft)   if αFh > Vlim(Ft) and Ft > 0
ẋ = αFh        otherwise

where Vlim(Ft) is a velocity-limiting function, described graphically in Fig. 36.18. This force-limiting behavior effectively dampens the manipulation velocities.
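The velocity-limiting behavior can be sketched in one scalar dimension, using a piecewise-linear Vlim built from the corner values quoted above (f1 = 1 mN, f2 = 7.5 mN, v2 = 0.1 mm/s). The unconstrained speed ceiling v_max and the function names are assumptions for illustration, not values from Ref. [25].

```python
def v_lim(ft_mN, f1=1.0, f2=7.5, v2=0.1, v_max=1.0):
    """Velocity limit (mm/s) as a function of tip-force magnitude (mN).

    Piecewise-linear shape after Fig. 36.18: no limiting below f1,
    a linear ramp between f1 and f2, and a floor of v2 above f2.
    v_max is an assumed ceiling for the unconstrained speed.
    """
    ft = abs(ft_mN)
    if ft <= f1:
        return v_max
    if ft >= f2:
        return v2
    # Linear interpolation between (f1, v_max) and (f2, v2).
    return v_max + (v2 - v_max) * (ft - f1) / (f2 - f1)

def force_limited_velocity(fh_N, ft_mN, alpha=1.0):
    """Commanded handle velocity with force-proportional limiting."""
    v = alpha * fh_N          # standard linear cooperative control
    lim = v_lim(ft_mN)
    return max(-lim, min(lim, v))
```

As the tip force grows, the admissible speed shrinks, which is exactly the damping effect described above.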

FIGURE 36.18 Velocity-limiting function. Constraint parameters m and b were chosen empirically. Forces lower than f1 = 1 mN do not limit the velocity. The velocity limit was set at v2 = 0.1 mm/s for forces above f2 = 7.5 mN [25].


36.6.2.2 Robot control algorithms based on sclera force information

An underappreciated limitation of current robotic systems is the suboptimal user perception of the forces present at the point where the tool passes through the sclera. With multifunction force-sensing tools (Section 36.3.1.2), He et al. measure the tool-tissue forces at both the tip and the interface with the sclera. A variable admittance control method was introduced [88] to take advantage of this knowledge. The control law is:

ẋss = α(Ash Fsh + γ Ass Fss)

where ẋss is the desired velocity of the point where the robot/tool contacts the sclerotomy in the sclera, Fsh and Fss are the handle input force and sclera contact force resolved in the sclera frame, respectively, γ denotes the constant scalar force-scaling factor, α denotes the constant scalar admittance gain, and Ash and Ass are the diagonal admittance matrices associated with the handle input force and sclera contact force in the sclera frame, respectively. A virtual RCM can be realized by setting Ash = diag(0, 0, 1, 1, 1, 1) and Ass = I. The admittance matrix Ash removes the transverse force components that can lead to undesired lateral motion, and preserves the four-DoF motion that is allowed by the RCM constraints. In addition, the sclera force feedback is used to servo the sclera contact force toward zero. This strengthens the virtual RCM with robustness against eye motion attributed to other instrument and/or patient movement. When the surgeon is performing retinal vein cannulation (RVC), the tool tip is close to (or in contact with) the retina, and an RCM is desired to minimize the motion of the eye and target tissue. When the surgeon needs to reposition the eye to adjust the view, the tool is kept away from the retina to avoid collision. Therefore the measured insertion depth of the tool can be used to adjust the robot admittance to provide the appropriate robot behavior. We can define Ash = diag(1 − β, 1 − β, 1, 1, 1, 1) and Ass = diag(1 + β, 1 + β, 1, 1, 1, 1), where β ∈ [0, 1] can vary linearly with the tool insertion depth, as shown in Fig. 36.19, or nonlinearly [190,191]. When the insertion depth is smaller than the given lower bound llb, β = 0 and Ash = Ass = I; this yields the force-scaling control mode, which provides the freedom to reposition the eye with scaled sclera force feedback. When the insertion depth is larger than the given upper bound lub, β = 1 and the controller switches to a virtual RCM with doubled gain for minimizing the transverse forces at the sclerotomy.
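The depth-dependent admittance interpolation can be sketched as below. The insertion-depth bounds, the force-vector ordering (two transverse components first), and the unit gains are illustrative assumptions, not the published parameters.

```python
import numpy as np

def insertion_beta(depth_mm, l_lb=5.0, l_ub=15.0):
    """Linear interpolation of beta in [0, 1] with insertion depth.
    l_lb and l_ub are assumed bounds (mm)."""
    return min(1.0, max(0.0, (depth_mm - l_lb) / (l_ub - l_lb)))

def variable_admittance_velocity(f_sh, f_ss, depth_mm,
                                 alpha=1.0, gamma=1.0):
    """Desired 6-DoF velocity at the sclerotomy point.

    f_sh, f_ss: handle and sclera forces resolved in the sclera frame,
    ordered so that the first two components are transverse to the tool.
    """
    beta = insertion_beta(depth_mm)
    # Deep insertion (beta -> 1) zeroes transverse handle admittance
    # (virtual RCM) and doubles the sclera-force gain.
    a_sh = np.diag([1 - beta, 1 - beta, 1, 1, 1, 1])
    a_ss = np.diag([1 + beta, 1 + beta, 1, 1, 1, 1])
    return alpha * (a_sh @ f_sh + gamma * a_ss @ f_ss)
```

At shallow depths the function reduces to force-scaling cooperative control; at deep insertion the transverse handle components vanish, reproducing the virtual RCM behavior.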

36.6.3 Closed-loop control for teleoperated systems

FIGURE 36.19 JHU SHER variable admittance control. (Left) Robot variable admittance control framework based on sclera force/position input. (Right) Admittance variation (linear or nonlinear) along the insertion depth. SHER, Steady-Hand Eye Robot.


Most of the results in closed-loop control of cooperative-control systems can be applied to teleoperated systems with only minor modifications. However, in contrast to handheld systems or cooperative-control systems, teleoperation systems additionally offer the possibility to completely decouple the operator at the master side from the surgical robotic slave. Thanks to this decoupling it becomes possible to tackle physiological tremor in a number of different ways. First, it is possible to inject physical damping into the master robot's controller, effectively removing the high-frequency motion of the operator's hand. Second, it is possible to filter the signal that is sent, for example, as a reference trajectory for the slave robot to follow, such that all high-frequency components are removed. Third, in a scaled teleoperation scenario, a constant scale factor is used to scale the master command into a scaled reference signal for the slave robot; the amplitude of the physiological tremor is then simply transmitted in a downscaled fashion to the slave robot. It goes without saying that a combination of these three methods may be implemented as well.
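The second and third strategies (filtering the reference signal and constant motion scaling) can be combined in a one-dimensional sketch. The cutoff frequency, scale factor, and sample rate are illustrative values, not those of any particular system.

```python
import math

class ScaledFilteredTeleop:
    """Master-to-slave reference generator combining two of the tremor
    strategies above: low-pass filtering of the master command followed
    by constant motion scaling."""

    def __init__(self, scale=0.1, cutoff_hz=2.0, rate_hz=1000.0):
        self.scale = scale
        # One-pole low-pass smoothing factor for the given sample rate.
        self.alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / rate_hz)
        self.filtered = 0.0

    def step(self, master_pos):
        # Remove high-frequency (tremor) components of the master motion...
        self.filtered += self.alpha * (master_pos - self.filtered)
        # ...then downscale what remains to form the slave reference.
        return self.scale * self.filtered
```

Per-axis instances of such a generator would run inside the teleoperation loop, one for each Cartesian degree of freedom.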


Teleoperation schemes also offer a number of options to implement virtual walls. One may choose to install a virtual wall at the master side that renders resistive forces upon entry into the forbidden zone; the operator is thereby slowed down, and the slave robot that follows the master's motion slows down equally as soon as the master enters the wall. Alternatively, one may choose to decouple the slave's motion from the master's motion with an intermediate "proxy," and effectively halt slave motion upon entry into the forbidden zone. For example, Jingjing et al. [192] propose to compute a homocentric sphere with a radius below that of the spherical-shaped eye as the boundary between a safe and a dangerous zone. In a scaled teleoperation scenario this decoupling could correspond to "zeroing" the scale factor between master and slave. In principle, decoupling allows installing stiffer virtual walls at the slave side. In such a case penetration can be kept minimal, and potentially lower than in a cooperatively controlled system, where the penetration is lower-bounded by the stiffness of the robot and its controller. In practice, the difference in stiffness may not always be significant [193], especially given that operators are trained individuals who naturally operate in a responsible fashion and typically work at low speeds.

Whereas most practical teleoperation schemes are "unilateral," meaning that all control signals travel down from master to slave with only visual feedback traveling back to the operator, one may equally consider "bilateral" control [194,195]. By reflecting position errors or forces measured at the slave back to the master, the operator could in principle be made aware more rapidly of the interactions taking place at the slave side. Balicki et al. implemented both uni- and bilateral controllers [196]. Bilateral controllers can be made responsive to any kind of position or force tracking error [194,195]. For the former it suffices to compute the tracking error, for example from the robot encoders of the master and slave. For the latter one needs to measure the interaction forces on the eye that one wants to feed back. While quite a few force-sensing instruments have been developed in the past (as depicted in Fig. 36.9), most of the governing forces stay well below human perception thresholds [9]. "Force scaling" would thus need to be applied if one wants to render the forces at a perceivable level. While bilateral controllers tend to enhance the operator's awareness, offering a more transparent method of operation, in reality this may lead to stability issues [194,195].

Balicki further proposes a cooperative teleoperation behavior. In this hybrid control scheme a robot designed for cooperative control is jointly controlled by mixing inputs from an operator handling the robot and from a second operator who provides inputs at a master console [197]. While this approach may combine the benefits of both worlds, it does require the attendance of two experts, who would need training to become accustomed to this new method of operation. Note that while ample works in the literature describe contributions to set up visual, auditory, or haptic feedback, so far hardly any work has analyzed the usability and benefit of one feedback type versus another. This was also a finding of Griffin et al., who conducted a systematic review of the role of haptic feedback in robotic retinal surgery and concluded that, even in a broader sense, proper studies on human factors and ergonomics in robotic retinal surgery are missing [198].
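A proxy-style virtual wall based on the homocentric-sphere boundary of Jingjing et al. [192] can be sketched as a simple geometric projection. The function name and geometry handling are illustrative, not the published implementation.

```python
import numpy as np

def proxy_constrained_target(master_target, eye_center, safe_radius_mm):
    """Proxy-based virtual wall for a spherical safe zone.

    The slave reference is projected back onto the sphere whenever the
    master command would leave the safe zone, effectively zeroing the
    master-slave coupling in the forbidden direction.
    """
    offset = np.asarray(master_target, float) - np.asarray(eye_center, float)
    dist = np.linalg.norm(offset)
    if dist <= safe_radius_mm:
        # Inside the safe zone: pass the command through unchanged.
        return np.asarray(master_target, float)
    # Clip to the sphere surface; penetration is held at zero.
    return np.asarray(eye_center, float) + offset * (safe_radius_mm / dist)
```

Because the slave reference is clipped rather than resisted, the stiffness of the resulting wall is limited only by the slave's own position controller.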

36.7 Image-guided robotic surgery

36.7.1 Image guidance based on video

The earliest work in this area was that of Dewan et al. [199], who described active constraints based on stereo vision to limit the motion of the JHU Steady-Hand Robot to follow a surface or a contour using admittance control. This work was done without a microscope, performing stereo disparity-based 3D reconstruction to constrain the robot for open-sky manipulation. Disparity-based techniques were used by Richa et al. [126] to warn the surgeon of proximity to the retina; this was tested quantitatively in a water-filled eye phantom and qualitatively in rabbit eyes in vivo, although the proximity threshold was set to 2 mm, which is very large for retinal procedures. Becker et al. used a similar disparity-based stereo technique to develop active constraints for Micron, demonstrating accuracy enhancement in station-keeping, contour-following, a repeated move-and-hold task, and membrane peeling [166].

Implementation of active constraints with handheld systems such as Micron is fundamentally different from setting up admittance-based virtual fixtures with a grounded robot arm, however. Because Micron is not mechanically grounded, it cannot apply force to resist the motion of the human operator. Therefore active constraints must be implemented as position-based virtual fixtures [166], in which a corrective displacement of the instrument tip is automatically applied in order to constrain the tip motion to the fixture. In such an approach, the null position of the tip manipulator of the handheld system is taken as the user input to the system, and the reference position is adjusted in order to implement the fixture. Just as an admittance-type robot enables implementation of "hard" (unyielding) fixtures by setting the admittance to zero in a given direction, or "soft" (yielding) fixtures by setting the admittance to a reduced but nonzero value, likewise with position-based virtual fixtures, a hard fixture can be implemented by prohibiting all motion in a given direction, whereas a soft fixture can be implemented by providing (down)scaled motion in a given direction within the vicinity of a given location, subject to the range of motion of the manipulator. Becker et al. [7] also used this approach to develop a virtual fixture for vessel cannulation, scaling motion by a factor of 0.5 perpendicular to the target vessel while allowing unscaled motion parallel to the target vessel. This work was demonstrated ex vivo in an open-sky porcine retina.

In a similar porcine retina ex vivo model, Becker et al. [200] implemented a hard fixture for semiautomated scanning for patterned laser retinal photocoagulation. This work performed visual servoing using the aiming beam of the treatment laser. To accommodate the limited range of motion of the three-DoF Micron prototype at that time [140], the operator provided the gross motion from point to point. Whenever a yet-untreated target was detected within reach of the manipulator, the control system servoed the tip to the target, fired the laser, and returned the tip to its null position. Yang et al. [165] subsequently updated this work to demonstrate fully automated scanning, using a newer Micron prototype with a much greater range of motion [141]. This updated work featured a hybrid visual servoing scheme, in which motion in the retinal plane was controlled via visual servoing using the microscope cameras, while motion perpendicular to the retina was handled by closed-loop control using the optical tracker that accompanies Micron [169]; this is essentially the visual compliance approach of Castaño and Hutchinson [201]. Direct comparison with the semiautomated approach showed that accuracy was similar at rates below one target per second, but that at higher rates the performance of the semiautomated approach dropped off due to the difficulty the human operator has in moving accurately between targets [202]. Also following a hybrid servoing approach similar to Refs. [165,202], Yu et al. [203] presented a technique for hybrid visual servoing using microscope cameras for guidance in the plane of the retina and a separate miniature B-mode OCT probe for guidance along the axis of the instrument.

Open-sky implementation allows good performance with stereo disparity-based reconstruction of the retinal surface. However, when it comes to operating in the intact eyeball, this approach is highly problematic due to the complex and nonlinear optics of the eye [168]. Recently, Probst et al. [204] presented a semidense deep matching approach that involves convolutional neural networks for tool landmark detection and 3D anatomical reconstruction; however, to date the work has been demonstrated only open-sky. Methods that specifically model the ocular optics have been developed [143,172], but these have still yielded error on the order of hundreds of microns. To address this problem for Micron, Yang et al. [168] exploited the manipulation capability of the instrument in order to implement a structured-light approach (Fig. 36.20). Before starting an intervention, this approach involves generating one or two circular scans with the laser aiming beam, which are detected by the microscope cameras as ellipses. The size, aspect ratio, and orientation of the ellipses allow the retinal surface to be reconstructed. This approach to reconstruction was used by Yang et al. [168] to demonstrate automated patterned laser


FIGURE 36.20 Examples of research in image-guided robotic retinal surgery. Systems are shown during preclinical testing. (Left) Intraoperative tracking of needles for iOCT-based servoing during retinal and subretinal injections [20]. (Right) Hybrid visual servoing for patterned laser photocoagulation using the Micron handheld robotic system, performed during vitrectomy surgery in a porcine eye ex vivo [168]. iOCT, Intraoperative optical coherence tomography.


photocoagulation with a handheld instrument in intact porcine eyes ex vivo. This approach can be generalized by building aiming beams into instruments other than those for laser treatment; Mukherjee et al. [205] took such an approach in preliminary experiments toward vessel cannulation. Such aiming beams also have the potential to be used for proximity sensing to the surface and for intraoperative updates of the retinal surface reconstruction [206].

The eyeball moves during surgery, sometimes because of the patient and sometimes because of the surgeon, either intentionally in order to change the view of the retina or unintentionally as a result of intraocular manipulation. In order to keep anatomical active constraints registered to the patient, it is important to accurately track the motion of the retina. There are many algorithms for segmentation of fundus images, but such algorithms are generally designed for offline use and do not provide robust tracking in the presence of illumination changes, nor do they avoid mistaking intraocular instruments for vessels. To address this need, Braun et al. [207] developed a retinal tracking algorithm for intraoperative use with active constraints; it uses an exploratory algorithm for rapid vessel tracing [208], with an occupancy grid for mapping and iterative closest point for localization in the presence of instrument occlusions and varying illumination. More recently, this work was augmented by incorporating loop-closure detection [209].
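The soft virtual fixture for vessel cannulation described earlier in this section (motion scaled by 0.5 perpendicular to the target vessel, unscaled parallel to it) reduces to a simple vector decomposition. This sketch assumes a straight local vessel direction and illustrative names; it is not the published implementation.

```python
import numpy as np

def vessel_fixture_displacement(tip_delta, vessel_dir, perp_scale=0.5):
    """Position-based soft fixture in the retinal plane.

    Decomposes the user's commanded tip displacement into components
    parallel and perpendicular to the target vessel, and downscales
    the perpendicular component.
    """
    d = np.asarray(vessel_dir, float)
    d = d / np.linalg.norm(d)
    delta = np.asarray(tip_delta, float)
    parallel = np.dot(delta, d) * d          # motion along the vessel
    perpendicular = delta - parallel         # motion across the vessel
    return parallel + perp_scale * perpendicular
```

The returned displacement would be applied as the corrected reference for the tip manipulator, so motion toward and away from the vessel centerline is attenuated while motion along it is preserved.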

36.7.2 Image guidance based on optical coherence tomography

OCT represents an alternative imaging means that provides far higher resolution than microscope cameras, albeit at a much higher cost. Systems for iOCT are available commercially from numerous microscope manufacturers [210], and efforts have begun to exploit this new means of imaging. Zhou et al. [211] described two methods to segment an intraocular surgical needle: one using morphological features of the needle as detected in the OCT images, and the other using a fully convolutional network. The methods were demonstrated in porcine eyes ex vivo. However, these methods require volumetric datasets and do not work in real time. To address this shortcoming, Weiss et al. [134] presented a technique that uses five parallel iOCT B-scans to track the needle, detecting the elliptical section of the needle in each scan. The same group has also presented a similar technique, using a larger number of B-scans (128 along x and the same number along y), to perform marker-free robot hand-eye calibration, which they have demonstrated in intact porcine eyes ex vivo [212]. Tracking of the needle after it enters tissue remains an open research problem [134], which the group has begun to address by registering the needle to a computer-aided design model before the tip enters the retina, and then predicting the tip position and orientation after subretinal entry using the known input commands [20].

OCT information can be acquired not only through the pupil, but also through intraocular surgical instruments. The laboratory of J.U. Kang at Johns Hopkins University has developed common-path SS OCT (CP-SSOCT) with a fiber-optic probe that is fabricated to fit within a 26 G needle [45]. The system provides an OCT A-scan with a resolution of 1.8 μm over a range of 3686.4 μm from the tip of the probe. Automated scanning of an OCT probe to acquire 3D retinal imagery using this technology was demonstrated using Micron [57,101]: Yang et al. obtained stabilized OCT images of A-mode and M-mode scans in air and in live rabbit eyes. The Kang group has also demonstrated a wide range of capabilities using SMART handheld instruments combining their OCT technology with one-DoF axial actuation (see Section 36.5.2). The technology can perform functions such as servoing to a selected stand-off distance from the surface [52], or actively compensating hand tremor along the axial dimension of the instrument [213]. They have also combined the technology with a motorized microforceps for epiretinal membranectomy and have demonstrated improved accuracy and reduced task completion time in an artificial task involving picking up 125-μm optical fibers from a soft polymer surface [214]. They have used the depth-servoing capability to perform subretinal injections with enhanced accuracy in a porcine retina ex vivo in an open-sky experiment [45]. Compared to freehand injection, where depth varied over a range of 200 μm, the RMS error of OCT-guided injection in gelatin and in ex vivo bovine eyes stayed below 7.48 and 10.95 μm, respectively [45]. The group has also developed the capability to calculate lateral displacement from the value of the cross-correlation coefficient based on a speckle model, and used this to demonstrate real-time estimation of scanning speed in freehand retinal scanning in order to reduce distortion due to motion artifacts [215].
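The tip-to-tissue distance estimate that underlies such depth servoing can be illustrated with a simple A-scan threshold sketch: the first sample whose normalized intensity exceeds a noise floor is taken as the tissue surface. The thresholding scheme and parameter values are assumptions for illustration, not the CP-SSOCT processing of the cited work.

```python
import numpy as np

def ascan_surface_distance(ascan, pixel_pitch_um=1.8, noise_floor=0.2):
    """Estimate tip-to-tissue distance from a single OCT A-scan.

    Normalizes the A-scan, finds the first sample above the noise
    floor, and converts its index to micrometers using the axial
    pixel pitch. Returns None if no surface is detected.
    """
    a = np.asarray(ascan, float)
    a = a / a.max()
    above = np.nonzero(a > noise_floor)[0]
    if above.size == 0:
        return None  # no surface detected within range
    return above[0] * pixel_pitch_um
```

A depth-servoing instrument would feed this distance estimate into an axial controller such as the stand-off servo sketched in Section 36.6.1.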

36.8 Conclusion and future work

Robotic microsurgery is still in its infancy but has already begun to change the perspective of retinal surgery. Robotic retinal surgery has been successfully carried out in a limited number of patients, using a few surgical systems such as the PRECEYES Surgical System [216] or the system by KU Leuven [179]. Performing new treatments such as retinal vein cannulation has now become technically feasible due to improved stability and tremor cancellation functionality. New developments such as augmented reality, haptic guidance, and micrometer-scale distance sensing will further impact the efficiency and reliability of these interventions. Recent contributions have led to the arrival of instruments featuring superior dexterity. These futuristic devices could initiate clinical efforts to design radically new surgical techniques. Innovations in material science, drug development, retinal chips, and gene and cell therapy are expected to create a whole new set of engineering challenges. Together with advances in imaging, robotics has become one of the most promising trends in advancing the field of retinal microsurgery. As a result, an increasing number of academic institutions have embarked on research projects to investigate ever more powerful systems. This section identifies the main challenges ahead, striving to outline the direction of the next decade of research in retinal surgery.

36.8.1 Practical challenges ahead

There are several challenges on the road ahead. Existing robotic systems are still very expensive and must first be adopted and accepted by both surgeons and patients to become sufficiently useful in the operating room (OR). At that point the OR culture will need to change: dedicated training programs will need to be developed and robotic surgery included in the surgical curriculum. Further, robotic systems need to be developed so that currently impossible interventions become achievable, such as retinal vein cannulation or subretinal delivery of novel therapeutics. Technical feasibility alone is not sufficient, as the safety and effectiveness of supplied substances and drugs must be validated as well. An important challenge for robot developers is hence to establish a solid collaboration with the pharmaceutical industry. The adoption of robotic systems in commonplace procedures such as ERM peeling, which despite its dexterity requirements is routinely and successfully performed in the Western world, does not justify the high cost of introducing a robot into the OR; the added value is too limited in these scenarios. Therefore we anticipate that the way forward for retinal surgical robotics will depend on a combination of the following three key characteristics: (1) system optimization, including enhanced usability, reduced cost, and miniaturization to reduce the space occupied in the OR; (2) the capability to deliver targeted drugs and substances ultraminimally invasively, opening the path to new treatment methods; and (3) automation to enable the parallel execution of several surgical procedures overseen by a supervising surgeon.

36.8.2 System optimization

In the case of developing robotic technology for microsurgery, more effort is needed in studying human factors to design more effective human-robot interfaces that are intuitive enough to perform complicated maneuvers inside the eye. Little attention has been paid so far to coordinating the control of multiple instruments at once. In manual interventions surgeons regularly reposition the eye to optimize the view angle, and after obtaining a good view they then conduct ultraprecise manipulations. While these are very different types of manipulation, surgeons are used to switching quickly between them. Virtual fixtures that coordinate and constrain the relative motion between instruments, such as that proposed by Balicki [197], could be further explored to this end. Increased surgical time and cost remain serious concerns for robotic surgery. Several strategies can be followed to limit these concerns, such as making sure that robotic surgeons possess equal control over what happens with and within the eye. Another essential feature is the ability to quickly exchange tools, such as that developed in prior work by Nambi et al. [44]. Further optimization of space would be needed as well; especially when the surgeon remains close to the patient, the space occupancy of the robotic device is crucial, as it should not negatively affect the surgeon's already poor ergonomic conditions. Multidisciplinary teams should work together to understand how to build optimally compact and low-cost systems. Clinicians have traditionally worked together with biomedical engineers to design systems for specific applications, but have not been successful in translating these systems to the clinic. Most commonly, academic research occurs in isolation from the constraints that a real OR poses. For example, academic teams have designed robots that are challenging to integrate into existing clinical procedures, thereby limiting their adoption. It is time to pay greater attention to devising streamlined robot design approaches that consider more profoundly the constraints of the OR, staff position, and assistant/surgeon position, together with ergonomics, microscope constraints, and the challenges of the actual application at hand. The robotic retinal surgery community can therefore leverage the extensive work conducted by Padoy et al. on OR reconstruction and tracking [217,218].

36.8.3 Novel therapy delivery methods

Depth perception and control are difficult in retinal surgery in general but are especially problematic for subretinal injections, where in the absence of visual feedback precision on the order of 25 μm is needed to ensure that critical layers, such as

664

Handbook of Robotic and Image-Guided Surgery

the retinal pigment epithelium, are not damaged irreparably [20]. The development of OCT has opened up new perspectives in this context, offering the capacity to image disease on the micrometer level and at early disease states. This spurs the development of novel tools and delivery systems that allow interventions in early stages before major complications arise. As new drugs, new prosthesis, and cell and gene therapy are being developed, we expect a growth in the development of new miniature delivery instruments and microinjectors that, for example, under iOCT guidance, deliver these therapeutic substances with extreme precision, targeting specific retinal layers [20,45]. In this context microrobotics have made their appearance. Being the smallest representative of surgical and interventional devices, they offer tremendous opportunities to push miniaturization to the extreme. Ultimately they could enable interaction with few and even individual cells. Microrobots are one of the newest research areas in surgical robotics. Retinal surgery has been one of the major drivers for this technology. Microrobots have been proposed for intraocular drug delivery and retinal vein cannulation, and their mobility has been evaluated in animal models in vivo. The evaluated microrobots are propelled by electromagnetic fields (see Section 36.5.5). Electromagnetic-based actuation is preferred in small-scale actuation due to the favorable scaling of electromagnetic forces and torques with respect to device volume. Even though the minuscule size of the steerable magnetic devices makes the application of forces challenging currently, it can be expected that as the engineering capacity at the microscale levels matures, microdevices will become valuable tools of future retinal surgical ORs, primarily as means to precisely deliver novel therapeutics, and subsequently as mechanisms to enable ever more precise interventions.
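The favorable scaling of electromagnetic actuation can be made concrete with a back-of-the-envelope argument (a sketch, not taken from this chapter; here M is the device magnetization, ρ its density, and L its characteristic length):

```latex
% Dipole moment of a uniformly magnetized device of size L:
\[ |\mathbf{m}| = M V \propto L^{3} \]
% Magnetic torque and gradient force both scale with the moment:
\[ \boldsymbol{\tau} = \mathbf{m}\times\mathbf{B} \propto L^{3}, \qquad
   \mathbf{F} = (\mathbf{m}\cdot\nabla)\mathbf{B} \propto L^{3} \]
% Weight also scales with volume, so the force-to-weight ratio
\[ \frac{|\mathbf{F}|}{|\mathbf{F}_g|}
   \propto \frac{M V\,|\nabla\mathbf{B}|}{\rho V g}
   = \frac{M\,|\nabla\mathbf{B}|}{\rho g} \]
% is independent of L: magnetic actuation does not degrade as the
% device shrinks, unlike mechanisms whose output scales faster than L^3.
```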

36.8.4 Toward autonomous interventions

We expect a progressive adoption of automated features, similar to other fields in robotic surgery [219], which could ultimately lead to fully autonomous execution of parts of the surgery. A long-term goal is for a surgeon to supervise a set of robots that perform routine procedure steps autonomously and call on his/her expertise only during critical, patient-specific steps. The surgeon would then guide the robot through the more complex tasks. Reaching this goal will require the analysis of data generated from a large number of interventions. Significant research on the topic, primarily on understanding the surgical phases of cataract surgery, has been conducted by Jannin et al. [220], among others. Coupled with realistic retinal simulators, such as those developed by Cotin et al. [221], we expect that robots will be able to undertake certain aspects of surgery, such as port placement and vitrectomy, in the near future. Visual servoing frameworks such as those developed by Riviere et al. [168] would enable automated cauterization of leaky vessels in diabetic retinopathy, thereby speeding up potentially lengthy interventions. Finally, the emerging field of surgical data science [222] is expected to play an increasingly important role in robotic retinal surgery.
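As an illustration of the control loop underlying such visual-servoing frameworks, the sketch below implements one generic image-based servoing update, v = -λ L⁺ e, where the feature values, the simplified interaction matrix, and the gain are hypothetical and not taken from [168]:

```python
import numpy as np

def ibvs_step(features, targets, L, gain=0.5):
    """One image-based visual servoing update.

    features: current image-feature vector (e.g., tool-tip pixel coordinates)
    targets:  desired feature vector
    L:        interaction (image Jacobian) matrix mapping instrument
              velocity to feature velocity
    Returns the commanded velocity v = -gain * pinv(L) @ e, which drives
    the feature error e toward zero.
    """
    e = features - targets                 # feature error in image space
    return -gain * np.linalg.pinv(L) @ e   # exponential decay of the error

# Hypothetical example: a point feature observed at (120, 80) px,
# desired at (100, 100) px, with an identity interaction matrix.
L = np.eye(2)
v = ibvs_step(np.array([120.0, 80.0]), np.array([100.0, 100.0]), L)
# v commands motion that reduces both error components
```

In a real system the interaction matrix would be derived from the camera or OCT geometry, and the loop would run at the imaging frame rate.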

36.9 Acknowledgments

Emmanuel B. Vander Poorten's contribution to this publication was supported by EurEyeCase via the EU Framework Programme for Research and Innovation-Horizon 2020 (No. 645331) and by STABLEYES, an internal KU Leuven C3 fund (3E160419). Cameron Riviere's contribution was supported by the US National Institutes of Health (grant no. R01EB000526). Jake J. Abbott's contribution was supported by the National Eye Institute of the National Institutes of Health under award number R21EY027528. Christos Bergeles' contribution was supported by an ERC Starting Grant [714562]; further, this report is independent research funded by the National Institute for Health Research (Invention for Innovation, i4i; II-LB-071620002). M. Ali Nasseri's contribution was supported by the Ophthalmology Department of Klinikum rechts der Isar, TUM; the State of Bavaria; and Carl Zeiss Meditec AG. Jin U. Kang's contribution was supported by NIH grant R01 EY021540 and the Coulter Translational Fund. Koorosh Faridpooya's contribution was supported by the Foundation Rotterdam Eye Hospital, by grants from the Foundation Coolsingel, and by EurEyeCase via the EU Framework Programme for Research and Innovation-Horizon 2020 (No. 645331). Iulian Iordachita's contribution was supported by the US National Institutes of Health, grant number 1R01EB023943. The views expressed in this publication are those of the authors and not necessarily those of the National Institutes of Health, the NHS, the National Institute for Health Research, or the Department of Health.

References

[1] The Editors of Encyclopedia Britannica. Retina; 2018. https://www.britannica.com/science/retina. Accessed May 27, 2019. [2] Singh KD, Logan NS, Gilmartin B. Three-dimensional modeling of the human eye based on magnetic resonance imaging. Invest Ophthalmol Visual Sci 2006;47(6):2272-9.


[3] Leng T, Miller JM, Bilbao KV, Palanker DV, Huie P, Blumenkranz MS. The chick chorioallantoic membrane as a model tissue for surgical retinal research and simulation. Retina 2004;24(3):427-34. [4] Almony A, Nudleman E, Shah GK, Blinder KJ, Eliott DB, Mittra RA, et al. Techniques, rationale, and outcomes of internal limiting membrane peeling. Retina 2012;32(5):877-91. [5] Charles S, Calzada J. Vitreous microsurgery. Lippincott Williams & Wilkins; 2010. [6] Wilkins JR, Puliafito CA, Hee MR, Duker JS, Reichel E, Coker JG, et al. Characterization of epiretinal membranes using optical coherence tomography. Ophthalmology 1996;103(12):2142-51. [7] Becker BC, Voros S, Lobes Jr LA, Handa JT, Hager GD, Riviere CN. Retinal vessel cannulation with an image-guided handheld robot. In: Proceedings of the annual international conference of the IEEE engineering in medicine and biology society; 2010. p. 5420-3. [8] Jagtap AD, Riviere CN. Applied force during vitreoretinal microsurgery with hand-held instruments. In: Proceedings of the annual international conference of the IEEE engineering in medicine and biology society, vol. 4; 2004. p. 2771-4. [9] Gupta PK, Jensen PS, de Juan E. Surgical forces and tactile perception during retinal microsurgery. In: International conference on medical image computing and computer-assisted intervention. Springer; 1999. p. 1218-25. [10] Gijbels A, Willekens K, Esteveny L, Stalmans P, Reynaerts D, Vander Poorten EB. Towards a clinically applicable robotic assistance system for retinal vein cannulation. In: IEEE international conference on biomedical robotics and biomechatronics; 2016. p. 284-91. [11] Sun Z, Balicki M, Kang J, Handa J, Taylor R, Iordachita I. Development and preliminary data of novel integrated optical micro-force sensing tools for retinal microsurgery. In: IEEE international conference on robotics and automation; 2009. p. 1897-902. [12] Ergeneman O, Pokki J, Počepcová V, Hall H, Abbott JJ, Nelson BJ.
Characterization of puncture forces for retinal vein cannulation. J Med Devices 2011;5(4):044504. [13] Iordachita I, Sun Z, Balicki M, Kang JU, Phee SJ, Handa J, et al. A sub-millimetric, 0.25 mN resolution fully integrated fiber-optic force-sensing tool for retinal microsurgery. Int J Comput Assisted Radiol Surg 2009;4(4):383-90. [14] Sunshine S, Balicki M, He X, Olds K, Kang J, Gehlbach P, et al. A force-sensing microsurgical instrument that detects forces below human tactile sensation. Retina 2013;33:200-6. [15] Bergeles C, Sugathapala M, Yang G-Z. Retinal surgery with flexible robots: biomechanical advantages. In: International symposium on medical robotics; 2018. p. 1-6. [16] Singh K, Dion C, Costantino S, Wajszilber M, Lesk MR, Ozaki T. In vivo measurement of the retinal movements using Fourier domain low coherence interferometry. In: Conference on lasers and electro-optics. Optical Society of America; 2009. p. CMR4. [17] Ourak M, Smits J, Esteveny L, Borghesan G, Gijbels A, Schoevaerdts L, et al. Combined OCT distance and FBG force sensing cannulation needle for retinal vein cannulation: in vivo animal validation. Int J Comput Assisted Radiol Surg 2019;14:301-9. [18] Ang WT, Riviere CN, Khosla PK. An active hand-held instrument for enhanced microsurgical accuracy. In: International conference on medical image computing and computer-assisted intervention. Springer; 2000. p. 878-86. [19] Charles S, Das H, Ohm T, Boswell C, Rodriguez G, Steele R, et al. Dexterity-enhanced telerobotic microsurgery. In: IEEE international conference on advanced robotics; 1997. p. 5-10. [20] Zhou M, Huang K, Eslami A, Roodaki H, Zapp D, Maier M, et al. Precision needle tip localization using optical coherence tomography images for subretinal injection. In: IEEE international conference on robotics and automation; 2018. p. 4033-40. [21] Riviere CN, Jensen PS. A study of instrument motion in retinal microsurgery.
In: Proceedings of the annual international conference of the IEEE engineering in medicine and biology society, vol. 1; 2000. p. 59-60. [22] Singh SPN, Riviere CN. Physiological tremor amplitude during retinal microsurgery. In: Proceedings of the IEEE northeast bioengineering conference; 2002. p. 171-2. [23] Peral-Gutierrez F, Liao AL, Riviere CN. Static and dynamic accuracy of vitreoretinal surgeons. In: Proceedings of the annual international conference of the IEEE engineering in medicine and biology society, vol. 1; 2004. p. 2734-7. [24] Hubschman J, Son J, Allen B, Schwartz S, Bourges J. Evaluation of the motion of surgical instruments during intraocular surgery. Eye 2011;25(7):947. [25] Balicki M, Üneri A, Iordachita I, Handa J, Gehlbach P, Taylor R. Micro-force sensing in robot assisted membrane peeling for vitreoretinal surgery. In: International conference on medical image computing and computer-assisted intervention. Springer; 2010. p. 303-10. [26] Charles S. Dexterity enhancement for surgery. In: Taylor R, Lavallée S, editors. Computer-integrated surgery: technology and clinical applications. The MIT Press; 1996. p. 467-71. [27] Harwell RC, Ferguson RL. Physiologic tremor and microsurgery. Microsurgery 1983;4(3):187-92. [28] Patkin M. Ergonomics applied to the practice of microsurgery. Aust N Z J Surg 1977;47(3):320-9. [29] Wells TS, Yang S, MacLachlan RA, Handa JT, Gehlbach P, Riviere C. Comparison of baseline tremor under various microsurgical conditions. In: IEEE international conference on systems, man, and cybernetics; 2013. p. 1482-7. [30] McCannel CA, Olson EJ, Donaldson MJ, Bakri SJ, Pulido JS, Donna M. Snoring is associated with unexpected patient head movement during monitored anesthesia care vitreoretinal surgery. Retina 2012;32(7):1324-7. [31] Mehta S, Hubbard III GB. Avoiding neck strain in vitreoretinal surgery: an ergonomic approach to indirect ophthalmoscopy and laser photocoagulation. Retina 2013;33(2):439-41.
[32] Feltgen N, Junker B, Agostini H, Hansen LL. Retinal endovascular lysis in ischemic central retinal vein occlusion: one-year results of a pilot study. Ophthalmology 2007;114(4):716-23. [33] Koch P. Advanced vitreoretinal surgery. Acta Ophthalmol 2017;95(S259). [34] Yiu G, Marra KV, Wagley S, Krishnan S, Sandhu H, Kovacs K, et al. Surgical outcomes after epiretinal membrane peeling combined with cataract surgery. Br J Ophthalmol 2013;97(9):1197-201.


[35] Mitchell P, Smith W, Chey T, Wang JJ, Chang A. Prevalence and associations of epiretinal membranes: the Blue Mountains Eye Study, Australia. Ophthalmology 1997;104(6):103340. [36] Appiah AP, Hirose T, Kado M. A review of 324 cases of idiopathic premacular gliosis. Am J Ophthalmol 1988;106(5):5335. [37] Charles S. Techniques and tools for dissection of epiretinal membranes. Graefes Arch Clin Exp Ophthalmol 2003;241(5):34752. [38] Rogers S, McIntosh RL, Cheung N, Lim L, Wang JJ, Mitchell P, et al. The prevalence of retinal vein occlusion: pooled data from population studies from the United States, Europe, Asia, and Australia. Ophthalmology 2010;117(2):31319. [39] McIntosh RL, Rogers SL, Lim L, Cheung N, Wang JJ, Mitchell P, et al. Natural history of central retinal vein occlusion: an evidence-based systematic review. Ophthalmology 2010;117(6):111323. [40] Stout JT, Francis PJ. Surgical approaches to gene and stem cell therapy for retinal disease. Hum Gene Ther 2011;22(5):5315. [41] Peng Y, Tang L, Zhou Y. Subretinal injection: a review on the novel route of therapeutic delivery for vitreoretinal diseases. Ophthalmic Res 2017;58(4):21726. [42] Gonenc B, Balicki MA, Handa J, Gehlbach P, Riviere CN, Taylor RH, et al. Preliminary evaluation of a micro-force sensing handheld robot for vitreoretinal surgery. In: IEEE/RSJ international conference on intelligent robots and systems; 2012. p. 412530. [43] Gupta A, Gonenc B, Balicki M, Olds K, Handa J, Gehlbach P, et al. Human eye phantom for developing computer and robot-assisted epiretinal membrane peeling. In: Proceedings of the annual international conference of the IEEE engineering in medicine and biology society; 2014. p. 68647. [44] Nambi M, Bernstein PS, Abbott JJ. A compact telemanipulated retinal-surgery system that uses commercially available instruments with a quick-change adapter. J Med Rob Res 2016;1(2):1630001. [45] Kang J, Cheon G. 
Demonstration of subretinal injection using common-path swept source OCT guided microinjector. Appl Sci 2018;8(8):1287. [46] He X, Balicki M, Gehlbach P, Handa J, Taylor R, Iordachita I. A novel dual force sensing instrument with cooperative robotic assistant for vitreoretinal surgery. In: IEEE international conference on robotics and automation; 2013. p. 2138. [47] Kummer MP, Abbott JJ, Dinser S, Nelson BJ. Artificial vitreous humor for in vitro experiments. In: Proceedings of the annual international conference of the IEEE engineering in medicine and biology society; 2007. p. 64069. [48] Wei W, Popplewell C, Chang S, Fine HF, Simaan N. Enabling technology for microvascular stenting in ophthalmic surgery. J Med Devices 2010;4(1):014503. [49] Bergeles C, Kummer MP, Kratochvil BE, Framme C, Nelson BJ. Steerable intravitreal inserts for drug delivery: in vitro and ex vivo mobility experiments. In: International conference on medical image computing and computer-assisted intervention. Springer; 2011. p. 3340. [50] Song C, Park DY, Gehlbach PL, Park SJ, Kang JU. Fiber-optic OCT sensor guided smart micro-forceps for microsurgery. Biomed Opt Express 2013;4(7):104550. [51] Fleming I, Balicki M, Koo J, Iordachita I, Mitchell B, Handa J, et al. Cooperative robot assistant for retinal microsurgery. In: International conference on medical image computing and computer-assisted intervention. Springer; 2008. p. 54350. [52] Cheon GW, Huang Y, Cha J, Gehlbach PL, Kang JU. Accurate real-time depth control for CP-SSOCT distal sensor based handheld microsurgery tools. Biomed Opt Express 2015;6(5):194253. [53] Ueta T, Nakano T, Ida Y, Sugita N, Mitsuishi M, Tamaki Y. Comparison of robot-assisted and manual retinal vessel microcannulation in an animal model. Br J Ophthalmol 2011;95(5):7314. [54] van Overdam KA, Kilic E, Verdijk RM, Manning S. Intra-ocular diathermy forceps. Acta Ophthalmol 2018;96(4):4202. 
[55] de Smet MD, Meenink TCM, Janssens T, Vanheukelom V, Naus GJL, Beelen MJ, et al. Robotic assisted cannulation of occluded retinal veins. PLoS One 2016;11(9):e0162037. [56] Willekens K, Gijbels A, Schoevaerdts L, Esteveny L, Janssens T, Jonckx B, et al. Robot-assisted retinal vein cannulation in an in vivo porcine retinal vein occlusion model. Acta Ophthalmol 2017;95(3):2705. [57] Yang S, Balicki M, Wells TS, MacLachlan RA, Liu X, Kang JU, et al. Improvement of optical coherence tomography using active handheld micromanipulator in vitreoretinal surgery. In: Proceedings of the annual international conference of the IEEE engineering in medicine and biology society; 2013. p. 56747. [58] Allf B, De Juan E. In vivo cannulation of retinal vessels. Graefes Arch Clin Exp Ophthalmol 1987;225(3):2215. [59] Peters T, Cleary K. Image-guided interventions: technology and applications. Springer Science & Business Media; 2008. [60] Boppart SA, Brezinski ME, Fujimoto JG. Chapter 23: Surgical guidance and intervention. In: Bouma BE, Tearney GJ, editors. Handbook of optical coherence tomography. New York: CRC Press; 2001. p. 61348. [61] Kang JU, Huang Y, Zhang K, Ibrahim Z, Cha J, Andrew Lee W, et al. Real-time three-dimensional Fourier-domain optical coherence tomography video image guided microsurgeries. J Biomed Opt 2012;17(8):081403-1. [62] Choma MA, Sarunic MV, Yang C, Izatt JA. Sensitivity advantage of swept source and Fourier domain optical coherence tomography. Opt Express 2003;11(18):21839. [63] Nassif N, Cense B, Park BH, Yun SH, Chen TC, Bouma BE, et al. In vivo human retinal imaging by ultrahigh-speed spectral domain optical coherence tomography. Opt Lett 2004;29(5):4802. [64] Kang JU, Han J-H, Liu X, Zhang K, Song CG, Gehlbach P. Endoscopic functional Fourier domain common-path optical coherence tomography for microsurgery. IEEE J Selected Top Quantum Electron 2010;16(4):78192. [65] Sharma U, Fried NM, Kang JU. 
All-fiber common-path optical coherence tomography: sensitivity optimization and system analysis. IEEE J Selected Top Quantum Electron 2005;11(4):799805. [66] Fercher AF, Hitzenberger CK, Kamp G, El-Zaiat SY. Measurement of intraocular distances by backscattering spectral interferometry. Opt Commun 1995;117(1-2):438.


[67] Wojtkowski M, Bajraszewski T, Targowski P, Kowalczyk A. Real-time in vivo imaging by high-speed spectral optical coherence tomography. Opt Lett 2003;28(19):17457. [68] Yun S-H, Tearney GJ, de Boer JF, Iftimia N, Bouma BE. High-speed optical frequency-domain imaging. Opt Express 2003;11(22):295363. [69] Zhang K, Kang JU. Real-time 4D signal processing and visualization using graphics processing unit on a regular nonlinear-k Fourier-domain OCT system. Opt Express 2010;18(11):1177284. [70] Drexler W, Fujimoto JG. Optical coherence tomography: technology and applications. Springer Science & Business Media; 2008. [71] Leitgeb R, Hitzenberger C, Fercher AF. Performance of Fourier domain vs. time domain optical coherence tomography. Opt Express 2003;11(8): 88994. [72] Liu G, Zhang J, Yu L, Xie T, Chen Z. Real-time polarization-sensitive optical coherence tomography data processing with parallel computing. Appl Opt 2009;48(32):636570. [73] Probst J, Hillmann D, Lankenau EM, Winter C, Oelckers S, Koch P, et al. Optical coherence tomography with online visualization of more than seven rendered volumes per second. J Biomed Opt 2010;15(2):026014. [74] Huang Y, Liu X, Kang JU. Real-time 3D and 4D Fourier domain Doppler optical coherence tomography based on dual graphics processing units. Biomed Opt Express 2012;3(9):216274. [75] Zhang K, Kang JU. Real-time intraoperative 4D full-range FD-OCT based on the dual graphics processing units architecture for microsurgery guidance. Biomed Opt Express 2011;2(4):76470. [76] Machemer R. The development of pars plana vitrectomy: a personal account. Graefes Arch Clin Exp Ophthalmol 1995;233(8):45368. [77] Brod RD. Surgery for diseases of the vitreous and retina. J Lancaster Gen Hosp 2009;4(1):49. [78] Kasner D. Vitrectomy: a new approach to management of vitreous. Highlights Ophthalmol 1969;11:304. [79] Berkelman PJ, Whitcomb LL, Taylor RH, Jensen P. 
A miniature instrument tip force sensor for robot/human cooperative microsurgical manipulation with enhanced force feedback. In: International conference on medical image computing and computer-assisted intervention. Springer; 2000. p. 897906. [80] Berkelman PJ, Whitcomb LL, Taylor RH, Jensen P. A miniature microsurgical instrument tip force sensor for enhanced force feedback during robot-assisted manipulation. IEEE Trans Rob Autom 2003;19(5):91721. [81] Fifanski D, Rivera J, Clogenson M, Baur M, Bertholds A, Llosas P, et al. VivoForce instrument for retinal microsurgery. Proc Surgetical 2017;1557. [82] Liu X, Iordachita I, He X, Taylor R, Kang J. Miniature fiber-optic force sensor based on low-coherence Fabry-Perot interferometry for vitreoretinal microsurgery. Biomed Opt Express 2012;3(5):106276. [83] Gijbels A, Reynaerts D, Stalmans P, Vander Poorten E. Design and manufacturing of a 2-DOF force sensing needle for retinal surgery. In: Fourth joint workshop on computer/robot assisted surgery; 2014. p. 714. [84] He X, Balicki MA, Kang JU, Gehlbach PL, Handa JT, Taylor RH, et al. Force sensing micro-forceps with integrated fiber bragg grating for vitreoretinal surgery. In: Optical fibers and sensors for medical diagnostics and treatment applications XII, vol. 8218. International Society for Optics and Photonics; 2012. p. 82180W. [85] He X, Handa J, Gehlbach P, Taylor R, Iordachita I. A submillimetric 3-DOF force sensing instrument with integrated fiber bragg grating for retinal microsurgery. IEEE Trans Biomed Eng 2014;61(2):52234. [86] Kuru I, Gonenc B, Balicki M, Handa J, Gehlbach P, Taylor RH, et al. Force sensing micro-forceps for robot assisted retinal surgery. In: Proceedings of the annual international conference of the IEEE engineering in medicine and biology society; 2012. p. 14014. [87] Smits J, Ourak M, Gijbels A, Esteveny L, Borghesan G, Schoevaerdts L, et al. 
Development and experimental validation of a combined FBG force and OCT distance sensing needle for robot-assisted retinal vein cannulation. In: IEEE international conference on robotics and automation; 2018. p. 12934. [88] He X, Balicki M, Gehlbach P, Handa J, Taylor R, Iordachita I. A multi-function force sensing instrument for variable admittance robot control in retinal microsurgery. In: IEEE international conference on robotics and automation; 2014. p. 14118. [89] Horise Y, He X, Gehlbach P, Taylor R, Iordachita I. FBG-based sensorized light pipe for robotic intraocular illumination facilitates bimanual retinal microsurgery. In: Proceedings of the annual international conference of the IEEE engineering in medicine and biology society; 2015. p. 136. [90] Balicki M, Han J-H, Iordachita I, Gehlbach P, Handa J, Taylor R, et al. Single fiber optical coherence tomography microsurgical instruments for computer and robot-assisted retinal surgery. In: International conference on medical image computing and computer-assisted intervention; 2009. p. 10815. [91] Han J-H, Balicki M, Zhang K, Liu X, Handa J, Taylor R, et al. Common-path Fourier-domain optical coherence tomography with a fiber optic probe integrated into a surgical needle. In: Conference on lasers and electro-optics. Optical Society of America; 2009. p. CMCC2. [92] Schoevaerdts L, Esteveny L, Borghesan G, Ourak M, Gijbels A, Smits J, et al. Innovative bio-impedance sensor towards puncture detection in eye surgery for retinal vein occlusion treatment. In: IEEE international conference on robotics and automation; 2018. p. 16. [93] Ergeneman O, Dogangil G, Kummer MP, Abbott JJ, Nazeeruddin MK, Nelson BJ. A magnetically controlled wireless optical oxygen sensor for intraocular measurements. IEEE Sens J 2008;8(1):2937. [94] Gonenc B, Gehlbach P, Handa J, Taylor RH, Iordachita I. Motorized force-sensing micro-forceps with tremor cancelling and controlled microvibrations for easier membrane peeling. 
In: IEEE RAS/EMBS international conference on biomedical robotics and biomechatronics; 2014. p. 24451. [95] Gijbels A, Vander Poorten EB, Stalmans P, Reynaerts D. Development and experimental validation of a force sensing needle for robotically assisted retinal vein cannulations. In: IEEE international conference on robotics and automation; 2015. p. 22706.


[96] Gonenc B, Taylor RH, Iordachita I, Gehlbach P, Handa J. Force-sensing microneedle for assisted retinal vein cannulation. In: IEEE sensors conference; 2014. p. 698701. [97] Bourla DH, Hubschman JP, Culjat M, Tsirbas A, Gupta A, Schwartz SD. Feasibility study of intraocular robotic surgery with the da Vinci surgical system. Retina 2008;28(1):1548. [98] Tao YK, Ehlers JP, Toth CA, Izatt JA. Intraoperative spectral domain optical coherence tomography for vitreoretinal surgery. Opt Lett 2010;35(20):331517. [99] Krug M, Lankenau E. Wo2017167850 (a1)—oct system; 2017. [100] Liu X, Li X, Kim D-H, Ilev I, Kang JU. Fiber-optic Fourier-domain common-path OCT. Chin Opt Lett 2008;6(12):899901. [101] Yang S, Balicki M, MacLachlan RA, Liu X, Kang JU, Taylor RH, et al. Optical coherence tomography scanning with a handheld vitreoretinal micromanipulator. In: Proceedings of the annual international conference of the IEEE engineering in medicine and biology society; 2012. p. 94851. [102] Borghesan G, Ourak M, Lankenau E, Hu¨ttmann G, Schulz-Hildebrant H, Willekens K, et al. Single scan OCT-based retina detection for robotassisted retinal vein cannulation. J Med Rob Res 2018;3(02):1840005. [103] Vander Poorten E, Esteveny L, Gijbels A, Rosa B, Schoevaerdts L, Willekens K, et al. Use case for European robotics in ophthalmologic micro-surgery. In: Proceedings of the fifth joint workshop on new technologies for computer/robot assisted surgery; 2015. p. 102. [104] Cereda MG, Faridpooya K, van Meurs JC, et al. First-in-human clinical evaluation of a robot-controlled instrument with a real-time distance sensor in the vitreous cavity; poster presentation at AAO 2018. [105] Saito H, Mitsubayashi K, Togawa T. Detection of needle puncture to blood vessel by using electric conductivity of blood for automatic blood sampling. Sens Actuators, A: Phys 2006;125(2):44650. [106] Schoevaerdts L, Esteveny L, Borghesan G, Ourak M, Reynaerts D, Vander Poorten E. 
Automatic air bubble detection based on bio-impedance for safe drug delivery in retinal veins. In: Proceedings of the Hamlyn symposium on medical robotics; 2018. p. 7-8. [107] Cao K, Pinon R, Schachar I, Jayasundera T, Awtar S. Automatic instrument tracking endo-illuminator for intra-ocular surgeries. J Med Devices 2014;8(3):030932. [108] He X, Van Geirt V, Gehlbach P, Taylor R, Iordachita I. IRIS: integrated robotic intraocular snake. In: IEEE international conference on robotics and automation; 2015. p. 1764-9. [109] Hubschman JP, Bourges JL, Choi W, Mozayan A, Tsirbas A, Kim CJ, et al. The microhand: a new concept of micro-forceps for ocular robotic surgery. Eye 2010;24(2):364. [110] Ikuta K, Kato T, Nagata S. Optimum designed micro active forceps with built-in fiberscope for retinal microsurgery. Med Image Comput Comput Assisted Intervention, LNCS 1998;1496:411-20. [111] Lin F-Y, Bergeles C, Yang G-Z. Biometry-based concentric tubes robot for vitreoretinal surgery. In: Proceedings of the annual international conference of the IEEE engineering in medicine and biology society; 2015. p. 5280-4. [112] Rahimy E, Wilson J, Tsao T, Schwartz S, Hubschman J. Robot-assisted intraocular surgery: development of the IRISS and feasibility studies in an animal model. Eye 2013;27(8):972. [113] Wei W, Goldman R, Simaan N, Fine H, Chang S. Design and theoretical evaluation of micro-surgical manipulators for orbital manipulation and intraocular dexterity. In: IEEE international conference on robotics and automation; 2007. p. 3389-95. [114] Ikuta K, Kato T, Nagata S. Micro active forceps with optical fiber scope for intra-ocular microsurgery. In: Micro electro mechanical systems, 1996, MEMS'96, proceedings. An investigation of micro structures, sensors, actuators, machines and systems. IEEE, the ninth annual international workshop on. IEEE; 1996. p. 456-61. [115] Spaide RF. Macular hole repair with minimal vitrectomy. Retina 2002;22(2):183-6.
[116] Richa R, Linhares R, Comunello E, Von Wangenheim A, Schnitzler J-Y, Wassmer B, et al. Fundus image mosaicking for information augmentation in computer-assisted slit-lamp imaging. IEEE Trans Med Imaging 2014;33(6):1304-12. [117] Can A, Stewart CV, Roysam B, Tanenbaum HL. A feature-based technique for joint, linear estimation of high-order image-to-mosaic transformations: application to mosaicing the curved human retina. Proc IEEE Conf Comput Vision Pattern Recognit 2000;2:585-91. [118] Bandara AMRR, Giragama PWGRMPB. A retinal image enhancement technique for blood vessel segmentation algorithm. In: IEEE international conference on industrial and information systems; 2017. p. 1-5. [119] Goldbaum MH, Hatami N. Accurate retinal artery and vein classification using local binary patterns. Invest Ophthalmol Vis Sci 2014;55(13):232. [120] Hoover A, Kouznetsova V, Goldbaum M. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans Med Imaging 2000;19(3):203-10. [121] Staal J, Abràmoff MD, Niemeijer M, Viergever MA, Van Ginneken B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans Med Imaging 2004;23(4):501-9. [122] Tan B, Wong A, Bizheva K. Enhancement of morphological and vascular features in OCT images using a modified Bayesian residual transform. Biomed Opt Express 2018;9(5):2394-406. [123] Pezzementi Z, Voros S, Hager GD. Articulated object tracking by rendering consistent appearance parts. In: IEEE international conference on robotics and automation; 2009. p. 3940-7. [124] Sznitman R, Richa R, Taylor RH, Jedynak B, Hager GD. Unified detection and tracking of instruments during retinal microsurgery. IEEE Trans Pattern Anal Mach Intell 2014;35(5):1263-73. [125] Burschka D, Corso JJ, Dewan M, Lau W, Li M, Lin H, et al. Navigating inner space: 3-D assistance for minimally invasive surgery. Rob Autom Syst 2005;52:5-26.


[126] Richa R, Balicki M, Sznitman R, Meisner E, Taylor R, Hager G. Vision-based proximity detection in retinal surgery. IEEE Trans Biomed Eng 2012;59(8):2291-301. [127] Sznitman R, Ali K, Richa R, Taylor RH, Hager GD, Fua P. Data-driven visual tracking in retinal microsurgery. In: MICCAI 2012; 2012. p. 568-75. [128] Sznitman R, Becker C, Fua P. Fast part-based classification for instrument detection in minimally invasive surgery. In: MICCAI 2014; 2014. [129] Rieke N, Tan DJ, Alsheakhali M, Tombari F, di San Filippo CA, Belagiannis V, et al. Surgical tool tracking and pose estimation in retinal microsurgery. In: MICCAI 2015; 2015. p. 266-73. [130] Rieke N, Tan DJ, Tombari F, Vizcaíno JP, di San Filippo CA, Eslami A, et al. Real-time online adaption for robust instrument tracking and pose estimation. In: MICCAI 2016; 2016. p. 422-30. [131] Kurmann T, Neila PM, Du X, Fua P, Stoyanov D, Wolf S, et al. Simultaneous recognition and pose estimation of instruments in minimally invasive surgery. In: Medical image computing and computer-assisted intervention; 2017. [132] Laina I, Rieke N, Rupprecht C, Vizcaíno JP, Eslami A, Tombari F, et al. Concurrent segmentation and localization for tracking of surgical instruments. In: MICCAI 2017; 2017. p. 664-72. [133] Rieke N, Tombari F, Navab N. Chapter 4: Computer vision and machine learning for surgical instrument tracking: focus: random forest-based microsurgical tool tracking. In: Leo M, Farinella GM, editors. Computer vision for assistive healthcare, computer vision and pattern recognition. Academic Press; 2018. p. 105-26. [134] Weiss J, Rieke N, Nasseri MA, Maier M, Eslami A, Navab N. Fast 5DOF needle tracking in iOCT. Int J Comput Assisted Radiol Surg 2018;13(6):787-96. [135] Matinfar S, Nasseri MA, Eck U, Kowalsky M, Roodaki H, Navab N, et al. Surgical soundtracks: automatic acoustic augmentation of surgical procedures. Int J Comput Assisted Radiol Surg 2018;13(9):1345-55.
[136] Roodaki H, Navab N, Eslami A, Stapleton C, Navab N. Sonifeye: Sonification of visual information using physical modeling sound synthesis. IEEE Trans Visual Comput Graphics 2017;23(11):236671. [137] Ben Gayed M, Guerrouad A, Diaz C, Lepers B, Vidal P. An advanced control micromanipulator for surgical applications. Syst Sci 1987;13 (12):12334. [138] Guerrouad A, Vidal P. SMOS: stereotaxical microtelemanipulator for ocular surgery. In: Proceedings of the annual international conference of the IEEE engineering in medicine and biology society; 1989. p. 87980. [139] He C-Y, Huang L, Yang Y, Liang Q-F, Li Y-K. Research and realization of a master-slave robotic system for retinal vascular bypass surgery. Chin J Mech Eng 2018;31(1):78. [140] MacLachlan RA, Becker BC, Cuevas Tabare´s J, Podnar GW, Lobes Jr LA, et al. Micron: an actively stabilized handheld tool for microsurgery. IEEE Trans Rob 2012;28(1):195212. [141] Yang S, MacLachlan RA, Riviere CN. Manipulator design and operation for a six-degree-of-freedom handheld tremor-canceling microsurgical instrument. IEEE/ASME Trans Mechatron 2015;20(2):76172. [142] Wei W, Goldman RE, Fine HF, Chang S, Simaan N. Performance evaluation for multi-arm manipulation of hollow suspended organs. IEEE Trans Rob 2009;25(1):14757. [143] Bergeles C, Kratochvil BE, Nelson BJ. Visually servoing magnetic intraocular microdevices. IEEE Trans Rob 2012;28(4):798809. [144] Kummer MP, Abbott JJ, Kratochvil BE, Borer R, Sengul A, Nelson BJ. OctoMag: An electromagnetic system for 5-DOF wireless micromanipulation. IEEE Trans Rob 2010;26(6):100617. [145] He X, Roppenecker D, Gierlach D, Balicki M, Olds K, Gehlbach P, et al. Toward clinically applicable steady-hand eye robot for vitreoretinal surgery. In: ASME international mechanical engineering congress and exposition. American Society of Mechanical Engineers; 2012. 14553. ¨ neri A, Balicki MA, Handa J, Gehlbach P, Taylor RH, Iordachita I. 
New steady-hand eye robot with micro-force sensing for vitreoretinal sur[146] U gery. In: IEEE RAS/EMBS international conference on biomedical robotics and biomechatronics; 2010. p. 8149. [147] Song J, Gonenc B, Gua J, Iordachita I. Intraocular snake integrated with the steady-hand eye robot for assisted retinal microsurgery. In: IEEE international conference on robotics and automation; 2017. p. 67249. [148] Mablekos-Alexiou A, Ourselin S, Da Cruz L, Bergeles C. Requirements based design and end-to-end dynamic modeling of a robotic tool for vitreoretinal surgery. In: IEEE international conference on robotics and automation; 2018. p. 13541. [149] Caers P, Gijbels A, De Volder M, Gorissen B, Stalmans P, Reynaerts D, et al. Precision experiments on a comanipulated robotic system for use in retinal surgery. In: Proceedings of the SCATh joint workshop on new technologies for computer/robot assisted surgery; 2011. p. 17. [150] Gijbels A, Wouters N, Stalmans P, Van Brussel H, Reynaerts D, Vander Poorten E. Design and realisation of a novel robotic manipulator for retinal surgery. In: IEEE/RSJ international conference on intelligent robots and systems; 2013. p. 3598603. [151] Hunter IW, Doukoglou TD, Lafontaine SR, Charette PG, Jones LA, Sagar MA, et al. A teleoperated microsurgical robot and associated virtual environment for eye surgery. Presence: Teleoperators Virtual Environ 1993;2(4):26580. [152] Hunter IW, Lafontaine S, Nielsen PMF, Hunter PJ, Hollerbach JM. Manipulation and dynamic mechanical testing of microscopic objects using a tele-micro-robot system. In: IEEE international conference on robotics and automation; 1990. p. 39. [153] Schenker PS, Das H, Ohm TR. A new robot for high dexterity microsurgery. Computer vision, virtual reality and robotics in medicine. Springer; 1995. p. 11522. [154] Grace KW, Colgate JE, Glucksberg MR, Chun JH. A six degree of freedom micromanipulator for ophthalmic surgery. 
In: IEEE international conference on robotics and automation; 1993;1:630635.

670

Handbook of Robotic and Image-Guided Surgery


37 Ventilation Tube Applicator: A Revolutionary Office-Based Solution for the Treatment of Otitis Media With Effusion

Kok Kiong Tan1, Wenyu Liang1, Cailin Ng2, Chee Wee Gan3 and Hsueh Yee Lim3

1 Department of Electrical and Computer Engineering, National University of Singapore, Singapore
2 NUS Graduate School for Integrative Sciences and Engineering, Singapore, Singapore
3 Department of Otolaryngology, National University of Singapore, Singapore, Singapore

ABSTRACT The ventilation tube (VT) applicator (VTA) is a novel precision surgical device that allows an office-based VT insertion procedure for patients suffering from otitis media with effusion (OME). OME is characterized by an accumulation of fluid in the middle ear space, causing infection and conductive hearing loss. When medication as the first treatment fails, a VT is usually surgically inserted into the tympanic membrane (TM) of the patient to drain the fluid. This procedure involves myringotomy (i.e., making an incision on the TM using a surgical knife), followed by inserting a VT into the incision. The procedure takes around 15–30 minutes and is commonly done in an operating theater under general anesthesia (GA). The VTA utilizes a highly integrated mechatronic system that is able to complete the TM incision and tube insertion precisely and efficiently, within 1 second. With its automatic process, the surgical procedure can be moved into the office and performed under local anesthesia or moderate sedation. Without the use of an operating theater and its associated manpower, large savings in time and treatment costs can be achieved, and at the same time the negative effects of GA can be avoided. This chapter describes the mechatronic system and motion control system of the VTA.

Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00037-2 © 2020 Elsevier Inc. All rights reserved.


37.1 Introduction

Otitis media with effusion (OME) is a very common ear disease that causes body imbalance and discomfort, and may even result in irreversible damage to the middle ear structure. It arises from a dysfunctional Eustachian tube, which causes fluid to accumulate in the middle ear space. OME affects people of all ages worldwide, though it is more commonly encountered in children and infants. When medication as a treatment for OME fails, a ventilation tube (VT) (also known as a “grommet” or “tympanostomy tube”) is surgically inserted into the eardrum (medically known as the “tympanic membrane,” TM) so that the accumulated fluid can be drained out. The procedure involves a few steps and utilizes three to four main tools: a myringotomy knife, a suction tube, forceps, and, depending on surgeon preference, a Rosen needle as well. The procedure starts with the surgeon performing a myringotomy, which involves making a small incision on the TM with a myringotomy knife, followed by suction to remove the middle ear fluid behind the TM. Next, the surgeon places a grommet into the incision using forceps, and then pushes the grommet into the incision using either the forceps or a pick. This surgery can be performed under local anesthesia (LA) in adults if they can tolerate the discomfort. However, the usual practice for young children is to use general anesthesia (GA), as the pain tolerance level of young children is very low and they must keep absolutely still during the procedure to prevent accidental damage to the internal ear structure. According to an estimate [1,2], approximately 70%–90% of children develop middle ear effusion by the age of 2, and around 4%–10% of children require surgical intervention. The insertion of a VT is the most common reason for a child to undergo GA [3]. Studies have shown that there are long-term health effects from GA, including a possible delay in the brain development of children [4].
The VT insertion procedure is considered minor surgery that can be completed in 15 minutes by an experienced surgeon. However, the setup is complex and the cost is high, as it requires the use of an operating theater, several surgical tools, and other skilled personnel such as anesthetists, surgical assistants, and nurses. The conventional and still predominant method for this surgery has several limitations [5–7]:
1. the need for GA with its associated risks (some adults only require LA if they can tolerate the discomfort);
2. high dependence on the surgeon’s skills;
3. high cost (each surgery costs about US$2000 [8]);
4. reduced access for patients in areas with poor medical infrastructure; and
5. delay in treatment due to the waiting time for the operating theater and preparation for the surgery.

In recent years, with advances in mechatronics and robotics, automatic or semiautomatic surgical devices have been designed and developed to assist surgeons and improve the operative success rate. They offer the advantages of high precision, high speed, repeatability, stability, and convenience in medical applications. For example, in Refs. [9–11], a high-precision computer-controlled micromanipulation system is developed for intracytoplasmic sperm injection (ICSI), a human-assisted method for animal or human reproduction. The precision system helps to enhance the oocyte survival rate in ICSI and to shorten the process time. Furthermore, the injection process can be repeated precisely and consistently.

There is also an increasing number of surgical devices designed to be used in the doctor’s or surgeon’s office rather than the operating theater. These devices, often called “office-based surgical devices,” shift conventional surgical procedures from the operating theater to the confines of the office. Office-based surgical devices have the following advantages:
1. removing the need for the extensive and expensive resources of operating-room settings, including specialized equipment and the surgical team;
2. simplifying the surgical procedures and thus avoiding high dependence on the surgeon’s skills; and
3. improving the precision, speed, performance, and success rate of the surgery.

Office-based surgical devices can therefore greatly reduce the cost, waiting time, and operating time, and thus increase access to medical treatment for patients in areas with poor medical infrastructure. Thus, to overcome the limitations of the current surgical treatment for OME, an office-based precision surgical device is a good solution.

37.1.1 Objectives

The main objectives of this study are to revisit the current approaches and to develop a novel “all-in-one” precision surgical device that allows office-based surgical treatment of OME to be accomplished in a patient under LA. Significantly, the precision surgical device is to overcome the disadvantages of the current art and to simplify the extensive setup. The device mainly consists of a mechanical system, a sensing system, and a motion control system. The specific objectives are to:

- Develop the mechatronic system of a surgical device that carries out both incision and insertion of the grommet in a single procedure, automatically and quickly, avoiding the need for GA, costly expertise and equipment, and treatment delays.
- Design the precision motion control system for the device in order to achieve high precision, high speed, and high performance for the procedures.

The presented study may provide a better solution for the surgical treatment of OME without the need for a highly skilled surgeon or a complex setup. The precision motion control system should achieve precise and fast motions for the device and thus help it obtain a high success rate for VT insertion.

37.1.2 Challenges

There are four main challenges to the development of the device and these challenges shape the selection and design of the constituent components of the device.

37.1.2.1 Space and accessibility

The subject of interest in the design of the device is the TM, which has a Young’s modulus ranging from 20 to 40 MPa, a mean diameter of about 8–10 mm, and a nonuniform spatial thickness distribution in the range of 30–120 μm [12,13]. It is a delicate elastic membrane with a convex surface contour, and these characteristics vary from one person to the next. To reach the TM, the device has to traverse the ear canal, which is approximately 25–35 mm in length measured from the ear hole (external auditory meatus) to the TM, with a slight bend along the path. The ear canal diameter in an adult is about 5–10 mm (slightly smaller than the diameter of the TM). Furthermore, there are ear bones located at the upper portion of the middle ear space behind the membrane for transmitting sound vibration from the TM. In particular, the malleus bone attaches to the inner surface of the upper part of the TM. This part of the membrane is thus out-of-bounds for myringotomy, so as not to hit this bone or interfere with its vibration, leaving an even smaller area at the lower quadrant of the membrane in which the device can work. There are various types of VT; one commonly used pediatric VT is the Tiny Titan (Medtronic product ID 1056101), shown in Fig. 37.1. It is made of titanium (with superior biocompatibility) and is one of the smallest tubes, with an outer flange diameter of just 1.5–1.6 mm, a length of 1.6 mm, and an inner diameter of 0.76 mm. An incision of 1.3–1.5 mm is required to be made in a small area of approximately 6.5–8 mm² during myringotomy, and the tube insertion has to be accomplished within this small area after clearing the tightly constricted ear canal. In order to carry out the full procedure of myringotomy and VT insertion at once, the tools, including the surgical knife, the VT, and a holder to manipulate the VT, are required to be collectively encased and brought toward the TM through the canal.
Above all, proper synchronization of the tools and operational steps is required to successfully and safely execute the process with adequate sensing mechanisms. This is the main challenge behind the design of the device.
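The dimensional constraints above lend themselves to a quick numerical sanity check. The sketch below is purely illustrative: the constants are the figures quoted in this subsection, while the helper functions (and the square-patch approximation of the working area) are our own assumptions, not part of the device.

```python
# Illustrative feasibility check of the dimensional constraints quoted in
# the text (all lengths in mm). Function names are ours, not the authors'.

TM_DIAMETER = (8.0, 10.0)          # mean tympanic membrane diameter
CANAL_LENGTH = (25.0, 35.0)        # ear canal, meatus to TM
CANAL_DIAMETER = (5.0, 10.0)       # adult ear canal diameter
VT_FLANGE_DIAMETER = (1.5, 1.6)    # Tiny Titan outer flange
INCISION_LENGTH = (1.3, 1.5)       # required myringotomy incision
WORK_AREA_MM2 = (6.5, 8.0)         # usable area, lower quadrant of the TM

def tool_fits_canal(tool_od_mm: float) -> bool:
    """Tool outer diameter must clear even the narrowest adult canal."""
    return tool_od_mm < CANAL_DIAMETER[0]

def incision_fits_work_area() -> bool:
    """Worst case: the longest incision inside the smallest work area.

    Crude sufficient condition: model the work area as a square patch
    and require the incision to fit along its diagonal.
    """
    side = WORK_AREA_MM2[0] ** 0.5      # ~2.55 mm
    diagonal = side * 2 ** 0.5          # ~3.6 mm
    return INCISION_LENGTH[1] <= diagonal

print(tool_fits_canal(4.0))        # a 4 mm tool clears a 5 mm canal: True
print(incision_fits_work_area())   # True
```

Under these assumptions the worst-case 1.5 mm incision fits comfortably inside the smallest 6.5 mm² working area, so the binding constraint is the canal clearance rather than the incision itself.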

FIGURE 37.1 Tiny Titan ventilation tube (outer flange diameter 1.5–1.6 mm; length 1.6 mm; inner diameter 0.76 mm).

37.1.2.2 Operation time

Throughout the procedure, the patient is proposed to be under LA instead of GA. The delicate operation has to be accomplished as near-instantaneously as possible to minimize trauma to the patient and to avoid agitating them and causing undue movements that would affect the outcome. It should be further noted that many of these patients will be children, making the task of keeping them still more difficult. Thus the surgical time to complete the myringotomy and grommet insertion is important: it should be much shorter than with conventional treatment, and short enough to overcome or alleviate the effects of undue patient movements.


37.1.2.3 Precision and repeatability

Only a quarter of the TM is an ideal site for the operation, as highlighted above. A small incision of the correct size to weave in the grommet has to be made accurately in this small area. The incision should not deform the TM unduly, damage it, or cause much discomfort to the patient; thus the deformation of the TM during incision should be kept as small as possible by the control scheme. Following the precise incision, the tiny VT has to be manipulated to fit into the slit precisely, quickly, and again without undue deformation or tearing of the TM. The manipulation of small parts over a small area, further confined by the ear canal, requires the device and its control actions to be highly precise and repeatable. This is another major challenge.

37.1.2.4 Diversity

As mentioned before, no two TMs are identical. They differ in dimensions, anatomical orientation, flatness, and mechanical characteristics such as Young’s modulus, and the suitable area for incision also varies. The ear canal likewise varies from patient to patient: it may not be straight, which results in more constriction. For instance, in certain patients, such as children with Down syndrome or cleft palate abnormalities, the ear canals are sometimes narrower and/or more tortuous than in patients without congenital conditions. Similarly, some patients with congenital or acquired ear disorders, such as granulation tissue or lumps on the skin of the ear canal, have a relatively narrower canal. Exactly repeating a successful operation on the next TM may not work; a fair amount of feedback and intelligent adaptation is needed.

37.1.3 System architecture and organization of this chapter

To address these challenges, the surgical device is designed to meet all of the above requirements. The system architecture of the surgical device is shown in Fig. 37.2. It is controlled by a computer with an embedded control card and mainly consists of the following systems:
1. a mechanical system (i.e., the main body), designed to carry out the whole surgical operation on the TM inside the ear;
2. a force sensing system, designed to provide the force information between the tool set and the membrane to the surgeon and the device during the surgery; and
3. a motion control system, designed to yield the necessary precise and customized motion profile for the device.

The mechanical structure and design of the device are described in Section 37.2. Section 37.3 introduces the sensing system and Section 37.4 describes the motion control system that enables precise control of the motor. The experimental results are presented in Section 37.5 and, finally, conclusions are drawn in the final section.
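The coordination of these subsystems can be illustrated with a toy control loop: the computer advances the tool set along the Z-axis until the force sensing system reports contact with the TM. This is a minimal Python sketch under our own assumptions; the class names, the 0.05 N contact threshold, and the 0.25 mm step are illustrative and are not taken from the device.

```python
# Toy sketch of the VTA architecture (cf. Fig. 37.2): a computer coordinates
# force sensing and motion control. All names/values here are illustrative.
from dataclasses import dataclass

@dataclass
class ForceSensing:
    contact_threshold_n: float = 0.05   # hypothetical contact threshold (N)

    def in_contact(self, force_n: float) -> bool:
        return force_n >= self.contact_threshold_n

@dataclass
class MotionControl:
    position_mm: float = 0.0

    def advance(self, dz_mm: float) -> float:
        self.position_mm += dz_mm
        return self.position_mm

class VTAController:
    """Advance the tool set until the force sensor reports TM contact."""

    def __init__(self):
        self.force = ForceSensing()
        self.motion = MotionControl()

    def approach(self, force_readings_n, step_mm=0.25):
        for f in force_readings_n:
            if self.force.in_contact(f):
                return self.motion.position_mm   # stop at contact
            self.motion.advance(step_mm)
        return self.motion.position_mm

ctrl = VTAController()
# Simulated force trace: free travel, then contact on the 4th sample.
print(ctrl.approach([0.0, 0.0, 0.01, 0.2]))  # prints 0.75
```

The point of the sketch is the gating pattern: motion commands are issued only while the force channel reports free travel, which is how the real device would avoid pushing the tool set past the membrane.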

37.2 Mechanical system

In this section, the mechanical system (main body) of the designed device is introduced first, followed by a detailed presentation of the mechanical design.

37.2.1 Mechanical structure

FIGURE 37.2 System architecture. VTA, Ventilation tube applicator.

FIGURE 37.3 Mechanical structure of the device.

The mechanical structure of the myringotomy and VT insertion device is shown in Fig. 37.3. It mainly consists of the following components:
1. A linear ultrasonic motor (USM) stage for driving the tool set (and VT) along the Z-axis to complete all the required surgical procedures. The USM is a kind of piezoelectric actuator/motor (PA/PM), generally designed and implemented based on the piezoelectric effect [14]. The USM offers the advantages of fine accuracy, fast response, and a displacement resolution far beyond what is possible manually, which ensures the precision and repeatability of the surgical procedure. Moreover, the USM offers a longer travel range than other types of PA/PM, which generate motions directly from the deformations of the piezoelectric material. The USM with embedded linear encoder used in the device is manufactured by Physik Instrumente GmbH & Co. KG. The minimum incremental motion is 0.3 μm, the travel range is 19 mm, and the maximum achievable velocity is 400 mm/s along the Z-axis.
2. A hollow cutter for making the incision on the TM and holding the tube through the tube’s hole.
3. A hollow holder that allows the cutter to extend from or retract into it, for pushing the tube forward to insert it onto the TM.
4. A cover that shields the tool set (the cutter and the holder) to prevent the tube from falling off the cutter while the tool set is moving into the ear canal, and that minimizes direct contact between the holder and the ear canal so as to avoid false alarms from the force sensing.
5. A cutter retraction mechanism comprising a servo motor with a link mechanism for moving the cutter forward and backward inside the holder.
6. A force sensor for providing the contact-force information between the tool set and the TM.
7. A handle with a single push-trigger button, for supporting the device as well as for manipulation by the surgeon. The button facilitates the “point and click” concept of the device.

37.2.2

Mechanical design

In the mechanical system, there are two key components: the tool set (the cutter and the holder) and the cutter retraction mechanism. In the following, both components are presented in detail.

37.2.2.1 Tool set
To address the space and accessibility challenge, a tightly integrated tool set, built around a telescopic structure, is designed to allow the required tools and parts to be brought close to the TM in one pass, and to allow each of them to carry out its function in the right order and at the right time. In addition, the tool set is designed to be easily removable for sterilization and replacement. In particular, the cover and the tool set with the tube are integrated compactly into a telescopic structure, where the cover is the outer core, followed by the holder and the cutter in the inner core. This design minimizes the size of the required tools and allows them to enter the limited space of the ear canal, whose diameter is around 5 mm. The overall dimensions of the device are about 112 mm (length) × 45 mm (width) × 155 mm (height) and its weight is approximately 220 g. The compact size and light weight make the device portable and easily manipulated.

The tool set consists of a cutter, a hollow holder, and a cover that acts as a shield. Of interest are the design of the cutter and the stress and deformation analysis of the tool set, which are elaborated below.

Cutter design
The cutter is an important tool of the device, primarily used to create the incision on the membrane. It is designed to be fine enough to slot through the hollow of the grommet and hold it in place. A needle-like cutter with a cylindrical shape is used. Through initial empirical trials, it was observed that a simple needle, as shown in Fig. 37.4A, or a syringe needle can incise the membrane, albeit not as efficiently as the surgical knife (otology myringotomy blade). Sharpness is an important parameter of surgical needles/cutters [15]. In Ref. [16], it was indicated that a sharp needle penetrates vital structures more easily than a blunt one. Drawing on the results with the syringe needle, an improved customized version of the cutter, shown in Fig. 37.4B, achieves a sharper cutter tip. The blade of the improved version has a two-step cutting edge: the first angle β is smaller than the next one γ (i.e., β < α < γ). Hence, its first cutting edge is sharper than that of the simple needle (because the angle β is smaller than the angle α of the simple needle) while the same tapered length l is maintained. For example, let α = 19 degrees, β = 15 degrees, and γ = 22.5 degrees; then both tips have the same length l = 2.2 mm, but the improved cutter has a sharper cutting tip than the simple needle. Generally, the sharper the cutter tip, the shorter the cutting time and the smaller the TM deformation. Therefore the improved cutter is adopted to minimize both the myringotomy time and the TM deformation. A slit is preferred to a circular hole for the same reason mentioned for the laser myringotomy device.
Stress and deformation analysis
Both the cutter and the holder have small diameters and relatively long lengths, and they are the main parts subjected to force during the procedure. A stress and deformation analysis is done to ensure that the force is tolerable, and that the deformation of the cutter under stress does not affect the performance during myringotomy. During the whole process, the critical instant is when the cutter is incising the TM. Since the ends of the cutter and the holder are fixed, they can be modeled as a cantilever beam system; the force analysis (during myringotomy) is shown in Fig. 37.5. At this instant, two forces are applied directly on the cutter tip, along the Z-axis and the X-axis, each reaching a maximum of close to 1 N. The maximum force was estimated from penetration and cutting force measurement tests on mock membranes, with a safety factor of 2.5. Because of the changing cross-sectional shapes of the two separate mechanical parts, finite element analysis (FEA) [17,18], which offers the advantage of being applicable to bodies of arbitrary shape, is used. Taking the aforementioned conditions into account, the stress and strain analysis of the cutter and the holder was carried out in a simulation study using the FEA tool of Autodesk Inventor. The stress and displacement distributions when 1 N forces are applied are illustrated in Fig. 37.6.

FIGURE 37.4 Design of cutter. (A) Simple needle; (B) improved needle cutter.

FIGURE 37.5 Force analysis of the cutter and the holder.

Ventilation Tube Applicator: A Revolutionary Office-Based Solution Chapter | 37

679

FIGURE 37.6 Stress and displacement distribution of the tool set.

As can be observed from the figure, the maximum stress and strain occur on the cutter (close to the tip of the holder). The maximum equivalent stress is 278.5 MPa. Considering that the yield strength of 304 stainless steel is 290 MPa, the minimum safety factor is about 1.04, which is greater than 1. Hence, the cutter design satisfies the strength requirements and the cutter is safe for use in this application. The maximum deformation occurs at the cutter tip and is 0.2895 mm. In addition, the deformations of most portions are less than 0.2 mm, which is acceptable in this application.

37.2.2.2 Mechanism for cutter retraction
According to the working principle of holding and releasing the grommet by the holder and cutter, the cutter should be efficiently retracted into the holder. Furthermore, the cutter needs to be retracted prior to grommet insertion, as a protruding cutter at that point may hit the inner bone structures located just behind the membrane. A sine generator mechanism, shown in Fig. 37.7, is designed and implemented for this purpose.

FIGURE 37.7 Design of cutter retraction mechanism (left: ϕ = 30 degrees and right: ϕ = 90 degrees).

As can be seen in Fig. 37.7, as the crank AB rotates clockwise or anticlockwise, bar 03 moves forward or backward linearly along the fixed guide 04 (the holder). The kinematics of the sine mechanism is given by:

s = L sin ϕ  (37.1)

where s is the stroke of the cutter (bar 03), L is the length of the crank AB (12.5 mm in this design), and ϕ is the angle between the crank AB and the vertical (axis aa). The resulting stroke is thus directly related to the sine of the angle, which is why the mechanism is called a "sine generator." In Fig. 37.7, as the crank moves from 30 to 90 degrees, the corresponding linear displacement goes from 0.5L (6.25 mm) to L (12.5 mm), and thus the cutter retracts 0.5L into the holder, which is sufficient for the intended purpose. The sine generator is preferred over other coupling mechanisms for the following reasons: 1. the actual linear displacement is easily determined via the sine of the rotational angle; 2. the link mechanism is easily fabricated; and 3. undesirable backlash (which may affect the system precision) is avoided. A servo motor with a resolution of about 1 degree is used. The linear resolution of the sine generator is determined by Eq. (37.2):

Δs = |L sin(ϕ + Δϕ) − L sin(ϕ)|  (37.2)

where Δϕ is the rotary resolution (Δϕ = 1 degree) and Δs is the linear resolution. From 30 to 90 degrees, the linear resolution ranges from 0.0019 to 0.1880 mm, which is less than 0.2 mm and accurate enough for the cutter retraction. A total retraction of about 6.5 mm over two steps is thus efficiently enabled by this simple servo motor and coupling mechanism.
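The kinematics above are easy to check numerically. The following sketch evaluates Eqs. (37.1) and (37.2) for the stated crank length and servo resolution (the variable names are ours, not from the text):

```python
import math

L = 12.5                     # crank length AB in mm (from the text)
DPHI = math.radians(1.0)     # servo resolution of about 1 degree

def stroke(phi_deg):
    """Eq. (37.1): linear stroke s of the cutter for crank angle phi."""
    return L * math.sin(math.radians(phi_deg))

def linear_resolution(phi_deg):
    """Eq. (37.2): displacement produced by a single 1-degree servo step."""
    phi = math.radians(phi_deg)
    return abs(L * math.sin(phi + DPHI) - L * math.sin(phi))

retraction = stroke(90) - stroke(30)   # 0.5*L = 6.25 mm of cutter retraction
res_coarse = linear_resolution(30)     # coarsest step, near 30 degrees
res_fine = linear_resolution(89)       # finest step, approaching 90 degrees
```

The resolution is coarsest near 30 degrees and finest as the crank approaches 90 degrees, matching the 0.0019–0.1880 mm range quoted above.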

37.3

Sensing system

This section describes the working process and the force sensing system of the device. The force sensing system is important as it is the main driver behind the automation of the device and assists the controller in determining whether to move to the next step of the working process. It is designed carefully so that the contact force between the tool set and the membrane can be measured precisely.

37.3.1

Working process

The working process of the device involves five steps, as shown in Fig. 37.8:
1. Initialization: The desired insertion site is determined by the surgeon and the cutter tip is pointed at the spot with the help of the microscope. Then the cutter is moved slowly close to the TM (but not touching it) by the surgeon.
2. Touch detection: The surgeon activates the device motion sequence by pressing the trigger button. Upon activation, the cutter is slightly retracted into the holder, and the tool set (with tube) is driven by the USM stage toward the membrane until the tube touches the membrane at a certain contact force. The touch position is the starting point for the rest of the procedure.
3. Myringotomy: The cutter retraction mechanism pushes out the cutter to make an incision on the membrane. After the incision is made, the mechanism retracts the cutter.
4. Tube insertion: The holder is moved forward by the USM stage to push the tube through the incision on the membrane to perform the insertion.
5. Tube release: The entire tool set is withdrawn to release the tube on the membrane.

Ventilation Tube Applicator: A Revolutionary Office-Based Solution Chapter | 37

681

FIGURE 37.8 Working process of the proposed device. (A) touch detection; (B) myringotomy; (C) tube insertion; (D) tube release.

FIGURE 37.9 Force-based supervisory controller. USM, Ultrasonic motor; VTA, ventilation tube applicator.

In particular, steps 2 to 5 are accomplished automatically and sequentially, guided by the sensing system.

37.3.2

Force-based supervisory controller

The force sensing system is used not only to identify and monitor each instance during the surgical process, but also to enable the device to carry out the process automatically. To this end, a force-based supervisory controller is designed as shown in Fig. 37.9. The supervisory controller consists of an instance correlator and a motion sequence selector. The instance correlator identifies each instance during the process according to the force sensor output, the order of the instances (or motion sequences), and the specified conditions (i.e., membrane touched, membrane incised, tube inserted, tool set withdrawn). The motion sequence selector selects the designed motion sequences for the different instances based on the output of the correlator. For example, when the surgical process starts, the correlator first determines whether the membrane is touched. If it is not, the selector selects the motion sequence for touching as the reference signal to the motion controller (i.e., the USM stage moves the tool set forward along the Z-axis). The correlator continuously checks whether the force sensor output fulfills the touching condition. Once the output indicates that the cutter touches the membrane, the correlator identifies the touch and sends a signal to the selector, which then selects the following motion sequences (stopping the motion sequences for touching and triggering the motion sequences for


incision). During the incision (myringotomy) procedure, the motion sequences for incision will be selected and thus the myringotomy is started. Once the motion sequence for incision is done, the instance correlator will identify whether the membrane is penetrated. If it is penetrated, the selector will select the motion sequences for the next instance: VT insertion. If it is not penetrated, the selector will select the motion sequences for incision (i.e., the myringotomy procedure will be carried out) again until the membrane is penetrated. Thus the supervisory controller is a force feedback control system working in an outer loop which supervises the proposed device to carry out the process sequences automatically and systematically. It also highlights the importance of a reliable force sensing element and method to provide accurate measurements to the supervisory controller.
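As a rough sketch (state names and thresholds are illustrative, not the device's actual values), the correlator/selector pair described above can be expressed as a small state machine driven by the force reading:

```python
# Motion sequences in the order they must occur (Section 37.3.2).
SEQUENCE = ["touch", "incise", "insert", "withdraw"]

class SupervisoryController:
    """Instance correlator + motion sequence selector (illustrative)."""

    def __init__(self, conditions):
        # conditions maps each sequence name to a predicate on the
        # measured force that signals the instance has completed.
        self.conditions = conditions
        self.idx = 0

    def active_motion(self):
        return SEQUENCE[self.idx] if self.idx < len(SEQUENCE) else None

    def update(self, force):
        """Advance to the next motion sequence once the current
        instance's force condition is met; the incision step simply
        stays selected (i.e., is retried) until penetration is seen."""
        state = self.active_motion()
        if state is not None and self.conditions[state](force):
            self.idx += 1
        return self.active_motion()

# Illustrative force thresholds in newtons (hypothetical values).
ctrl = SupervisoryController({
    "touch": lambda f: f >= 0.15,     # membrane touched
    "incise": lambda f: f <= 0.05,    # force drop => membrane penetrated
    "insert": lambda f: f >= 0.30,    # insertion force peak seen
    "withdraw": lambda f: f <= 0.02,  # tool set withdrawn
})
```

Each call to `update` plays the role of one supervisory cycle: the correlator checks the current condition against the force sample, and the selector returns the motion sequence to hand to the motion controller.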

37.3.3

Installation of the force sensor

A highly sensitive, low-cost force sensor (manufactured by Honeywell International Inc.) with a sensitivity of 0.12 mV/g, which provides precise and reliable force sensing in a compact commercial-grade package, is used to provide accurate force measurements to the device. The selected force sensor uses a silicon-implanted piezoresistor whose resistance changes according to the contact force. Once a force is applied to the sensor probe, it is concentrated through the stainless steel ball directly onto the silicon sensing element of the sensor. The resistance changes according to the amount of contact force being applied. Moreover, an amplifier is connected to the sensor output to amplify the output signals. The direct force measurement method is used in the design. The installation of the force sensor is shown in Fig. 37.10. The force sensor is mounted on the fixed plate while the cutter and the holder are mounted on the movable base. A linear ball guideway links the base and the plate so that the friction between them is minimized. The base is constrained by the slide locks to move on the guideway in the negative direction of the Z-axis only. The probe of the force sensor contacts the movable base, so the sensor can measure the force applied on the base in the Z-direction. As can be seen in Fig. 37.10, since the cutter and the holder are fixed on the base, they can be considered as one rigid body. Through the contact between the force sensor and the movable base carrying the cutter/holder, any external force exerted on the cutter/holder is transmitted to the base and measured by the sensor. For the friction Ff of the linear guideway, the coefficient of friction is 0.002–0.003 while the load on it is about 0.4 N; hence, the static friction is 0.0008–0.0012 N, which can be ignored. Thus the measured force Fm can be taken to represent the applied force Fn (i.e., Fm ≈ Fn).
During the myringotomy and grommet insertion process, the measured force Fm is sensitive enough to distinguish the following four milestones: 1. the tool set has just engaged the membrane; 2. the cutter makes an incision on the membrane; 3. the VT is inserted; and 4. the entire tool set is withdrawn, as shown in Fig. 37.11. Cases 1 and 2 arise before cutter retraction while cases 3 and 4 arise after it, so the milestones can be further differentiated by the order in which they occur. Based on the force sensor output and the order of these events, the time instants corresponding to these four milestones can be identified and differentiated. This enables the synchronization of the various functions of the device, minimizes the process time, and improves the success rate.
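A minimal sketch of how the four milestones might be extracted from a sampled force trace (the thresholds and the synthetic trace below are invented for illustration, not measured data):

```python
def detect_milestones(force, touch_thr=0.15, drop=0.1, release_thr=0.05):
    """Return sample indices for: touch, incised (sharp drop after the
    touch peak), inserted (second rise), withdrawn (force near zero).
    Ordering is enforced by scanning the trace as a state machine."""
    events, state, peak = {}, "idle", 0.0
    for i, f in enumerate(force):
        if state == "idle" and f > touch_thr:
            events["touch"], state, peak = i, "touching", f
        elif state == "touching":
            peak = max(peak, f)
            if f < peak - drop:
                events["incised"], state = i, "incised"
        elif state == "incised" and f > touch_thr:
            events["inserted"], state = i, "inserted"
        elif state == "inserted" and f < release_thr:
            events["withdrawn"], state = i, "done"
    return events

# Synthetic trace mimicking Fig. 37.11: rise, drop, second rise, release.
trace = [0.0] * 5 + [0.2, 0.3, 0.4] + [0.05] * 3 + [0.3, 0.5] + [0.0] * 3
events = detect_milestones(trace)
```

Because the two pre-retraction and two post-retraction events are disambiguated by order, a single-axis force signal suffices, as the text notes.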

37.4

Motion control system

The piezoelectric USM stage is the core component for providing high-precision motion, as it offers high accuracy, high speed and resolution, long travel distance, small size, and light weight. The USM works on the principle of friction generated between the piezoceramic plate mounted in the stator and the friction bar attached to the mover [19]. Tracking control of the USM stage to meet the precision specifications is a challenging task. Since the USM stage is driven on the basis of friction, and hysteresis coexists in the piezoelectric material, the USM stage exhibits nonlinear dynamics, the most significant component being due to friction. In addition, during the medical procedure the tool set engages the TM with varying intensity, so the force loading on the USM stage can vary significantly; this can be considered an additional form of uncertainty alongside the unmodeled dynamics of the USM stage. Despite these challenges, the control specifications cannot be compromised. The USM stage has to track the prescribed fine motion trajectories at high speed in order for the medical procedures to be accomplished, which include the incision of a small but controlled slit and the insertion of a tiny tube through the slit, all while the patient is awake and could be traumatized if the procedure is prolonged. Poor tracking control performance can lead to an undersized or oversized slit on the TM, and/or undue lateral deformation of the membrane due to insufficient incision frequency. All of these may contribute directly or indirectly to a failure of the medical procedure. This section introduces the design of a high-performance composite controller for the USM stage to produce a level of tracking performance that translates to procedural success at the clinical front. The composite controller comprises three control components: a proportional-integral-derivative (PID) feedback controller, used as the main tracking controller, with the PID parameters derived optimally using a linear-quadratic regulator (LQR)-assisted tuning approach; a sign function compensator, which removes nonlinear dynamics due mainly to friction; and a sliding mode control action, which rejects remnant uncertainty from unmodeled dynamics and disturbances.

FIGURE 37.10 Installation and force analysis of the force sensor.

FIGURE 37.11 Measured output of the force sensor during the procedure on a mock membrane: (i) membrane touched; (ii) membrane incised; (iii) ventilation tube inserted; and (iv) tool set withdrawn.

37.4.1

System identification

37.4.1.1 System description of the ultrasonic motor stage
A single-axis USM stage is shown in Fig. 37.12. The movement of the USM stage is a consequence of the friction generated between the piezoceramic plate mounted in the stator and the friction bar attached to the mover. The piezoceramic plate is the motor's core piece, which can be excited to produce high-frequency eigenmode oscillations. For each oscillation cycle, the tip of the plate moves a microstep along the guideway. Thus, at the eigenmode frequency, the mover is driven forward or backward through the contact between the tip and the friction part. The minimum incremental displacement of the mover is 0.3 μm, measured by a built-in linear encoder with a resolution of 0.1 μm. The maximum push/pull force it can provide is 2 N and the velocity can reach 400 mm/s, as mentioned previously. Moreover, the USM stage is able to self-lock even when powered down. A motor drive converts the analog input signals (from −10 to 10 V) into the required high-frequency drive signals, which excite the required oscillations in the stator. More specifically, the analog input signal to the drive directly relates to the velocity of the motor.


FIGURE 37.12 Ultrasonic motor stage.

37.4.1.2 System modeling of the ultrasonic motor stage
The USM stage possesses nonlinear dynamics, so its model can be considered as the combination of two parts, a linear term and a nonlinear term, as shown in Eq. (37.3):

ẍ(t) = Flinear(t) + Fnonlinear(t)  (37.3)

where x(t) is the position of the mover, and Flinear(t) and Fnonlinear(t) represent the linear term and the nonlinear term, respectively. Significantly, the linear term is the dominant part of this system. For the linear term of the system, since the input to the drive affects the velocity output of the motor, it can be described by Eq. (37.4):

Flinear(t) = −a1x(t) − a2ẋ(t) + bu(t)  (37.4)

where a1 = k/m, a2 = c/m, and b are the dominant parameters of the USM stage and u(t) is the input signal to the drive. For the nonlinear term, the nonlinear dynamics include hysteresis and friction, which relate to the velocity of the USM stage. Friction is the major nonlinear component of this USM stage. Therefore the nonlinear term can be written as in Eq. (37.5):

Fnonlinear(t) = −f(ẋ)  (37.5)

where f(ẋ) is a nonlinear function, which can be decomposed into two parts: Coulomb friction fc(ẋ) and an uncertain component Δf, which may be unknown but bounded, that is, |Δf| ≤ ΔfM. Thus the nonlinear term can be rewritten as:

Fnonlinear(t) = −f(ẋ) = −fc(ẋ) − Δf  (37.6)

Combining Eqs. (37.4) and (37.6), the full model of the USM stage is given by Eq. (37.7):

ẍ(t) = −a1x(t) − a2ẋ(t) + bu(t) − fc(ẋ) − Δf  (37.7)

37.4.1.3 Parameter estimation Once the model structure of the USM stage is determined in Eq. (37.7), the model parameters need to be estimated. In the ensuing subsections, the nonlinear term of the model is estimated first before the linear term is identified after neutralizing the nonlinear part with a nonlinear compensator.

Nonlinear term
As previously mentioned, the nonlinear term is composed of a structured component and an uncertain component. The structured component can be considered as a kind of disturbance, which can be written as:

fc(ẋ) = b·ufc(ẋ)  (37.8)

where ufc(ẋ) is considered the equivalent frictional input relating to velocity.

Ventilation Tube Applicator: A Revolutionary Office-Based Solution Chapter | 37

685

From the open-loop experimental tests, it is found that the USM begins to move when the input is around −2.535 or 2.305 V. Therefore fc is not symmetric in the forward and backward directions. The Coulomb friction is written as Eq. (37.9):

ufc(ẋ) = σ sign(ẋ) − δ|sign(ẋ)| = { (σ − δ), ẋ > 0; 0, ẋ = 0; (−σ − δ), ẋ < 0 }  (37.9)

where σ is a coefficient, sign is the signum function, and δ is a constant. From the experimental results, the coefficient and the constant in Eq. (37.9) are estimated as σ = 2.420 and δ = 0.1150; the value of b is obtained during the identification of the linear term. Therefore the nonsymmetric friction term is described by the following equation:

ufc(ẋ) = 2.420 sign(ẋ) − 0.1150|sign(ẋ)|  (37.10)

Then the complete nonlinear term is obtained:

Fnonlinear(t) = −2.420b sign(ẋ) + 0.1150b|sign(ẋ)| − Δf  (37.11)

where Δf represents the uncertainty present in the system.

Linear term
For the identification of the linear term, a compensator equal to the inverse of ufc is applied to the open-loop system so that the nonlinear term is approximately eliminated. Since it is necessary to excite the system sufficiently during identification in order to identify the parameters of the second-order system, a multifrequency square wave is chosen as the input signal. Since the required motion sequences driven by the USM stage for myringotomy and tube insertion are very fast, the input signal for the identification is designed to consist of three different frequencies: 10, 20, and 30 Hz. The amplitude of the signal is 3.5 V. With the help of the System Identification Toolbox of MATLAB and using the ARX model structure on the data collected during the identification experiments, the parameters are estimated as a1 = 248.4, a2 = 202, and b = 4940. Hence, the linear term of the USM is given by the following model:

ẍ(t) = −248.4x(t) − 202ẋ(t) + 4940u(t)  (37.12)

Combining the linear term and the nonlinear term, the full model of the USM stage described in Eq. (37.3) can be rewritten as:

ẍ(t) = −248.4x(t) − 202ẋ(t) + 4940u(t) − 11954.8 sign(ẋ) + 568.1|sign(ẋ)| − Δf  (37.13)

More details on the system identification can be found in Ref. [20].
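Under the assumption that the sign-function compensator cancels the Coulomb term exactly, the identified model reduces to the linear system of Eq. (37.12), whose DC gain is b/a1 = 4940/248.4 ≈ 19.9 per volt of input. A simple forward-Euler simulation (our own sketch, not the authors' code) confirms this:

```python
# Forward-Euler simulation of Eq. (37.13) with Delta_f = 0 and the
# Coulomb term cancelled by the feedforward compensator of Eq. (37.10).
a1, a2, b = 248.4, 202.0, 4940.0
sigma, delta = 2.420, 0.1150

def simulate(u_cmd, t_end=5.0, dt=1e-4):
    x = v = 0.0
    for _ in range(int(t_end / dt)):
        s = (v > 0) - (v < 0)                       # sign(v)
        comp = sigma * s - delta * abs(s)           # u_fc, Eq. (37.10)
        acc = (-a1 * x - a2 * v + b * (u_cmd + comp)
               - 11954.8 * s + 568.1 * abs(s))      # plant, Eq. (37.13)
        v += acc * dt
        x += v * dt
    return x

x_final = simulate(0.1)   # should settle near b * 0.1 / a1 (position units)
```

Because b·(σ sign(ẋ) − δ|sign(ẋ)|) equals the 11954.8/568.1 friction terms exactly, the compensated response is that of the linear second-order model alone.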

37.4.2

Control scheme

In this application, the motion of the USM stage must be highly accurate with a fast response. For the linear term, a PID controller is used because it has a simple structure and is easily understood and implemented. For the nonlinear term, the structured component is compensated by a sign function while the uncertain nonlinear component is rejected by a sliding mode controller. The block diagram of the control scheme is shown in Fig. 37.13. In this section, the LQR-assisted PID controller for the linear term is designed first, followed by a discussion of the nonlinear compensation method.

FIGURE 37.13 Control scheme for USM. USM, Ultrasonic motor.

37.4.2.1 LQR-assisted PID controller
The main controller for the linear term is the PID controller. Although it has a simple structure, tuning the parameters for optimal performance is a challenge, and many modified or advanced PID tuning approaches have been proposed to address this problem [21,22]. Since the LQR optimal control design

method is easy to use and implement, yielding a quick response with as small an overshoot as possible, the PID controller is combined with an LQR optimal control strategy in this application. The following PID control law is used:

u(t) = Kp e1(t) + Ki ∫₀ᵗ e1(τ)dτ + Kd de1(t)/dt  (37.14)

where e1(t) = xd(t) − x(t) is the position error, xd(t) is the desired position, and Kp, Ki, and Kd are the controller parameters. To apply the optimal control directly, the system model needs to be recast as an error model. The integral error is given by e0(t) = ∫₀ᵗ e1(τ)dτ and the derivative error by e2(t) = ė1(t). Choosing the errors as the states, that is, E(t) = [e0(t), e1(t), e2(t)]ᵀ, gives

Ė(t) = AE(t) − Bu(t) + B[f(ẋ) + ẍd(t) + a2ẋd(t) + a1xd(t)]/b  (37.15)

with

A = [[0, 1, 0], [0, 0, 1], [0, −a1, −a2]],  B = [0, 0, b]ᵀ.

From this model, the dominant linear part is:

Ė(t) = AE(t) − Bu(t)  (37.16)

From Eq. (37.16) it can be observed that the state vector E collects the PID error terms to be determined for the application; the PID controller is thus converted to an equivalent state feedback controller. Moreover, this model is controllable as long as b is nonzero, since the determinant of the controllability matrix is det(Wc) = b³. The PID parameters are obtained using the LQR technique. Generally, optimal LQR control is based on the following index:

J = ∫₀^∞ [Eᵀ(τ)QE(τ) + r u(τ)u(τ)]dτ  (37.17)

where Q > 0 is the weighting matrix, normally chosen as a diagonal matrix, that is, Q = diag{q1, q2, q3}, and r is the weighting factor. Considering the PID control structure and Eq. (37.16), the state feedback control takes the form:

ul(t) = KE(t)  (37.18)

where K is the feedback gain, K = r⁻¹BᵀP = [k1, k2, k3], and

P = [[p11, p12, p13], [p21, p22, p23], [p31, p32, p33]] > 0

is the solution of the Riccati equation:

AᵀP + PA − r⁻¹PBBᵀP + Q = 0  (37.19)


Thus the feedback controller is:

ul(t) = k1e0(t) + k2e1(t) + k3e2(t) = r⁻¹b p31 e0(t) + r⁻¹b p32 e1(t) + r⁻¹b p33 e2(t)  (37.20)

The feedback gain K thus contains the PID parameters, that is, the proportional gain Kp = k2, the integral gain Ki = k1, and the derivative gain Kd = k3.
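The LQR-to-PID mapping can be sketched as follows. The Riccati equation (37.19) is solved here via the standard Hamiltonian-matrix method; the weights Q and r are placeholders, since the chapter does not report its tuned values:

```python
import numpy as np

a1, a2, b = 248.4, 202.0, 4940.0       # identified plant parameters
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, -a1, -a2]])
B = np.array([[0.0], [0.0], [b]])
Q = np.diag([1000.0, 1000.0, 1.0])     # hypothetical weighting matrix
r = 1.0                                # hypothetical weighting factor

# Solve A'P + PA - r^-1 P B B' P + Q = 0 (Eq. 37.19): stack the stable
# eigenvectors of the Hamiltonian matrix as [X1; X2]; then P = X2 X1^-1.
H = np.block([[A, -B @ B.T / r], [-Q, -A.T]])
eigvals, eigvecs = np.linalg.eig(H)
stable = eigvecs[:, eigvals.real < 0]
P = np.real(stable[3:] @ np.linalg.inv(stable[:3]))

K = (B.T @ P / r).ravel()              # K = r^-1 B'P = [k1, k2, k3]
Ki, Kp, Kd = K                         # gain mapping from Eq. (37.20)
```

With Q > 0 and (A, B) controllable (det Wc = b³ ≠ 0), the closed-loop matrix A − BK is guaranteed Hurwitz, which is what makes the LQR-assisted tuning attractive here.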

37.4.2.2 Nonlinear compensation
The control law in Eq. (37.20) does not account for the effects of the nonlinear term. Substituting Eqs. (37.6) and (37.9) into Eq. (37.15), the system becomes:

Ė(t) = AE(t) − Bu(t) + B[σb sign(ẋ) − δb|sign(ẋ)| + Δf + b ud]/b  (37.21)

where b ud(t) = ẍd(t) + a2ẋd(t) + a1xd(t), which can be compensated by setting the input to include ud, and the nonlinear part is:

B[σb sign(ẋ) − δb|sign(ẋ)| + Δf(ẋ)]/b  (37.22)

Moreover, Eq. (37.22) can be decomposed into two parts: (1) the structured component shown in Eq. (37.23):

B[σb sign(ẋ) − δb|sign(ẋ)|]/b  (37.23)

and (2) the uncertain component shown in Eq. (37.24):

B[Δf/b]  (37.24)

The structured component is easily eliminated with the sign function ufc(ẋ). For the uncertain component, the sliding mode control law of Eq. (37.25) is proposed to reject it:

us(t) = k̂s sign(EᵀPB)  (37.25)

where k̂s adaptively estimates the amplitude of the uncertain term in the system and is given by:

k̂̇s = Proj(ρ1|EᵀPB|, k̂s)  (37.26)

with

p(k̂s) = (k̂sᵀk̂s − ksM²)/(ε² + 2εksM)  and  pk̂s(k̂s) = dp(k̂s)/dk̂s

where ρ1 is the adaptive gain and Proj(·) is the smooth projection algorithm, given by:

k̂̇s = ρ1|EᵀPB|, if p(k̂s) ≤ 0;
k̂̇s = ρ1|EᵀPB|, if p(k̂s) ≥ 0 and pk̂s(k̂s)ρ1|EᵀPB| ≤ 0;
k̂̇s = [I − p(k̂s)pk̂s(k̂s)pk̂s(k̂s)ᵀ/‖pk̂s(k̂s)‖²]ρ1|EᵀPB|, otherwise  (37.27)

where |k̂s| ≤ ksM, ksM is a positive constant, and ε is an arbitrary positive real number. This projection has the property:

k̃s Proj(ρ1|EᵀPB|, k̂s) ≥ k̃s ρ1|EᵀPB|  (37.28)

Substituting this controller into the system gives

Ė = ĀE − Bk̂s sign(EᵀPB) + BΔf/b  (37.29)

where Ā = A − BK = A − r⁻¹BBᵀP.

Theorem 1: The system shown in Eq. (37.15) with the controller given by:

u(t) = r⁻¹BᵀPE(t) + k̂s sign(E(t)ᵀPB) + ud(t) + ufc(t)  (37.30)

is stable and the errors converge to zero, that is,

lim(t→∞) ‖E‖ = 0  (37.31)

Proof (of Theorem 1): Consider the Lyapunov function $V = E^T P E + \rho_1^{-1} \tilde{k}_s^2$, where $\tilde{k}_s = (\Delta f_M / b) - \hat{k}_s$. Its time derivative is given by:

$$\begin{aligned} \dot{V} &= E^T(\bar{A}^T P + P \bar{A})E - 2 E^T P B \hat{k}_s \operatorname{sign}(E^T P B) + 2 E^T P B \frac{\Delta f}{b} - 2\rho_1^{-1} \tilde{k}_s \dot{\hat{k}}_s \\ &= E^T(A^T P + P A - P B r^{-1} B^T P)E - E^T P B r^{-1} B^T P E - 2|E^T P B|\, \hat{k}_s + 2 E^T P B \frac{\Delta f}{b} - 2\rho_1^{-1} \tilde{k}_s \dot{\hat{k}}_s \\ &\le E^T(A^T P + P A - P B r^{-1} B^T P)E - E^T P B r^{-1} B^T P E - 2|E^T P B|\, \hat{k}_s + 2|E^T P B| \frac{\Delta f_M}{b} - 2\rho_1^{-1} \tilde{k}_s \dot{\hat{k}}_s \end{aligned} \tag{37.32}$$

Since Eq. (37.19) holds, it follows that

$$\dot{V} \le - E^T(Q + P B r^{-1} B^T P)E + 2\tilde{k}_s |E^T P B| - 2\rho_1^{-1} \tilde{k}_s \dot{\hat{k}}_s \le -\lambda_{\min}(Q + P B r^{-1} B^T P)\|E\|^2 + 2\tilde{k}_s |E^T P B| - 2\rho_1^{-1} \tilde{k}_s \dot{\hat{k}}_s. \tag{37.33}$$

Applying the adaptive law to the inequality above gives

$$\dot{V} \le -\lambda_{\min}(Q + P B r^{-1} B^T P)\|E\|^2. \tag{37.34}$$

It is obvious that $\dot{V}$ is nonpositive. This implies that the closed-loop system is stable and that $E$ and $\tilde{k}_s$ are bounded. It remains to show the convergence of the tracking error $\|E\|$, which requires first proving that $E$ and $\dot{E}$ are bounded. From Eq. (37.29), $\dot{E}$ is also bounded, as $E$, $\hat{k}_s$, and $\Delta f$ are bounded. Furthermore, the following inequality holds from Eq. (37.34):

$$\lim_{t \to \infty} \int_0^t \lambda_{\min}(Q + P B r^{-1} B^T P)\|E\|^2 \, d\tau \le V(0) - V(\infty) \le V(0), \tag{37.35}$$

where the positive definiteness of $V$ has been used. This implies that $E$ belongs to $L_2$. By virtue of Barbalat's lemma, it can be concluded that

$$\lim_{t \to \infty} \|E\| = 0. \tag{37.36}$$

From the above analysis, the sliding mode controller can reject the unknown term $\Delta f$.
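To make the mechanism concrete, here is a toy numerical sketch, not the chapter's actual plant: the matrices A, B, K, P, the adaptive gain, and the disturbance are illustrative choices, and the projection is omitted for brevity. It shows the switching term with an adaptively grown gain driving the error to zero despite a bounded unknown disturbance:

```python
import numpy as np

# Toy second-order error dynamics: E_dot = A E + B (u + d), illustrative values only
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[2.0, 3.0]])                   # state feedback, closed-loop poles -1, -2
P = np.array([[1.25, 0.25], [0.25, 0.25]])   # solves Abar^T P + P Abar = -I

rho1, dt = 10.0, 1e-3
E = np.array([[1.0], [0.0]])
k_hat = 0.0
for i in range(10000):
    d = 0.5 * np.sin(2 * np.pi * i * dt)      # bounded unknown disturbance
    s = (E.T @ P @ B).item()
    u = -(K @ E).item() - k_hat * np.sign(s)  # feedback plus adaptive switching term
    E = E + dt * (A @ E + B * (u + d))        # forward-Euler integration
    k_hat += dt * rho1 * abs(s)               # adaptation law (projection omitted)

assert np.linalg.norm(E) < 0.05               # disturbance rejected, error near zero
```

Once the adaptive gain exceeds the disturbance bound, the switching term dominates the unknown input and the error decays, mirroring the role of $\hat{k}_s$ in the proof above.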

Ventilation Tube Applicator: A Revolutionary Office-Based Solution Chapter | 37

689

In summary, combining Eqs. (37.9) and (37.25), the nonlinear compensation is given by:

$$u_{nl}(t) = u_{fc}(t) + u_s(t) = [\sigma \operatorname{sign}(\dot{x}) - \delta |\operatorname{sign}(\dot{x})|] + \hat{k}_s \operatorname{sign}(E(t)^T P B) \tag{37.37}$$

37.4.2.3 Overall control system

Combining Eqs. (37.18), (37.9), and (37.25), for the system shown in Eq. (37.15), the proposed control law is:

$$u(t) = r^{-1} B^T P E(t) + u_d + [\sigma \operatorname{sign}(\dot{x}) - \delta |\operatorname{sign}(\dot{x})|] + \hat{k}_s \operatorname{sign}(E(t)^T P B) \tag{37.38}$$

37.5 Experimental results

In this section, the experimental system setup is described, and then experiments applying the device to mock membranes are presented and discussed in detail.

37.5.1 Experimental setup

The experimental system setup is shown in Fig. 37.14. It consists of the proposed surgical device, a sensor amplifier, a motor drive, power supplies, and a computer with a dSPACE DS1104 control card (some of these are not shown in the figure). The motion controller and the force controller are implemented using MATLAB and dSPACE with a sampling time of 1 ms. A mock-up system with a mock membrane holder is used in the experiments (see Fig. 37.14). Two different kinds of mock membranes are used in the following experiments so as to test the robustness of the proposed control scheme. Both mock membranes are made of polyethylene films: one is the normal mock membrane, with characteristics closer to the TM, and the other is a stronger membrane. The mock membrane is secured to the membrane holder using a rubber elastic band. The procedure is considered successful, that is, a successful insertion, when the whole inner flange of the VT is completely inserted and the whole outer flange of the tube is outside the membrane (at the first attempt). A total of 100 tube insertion tests are carried out, with 50 membranes of each kind used in the tests. Tiny Titan VTs are used in the experiment.


FIGURE 37.14 Experimental system setup (rigid). VTA, Ventilation tube applicator.


TABLE 37.1 Experimental results on two types of mock membranes.

                        Normal membrane    Strong membrane
No. of failed cases     1                  7
Success rate (%)        98                 86

FIGURE 37.15 Successfully inserted Tiny Titan tube on mock membrane.

FIGURE 37.16 First-generation VTA with industrial design. VTA, Ventilation tube applicator.

37.5.2 Results

The experiments show that the device can successfully create the required incision on both types of membranes within a short time. For the normal membrane, only one out of 50 tubes failed to be inserted successfully, while seven out of 50 failed on the strong membranes (see Table 37.1). The overall success rate is therefore 92%, with success rates of 98% and 86% for the normal and strong membranes, respectively. Fig. 37.15 shows a successfully inserted tube on a normal mock membrane. From the experiment, the total incision and insertion time averages around 0.488 seconds.
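The reported rates follow directly from the counts; as a side note (not from the chapter), a Wilson score interval sketch shows how the uncertainty of such success rates could be quantified with only 50 trials per membrane type:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial success proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

assert 92 / 100 == 0.92                    # overall rate from the 100 trials
lo, hi = wilson_ci(49, 50)                 # normal membrane: 49/50 successes
assert 0.89 < lo < 0.91 and 0.99 < hi < 1.0
```

Even at a 98% observed rate, the interval's lower bound sits near 90%, which is worth keeping in mind when comparing the two membrane types.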

37.6 Conclusion

The primary objective of this study was to develop an all-in-one office-based precision surgical device for the treatment of a common ear disease: OME. The mechatronic system of the device, including the mechanical system, sensing system, and precision motion control system, has been developed to address the problems associated with conventional surgery and with previously reported devices for similar purposes. Based on the experimental results, the device was able to carry out myringotomy with VT insertion automatically in a single procedure, with a high success rate of over 90% and a short surgical time of less than 1 second. Significantly, the surgical time is much shorter than that of conventional surgery, which normally takes around 15 minutes. This is attributed to the sensing and motion systems, which provide precise actions for each step of the procedure. Using the device, the procedure is simplified and the workload is reduced. An important contribution of this study is that it may provide a better solution for carrying out surgical treatment of patients with OME automatically in a very short time, thus avoiding the need for GA as well as costly expertise and equipment. Fig. 37.16 shows the first generation of the VT applicator with an industrial design. Other related research works can be found in Refs. [23-26]. In the future, the team will continue to improve such novel ear surgical devices in terms of better performance, a higher success rate, and lower cost.

Acknowledgment This work was supported by the Science and Engineering Research Council within the Agency for Science, Technology and Research, Singapore, through the Biomedical Engineering Programme under Grant 132 148 0014.

References


[1] Paradise JL, Rockette HE, Colborn DK, Bernard BS, Smith CG, Kurs-Lasky M, et al. Otitis media in 2253 Pittsburgh-area infants: prevalence and risk factors during the first two years of life. Pediatrics 1997;99(3):318-33.
[2] Alho O-P, Koivu M, Sorri M, Rantakallio P. The occurrence of acute otitis media in infants. A life-table analysis. Int J Pediatr Otorhinolaryngol 1991;21(1):7-14.
[3] Vaile L, Williamson T, Waddell A, Taylor G. Interventions for ear discharge associated with grommets (ventilation tubes). Cochrane Database Syst Rev (2):CD001933.
[4] DiMaggio C, Sun LS, Kakavouli A, Byrne MW, Li G. A retrospective cohort study of the association of anesthesia and hernia repair surgery with behavioral and developmental disorders in young children. J Neurosurg Anesthesiol 2009;21(4):286.
[5] Brodsky L, Brookhauser P, Chait D, Reilly J, Deutsch E, Cook S, et al. Office-based insertion of pressure equalization tubes: the role of laser-assisted tympanic membrane fenestration. Laryngoscope 1999;109(12):2009-14.
[6] Shahoian EJ. System and method for the simultaneous automated bilateral delivery of pressure equalization tubes, US patent 8,052,693. November 8, 2011.
[7] Liu G, Girotra R, Morriss JH, Vrany JD, Ha HV, Knodel B, et al. Tympanic membrane pressure equalization tube delivery system, US patent 8,864,774. October 21, 2014.
[8] Gates GA. Cost-effectiveness considerations in otitis media treatment. Otolaryngol Head Neck Surg 1996;114(4):525-30.
[9] Tan KK, Putra A. Piezo stack actuation control system for sperm injection. In: Optomechatronic actuators and manipulation, vol. 6048. International Society for Optics and Photonics; 2005. p. 60480O.
[10] Tan KK, Huang S, Tang K. Robust computer-controlled system for intracytoplasmic sperm injection and subsequent cell electro-activation. Int J Med Robot Comput Assist Surg 2009;5(1):85-98.
[11] Tan KK, Ng SC, Xie Y. Optimal intra-cytoplasmic sperm injection with a piezo micromanipulator. In: Proceedings of the fourth world congress on intelligent control and automation, vol. 2. IEEE; 2002. p. 1120-3.
[12] Aernouts J, Soons JA, Dirckx JJ. Quantification of tympanic membrane elasticity parameters from in situ point indentation measurements: validation and preliminary study. Hear Res 2010;263(1-2):177-82.
[13] Kuypers LC, Decraemer WF, Dirckx JJ. Thickness distribution of fresh and preserved human eardrums measured with confocal microscopy. Otol Neurotol 2006;27(2):256-64.
[14] Liu L, Tan KK, Chen S-L, Huang S, Lee TH. SVD-based Preisach hysteresis identification and composite control of piezo actuators. ISA Trans 2012;51(3):430-8.
[15] Thacker JG, Rodeheaver GT, Towler MA, Edlich RF. Surgical needle sharpness. Am J Surg 1989;157(3):334-9.
[16] Heavner JE, Racz GB, Jenigiri B, Lehman T, Day MR. Sharp versus blunt needle: a comparative study of penetration of internal structures and bleeding in dogs. Pain Pract 2003;3(3):226-31.
[17] Cook RD, Malkus DS, Plesha ME, Witt RJ. Concepts and applications of finite element analysis, vol. 4. New York: Wiley; 1974.
[18] Hutton DV. Fundamentals of finite element analysis. McGraw-Hill; 2017.
[19] Storck H, Wallaschek J. The effect of tangential elasticity of the contact layer between stator and rotor in travelling wave ultrasonic motors. Int J Nonlinear Mech 2003;38(2):143-59.
[20] Tan KK, Liang W, Huang S, Pham LP, Chen S, Gan CW, et al. Precision control of piezoelectric ultrasonic motor for myringotomy with tube insertion. J Dyn Syst Meas Control 2015;137(6):064504.
[21] Daley S, Liu G. Optimal PID tuning using direct search algorithms. Comput Control Eng J 1999;10(2):5-16.


[22] Han J, Wang P, Yang X. Tuning of PID controller based on fruit fly optimization algorithm. In: 2012 International conference on mechatronics and automation (ICMA). IEEE; 2012. p. 409-13.
[23] Liang W, Gao W, Tan KK. Stabilization system on an office-based ear surgical device by force and vision feedback. Mechatronics 2017;42:1-10.
[24] Liang W, Huang S, Chen S, Tan KK. Force estimation and failure detection based on disturbance observer for an ear surgical device. ISA Trans 2017;66:476-84.
[25] Liang W, Tan KK. Force feedback control assisted tympanostomy tube insertion. IEEE Trans Control Syst Technol 2017;25(3):1007-18.
[26] Ng C, Liang W, Gan CW, Lim HY, Tan KK. Novel design and validation of a micro instrument in an ear grommet insertion device. J Med Device 2018;12(3):031004.

38

ACTORS: Adaptive and Compliant Transoral Robotic Surgery With Flexible Manipulators and Intelligent Guidance

Hongliang Ren1, Changsheng Li1, Liang Qiu1 and Chwee Ming Lim2
1 National University of Singapore, Singapore, Singapore
2 National University Hospital, Singapore, Singapore

ABSTRACT There exist limitations to transoral robotic surgery in curvilinear navigation and tremor suppression. To address these challenges, this chapter introduces an adaptive and compliant transoral robotic surgery (ACTORS) system with flexible manipulators and intelligent guidance. The patient-side manipulators are based on a flexible parallel mechanism with three sets of chains composed of superelastic nickel-titanium (Ni-Ti) rods and universal joints. Compared with conventional parallel mechanisms, this design optimizes the structure for adaptiveness and compliance by introducing superelastic structures. Because the surgical field adjoins critical head, neck, and brain anatomical structures, transoral surgery places a crucial demand on navigation accuracy. However, the transoral environment poses a prominent challenge because of its textureless surface, irregular shape, and nonrigid characteristics. Furthermore, it is notably difficult for surgeons to determine the location of the endoscope and instruments on account of the narrow field of view. For precise navigation during transoral surgical procedures, endoscopic simultaneous localization and mapping can provide real-time localization of the endoscope and a three-dimensional map of the surgical site, which can further be registered to preoperative imaging data to enhance visual understanding. This chapter elaborates on the ACTORS robot and its intelligent guidance in detail, with a performance test and cadaveric trials. Handbook of Robotic and Image-Guided Surgery. DOI: https://doi.org/10.1016/B978-0-12-814245-5.00038-4 © 2020 Elsevier Inc. All rights reserved.


38.1 Introduction

Transoral endoscopy is the conventional surgical intervention for excision and reconstruction in head and neck surgery [1], such as extirpation of oropharyngeal and laryngeal cancers and reduction of obstructive tongue base tissue in patients with obstructive sleep apnea. The surgeon's ability is restricted by the long handles of the surgical instruments, poor sensory feedback, and the magnification of tremor [2,3]. Transoral robotic surgery (TORS) is increasingly utilized to improve functional and aesthetic outcomes with enhanced maneuverability and wristed instruments [3], which allow operations to be performed in the tight confines of the oropharynx and larynx [4].

The most widely used robotic system for TORS is the da Vinci Surgical System (Intuitive Surgical Inc.) [5]. This system is composed of a master console and slave robotic arms with a cable-driven, multiple-joint mechanism. The surgeon on the master side steers the surgical instruments by operating the hand controls under visual guidance [6]. This system offers several advantages such as tremor filtration and wristed instrumentation [7]. However, with its bulky straight arms, large footprint, and costly dedicated setup, it provides no adaptive or haptic guidance, which limits practical applications [6,8]. Additionally, the Asian facial skeleton has a typically receding chin profile, which poses a unique challenge for robotic surgeons using a rigid robotic system.

The FLEX System (Medrobotics Inc.) is designed to provide more flexibility and maneuverability [9,10]. It is composed of two channels for instruments and an endoscope with multiple discrete linkages that can rotate independently and achieve a semirigid or flexible state. The instruments are manually operated devices. Compared to the da Vinci Surgical System, this system is more flexible and can provide haptic feedback. However, skilled surgeons are required, as the instruments are not robotic.
Surgical navigation provides real-time localization of surgical tools to help surgeons perform operations, especially in today's minimally invasive surgeries; it can reduce surgical risk and greatly improve the operative success rate. Electromagnetic and optical navigation are the two main methods in both research and clinical practice. An electromagnetic navigation system can work without line of sight, but it is susceptible to metallic objects and electromagnetic fields around the instruments. An optical navigation system provides better precision, can track multiple tools simultaneously, and already serves as an industry standard for neurosurgery, although the occlusion problem is sometimes unavoidable. Meanwhile, commercial navigation systems are usually costly and have cumbersome hardware, which has been a major barrier to their adoption. To perform real-time and accurate surgical navigation, information about the surgical site must be obtained. An endoscope, as a significant and widely used optical tool, provides surgeons with intuitive and plentiful information. However, visualization is limited by the narrow field of view and the two-dimensional nature of endoscopic video, and lesions beneath the tissue surface cannot be displayed. To enhance navigation in the narrow surgical space, many techniques such as structure from motion [11], simultaneous localization and mapping (SLAM), and augmented reality [12] have been developed to show the real-time location of surgical tools relative to surrounding anatomical structures, using endoscopic video sequences combined with a computed tomography (CT) or magnetic resonance imaging model; these have provided great assistance to surgeons.
Therefore, the technology convergence of compliant and flexible robotics, sensing, and mechatronics brings the possibility of more adaptive, compact, and intelligent assistance to revolutionize transoral robotic systems. We hypothesize that the development of a compact, intelligent, plug-and-play adaptive and compliant transoral robotic surgery (ACTORS) system will enhance the surgeon's capability through immersive guidance from visual servoing and endoscopic navigation.

38.2 Adaptive and compliant transoral robotic surgery

38.2.1 Clinical requirements

For TORS, the following requirements should be met:

1. Load capacity: Sufficient load capacity is required for manipulating organs and tissues effectively. The payload of surgical robots for natural orifice transluminal endoscopic surgery ranges from 0.5 to 3 N [13], which serves as a reference for TORS.

2. Safety: Safety is a key issue for surgical robots operating in a narrow space involving critical structures. Compliant mechanisms can increase the level of safety by absorbing impact energy in the short term and consequently reducing the resulting forces in the robot structure [14].


3. Dexterity and operability: Four to six degrees of freedom (DoFs) are required for most surgical operations [4]. The robotic system usually has two or three manipulators to perform basic surgical operations such as cutting, grasping, and pulling.

4. Real-time performance: To guarantee the accuracy of surgical navigation, the latency between the surgeon's instructions and the response of the navigation system should be reduced to meet the surgical requirements.

In addition, the robotic system should be compact enough to insert into the oral cavity, and a comfortable human-machine interaction should be provided for the surgeon.

38.2.2 Overview of the robotic system

As shown in Fig. 38.1, the robotic system consists of a master console including a monitor and two haptic devices, a computer, a controller, and two manipulators with an endoscope. A master-slave configuration is achieved with the surgeon in the loop. The manipulators on the slave side are attached to a fixture on the side of the operating bed. The monitor provides the surgeon on the master side with a real-time image from the endoscope attached to the manipulators. Under this guidance, the surgeon operates the haptic devices to steer the slave manipulators via the controller and computer. The computer running the control programs sends control commands to the controller on receiving signals from the haptic devices, and the controller then provides driving signals to the motors that drive the manipulators. The sensor signals of the motors can be detected and transmitted to the computer via the controller. The main features of the ACTORS robotic system are summarized in Table 38.1.
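The master-slave position mapping can be sketched as follows; the motion-scaling factor and the clutching behavior are illustrative assumptions for a generic teleoperation loop, not specifics of the ACTORS controller:

```python
import numpy as np

class TeleopMapper:
    """Clutched master-to-slave position mapping with motion scaling."""
    def __init__(self, scale=0.3):       # scaling factor is an illustrative choice
        self.scale = scale
        self.last_master = None
        self.slave = np.zeros(3)

    def update(self, master_pos, clutch_pressed):
        master_pos = np.asarray(master_pos, dtype=float)
        if clutch_pressed and self.last_master is not None:
            # only clutched motion is forwarded, scaled down for precision
            self.slave += self.scale * (master_pos - self.last_master)
        self.last_master = master_pos
        return self.slave.copy()

m = TeleopMapper(scale=0.3)
m.update([0, 0, 0], True)
p = m.update([10, 0, 0], True)       # 10 mm of master motion -> 3 mm at the slave
m.update([0, 0, 0], False)           # clutch released: master repositions freely
p = m.update([5, 0, 0], True)
assert np.allclose(p, [4.5, 0, 0])
```

Clutching lets the operator re-center the haptic device inside its own workspace while the slave holds position, a common pattern in master-slave surgical teleoperation.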

38.2.3 Flexible parallel manipulators

The flexible parallel manipulators on the slave side are shown in Fig. 38.2. Each manipulator is composed of a gripper, a superelastic rod-based flexible parallel mechanism, a motion transmission, and flexible shafts. There are five DoFs, including two DoFs for grasping and rotation of the gripper and three DoFs for translation and bending of the flexible parallel mechanism (one translational DoF and two bending DoFs). The translation distance and the bending angle of the parallel mechanism are 16 mm and 60 degrees, respectively. These DoFs provide the manipulators with enough flexibility to perform transoral surgery. The size of the manipulator is 153 mm × 32 mm × 22 mm, which means that it can be inserted through a natural orifice larger than 32 mm × 22 mm. The diameter of the manipulator's end-effector is Φ11 mm. The distance between the tips of the grippers is 20 mm in the initial state.

FIGURE 38.1 Robotic system architecture under the master-slave configuration.


TABLE 38.1 Main features of the ACTORS robotic system.

Features                         Values            Units
Size                             153 × 32 × 22     mm
Diameter of the end-effector     11                mm
DoF                              10                -
Stroke                           16                mm
Bending angle                    60                degree
Operating mode                   Master-slave      -

DoFs, Degrees of freedom; ACTORS, adaptive and compliant transoral robotic surgery.

FIGURE 38.2 Flexible parallel manipulators with size.

38.2.3.1 Gripper

The gripper is designed for general grasping purposes in surgical operation, as it is one of the most commonly used surgical instruments. It can be replaced by other surgical instruments such as an electrotome, electrocoagulator, needle holder, forceps, or scissors according to the requirements of the surgery. As shown in Fig. 38.3B, it is driven by parts composed of sliders, a linkage, and a screw shaft (lead: 0.5 mm, diameter: 3 mm). When the screw shaft rotates, the rotational motion is turned into translational motion and the slider moves along its axis. As the slider is connected with the linkage, the opening and closing motion of the gripper is achieved via this linkage. The maximum opening angle is 60 degrees, which is large enough for grasping objects in TORS. High efficiency is achieved by this transmission mode with a simple structure. The non-back-drivable characteristic of the screw shaft allows the gripper to hold a stable state without extra driving force for better grasping, and the high reduction ratio of the screw shaft provides sufficient accuracy for opening and closing. As shown in Fig. 38.3D, the rotational motion of the gripper is driven by a pair of gears and a rod that can rotate along its axis. When the rod rotates, the gripper is rotated to achieve the rotational DoF via the gear pair, allowing grasping at different angles.
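The lead-screw arithmetic above is simple to sketch; only the 0.5 mm lead and the 60-degree maximum opening come from the text, while the linkage gain `DEG_PER_MM` is a made-up placeholder for illustration:

```python
# Lead-screw transmission: slider travel per screw revolution equals the lead.
LEAD_MM = 0.5       # mm per revolution, from the text

def slider_travel(revolutions):
    return LEAD_MM * revolutions

DEG_PER_MM = 20.0   # hypothetical linkage gain, NOT from the chapter

def gripper_angle(revolutions):
    # opening angle grows with slider travel, saturating at the 60-degree maximum
    return min(60.0, DEG_PER_MM * slider_travel(revolutions))

assert slider_travel(2) == 1.0     # 2 revolutions -> 1 mm of slider travel
assert gripper_angle(10) == 60.0   # opening saturates at the stated maximum
```

The small lead is what gives the high reduction ratio and the fine opening/closing resolution described above.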

38.2.3.2 Parallel mechanism

A parallel mechanism composed of multiple chains and joints possesses advantages such as high stiffness, load capability, and accuracy [15-17]. These advantages are particularly required in medical applications. However, the mechanical structure of robots for TORS should be compact due to the space limitations of the human oral cavity, which increases the difficulty of designing and fabricating the parallel mechanism, especially motion joints such as the spherical joints. The durability of the compact structure should also be taken into consideration for medical applications. With these considerations, we adopt superelastic materials as part of the chains of the parallel mechanism to improve its performance. The parallel mechanism is composed of three chains combined with a base disk and a moving disk. The chains can be driven to translate along their axis. Each chain includes a nickel-titanium (Ni-Ti) rod and a universal joint. A three-prismatic-universal (3-PU) mechanism is formed by these three chains. It possesses three DoFs, including one translational DoF (Fig. 38.3E) and two bending DoFs (Fig. 38.3C) along the axis of the base disk. It is an improvement on the three-prismatic-revolute-spherical (3-PRS) parallel mechanism, which is composed of three chains, each with a rigid rod and a spherical joint. In our design, the spherical joints, which are difficult to fabricate at small size, are replaced by universal joints. The Ni-Ti rod, which can effectively transmit pulling, pushing, and rotational forces, has the characteristic of superelasticity with excellent bending and recovery performance [18]. This characteristic allows the parallel mechanism to be more flexible and durable, while retaining the advantages of a conventional rigid parallel mechanism. The use of universal joints reduces the bending radius compared with parallel continuum robots composed of elastic links without universal joints [19], making it more suitable for operating in a confined space. As the kinematics of this parallel mechanism is related to the deformation of the coupled Ni-Ti rods, it is difficult to derive directly. As a result, we calculate the kinematics by simplifying this 3-PU mechanism to the 3-PRS mechanism, as they are similar structures. The detailed derivation and experimental verification can be found in our previous work [18].

FIGURE 38.3 Structure of the manipulator: (A) composition of the manipulator; (B) gripper; (C) two bending DoFs of the parallel mechanism; (D) rotational motion of the gripper; (E) translational motion of the parallel mechanism. DoFs, Degrees of freedom.

38.2.3.3 Motion transmission

Screws with flexible shafts are used to convert rotational motion into linear motion. Each manipulator has five sets of screws and flexible shafts for its five DoFs. The flexible shafts can transfer torque while providing a certain compliance. The performance of the flexible shafts has been verified in our previous work [18], with the results showing an acceptable motion-tracking effect within the rated velocity and payload. The screws improve the accuracy of the transmission with a high reduction ratio. The flexible shafts connected to the screws allow the motors to be placed far from the manipulators. As a result, the manipulators can be fixed on a surgical bed.
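As a rough geometric illustration only, the following sketch computes straight-line chain lengths for a tilted and translated moving disk with anchors at 120-degree spacing. This is a generic leg-length computation with made-up disk radii, not the authors' 3-PU/3-PRS derivation:

```python
import numpy as np

def rot_xy(alpha, beta):
    """Rotation by alpha about x, then beta about y."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    return Ry @ Rx

def chain_lengths(z, alpha, beta, r_base=10.0, r_plat=5.0):
    """Straight-line chain lengths for anchors at 120-degree spacing (radii invented)."""
    ang = np.deg2rad([0.0, 120.0, 240.0])
    base = np.stack([r_base * np.cos(ang), r_base * np.sin(ang), np.zeros(3)], axis=1)
    plat = np.stack([r_plat * np.cos(ang), r_plat * np.sin(ang), np.zeros(3)], axis=1)
    tips = (rot_xy(alpha, beta) @ plat.T).T + np.array([0.0, 0.0, z])
    return np.linalg.norm(tips - base, axis=1)

l = chain_lengths(16.0, 0.0, 0.0)          # pure translation: all chains equal
assert np.allclose(l, l[0])
l_tilt = chain_lengths(16.0, np.deg2rad(30.0), 0.0)
assert l_tilt.std() > 0.1                  # tilting the disk makes the chains unequal
```

The qualitative behavior matches the mechanism's DoFs: equal prismatic strokes produce pure translation, while differential strokes tilt the moving disk.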

38.2.4 Master console

On the master side, two haptic devices (Sensable Technologies Inc.) with six DoFs are used as the interactive devices for surgeons. A three-dimensional (3D)-printed handle attached to the end-effector of each haptic device is designed for performing the grasping and rotational motion of the gripper. The surgeon can hold the handle and steer the manipulators under visual guidance. The surgeon's actions are captured by the haptic devices and transferred to a computer running Matlab/Simulink programs. The simLab board (Zeltom LLC) is adopted to obtain the signals from the Matlab/Simulink programs and control the motion of the manipulators. The control law used to achieve closed-loop position control is a proportional-integral-derivative (PID) position control algorithm with a sampling rate of 100 Hz.
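A minimal discrete PID position loop at the stated 100 Hz sampling rate might look as follows; the gains and the integrator plant standing in for a motor axis are illustrative choices, not the ACTORS tuning:

```python
class PID:
    """Discrete PID (backward-Euler integral, finite-difference derivative)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, ref, meas):
        err = ref - meas
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

dt = 0.01                       # 100 Hz sampling, as stated in the chapter
pid = PID(5.0, 1.0, 0.1, dt)    # gains are illustrative placeholders
x = 0.0                         # toy integrator plant standing in for a motor axis
for _ in range(1000):
    x += pid.step(1.0, x) * dt  # drive the axis toward a 1.0 setpoint

assert abs(x - 1.0) < 0.02      # position settles at the setpoint
```

In practice the derivative term is usually filtered and the integral clamped against windup; those refinements are omitted here for brevity.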

38.2.5 Intelligent guidance

Our endoscope navigation framework includes three parts: endoscope localization and mapping using ORB-SLAM [20] with parameter retuning; registration between the reconstructed 3D transoral structure and the rendered CT scan model; and surgeon-centered navigation, as shown in Fig. 38.4. The framework can be treated as a complete closed-loop feedback control system that provides accurate and convenient surgical navigation. Compared with other transoral navigation systems, our design is low cost, convenient, and able to meet the accuracy requirements of most medical applications. In addition, real-time performance can be achieved without using powerful and expensive computing machines. Furthermore, with registration to preoperative CT data, the real scale of the trajectory and of the map reconstructed from a monocular endoscope can be recovered.
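Recovering the real scale by registering the monocular SLAM map to CT amounts to estimating a similarity transform between corresponding points. A sketch using the Umeyama closed-form solution (one common choice for this step, not necessarily the authors' exact method) is:

```python
import numpy as np

def similarity_transform(src, dst):
    """Umeyama closed-form least-squares fit: dst ~ s * R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    var_s = (xs ** 2).sum() / len(src)
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                      # avoid fitting a reflection
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / var_s    # recovered metric scale
    t = mu_d - s * R @ mu_s
    return s, R, t

# synthetic check: recover a known scale/rotation/translation
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))              # stand-in for monocular SLAM map points
th = 0.5
Rz = np.array([[np.cos(th), -np.sin(th), 0], [np.sin(th), np.cos(th), 0], [0, 0, 1]])
dst = 2.5 * src @ Rz.T + np.array([1.0, -2.0, 3.0])   # stand-in for CT-frame points
s, R, t = similarity_transform(src, dst)
assert abs(s - 2.5) < 1e-9 and np.allclose(R, Rz) and np.allclose(t, [1.0, -2.0, 3.0])
```

The recovered scale factor is exactly what resolves the scale ambiguity inherent to monocular SLAM once correspondences to the CT model are available.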

38.3 Experimental evaluation

38.3.1 Performance of the manipulators

As shown in Fig. 38.5, multiple tests were conducted to demonstrate the performance of the manipulators. These tests cover the basic operations of the manipulator that may be used in robotic surgery. First, we tested the payload of the manipulator: a weight of 2 N was hung on the gripper, and the manipulator was then driven through bending and translating motions. Fig. 38.5A shows that the manipulator can move with a payload of 2 N, which fulfills most payload requirements in surgical operations. The second test shows the flexibility of the manipulator around a half human head model placed beside the manipulators. The operator on the master side steered the manipulators within their workspace. Fig. 38.5B shows that the manipulator can reach the area of the pharynx/larynx and move freely in different directions. As shown in Fig. 38.5C, the third test assesses the grasping capability of the gripper. A plastic cylinder with a diameter of 10 mm was put on the desk. The gripper was controlled to pick up the cylinder and place it in

FIGURE 38.4 Schematic diagram of our endoscopy navigation system.


FIGURE 38.5 Performance of the manipulators: (A) payload capability test; (B) flexibility test; (C) pick-and-place test; (D) cooperative operation test.

another place. The last test is of cooperative operation. The gripper of one manipulator grasped a slender rod, passed it to the gripper of the other manipulator, and then released it; the slender rod was finally held by the gripper of the other manipulator.

38.3.2 Cadaveric trial with the manipulators

This test preliminarily verifies the feasibility of the robotic system. The setup of the manipulators is shown in Fig. 38.6. A cadaveric head was placed on the table, and a Crowe-Davis mouth gag was used to open the mouth. The grippers of the manipulators, one fitted with an electrocoagulator, were inserted into the target area. As shown in Fig. 38.7, after the initial setup of the robotic system, the operator steered the manipulators to grasp and cut tissue via the haptic devices under visual guidance from the endoscope. The manipulators worked steadily and accurately in the test.


FIGURE 38.6 Setup of the manipulators for cadaveric trial.


FIGURE 38.7 Performing tonsillectomy using the robotic system.

FIGURE 38.8 Performance at one point of the whole transoral navigation procedure: (A) an image of the phantom acquired from the endoscope; (B) intensity images with extracted feature points (green dots); (C) reconstructed original maps, semidense maps, and trajectories of the endoscope.

38.3.3 Endoscope navigation trial on phantom

After parameter retuning of the original ORB-SLAM, we performed transoral navigation to realize real-time localization and mapping in the oral cavity of the phantom. The performance at a remarkable point of the whole transoral navigation procedure is shown in Fig. 38.8. Fig. 38.8A shows an image of the phantom acquired from the endoscope; most parts of the oral cavity are homogeneous and low in texture. The green dots in Fig. 38.8B are feature points extracted from the corresponding intensity image, and the reconstructed map and trajectory of the endoscope are depicted in Fig. 38.8C. After the map and endoscopic trajectory are reconstructed, combining intraoperative and preoperative information is necessary, as shown in Fig. 38.9: we register the map points with a CT scan so that they share a uniform coordinate system. The reconstructed map is displayed in Fig. 38.9A, and the combination of the transoral SLAM and the preoperative CT model is shown in Fig. 38.9B. From the trial, the root-mean-square errors of the reconstructed map and of the endoscope trajectory are 0.836 and 0.614 mm, respectively, which can meet the requirements of most transoral surgeries.
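For reference, a per-point root-mean-square error between estimated and reference positions (once both are in a common frame) can be computed as in this minimal sketch; the sample points below are invented, not the trial data:

```python
import numpy as np

def rmse(est, ref):
    """Root-mean-square error over corresponding 3D points (same units as input)."""
    est, ref = np.asarray(est, float), np.asarray(ref, float)
    return float(np.sqrt(np.mean(np.sum((est - ref) ** 2, axis=1))))

# invented example: a uniform 0.6 mm offset yields an RMSE of exactly 0.6 mm
ref = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [2.0, 0.0, 1.0]])
est = ref + np.array([0.6, 0.0, 0.0])
assert abs(rmse(est, ref) - 0.6) < 1e-12
```

The same computation applies to both the map points (against CT correspondences) and the trajectory (against a reference trajectory).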

38.4 Conclusion

A major clinical need for a surgical robot performing transoral surgeries is a flexible mechanism for collision avoidance, and ACTORS, with flexible manipulators and intelligent guidance, provides a solution. In our design, 3-PU parallel mechanisms with superelastic Ni-Ti rods are used to achieve flexible movement, improving the performance of conventional rigid parallel mechanisms. Adaptiveness and compliance are achieved through the flexibility of the manipulators. Following the clinical requirements, load capacity, safety, dexterity, operability, and real-time performance were all taken into consideration in designing the robotic system, and its performance, including payload, flexibility, grasping capability, and cooperative operation, has been demonstrated. Accurate endoscope navigation in transoral surgery is also realized using visual SLAM with a low-cost endoscope. Furthermore, a cadaveric trial was conducted to test the feasibility of the robotic system.

ACTORS Robotic System Chapter | 38


FIGURE 38.9 Transoral map reconstruction and registration: (A) reconstructed map from visual SLAM; (B) combination of intraoperative (transoral SLAM) and preoperative (CT) information to conduct surgical navigation in the oral cavity. CT, Computed tomography; SLAM, simultaneous localization and mapping.

References


[1] Simaan N, Xu K, Wei W, Kapoor A, Kazanzides P, Taylor R, et al. Design and integration of a telerobotic system for minimally invasive surgery of the throat. Int J Robot Res 2009;28(9):1134-53.
[2] Hong MB, Jo Y-H. Design of a novel 4-DOF wrist-type surgical instrument with enhanced rigidity and dexterity. IEEE/ASME Trans Mechatron 2014;19(2):500-11.
[3] Poon H, Li C, Gao W, Ren H, Lim CM. Evolution of robotic systems for transoral head and neck surgery. Oral Oncol 2018;87:82-8.
[4] Ren H, Lim CM, Wang J, Liu W, Song S, Li Z, et al. Computer-assisted transoral surgery with flexible robotics and navigation technologies: a review of recent progress and research challenges. Crit Rev Biomed Eng 2013;41(4-5):365-91.
[5] McLeod IK, Melder PC. Da Vinci robot-assisted excision of a vallecular cyst: a case report. Ear Nose Throat J 2005;84:170-2.
[6] Hockstein NG, O'Malley Jr. BW, Weinstein GS. Assessment of intraoperative safety in transoral robotic surgery. Laryngoscope 2006;116(2):165-8.
[7] Garg A, Dwivedi RC, Sayed S, Katna R, Komorowski A, Pathak K, et al. Robotic surgery in head and neck cancer: a review. Oral Oncol 2010;46(8):571-6.
[8] Reiley CE, Akinbiyi T, Burschka D, Chang DC, Okamura AM, Yuh DD. Effects of visual force feedback on robot-assisted surgical task performance. J Thorac Cardiovasc Surg 2008;135(1):196-202.
[9] Lang S, Mattheis S, Hasskamp P, Lawson G, Güldner C, Mandapathil M, et al. A European multicenter study evaluating the flex robotic system in transoral robotic surgery. Laryngoscope 2017;127(2):391-5.
[10] Mattheis S, Hasskamp P, Holtmann L, Schäfer C, Geisthoff U, Dominas N, et al. Flex robotic system in transoral robotic surgery: the first 40 patients. Head Neck 2017;39(3):471-5.
[11] Pizarro D, Bartoli A. Feature-based deformable surface detection with self-occlusion reasoning. Int J Comput Vision 2012;97(1):54-70.
[12] Bernhardt S, Nicolau SA, Soler L, Doignon C. The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017;37:66-90.
[13] Zhao J, Feng B, Zheng M-H, Xu K. Surgical robots for SPL and NOTES: a review. Minim Invasive Ther Allied Technol 2015;24(1):8-17.
[14] Groothuis SS, Stramigioli S, Carloni R. Modeling robotic manipulators powered by variable stiffness actuators: a graph-theoretic and port-Hamiltonian formalism. IEEE Trans Robot 2017;33(4):807-18.
[15] Li C, Wang T, Hu L, Zhang L, Du H, Wang L, et al. Accuracy analysis of a robot system for closed diaphyseal fracture reduction. Int J Adv Robot Syst 2014;11(10):169.
[16] Li C, King NKK, Ren H. A skull-mounted robot with a compact and lightweight parallel mechanism for positioning in minimally invasive neurosurgery. Ann Biomed Eng 2018;1-14.
[17] Du H, Hu L, Li C, Wang T, Zhao L, Li Y, et al. Advancing computer-assisted orthopaedic surgery using a hexapod device for closed diaphyseal fracture reduction. Int J Med Robot Comput Assist Surg 2015;11(3):348-59.
[18] Li C, Gu X, Xiao X, Lim CM, Ren H. A robotic system with multi-channel flexible parallel manipulators for single port access surgery. IEEE Trans Ind Inf 2018;15:1678-87.
[19] Black CB, Till J, Rucker DC. Parallel continuum robots: modeling, analysis, and actuation-based force sensing. IEEE Trans Robot 2018;34(1):29-47.
[20] Mur-Artal R, Montiel JM, Tardos JD. ORB-SLAM: a versatile and accurate monocular SLAM system. IEEE Trans Robot 2015;31(5):1147-63.

Index

Note: Page numbers followed by "f" and "t" refer to figures and tables, respectively.

A Abdominal surgery, 182 application in, 232 234 Accessory neurovascular plate, 188 Accuracy phantom, 421, 422f Acetabular implant alignment screen, 420, 421f Acidosis, 182 ACL replacement surgery. See Anterior cruciate ligament replacement surgery (ACL replacement surgery) ACS NSQIP. See American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) Active arrays, 553 Active assistance, 364, 368, 371 Active constraints, 232 Active methods, 228 Active scope holders, 90 Active soft-tissue balancing system development OMNIBotics system, 460 472 BalanceBot system development, 465 BoneMorphing/shape modeling, 461 463 cadaver labs and clinical results, 471 472 engineering for product commercialization, 467 initial prototype design requirements, 465 466 OMNIBot miniature robotic cutting guide, 464 465, 465f proof of concept, 466 467 surgical workflow, 470 471 verification, validation, and regulatory clearances, 467 469 ACTORS. See Adaptive and compliant transoral robotic surgery (ACTORS) Actual measured (or estimated) electrical angle, 360 Actuation, 288, 377 methods for MR-safe/conditional robots, 376 377 Actuators, 332 feedback level, 292 space variable, 310 Acute MI, 342 Adaptive and compliant transoral robotic surgery (ACTORS), 694. See also Commercial surgical robot systems; Robotic surgery clinical requirements, 694 695

experimental evaluation cadaveric trial with manipulators, 699 endoscope navigation trial on phantom, 700 manipulators for cadaveric trial, 699f performance of manipulators, 698 699, 699f flexible parallel manipulators, 695 698, 696f, 697f intelligent guidance, 698 master console, 698 robotic system, 695, 695f, 696t Adenocarcinoma, 173 Admittance-type paradigm, 654 Admittance-type robots, 650 651 Advanced control techniques, 364 Advanced rendering methods, 646 AESOP. See Automated Endoscopic System for Optimal Positioning (AESOP) Aggregation cost computation, 230 AHSQC. See Americas Hernia Society Quality Collaborative (AHSQC) AirSeal system, 175 176 AKTORmed GmbH, 90 American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP), 161 Americas Hernia Society Quality Collaborative (AHSQC), 216 (3-Aminopropyl)triethoxysilane (APTES), 332 Analytic linear least squares estimation, 621 Anastomosis, 184 Anatomic implant designs, 444 Anatomical landmarks, identification of, 533 534 Ancillary devices, 427 Anesthesiological considerations, 175 176 Angiography, 361 Angular motions, 270f, 271 272, 279 280 Animation video, 414 415 image processing pipeline, 418f validated tracking volume, 419f Anterior cruciate ligament replacement surgery (ACL replacement surgery), 500 Anterior lobe, 172 Anterior periprostatic tissue preservation, 188, 188f Anterograde dissection, 184 186 Anterograde intrafascial dissection, 184 187, 186f

left intrafascial anterior dissection, 187f ligation of NVB using clip in extrafascial approach, 187f NVB, 186f right anterolateral dissection, 187f Antireflux surgery, 217 Anubiscope platform, 126 127, 127f Application accuracy, 616 APTES. See (3-Aminopropyl)triethoxysilane (APTES) AR. See Augmented reality (AR) Arduino microcontroller, 390 Arm. See Articulated arm Arteriovenous malformations (AVMs), 16 Arthroscopic/arthroscopy, 511 cameras, 495 procedures, 508 steerable robotic tools for, 495 500 tools, 494 Articulated arm, 344 345, 345f Artificial palpation, 298 ASTM 2554-10 standard, 421, 422f ASTM F2503 standard, 376 Asymptotically stable system, 360 Auditory augmentation, 649 650 Auditory feedback, 646 Augmented reality (AR), 204, 206f, 224, 224f, 576, 646 650 application in abdominal surgery, 232 234 auditory augmentation, 649 650 augmented reality based three-dimensional image guided surgery system, 577 579 3D integral videography image overlay for image guidance, 578 579 depth visualization, 648 injection guidance application, 647f mosaicing, 646 647 in robotic liver surgery, 204 205 safety in robotic MIS, 225 safety warning methods, 231 232 structure of interest, 225 227 subsurface imaging, 647 648 surgical scene description, 227 231 semantic segmentation, 227 228 surgical scene reconstruction, 228 230 tissue tracking, 230 231 tool tracking, 648 649 vessel enhancement, 648 Automated drilling operation, 615, 615f


Automated Endoscopic System for Optimal Positioning (AESOP), 41, 61b, 80 Automated image segmentation, 31 Automated marker localization, 608 609 Automatic image-based classification, 227 Automatic localization in image space, 608, 608f in physical space, 609 Automatic patient localization and registration, 604 609 automated marker localization, 608 609 robotic navigation and point-pair correspondence, 606 608 Automatic segmentation and tracking, 506 507 Autonomous interventions, 664 Autonomous robotic bone drilling, 614 616 automated drilling operation, 615 experimental results, 616, 617f force controller, 615 616 Autonomous robotic navigation, 501 Autonomous surgical robotic systems, 511 AVMs. See Arteriovenous malformations (AVMs) Axial resolution of spectral-domain optical coherence tomography, 637 638

B Back emf constant, 358 BalanceBot robotic ligament tensioning tool, 460 BalanceBot system development, 465 engineering analysis of, 466f evolution of BalanceBot design, 468f Ball-prism-spherical (SPS), 266 267 Balloon/stent catheter (BSC), 344 Bariatric surgery, 212, 212f procedure background, 213 robotic gastric bypass, 214 robotic sleeve gastrectomy, 214 215 Bayes’ theorem, 256 Bayesian framework, 255 Benign prostatic hyperplasia (BPH), 324 325, 327, 330f Bicompartmental components, 476 Bicruciate-retaining tibia implants, 454 Bilateral TAPP inguinal hernia repairs, 5 Biliostasis, 200 Binary large objects (BLOBs), 609 Biocompatible silicone elastomers, 332 Biopsy gun, 389 Birmingham wire gauge system (BWG system), 640 Bladder neck, 184, 184f, 185f BLDC. See Brushless DC (BLDC) BLOBs. See Binary large objects (BLOBs) Blunt Hasson trocar, 179 Bocciardi approach. See Retzius-sparing approach Bone bone-mounted robotic systems, 464 cuts navigation, 429 430 model color, 487

morphing process, 470 471 pin insertion, 481 preparation, 486f approach mode, 487, 487f checkpoints, 485 CT view, 487 488, 488f page layout, 485, 486f visualization and stereotactic boundaries, 486 487 registration, 483f capturing remaining landmarks, 483, 484f femur and tibia, 483, 484f implant planning, 485 patient landmarks, 483 verification, 483 485 tracking hardware, 445 446 bicortical engagement of bone screws, 446f Tibial tracker attachment on the patient’s bone, 447f typical OR setups, 446f BoneMorphing/shape modeling, 461 463 3D BoneMorphing acquisitions and model check screens, 464f Bowden cables transport energy, 377 BPH. See Benign prostatic hyperplasia (BPH) Brain shift, 582 Branch retinal vein occlusion (BRVO), 631 Breast cancer, 376 Brightness adjustment, 609 Bronchoscopy, 304, 511 Brushless DC (BLDC), 275, 357 linear DC motors, 360 BRVO. See Branch retinal vein occlusion (BRVO) BSC. See Balloon/stent catheter (BSC) Burr shutoff, 487 BWG system. See Birmingham wire gauge system (BWG system)

C C-arm fluoroscopy, 552 CABG. See Coronary artery bypass graft (CABG) Cable-driven mechanisms, 498 Cadaver labs and clinical results, 471 472 Cadaveric trial with manipulators, 699 Calibration process, 390 391, 553, 618, 619f CAM. See Chorioallantoic membrane (CAM) Camera, 415 417, 415f, 499 applying shroud over draped camera, 416f Clamp/Shroud/Camera assembly for, 417f sensor fusion of camera image, 510 size comparison between Intellijoint minioptical technology and Polaris system, 416f Camera Calibration Toolbox, 370 CAN. See Control area network (CAN) Canonical image, 365 CAOS. See Computer-assisted orthopedic surgery (CAOS) Capacitive sensors, 288 Cardiopulmonary disease, 174 175

Cart, 128 Cartesian coordinates, 280 281 CAS. See Computer-assisted surgery (CAS) CASIT. See Center for Advanced Surgical and Interventional Technology (CASIT) Caspar robotic device, 398, 406t Cassette, 345 347, 346f, 346t Cath lab, 342 343 Catheter, 184, 185f, 249 BSC, 344 endovascular, 248 249 intervention, 249 mapping, 249 multilumen, 327 328, 328f polyamide multilumen, 327 robotic, 327 330 ThermoCool SmartTouch ablation, 248 249 vascular, 249 Cement and close, 456 Center for Advanced Surgical and Interventional Technology (CASIT), 286 haptic feedback system, 290f correlation between grip force and softtissue injury, 286f existing haptic feedback systems for robotic surgery, 287 288 feedback modalities, 286 287 multimodal feedback system haptic feedback unit, 294 overview, 289, 293f sensory unit, 289 292 signal processing unit, 292 294 validation studies, 294 299 pneumatic feedback unit, 294 Central nervous system (CNS), 241 Central retinal vein occlusion (CRVO), 631 Central venous pressure (CVP), 200 Centroid detection, 417 Cervical fusion, 566 569 with radiofrequency ablation and vertebroplasty, 566 CG. See Computer graphics (CG) Chandelier endoilluminators, 634 Chan Vese level set active contour algorithm, 501 Cholecystectomy, 160 161 Chorioallantoic membrane (CAM), 642 643 Circle detection, 609, 610f, 610t Circular-track RCM mechanisms, 655 Clamp, 48, 58, 59f Clarke transform, 358 Clinical motivations for MRI guided robotic stereotaxy, 586 587 Clinical target volume (CTV), 16 Closed-loop control for cooperative-control systems, 658 659 for handheld systems, 657 660 system, 412 413 for teleoperated systems, 659 660 feedback and guidance, 656 657 method, 622 regulation, 615, 616f


CMM. See Coordinate measurement machine (CMM) CNN. See Convolutional neural network (CNN) CNS. See Central nervous system (CNS) Cobra (manually driven endoscopic robot), 305 306, 305f Cockpit, 348 Collision avoidance, 23, 609 610 Colon Carcinoma Laparoscopic or Open Resection (COLOR), 148 Colon surgery, robotic, 218 219 Colonoscope, 330 331 COLOR. See Colon Carcinoma Laparoscopic or Open Resection (COLOR) Color fundus imaging, 634 Colorectal cancer, 124, 330 331 Colorectal disease, 12 Colorectal surgery, 148 anatomic depiction of right hemicolectomy, 218f features of Flex Colorectal Drive, 149 153 future directions in robotics, 156 157 innovations, 149 laparoscopic era, 148, 148t procedure background, 218 robotic colon surgery, 218 219 robotic rectal surgery, 219 robotics, 148 149, 148t surgery with Flex Colorectal Drive, 153 155 Comanipulation. See Cooperative-control systems Come downstairs! phrase, 62 Commercial master device, 278 Commercial surgical robot systems, 245 249. See also Adaptive and compliant transoral robotic surgery (ACTORS) Medtronic MiroSurge, 247 248 NeuroArm, 248 REVO-I, 247 Senhance system, 246 247 Sensei, 248 249 Common-path OCT (CP OCT), 636 Common-path SS OCT (CP-SSOCT), 662 Communication protocol, 278 279 Compatible anesthesia system, 587 Complementary metal-oxide semiconductor (CMOS) image sensors, 503 for knee arthroscopy, 501 503 sensors for knee arthroscopy, 501 503 Computed tomography (CT), 173, 197, 225 226, 228, 400, 576 577, 586, 603, 694 Computer navigation in knee replacement surgery, 439 440 software, 109 Computer graphics (CG), 577 Computer Motion, Inc., 41 Computer-aided design software, 603 Computer-aided navigation systems, 586 Computer-assisted “robotic” technologies, 108

Computer-assisted implantation of total knee prostheses, 426 Computer-assisted orthopedic surgery (CAOS), 507 Computer-assisted surgery (CAS), 224, 426 Computer-assisted telemanipulator, 212 Condition number, 609 610 Console, 3 Constraints, 34 35 Continuum manipulator, 310 318, 312f decomposed, 312f manipulator-independent mapping, 314 316 manipulator-specific mapping, 311 314 Contrast enhancement, 648 Control area network (CAN), 275 276 Control console, 347, 349 Conventional IV-based 3D image rendering algorithms, 580 Conventional kinematic analysis, 316 Conventional knee arthroplasty, 444 Conventional laparoscopy, 196 Conventional surgical techniques, 516 Conventional trocar position, 95 96, 96f Convolutional neural network (CNN), 579 Cooperative operation test, 698 699 Cooperative teleoperation behavior, 660 Cooperative-control systems, 654, 656 closed-loop control for, 658 659 robot control algorithms based on sclera force information, 659 robot control algorithms based on tool-tip force information, 658 Coordinate measurement machine (CMM), 421 Coordinate systems, 21 22, 21f, 310 318, 311f, 350, 351f, 354, 586, 607, 607f drawback of typical, 316 318 Coronary artery bypass graft (CABG), 364 Coronary artery disease, 342 CorPath control console, 348, 349f CorPath GRX, 344, 344f CorPath System, 347 Coulomb friction, 685 CP OCT. See Common-path OCT (CP OCT) CP-SSOCT. See Common-path SS OCT (CPSSOCT) CR. See Cruciate retaining (CR) Cradle, 128 130 Cranial surgery, 420 Craniotomy, 591 592 Cruciate retaining (CR), 445 CRVO. See Central retinal vein occlusion (CRVO) CT. See Computed tomography (CT) CTV. See Clinical target volume (CTV) CUDA technology, 365, 639 Cup orientation, 545 Curved stepper motor, 384, 385f Cutter design, 678, 678f Cutter retraction, mechanism for, 679 680, 680f CVP. See Central venous pressure (CVP) CyberKnife System, 16 17 data management and connectivity systems, 35 robotic manipulation, 20 25


subsystems, 20 35 system overview, 17 20 treatment delivery imaging systems, 27 target localization and tracking methods, 27 31 treatment head, 25 27, 26f treatment planning image registration and segmentation algorithms, 31 33 radiation dose calculation and optimization algorithms, 33 35 treatment suite, 18f Cybernetic surgery, 204 205

D da Vinci or Raven platform, 274 da Vinci Research Kit, 364 da Vinci robot and docking, 176 179 da Vinci robot Xi, 177 179 da Vinci Si Surgical System, 44f, 163 164 port placement for, 164f da Vinci Skills Simulator, 51, 52f da Vinci SP Robotic System, 50 51, 51f, 156 157 da Vinci Surgical System (Intuitive Surgical Inc. ), 40, 109, 153, 194, 212, 298, 694 basic principles and design, 41 44 clinical adoption, 52 53 procedure trends, 53 publications, 53, 53f features, 152t intuitive surgical timeline, 40 41, 45f models, 42f robotic surgical system, 124, 289 surgical access, 50 51 technology training, 51 52 timeline of selected company milestones, 41f tissue interaction, 47 50 visualization, 45 47 da Vinci Xi Surgical System, 44f, 163 166, 166f Data management and connectivity systems, 35 DCE. See Dual-channel flexible endoscopes (DCE) DDES. See Direct Drive Endoscopic System (DDES) Decision-making processes, 242 Decoupling, 660 Deep learning, 511 Defense Advanced Research Programs Agency, 40 Deformable image registration method, 31 Deformable registration, 226 Deformation analysis, 678 679 Degrees of freedom (DoFs), 108, 124, 266, 329, 364, 379 380, 497 498, 516 517, 588, 602, 695 sensor, 641 642 DeMayo, 500 Denavit Hartenberg method (D H method), 350 352, 352f, 353t parameters, 370 Denonvilliers’ fascia, 183 186, 188 rectus bladder fascia, 172


Denonvilliers’ plane, 184, 185f Deployable tissue retraction mechanisms, 336, 337f Depressed-membrane pneumatic actuator design, 294, 295f Depth visualization, 648 Detection-based approaches, 230 231, 233 Dexterous instruments, 645 646 dexterous vitreoretinal instruments, 645f OCT volume, 646f Dexterous workspace (DWS), 267 D H method. See Denavit Hartenberg method (D H method) Digital sensor devices, 468 469 Digitally reconstructed radiographs (DRRs), 17, 27 29 Digitized surface, 560 Direct current control mode, 357 of stepper motors, 361 Direct Drive Endoscopic System (DDES), 306, 307f Direct quadrature (DQ), 357 control architecture for permanent magnet synchronous motors, 358 360 Direct visualization, 231 Disparity computation, 230 disparity-based stereo technique, 660 661 refinement, 230 Distal articulated cap, 308 Distractor, 427 DLCC. See Dynamic load carrying capacity (DLCC) DLR. See German Aerospace Center (DLR) Docking of robot, 5, 182f, 184. See also Optimal robot positioning DoFs. See Degrees of freedom (DoFs) Doppler OCT, 634 Dorsal lithotomy, 200 Dorsal vascular complex (DVC), 184 186 Dose conformality, 16 Dose nodes, 23 Double parallelogram (DP), 266 267, 653 Double-acting cylinder, 380, 382 Double-level osteotomy, 433, 434f, 439 Douglas space, 183, 188 DP. See Double parallelogram (DP) DQ. See Direct quadrature (DQ) DRIVE dataset, 648 Driving torque, 465 466 DRRs. See Digitally reconstructed radiographs (DRRs) Dual-channel flexible endoscopes (DCE), 124 Dual-GPU architecture, 639, 640f Dual-speed stepper motor, 384 385, 385f Dual-SPU dual SPU-based DP mechanism, 267 parallel mechanism, 269, 271f DVC. See Dorsal vascular complex (DVC) DWS. See Dexterous workspace (DWS) Dynamic load carrying capacity (DLCC), 523 Dynamic performance index, 523

E EBL. See Estimated blood loss (EBL) ECE. See Extracapsular extension (ECE) EDM. See Electrical discharge machining (EDM) EDWS. See Extended DWS (EDWS) EI. See Elemental image (EI) EIA. See Elemental image array (EIA) Elastomer O-rings, 381 Electric-motor actuation, 650 651 Electrical discharge machining (EDM), 324 Electrical impedance sensing, 644 645 Electromagnetic (EM) electromagnetic-based actuation, 663 664 shielding limit, 593 tracking system, 499, 557 Elemental image (EI), 577 Elemental image array (EIA), 577 EMR. See Endoscopic mucosal resection (EMR) Encoders, 349, 498 EndoAssist, 59 60, 60f, 61b EndoControl, 61, 80 Endoilluminators, 634 Endopelvic fascia, 188 Endorectal ultrasound, 173, 174f EndoSamurai from Olympus, 306, 306f Endoscope/endoscopic/endoscopy, 304, 511 camera vision system, 326 clamp, 91, 93f, 95 devices, 266 module, 130 navigation system, 698, 698f navigation trial on phantom, 700 stabilization, 336, 337f visualization, 231 Endoscopic arm, 331 335 centimeter-scale soft-foldable actuators, 337f deployable tissue retraction soft-foldable mechanisms, 337f hybrid soft-foldable manufacturing method, 333f integration of centimeter-scale soft-foldable actuators, 337f multiarticulated soft-foldable robotic arm, 335f proprioceptive actuation through capacitive sensing, 335f soft-foldable arm ex vivo test on porcine stomach, 336f soft-foldable mechanisms, 332f trajectories of fully soft bending actuator, 333f Endoscopic mucosal resection (EMR), 306, 330 331 Endoscopic robots advantages of flexible robots, 309 310 motorized endoscopic robots, 307 309 purely mechanical endoscopic robots, 305 306 Endoscopic submucosal dissection (ESD), 124, 308, 324, 330 331 Cyclop, 126 Endoscopic surgery, 304

coordinate system and kinematic mapping of continuum manipulator, 310 318 endoscopic robots, 305 309 endoscopic surgical robotic system, 319f experimental results, 318 321, 320f flexible robots in, 321 manual endoscopic surgery tools, 304 robot arms, 319f technical challenges, 304 305 Endoscopists, 304 Endovascular catheters, 248 249 EndoWrist, 47 design, 40 instruments, 109, 194 Stapler, 47 48, 48f Enhanced recovery after surgery (ERAS), 160 Enhanced recovery pathway, 163 Enhanced Vision System for Robotic Surgery (EnViSoRS), 232 234, 232f Entropy-based mosaic update method, 646 647 EnViSoRS. See Enhanced Vision System for Robotic Surgery (EnViSoRS) EOS system. See also FreeHand system acquisition system, 530 532 benefits of slot-scanning weight-bearing technology, 531 532 Cobb angle variation, 532f magnification-free images, 533t patient stabilization accessories, 531f patient-specific three-dimensional models, 533 538 preoperative surgical planning solutions and intraoperative execution, 538 547 system description, 530 531 platform, 530 room, 530f EOSapps, 530 EP approach. See Extraperitoneal approach (EP approach) Epiretinal membranes (ERMs), 628 peeling, 631 Equivalent path length, 34 ERAS. See Enhanced recovery after surgery (ERAS) Ergonomics, 212 of arthroscopy, 494 ERMs. See Epiretinal membranes (ERMs) ERP. See Extraperitoneal RP (ERP) Error analysis of neurosurgical robotic system, 616 622 Error mode, 138 ESD. See Endoscopic submucosal dissection (ESD) Estimated blood loss (EBL), 6 8 EtherCAT Master module, 136 EurEyeCase framework, 656 Extended DWS (EDWS), 267 Extended reach arm, 344 Extracapsular extension (ECE), 173 Extrafascial dissection exacerbates, 184 Extrahepatic approach, 201 Extraperitoneal approach (EP approach), 175 176, 179 183


configuration, 180f Pneumo-Retzius induction, 181f port position to create a pneumoretroperitoneum, 180f trendelenburg position, 180f Trocars’ tent effect, 181f underumbilical vertical incision to expose muscularis fascia, 179f video laparoscopic approach, 181f Extraperitoneal RP (ERP), 179 Extreme robotic liver surgery, 203 204 Eye sensing, 5 6 Eye-tracking visualization, 2

F Fabrication process of soft-foldable actuators, 334, 334f Fabry Pe´rot interferometry (FPI), 643 FBG sensors. See Fiber Bragg grating sensors (FBG sensors) FD OCT. See Frequency-domain OCT (FD OCT) FD OCT principle. See Fourier domain optical coherence tomography principle (FD OCT principle) FDA. See US Food and Drug Administration (FDA) FDA-cleared OMNIBotics system, 468 469 FDG-PET. See Fluorodeoxyglucose-PET (FDG-PET) FEA. See Finite element analysis (FEA) Feed-forward, 390 391 Feedback modalities, 286 287, 287t haptic feedback, 287 sensory substitution, 287 Feedback technology, 288 Female reproductive system, 304 Femoral array to array adapter, 481, 482f Femoral axis length, 533 534 Femoral implant, rotation of, 430, 431f Femoral offset, 537, 537f Femoral tracking array, 470 471 Femoro-tibial mechanical angle navigation, 429 Femoro-tibial mechanical axis (FTMA), 434 436 Femur, 483, 484f, 535 fracture, 516 Fetus-supporting manipulator, 581 FFR. See Fractional flow reserve (FFR) Fiber Bragg grating sensors (FBG sensors), 327 328, 328f optical fibers, 642 643 Fiducial localization error (FLE), 617 618 Fiducial marker tracking, 28 Fiducial points, 605 Field of view (FOV), 494, 509 510, 633 Filtering-based approaches, 228 Finite element analysis (FEA), 678 Finite-size pencil beam (FSPB), 34 Fire, 48 Firefly, 45, 46f Fixed alignment guides, 413 Fixed collimator housing, 23 Fixed conical collimators, 26 Flash of light, 560

Flash Registration, 560 561, 560f FLE. See Fiducial localization error (FLE) Flex Base, 149, 151f Flex Colorectal Drive, 149 bedside positioning, 151f bending of scope, 155 in cadaveric model, 154f features, 149 153 hybrid nature, 156 instrumentation, 153 with laparoscopic-style flexible instruments, 151f, 152f loss of tactile feedback, 156 range, 156 surgery with, 153 155 visualization, 149 153 Flex Robotic Base, 149, 150f Flex Robotic systems, 125, 153 features, 152t Flex Scope, 149, 150f, 152f FLEX System (Medrobotics Inc.), 694 FlexCart, 149, 150f Flexibility test, 698 699 Flexible catheter-like robot sensing system design, 328, 329f Flexible endoscopes, 325 Flexible manipulators, 309 Flexible microactuators, 331 Flexible parallel manipulators, 695 698, 696f, 697f gripper, 696 motion transmission, 697 698 parallel mechanism, 696 697 Flexible robots, 309 310 in endoscopic surgery, 321 FlexiForce sensors, 291, 291f Fluid-driven actuation, 595 Fluidic lines, 334 335 Fluorescein angiography, 634, 635f Fluorescence imaging, 45 46 Fluorodeoxyglucose-PET (FDG-PET), 173 174 Fluoroscopy fluoroscopy-based IGS systems, 553 low-dose ionizing radiation, 343 “Follow-the-leader” concept, 125 Fonds Unique Interministe´riel (FUI), 144 Foot pedal, 83 Force controller, 615 616 Force feedback mechanism, 6, 241, 280 281, 658 Force sensing, 273 274, 274f, 275f, 641 643, 642f force gradients, 643 retinal interaction forces, 641 643 scleral interaction forces, 643 Force sensor, 682, 682f Force-based supervisory controller, 681 682, 681f Foregut surgery, 217 218 procedure background, 217 robotic nissen fundoplication, 217 218 Forward kinematics, 350 353 D H method, 350 352


formulation of arm, 352 353 Fourier domain optical coherence tomography principle (FD OCT principle), 636 639 CP OCT, 636 SD OCT axial resolution, 637 638 imaging depth, 638 lateral resolution, 638 sensitivity, 639 FOV. See Field of view (FOV) FPI. See Fabry Pe´rot interferometry (FPI) Fractional flow reserve (FFR), 348 Fracture reduction, 516 517 Frameless stereotactic neurosurgery, 586 FreeHand system. See also EOS system advantages, 72 73, 77b appendectomy setup, 70f applying plastic sleeve, 73f, 74f bariatric surgery setup, 71f challenges with manual surgery, 58 cholecystectomy setup, 69f clip, 74f components, 68b control box, 64f development and iterations, 58 62, 63b disadvantages, 73 74, 77b experience with, 67 74 food pedal, 66f foot pedal in use, 76f FreeHand 1.0, 63f FreeHand 1.2, 63f FreeHand 2.0, 64f goniometric LEDs, 75f hands-free control unit, 65f headset in use, 76f indicator unit, 66f, 77f inguinal hernia setup, 69f nephrectomy setup, 72f Nissen fundoplication setup, 71f operative setup, 62 operative use, 62 67 positioning template, 67f preoperative preparation, 62 rectal surgery setup, 70f rectopexy setup, 72f robotic motion assembly, 65f sleeve cover, 68f video-assisted thoracic surgery setup, 73f zoom module and clip, 67f Frenet Serret frame, 311 Frequency-domain OCT (FD OCT), 635 Friction, 684 FSPB. See Finite-size pencil beam (FSPB) FTMA. See Femoro-tibial mechanical axis (FTMA) FUI. See Fonds Unique Interministe´riel (FUI) “Fulcrum” effect, 6, 194 Full width at half maximum (FWHM), 638 Fully autonomous robotic and image-guided system, 509 511 arthroscopy, 511 leg manipulation for better imaging and image-guided leg manipulation, 510


Fully autonomous robotic and image-guided system (Continued) sensor fusion of camera image and ultrasound guidance, 510 vision-guided operation with steerable robotic tools, 510 511 FWHM. See Full width at half maximum (FWHM)

G GA. See General anesthesia (GA) 68 Ga-labeled prostate-specific membrane antigen-PET CT, 173 174 Gap balancing technique, 460 Gastric bypass, 213, 213f robotic, 214 Gastroenterology, 325 326, 326f Gastroesophageal reflux disease, 217 Gastrointestinal (GI) endoscopic surgery, 326 tract, 304, 324 Gastrointestinal cancers, 318 Gaze interaction, 5 6 GC. See Guide catheter (GC) GCR. See Guide catheter rotation (GCR) General anesthesia (GA), 674 General purpose computing on graphics processing units (GPGPU), 639 General surgery, 2, 247 248 Genu varum deformity, osteotomies for, 432 433 Geometric accuracy, 16 German Aerospace Center (DLR), 247 Goniometric point, 62 Gough Stewart platform, 516 517, 519 523, 521f GPGPU. See General purpose computing on graphics processing units (GPGPU) Graphical user interface (GUI), 134, 329 330, 518 519 Graphics processing units processing, 639 Graphics system description, 370 371 Grip, 48 reduction in grip forces, 294 297, 297f Gripper, 696 Gripping force, 280 281 Grommet. See Ventilation tube (VT) GUI. See Graphical user interface (GUI) Guide catheter (GC), 342 343, 342f Guide catheter rotation (GCR), 345 Guidewire (GW), 342 343 guidewire-linear module, 347f Guidewire linear (GWL), 346 347 Guidewire rotation (GWR), 346 347 GW. See Guidewire (GW) GWL. See Guidewire linear (GWL) GWR. See Guidewire rotation (GWR) Gynecology, 10 11, 197 heterogeneous series, 11 hysterectomy in obese patients, 11 monolateral ovarian cyst removal, 10 11 Senhance and standard laparoscopy for benign and malignant disease, 11

H Hand-assisted laparoscopic surgery (HALS), 160 161 Handheld robotics, 454 Handheld systems, 654 closed-loop control for, 657 660 “Hands-on” cooperative control. See Cooperative-control systems Haptic feedback systems for robotic surgery, 287 288, 289t actuation and feedback technology, 288 sensing technology, 287 288 Haptic(s), 6 assistance, 364, 368 369, 371 373 display device, 369 370 feedback, 157, 244 245, 249, 286 287, 288f systems, 253 255 unit, 294, 296f fundamentals, 240 241 future perspectives, 256 257 haptic-enabled robotic systems, 255 information, 286 research systems, 251 256 human interaction, 255 256 sensing systems, 251 253 surgery and, 241 242 surgical systems commercial surgical robot systems, 245 249 emerging surgical needs, 250 251, 250f, 250t surgical practice, 249 surgical robotics landscape, 243 245 tele-operated surgical robot systems, 242 243 warning system, 248 Haptics Manager software, 292 294, 293f Harris Hip Score, 401 HCC. See Hepatocellular carcinoma (HCC) HD image. See High-definition image (HD image) Head movements, 70 72 Head-mounted display (HMD), 576 Health hazards, 343 Hemostasis, 200 Hepatectomy, robotic, 219 Hepatic hilum, 201 Hepatocaval dissection, 201 Hepatocellular carcinoma (HCC), 194 Hernia surgery procedure background, 215 robotic inguinal hernia repair, 216 217 robotic transversus abdominis release, 216 robotic ventral hernia repair, 215 216 High tibial opening wedge osteotomy, 432 433, 433f High tibial osteotomy (HTO), 427, 439 High-definition image (HD image), 175 stereoscopic laparoscope, 247 High-performance image registration schemes, 592 High-resolution tomography, 586

High-speed OCT using graphics processing units, 639, 640f
Hilum dissection, 201
Hip-knee-ankle angle (HKA angle), 429, 432f, 535–537
hipEOS, 542–545, 544f
HKA angle. See Hip-knee-ankle angle (HKA angle)
HMD. See Head-mounted display (HMD)
Holmium:yttrium-aluminum-garnet laser (Ho:YAG laser), 327–329
Homogeneous transformation matrix, 315
Hough transform, 609
HTO. See High tibial osteotomy (HTO)
Human decision-making process, 256
Human haptic sensory system, 240–241, 240f
Human interaction, 255–256
Human vision, 501–503
Human–robot interaction, 498
Hybrid manufacturing paradigm, 331
Hybrid soft-foldable
  manufacturing method, 332, 333f
  robotic arm, 326
Hybrid visual servoing scheme, 661
Hydraulics, 377
Hypercapnia, 182
Hyperplastic process, 173
Hysterectomy in obese patients, 11, 11f

I

iBlock, 403
ICG. See Indocyanine green (ICG)
ICP. See Iterative closest point (ICP)
ICSI. See Intracytoplasmic sperm injection (ICSI)
iDMS. See Integrated data management system (iDMS)
IGS. See Image-guided surgery (IGS)
Illumination systems, 633
ILM. See Internal limiting membrane (ILM)
IMA. See Inferior mesenteric artery (IMA)
Image registration, 412
  and segmentation algorithms for treatment planning, 31–33
    automated image segmentation, 31
    multimodality image import and registration, 31
    retreatment, 32–33
Image stabilization, 365–366
  algorithms, 364
  fundamental spaces for motion compensation, 366f
  motion compensation, 365f
  effect of strip-wise affine map, 366f
Image-free technology, 446–448
  fine tuning of rotational axis, 449f
  hip center collection of patient's anatomy, 447f
  image-free patient femur anatomy mapping, 448f
  image-free patient tibia anatomy mapping, 449f
  patient ligament laxity collection, 448f

Index

Image-guided leg manipulation, 510
Image-guided motion compensation, 364
  for beating heart surgery, 364
  experimental setup
    graphics system description, 370–371
    robotic system description, 369–370
  image stabilization, 365–366
  shared control, 367–369
  simulation experiments, 371–373
  sonomicrometry system, 365
  SWAM, 366–367
Image-guided surgery (IGS), 552–554, 576
  evolution, 552–553
  intraoperative image-guided surgery systems, 553–554
  navigation in spinal and cranial procedures, 552, 552f
  preoperative image-guided surgery systems, 554
  robotic surgery
    image guidance based on optical coherence tomography, 662
    image-guidance based on video, 660–662
Image-processing algorithm, 648
"Image-to-physical" registration process, 605
Image/imaging
  depth of spectral-domain optical coherence tomography, 638
  modalities, 634–635
  preprocessing, 609
  processing, 111–113, 112f
    pipeline, 417–418, 418f
    software, 503
  quality, 632
  rectification, 370–371
  systems for treatment delivery, 27
Imageless robotic systems, 403
Immediate planning, 586
Impedance sensing, 644–645
Impedance-type robots, 650–651
iMRI. See Interventional MRI (iMRI)
IMU. See Inertial measurement unit (IMU)
IMV. See Inferior mesenteric vein (IMV)
In vitro test of robotic platform, 329–330, 330f
"Inchworm" strategies, 653
InCise MLC, 26–27
Independent joint variables, 354
Indocyanine green (ICG), 45, 201
Inertial measurement unit (IMU), 499
Inferior mesenteric artery (IMA), 165
Inferior mesenteric vein (IMV), 165
Inferior vena cava plan, 199
Infrared (IR) optical tracking system, 559
Inguinal hernia repair, 12
  robotic, 216–217
Inguinal hernias, 215, 215f
Insertable flexible robotic platform, 321
Institut de Recherche sur les Cancers de l'Appareil Digestif (IRCAD), 126
Instrument modules, 130–131
Instrument motion DoF, 653, 653f
Instrument tracking, 111–113, 112f
Integral videography technique (IV technique), 576

Integrated data management system (iDMS), 35, 36f
Integrated table motion, 48–50, 49f
Integrating haptic feedback, 241
Intelligent guidance, 698
Intellijoint device, 406t
Intellijoint HIP, 420
  acetabular implant alignment screen, 421f
  in use for acetabular implant inclination and anteversion measurement, 421f
Intellijoint minioptical technology, 413–414, 414f
  accuracy performance, 421–422
  accuracy phantom, 422f
  camera, 415–417, 415f
  challenges and further development, 423
  clinical applications, 420
  Intellijoint HIP, 420
  minioptical system calibration, 418–420
  software framework, 417–418
  system overview, 414–415
  tracker, 418
Intensity-based features, 227
Interconnectivity, 35
Intermediate collimator, 25–27
Internal iliac arteries, 173
Internal limiting membrane (ILM), 631
Internal vision systems, 508
Intervention catheters, 249
Interventional cockpit, 344
Interventional MRI (iMRI), 588
Interventional physicians, 343
Intraarticular arthroscopy, 509–511
Intracorporeal knot tying, 298–299
Intracytoplasmic sperm injection (ICSI), 674
Intrafascial prostatectomies, 183
Intraknee perception, miniature stereo cameras for, 501–506
Intraluminal surgery, 305
  in digestive tract, 124
  technical advances in, 124–126
Intraocular Robotic Interventional and Surgical system (IRISS), 645–646, 655
Intraoperative (intraop). See also Preoperative (preop)
  decision process, 225
  image-guided surgery systems, 553–554
    intraoperative fluoroscopy-based image-guided surgery systems, 553
    intraoperative three-dimensional image-guided surgery systems, 553–554
  imaging modalities, 47
  patient three-dimensional image registration, 579
  refinement, 586
  tissue classification, 227
Intraoperative OCT (iOCT), 636, 648, 662–664. See also Optical coherence tomography (OCT)
Intravascular ultrasound, 348
Intro-ocular dexterity robot (IODR), 655
Intuitive surgical timeline, 40–41, 45f
Inverse kinematics, 354–357
  formulation, 354–356, 357f
  independent joint variables, 354


iOCT. See Intraoperative OCT (iOCT)
IODR. See Intro-ocular dexterity robot (IODR)
IR optical tracking system. See Infrared (IR) optical tracking system
iRAM!S. See Robot-assisted Microscopic Manipulation for Vitreoretinal Ophthalmologic Surgery (iRAM!S)
IRCAD. See Institut de Recherche sur les Cancers de l'Appareil Digestif (IRCAD)
Iris collimator, 23
Iris variable aperture collimator, 26
IRISS. See Intraocular Robotic Interventional and Surgical system (IRISS)
Irradiated volume (IV), 16
Isoperistaltic ileocolic anastomosis, 165
Isotropic score, 273
Isotropy mechanism, 271, 273
Iterative closest point (ICP), 622
IV. See Irradiated volume (IV)
IV technique. See Integral videography technique (IV technique)

J

Jacobian analysis, 271–273
Jacobian matrix, 316
JDMS. See Joint displacement minimization strategy (JDMS)
JHU SHER variable admittance control, 659, 659f
JHU Steady-Hand Robot, 660
Joint displacement minimization strategy (JDMS), 614
Joint limit avoidance (JLA), 609–610
Joystick, 91–95, 94f

K

K-FLEX robotic system, 125
Kalman filter, 644
Kinematic(s)
  analysis, 268–269, 270f, 350–357, 488, 489f
    forward kinematics, 350–353
    inverse kinematics and workspace analysis, 354–357
  mapping of continuum manipulator, 310–318
  model, 498, 618–619
Kinesthesia, 241
Kinesthetic
  feedback, 287–288, 288f
  haptic interface, 253–254
  sensation, 240–241
Knee arthroscopy, 494, 500
Knee gap detection for leg manipulation, 501
Knee surgery navigation
  computer navigation in knee replacement surgery, 439–440
  operative procedures, 428–431
  orthopilot device, 427, 427f
  osteoarthritis post malunion of right femur, 440f


Knee surgery navigation (Continued)
  osteotomies for genu varum deformity, 432–433
  results, 438–439
  results of prospective randomized study, 426t
  UKA, 440
kneeEOS, 545–547, 546f, 547f
Knot tying, 298–299
KUKA Agilus KR6 R900 sixx robot, 619, 620t
KUKA QUANTEC KR300 R2500 Ultra robot, 20, 21f

L

LA. See Local anesthesia (LA)
LAN. See Local area network (LAN)
Laparo-endoscopic single-site surgery, 324
Laparoendoscopic single-site surgery (LESS), 178
Laparoscopic right colectomy with extracorporeal anastomosis (LRCEA), 161–162
Laparoscopic right colectomy with intracorporeal anastomosis (LRCIA), 161–162
Laparoscopic/laparoscopy, 40, 160, 194, 495
  appendectomy, 96–97, 96f
  approach, 196
  cholecystectomy
    multiport, 98–99, 98f
    single incision, 99, 99f
  colectomy, 160–161
    left-side colon, 101, 101f
    right-side colon, 100, 101f
  distal gastrectomy, 99–100, 100f
  distal pancreatectomy and splenectomy, 102
  gastric bypass, 214
  inguinal hernia repair, 97–98, 97f
  instruments, 177
  prostatectomy, 183
  rectal resection and five-port left-side colectomy, 101–102, 102f
  surgery, 40, 58, 80, 108, 212, 242, 244–245
    challenges, 160–161
    interventions, 228
Laser fiber, 329
Laser therapy, smart composites in robotic catheter for targeted, 327–330
Laser trackers, 620, 620f
Laser-assisted BPH, 327
Lateral resolution of spectral-domain optical coherence tomography, 638
Leadscrew mechanism, 377
Learning-based methods, 511
LEDs. See Light-emitting diodes (LEDs)
Left temporal open biopsy, 569–571
Leg holder, 479, 480f
Leg manipulation, 510
Leg manipulators for knee arthroscopy
  knee gap detection for leg manipulation, 501
  leg manipulation systems, 500, 500f
Lenses, additional, 633
LESS. See Laparoendoscopic single-site surgery (LESS)

LifeCam VX-800 USB 2.0 camera, 370–371
Ligament, 460
  balance, 430–431, 432f
Light sources, 633–634
Light-emitting diodes (LEDs), 553
Lighting, 503–504
Limb alignment/component positioning, 476–477
Linac laser, 24
Linear actuator, 271–272
Linear-quadratic regulator (LQR)
  LQR-assisted PID controller, 685–687
  LQR-assisted tuning approach, 683
Linear slot-scanning radiography, 531–533
Linear term, 685
Linkage-based RCM mechanisms, 655
Live X-ray images, registration of, 27–29
Liver parenchymal transection, 199
Liver transplantation, 203–204
LNi. See Lymph node involvement (LNi)
Load Angle Inlay Total Knee design, 476
Local anesthesia (LA), 674
Local area network (LAN), 35
Local excision, 154
Long-term (LT) soft-tissue tracking, 231
Lower limbs, 535–537
LRCEA. See Laparoscopic right colectomy with extracorporeal anastomosis (LRCEA)
LRCIA. See Laparoscopic right colectomy with intracorporeal anastomosis (LRCIA)
Lymph node involvement (LNi), 173
Lymphatic drainage, 173

M

Machemer lens, 633
Machine center, 17
Machine learning (ML), 227
Machine vision algorithm, 604
Macroscopic Soft Tissue Injury score, 401
Magnetic resonance (MR), 376, 586, 589t, 604–605
  conditional device, 376
  MR-based active tracking techniques, 593–594, 593f
  MR-guided biopsy, 376
  MR-based tracking, 593–594
  safe device, 376
  unsafe device, 376
Magnetic resonance imaging (MRI), 197, 225–226, 324, 376, 559–560, 576–577, 586
  clinical motivations for MRI-guided robotic stereotaxy, 586–587
  compatibility of surgical robots, 376
  MRI-guided robotic stereotaxy, 586–587, 587f
  MRI-guided robotic systems, 589–595
    MR-based tracking, 593–594
    MRI-compatible actuation, 594–595, 594f
    nonrigid image registration, 591–592
  MRI-guided stereotactic neurosurgery, 587–588, 588f, 591f

Magnetic resonance-safe robotic system
  actuation methods for MR-safe/conditional robots, 376–377
  clinical challenge, 376
  evaluation of stepper motors and Stormram 4, 391–395
  pneumatic cylinders, 380–382
  pneumatic device control, 389–391
  state of the art, 377–380, 378f
    pneumatic magnetic resonance imaging robots, 377–378
    Stormram 1–4 and Sunram 5, 379–380
  stepper motors, 382–385
  Sunram 5 design, 386–389
Mako robotic arm, 404
  device, 398, 403, 403f, 406t
  system, 412–413, 452, 478, 488–489
Male pelvic anatomy, 31
Mammography, 376
Manipulator(s), 109
  arms, 3, 4f
  manipulator-independent mapping, 311, 314–316, 314f
  manipulator-specific mapping, 310–314
Manual approach, 226
Manual intraoperative selection, 227
Manual MRI-guided breast biopsy procedure, 376
Mapping catheters, 249
Markers, 427, 605
Marmor Modular Knee development, 476
Master
  console, 698
  interfaces calibration, 138
  workspace, 367
Master And Slave Transluminal Endoscopic Robot (MASTER), 125, 308, 308f
Master–slave
  approach, 255
  architecture of da Vinci system, 41–42, 43f
  control, 278
Matching cost computation, 230
Matlab/Simulink programs, 698
Maximally stable extremal regions (MSER), 609
Mayo Hip Score, 401
MCL. See Medial collateral ligament (MCL)
MDH transformation procedure. See Modified Denavit–Hartenberg transformation procedure (MDH transformation procedure)
Mechanical design, 496–498
Mechanical synchronization, 364
Mechanics modeling, 498
Mechanoreceptors, 241, 287
Mechatronic arms, 44
Mechatronic concepts, 650–653
  electric-motor actuation, 650–651
  piezoelectric actuation, 651–653
  remote-center-of-motion mechanisms, 653
Medial avascular plane, 184
Medial collateral ligament (MCL), 429
Median lobe, 172


Mediastinum, 217
Medical device standards, 257
Medical imaging technologies, 632
Medical robotics, 608
  miniature stereo cameras for, 501–506
Medtronic MiroSurge, 247–248
Medtronic O-arm, 553
Memorial Sloan Kettering Cancer Center (MSKCC), 175
MEMS. See Microelectromechanical systems (MEMS)
MER. See Microelectrode recording (MER)
Metallic tibial components, 476
MI. See Myocardial infarction (MI)
Microscope visualization, 231
Microconvex lens array (MLA), 577
Microelectrode recording (MER), 586
Microelectromechanical systems (MEMS), 324, 331
Microforceps with integrated force sensing, 643
Micron, 654, 657–658, 660–661
Microrobots, 656, 663–664
Microsurgery, 248
Microsurgical procedures, 628
Miniature stereo cameras, 501–506
  CMOS for knee arthroscopy, 501–503
  emerging sensor technology for medical robotics, 503–504
  for knee arthroscopy, 504–506
  stereo imaging validation in knee arthroscopy, 504–506
Miniature three-axis three-axial force sensor, 273–274
Minimally invasive approaches, 212, 218
Minimally invasive surgery (MIS), 2, 90, 108, 148, 160, 194, 224, 266, 324, 494, 576
  integration of diagnosis and treatment in, 582
  robots
    gastroenterology, 325–326
    robotic platforms, 326
    smart composites in robotic catheter for targeted laser therapy, 327–330
    soft-foldable endoscopic arm, 330–335
    urology, 324–325
Minioptical system calibration, 418–420
MIS. See Minimally invasive surgery (MIS)
ML. See Machine learning (ML)
MLA. See Microconvex lens array (MLA)
MLC. See Multileaf collimator (MLC)
Model predictive control algorithms (MPC algorithms), 365
Modern retinal instruments, 640
Modified Denavit–Hartenberg transformation procedure (MDH transformation procedure), 618–619, 620t
Modified Rose Bengal method, 631–632
Modified trocar position, 95–96, 96f
Monitor chamber, 25
Monitor units (MU), 34
Monolateral ovarian cyst removal, 10–11
Monte Carlo dose calculation, 34
Monteris stereotactic platform, 588

Morbid obesity, 213
Morpho-realistic parametric model, 534
Mosaicing methods, 646–647, 647f
Motion
  compensation, 364, 365f
  control system modeling, 357–361
    direct current control of stepper motors, 361
    DQ control architecture for permanent magnet synchronous motors, 358–360
    PMSM model, 357–358
    quadrature current control of brushless linear DC motors, 360
  prediction, 365
  transmission, 697–698
Motorized endoscopic robots, 307–309
MPC algorithms. See Model predictive control algorithms (MPC algorithms)
mpMRI. See Multiparametric magnetic resonance imaging (mpMRI)
MR. See Magnetic resonance (MR)
MrBot, 377, 378f
MRI. See Magnetic resonance imaging (MRI)
MSER. See Maximally stable extremal regions (MSER)
MSKCC. See Memorial Sloan Kettering Cancer Center (MSKCC)
MU. See Monitor units (MU)
muC103A cameras, 504
Multiarticulated soft-foldable robotic arm, 334, 335f
Multileaf collimator (MLC), 20, 27
Multilevel registration for spine deformity procedures, 571–572
Multilumen catheter, 327–328, 328f
Multimodal imaging system, 510
Multimodality image import and registration, 31
Multiparametric magnetic resonance imaging (mpMRI), 173
Multispectral imaging pixels, 503
Myocardial infarction (MI), 342
Myringotomy, 674

N

NanEye stereo camera, 504
National Aeronautics and Space Administration (NASA), 80
National Cancer Database, 160
Natural orifice transluminal endoscopic surgery (NOTES), 50, 124–125, 149, 305, 324, 694
Navigation
  principles, 444
  system, 601–602
Navio robotic device, 398, 406t
NAVIO surgical system, 452
  handheld robotic-assisted tool and, 445f
  workflow, 444–456
    cement and close, 456
    patient and system setup, 445–446
    prosthesis planning, 448–452
    registration, 446–448
    robotic-assisted bone cutting, 452–454
    trial reduction, 454–456


Near-infrared fluorescence imaging, 45
Near-infrared technology (NIR technology), 201
Nelder–Mead algorithm, 612
Neovascularization, 631
Nerve-sparing technique, 184–186
Nervous anatomy, 188–189
NeuroArm system, 248
NeuRobot, 249
Neurosurgeon, 603–604
Neurosurgery, 249, 600
Neurosurgical robotic system, error analysis of, 616–622
  RONNA kinematic and nonkinematic calibration, 618–622
Neurovascular bundle (NVB), 172, 184–186, 186f
Neutral orientation strategy (NOS), 614
Ni-Ti rod, 697
NIR technology. See Near-infrared technology (NIR technology)
Nissen fundoplication, 217, 217f
  robotic, 217–218
Node, 3
  properties, 22–23
Nonlinear compensation, 687–689
Nonlinear term, 684–685
Nonrigid image registration, 591–592, 592f
Nonrigid registration, 592
Nonuniform discrete Fourier transform, 637
Nonuniform fast Fourier transformation, 637
Normal force feedback, 288
NOS. See Neutral orientation strategy (NOS)
NOTES. See Natural orifice transluminal endoscopic surgery (NOTES)
Novel manipulator, 581
Numerical interpolation, 637
NVB. See Neurovascular bundle (NVB)

O

OARs. See Organs at risk (OARs)
Obesity, 216–217, 477–478
Obturator node, 173
OC disease. See Organ-confined disease (OC disease)
OCS. See Orientation correction strategy (OCS)
OCT. See Optical coherence tomography (OCT)
OCT angiography (OCT-A), 634, 648
OctoMag system, 656
Office-based surgical devices, 674
OME. See Otitis media with effusion (OME)
OMNI BalanceBot system, 460
OMNIBotics system, 406t, 452, 460–461, 461f
  BalanceBot system development, 465
  BoneMorphing/shape modeling, 461–463
  cadaver labs and clinical results, 471–472
  engineering for product commercialization, 467
  initial prototype design requirements, 465–466
  OMNIBot miniature robotic cutting guide, 464–465, 465f


OMNIBotics system (Continued)
  proof of concept, 466–467
  surgical workflow, 470–471
    intraoperative photo sequence, 470f
  verification, validation, and regulatory clearances, 467–469
OMS. See Oral and maxillofacial surgery (OMS)
OPDs. See Organic photodetectors (OPDs)
Open surgery, 324
  challenges with, 160–161
Open-loop calibrations, 620
Open-sky
  implementation, 661
  surgery, 628
Operating room (OR), 224, 413, 603, 663
  configuration in Intellijoint system, 415f
Operation phase, 604
Operative stereo microscopy, visualization through, 632–634
Optical coherence tomography (OCT), 632, 634–636, 638f, 644, 648, 663–664. See also Spectral-domain optical coherence tomography (SD OCT)
  image guidance based on, 662
Optical Fabry–Perot interferometry technique, 253
Optical fiber system, 304
Optical imaging system, 27
Optical technology, 416
Optical trackers, line-of-sight issues for, 557
Optical tracking system (OTS), 499, 578, 601–602
Optimal robot positioning, 609–614
  dexterity evaluation, 610–611
  in physical space, 613
  position planning for collaborating robots, 613
  robot localization strategies, 613–614
  RONNA reachability maps, 611
  single robot position planning algorithm, 612
Optimization method, 621
OR. See Operating room (OR)
Oral and maxillofacial surgery (OMS), 576
ORBSLAM, 698, 700
Organ-confined disease (OC disease), 173
Organic photodetectors (OPDs), 503
Organs at risk (OARs), 31
Orientation correction strategy (OCS), 614
Orthopedic robotics, 452
Orthopilot device, 406t, 427, 427f
  marker with four reflecting balls, 428f
  tibial and femoral markers fixed percutaneously to bone, 428f
Orthosoft Hip navigation system, 418
Osteoarthritis, 412
Osteotomies, 439
  for genu varum deformity, 433–434, 435f, 439
    double-level osteotomy, 433, 434f
    high tibial opening wedge osteotomy, 432–433, 433f
  UKA, 434–436, 436f
  UKA to TKA revision, 436–438

Otitis media with effusion (OME), 674
OTS. See Optical tracking system (OTS)
Otsu adaptive thresholding algorithm, 501, 502f
Overcorrection, 476–477

P

PA/PM. See Piezoelectric actuator/motor (PA/PM)
Paired-point matching, 554
Parallel mechanism, 266–267, 696–697
Parallel robotic Jacobian analysis method, 271
Paralytic ileus, 182
Parasympathetic fibers, 173
Parenchymal transection phase, 200
Pars-plana vitrectomy, 640
Pasadena Consensus Panel (PCP), 174
Passive arrays, 553
Passive methods, 228
Passive scope holders, 90
Passive sensors, 460
Passivity theorem, 359–360
Patellofemoral
  arthroplasty, 398–399
  disease, 477–478
Paths, 22, 22f
Patient cart, 177
Patient localization, 604–605, 608
Patient movement, 628
Patient positioning, 4
  array assembly, 481
    femoral array to array adapter, 481, 482f
  base array placement and orientation, 482
  bone pin insertion
    femur only, 481
    tibia only, 481
  operating room configuration, 482, 482f
  patient time out page, 482
  securing leg and IMP De Mayo knee positioner, 479–480
    leg holder attachment block, 480f
    putting posts into slots of leg holder block, 480f
    securing leg holder, 481f
  single-leg procedures, 479
    leg holder, 480f
Patient reported outcomes (PROMs), 471–472
Patient side manipulator (PSM), 364
Patient-mounted tracking camera, 414
Patient-side cart, 41
Patient-specific blocks, 444
Patient-specific three-dimensional models, 533–538
  lower limbs, 535–537
  modeling technology, 533–534
  pelvis, 535
  spine, 538
Payload capability test, 698–699
PCa. See Prostatic carcinoma (PCa)
PCI. See Percutaneous coronary intervention (PCI)
PCL. See Posterior cruciate ligament (PCL)
PCP. See Pasadena Consensus Panel (PCP)
Pelvic tilt (PT), 538–540, 543f

Pelvis, 535
Perception–action cycle, 242
Percutaneous coronary intervention (PCI), 342–343, 343f. See also Robotic-assisted PCI
  health hazards, 343
  precision, 343
Peritoneum, 183
Periurethral muscular fascial structures, 188
Permanent magnet synchronous motors (PMSMs), 357–360
  model, 357–358
  torque, 358
PERUSIA technique, 188
PET. See Positron emission tomography (PET)
Phantom model design for hidden tubular anatomical structure, 297–298, 298f
Physical image, 365
Physical space
  automatic localization in, 609
  robot positioning in, 613
PI. See Proportional integral (PI)
Pick-and-place test, 698–699
PID control. See Proportional integral derivative control (PID control)
"Pie-crusting" ligaments, 460
Piezo motors, 377
Piezoelectric actuation, 651–653
  stick-slip actuators, 651
  USM stage, 682–683
Piezoelectric actuator/motor (PA/PM), 676–677
Piezoelectric motors (PM), 588, 653
Piezoresistive sensors quantify force, 288
Pinch force, 346–347
Pipelined approach, 292
Pitch, 48
Pivot point. See Goniometric point
Planning target volume (PTV), 16
PM. See Piezoelectric motors (PM)
PMSMs. See Permanent magnet synchronous motors (PMSMs)
Pneumatic(s), 377
  cylinders, 380–382
    double-acting cylinder, 382
    manufacturization, 382
    rectangular cross-sectional shape, 380–381
    sealing, 381
    single-acting cylinder design, 381, 382f
  device control, 389–391
  kinesthetic feedback actuator design, 294, 296f
  magnetic resonance imaging robots, 377–378
  transducers, 251–252
Pneumo-Retzius induction, 179, 181f
Pneumoperitoneum, 177–178, 183, 202
Pneumothorax, 176
PneuStep motor, 377, 378f
Point cloud acquisition process, 461
Point correspondence, 417


Point spread function (PSF), 638
Point-based approach, 226
Point-pair correspondence, 606–608
Polaris Optical Tracking System, 416
Polaris Vicra, 559
Polyamide multilumen catheter, 327
Polycentric TKA, 476
Polyetheretherketone, 588
Polyjet printer, 382
Polyoxymethylene, 588
Porcine retina ex vivo model, 661
Pose calculation, 417
Position control mode, 138
Position planning for collaborating robots, 613
Positron emission tomography (PET), 173, 576–577
Posterior cruciate ligament (PCL), 507
Posterior segments, robotic liver resection for
  operative setup, 202
  surgical technique, 202–203
Posterior stabilized designs, 445
Postoperative course, 155
Power vision monitor (PVM), 344, 348
PRECE. See Predicting ECE in prostate cancer (PRECE)
PRECEYES Surgical System, 255, 655–656
Precision, 343
  force transducers, 248
Predicting ECE in prostate cancer (PRECE), 175
Predictive Balancing technique, 471–472
Preoperative (preop). See also Intraoperative (intraop)
  clinical assessment, 174–175
  course, 153
  image-guided surgery systems, 554, 555f
  imaging modality for prostate cancer, 173–174
    endorectal ultrasound, 174f
    MRI image, 174f
  knee motion collection, 446–447
  pain assessment, 477
  phase, 603
  planning, 401, 586
Presacral node, 173
Primary collimator, 25
Primary image, 31
Prisms-universal robot (UPU robot), 266–267
Probe tip, 95
Proficiency, 59
PROMs. See Patient reported outcomes (PROMs)
Proportional integral (PI), 357
Proportional integral derivative control (PID control), 367, 685–686
Prostate gland, 172
  prostate–urethral junction, 188
Prostatic carcinoma (PCa), 173
Prostatic pathology, 172
Prostatic vascular complex, 188–189
Prosthesis planning, 448–452
  full range of motion ligament balance planning, 451f, 452f

  initial femur implant planning, 450f
  initial tibia implant planning, 450f
  ligament balance planning with the virtual components, 451f
Proximity detection, 23
3-PRS parallel mechanism. See Three-prismatic-revolute-spherical parallel mechanism (3-PRS parallel mechanism)
PSF. See Point spread function (PSF)
PSM. See Patient side manipulator (PSM)
PT. See Pelvic tilt (PT)
PTV. See Planning target volume (PTV)
3-PU mechanism. See Three-prismatic-universal mechanism (3-PU mechanism)
Purely mechanical endoscopic robots, 305–306
PVM. See Power vision monitor (PVM)

Q

QoL. See Quality of life (QoL)
Quadrature current
  control of brushless linear DC motors, 360
  mode, 357
Quality of life (QoL), 188–189

R

Radiation dose calculation and optimization algorithms for treatment planning, 33–35
Radical prostatectomy (RP), 173
Radiofrequency (RF), 566
Radiosurgery, 16
RAMS. See Robotically assisted minimally invasive surgery (RAMS)
Range of motion (ROM), 445–446, 542–544
Rapid prototyping techniques, 377
RARP. See Robot-assisted RP (RARP)
Ray circle detector (RCD), 609
Ray-tracing algorithm, 34
RCM. See Remote center of motion (RCM)
Reachability parameter (RP), 611
Real-time
  accurate three-dimensional image rendering, 580
  optical coherence tomography for retinal surgery, 634–636
  respiratory motion tracking, 29–31
  tracking, 593
Rectal cancer, 219
Rectification, 229, 229f
Rectus fascia, 172
Reduction in grip forces, 294–297, 297f
Redundant FBG sensors, 328
Region of interest (ROI), 364
Registration, 226–227, 556
  accuracy, 616
Remote center of motion (RCM), 109, 266, 266f, 268f, 273f, 643, 653
Remote surgery. See Telesurgery
Remote telepresence manipulators, 242–243
Repeatability of robot, 132
Resectoscope, 325


Respiratory tract, 304
RESTORIS partial knee application (RESTORIS PKA), 478–479
  intraoperative, 479
  preoperative, 479
Retina, 628
Retinal interaction forces, 641–643
Retinal microsurgery, 628
Retinal robotic systems, 654
Retinal tracking algorithm, 662
Retinal vein
  cannulation, 631
  occlusion, 631
Retrorectus, 215
Retzius, 179
Retzius-sparing
  approach, 183–184
  space, 182
RF. See Radiofrequency (RF)
RF ablation (RFA), 566
Rhabdosphincter, 188
Rigid bodies. See Markers
Rigid registration, 226, 605
RIO. See Robotic Arm Interactive Orthopedic System (RIO)
RMIS. See Robotic minimally invasive surgery (RMIS)
RMS. See Root mean square (RMS)
RoboCouch, 24–25, 25f
ROBODOC system, 398, 401, 404–405, 406t
RoboLens, 109–113, 112f
  bedside, 118, 119f, 120f
Robossis (orthopedic surgical robot), 516. See also Commercial surgical robot systems
  comparison with Gough–Stewart platform, 519–523
    DLCC, 523
    dynamic performance index, 523
    singularity analysis, 519, 521f
    singularity effects on actuator forces and torques, 519–523
    workspace, 519, 520f, 521f
  experimental testing
    force testing, 525–526
    high-stiffness rubber bands, 526f
    surgical workspace, 524–525
    trajectory tracking, 523–524
  robot structure, 516–519
    control panel of robot, 518f
    robot for long-bone fracture reduction application, 518f
    showing gripping fractured femur saw bone, 517f
Robot
  arms, 319, 319f
  assistance, 400
  calibration and working modes, 137–138, 620
    master interfaces calibration, 138
    teleoperation activation, 138
  dexterity, 610
  end-effector, 613
  intrinsic accuracy, 616
  localization strategies, 613–614


Robot (Continued)
  normalized Jacobian matrix, 610
  positioning
    control modes, 464
    in physical space, 613, 614f
  robot-assisted fracture reduction of long bones, 516
  robot-assisted left hepatectomy, 198–200
    operative setup, 198–199
    surgical technique, 199–200
  robot-assisted operation, 581
  robot-assisted right hepatectomy, 202f
    dissection of hilum, 201
    hepatocaval dissection, 201
    operative setup, 200–201
    surgical technique, 201
    transection of liver, 201
  robot-assisted surgeries, 495, 516
    for soft tissue, 46–47
  tool frame, 21
  user frame, 21
  world frame, 21
Robot control
  algorithms based on sclera force information, 659
    examples of research in image-guided robotic retinal surgery, 661f
    JHU SHER variable admittance control, 659, 659f
    velocity-limiting function, 658, 658f
  based on tool-tip force information, 658
Robot-assisted Microscopic Manipulation for Vitreoretinal Ophthalmologic Surgery (iRAM!S), 655–656
Robot-assisted RP (RARP), 174, 178t
Robotic Arm Interactive Orthopedic System (RIO), 478
Robotic minimally invasive surgery (RMIS), 224, 250, 266
Robotic neuronavigation (RONNA), 600–601
  historical development, 600–601
    industrial robots used for neuronavigation, 601t
  kinematic and nonkinematic calibration, 618–622
    kinematic model, 618–619
    measurement setup, 620–621
    optimization method, 621
    validation, 621–622
  reachability maps, 611, 611t
    α and β angles with respect to patient, 612f
    mean reachability map, 612f
  RONNA G3 system, 602f
  RONNAplan, 603, 604f
  RONNAstereo, 602, 609, 614f
  state of the art in, 600
  surgical workflow, 603–604, 605f, 606f
    freely distributed fiducial markers, 605f

bladder neck, 184 complications, 188 191 patients’ preparation, 173 179 anesthesiological considerations, 175 176 Da Vinci robot and docking, 176 179 preoperative clinical assessment, 174 175 preoperative imaging modality for prostate cancer, 173 174 preservation of anterior periprostatic tissue, 188 preservation of santorini plexus, 188 robotic surgical anatomy of prostate, 172 173 surgical approach to prostate, 179 184 extraperitoneal approach, 179 183 retzius-sparing approach, 183 184 transperitoneal approach, 183 urethrovesical anastomosis, 188 Robotic retinal surgery advanced instrumentation, 640 646 dexterous instruments, 645 646 force sensing, 641 643, 642f impedance sensing, 644 645 layout of stereo microscope with iOCT and digital cameras, 641f optical coherence tomography, 644 sensor-integrated vitreo-retinal instruments, 642t augmented reality, 646 650 autonomous interventions, 664 clinical requirements, 628 632 cross-section of human eye, 628f governing dimensions in retinal surgery, 629t human factors and technical challenges, 628 629, 630t main targeted interventions, 631 models used for replicating anatomy, 631 632, 632t motivation for robotic technology, 629 631 overall layout and view during retinal surgery, 630f closed-loop feedback and guidance, 656 657 image-guided robotic surgery, 660 662 novel therapy delivery methods, 663 664 practical challenges, 663 state-of-the-art robotic systems, 650 657 system optimization, 663 visualization in retinal surgery, 632 639 Robotic right colectomy with intracorporeal anastomosis (RRCIA), 161 162 Robotic surgery, 86, 148, 212, 212f. See also Adaptive and compliant transoral robotic surgery (ACTORS) anatomy of prostate, 172 173 challenges with manual surgery patellofemoral arthroplasty, 398 399 THAs, 399 TKA, 399 UKA, 398, 477 478 future directions, 405 407 comparison of major robotic devices, 406t

  midline sagittal section of prostate and anatomical location, 172f
  operative setup, 403–404
  robotic hip surgery experience, 401
  robotic knee surgery experience, 399–401
  surgical technique, 404–405
  systems, 108, 286
Robotic-assisted laparoscopic surgery for patients with rectal adenocarcinoma (ROLARR), 162
Robotic-assisted PCI, 343–344
  future of robotic vascular interventional therapy, 361
  kinematics analysis, 350–357
  motor control system modeling, 357–361
  operation and workflow
    preparation for robotic-assisted PCI, 348
    robotic procedure, 348–349
    safety considerations, 349–350
  system description
    articulated arm, 344–345
    cockpit and power vision monitor, 348
    control console, 347
    CorPath GRX, 344
    robotic drive and cassette, 345–347
Robotic(s), 148–149, 148t
  arm, 601–602
    arm-assisted surgery, 489
  in arthroplasty, 399
  catheter, 327–330
  colorectal surgery
    experience, 161–162
    operative setup, 163–164
    patient selection and evaluation, 162–163
    port placement for robotic right hemicolectomy, 166f
    preoperative preparation, 163
    procedure-specific instruments, 164f
    robotic arms, 167f
    surgical technique, 165–166
  devices, 398
  drive, 349, 349f
    and cassette, 345–347, 346f
  future directions in, 156–157
  in general surgery
    in bariatric surgery, 213–215
    in colorectal surgery, 218–219
    in foregut surgery, 217–218
    in hernia surgery, 215–217
    in solid organ surgery, 219
    utilization, 212
  and image-guided knee arthroscopy
    autonomous robotic knee arthroscopy systems, 495f
    fully autonomous robotic and image-guided system for intraarticular arthroscopy, 509–511
    leg manipulators for knee arthroscopy, 500–501
    miniature stereo cameras for medical robotics and intraknee perception, 501–506
    scenario in knee arthroscopy, 494f
    steerable robotic tools for arthroscopy, 495–500

Index

ultrasound-guided knee arthroscopy, 506–509 inguinal hernia repair, 216–217 knee surgery experience, 399–401 TKA, 400–401 UKA, 399–400 liver resection advantages and disadvantages, 196–197 robotic liver resection for posterior segments, 202–203 liver surgery, 204f advantages and disadvantages of robotic liver resection, 196–197 cybernetic surgery, 204–205 extreme robotic liver surgery, 203–204 patient selection and preoperative preparation, 197 robot cost prohibitive, 206–207 robot-assisted left hepatectomy, 198–200 robot-assisted right hepatectomy, 200–201 robotic liver resection for posterior segments, 202–203 robotic-assisted minimally invasive liver surgery, 194–196 low anterior resection, 165 motivation for robotic technology, 629–631 navigation, 606–608 platforms, 326 procedure, 348–349 robotic hip surgery experience, 401 CT-based preoperative planning, 402f preoperative preparation, 401–403 robotic-assisted bone cutting, 452–454 confirmation of saw cut before execution, 455f hybrid total knee execution where handheld robotics, 455f hybrid total knee tibia execution, 456f NAVIO handheld robotics using exposure control, 453f NAVIO surgical screen depicting locking features, 453f robotic-assisted burring to fine tune saw cuts, 456f robotic-assisted CABG surgery, 364 robotic-assisted digital laparoscopy, 2–8 indications, 6–8 pelvic lymph node dissection, 12f system components, 3–6 robotic-assisted minimally invasive liver surgery, 194–196 robotic-assisted orthopedic surgery, 398 robotic-assisted platforms, 286 robotic-assisted surgery, 12 robotic-assisted UKA, 478 robotics-assisted orthopedic cutting systems, 444 robotics-assisted systems, 444 scope holders, 104 systems, 175, 250, 369–370, 404, 460, 600 teleoperation master–slave system, 369–370 transversus abdominis release, 216

vascular interventional therapy, 361 ventral hernia repair, 215–216 Robotically assisted minimally invasive surgery (RAMS), 240 Rod implant bent, 542, 542f ROI. See Region of interest (ROI) ROLARR. See Robotic-assisted laparoscopic surgery for patients with rectal adenocarcinoma (ROLARR) Roll, 48 ROM. See Range of motion (ROM) RONNA. See Robotic neuronavigation (RONNA) RONNA fourth generation system (RONNA G4 system), 601–603, 602f automatic patient localization and registration, 604–609 autonomous robotic bone drilling, 614–616 current version, 603f error analysis of neurosurgical robotic system, 616–622 future development and challenges, 622 optimal robot positioning with respect to patient, 609–614 Root mean square (RMS), 471, 606–607 ROS. See Robot Operating System (ROS) RP. See Radical prostatectomy (RP); Reachability parameter (RP) RRCIA. See Robotic right colectomy with intracorporeal anastomosis (RRCIA)

S
S-surge, 277 experimental environment, 278–279, 279f experimental results, 279–282, 282f sensorized surgical instrument, 273–274, 276–277 surgical manipulator, 268–276 surgical robot, 267–268, 277f, 278t, 280f SA. See Safety area (SA) SABR. See Stereotactic ablative radiotherapy (SABR) SAD. See Source-to-axis distance (SAD); Sum of ADs (SAD) Safety augmentation, 233–234 considerations, 349–350 in robotic MIS, 225 warning methods, 231–232 active constraints, 232 AR visualization, 231, 233f Safety area (SA), 232 Safety volume (SV), 233–234 Santorini plexus, 173, 183 preservation, 188, 189f SBRT. See Stereotactic body radiation therapy (SBRT) Scleral interaction forces, 643 Scope holder, 90 Scorpion Shaped Endoscopic Surgical Robot (SSESR), 307–308, 307f SD OCT. See Spectral-domain optical coherence tomography (SD OCT)


SDS. See Surgical data science (SDS) SDU. See Stent deployment unit (SDU) Segment detection, 417 rejection, 417 Segmentation algorithms, 501, 502f Self-retaining retractors, 58 Semantic segmentation, 227–228, 228f Semiactive robotics, 452 Semiautomatic preoperative identification, 225–227 registration, 226–227 Semiautomatic segmentation and tracking, 506–507 Seminal vesicles (SVs), 172 approach to, 184 catheter, 185f Denonvilliers’ plane, 185f posterior plane, 185f Senhance Robotic System, 157 Senhance Surgical System, 3, 212, 246–247 challenges of general surgery and need for value-driven solutions, 2 clinical findings, 10–12 colorectal disease, 12 cost considerations, 12–13 gynecologic procedures, 10–11 inguinal hernia repair, 12 four-arm setup, 5f fully reusable surgical instruments, 8t labeled procedures for, 7t operating room setup with, 3f procedure planning, 9 robotic-assisted digital laparoscopy, 2–8 surgical equipment with, 4t three-arm setup, 5f training session overview, 10t user training, 8–9, 9f Sensei X platform, 248–249 Sensing modalities, 412 systems, 251–253, 252f, 287–288, 498–499 Sensor data level, 292 fusion of camera image and ultrasound guidance, 510 technology for medical robotics, 503–504 Sensorized micromanipulation aided robotic-surgery tools (SMART), 654, 658, 662 Sensorized surgical instrument, 273–274, 275t, 276–277, 277f Sensory feedback, 325–326 Sensory substitution, 286–287 Sensory unit, 289–292 Separate paths, 22 Sequential optimization (SO), 34–35 7D Surgical Machine-vision IGS system (7D MvIGS system), 552, 555f, 561f clinical case studies with, 562–571 cervical fusion, 566–569 cervical fusion with radiofrequency ablation and vertebroplasty, 566 left temporal open biopsy, 569–571


7D Surgical Machine-vision IGS system (7D MvIGS system) (Continued) revision instrumented posterior lumbar fusion L3–L5, 562–563 revision instrumented posterior lumbar fusion L4–S1, 563–565 future, 571–572 motivation and benefits, 555–558 complex workflow and long learning curve, 556 exposure to intraoperative ionizing radiation, 558 extended surgical time due to workflow disruptions, 556–557 large device footprint, 558 line-of-sight issues for optical trackers, 557 requiring nonsterile user assistance, 557–558 trackable surgical tools, 558f technical aspects, 559–561 cranial software, 561f Flash Registration, 561 hardware components, 559, 559f workflow, 559–560 7D Surgical’s Flash Align, 572 Sexual function, 188–189 “Shape-lock” function, 305–306 Shape memory alloy (SMA), 377 Shared control, 364, 367–369 active assistance, 368 haptic assistance, 368–369 simple motion compensation, 368 Shear force feedback system, 297–298, 299f Shear-sensing mechanism, 292, 292f SHER. See Steady-Hand Eye Robot (SHER) Shifting angle, 366–367 Signal processing unit, 287, 292–294 Silicone grease, 383 simLab board (Zeltom LLC), 698 SimNow, 52 Simple motion compensation, 368 Simultaneous localization and mapping (SLAM), 694 Sina Robotic Telesurgery System, 108–109 challenges and future directions, 119–120 laparoscopic surgery methods, 108f milestones, 110f system overview, 109–118 Sinaflex model, 109, 113–118 7 DoFs surgical robot, 117f configurations, 117f master robotic surgery console, 114f reconfiguration of master robotic surgery console, 115f robotic telesurgery system, 114f slave surgical robotic subsystem, 116f technical points, 115t, 118t Sinastraight model, 109–113 subsystems, 111f surgeon’s console, 111f surgical robotic arm, 112f technical points, 113t Single Access and Transluminal Robotic Assistant for Surgeons, 308

Single port and Transluminal Robotic Assistant for Surgeons (STRAS), 124, 128f, 129f actuation technology, measurement systems, and calibration methods, 129t Anubiscope platform, 126–127 context of intraluminal surgery in digestive tract, 124 current developments and future work, 141–143 mechatronic design, 127–131 control and software architecture, 136–137 control of instruments, 134 control of main endoscope, 134–136 control of robot by users, 133–136 dedicated master interfaces, 134, 135f, 135t features of slave system, 131–133 modules, 128–131 rationale for robotization, 127–128 robot calibration and working modes, 137–138 ranges, velocities, and forces for distal side, 130t technical advances in intraluminal surgery, 124–126 in vivo use of system, 138–141 change of instruments, 139–140 feasibility and interest, 140–141 workflow, 138–140 Single Port Orifice Robotic Technology robotic system (SPORT robotic system), 156–157 Single robot position planning algorithm, 612, 613f Single-acting cylinder design, 381, 382f Single-photon emission CT (SPECT), 576–577 Single-port access (SPA), 125 Single-port surgery (SPS), 156–157, 306 Single-stage acetabular reaming, 405 Single-threaded approach, 292 Single-use cassette, 348 Singularity analysis, 519, 521f singularity effects on dynamic responses of mechanisms, 522f effects on actuator forces and torques, 519–523 6 degrees of freedom (6DOF), 17 6D skull tracking, 27–28 Skew-symmetric matrix, 317, 358 Skin deformation, 288 SLAM. See Simultaneous localization and mapping (SLAM) Slave features of slave system, 131–133 manipulator, 656 Sleeve gastrectomy, 213, 213f robotic, 214–215 Sleeve screw, 91, 93f Sliding window-based techniques, 230–231 Slot-scanning weight-bearing technology, benefits of, 531–532, 531f SMA. See Shape memory alloy (SMA)

SMART. See Sensorized micromanipulation aided robotic-surgery tools (SMART) Smart composites in robotic catheter for targeted laser therapy clinical motivation, 327 robotic catheter, 327–330 SmartClamp, 48 Smooth projection algorithm, 687 SMOS. See Stereotaxical microtelemanipulator for ocular surgery (SMOS) SO. See Sequential optimization (SO) Soft biomedical robots, 331 Soft-foldable actuators and sensors, 331 endoscopic arm, 331–335 clinical motivation, 330–331 Soft-tissue envelope, 460 tension, 460 tracking, 231 SOI. See Structure of interest (SOI) Solid organ surgery minimally invasive liver resection, 219f procedure background, 219 robotic hepatectomy, 219 Solo surgery, 80 VIKY for, 84f, 85f Soloassist system, 90 clinical experience and discussion, 103–104 control panel, 92f history, 90 installation, 95–103 movement control, 95f range of movement, 93f Soloassist I, 91f Soloassist II, 90–95, 92f joystick, 91–95 structure, 90–91 Sonification, 649–650 Sonomicrometry system, 365 Soterial Remote Controlled Manipulator, 377, 378f Source-to-axis distance (SAD), 22 SP mechanism. See Spherical parallel mechanism (SP mechanism) SPA. See Single-port access (SPA) Space vector pulse width modulation (SVPWM), 360–361 SPECT. See Single-photon emission CT (SPECT) Spectral-domain optical coherence tomography (SD OCT), 635, 636f axial resolution of, 637–638 imaging depth of, 638 lateral resolution of, 638 sensitivity, 639 Spectrometer-based FD OCT, 636–637 Spherical parallel mechanism (SP mechanism), 266–267 Spherical-prism-general robots, 266–267 Sphincter mechanism, 217 SPIDER2 manipulator, 500 Spinal surgery, 420


Spine, 538 SpineEOS, 538–542, 540f, 541f SPORT robotic system. See Single Port Orifice Robotic Technology robotic system (SPORT robotic system) SPS. See Single-port surgery (SPS) SRI. See Stanford Research Institute (SRI) SRS. See Stereotactic radiosurgery (SRS) SS OCT. See Swept-source OCT (SS OCT) SSESR. See Scorpion Shaped Endoscopic Surgical Robot (SSESR) Standard fluoroscopy, 553 Standby mode, 138 Stanford Research Institute (SRI), 40 Stapler, 47–48 STARE dataset, 648 State-of-the-art robotic systems, 650–657 clinical use cases, 656 common mechatronic concepts, 650–653 cooperative-control systems, 654 general considerations with respect to safety and usability, 656–657 handheld systems, 654 robotic retinal surgery platforms, 652f systems for robotic retinal surgery, 651t teleoperated systems, 655–656 untethered “microrobots”, 656 Steady-Hand Eye Robot (SHER), 654 Steerable robotic tools for arthroscopy cadaver experiment to evaluate steerable arthroscope, 499f evaluation, 499–500 human–robot interaction, 498 mechanical design, 496–498, 497f, 497t modeling, 498 reasons for using, 495–496 sensing, 498–499 Stenosed arteries, 342 Stent deployment unit (SDU), 645 Stepper motors, 378, 382–385 curved, 384, 385f direct current control, 361 dual-speed, 384–385, 385f evaluation, 391–395 accuracy, 393 force, 391–392 stepping frequency, 392–393 Stormram 4 evaluation, 393–395 two-cylinder stepper motor design, 383 Stereo correspondence, 229–230, 229f Stereo imaging validation in knee arthroscopy, 504–506, 505f Stereo microscope, 632–633 SterEOS femoral frame, 536f femur and tibia 3D modeling, 536f patient frame, 535f pelvis 3D orientation modeling in, 535f postural assessment, 540f software, 533 spine 3D modeling, 539f workstation, 530 Stereotactic ablative radiotherapy (SABR), 16

Stereotactic body radiation therapy (SBRT), 16 Stereotactic neurosurgery, 586, 600 Stereotactic radiosurgery (SRS), 16 Stereotaxical microtelemanipulator for ocular surgery (SMOS), 655 Stereotaxy, 588 Sterile drape, 416 field control, 414 STM32F103 controller chip, 276–277 Stormram 1–4, 379–380 Stormram 4 evaluation, 393–395 accuracy experiment in free air, 394f measurement setup of, 394f Strain gauges, 499 STRAS. See Single port and Transluminal Robotic Assistant for Surgeons (STRAS) Stress, 678–679, 679f Strip-wise affine map (SWAM), 366–367 Structure of interest (SOI), 225–227 manual intraoperative selection, 227 semiautomatic preoperative identification, 225–227 Stryker, 500 Subcostal trocar, 95–96, 96f Subcutaneous emphysema, 175–176 Subretinal injection, 631 Subsurface imaging, 647–648 Sum of ADs (SAD), 230 Sunram 5, 379–380 design, 386–389 kinematic configuration, 386–387, 386f mechanical design, 387–389 with user interface, 390f Surface rendering, 226 Surface-based approach, 226–227 Surgeon console, 41, 43f Surgical data science (SDS), 227, 664 Surgical manipulator, 274–276, 276f D–H parameters of manipulator, 268t kinematic analysis, 268–269 workspace optimization, 269–273 Surgical navigation systems, 412, 694 Surgical practice, 249 endovascular, 249 general surgery, 249 neurosurgery, 249 Surgical robotics, 243–245, 245t, 694 Surgical robots, 109 MRI compatibility of, 376 Surgical team, 8–9 Surgical trainees, 59 Surgical variability, 2 Surgical workflow, 470–471 Suture Quill, 188 SV. See Safety volume (SV) SVPWM. See Space vector pulse width modulation (SVPWM) SVs. See Seminal vesicles (SVs) SWAM. See Strip-wise affine map (SWAM) Swept-source OCT (SS OCT), 636 Synchrony Respiratory Motion Tracking System, 29


T
T-26 stepper motor, 391, 392f T/R modules, 128, 130f, 131 Tactile feedback, 287–288, 288f sensations, 240–241 Tan’s algorithm, 648 TAPP inguinal hernia repairs. See Transabdominal preperitoneal (TAPP) inguinal hernia repairs TAR. See Transversus abdominis release (TAR) Target localization and tracking methods for treatment delivery, 27–31 real-time respiratory motion tracking, 29–31 registration of live X-ray images and digitally reconstructed radiographs, 27–29 Target registration errors (TREs), 616 TaTME technique. See Transanal total mesorectal excision technique (TaTME technique) TCP. See Tool center point (TCP) TD OCT. See Time-domain OCT (TD OCT) Tekscan FlexiForce B201 piezoresistive force sensors, 289, 290f Telelap ALF-X robotic device, 157 Telemanipulation, 41–42, 125–126 Teleoperation, 41–42 activation, 138 mode, 138 robotic surgery systems, 109, 242–243, 244f systems, 655–656 closed-loop control for, 659–660 Telesurgery, 109 Tendon-driven actuation mechanism, 276 Tendon-sheath mechanism (TSM), 305–306 Testosterone, 172 THAs. See Total hip arthroplasties (THAs) Therapy delivery methods, 663–664 ThermoCool SmartTouch ablation catheter, 248–249 Thoracic surgery, 197 Thoracoscopic approach, 197 Thoracoscopic esophageal resection, 102–103, 103f Three-axis manipulating force, 280–281, 281f Three-cylinder stepper motor design, 382–383 Three-dimension (3D), 324, 600 coordinates, 586 CT, 17–19 image-guided surgery system, 576–579 augmented reality based three-dimensional image guided surgery system, 577–579 integration of diagnosis and treatment in minimally invasive surgery, 582 intraoperative patient three-dimensional image registration, 579 planning and operation, 580–581 real-time accurate three-dimensional image rendering, 580 robot-assisted operation, 581 3D image acquisition, 576–577


Three-dimension (3D) (Continued) 3D stereoscopic and autostereoscopic displays, 577 images, 305, 576 laparoscopic optics, 160 modeling, 534, 534f optical tracking technology, 460–461 overbent rod, 538, 541f printing techniques, 380 3D-printed mounting component, 292 3D-printed process, 289 sensor mounting component, 291f ultrasound-guided motion compensation, 365 video, 40 virtual models, 224 visualization, 148 Three-DoF Cartesian robots, 655 Three-prismatic-revolute-spherical parallel mechanism (3-PRS parallel mechanism), 697 Three-prismatic-universal mechanism (3-PU mechanism), 697 Thulium:yttrium-aluminum-garnet, 327 Tibia, 535 Tibial tracking array, 470–471 TilePro, 47 Time-domain OCT (TD OCT), 635 Tiny Titan ventilation tube, 675, 675f, 690f Tissue interaction of da Vinci Surgical System, 47–50 integrated table motion, 48–50 stapler, 47–48 vessel sealer, 48 3D reconstruction, 233 tracking, 230–231, 233 Tissue trauma, minimizing, 253 Titanium, 588 TKA. See Total knee arthroplasty (TKA) TM. See Tympanic membrane (TM) TME. See Total mesorectal excision (TME) Tomographic imaging, 46–47 Tonsillectomy using robotic system, 699, 700f Tool center point (TCP), 369–370, 604 Tool mounting calibration, 23–24 Tool tracking, 648–649 modality-specific instrument tracking approaches, 649f one frame taken from Lumera 700 with integrated iOCT-Rescan 700, 649f TORS. See Transoral robotic surgery (TORS) Total abdominal colectomy, 165–166 Total hip arthroplasties (THAs), 398–399, 412, 537 Total knee arthroplasty (TKA), 398–401, 420, 427–431, 438, 444, 460, 476, 538. See also Unicompartmental knee arthroplasty (UKA) bone cuts navigation, 429–430 final prosthesis implantation, 431, 432f ligament balance, 431, 432f navigation of femoro-tibial mechanical angle, 429 rotation of femoral implant, 430

trial prosthesis implantation, 430 UKA to TKA revision, 436–439 Total mesorectal excision (TME), 154–155, 160, 219 Total system error (TSE), 20 TP approach. See Transperitoneal approach (TP approach) TPS. See Treatment planning software (TPS) Tracker, 418 with retroreflective spheres, 419f Tracking-based algorithms, 230, 233 Traditional laparoscopy, 2 Traditional serial-link robots, 496 Transabdominal preperitoneal (TAPP) inguinal hernia repairs, 5 Transanal surgery, 156–157 Transanal total mesorectal excision technique (TaTME technique), 149, 153 Transection of liver, 201 Transformation, 350 matrix, 315 Transitional zone, 173 Transluminal process, 305 Transoral endoscopy, 694 Transoral robotic surgery (TORS), 694 requirements, 694–695 Transparency, 243 Transperitoneal approach (TP approach), 175–176, 183, 188 “TransPort”, endoscopic platform, 305–306, 319 Transurethral laser-assisted surgery, 324 Transurethral resection of prostate (TURP), 324 Transversus abdominis release (TAR), 215 Treated volume (TV), 16 Treatment head, 25–27, 26f manipulator, 20 paths, 22–23 planning, 17–19, 19f image registration and segmentation algorithms, 31–33 radiation dose calculation and optimization algorithms, 33–35 workspace calibration, 21–22 Treatment planning software (TPS), 17–19, 23 Trendelenburg position, 175–177, 182 TREs. See Target registration errors (TREs) Trial prosthesis implantation, 430 Trial reduction, 454–456 Trial-and-error approach, 516 Triangulation, 230 Trocar point, 91–95, 95f Trolley, 91, 94f TruSystem 7000dV, 48–49 TSE. See Total system error (TSE) TSM. See Tendon-sheath mechanism (TSM) TSolution One, 452 Tube insertion, 675, 680, 685 TURP. See Transurethral resection of prostate (TURP) TV. See Treated volume (TV)

Two-cylinder stepper motor design, 382–383, 384f Two-dimension (2D) EOS examination of legs, 535, 536f images, 501 screen, 576 video, 40 Tyler Coye algorithm, 648 Tympanic membrane (TM), 674 Tympanostomy tube. See Ventilation tube (VT)

U
UDP. See User Datagram Protocol (UDP) UHMWPE. See Ultrahigh-molecular-weight polyethylene (UHMWPE) UKA. See Unicompartmental knee arthroplasty (UKA) Ultrahigh-molecular-weight polyethylene (UHMWPE), 476 Ultrasonic motors (USM), 377, 676–677 control scheme for, 686f parameter estimation, 684–685 system description, 683 system modeling, 684 Ultrasound (US), 47, 227, 376, 577 guidance and tissue characterization, 508–509 imaging systems, 495 for knee automatic and semiautomatic segmentation and tracking, 506–507 ultrasound-guided interventions, 507 ultrasound-guided robotic procedures, 507 sensor fusion of ultrasound guidance, 510 ultrasound-guided knee arthroscopy ultrasound-based navigation, 506 Undercorrection, 476–477 Undistortion, 229, 229f Uni knee arthroplasty (UKA). See Unicompartmental knee arthroplasty (UKA) Unicompartmental knee arthroplasty (UKA), 398–400, 434–436, 438–439, 444, 476. See also Total knee arthroplasty (TKA) challenges with manual surgery, 476–477 limb alignment/component positioning, 476–477 fixation of tibial cutting guide for, 437f insertion of distal cutting guide for, 437f palpating of intercondylar eminence for, 436f preoperative preparation/operative setup/surgical technique AP and lateral views, 490f bone preparation, 485–488 bone registration, 483–485, 483f case completion, 488 kinematic analysis, 488 Mako System application, 488–489 patient positioning, 479 preoperative, 491f RESTORIS partial knee application, 478–479


robotic surgery experience, 477–478 to TKA revision, 436–438 loosening of UKA revised by computer-assisted TKA, 438f Unicondylar arthroplasty, 478 Universal joint, 91 Universal serial bus (USB), 275–276 Unsupervised learning, 511 Untethered “microrobots”, 656 UPU robot. See Prisms-universal robot (UPU robot) Urethral incision, 188, 189f Urethra sphincter complex, 188 Urethrovesical anastomosis, 188, 191f Urinary continence, 173 and sexual rehabilitation, 190 sphincter activity, 173 tract, 304 Urology, 197, 324–325 laser-assisted transurethral surgical procedure of BPH, 325f US. See Ultrasound (US) US Food and Drug Administration (FDA), 40–41, 80, 148 FDA-cleared OMNIBotics system, 468–469 USB. See Universal serial bus (USB) User Datagram Protocol (UDP), 278–279 USGI Medical, 305 USM. See Ultrasonic motors (USM)

V
VA. See Veil of Aphrodite (VA) Vacuum-assisted breast biopsy system (VABB system), 376 Validated tracking volume, 418, 419f Validation studies artificial palpation, 298 knot tying, 298–299 reduction in grip forces, 294–297 visual perceptual mismatch, 297–298 Value-based healthcare models, 2, 412 Vascular catheters, 249 Veil of Aphrodite (VA), 179 Velocity-limiting function, 658, 658f Ventilation tube (VT), 674 Ventilation tube applicator (VTA), 681f challenges, 675–676 diversity, 676 operation time, 675 precision and repeatability, 676 space and accessibility, 675 experimental results, 689–690, 690t experimental setup, 689, 689f first-generation VTA with industrial design, 690f mechanical system, 676–680, 677f mechanical structure, 676–677 mechanism for cutter retraction, 679–680, 680f

tool set, 677–679 motion control system, 682–689 control scheme, 685–689 system identification, 683–685 objectives, 674–675 sensing system, 680–682 force sensor, 682 force-based supervisory controller, 681–682, 681f working process, 680–681, 681f system architecture and organization, 676, 676f Ventral hernias, 215, 215f Vesicourethral anastomosis, 182 Vessel enhancement, 648 sealer, 48, 49f VF. See Virtual fixture (VF) ViaCath instruments, 308–309, 309f Vibration motors installed on 3D-printed pneumatic actuators, 294, 296f VIKY. See Vision Kontrol for endoscopY (VIKY) Virtual fixture (VF), 243, 368–369 Virtual fluoroscopy. See Fluoroscopy-based IGS systems Virtual Incision robot, 157 Virtual reality, 224 Virtual sonography, 506–507 Virtual Surgical Planning (VSP), 205 Vision cart, 41 Vision Kontrol for endoscopY (VIKY), 80–83 adapters, 83 advantages and disadvantages, 83–86, 86t arm and clamp, 81, 82f background and history of surgical robots, 80 control interfaces, 83, 83f control unit, 81, 82f current clinical applications and data, 86–87, 86t driver, 81–82, 82f endoscope, 82f, 83f for solo surgery, 84f, 85f system overview, 80–81 voice control commands, 84f Vision-guided operation with steerable robotic tools, 510–511 Visual cart, 177 Visual information, 501 Visual modality, 649 Visual servoing, 510–511, 648–649, 661 frameworks, 664 techniques, 499 Visual synchronization, 364 Visualization, 628 of da Vinci Surgical System, 45–47 fluorescence imaging, 45–46 tomographic imaging, 46–47


Fourier domain optical coherence tomography principle, 636–639 high-speed optical coherence tomography, 639 through operative stereo microscopy, 632–634 additional imaging, 634 additional lenses, 633 diagnostic imaging modalities, 635f light sources, 633–634 stereo microscope, 632–633, 633f real-time optical coherence tomography for retinal surgery, 634–636 in retinal surgery, 632–639 Visual perceptual mismatch, 297–298, 298f, 299f VOI. See Volume of interest (VOI) Voice coil actuators, 254–255 VOLO, 34–35 Volume rendering, 226 volume-based approach, 227 Volume of interest (VOI), 28 Volumetric X-ray systems, 508 VSP. See Virtual Surgical Planning (VSP) VT. See Ventilation tube (VT)

W Weber Fechner Law of perception, 241 WHO. See World Health Organization (WHO) Wide-open three-legged parallel mechanism, 516 Wire-driven manipulators, 309 Wireless microphones, 83 Workspace, 357 analysis, 354 357 optimization, 269 273 Jacobian analysis, 271 273 mechanism of isotropy, 273 World Health Organization (WHO), 225

X
X-ray emission, 343–344 Xchange table, 23–24 Xsight lung tracking system, 28–29 spine tracking system, 28

Y
Yaw, 48

Z
Zero sequence component voltage, 361 Zeus surgical system, 41 Ziehm 3D C-arm, 553