Human Inspired Dexterity in Robotic Manipulation 0128133856, 9780128133859

Human Inspired Dexterity in Robotic Manipulation provides up-to-date research and information on how to imitate humans a


English Pages 218 [207] Year 2018


Table of contents:
Cover
HUMAN INSPIRED DEXTERITY IN ROBOTIC MANIPULATION
Copyright
Contributors
1
Background: Dexterity in Robotic Manipulation by Imitating Human Beings
Contents
Background
Complemental Information
Statistically Significant Difference
State Space Representation
Mechanical Impedance
Fundamental Grasping Style
Kinematics and Statics of Robots
Dynamics of Robots
References
Section 1
2
Digital Hand: Interface Between the Robot Hand and Human Hand
Introduction
Structure of the Human Hand and Digital Hand
Anatomy of the Human Hand
Link and Joint of the Digital Hand
Surface Mesh of a Digital Hand
Digital-Hand Model With Size Variations
Individual Digital-Hand Model
Representative Digital Hand Models
Analysis by Digital-Hand Model
Motion Analysis and Posture Synthesis
Mechanical Analysis
Conclusion
References
3
Sensorimotor Learning of Dexterous Manipulation
Introduction
Learning Manipulation: Theoretical and Experimental Evidence
Background
Interaction Between Multiple Sensorimotor Processes Underlies Learning Dexterous Manipulation
Subjects and Apparatus
Protocols
Data Analysis
Model and Simulation
Results
Discussion
Lessons Learned From Human Data and Potential Applications to Robotic Dexterous Manipulation
Conclusions
References
Further Reading
4
Intuitive Control in Robotic Manipulation
Introduction
Multisensory Integration and Illusion
Robotically Enhanced Illusion
Multisensory Illusion on Master-Slave System
Conclusions
References
5
Modeling and Human Performance in Manipulating Parallel Flexible Objects
Introduction
Minimum Hand-Jerk Model
Minimum Hand-Force-Change Model
Hand Mass Identification
Experimental Results
Hand Mass Identification
Analysis of Reaching Movements
Discussion
Conclusions
References
Section 2
6
Approaching Human Hand Dexterity Through Highly Biomimetic Design
Introduction
Related Work and Motivation
Anthropomorphic Robotic/Prosthetic Hands
Design Tools for Medicine and Biology Research
The Important Anatomy of the Human Hand
Bones and Joints
Joint Ligaments
Flexor and Extensor Tendons
Tendon Sheaths
Biological Joints
Development of the Highly Biomimetic Robotic Hand
The Rapid Prototyping Process of Our Biomimetic Robotic Hand
Teleoperation of Our Proposed Robotic Hand
Performance of the Biomimetic Robotic Hand
Fingertip Trajectories
Object Grasping and Manipulation
Conclusion
References
Further Reading
7
Hand Design-Hybrid Soft and Hard Structures Based on Human Fingertips for Dexterity
Introduction
Fluid Fingertip
Basic Structure of Fluid Fingertip
Stiffness of Fluid Fingertip
Uniform Contact Pressure of Fluid Fingertip
Contact With a Flat Surface
Contact With a Line Surface
Grasping Test
Prevention of Fracture on Ductile and Fragile Objects (Soft Tofu) Based on Pressure Profile
Basic Concept for Fracture Avoidance and Applicable Objects
Methodology for Detecting Linear Compression Behavior for Fracture Avoidance [3]
Compression Test With a Fluid Fingertip
Step 1: Detect the Point Where Fluid Pressure Starts Increasing
Step 2: Detect the Points Where Compression Behavior Is Linear
Experimental Validation
Versatile Grasping by the Fluid-Fingertip Embedded Gripper [26]
Underactuated Soft Gripper Utilizing Fluid Fingertips
Parallel Gripper Mode; Fig. 7.28A
Pinching Mode; Fig. 7.28B
Enveloping Mode; Fig. 7.28C
Experimental Validation of Versatile Grasping
Conclusions
References
Further Reading
8
Dynamic Manipulation Based on Thumb Opposability: Passivity-Based Blind Grasping and Manipulation
Chapter Points
Introduction
Structure of Hand Control Systems
Modeling and Dynamic Control of Precision Grasp
Contact Models
Dynamics and Equilibrium Points of a Finger-Object System
Blind Grasp Based on Thumb Opposability
Superposition of Lower Priority Tasks to Stable Grasping
Supervisory Control and Autonomy of Grasp and Manipulation
Discussion
References
9
A Grasping and Manipulation Scheme That Is Robust Against Time Delays of Sensing Information: An Application o ...
Introduction
Multifingered Hand-Arm System Grasping an Object
Virtual-Object Frame
Control Law Based on Finger-Thumb Opposability
Time Delay
Control Input
Conditions for Convergence of a Desired State
Numerical Simulation
Experiments
Conclusion
References
Further Reading
10
Planning Dexterous Dual-Arm Manipulation
Introduction
Definitions
Notation
Grasp Planner
Object Placement Planner
Manipulation Paths
Manipulation Planner
Graph Structure
Graph Components
Initial Object Placement
Intermediate Object Placement
Target Object Placement
Component Connection
Type2 Path
Transfer Path
Transit Path
Path Generation
Path Optimization
Examples
Conclusions
References
Further Reading
11
Prospects of Further Dexterity in Robotic Manipulation by Imitating Human Beings
Summary
Index
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
R
S
T
U
V
W
Back Cover



HUMAN INSPIRED DEXTERITY IN ROBOTIC MANIPULATION

Edited by
TETSUYOU WATANABE
KENSUKE HARADA
MITSUNORI TADA

Academic Press is an imprint of Elsevier 125 London Wall, London EC2Y 5AS, United Kingdom 525 B Street, Suite 1650, San Diego, CA 92101, United States 50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom © 2018 Elsevier Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions. This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility. To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein. Library of Congress Cataloging-in-Publication Data A catalog record for this book is available from the Library of Congress British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library ISBN 978-0-12-813385-9 For information on all Academic Press publications visit our website at https://www.elsevier.com/books-and-journals

Publisher: Mara Conner Acquisition Editor: Sonnini R. Yura Editorial Project Manager: Thomas Van Der Ploeg Production Project Manager: Vijay Bharath Cover Designer: Matthew Limbert Typeset by SPi Global, India

CONTRIBUTORS

Jumpei Arata, Department of Mechanical Engineering, Faculty of Engineering, Kyushu University, Fukuoka, Japan
Ji-Hun Bae, Robotics R&D Group, Korea Institute of Industrial Technology, Cheonan, South Korea
Qiushi Fu, Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL, United States
Igor Goncharenko, College of Information Science and Engineering, Ritsumeikan University, Kusatsu, Japan
Kensuke Harada, Graduate School of Engineering Science, Osaka University, Toyonaka, Japan
Akihiro Kawamura, Department of Advanced Information Technology, Kyushu University, Fukuoka, Japan
Makiko Kouchi, Human Informatics Research Institute, National Institute of Advanced Industrial Science and Technology, Tokyo, Japan
Victor Kryssanov, College of Information Science and Engineering, Ritsumeikan University, Kusatsu, Japan
Ryuta Ozawa, Department of Mechanical Engineering Informatics, Meiji University, Kawasaki, Japan
Marco Santello, School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, United States
Mikhail Svinin, College of Information Science and Engineering, Ritsumeikan University, Kusatsu, Japan
Mitsunori Tada, Human Informatics Research Institute, National Institute of Advanced Industrial Science and Technology, Tokyo, Japan
Kenji Tahara, Department of Mechanical Engineering, Kyushu University, Fukuoka, Japan
Tetsuyou Watanabe, Faculty of Mechanical Engineering, Institute of Science and Engineering, Kanazawa University, Kanazawa, Japan


Zhe Xu, Department of Mechanical Engineering, Yale University, New Haven, CT, United States
Motoji Yamamoto, Department of Mechanical Engineering, Kyushu University, Fukuoka, Japan

CHAPTER 1

Background: Dexterity in Robotic Manipulation by Imitating Human Beings

Tetsuyou Watanabe

Faculty of Mechanical Engineering, Institute of Science and Engineering, Kanazawa University, Kanazawa, Japan

Contents
1.1 Background
1.2 Complemental Information
1.2.1 Statistically Significant Difference
1.2.2 State Space Representation
1.2.3 Mechanical Impedance
1.2.4 Fundamental Grasping Style
1.2.5 Kinematics and Statics of Robots
1.2.6 Dynamics of Robots
References

1.1 BACKGROUND
Human beings have long served as a kind of textbook for constructing robots, because robots perform tasks that humans do in support of human beings. However, much about human beings remains unrevealed. Even when knowledge is available, how to utilize it during robot development is decided case by case, and there are no general methodologies for effectively transferring that knowledge into the development process. It is impossible to deal with all aspects of human beings. This text focuses on dexterity: it aims to understand how human beings acquire dexterity in object manipulation, discusses the possibility of applying that dexterity in robotic systems, and draws key strategies for dealing with robotic dexterous manipulation in the next generation. Here, we focus on up-to-date research about human functions and on methods for transferring those functions to robotic manipulation.


Human functions: The level of dexterous manipulation achieved by robots is currently far from that of human beings. What can improve the ability of robots? One hint might be to understand the approach that human beings take in acquiring dexterity. For example, suppose the task is to grasp and manipulate an unknown object, whose shape, softness, weight, and center of gravity are all unknown. How do human beings identify these physical parameters, and how do they use the identified parameters to complete dexterous manipulation? How do human beings learn such procedures? The identification and learning processes of human beings could provide valuable insights for constructing dexterous robotic manipulation. The structure of the human finger and hand, acquired in the process of evolution, also plays an important role in dexterous manipulation. The structures that are key to human dexterity are likewise key to robotic hand design.

Method for transferring human functions to robotic manipulation: It is fundamentally difficult to transfer human manipulation techniques or functions to robotic manipulation because of the different structures and system architectures. One approach might be to imitate or embed key functions and components in robots one by one. Recent attempts to transfer such techniques or functions could give us deep insights into realizing dexterous robotic manipulation. Such attempts could lead to new strategies for the design, control, and planning of robotic manipulation, even though the transferred functions do not perfectly coincide with those of human beings.

This text consolidates recent approaches from both viewpoints to accelerate the next developments in the dexterous manipulation of robots. To facilitate this understanding, there are two separate sections corresponding to the two viewpoints: the first focuses on human functions, while the second focuses on transferring those functions to robots. Chapter 2 provides recent revelations in hand anatomy that underlie human functions for dexterous manipulation; newly discovered functions give us new viewpoints for constructing robotic manipulation. The concept of muscle synergy has been utilized for controlling robotic hands, but such synergies do not always correspond to those found in human or animal evolution. Chapter 3 presents how human beings learn dexterous manipulation in the context of sensorimotor functions; the actual functions of human beings give us different insights into understanding dexterous manipulation. Chapter 4 presents a trial in which a multisensory illusion is elicited in a surgical robotic system to enhance the dexterity of its control. A hint of how features of human behavior can be embedded in the system


can be obtained. Chapter 5 examines human reaching behavior when manipulating parallel flexible objects and shows that the optimal hand trajectory is composed of a fifth-order polynomial (as in the classic minimum-jerk model) and trigonometric terms that depend on the natural frequencies of the system and the movement time. The second section provides recent results on design (Chapters 6 and 7), control (Chapters 8 and 9), and planning (Chapter 10) for dexterous robotic manipulation with human functions in mind. Chapter 6 presents a novel anthropomorphic robotic hand design imitating the salient features of the human hand. Chapter 7 presents a novel fingertip design imitating the structure of human fingers, together with a robotic hand equipped with the fingertip. Both designs give hints for constructing robotic hands that exploit human hand features. Based on the human function of thumb opposability, Chapter 8 presents a control scheme utilizing the concept of passivity. Chapter 9 presents a controller that accounts for the difference in sensing timing between visual sensors (low sampling rate) and joint sensors (high sampling rate). Chapter 10 presents a planning methodology for manipulating objects with two arms, as human beings do.

1.2 COMPLEMENTAL INFORMATION
To make the following chapters easier to understand, complemental information is provided here.

1.2.1 Statistically Significant Difference
When examining the differences between two groups, statistical hypothesis testing is used. Both groups are assumed to follow Student's t-distribution. The test outputs the P-value (probability value), which indicates the probability, under the null hypothesis, that the elements of the two groups belong to the same distribution. If the P-value is less than the given level of significance (for example, 0.05 or 5%), the null hypothesis is rejected, and it can be said that there is a statistically significant difference between the two groups. This analysis is valid for two groups; if there are more than two target groups, a post hoc test is performed instead. Table 1.1 summarizes which test should be performed in each case. For details, please see textbooks on statistics, for example [1].


Table 1.1 Which statistical test should be performed in each case

Name of test      Target comparison      Number of elements    Assumed distribution   Assumed variance
                                         in each group         in each group          in each group
T-test            Two groups             No limitation         Nothing                Nothing
Tukey-Kramer      Pairwise differences   No limitation         Normal                 Homogeneity
Bonferroni/Dunn   Pairwise differences   Same                  Normal                 Homogeneity
Scheffe           No limitation          No limitation         Nothing                Nothing
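
To make the testing procedure concrete, the following is a minimal sketch of a two-group comparison using SciPy's independent-samples t-test; the sample data and the 0.05 significance level are illustrative assumptions, not values from the text.

```python
import numpy as np
from scipy import stats

# Two hypothetical measurement groups (e.g., task-completion times in seconds)
rng = np.random.default_rng(0)
group_a = rng.normal(loc=10.0, scale=2.0, size=30)
group_b = rng.normal(loc=11.5, scale=2.0, size=30)

# Independent-samples t-test: the P-value is compared against the chosen
# level of significance (0.05 here, as in the example above)
t_stat, p_value = stats.ttest_ind(group_a, group_b)
alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: statistically significant difference")
else:
    print(f"p = {p_value:.4f} >= {alpha}: no significant difference detected")
```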

1.2.2 State Space Representation
State space representation models a system as a first-order differential equation in the input (u), output (y), and state (x). If the system is linear, the state and observation equations are respectively

\dot{x} = Ax + Bu
y = Cx + Du          (1.1)

where A, B, C, and D are matrices. It should be noted that D = 0 in most cases because it is the feedthrough term. If the system is nonlinear, the state and observation equations are respectively

\dot{x} = f(x, u)
y = g(x, u)          (1.2)

Here, one simple example is shown: the mass-damper-spring model illustrated in Fig. 1.1. Let x be the state (the displacement), m the mass, d the damping coefficient, k the spring coefficient, and f the applied force. The equation of motion is then represented by

m\ddot{x} = -kx - d\dot{x} + f          (1.3)

Fig. 1.1 Model of mass, damper, and spring.

Taking f as the input, Eq. (1.3) can be written in state-equation form as follows:

\begin{bmatrix} \dot{x} \\ \ddot{x} \end{bmatrix} =
\begin{bmatrix} 0 & 1 \\ -k/m & -d/m \end{bmatrix}
\begin{bmatrix} x \\ \dot{x} \end{bmatrix} +
\begin{bmatrix} 0 \\ 1/m \end{bmatrix} f          (1.4)
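
As an illustration of Eqs. (1.3) and (1.4), the sketch below builds the state-space matrices for the mass-damper-spring system and integrates them with a simple forward-Euler loop. The parameter values, step size, and forcing are arbitrary assumptions chosen for the example.

```python
import numpy as np

# Mass-damper-spring parameters (illustrative values)
m, d, k = 1.0, 0.5, 10.0

# State-space matrices from Eq. (1.4); the state is [x, x_dot]
A = np.array([[0.0, 1.0],
              [-k / m, -d / m]])
B = np.array([[0.0],
              [1.0 / m]])

def simulate(f, x0, dt=1e-3, steps=5000):
    """Integrate x_dot = A x + B f with forward Euler."""
    x = np.asarray(x0, dtype=float).reshape(2, 1)
    for _ in range(steps):
        x = x + dt * (A @ x + B * f)
    return x.ravel()

# With a constant force f = 1 N, the position settles near f/k = 0.1 m
print(simulate(f=1.0, x0=[0.0, 0.0]))
```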

1.2.3 Mechanical Impedance
The concept of mechanical impedance came from that of electrical impedance; it represents the dynamics of a system by the mass-damper-spring model of Eq. (1.3). Rearranging Eq. (1.3) into a relationship between force (left side) and motion (right side), the mechanical impedance can be represented by Eq. (1.5). For more details, please see robotics textbooks such as [2].

f = m\ddot{x} + d\dot{x} + kx          (1.5)

1.2.4 Fundamental Grasping Style
There are several grasping styles, but the most popular and fundamental ones are the precision grasp and the power grasp, illustrated in Fig. 1.2. In the precision grasp, only the fingertip areas are used for grasping, and the mobility of the object is good; in-hand manipulation can therefore be realized with this style. In the power grasp, the object is enveloped by several parts of the hand, and the motion of the object is restrained. In exchange, heavier objects can be grasped, and uncertain disturbances are easily balanced. For more details, please see textbooks such as [3].

Fig. 1.2 Fundamental grasping style; (A) precision grasp and (B) power grasp.


1.2.5 Kinematics and Statics of Robots
Let r ∈ R^m be the vector representing the position and orientation of the robot at the task frame (m is the dimension of the task space), and let q ∈ R^n be the joint vector of dimension n. Then r is a function of q:

r = r(q)          (1.6)

It should be noted that deriving r from a given q is referred to as forward kinematics, while deriving q from a given r is referred to as inverse kinematics. By differentiating Eq. (1.6) with respect to time, we get

\dot{r} = \frac{\partial r}{\partial q}\dot{q} = J(q)\dot{q}          (1.7)

where J(q) is the Jacobian matrix. Let w ∈ R^d be the wrench whose components are the force and moment exerted at the task frame, and let τ ∈ R^n be the joint torque vector. From Eq. (1.7) and the principle of virtual work, we have

\tau = J(q)^T w          (1.8)

For more details, please see robotics textbooks such as [2].
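
A minimal numeric sketch of Eqs. (1.7) and (1.8) for a planar two-link arm follows; the link lengths, joint angles, and applied wrench are illustrative assumptions rather than values from the text.

```python
import numpy as np

def jacobian_2link(q, l1=1.0, l2=0.8):
    """Jacobian of the end-effector position of a planar two-link arm."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

q = np.array([0.3, 0.6])       # joint angles [rad]
w = np.array([1.0, -2.0])      # planar force exerted at the task frame [N]
tau = jacobian_2link(q).T @ w  # Eq. (1.8): joint torques balancing w
print(tau)
```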

1.2.6 Dynamics of Robots
The dynamics of robots can be represented approximately by the mechanical-impedance model, or derived accurately as the equation of motion. The equation of motion for a robot is given by

\tau = M\ddot{q} + C(q, \dot{q})\dot{q} + g(q)          (1.9)

where M is the inertia matrix, C(q, \dot{q})\dot{q} represents the Coriolis and centrifugal forces, and g(q) is the gravitational term. The equation of motion can be derived using Lagrange's equations or the Newton-Euler equations; see robotics textbooks for a detailed derivation. If the wrench w at the task frame is considered, Eq. (1.9) becomes

\tau = M\ddot{q} + C(q, \dot{q})\dot{q} + g(q) + J(q)^T w          (1.10)

where the statics term has been added. For more details, please see robotics textbooks such as [2].
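
As a minimal instance of Eq. (1.9), the sketch below evaluates the joint torque required by a single-link arm under gravity; with one joint the Coriolis/centrifugal term vanishes, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Single-link arm: tau = M q_dd + C q_dot + g(q), with illustrative parameters
m, l, g0 = 1.0, 0.5, 9.81  # mass [kg], link length [m], gravity [m/s^2]

def required_torque(q, q_dot, q_dd):
    M = m * l**2                 # inertia about the joint (point mass at the tip)
    C = 0.0                      # no Coriolis/centrifugal coupling for one joint
    g = m * g0 * l * np.cos(q)   # gravity term, q measured from the horizontal
    return M * q_dd + C * q_dot + g

print(required_torque(q=np.pi / 4, q_dot=0.0, q_dd=1.0))
```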


REFERENCES
[1] K.R. Murphy, B. Myors, A.H. Wolach, Statistical Power Analysis: A Simple and General Model for Traditional and Modern Hypothesis Tests, Routledge, New York, 2009. Available at: https://books.google.co.jp/books/about/Statistical_Power_Analysis.html?id=paYdeZRTsT4C&redir_esc=y. Accessed 14 August 2017.
[2] T. Yoshikawa, Foundations of Robotics: Analysis and Control, MIT Press, Cambridge, MA, 1990. Available at: https://mitpress.mit.edu/books/foundations-robotics. Accessed 14 August 2017.
[3] M.R. Cutkosky, R.D. Howe, Human grasp choice and robotic grasp analysis, in: Dextrous Robot Hands, Springer, New York, NY, 1990, pp. 5–31. Available at: http://link.springer.com/10.1007/978-1-4613-8974-3_1. Accessed 14 August 2017.

CHAPTER 2

Digital Hand: Interface Between the Robot Hand and Human Hand

Makiko Kouchi, Mitsunori Tada

Human Informatics Research Institute, National Institute of Advanced Industrial Science and Technology, Tokyo, Japan

Contents
2.1 Introduction
2.2 Structure of the Human Hand and Digital Hand
2.2.1 Anatomy of the Human Hand
2.2.2 Link and Joint of the Digital Hand
2.2.3 Surface Mesh of a Digital Hand
2.3 Digital-Hand Model With Size Variations
2.3.1 Individual Digital-Hand Model
2.3.2 Representative Digital Hand Models
2.4 Analysis by Digital-Hand Model
2.4.1 Motion Analysis and Posture Synthesis
2.4.2 Mechanical Analysis
2.5 Conclusion
References

2.1 INTRODUCTION
Robot hands are intended to realize the same dexterous and versatile manipulation that we humans can perform. Thus, for robot-hand research, understanding the anatomy and motion of the human hand is fundamental. Conversely, from the point of view of human-hand research, describing the mechanisms and mechanics behind hand posture and motion helps explain how we realize such dexterous and versatile manipulation. However, there is a wide gap between robotics and human informatics, and it is difficult to interchange directly the diverse knowledge accumulated in each research field. We have been involved in digital-hand research for more than a decade. The goal of this project is to simulate the mechanical interaction between


the human hand and the object (a hand-held product) to promote ergonomic product design, taking individual differences in hand properties into account. For this purpose, we realized the synthesis of various hand models, including individual [1] and representative hands [2], based on a template hand model, a hand-dimension database, and spatial deformation of the template model. The individual model was used for motion analysis of the human hand in combination with a motion-capture system [3], while the representative model was used for the mechanical analysis of grasp and manipulation using a hand model in a grasping posture [4]. Interestingly, the digital hand is applicable not only to ergonomic product design as described above, but also to interfacing robotics and human informatics. For example, the digital-hand model with a simplified anatomical link structure enables us to reproduce the motion of the human hand. Once the motion is quantified, it is possible to summarize representative hand postures in a lower-dimensional space by using a statistical method such as principal component analysis [5]. This is quite useful in designing robot hands and in synthesizing robot-hand postures [6]. Conversely, the unified theory of grasp and manipulation established in robot-hand research provides deep insight into the stability of grasp and its impact on the ease of grasp while human subjects use hand-held products. This is indispensable for understanding the strategies behind the motion synthesis of human hands and how the subjective response arises during product use. In this chapter, we review the basics of digital-hand technology: (1) hand anatomy, link structure, and surface mesh; (2) individual and representative hand models; and (3) example cases where the digital-hand model was used in motion and mechanical analysis of the human hand. We hope that digital-hand technology will become an interface between robots and humans, enabling dexterous manipulation in robotics and introducing a mechanical and quantitative understanding of grasp in human informatics.

2.2 STRUCTURE OF THE HUMAN HAND AND DIGITAL HAND

2.2.1 Anatomy of the Human Hand
The skeleton of the human hand is composed of eight carpal bones, five metacarpal bones, and fourteen phalangeal bones, as shown in Fig. 2.1. The thumb has two phalangeal bones, and the four fingers each have three phalangeal bones [7].


Fig. 2.1 Skeletal structure of the human hand.

The main active movements of the wrist are flexion (palmar flexion), extension (dorsiflexion), abduction (radial deviation), and adduction (ulnar deviation) at the radiocarpal and midcarpal joints. Circumduction of the wrist results from consecutive flexion, adduction, extension, and abduction carried out in this or the reverse order. Active movements of the wrist are produced by extrinsic muscles of the hand that originate from the bones of the upper arm or forearm and insert into the carpal bones, metacarpal bones, or palmar aponeurosis. The second to fifth metacarpal bones lie almost on a plane, but the first metacarpal bone lies more to the palmar side than the other four and is rotated medially about its longitudinal axis by 90 degrees. Therefore, its morphologically dorsal surface faces in the lateral (radial) direction, and flexion-extension occurs in a plane parallel to the palmar surface. Also, the dominant movements of the thumb are produced at the first carpometacarpal (CMC, the joint between a carpal and a metacarpal bone) joint, while the dominant movements of the other four fingers are produced at the second to fifth metacarpophalangeal (MP, the joint between a metacarpal and a phalangeal bone) joints. Active movements of the first CMC joint are flexion, extension, abduction, adduction, rotation, and circumduction. In the second to fifth fingers, the movements of the CMC joints are limited to slight gliding of the articular surfaces. Flexion of the first CMC joint is associated with medial rotation.


The tip of the thumb can reach the other four fingertips thanks to this combined movement and abduction at the first CMC joint. Flexion, adduction, and opposition at the first CMC joint are produced by intrinsic muscles of the thumb (the thenar muscles), extension by extrinsic muscles of the hand, and abduction by both. Active movements at the first MP joint are limited to flexion and extension, while those at the second to fifth MP joints are flexion, extension, abduction, adduction, circumduction, and limited rotation. The active movements of the interphalangeal joints are flexion and extension. Flexion at the proximal interphalangeal (PIP, the proximal joint between two phalangeal bones) joints is controlled by two muscles, while flexion at the distal interphalangeal (DIP, the distal joint between two phalangeal bones) joints and at the interphalangeal (IP, the joint between the two phalangeal bones) joint of the thumb is controlled by one muscle. Extension at the PIP and DIP joints of the four fingers is controlled by one muscle; therefore, the PIP and DIP joints cannot be extended independently. There are also accessory movements (passive movements produced when the active movements are constrained by external resistance) comprising a limited range of rotation, abduction, adduction, and gliding, which allow the thumb and fingers to adapt to the shape of the object grasped in the hand. The active movements of the four fingers are produced by both extrinsic and intrinsic muscles. Like the extrinsic muscles of the thumb, those contributing to the movements of the four fingers originate from the bones of the upper arm or forearm and insert into the phalangeal bones. Among the four fingers, the second and fifth fingers have their own extensor muscles. Intrinsic muscles contribute to producing the fine movements of the fingers in cooperation with the extrinsic muscles. The intrinsic muscles of the second to fourth fingers have a similar structure: one muscle originates from the metacarpal bones and the other from the flexor muscles, and both insert into the phalangeal bones. In addition to these two intrinsic muscles, the fifth finger has its own intrinsic muscles (the hypothenar muscles), just as the thumb has the thenar muscles. The various motions of the human hand are produced by this complicated anatomical structure of bones and muscles. A movement of one specific joint is produced by more than one muscle, and one specific muscle contributes to the movements of several joints. This makes muscle-based control of the digital hand extremely difficult. For simplicity, the digital-hand model usually has the minimum link and joint structure that suffices to reproduce human-hand motion.


2.2.2 Link and Joint of the Digital Hand
The digital-hand model consists of a surface mesh, explained in the next section, for visualizing the surface geometry, and a link structure for controlling the posture and movement of the hand. As reviewed in the previous section, the control of hand motion by the musculoskeletal structure is complicated, because the movement of one specific joint is produced by more than one muscle, and one specific muscle contributes to the movements of several joints. To simplify the control of hand motion, the link structure of the digital-hand model has fewer degrees of freedom (DoF) than the human hand. Fig. 2.2 shows examples of typical link structures of digital-hand models. The range of motion of the second to fifth CMC joints is small, and the movement of these joints is often ignored (Fig. 2.2, left) [8]. However, if the movement of these joints is completely ignored, hand postures such as a cupped palm cannot be reproduced. To solve this problem, other digital-hand models place two joints at the second to fifth CMC joints (Fig. 2.2, middle and right) [9,10]. Usually, IP joints are described by one-DoF (flexion-extension) hinge joints, while the MCP joints of the second to fifth fingers are described by two-DoF (flexion-extension and adduction-abduction) or three-DoF (flexion-extension, adduction-abduction, and pronation-supination) ball joints. CMC joints and wrist joints are also described by two- or three-DoF joints. Joint centers of the hand are determined from a two-dimensional anatomical image, three-dimensional medical images, or motion capture (MoCap) data of the hand. The anatomical-image-based methods utilize landmarks and/or hand dimensions obtained from the two-dimensional image, the X-ray image, or a palmar-side image of the hand for estimating

Fig. 2.2 Link structures of the three different digital-hand models. (A) No CMC DoF [8]; (B) Santos hand [9]; (C) DhaibaHand [10].

the positions of joint centers on the two-dimensional image. In these methods, the joint-center position in the direction normal to the image may not be accurate, because it cannot be measured directly from the image and is thus estimated from a predefined regression model [12]. In the medical-image-based methods, multiple MRI or CT images are acquired from the same subject in different postures (joint angles), and bone surface geometries are extracted from these images by image segmentation. The procedure to compute the joint centers is as follows: the proximal bone of the target joint in different postures is registered first; then the transformation matrices that describe the relative movement of the distal bone with respect to the proximal bone are computed; and finally, the joint center is determined by finding an immovable point of the transformation. The MoCap-data-based methods utilize hand motions collected using a MoCap system that measures the three-dimensional coordinates of retroreflective spherical markers. At least three markers attached to each link would suffice to describe the relative movement of the distal bone with respect to the proximal bone; however, this is not a practical solution, because more than fifty markers would be necessary to capture the entire movement of the link structure. Usually, to minimize the number of markers attached to the hand, only one marker is attached at each joint, and its position is measured while the subject performs a calibration motion for each joint, so that the joint center can be estimated as a point that is immovable regardless of the joint movement. The link length is calculated as the distance between two adjacent joint centers; the link length of the most distal segment of a finger is calculated as the distance between the fingertip and the DIP joint center. In the digital-hand model, the link length is always assumed to be constant. In terms of reliability, the second approach generates the best link structure, because the medical images carry three-dimensional information and, unlike the third approach, it is not affected by skin-movement artifacts [11]. However, the third approach is the most common, because MoCap is also used in the mechanical analysis for synthesizing hand posture and motion after the link and joint model has been created.
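
The "immovable point" step of the medical-image-based method can be written compactly: if the rotation R and translation t describe the relative motion of the distal bone, a joint center c satisfies R c + t = c. Below is a hedged sketch that solves this with NumPy; the transform is a synthetic stand-in for registered bone poses, not real segmentation output.

```python
import numpy as np

def fixed_point(R, t):
    """Solve (R - I) c = -t in the least-squares sense. For a pure rotation the
    fixed points form a line (the joint axis); lstsq returns the minimum-norm
    point on that line."""
    c, *_ = np.linalg.lstsq(R - np.eye(3), -t, rcond=None)
    return c

# Synthetic relative motion: 20-degree rotation about a z-axis through (0.1, 0, 0)
theta = np.deg2rad(20.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
center_true = np.array([0.1, 0.0, 0.0])
t = center_true - R @ center_true   # constructed so that R c + t = c
print(fixed_point(R, t))            # recovers approximately (0.1, 0, 0)
```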

2.2.3 Surface Mesh of a Digital Hand
Like the second approach to determining joint centers, the surface mesh of a digital-hand model is usually created by applying image segmentation to


medical images such as MRI or CT. When the joint angles change, the surface mesh must deform accordingly; to this end, each data point in the surface mesh (sometimes called a node) is described as a function of the related joint angles. For each node, the joints that contribute to its movement and the skinning parameters are defined. Anatomical landmarks on the skin surface, later used to define hand dimensions and virtual markers, are also assigned to nodes. The surface-mesh model with anatomical landmarks and joint centers obtained by the previously mentioned methods is used as a template-mesh model for homologous modeling, in which all models consist of the same number of nodes with the same mesh topology. Such a template-mesh model is also called a "generic model."

2.3 DIGITAL-HAND MODEL WITH SIZE VARIATIONS

2.3.1 Individual Digital-Hand Model
The digital-hand model of a specific person can be generated by the methods described in the previous section: links and joints from the anatomical image, medical images, or MoCap data, and the surface mesh from medical images. Another method of creating an individual model is to deform the generic model using hand dimensions. This method is more practical than the first, because acquiring and processing medical images is usually difficult and time consuming. To generate an individual hand model by deforming the generic model, the hand dimensions, link lengths, and joint centers of the target subject are necessary. Hand dimensions are obtained by traditional manual methods using a caliper; some dimensions can be measured from a two-dimensional palmar image acquired by a flatbed scanner [12]. As the generic model is deformed by hand dimensions, the posture of the generic model must be close to that used for manual hand anthropometry, the fully extended position. The surface mesh of the target person is generated by scaling the generic model according to a scaling factor at each joint, calculated by a nonlinear optimization method that minimizes the sum of the differences in hand dimensions between the generic model and the target person. A "deformation transfer" can also be used to deform the generic model into the target model [13]; in this case, target dimensions are reproduced more accurately than with the scaling-based method. To reduce the computational cost while keeping the difference in hand dimensions between the generic model and the target subject small enough, the


following procedure is proposed [1]. In this method, the generic model is first simplified (decimated) by reducing the number of meshes. The decimated generic model is then deformed so that its landmark positions and hand dimensions are identical to those of the target. Finally, the deformation of the decimated generic model is transferred to the original generic model. Joint centers are estimated by using MoCap data, or by using marker positions on the skin surface measured under static conditions [14]. In the latter method, a database of "correct" data, taken from a certain number of people, is necessary to calculate a regression equation for estimating the coordinates of joint centers from the coordinates of markers attached to the hand with the palm and fingers extended. Link lengths are estimated by using hand dimensions taken in several positions with different joint angles [15], or calculated as the distances between two adjacent joint centers estimated by the previously mentioned methods. The accuracy of the generated individual hand model is evaluated by computing the errors in hand dimensions. When the correct dimensions are used, the mean error is at most 0.5 mm; when estimated dimensions are used, the accuracy of the model depends heavily on the accuracy of the estimation [14]. The difference between the measured surface shape of the target subject and that of the individual digital-hand model was estimated to be at most 5.0 mm in the cylinder-grasping posture [14], rather large compared with the error in the dimensions. Deformation of the surface shape due to changes in hand posture needs to be simulated by a physically more realistic method in the future. The shape characteristics of the generic model are directly transferred to the generated individual hand model of the target subject; thus, detailed shapes of the target subject, such as the shapes of the fingernails and the medial bending of the DIP joint, will not be reproduced in the deformed generic hand model.
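
The scaling-based deformation described above amounts to a small nonlinear optimization: choose per-segment scale factors so that the scaled generic-model dimensions match the target subject's dimensions. A hedged sketch with scipy.optimize.least_squares follows; the dimension values and bounds are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical hand dimensions [mm]: the generic model versus a target subject
generic_dims = np.array([70.0, 45.0, 30.0])
target_dims = np.array([76.0, 47.5, 31.0])

def residual(scales):
    """Difference between scaled generic dimensions and target dimensions."""
    return scales * generic_dims - target_dims

result = least_squares(residual, x0=np.ones(3), bounds=(0.5, 2.0))
print(result.x)  # per-segment scaling factors applied to the generic model
```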

2.3.2 Representative Digital Hand Models
The range of inter-individual variation in a body dimension is often described using the 5th and 95th (or 1st and 99th) percentile values. However, the calculated range is misleading when more than one body dimension is considered simultaneously, because it is not probable that the same person has 95th percentile values of, for example, hand length, hand width,


and hand thickness. A boundary family is used to represent the possible range of inter-individual variation in a population in the multi-dimensional situation. The boundary family of hand dimensions is calculated using an anthropometric database of the target population. The information carried by hand dimensions, which are correlated with each other, is compressed using a principal component analysis (PCA). The first two principal components, carrying the largest percentage of the total variance, are used to draw a scatter diagram of subjects and a probability ellipse that contains 95% or 99% of the subjects. The boundary family consists of nine members located on this ellipse [16]: one member at the center of the ellipse and eight members on its boundary, at the crossing points of the ellipse with the x-axis (PC1) or y-axis (PC2), or at the crossing points of the ellipse with lines of slope SD_PC2/SD_PC1, where SD_PC1 and SD_PC2 are the standard deviations of the PC1 and PC2 scores, respectively. Fig. 2.3 shows the probability ellipse for Japanese adults, including males and females; five representative models are also shown in this figure. Hand dimensions of the nine members of the boundary family are calculated from their PC scores and the eigenvectors. Representative hand models are synthesized by deforming the generic model using these calculated hand dimensions [15].

Fig. 2.3 Probability ellipse of hand dimensions for Japanese adults, males, and females, and five representative digital hand models from the boundary family.

The locations of the joint centers of the representative hand models

have not been validated, because the representative hands are virtual and no one has the same hand dimensions as the representatives. They should be investigated by comparing individual hand models obtained from measured joint centers, surface shapes, and landmark positions with those created from the PC scores and eigenvectors. The representative hand models, virtual individuals at the center and boundary of the distribution, are used for screening design candidates for a hand-held product. This allows us to eliminate prototype fabrication and user experiments from the product design process, especially in its early stages.
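
One way to reproduce the boundary-family construction numerically is sketched below: a PCA on a synthetic hand-dimension table, followed by nine members placed at the center and on the 95% probability ellipse of the first two PC scores. The data, the chi-square radius for 95% coverage with two degrees of freedom, and the 45-degree spacing (which yields the axis crossings and the points on the lines of slope SD_PC2/SD_PC1) are assumptions made for the sketch, not details taken from [16].

```python
import numpy as np

# Synthetic hand-dimension table: rows = subjects, columns = dimensions [mm]
rng = np.random.default_rng(1)
data = rng.normal([180.0, 80.0, 27.0], [10.0, 5.0, 2.0], size=(200, 3))

# PCA via eigen-decomposition of the covariance matrix
mean = data.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(data - mean, rowvar=False))
order = np.argsort(eigvals)[::-1]
sd = np.sqrt(eigvals[order[:2]])   # SD_PC1, SD_PC2
pcs = eigvecs[:, order[:2]]        # first two eigenvectors

# Nine boundary-family members: the center plus eight points on the ellipse
r = np.sqrt(5.991)                 # 95% probability, chi-square with 2 DoF
members = [mean]
for ang in np.deg2rad(np.arange(0, 360, 45)):
    scores = r * np.array([sd[0] * np.cos(ang), sd[1] * np.sin(ang)])
    members.append(mean + pcs @ scores)
print(np.round(np.array(members), 1))
```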

2.4 ANALYSIS BY DIGITAL-HAND MODEL

2.4.1 Motion Analysis and Posture Synthesis
Hand postures and motion are synthesized from MoCap data, from a posture database, or by inverse kinematics computation given contact constraints between hand and object. Among these three methods, the MoCap-based one is the most intuitive. In this method, the individual digital-hand model is fitted to the MoCap data by minimizing the error between the measured marker positions and the corresponding landmark positions assigned to the digital-hand model (Fig. 2.4). However, this method is applicable only to the individual model, as no one has the same hand dimensions as the representatives. Database- and inverse-kinematics-driven posture synthesis are thus important for synthesizing posture and motion for the representative hands. The hand posture database is constructed from measured MoCap data. In the simplest case, a hand posture is synthesized by computing the weighted sum of existing data captured in different postures; the motion profile of the hand from initial to final posture is also synthesized by similar interpolation [17]. This method ignores the coupling between DIP and PIP joint movements, and that between the joint movements of the fourth and fifth fingers, which will sometimes lead to the synthesis of realistic-looking but infeasible postures and motion. Similar to the concept of the boundary family, another method of constructing the posture database is to introduce a statistical approach [5]. By conducting a PCA on the measured MoCap data, the hand posture is represented by a smaller number of parameters. In this method, it is possible to synthesize a realistic hand posture from the PC scores and the eigenvectors. It is also possible to clarify the representative hand postures that appear in our daily life (Fig. 2.5) if this method is applied to the hand motion data


Fig. 2.4 Hand motion of the individual subject synthesized from measured MoCap data.

Fig. 2.5 Representative hand postures computed from PCA results of hand MoCap data for daily necessities. (A) Representative hand postures along the first principal component axis. (B) Representative hand postures along the second principal component axis. (C) Representative hand postures along the third principal component axis.


captured while the subjects are moving their hands to grasp daily necessities such as a cup, a dish, or a screwdriver. As shown in Fig. 2.5, PC1, PC2, and PC3 account for the flexion-extension of all finger joints, the flexion-extension of the MP joints, and the adduction-abduction of the MP joints, respectively. The inverse-kinematics-driven posture synthesis [18] is the most versatile method, because MoCap data are not necessary for posture synthesis. In this method, target positions for the thumb tip, the fingertips, and, when necessary, the palm are assigned on the object. Inverse kinematics then tries to find the hand posture that minimizes the error between the target positions and the hand landmarks while avoiding interference between the hand and the object surface. So far, the target positions must be assigned manually, but it would be possible to generate them automatically by categorizing patterns of target points according to the attributes of the grasped objects. Deformation of the hand surface due to contact is not yet considered. The Virtual Soldier Santos system, developed at the University of Iowa, offers a more general solution for posture synthesis by introducing the attributes of objects, such as surface shape, weight, and task. When the user chooses an object and a task, the system selects the type of grasp (power grasp, precision grasp, etc.) from several candidates. The number of fingers and hands to use is calculated based on the object shape, hand size, and grasp type. The power grasp is generated by inverse kinematics under the assumption that each link of the second to fifth fingers makes contact with the object; the precision grasp is generated under the assumption that all fingertips are in contact with the object [19]. This method can generate a different hand posture for the same object by defining a different task.
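
The inverse-kinematics step can be illustrated with a damped least-squares update, a common way to drive a fingertip toward its target position; the planar two-joint "finger," its link lengths, and the damping factor below are placeholders, not the actual digital-hand solver.

```python
import numpy as np

L = (0.4, 0.3)  # assumed link lengths of a planar two-joint finger

def fk(q):
    """Fingertip position for joint angles q."""
    return np.array([L[0] * np.cos(q[0]) + L[1] * np.cos(q[0] + q[1]),
                     L[0] * np.sin(q[0]) + L[1] * np.sin(q[0] + q[1])])

def jac(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L[0] * s1 - L[1] * s12, -L[1] * s12],
                     [ L[0] * c1 + L[1] * c12,  L[1] * c12]])

target = np.array([0.5, 0.3])   # assumed fingertip target on the object
q = np.array([0.2, 0.2])        # initial posture [rad]
lam = 0.01                      # damping factor

for _ in range(100):            # damped least-squares iterations
    e = target - fk(q)
    J = jac(q)
    q = q + np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ e)
print(q, fk(q))                 # posture whose fingertip reaches the target
```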

2.4.2 Mechanical Analysis
Figs. 2.6 and 2.7 show examples of the inverse kinematics-based grasp posture synthesis described in the previous section. In the case of the power grasp shown in Fig. 2.6, six pairs of target positions and landmarks are assigned: the thumb tip, the four fingertips, and the palm. The target position for the index finger is placed on the shutter button, because the user of the camera is expected to push this button with the index finger. In the case of the precision grasp shown in Fig. 2.7, three pairs are assigned: the thumb tip, index fingertip, and middle fingertip. Once the grasping posture is synthesized, it is possible to compute the mechanical interaction between hand and object by introducing the theory of grasp and manipulation established in robotics.


Fig. 2.6 Synthesis of power grasp and computation of convex hull of the friction cones. (A) Synthesized posture. (B) Computed convex hull.

Fig. 2.7 Synthesis of precision grasp and computation of contact forces. (A) Synthesized posture and computed contact forces from palmar side. (B) Synthesized posture and computed contact forces from dorsal side.

Fig. 2.6 also shows the convex hull of the friction cones computed from the distribution of contact points under the assumption that the sum of the normal contact forces is equal to one. From this convex hull, the grasp quality, which represents the stability of the grasp, is derived by computing the radius of the maximum inscribed sphere [20]. This is quite important because the stability of grasp proposed in robotics is positively correlated with the subjective response, the ease of grasp that subjects feel when grasping cylinders or quadric prisms. If we construct a regression model that relates the stability of grasp computed from the digital-hand model to the subjective response revealed by questionnaires, no further user experiments are necessary: we can estimate the subjective response from a digital-hand simulation and the regression model.
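
For a planar grasp, the quality measure sketched above can be computed as follows: discretize each friction cone into its two edge forces, form the corresponding wrenches, and take the radius of the largest origin-centered ball inside their convex hull, in the spirit of [20]. The contact layout, friction coefficient, and unit edge forces are simplifying assumptions made for this example.

```python
import numpy as np
from scipy.spatial import ConvexHull

def grasp_quality(contacts, normals, mu=0.5):
    """Radius of the largest origin-centered ball inside the convex hull of
    contact wrenches (planar case)."""
    wrenches = []
    for p, n in zip(contacts, normals):
        n = n / np.linalg.norm(n)
        t = np.array([-n[1], n[0]])             # tangent direction
        for s in (-1.0, 1.0):                   # the two friction-cone edges
            f = (n + s * mu * t) / np.sqrt(1.0 + mu**2)
            moment = p[0] * f[1] - p[1] * f[0]  # planar moment about the origin
            wrenches.append([f[0], f[1], moment])
    hull = ConvexHull(np.asarray(wrenches))
    # qhull facet equations a.x + b <= 0 have unit normals, so -b is the
    # origin-to-facet distance; the smallest one is the inscribed-ball radius
    return float(np.min(-hull.equations[:, -1]))

# Two opposing fingertip contacts on an object of unit width
contacts = [np.array([-0.5, 0.0]), np.array([0.5, 0.0])]
normals = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])]
print(grasp_quality(contacts, normals))
```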


As explained in the previous section, the stability of grasp shown in Fig. 2.6 is useful for estimating the subjective response, but this method rather simplifies the mechanisms of the human hand. Fig. 2.7 also shows contact-force vectors computed from anatomically more realistic digital-hand models that include simplified muscle structures composed of the moment arm and the maximum isometric force of each muscle. In this method, contact forces and muscle activations are computed by solving a quadratic programming problem: minimizing the sum of squared muscle activations while satisfying two equality constraints and one inequality constraint. The equality constraints are the equilibrium between the total wrench applied to the object and the contact forces, and that between the contact forces and the muscle activations, while the inequality constraint ensures that no slippage occurs at any contact point. This method has been applied to posture synthesis by finding the thumb-tip position that minimizes the sum of squared muscle activations within the thumb's range of motion [21]. A modified version of this formulation has been applied to compute the robustness of grasp along specified unit wrench vectors [22]. In this case, the scale factor for the unit wrench, the contact forces, and the muscle activations are computed by solving linear programming problems: maximizing the scale factor for the unit wrench while satisfying the same equality and inequality constraints as in the previous formulation. This method has also been applied to data-driven grasp synthesis by pruning undesirable grasps from many grasp candidates based on the computed grasp quality.

2.5 CONCLUSION
There is a wide gap between robot hands and human hands, and it is difficult to interchange directly the diverse knowledge accumulated in each research field. However, as reviewed in this chapter, digital-hand technology can be an interface that connects robots with humans: motion analysis of the human hand will help achieve better motion synthesis and dexterous manipulation in robot hands, while the unified formulation of grasp and manipulation will help explain the mechanics behind human hands and the subjective responses arising from product use. Digital-hand research is conducted at several research institutes: the Virtual Soldier Research Program at the University of Iowa, the Transportation Research Institute at the University of Michigan, and the Human Informatics Research Institute at the National Institute of Advanced Industrial Science


and Technology, where whole-body digital human research is also conducted. So far, the commercially available digital hand is the one from the University of Iowa. Hand-dimension data for the Japanese population are also available [23].

REFERENCES
[1] Y. Endo, M. Tada, M. Mochimaru, Reconstructing individual hand models from motion capture data, J. Comput. Des. Eng. 1 (2014) 1–12.
[2] R. Nohara, Y. Endo, A. Murai, H. Takemura, M. Kouchi, M. Tada, Multiple regression based imputation for individualizing template human model from a small number of measured dimensions, in: Proceedings of the 38th International Conference of the IEEE Engineering in Medicine and Biology Society, 2016, pp. 2188–2193.
[3] Y. Endo, M. Tada, M. Mochimaru, Hand modeling and motion reconstruction for individuals, Int. J. Autom. Technol. 8 (2014) 376–387.
[4] N. Miyata, M. Kouchi, M. Mochimaru, Posture estimation for screening design alternatives by DhaibaHand – cell phone operation, SAE Technical Paper 2006-01-2327, 2006.
[5] M. Santello, M. Flanders, J.F. Soechting, Postural hand synergies for tool use, J. Neurosci. 18 (23) (1998) 10105–10115.
[6] G.C. Matrone, C. Cipriani, E.L. Secco, G. Magenes, M.C. Carrozza, Principal components analysis based control of a multi-dof underactuated prosthetic hand, J. Neuroeng. Rehabil. 7 (1) (2010).
[7] H. Gray, R. Warwick, P.L. Williams, Gray's Anatomy, 35th ed., Longman, 1973.
[8] X. Zhang, S.W. Lee, P. Braido, R. Hefner, M. Redden, Simulating hand grasp motion: three modules integrated into one computer application, in: Proceedings of the International Ergonomic Association Congress 2003, 2003.
[9] E. Peña-Pitarch, J. Yang, K. Abdel-Malek, SANTOS Hand: a 25 degree-of-freedom model, SAE Technical Paper 2005-01-2727, 2005.
[10] N. Miyata, M. Kouchi, T. Kurihara, M. Mochimaru, Modeling of human hand link structure from optical motion capture data, in: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2004.
[11] J.H. Ryu, N. Miyata, M. Kouchi, M. Mochimaru, K.H. Lee, Analysis of skin movement with respect to flexional bone motion using MR images of a hand, J. Biomech. 39 (5) (2006) 844–852.
[12] T. Takahashi, Y. Aoki, M. Kouchi, M. Mochimaru, Image measurement of hand dimensions based on anatomical structure and creases, IEEJ Trans. Electron. Inf. Syst. 130 (2010) 686–695 (in Japanese with English abstract).
[13] R.W. Sumner, J. Popovic, Deformation transfer for triangle meshes, ACM Trans. Graph. 23 (3) (2004) 399–405.
[14] N. Miyata, Y. Shimizu, Y. Maeda, M. Mochimaru, Hand MoCap using an individual model with a skeleton and a surface skin, Int. J. Hum. Factors Model. Simul. 3 (2) (2012) 147–168.
[15] M. Kouchi, N. Miyata, M. Mochimaru, An analysis of hand measurements for obtaining representative Japanese hand models, SAE Technical Paper 2005-01-2734, 2005.
[16] A.C. Bittner Jr., R.J. Wherry Jr., F.A. Glenn III, R.M. Harris, CADRE: a family of manikins for workstation design, Technical Report 2100.07B, Naval Air Development Center, 1986.
[17] T. Kurihara, N. Miyata, Modeling deformable human hands from medical images, in: Proceedings of the Eurographics/ACM SIGGRAPH Symposium on Computer Animation 2004, 2004.


[18] Y. Endo, N. Miyata, M. Kouchi, M. Mochimaru, S. Kanai, J. Konno, M. Ogasawara, M. Shimokawa, A development of a system for ergonomic design by integrating a digital hand with product models (14th report), in: Proceedings of the Technical Meeting of the Precision Engineering Society, 2010, p. 167 (in Japanese with English abstract).
[19] E. Peña-Pitarch, J. Yang, K. Abdel-Malek, Virtual human hand: grasping and simulation, in: V.G. Duffy (Ed.), Digital Human Modeling, HCI International 2009, 2009, pp. 140–149.
[20] C. Ferrari, J. Canny, Planning optimal grasps, in: Proceedings of the IEEE International Conference on Robotics and Automation, 1992, pp. 2290–2295.
[21] T. Tsunesaki, Y. Asami, M. Tada, Y. Endo, N. Ogihara, Synthesis of hand posture during power grip task using a three-dimensional musculoskeletal model and a load-minimization criterion, in: Proceedings of the 8th World Congress of Biomechanics, 2018.
[22] Y. Li, J.L. Fu, N.S. Pollard, Data driven grasp synthesis using shape matching and task-based pruning, IEEE Trans. Vis. Comput. Graph. 13 (4) (2007) 732–747.
[23] http://riodb.ibase.aist.go.jp/dhbodydb/hand/index.html.

CHAPTER 3

Sensorimotor Learning of Dexterous Manipulation

Qiushi Fu*, Marco Santello†

*Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL, United States
†School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, United States

Contents
3.1 Introduction
3.2 Learning Manipulation: Theoretical and Experimental Evidence
3.2.1 Background
3.2.2 Interaction Between Multiple Sensorimotor Processes Underlies Learning Dexterous Manipulation
3.2.2.1 Subjects and Apparatus
3.2.3 Protocols
3.2.3.1 Data Analysis
3.2.3.2 Model and Simulation
3.2.3.3 Results
3.2.3.4 Discussion
3.3 Lessons Learned From Human Data and Potential Applications to Robotic Dexterous Manipulation
3.4 Conclusions
References
Further Reading

3.1 INTRODUCTION
The human hand is a unique sensorimotor system with unparalleled versatility. Throughout the early stages of motor development, even before acquiring the ability to walk, humans learn to explore the environment and its dynamics by interacting with objects and tools. The hand's complex biomechanical architecture, together with the nervous system's ability to integrate sensory feedback with motor commands, dictates the extent to which such interactions are successfully learned and then executed in a consistent and precise fashion (hand anatomy and neural mechanisms are presented in this chapter and Chapter 2, and reviewed in [1]).

© 2018 Elsevier Inc. All rights reserved.

27

28

Human Inspired Dexterity in Robotic Manipulation

Learning dexterous hand-object interactions, such as using a screwdriver or modulating grip force on the handle of a hammer when hitting a nail over consecutive times, relies on several sensorimotor processes. At the sensory level, these processes involve multiple sensory modalities (i.e., vision, proprioception, and touch), whose integration conveys information about the state of the hand’s musculoskeletal system and the object in two ways. First, sensory feedback is acquired and processed during hand-object interactions. For review, see [2]. This informs the nervous system about object properties, and the extent to which a given manipulation was performed as planned. Second, sensory feedback is stored after hand-object interactions have been executed, and used to anticipate object dynamics prior to performing subsequent manipulations with the same or similar objects [3]. Thus, sensory feedback about the current state of the hand and object, together with sensorimotor memory from previous manipulations, are used to plan and execute motor commands that are appropriate for the desired manipulation. This general framework is consistent with the notion of internal models through which the nervous system anticipates sensory consequences of a planned movement to compute the corresponding motor commands [4]. Errors arising from a mismatch between desired and actual manipulation outcomes would then be used to update such internal models in an iterative fashion until the mismatch is eliminated. A significant body of literature exists on human sensorimotor learning, most of which has been focused on the control of arm movements using two experimental approaches: adaptation of reaching movements against force fields [5] and in response to visuomotor rotations [6]. Both approaches have provided significant insights into humans’ ability to leverage a learned task A to improve the learning rate of a similar task B, and conversely how learning of task A might be—in certain circumstances—detrimental to learning task B. For example, it has been shown that humans can generalize the learned dynamics of a movement to neighboring directions to a greater extent than with directions that are further away from the learned direction [7]. Contextual cues also appear to play a role in the extent to which humans can generalize learned reaching movement dynamics [8]. In contrast to this work, much less is known about the mechanisms underlying the learning of hand-object interactions, such as manipulation. It should be emphasized that, due to the neural and biomechanical differences between the arm and hand, one should not assume that sensorimotor-learning theories developed using studies of reaching movements would apply to the learning of manipulation tasks. Such differences

Sensorimotor Learning of Dexterous Manipulation

29

include the hand’s greater number of degrees of freedom, and the fact that manipulation provides richer contextual cues than reaching (i.e., object geometry provides information about the object’s mass distribution), and therefore the way it would react to forces and torque exerted by the hand. This chapter reviews recent studies and our current understanding of how humans learn to perform dexterous manipulation in the context of theories of sensorimotor learning and concludes with open questions and future directions for research on acquisition of dexterity in humans and robotic applications.

3.2 LEARNING MANIPULATION: THEORETICAL AND EXPERIMENTAL EVIDENCE 3.2.1 Background Sensorimotor learning and memory have been studied extensively in motor neuroscience using adaptation experiments in reaching and oculomotor tasks [9]. In general, all skilled motor behavior relies on learning both control and prediction which can be considered as two reciprocal procedures: control generates motor commands to produce desired consequences, whereas prediction maps motor commands (i.e., efferent copy) into expected sensory consequences [10]. The mechanisms underlying these two procedures have been traditionally proposed as inverse and forward internal models, respectively [4]. The update of internal models is considered to be driven by the error caused by a mismatch between sensed and predicted sensory outcome on a trial-by-trial basis. This error is assumed to signal the direction of adjustment for the next trial for error reduction. Error-based learning has been demonstrated in many motor adaptation schemes, including saccade adaptation [11], visuomotor adaptation [12], and force-field adaptation [7]. State-space models are often used to describe error-based updates by defining the estimation of task dynamics as states that can be modified by errors with learning rates. The exact structure of the model may vary as it could consist of a single learning rate [13] or multiple rates [5], and could include context selectors [14]. It has been demonstrated that these models can capture many well-known phenomena including interference and savings. Recent research has also demonstrated other sensorimotor learning processes. Besides explaining additional performance improvements after errors have been minimized, these nonerror based learning processes were also found to be operating in parallel with error-based learning [6]. Reinforcement learning uses signals, such as success and failure, which do not provide the

30

Human Inspired Dexterity in Robotic Manipulation

gradient of error reduction, therefore being exploratory to some extent. Rewards or penalties are given to a certain action that might have led to the success or failure and eventually lead to optimal solutions [9]. Another learning process has been termed use-dependent learning that describes the ability to adapt motor commands through repetition of movements without outcome information [15]. There is evidence that use-dependent learning could influence the directional control of subsequent reaching movement [6,16,17]. Sensorimotor learning and memory can also be evaluated from the perspective of retention and generalization. Generalization tests the extent to which a learned task could benefit other tasks that may share characteristics with the learned task. It has been shown that sensorimotor learning might be context dependent, and that the ability to generalize to other tasks follows a bell-shaped tuning curve depending on the extent to which the new context is similar to the learned one [18]. Retention requires subjects to recall a learned task after a break of various durations. A learned motor task can be retained for at least 24 h [19,20]. It has also been shown that learning a secondary task in the opposite direction interferes with the retention of the first learned task [21], although the mechanisms underlying protection and retrieval of motor memory remain unclear [22]. Despite the success of the previously mentioned computational frameworks in capturing experimental observations, sensorimotor learning of multidigit dexterous manipulation tasks has received little attention. Specifically, it remains unclear how the central nervous system (CNS) learns, stores, and retrieves knowledge about previously performed manipulations. Most research in the grasping literature has been focused on empirical findings about force modulation to an object’s weight to enable the lifting of an object. When subjects initially misjudged the weight, trial-by-trial learning was usually assessed by the improvement in force generation (peak force rate modulation to object weight) that most closely matched the actual weight. There is evidence that the internal representation of force scaling could be updated, maintained, and retrieved given appropriate visual cues [10,20,23,24]. In addition to memory associated with a specific task, it has also been demonstrated that task-independent memory could be formed and influence the subsequent manipulation [25]. A novel experimental protocol using objects with asymmetrical mass distribution [26] has been developed to provide more insights about how manipulation is learned and represented. In this protocol, task performance can be assessed by errors made during lift because subjects are required to lift the object straight and

Sensorimotor Learning of Dexterous Manipulation

31

have to produce a torque to counter-balance the object’s asymmetrical mass distribution. It has been shown that the learning of this dexterous manipulation task has very limited generalizability: following the object’s physical rotation, subjects have to re-learn digit-force distribution to generate a torque appropriate to minimize object tilt [27–29]. The question arises about whether the physical rotation of the target object is the primary factor that prevents the generalization. Physical rotation of the object may force subjects to perform a mental rotation of the previously established sensorimotor memory [18] similar to the mental rotation in visual object recognition which is considered computationally challenging for the brain [30]. Alternatively, failure to generalize learned manipulation may be an intrinsic feature of the sensorimotor system not limited to tasks involving changes of object orientation. We will review the main findings of one of our experimental approaches to demonstrate, both empirically and theoretically, how context-dependent parallel learning processes could interact during learning of dexterous manipulation.

3.2.2 Interaction Between Multiple Sensorimotor Processes Underlies Learning Dexterous Manipulation 3.2.2.1 Subjects and Apparatus Sixty-four healthy right-handed subjects participated in this experiment. Subjects were randomly assigned to one of five conditions. We asked subjects to grasp and lift a single L-shaped object (Fig. 3.1A), which had a base

Fig. 3.1 (A) Two alternative presentations, context left (L) and right (R), of the L-shaped object and their corresponding compensatory torque directions (clockwise and counterclockwise arrows). (B) Sequence of manipulation contexts, transfer and retrieval trials, and breaks for each experimental condition. (Modified from Q. Fu, M. Santello, Retention and interference of learned dexterous manipulation: interaction between multiple sensorimotor processes, J. Neurophysiol. 113 (1) (2015) 144–155.)

32

Human Inspired Dexterity in Robotic Manipulation

made of white plastic and a vertical handle mounted on the end of the base. The handle was made of gray plastic and equipped with two hidden six-axis force-torque sensors. The design of the handle enabled measurement of forces and torques applied by the digits. The design of the object provided visual geometrical cues that were congruent with the objects’ mechanical properties, such as mass and mass distribution [31]. Unlike previous studies using objects that gave conflicting visual and physical information, the congruent visual geometrical cue here was used to allow subjects to be aware of the task context using object orientation (i.e., handle on right or left). Motion capture system was used to measure object peak roll and lift onset. Technical details of the apparatuses and force/torque data processing algorithms can be found elsewhere [29,31,32].

3.2.3 Protocols Subjects sat in front of the object with the arm positioned in a way that ensured comfortable grasp and lift actions. On hearing a “go” signal, subjects reached to grasp and lifted the object above the table, held it in a stationary position for 2 s, and replaced it on the table. Subjects were required to grasp the specified handle with the tip of the thumb on the left contact surface and the tip of the index and middle fingers on the right contact surfaces of the handle, and to prevent the object from tilting. Subjects then replaced the object back on the table. All tasks required subjects to plan and generate torques in an anticipatory fashion to compensate the torque caused by the asymmetrical mass distribution of the object. Additionally, subjects were instructed to rotate the object 180 degree after several lifts (Fig. 3.1A). Therefore, the two task contexts differed based on opposite cues induced by the object orientation, which required opposite compensatory torque (Tcom) production from the hand. Every time the object was rotated, the context switched. Before the experiments started, the object was visually presented to the subjects and they were allowed to briefly touch the graspable surfaces to familiarize themselves with the frictional properties of the handle. All experimental conditions consisted of four blocks of eight consecutive trials with the exception of the Random condition (see later on) and each block was performed in the same context. As there were two manipulation contexts (i.e., left or right center of mass), the order of presentation of R and L contexts was counter-balanced across subjects for each experimental

Sensorimotor Learning of Dexterous Manipulation

33

condition. We also define the first context experienced by the subjects as context A, and the following context as B. The intertrial and interblock duration lasted 0 is a positive constant; J Ωi ðtÞ 2 3ND denotes the Jacobian matrix for the orientational angular velocity of each fingertip with respect to the angular velocity of each joint q_ ; and the desired orientation of the virtual-object frame is expressed   in the form of a rotational matrix Rdvir ðtÞ ¼ rxdvir ðtÞ,r ydvir ðtÞ, rzdvir ðtÞ 2 SOð3Þ. The summation of each cross product in Eq. (9.12) implies the desired instantaneous rotational axis of the virtual-object frame. In other words, the orientation error between the present virtual frame and the desired-virtual frame can be reduced by rotating the virtual frame around this axis. The desired orientation of the virtual-object frame Rdvir ðtÞ is designed in the following manner: Rdvir ðtÞ ¼ Rd Rðt  tdelay ÞT Rvir ðt  tdelay Þ,

(9.13)

where the desired orientation of themeasured-object frame is expressed in  the form of a rotational matrix Rd ¼ r xd , ryd , rzd 2 SOð3Þ. The orientations of both the measured- and virtual-object frames at tdelay (s) or one time step before the present time are denoted by R(t  tdelay) and Rvir(t  tdelay), respectively, and they are obtained for the same time as x(t  tdelay).

9.4.3 Conditions for Convergence of a Desired State For the control signal u(t) to ensure that the position and orientation of the measured-object frame converge to their desired values in the final state, it is necessary to fulfill the following inequalities. For the object’s position: jfxðtB Þ  xvir ðtB Þg  fxðtA Þ  xvir ðtA Þgj  jxðtB Þ  xðtA Þj, For the object’s orientation:





1 1

1 1  

cos ðtr½Rα   1Þ



cos tr½Rβ   1

,

2 2 where   T Rα ¼ RðtB ÞRvir ðtB ÞT RðtA ÞRvir ðtA ÞT , Rβ ¼ RðtB ÞRðtA ÞT ,

(9.14)

(9.15)

(9.16) (9.17)

and both tA and tB denote an instantaneous arbitrary time between the one time step and next time step of a visual image acquisition interval. These times satisfy the following inequality: 1 0  tB  tA  : h

(9.18)

174

Human Inspired Dexterity in Robotic Manipulation

Eqs. (9.14), (9.15) state that the error norm between the differences between the measured- and virtual-object frames at tA and at tB is smaller than that between the measured-object frames at tA and at tB, where the error norm of the object orientation is represented by a rotational angle around the equivalent rotational axis. In other words, the physical meaning of these conditions is that the change in the difference between the measured-object frame and the virtual-object frame must be smaller than the discretization error due to the low sampling rate. The measured-object frame will ultimately converge to the desired state if these conditions are satisfied in every servo loop.

9.5 NUMERICAL SIMULATION This section presents the results of numerical simulations to verify the effectiveness of the proposed controller. A three-fingered hand-arm model as shown in Fig. 9.1 was utilized in the simulations [6]. As it is mentioned in Section 9.2, this model consists of a five-DOF arm component and a three-fingered hand component with one five-DOF finger and two fourDOF fingers. A triangular prism is considered as the grasped object as an example of a polyhedral object. The specific parameters of the hand-arm system and the grasped object are shown in Table 9.1. In this table, Yi is the perpendicular distance from the center of mass of the object Oc.m. to the ith surface of the object, and θti is the internal angle of the corresponding triangular cross section of the object. Table 9.2 shows the desired nominal grasping force and gains. Assume that under the initial conditions, the handarm system has already grasped the object. The parameters of this system in the initial state are shown in Table 9.3. Two types of simulations were conducted to demonstrate the advantages of the proposed method. One simulation used the proposed visual servoing method. The other used our previously proposed visual servoing method, which directly utilizes the position and orientation of the measured-object frame as control variables [7]. The previous method uses control inputs for position and orientation control of the measured-object frame, upreal and uoreal, respectively, instead of up and uo. These control inputs are given as follows: upreal ðtÞ ¼ Kp

N X   J 0i ðtÞT xd  xðt  tdelay Þ , i¼1

(9.19)

A Grasping and Manipulation Scheme

175

Table 9.1 Physical parameters for the simulations Three-fingered hand-arm system

1st link length 2nd link length 3rd link length 1st center of mass 2nd center of mass 3rd center of mass 1st mass 2nd mass 3rd mass

la1 la2 la3 lga1 lga2 lga3 ma1 ma2 ma3

1st inertia, Ia1 2nd inertia, Ia2 3rd inertia, Ia3 1st inertia, Ii1 2nd inertia, Ii2 3rd inertia, Ii3

diagð7:453, 7:453, 0:260Þ  101 diagð3:397, 3:397, 0:128Þ  101 diagð0:291, 0:291, 0:500Þ  101 diagð7:725, 7:725, 0:450Þ  103 diagð2:060, 2:060, 0:120Þ  103 diagð0:538, 0:538, 0:031Þ  103

Fingertip radius, ri Stiffness coefficient, ki Damping function, ξi

1.300 m 1.000 m 0.175 m 0.650 m 0.500 m 0.0875 m 1.300 kg 1.000 kg 0.400 kg

li1 li2 li3 lgi1 lgi2 lgi3 mi1 mi2 mi3 (kg (kg (kg (kg (kg (kg

0.300 m 0.200 m 0.140 m 0.150 m 0.100 m 0.070 m 0.250 kg 0.150 kg 0.100 kg m2) m2) m2) m2) m2) m2)

0.070 m 5 2 1.000  2 10 2 N/m 1.000 ri  Δri π Ns/m2

Object Mass (m) Y1 Y2 Y3 θt1 θt2 θt3 Inertia, I

0.037 kg 0.092 m 0.048 m 0.048 m 1.309 rad 1.309 rad 0.524 rad diagð1:273, 0:193, 1:148Þ  103 (kg m2)

uoreal ðtÞ ¼ Ko

N X     J Ωi ðtÞT f r x ðt  tdelay Þ  r xd + ry ðt  tdelay Þ  ryd i¼1

  + rz ðt  tdelay Þ  r zd g,

(9.20)

where the orientation of the measured-object frame is expressed in the form   of a rotational matrix, RðtÞ ¼ rx ðt Þ, ry ðtÞ,r z ðtÞ 2 SOð3Þ. The sampling rate was taken to be h ¼ 20 Hz, and the time delay due to the computational cost of image processing and the latency of data communication was taken to be timage ¼ 50 ms.

176

Human Inspired Dexterity in Robotic Manipulation

Table 9.2 Nominal desired grasping force and gains for the simulations

fd

10.0 N

Kp Ko Ca C1 C2 C3 xd Rd

4.762 0.238 diagð1:003, 0:651, 0:735, 0:278, 0:177Þ  101 (Ns m/rad) diagð0:606, 0:687, 0:786, 0:642, 0:198Þ  102 (Ns m/rad) diagð0:468, 0:780, 0:318, 0:099Þ  102 (Ns m/rad) diagð0:648, 0:780, 0:318, 0:099Þ  102 (Ns m/rad) T 2ð0:100, 0:500, 0:700Þ 3(m) 0:88 0:32 0:34 4 0:34 0:94 0:00 5 0:32 0:12 0:94

Table 9.3 Initial conditions of the simulations

q_ qa q01 q02 q03 x_ x ω R

0 (rad/s) ð0:183,  1:369, 1:898, 1:343,  0:787ÞT (rad) ð1:007, 0:235,  0:771, 1:338, 0:328ÞT (rad) ð0:242,  0:733, 1:122, 0:754ÞT (rad) ð2:019,  0:924, 0:912, 1:088ÞT (rad) 0 (m/s) ð0:158, 0:501, 0:681ÞT (m) 20 (rad/s) 3 1:00 0:00 0:00 4 0:00 1:00 0:00 5 0:00 0:00 1:00

Figs. 9.4 and 9.5 show the transient responses of the x component of the measured x and of θz, which is the measured rotational angle around the z-axis expressed in terms of XYZ Euler angles, respectively. In these figures, xnew and θznew represent the transient responses of x and θz when the proposed control inputs up(t) and uo(t) are utilized, whereas xpre and θzpre represent the transient responses of x and θz when the previously proposed control inputs upreal(t) and uoreal(t) are utilized. When the proposed method is used, the oscillation is reduced, and the convergence rate is faster compared with the case in which the previous method is used, even with the low-sampling rate and long-time delays. Figs. 9.6 and 9.7 show the behavior of the positions and orientations of the measured- and virtual-object frames when the control inputs up ðtÞ and

A Grasping and Manipulation Scheme

177

0.2

Object position

x (m)

xd xnew 0.1

xpre

0

–0.1 0

2

4

6 Time (s)

8

10

Fig. 9.4 Transient responses of the x component of the position of the measured-object frame, x, where the sampling rate is h ¼ 20 Hz and the time delay due to the computational cost of image processing and the latency of data communication is timage ¼ 50 ms.

Object orientation

µz (degrees)

40

µzd µz new 20

µz pre

0

–20 0

2

6

4

8

10

Time (s)

Fig. 9.5 Transient responses of θz, which is the measured rotational angle around the z-axis expressed in terms of XYZ Euler angles, where the sampling rate is h ¼ 20 Hz and the time delay due to the computational cost of image processing and the latency of data communication is timage ¼ 50 ms.

uo ðtÞ are used for the case in which the sampling rate is h ¼ 20 Hz and the time delay due to the computational cost of image processing and the latency of data communication is timage ¼ 50 ms. Fig. 9.6 shows the x components of the positions of the measured-object frame, x; the virtual-object frame, xvir; and the desired virtual-object frame, xdvir . Fig. 9.7 shows the θz component of the orientations of the measured-object frame, R; the θzvir component of the orientation of the virtual-object frame, Rvir; and the

Human Inspired Dexterity in Robotic Manipulation

178

Object position x and virtual position xvir (m)

θzdvir component of the orientation of the desired virtual-object frame, Rdvir , expressed in terms of XYZ Euler angles. These figures illustrate that xdvir and Rdvir are updated in a step-like pattern with every visual image acquisition interval. Fig. 9.8 shows snapshots from the simulation. In this figure, both the measured and virtual frames simultaneously approach and finally converge to their desired values. 0.2

xd x 0.1

xdvir xvir

0

–0.1 0

6 4 Time (s)

2

10

8

Object orientation µz and virtual orientation µzvir (degrees)

Fig. 9.6 Transient responses of the x components of the position of the measuredobject frame, x; the position of the virtual-object frame, xvir; and the position of the desired virtual-object frame, x dvir , when the control inputs up and uo are used, where the sampling rate is h ¼ 20 Hz and the time delay due to the computational cost of image processing and the latency of data transmission is timage ¼ 50 ms.

40

µzd µz µzdvir µzvir

20

0

–20 0

2

4

6

8

10

Time (s)

Fig. 9.7 Transient responses of the θz component of the orientation of the measuredobject frame, R, and the θzvir component of the orientation of the virtual-object frame, Rvir, expressed in terms of XYZ Euler angles when the control inputs up and uo are used, where the sampling rate is h ¼ 20 Hz and the time delay due to the computational cost of image processing and the latency of data communication is timage ¼ 50 ms.

A Grasping and Manipulation Scheme

179

Fig. 9.8 Snapshots of the simulation of the case in which the sampling rate is h ¼ 20 Hz and the time delay due to the computational cost of image processing and the latency of data communication is timage ¼ 50 ms.

Additionally, it is confirmed that the required conditions (9.14), (9.15) to guarantee convergence to the desired state are satisfied in all simulations.

9.6 EXPERIMENTS Experiments involving object manipulation using a prototype were performed to demonstrate the practical usefulness of the proposed method. Each parameter of the system is listed in Table 9.4. The grasped object was a triangular prism, and it was made of styrene form to ensure that its

180

Human Inspired Dexterity in Robotic Manipulation

Table 9.4 Physical parameters Three-fingered robotic hand

1st link length, li1 2nd link length, li2 3rd link length, li3 1st center of mass, lgi1 2nd center of mass, lgi2 3rd center of mass, lgi3 1st mass, mi1 2nd mass, mi2 3rd mass, mi3 (Fingertip) Radius, ri Physical properties, si

0.064 m 0.064 m 0.030 m 0.023 m 0.035 m 0.010 m 0.038 kg 0.024 kg 0.054 kg 0.015 m 2.390  106 (N/m2)

Table 9.5 Details of the grasped object Triangular prism

Mass (m) Material (Figure) Length of side of triangle Height

0.0015 kg Styrene foam 0.060 m 0.039 m

weight would be sufficiently light that the effect of gravity would be negligible. Each parameter of the grasped object and each gain are listed in Tables 9.5 and 9.6. A stereo optical tracking system (Micron Tracker H3-60, Claron, Inc.) was utilized to measure the position and orientation of the grasped object. This system could measure the position and orientation of the grasped object every 50 ms by tracking a specific marker on the object. The overall system is shown in Fig. 9.9. Two types of time delays were present in the visual-sensing system, tsample and timage, as mentioned in Section 9.4.1. The sampling rate of the system, h ¼ 20 Hz, was not sufficient to allow the direct use of a real-time feedback approach. The average time delay due to the image processing burden and data communication latency was timage  50 ms. The time delay due to image processing was observed in real time by measuring the time from when an image was captured to the beginning of data transmission. The time delay due to the latency of data transmission from the visual-sensing system to the controller was measured in advance.

A Grasping and Manipulation Scheme

181

Table 9.6 Nominal desired grasping force and gains

fd

1.1 N

Kp Ko C1 C2 C3

70.0 4.0  102 diagð0:07, 0:07, 0:05, 0:03Þ  102 (Ns m/rad) diagð0:07, 0:07, 0:05, 0:03Þ  102 (Ns m/rad) diagð0:07, 0:07, 0:05, 0:03Þ  102 (Ns m/rad)

Triple-fingered robotic hand Micron tracker Marker

Object

Fig. 9.9 Experimental setup.

In these experiments, the robotic hand had already grasped the object initially. Two types of experiments were performed, similar to the numerical simulations. One experiment used the proposed method, and the other used the previously proposed method. Fig. 9.10 shows the transient responses of the x component of the position of the measured-object frame, x. In this figure, xnew represents the transient response of x(t  tdelay) when the proposed control inputs up ðtÞ and uo ðt Þ are utilized, and xpre represents the transient response of x(t  tdelay) when the previously proposed control inputs upreal(t) and uoreal(t) are utilized. Fig. 9.11 shows the transient responses of θx, which is the rotational angle around the x-axis expressed in terms of XYZ Euler angles. In this figure, θxnew represents θx(t  tdelay) when the proposed control inputs up(t) and uo(t) are utilized, and θxpre represents θx(t  tdelay) when the previously proposed control inputs upreal(t) and uoreal(t) are utilized. In these figures, the position and orientation of the measured-object frame converge to the desired values when the new control inputs are used. However, when the previous method is applied, the behavior of the overall system becomes unstable, and grasping of the object ultimately fails. Thus, it is

Human Inspired Dexterity in Robotic Manipulation

182

0.03

Measured position

x (m)

xd xnew 0.02

xpre

0.01

0

0

2

6

4

8

10

Time (s)

Measured orientation

µx

(degrees)

Fig. 9.10 Transient responses of the x component of the position of the measuredobject frame, x.

50

µx µxnew

25

µxpre 0

–25

–50 0

2

6

4

8

10

Time (s)

Fig. 9.11 Transient responses of the θx component of the orientation of the measuredobject frame, R, expressed in terms of XYZ Euler angles.

confirmed that the proposed visual servoing method can achieve stableobject grasping and manipulation even when the sampling rate is low and in the presence of considerable time delays, unlike the previous method. Fig. 9.12 shows the transient responses of the x components of the positions of the measured-object frame, the virtual-object frame, and the desired virtual-object frame when the control inputs up ðtÞ and uo ðt Þ are used. In addition, Fig. 9.13 shows the transient responses of θx, which is the measured rotational angle around the x-axis expressed in terms of XYZ Euler angles. The desired position and orientation of the virtual-object frame are updated in a step-like pattern with every visual image acquisition interval. Finally,

Measured position x and virtual position xvir (m)

A Grasping and Manipulation Scheme

183

0.03

xd x 0.02

xdvir xvir

0.01

0 0

2

6

4

10

8

Time (s)

(degrees)

Fig. 9.12 Transient responses of the x components of the positions of the measuredobject frame, x, and the virtual-object frame, xvir.

50

Object orientation µx and virtual orientation µx

vir

µxd µx

25

µxdvir µxvir

0

–25

–50 0

2

6

4

8

10

Time (s)

Fig. 9.13 Transient responses of the θx component of the orientation of the measuredobject frame, R, and the θxvir component of the orientation of the virtual-object frame, Rvir, expressed in terms of XYZ Euler angles.

both the measured- and virtual-object frames generally converge to the desired values with no unstable behavior. Thus, these experimental results confirm the practical usefulness of the proposed visual servoing method.

9.7 CONCLUSION In this chapter, we presented a visual servoing method for object manipulation that is robust to not only temporary loss, but also a considerable time

184

Human Inspired Dexterity in Robotic Manipulation

delay of visual information. First, a control scheme for visual servoing for object manipulation was proposed to avoid unstable behavior caused by two types of time delays. In this scheme, the virtual-object frame is complementarily utilized as a set of nominal control variables during the visual image acquisition interval. Next, the effectiveness of the proposed method was verified through numerical simulations. Finally, its practical usefulness was demonstrated through experiments. In our future work, the stability of the overall system will be extensively analyzed. In addition to a detailed stability analysis, the necessary conditions to achieve robustness against time delays should be investigated in more detail.

REFERENCES [1] P.I. Corke, M.C. Good, Dynamics effects in visual closed-loop systems, IEEE Trans. Robot. Autom. 12 (5) (1996) 671–683. [2] A.J. Koivo, N. Houshangi, Real-time vision feedback for servoing robotic manipulator with self-tuning controller, IEEE Trans. Syst. Man Cybern. 21 (1) (1991) 134–142. [3] K. Hashimoto, H. Kimura, Visual servoing with nonlinear observer, Proceedings IEEE International Conference Robotic Automation, 1995, pp. 484–489. Nagoya, Japan. [4] H. Fujimoto, Y. Hori, Visual servoing based on intersample disturbance rejection by multirate sampling control—time delay compensation and experiment verification, Proceedings IEEE Conference Decision Control, 2001, pp. 334–339. Orlando, FL. [5] K. Sasajima, H. Fujimoto, 6 DOF multirate visual servoing for quick moving objects, Proceedings of the 2007 American Control Conference, 2007, pp. 1538–1543. New York, USA. [6] A. Kawamura, K. Tahara, R. Kurazume, T. Hasegawa, Dynamic grasping for an arbitrary polyhedral object by a multi-fingered hand-arm system, Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009, pp. 2264–2270. St Louis, MO. [7] A. Kawamura, K. Tahara, R. Kurazume, T. Hasegawa, Dynamic object manipulation using a multi-fingered hand-arm system: enhancement of a grasping capability using relative attitude constraints of fingers, Proceedings International Conference on Advanced Robotics, 2011, pp. 8–14. Estonia, Tallinn. [8] A. Kawamura, K. Tahara, R. Kurazume, T. Hasegawa, Robust manipulation for temporary lack of sensory information by a multi-fingered hand-arm system, Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011. San Francisco, CA. [9] T. Wimb€ ock, C. Ott, G. Hirzinger, Passivity-based object-level impedance control for a multifingered hand, Proceedings of the 2006 IEEE/RSJ on Intelligent Robots and Systems, 2006, pp. 4621–4627. Beijing, China. [10] K. Tahara, S. Arimoto, M. Yoshida, Dynamic object manipulation using a virtual frame by a triple soft-fingered robotic hand, Proceedings IEEE International Conference Robotic Automation, 2010, pp. 4322–4327. Anchorage, AK.

A Grasping and Manipulation Scheme

185

FURTHER READING [11] L.E. Weiss, A.C. Sanderson, C.P. Neuman, Dynamic sensor-based control of robots with visual feedback, IEEE Trans. Robot. Autom. 3 (5) (1987) 404–417. [12] J.T. Feddema, O.R. Mitchell, Vision-guided servoing with feature-based trajectory generation, IEEE Trans. Robot. Autom. 5 (5) (1989) 691–700. [13] S. Hutchinson, G.D. Hager, P.I. Corke, A tutorial on visual servo control, IEEE Trans. Robot. Autom. 12 (5) (1996) 651–670. [14] N.R. Gans, S.A. Hutchinson, Stable visual servoing through hybrid switched-system control, IEEE Trans. Robot. 23 (3) (2007) 530–540.

CHAPTER 10

Planning Dexterous Dual-Arm Manipulation Kensuke Harada

Graduate School of Engineering Science, Osaka University, Toyonaka, Japan

Contents 10.1 Introduction 10.2 Definitions 10.2.1 Notation 10.2.2 Grasp Planner 10.2.3 Object Placement Planner 10.2.4 Manipulation Paths 10.3 Manipulation Planner 10.3.1 Graph Structure 10.3.2 Graph Components 10.3.3 Component Connection 10.3.4 Path Optimization 10.4 Examples 10.5 Conclusions References Further Reading

187 188 188 188 189 189 191 191 191 193 194 194 199 200 200

10.1 INTRODUCTION Object manipulation is one of the most typical tasks required for a robot manipulator to realize. However, it is still difficult for a robot manipulator to automatically generate the motion of object manipulation, especially when the manipulation task has to be performed in a highly cluttered environment. So as to assemble the object with an adequate grasping posture, the robot has to once regrasp the object. When regrasping the object, both hands are used simultaneously or not according to the context, whereas the objective is to minimize the amount of regrasping. Dual-arm manipulators allow several types of regrasping such as (1) regrasping between the right and left hands, (2) once placing an object Human Inspired Dexterity in Robotic Manipulation https://doi.org/10.1016/B978-0-12-813385-9.00010-8

© 2018 Elsevier Inc. All rights reserved.

187

188

Human Inspired Dexterity in Robotic Manipulation

by using the right or left hand and then regrasping it by using the right or left hand, and (3) once placing an object by using both hands and then regrasping it by using both hands, etc. Moreover, dual-arm manipulator motions induce a special topology in the manipulation space. Indeed, the manipulation space is structured into four foliated manifolds: the manifold in which the robot moves alone, the manifolds in which the left (respectively, right) hand moves the object, and the manifold in which both hands move the object. A regrasping strategy should be selected according to the context of the task. So to seamlessly generate a regrasping motion, we have to construct a manipulation graph that accounts for the special topology of the manipulation space of dual-arm manipulation. In this section, we describe the dexterous manipulation planner for dualarm manipulators [1]. Here, the manipulation graph is originally proposed in [2] for a single manipulator. Grasp and placement configurations are, respectively, computed by the grasp planner proposed in [3, 4] and the object placement planner proposed in [5].

10.2 DEFINITIONS In this section, we introduce the definitions required to describe a manipulation problem.

10.2.1 Notation Fig. 10.1 shows the manipulation space used to plan a manipulation task. Let us consider a robot having n-DOF right arm and n-DOF left arm. Let CSr, CSl, and CSo be the configuration space of the right arm, the left arm, and the object, respectively. The manipulation space is defined by CS ¼ CSr  CSl  CSo. Let CSfree be the collision-free subset of CS. Let CP be the domain in CS where the object is stably placed on the environment. Also, let CGr and CGl be the domain in CS where the object is stably grasped by the right and the left hands, respectively. CP, CGr, and CGl are subdimensional manifolds in CS.

10.2.2 Grasp Planner A stable grasp is realized by using the grasp planner proposed in [3, 4]. Before executing the manipulation planner, the grasp planner generates multiple candidates of stable grasping postures for a given object. More concretely speaking, the output of the grasp planner is a set of position/orientation

Planning Dexterous Dual-Arm Manipulation

189

Fig. 10.1 The manipulation space gathers the three configuration spaces of the object, the left arm and the right arm, respectively.

of the palm with respect to the object coordinate system and the finger joint angles. We define the ith candidate of the right-hand grasp gri and the lefthand grasp gli in CGr and CGl, respectively.

10.2.3 Object Placement Planner Stable object placements are computed by using the object placement planner [5]. The object placement planner generates multiple object postures stably placed on the environment surface. We denote by pj the jth object placement in CP. A combination of the right-hand grasp gri and the placement pj defines a configuration in CGr \ CP.

10.2.4 Manipulation Paths A solution path of the manipulation planning problem consists of the following two kinds of paths [2] (Fig. 10.2): • Transit path where the robot moves alone while the object stays stationary in a stable position. Transit paths induce a foliation of CP. • Transfer path where the robot moves while stably grasping the object. Transfer paths induce a foliation of CGr or CGl. The manipulation planning problem is to find a sequence of the transfer and the transit paths connecting the starting configuration qi and the goal configuration qf in CP [ CGr [ CGl . We now focus on CGr \ CP. Two foliation structures are induced, respectively, by the transfer paths and by the transit paths. Let us consider

190

Human Inspired Dexterity in Robotic Manipulation

(A)

(B) Fig. 10.2 Transfer and transit paths. (A) Transit path. (B) Transfer path.

two configurations corresponding to the grasp gr1 and gr2 and the object placement p1 and p2, noted (gri, pi)i¼1, 2. To connect (gr1, p1) to (gr2, p2), we can define two intermediate nodes as (gr1, p2) and (gr2, p1). As shown in Fig. 10.3, now we can define three kinds of paths to connect (gr1, p1) to (gr2, p2): • Type1: A direct path from (gr1, p1) to (gr2, p2) lying inside CGr \ CP. • Type2a: A transfer path from (gr1, p1) to (gr1, p2) followed by a transit path from (gr1, p2) to (gr2, p2). (g1, p1)

(g1, p2) (g2, p1) (g2, p2)

Fig. 10.3 Three types of node connection.

Planning Dexterous Dual-Arm Manipulation

191

Type2b: A transit path from (gr1, p1) to (gr2, p1) followed by a transfer path from (gr2, p1) to (gr2, p2). Type1 paths are not physically realizable due to physical constraint. If two nodes are connected by using a Type1 path, it is replaced by a sequence of Type2a or Type2b paths in a postprocess of the manipulation plan. This is the consequence of the so-called reduction property demonstrated in [2]. We also note that, although we just focus on CGr \ CP, three kinds of paths can also be defined in CGl \ CP. On the other hand, CGr \ CGl also has foliation structure. Especially, CGr \ CGl \ CP has a three-foliation structure. However, as shown in the next section, we can plan the object manipulation without considering the foliation structure in CGr \ CGl . •

10.3 MANIPULATION PLANNER 10.3.1 Graph Structure Based on the manipulation space defined in the previous section (Fig. 10.1), we now construct the manipulation graph. First, we consider categorizing the object placement CP into three classes: initial, intermediate, and target placements, respectively. Then, according to the grasping state CGr, CGl, and CGr \ CGl , the manipulation space is divided into ten components as shown in Fig. 10.4. Here, each component includes a set of configurations denoted as nodes. The component belonging to the initial object placement has a single object pose with multiple grasping postures because the grasp planner generates multiple grasping configurations. On the other hand, in the intermediate object placement, multiple object configurations may be associated with multiple grasping configurations. Two components are connected by using one of the three kinds of paths: transit, transfer, or Type2 path. Also, although all components of the manipulation graph have a foliation structure, the search is performed only by considering the foliation structures of the intersection between the intermediate object placement and the right-hand and left-hand hand configuration spaces. The reason will be explained later in this section.

10.3.2 Graph Components We now explain how to generate each component of the manipulation graph.

192

Human Inspired Dexterity in Robotic Manipulation

Fig. 10.4 Graph structure used in manipulation planning.

10.3.2.1 Initial Object Placement For the initial object placement pini, we consider the right-hand grasp gri and the left-hand grasp gli, (i ¼ 1, …, I). Let us consider the case of CGr \ CP, we consider solving the inverse kinematics of the right arm for the given combination of (gri, pini). If the inverse kinematics is solvable and the arm posture is collision free, we consider applying this configuration as a node included in the component of the right-hand grasping space at the initial placement. The same discussion applies for CGl \ CP and CGr \ CGl \ CP. 10.3.2.2 Intermediate Object Placement CGr \ CP and CGl \ CP manifolds are structured along two foliations. So, any path in these manifolds may be approximated by a sequence of manipulation paths. We make use of the Visibility-PRM algorithm [2, 6] to capture the connected components of these manifolds. Visibility-PRM tends to generate a minimum number of nodes covering CGr \ CP and CGl \ CP. The steering method used to connect two nodes is based on the computation of Type1 paths. By using the Visibility-PRM, we can expect that the number of regrasps included in the solution path is not large.

Planning Dexterous Dual-Arm Manipulation

193

Let us now consider CGr \ CGl . For simplicity, we do not have to consider the foliation structure of this component. We consider generating a set of configurations in CGr \ CGl by randomly sampling the object pose in 3D and the grasping pose of each hand. 10.3.2.3 Target Object Placement Let us consider the object configuration at the target. For a set of target object placements pfj (j ¼ 1, …, J), we consider a set of grasps gri and gli, (i ¼ 1, …, I). Within CGr \ CP, we consider solving the inverse kinematics of the right arm for the given (gri, pfi). If the inverse kinematics is solvable and the arm configuration is collision free, we consider applying this configuration as a node included in this component of the graph. The same discussion applies for CGl \ CP and CGr \ CGl \ CP.

10.3.3 Component Connection Now, we consider connecting the various components of the graph. 10.3.3.1 Type2 Path Let us consider the case where two components are connected by using a Type2 path. We consider randomly selecting one node from both components and trying to connect them by using a Type2 path. We consider iterating this operation a predefined times. 10.3.3.2 Transfer Path To connect two components by using a transfer path, we first randomly select a node from one of the components. Then, while keeping the same grasp, we consider changing the robot posture so that the configuration is included in the other component. If we can find such a collision-free path, we conclude that the connection between two components can be established by using a transfer path. 10.3.3.3 Transit Path To connect two components by using a transit path, we first randomly select a node from one of the components. Then, while keeping the same object placement, we consider changing the grasp so that the configuration is included in the other component. If we can find such a collision-free path, we conclude that the connection between two components can be established by using a transit path.

194

Human Inspired Dexterity in Robotic Manipulation

10.3.3.4 Path Generation In this work, two components of the graph are connected by using either Type2, transit, or transfer paths. For all cases, a path between two configurations is computed by using a path planner such as [7]. Let us assign a group ID for each node. When connecting two nodes, we set that the ID is the same for two nodes. After connecting two components included in the manipulation graph, we check the ID of initial and target nodes. If the IDs are the same for both nodes, we search for the path connecting them. We use the depth-first search to obtain a solution path. If a Type1 path is included in the solution path, we consider replacing it by using a Type2 path. Completing a task by a dual-arm manipulator may require multiple manipulation styles. For example, pick-and-place in a simple environment may be realized only by using the right arm. The same task may be realized only by using the left arm. Or, sometimes the same task may be realized by once picking up the object with the right arm, and then placing it with the left arm. The manipulation style selected by using our proposed planner depends on the order of the component connection.

10.3.4 Path Optimization The obtained manipulation path may include several regrasping and placement operations. To reduce the motion time and increase the reliability of manipulation, it is desirable that the number of regrasps is minimized. We iteratively use three operations for path optimization [1]: transfer-transfer path optimization, transit-transit path optimization, and Type2-Type2 path optimization.

10.4 EXAMPLES First, we run the manipulation planner in the environment as shown in Fig. 10.5. We consider moving a rectangular object from a cage and place it on the middle of the table. A generated manipulation graph is shown in Fig. 10.5B. The solution path and its optimization are shown in Fig. 10.5C and D, respectively. As shown in this figure, the number of regrasps is much reduced. Fig. 10.6 shows the motion of the robot moving the object from the initial to the final placement. Fig. 10.7 shows the motion of the robot regrasping the object from the right hand to the left hand. The solution path in the manipulation graph (Fig. 10.4) is shown in Fig. 10.8A. In the second example, we also consider placing the object at the middle of the cage located at the left-hand side of the robot. Although it is possible

Planning Dexterous Dual-Arm Manipulation

(A)

(B)

(C)

(D)

195

Fig. 10.5 Manipulation by using right arm. (A) Manipulation by using right arm where the target point is shown by a small arrow at the middle of the table. (B) Manipulation graph. (C) Solution path. (D) Optimized path.

(A)

(B)

(C)

(D)

Fig. 10.6 Motion of robot manipulating an object by using right arm.

196

Human Inspired Dexterity in Robotic Manipulation

(A)

(B)

(C)

(D)

(E)

(F)

Fig. 10.7 Motion of the robot manipulating the object with regrasping/placing.

for the robot to grasp and to place the object by using the right hand, we first connected the component of CGr \ CP at the intermediate placement with the component of CGr \ CGl \ CP at the same placement and connected the component of CGl \ CP at the intermediate placement with the component of CGr \ CGl \ CP at the same placement. Then, the robot first grasps the object by using the right hand and places it on the middle of the table. Then, the robot regrasps the object by using the left hand and places it at the desired placement. The solution path in the manipulation graph is shown in Fig. 10.8C.

Planning Dexterous Dual-Arm Manipulation

(A)

197

(B)

(C) Fig. 10.8 Solution path in the manipulation graph. (A) Manipulation by using right arm. (B) Manipulation with regrasping. (C) Manipulation with regrasping/placing.

We also performed real-world experiments. The experimental setup is shown in Fig. 10.9. In this environment, the robot tries to pick up an object located in a tray and put it in a box placed on a table. As shown in the figure, the 3D data of the environment is captured by using a Kinect sensor and converted into a polyhedral model. The figure also shows the 3D model of the grasped object. Fig. 10.10 shows an experimental result of pick and place by using the right arm. On the other hand, Fig. 10.11 shows an experimental result of regrasping between the right and left hands. In this experiment, the robot is ordered to place the object on the box with an upside-down posture. As the regrasping is needed for this example, the regrasping motion is

(A)

(B)

(C)

Fig. 10.9 Experimental setup. (A) Snapshot of real environment. (B) Polygon model of environment. (C) Polygon model of object.

(A)

(B)

(C)

(D)

Fig. 10.10 Experimental result.

Planning Dexterous Dual-Arm Manipulation

(A)

(B)

(C)

(D)

(E)

(F)

199

Fig. 10.11 Experimental result.

selected. In both examples, the robot can stably place the object while the environment is highly cluttered. The solution path in the manipulation graph (Fig. 10.4) is shown in Fig. 10.8B.

10.5 CONCLUSIONS In this chapter, we explained that a dual-arm manipulation planner can realize several manipulation styles including pick-and-place operations. The proposed planner considers the subdimensional manifolds in the so-called manipulation space. We showed by examples that several manipulation

200

Human Inspired Dexterity in Robotic Manipulation

styles may be considered, such as single-arm grasping, regrasping from one to the other hand, regrasping and placing, and bimanual grasping. Recently, there have been several advancements with the manipulation motion planner. The following are some examples: Kaelbling et al. [8] formulated the manipulation motion planner in the belief space. Wan et al. [9] compared the efficiency of a single-arm regrasp with a dual-arm handover. The assembly motion planner has been researched by some researchers [10, 11].

REFERENCES [1] K. Harada, T. Tsuji, J.P. Laumond, Manipulation motion planner for dual-arm industrial manipulators, in: Proceedings of IEEE International Conference on Robotics and Automation, 2014. [2] T. Simeon, J.P. Laumond, J. Cortes, A. Sahbani, Manipulation planning with probabilistic readmaps, Int. J. Robot. Res. 23 (8) (2004) 729–746. [3] K. Harada, et al., Grasp planning for parallel grippers with flexibility on its grasping surface, in: Proceedings of IEEE International Conference on Robotics and Biomimetics, 2011. [4] K. Harada, et al., Pick and place planning for dual-arm manipulators, in: Proceedings of IEEE International Conference on Robotics and Automation, 2012, pp. 2281–2286. [5] K. Harada, et al., Object placement planner for robotic pick and place tasks, in: Proceedings of IEEE International Conference on Intelligent Robots and Systems, 2012, pp. 980–985. [6] T. Simeon, J.P. Laumond, C. Nissoux, Visibility-based probabilistic roadmaps for motion planning, Adv. Robot. 14 (6) (2000) 477–493. [7] MPK (Motion Planning Kit), Available from: http://robotics.stanford.edu/mitul/ mpk/. [8] L.P. Kaelbling, T. Lozano-Perez, Hierarchical task and motion planning in the now, in: Proceedings of IEEE International Conference on Robotics and Automation, 2011. [9] W. Wan, K. Harada, Developing and comparing single-arm and dual-arm regrasp, Robot. Autom. Lett. 1 (1) (2016) 243–250. [10] M. Dogar, A. Spielberg, S. Baker, D. Rus, Multi-robot grasp planning for sequential assembly operations, in: Proceedings of IEEE International Conference on Robotics and Automation, 2015. [11] W. Wan, K. Harada, Integrated assembly and motion planning using regrasp graphs, Robot. Biomim. 3 (8) (2016) 3–18.

FURTHER READING [12] J.P. Laumond, T. Simeon, Notes on visibility roadmaps for motion planning, algorithmic Foundations of Robotics (WAFR 00), 2001. [13] C. Smith, et al., Dual arm manipulation—a survey, Robot. Auton. Syst. 60 (2012) 1340–1353. [14] K. Harada, K. Kaneko, F. Kanehiro, Fast grasp planning for hand/arm systems based on convex model, in: Proceedings of IEEE International Conference on Robotics and Automation, 2008, pp. 1162–1168. [15] T. Tsuji, K. Harada, K. Kaneko, F. Kanehiro, K. Maruyama, Grasp planning for a multifingered hand with a humanoid robot, J. Robot. Mechatronics 22 (2) (2010) 230–238.

CHAPTER 11

Prospects of Further Dexterity in Robotic Manipulation by Imitating Human Beings Tetsuyou Watanabe

Faculty of Mechanical Engineering, Institute of Science and Engineering, Kanazawa University, Kanazawa, Japan

Contents 11.1 Summary

201

11.1 SUMMARY This text provides up-to-date research about human functions for dexterous object manipulations, and how to transfer human functions to robotic manipulation. The first section was focused on human functions, and we presented that recent revelations on hand anatomy (Chapter 2), the human learning process of dexterous manipulation in the context of sensorimotor functions (Chapter 3), the excitation of multisensory illusion in a surgical robotic system (Chapter 4), and the examination of human reaching behavior when manipulating parallel flexible objects (Chapter 5). In the second section, the methodologies to imitate human functions for dexterous robotic manipulation were presented. In Chapters 6 and 7, the design strategies for robotic hands were presented to attain the dexterity in robots. In Chapters 8 and 9 the focus was on thumb opposability and the control strategies based on that were presented. In Chapter 10, the planning strategy for dual-arm manipulation was presented. The results indicate that the approaches in imitating human functions with object manipulation are powerful and promising. However, robots cannot conduct dexterous manipulation like humans. Luckily, there are still a lot of unrevealed human functions, and so the capacity for improving the

Human Inspired Dexterity in Robotic Manipulation https://doi.org/10.1016/B978-0-12-813385-9.00011-X

© 2018 Elsevier Inc. All rights reserved.

201

202

Human Inspired Dexterity in Robotic Manipulation

dexterity of robots is large via this approach. To make robots dexterous, the two different approaches of revealing human functions and transferring the human functions to robots are important. Hopefully many researchers will make efforts in these directions, and robots will finally attain the dexterity level of humans.

INDEX Note: Page numbers followed by f indicate figures, and t indicate tables.

A Abduction and adduction (ad/b) motions, 98 Adaptation, 28–29, 46 use-dependent, 46–48 Anatomically corrected testbed (ACT) hand, 89–90 Anthropomorphic prosthetic hands, 89–90 Anthropomorphic robotic hands, 87–90, 89f, 106–107

B Ball-socket joint, 98–99 Bayesian Information Criterion, 35 Biological joints, hand, 98–100, 98–100f Biology research, design tools for, 90–91 Biomimetic anthropomorphic design, 88, 89f Biomimetic anthropomorphic robotic hand, 91 Biomimetic robotic hand crocheted artificial ligaments used in, 102f design, 89 development of highly, 101 dynamixel servos used in, 105t elastic pulley system implemented, 104f fully assembled, 103f performance fingertip trajectories, 108–111, 108–110f object grasping and manipulation, 111–112, 112f rapid prototyping process of, 101–105, 101f BioTac, 117–118 Blind grasp, 149, 162 on thumb opposability, 154–158, 156–158f

C Carpometacarpal (CMC) joint, 13, 15, 92–93, 99–100 active movements of, 13–14 of thumb, 100f

Center of mass model, 81 CMC joint. See Carpometacarpal (CMC) joint Compression behavior, as linear, 132–134, 133f, 134t Compression test, 127f of fluid fingertip, 119f with fluid fingertip, 128–129, 128–129f Control group, 33

D
Damper model, 4–5, 4f
DC servo motor, 56
Decimated generic model, 17–18
Deformation transfer, 17–18
Degree of freedom (DOF), 150
Degree of freedom (DOF) joint, 98–99
Denavit-Hartenberg (DH) convention, 100
Dexterity, 2
  acquisition, 2
  of robots, 201–202
Dexterous hand-object interactions, 28
Dexterous manipulation
  human data and potential applications to robotic, 48–49
  by robots, 2
Digital-hand
  link and joint of, 15–16
  research, 11–12
  surface mesh of, 16–17
  technology, 12
Digital-hand model, 12, 14–15
  individual, 17–18
  link structures of, 15, 15f
  mechanical analysis, 22–24
  motion analysis and posture synthesis, 20–22, 21f, 23f
  representative, 18–20
  with size variations, 17–20
Distal interphalangeal (DIP) joint, 14, 92–96, 94f, 98f
DRMC model, 35–36, 44
Dual-arm manipulator, 187–188
  motions, 193
Ductile objects, prevention of fracture on, 125–135
Dynamic environment, 62–63
  reaching movement in, 62f, 64–65
Dynamics of robots, 6

E
Elastic joint, 98–99
Electromyography (EMG) signals, 90
Endoscopic surgical robot, 53
Equation of motion, for robotics, 6
Error-based learning, 29
Euler-Lagrange equation, 64–65
Extensor tendons, 94–96, 94f, 96f

F
Finger-object system, dynamics and equilibrium points of, 153–154
Fingers, active movements of, 14
Finger-thumb opposability
  control law based on, 170–174
    conditions for convergence of desired state, 173–174
    control input, 171–173
    time delays, 170–171, 171f
Fingertip
  contact models of, 153f
  fluid (see Fluid fingertip)
  trajectories, biomimetic robotic hand performance, 108–111, 108–110f
Flexible object, 62
  multimass, 72
  parallel, 63, 81
  single mass, 63
  two-mass, 63, 72–73
Flexor tendons, 94–96, 94f, 96f
Fluid fingertip
  compression test with, 119f, 128–129, 128–129f
  contact with flat surface, 120–123, 121–122f
  contact with line surface, 123–125, 123–125f
  and control unit, 118f
  embedded gripper, versatile grasping by, 136–144
  enveloping mode, 139, 139–141f
  grasping test, 125, 126f
  parallel gripper mode, 139, 139–141f
  pinching mode, 139, 139–141f
  and soft object (tofu), interaction, 130f
  stiffness of, 118–120, 118f
  structure of, 116–118, 116–117f
  underactuated soft gripper utilizing, 136–140, 136f, 138f
  uniform contact pressure of, 120–125
Fluid pressure detection, 129–132, 132f
Fracture avoidance concept, 126–128
Fragile objects, prevention of fracture on, 125–135
Frictional-point contact model, 152
Friction-cone model, 152

G
Generalization, 30–31, 41–42
Grasp(ing), 149–150
  blind, 162
  controller, 149
  multifingered hand-arm system, 167–168, 168f
  planner, 188–189
  posture, 22
  power, 150
  precision (see Precision grasp)
  stable-object, 165–167
  style, 5, 5f
  superposition of lower priority tasks to stable, 158–159, 159f
  test, 125, 126f, 141f, 142t
    for soft (Kinugoshi) tofu, 134, 134f, 135t
  weight, 5
Grasping and manipulation, 165–167
  experiments, 179–183, 180t, 181–183f
  numerical simulation, 174–179, 175–176t, 177–179f
  supervisory control and autonomy of, 159–162, 160–161f

H
Hand anatomy, 91–92
  biological joints, 98–100, 98–100f
  bones and joints, 92–93, 92f
  extensor tendons, 94–96, 94f, 96f
  flexor tendons, 94–96, 94f, 96f
  joint ligaments, 93–94, 93f
  tendon sheaths, 96–97, 97f
Hand-and-object system, 153f
Hand control systems structure, 150–152, 151f
Hand dimensions, 17
  boundary family of, 19–20, 19f
Hand-eye coordination, 53
Hand-force-change model, 68–70
Hand-jerk model, 63
  minimum, 63–68, 73f, 76t, 77–78f, 78–79t
Hand manipulation, 150
Hand mass
  estimation by haptic force, 71f
  identification, 70–72, 74–75
Hand models, 12
Hand-object interactions, 28–29
Haptic force, hand mass estimation by, 71f
Haptic sensation, 56
Haptic simulator, 73
Human hand
  anatomy of, 12–14, 13f
  motions of, 14
  research, 11
Human-like reaching movements
  analysis, 76–79
  in dynamic environment, 62f, 64–65
  modeling of, 61–62, 62f

I
Interference groups (IF), 33
Internal representation, 35–36, 43–44
  of force scaling, 30–31
  of manipulation, 43–46
Interphalangeal (IP) joint, 14
  active movements of, 14
Intrinsic muscles, 14, 95
Intuitive control, 53
Intuitiveness, 59
Inverse internal models, 29

J
Joint capsule, 94

K
Kinematics vs. dynamics, 79

L
Learning
  dexterous manipulation, 31–32
  error-based, 29
  reinforcement, 29–30
  sensorimotor, 27–28
  trial-by-trial, 30–31
  use-dependent, 29–30, 47–48
Learning manipulation, 29–31
  interaction between multiple sensorimotor processes, 31–32, 31f
  protocols, 32–48
  data analysis, 34–35
  experimental results, 38–46, 39f, 43f, 45f
  model and simulation, 35–37

M
Manipulation paths, 189–191, 190f
Manipulation planner
  component connection, 193
  path generation, 194
  transfer path, 193
  transit path, 193
  type2 path, 193
  examples, 194–199, 195–199f
  graph components, 191
  initial object placement, 192
  intermediate object placement, 192–193
  target object placement, 193
  graph structure, 191, 192f
  path optimization, 194
Mass model, 4–5, 4f
Master-slave system (MSS), 53, 55, 151
  multisensory illusion on, 58–59, 58f
MCP joint. See Metacarpophalangeal (MCP) joint
Mechanical impedance, 5
Medicine research, design tools for, 90–91
Memory, sensorimotor learning and, 29–30
Metacarpal bones, 13
Metacarpophalangeal (MCP) joint, 13, 15, 55–56, 92–93, 95–96, 98–99
  of human fingers, 99f
Minimum hand-force-change model, 68–70
Minimum hand-jerk model, 63–68, 73f, 76t, 77–78f, 78–79t
MoCap system, 16, 20
Motion-capturing system, 12, 31–32
Motor behavior, skilled, 29
MSS. See Master-slave system (MSS)
Multifingered hand-arm system grasping, 167–168, 168f
Multisensory illusion, on master-slave system, 58–59, 58f
Multisensory integration and illusion, 54–55, 54f

N
Notation, 188, 189f

O
Object
  manipulation, 187
  placement planner, 189
Optimal control, 65–66, 80
Optimization models, 62

P
Pascal's law, 116–117
Pick-and-place operation, 150
PIP joint. See Proximal interphalangeal (PIP) joint
Power grasp, 5, 5f, 22, 150
  synthesis of, 23f
Precision grasp, 5, 5f, 22, 150
  contact models, 152–153, 153f
  effect of opposed motion in, 155f
  synthesis of, 23f
Prescale mat, 120–121, 123–124
Principal component analysis (PCA), 19–22
Prosthetic robotic hands
  anthropomorphic, 89–90
  development of, 90
  teleoperation of, 105–107, 106–107f
Proximal interphalangeal (PIP) joint, 14, 92–96, 94f, 98f
Psychophysics, 56

R
Random group, 33
Range of motion (ROM), 93, 93f, 98, 98–99f
Regrasping, 187–188
Reinforcement learning, 29–30
Retention, 30, 41
  group, 33
Retrieval index (RI), 34–35
RHI. See Rubber hand illusion (RHI)
Robot-hand research, 11
Robotically enhanced illusion, 55–58, 55f, 57f
Robotic dexterous manipulation
  human data and potential applications to, 48–49
Robotic hands, anthropomorphic, 87–90, 89f, 106–107
Robots
  dynamics of, 6
  kinematics and statics of, 6
ROM. See Range of motion (ROM)
Rubber hand illusion (RHI), 54–56

S
Sensorimotor learning, 27–28
  and memory, 29–30
Sensorimotor processes, interaction between multiple, 31–32, 31f
Silicon fingertip, 120, 120f
Single-mass flexible object, 63
Skilled motor behavior, 29
Soft-finger models, 152
Soft (Kinugoshi) tofu
  experimental setup for compression test of, 126, 127f
  fluid fingertip and, 130f
  grasping test for, 134, 134f, 135t
Spring model, 4–5, 4f
Stable grasping, 188–189
  superposition of lower priority tasks to, 158–159, 159f
Stable-object grasping, 165–167
State-space models, 29
State space representation, 4–5
Statistical hypothesis testing, 3, 4t
Surface mesh of digital hand, 16–17
Surgical robot, 53, 59

T
Task performance, 30–31
Tele-manipulation, 53
Tendon
  contraction motions of, 94–95
  extensor, 94–96, 94f
  flexor, 94–96, 94f
  sheaths, 96–97, 97f
Thumb
  carpometacarpal joint of, 100f
  opposability, blind grasp on, 154–158, 156–158f
  opposition, 149
Transfer group (TF), 33
Trial-by-trial learning, 30–31

U
Universal joint, 98–99
Use-dependent adaptation, 46–48
Use-dependent learning, 29–30, 47–48

V
Velcro straps, 55
Versatile grasping
  experimental validation of, 140–144
  by fluid-fingertip embedded gripper, 136–144
Virtual-object frame, 168–170, 169f
Virtual Soldier Santos system, 22
Volar plate, 94

W
Wrist, active movements of, 13
