Quan Quan · Kai-Yuan Cai

Filtered Repetitive Control with Nonlinear Systems

Quan Quan
School of Automation Science and Electrical Engineering
Beijing University of Aeronautics and Astronautics
Beijing, China

Kai-Yuan Cai
School of Automation Science and Electrical Engineering
Beijing University of Aeronautics and Astronautics
Beijing, China

ISBN 978-981-15-1453-1    ISBN 978-981-15-1454-8 (eBook)
https://doi.org/10.1007/978-981-15-1454-8

© Springer Nature Singapore Pte Ltd. 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Foreword

Since its inception in 1981, repetitive control (RC) has become a major chapter of control theory, with applications as diverse as power supplies, robotic manipulators, and quadcopters. These may have in common the requirement that the system track a periodic reference signal or reject a periodic disturbance or do both.

This book, by two well-known control researchers at the Beijing University of Aeronautics and Astronautics, aims to provide state-of-the-art coverage of RC, with due attention to theoretical precision combined with a strong emphasis on engineering design. The basic design challenge is to achieve an appropriate trade-off between the mutually conflicting goals of steady-state tracking accuracy and robust internal stability.

As their starting point, the authors introduce the familiar internal model principle of linear regulation, but now for a generic, not necessarily continuous, periodic reference signal. This infinite-dimensional extension raises new issues of stabilizability resolved by filtered repetitive control (FRC). FRC lays the groundwork for an extensive treatment of alternative design approaches to both linear and nonlinear systems, including the technique (original with the authors) of “additive state decomposition”.

The book is well suited to a course on engineering design for readers with some preparation in ordinary differential-delay equations and Lyapunov stability. I recommend it as a timely and significant contribution to the current literature on RC.

September 2019

W. M. Wonham
Systems Control Group, ECE Department
University of Toronto
Toronto, ON, Canada


Preface

Repetition is the mother of all learning —A Latin Phrase

In nature, numerous examples of periodic phenomena can be observed, ranging from the orbital motion of celestial bodies to the heart rate. In practice, many control tasks are of a periodic nature as well. Industrial manipulators are often required to track or reject periodic exogenous signals when performing operations such as picking, dropping, and painting. Further special applications include magnetic spacecraft attitude control, active control of helicopter vibration, vertical landing on an oscillating platform, harmonic elimination in aircraft power supplies, satellite formation, LED light tracking, control of hydraulic servomechanisms, and lower limb exoskeleton control. For these periodic control tasks, repetitive control (RC, or repetitive controller, also abbreviated RC) enables high-precision control performance.

RC is derived from the internal model principle and contains a special structure with time-delay components that play the role of memory. At its root, RC is a form of compensation control, or predictive control, that exploits this additional memory. RC was originally developed for continuous single-input, single-output linear time-invariant (LTI) systems for high-precision tracking of periodic signals with a known period, and it was later extended to multiple-input multiple-output LTI systems. Since then, RC has been propelled to the forefront of research and development in control theory. However, previous studies focused on theories and applications that use frequency-domain methods for LTI systems, while RC for nonlinear systems has received limited attention. What is more, RC often faces robustness problems, including stability robustness against uncertain system parameters and performance robustness against an uncertain or time-varying period of the external signals. For these problems, filter design based on frequency-domain analysis is the main tool, which has developed into filtered RC. However, such filters are difficult to apply, if applicable at all, to nonlinear systems. Therefore, we have written this book on the use of filtered RC with nonlinear systems.

As an outcome of a course developed at Beihang University (Beijing University of Aeronautics and Astronautics, BUAA), this book aims at providing more methods and tools for students and researchers in the field of RC to explore its potential. In this book, commonly used methods such as the feedback linearization method and the adaptive-control-like method are summarized and further modified into filtered RC. However, feedback linearization, or deriving the error dynamics, is often difficult to perform for various reasons. To solve this problem, three new methods parallel to the two methods mentioned above are also proposed: the additive-state-decomposition-based method, the actuator-focused design method, and the contraction mapping method.

To be specific, an introduction (Chap. 1) and preliminaries (Chaps. 2–4) are presented in the first four chapters, where the preliminaries consist of a mathematics preliminary (Chap. 2), a brief introduction to RC for linear systems (Chap. 3), and the robustness problem of RC systems (Chap. 4), which together illustrate what RC is and why filtered RC must be used. After that, this book gives basic but new methods to solve RC problems for some special nonlinear systems: commonly used methods such as the linearization method (Chap. 5) and the adaptive-control-like method (Chap. 6) are summarized. They consist of both previous research findings and the authors' contributions. In addition, three new methods parallel to the two methods mentioned above are proposed: the additive-state-decomposition-based method in Chaps. 7–8, which bridges LTI systems and nonlinear systems so that the linear RC methods can be used for nonlinear systems; the actuator-focused design method in Chap. 9, derived from another viewpoint of the internal model principle proposed by the authors; and the contraction mapping method (Chap. 10), which is another attempt of the authors to solve RC problems for nonlinear systems without the need for corresponding Lyapunov functions.

Beijing, China

Quan Quan Kai-Yuan Cai

Contents

1 Introduction
  1.1 Basic Idea of Repetitive Control
    1.1.1 Basic Concept
    1.1.2 Internal Model Principle
  1.2 Brief Overview of Repetitive Control for Linear System
  1.3 Repetitive Control for Nonlinear System
    1.3.1 Major Repetitive Controller Design Method
    1.3.2 Existing Problem in Repetitive Control
  1.4 Objective and Structure of This Book
  References

2 Preliminary
  2.1 System-Related Preliminary
    2.1.1 Noncausal Zero-Phase Filter
    2.1.2 Sensitivity and Complementary Sensitivity Function
    2.1.3 Small Gain Theorem
    2.1.4 Positive Real System
  2.2 Transformation-Related Preliminary
    2.2.1 Schur Complement
    2.2.2 Feedback Linearization
    2.2.3 Additive State Decomposition
  2.3 Stability-Related Preliminary
    2.3.1 Barbalat's Lemma
    2.3.2 Ordinary Differential Equation
    2.3.3 Functional Differential Equation
  2.4 Rejection Problem and Tracking Problem
  References

3 Repetitive Control for Linear System
  3.1 Repetitive Control Based on Transfer Function Model
    3.1.1 Continuous-Time Transfer Function
    3.1.2 Discrete-Time Transfer Function
  3.2 Repetitive Control Based on State-Space Model
    3.2.1 Problem Formulation
    3.2.2 Repetitive Controller Design
    3.2.3 Numerical Simulation
  References

4 Robustness Analysis of Repetitive Control Systems
  4.1 Measurement of Stability Margin
  4.2 Robustness Analysis of Internal Model System
  4.3 Robustness Analysis of Repetitive Control System
    4.3.1 Stability Margin of Repetitive Control System
    4.3.2 Limitation of Repetitive Control System
    4.3.3 Filtered Repetitive Control
  4.4 Summary
  References

5 Filtered Repetitive Control with Nonlinear Systems: Linearization Methods
  5.1 Repetitive Control for System with State-Related Nonlinearity
    5.1.1 Problem Formulation
    5.1.2 Repetitive Controller Design Under Assumption 5.1
    5.1.3 Repetitive Controller Design Under Assumption 5.2
  5.2 Repetitive Control for Systems with Input-Related Nonlinearity
    5.2.1 Problem Formulation
    5.2.2 Repetitive Controller Design
  5.3 Summary
  References

6 Filtered Repetitive Control with Nonlinear Systems: An Adaptive-Control-Like Method
  6.1 Problem Formulation
  6.2 Simple Example
  6.3 New Model of Periodic Signal
  6.4 Filtered Repetitive Controller Design
    6.4.1 Controller Design
    6.4.2 Stability Analysis
  6.5 Filtered Repetitive Controller Design with Saturation
  6.6 Application 1: Robotic Manipulator Tracking
    6.6.1 Problem Formulation
    6.6.2 Controller Design
    6.6.3 Assumption Verification
    6.6.4 Numerical Simulation
  6.7 Application 2: Quadcopter Attitude Control
    6.7.1 Problem Formulation
    6.7.2 Assumption Verification
    6.7.3 Controller Design and Numerical Simulation
  6.8 Summary
  6.9 Appendix
    6.9.1 Proof of Lemma 6.1
    6.9.2 Detailed Proof of Theorem 6.1
    6.9.3 Proof of Lemma 6.2
  References

7 Continuous-Time Filtered Repetitive Control with Nonlinear Systems: An Additive-State-Decomposition Method
  7.1 Problem Formulation
  7.2 Additive-State-Decomposition-Based Repetitive Control Framework
    7.2.1 Decomposition
    7.2.2 Controller Design
  7.3 Application to TORA Benchmark Problem
    7.3.1 Additive State Decomposition of TORA Benchmark
    7.3.2 Filtered Repetitive Controller Design for Primary System
    7.3.3 Stabilizing Controller Design for Secondary System
    7.3.4 Controller Synthesis for Original System
    7.3.5 Numerical Simulation
  7.4 Summary
  7.5 Appendix
    7.5.1 Proof of Theorem 7.2
    7.5.2 Proof of Proposition 7.1
    7.5.3 Proof of Proposition 7.2
  References

8 Sampled-Data Filtered Repetitive Control with Nonlinear Systems: An Additive-State-Decomposition Method
  8.1 Problem Formulation
    8.1.1 System Description
    8.1.2 Objective
  8.2 Sampled-Data Output-Feedback Robust Repetitive Control Using Additive State Decomposition
    8.2.1 Additive State Decomposition
    8.2.2 Controller Design for Primary System and Secondary System
    8.2.3 Controller Synthesis for Original System
  8.3 Illustrative Example
    8.3.1 Problem Formulation
    8.3.2 Controller Design
    8.3.3 Controller Synthesis and Simulation
    8.3.4 Comparison
  8.4 Summary
  References

9 Filtered Repetitive Control with Nonlinear Systems: An Actuator-Focused Design Method
  9.1 Motivation and Objective
  9.2 Actuator-Focused Viewpoint on Internal Model Principle
    9.2.1 General Idea
    9.2.2 Three Examples
    9.2.3 Filtered Repetitive Control System Subject to T-Periodic Signal
  9.3 Actuator-Focused Repetitive Controller Design Method
    9.3.1 Linear Periodic System
    9.3.2 General Nonlinear System
  9.4 Numerical Example
    9.4.1 Linear Periodic System
    9.4.2 Minimum-Phase Nonlinear System
    9.4.3 Nonminimum-Phase Nonlinear System
  9.5 Summary
  9.6 Appendix
    9.6.1 Proof of Theorem 9.3
    9.6.2 Uniformly Ultimate Boundedness Proof for Minimum-Phase Nonlinear System
    9.6.3 Uniformly Ultimate Boundedness Proof for Nonminimum-Phase Nonlinear System
  References

10 Repetitive Control with Nonlinear Systems: A Contraction Mapping Method
  10.1 Preliminary Lemma for Contraction Mapping
  10.2 Problem Formulation
  10.3 Saturated D-Type Repetitive Controller Design and Convergence Analysis
    10.3.1 Saturated D-Type Repetitive Controller Design
    10.3.2 Convergence Analysis
  10.4 Robotic Manipulator Tracking Example
    10.4.1 Problem Formulation
    10.4.2 Model Transformation
    10.4.3 Assumption Verification
    10.4.4 Controller Design
    10.4.5 Numerical Simulation
    10.4.6 Discussion
  10.5 Summary
  10.6 Appendix
  References

Symbols

=    Equality
≜    Definition. x ≜ y means that x is defined to be another name for y, under certain assumptions
≡    Congruence relation
∈    Belong to
×    Cross product
∗    The convolution of f and g is written (f ∗ g)(t) ≜ ∫_{−∞}^{∞} f(s)g(t − s) ds, where f(t), g(t) ∈ R
B    B(o, δ) ≜ {ξ ∈ Rⁿ | ‖ξ − o‖ ≤ δ}; the notation x(t) → B(o, δ) means min_{y∈B(o,δ)} ‖x(t) − y‖ → 0
C    Set of complex numbers
R, Rⁿ, R^{n×m}    Set of real numbers, Euclidean space of dimension n, and Euclidean space of dimension n × m
R₊    Set of positive real numbers
C, Cⁿ, C^{n×m}    Set of complex numbers, complex vector of dimension n, complex matrix of dimension n × m
Z    Integers
Z₊    Positive integers
N    Nonnegative integers
L_{2e}    L_{2e} ≜ {f ∈ L₂[0, T], for all T < ∞}
L₂(−∞, ∞)    L₂(−∞, ∞) ≜ {f : ‖f‖₂ < ∞}
L₂[0, ∞)    L₂[0, ∞) ≜ {f ∈ S : f(t) = 0 for all t < 0} ∩ L₂(−∞, ∞)
L∞[a, b]    L∞[a, b] ≜ {f | sup_{t∈[a,b]} ‖f(t)‖ < ∞}
Iₙ    Identity matrix of dimension n × n
0_{n×m}    Zero matrix of dimension n × m
x    Scalar
x    Vector; x_i represents the ith element of vector x
X    Matrix; x_{ij} represents the element of matrix X at the ith row and the jth column
ẋ, dx/dt    The first derivative with respect to time t
x̂    An estimate of x
x̃    An estimation error of x
Aᵀ    Transpose of A
A^{−T}    Transpose of the inverse of A
det(A)    Determinant of A
tr(A)    Trace of a square matrix A, tr(A) ≜ Σ_{i=1}^{n} a_{ii}, A ∈ R^{n×n}
σ_max(C), σ_min(C)    The maximum and minimum singular values of a matrix C ∈ C^{n×n}, respectively
σ(A)    The spectrum of A; r_A = sup_{z∈σ(A)} |z| is the spectral radius, where A is a linear compact operator
λ(A), λ_max(A), λ_min(A)    An eigenvalue, the maximum eigenvalue, and the minimum eigenvalue of a matrix A ∈ R^{n×n}, respectively
sup    The least upper bound of a set
inf    The greatest lower bound of a set
(·)^{(k)}    The kth derivative with respect to time t
∇    Gradient
∂a/∂x    ∂a/∂x ≜ [∂a/∂x₁ ∂a/∂x₂ … ∂a/∂xₙ] ∈ R^{1×n}
∂a/∂x    (∂a/∂x)_{ij} ≜ ∂a_i/∂x_j ∈ R^{m×n}, where a = [a₁ … a_m]ᵀ and x = [x₁ … xₙ]ᵀ
C(A, B)    Controllability matrix of the pair (A, B), C(A, B) = [B AB … A^{n−1}B]
L_f h    The Lie derivative of the function h with respect to the vector field f
L_f^i h    The ith-order derivative of L_f h, L_f^i h = ∇(L_f^{i−1} h) f
O(A, Cᵀ)    Observability matrix of the pair (A, C), O(A, Cᵀ) = [Cᵀ; CᵀA; …; CᵀA^{n−1}]
Re(s)    Real part of the complex number s
L, L^{−1}    Laplace transform and inverse Laplace transform, respectively
Z, Z^{−1}    Z-transform and inverse Z-transform, respectively
|·|    Absolute value; modulus of a complex number
|s|    The modulus of a complex number s is defined as |s| ≜ √(a² + b²), where s = a + ib, a, b ∈ R
‖·‖    Euclidean norm, ‖x‖ ≜ √(xᵀx), x ∈ Rⁿ
‖C‖    The norm of a complex matrix, ‖C‖ ≜ σ_max(C), C ∈ C^{n×n}
‖·‖_∞    Infinity norm, ‖x‖_∞ = max{|x₁|, …, |xₙ|}, x ∈ Rⁿ
‖f‖_{2,[0,T]}, ‖f‖₂    ‖f‖_{2,[0,T]} ≜ {∫₀ᵀ ‖f(t)‖² dt}^{1/2}, ‖f‖₂ ≜ {∫_{−∞}^{∞} ‖f(t)‖² dt}^{1/2}, f : R → Rⁿ
‖f‖_∞    ‖f‖_∞ ≜ sup_{t∈[0,∞)} ‖f(t)‖
‖f‖_{[a,b]}    ‖f‖_{[a,b]} ≜ sup_{t∈[a,b]} ‖f(t)‖
A ≥ 0, A > 0    A ∈ R^{n×n} is a positive semidefinite or positive definite matrix, respectively
s̄    The conjugate of the complex number s = a + ib is s̄ ≜ a − ib
C*    The conjugate transpose, formally defined by (C*)_{ij} ≜ (C̄)_{ji}, C ∈ C^{n×m}
C([a, b], Rⁿ)    The space of continuous n-dimensional vector functions on [a, b]
C^m_{PT}([0, ∞), Rⁿ)    The space of mth-order continuously differentiable functions f : [0, ∞) → Rⁿ which are T-periodic, i.e., f(t + T) = f(t)
‖x_t‖_{[a,b]}    ‖x_t‖_{[a,b]} ≜ sup_{θ∈[a,b]} ‖x(t + θ)‖, where x_t ≜ x_t(θ) = x(t + θ), θ ∈ [a, b]
K    A continuous function α : [0, a) → [0, ∞) is said to belong to class K if it is strictly increasing and α(0) = 0
K_∞    A continuous function α is said to belong to class K_∞ if a = ∞ and α(r) → ∞ as r → ∞
KL    A continuous function β : [0, a) × [0, ∞) → [0, ∞) is said to belong to class KL if, for each fixed s, the mapping β(r, s) belongs to class K with respect to r and, for each fixed r, the mapping β(r, s) is decreasing with respect to s and β(r, s) → 0 as s → ∞
‖f‖_a    ‖f‖_a ≜ limsup_{t→∞} ‖f(t)‖, where f ∈ L∞[0, ∞)
D⁺x    The upper right Dini derivative of a function, defined by D⁺x(t₀) ≜ [limsup_{t→t₀⁺} (x₁(t) − x₁(t₀))/(t − t₀), …, limsup_{t→t₀⁺} (xₙ(t) − xₙ(t₀))/(t − t₀)]ᵀ
O(x)ⁿ    A function d(x) is said to be O(x)ⁿ if lim_{‖x‖→0} ‖d(x)‖/‖x‖ⁿ exists and is ≠ 0. In particular, a function d(x) said to be O(x)⁰ or O(1) implies that lim_{‖x‖→0} ‖d(x)‖ exists and is ≠ 0
∘    Function composition; f₁ ∘ f₂ : Rⁿ → R^m implies that (f₁ ∘ f₂)(x) = f₁(f₂(x)), ∀x ∈ Rⁿ, where f₁ : R^p → R^m, f₂ : Rⁿ → R^p

Chapter 1

Introduction

There are several examples of periodic phenomena, such as the orbital motion of heavenly bodies and heartbeats, that can be observed in nature. In practice, many control tasks also exhibit periodic behavior. Industrial manipulators are often used to track or reject periodic exogenous signals when performing picking, placing, or painting operations. Moreover, some special applications involving periodic exogenous signals include magnetic spacecraft attitude control, active control of helicopter vibrations, autonomous vertical landing on an oscillating platform, elimination of harmonics in aircraft power supplies, satellite formation, light-emitting diode tracking, control of hydraulic servomechanisms, and control of lower limb exoskeletons. High-precision control performance can be realized for such periodic control tasks using repetitive control (RC, or repetitive controller, which is also designated as RC). RC was initially developed for continuous single-input, single-output (SISO) linear time-invariant (LTI) systems in [1] to accurately track periodic signals with a known period. RC was then extended to multiple-input multiple-output (MIMO) LTI systems in [2]. Since then, RC has been the subject of increasing attention, and applications that employ RC have become a special subject of focus in control theory. However, developments concerning RC have been uneven: frequency-domain methods have significantly aided the development of theories and applications pertaining to LTI systems, whereas research on RC for nonlinear systems remains limited. This chapter aims to answer the following question: What are the challenges in employing repetitive control for nonlinear systems?

To answer this question, it is essential to introduce the basic idea of RC and provide a brief overview of RC for linear and nonlinear systems. This chapter presents a revised and extended version of a paper that was published earlier [3].


1.1 Basic Idea of Repetitive Control

1.1.1 Basic Concept

Before discussing RC, the concept of iterative learning control (ILC, or iterative learning controller, which is also designated as ILC) must be introduced to avoid confusion between these two similar control methods.

1.1.1.1 Iterative Learning Control

ILC is used for repetitive tasks that are executed multiple times. It focuses on improving task results by learning from previous executions [4–10]. This control method performs a repetitive task and can utilize past (delayed) control information when generating the present control action, which makes it different from most existing control methods. The classic ILC comprises three steps for each trial: (i) storing past control information; (ii) suspending the plant and resetting it to the initial state condition; and (iii) controlling the plant using the stored past control information and current feedback. For example, a remote pilot practices the take-off of a multicopter from the ground to a predetermined height. During each take-off, the remote pilot observes the trajectory of the multicopter (first step). If the trajectory is not satisfactory, the remote pilot lands the multicopter and then starts it again by setting the initial rotation speed of the propellers to the previously recorded values (second step). Finally, the remote pilot adjusts the operation based on the previous data (third step). As the pilot continues to practice, the correct operation is learned and ingrained into muscle memory, so that the pilot's skill improves iteratively; this is the principle of ILC.

1.1.1.2 Repetitive Control

RC is used for periodic signal tracking and rejection. It aims to improve task results by learning from previous executions. RC is a special tracking method intended for a class of special problems. The classic RC comprises two steps for each trial: (i) storing past control information and (ii) controlling the plant using the stored past control information (from the last trial) and current feedback. For example, a pilot attempts to land a helicopter on a periodically oscillating deck at sea. Given the periodicity of the deck, the pilot can adjust his or her operation based on the previous trajectory of the helicopter, which is the principle of the RC learning method. The most significant difference between RC and ILC is that in RC the initial state of the current trial cannot be reset manually; it is automatically the final state of the previous trial. The entire process is continuous, without any interruption at the end of each trial.

Table 1.1 Comparison between ILC and RC

                      ILC                                                RC
Desired trajectory    yd : [0, T], yd(0) = yd(T) or yd(0) ≠ yd(T)        yd : [0, ∞), yd(t + T) = yd(t), t ≥ 0
Initial condition     xk+1(0) can be reset manually                      xk+1(0) = xk(T) automatically
Controller form       uk = uk−1 + L(yk−1 − yd)                           uk = uk−1 + L(yk−1 − yd)

1.1.1.3 Comparison

Let us consider a class of linear systems as follows:

ẋ(t) = Ax(t) + Bu(t)
y(t) = Cᵀx(t),

where A ∈ Rn×n, B, C ∈ Rn×m, x ∈ Rn, and y(t), u(t) ∈ Rm. The control objective is to design u that enables y to track the desired trajectory yd. For simplicity, the resetting time is ignored for ILC, and the control variable u is defined as uk(t) ≜ u(kT + t), t ∈ [0, T], where T > 0 denotes the interval time of a trial for ILC and the period for RC, and k = 0, 1, 2, . . .. A comparison is presented in Table 1.1, where the two controller forms are found to be the same but exhibit a major difference in terms of the initial condition setting. A minimal numerical illustration of this difference is sketched below.
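As a rough illustration of Table 1.1 (a sketch, not an implementation from this book), the following Python code applies the same learning update uk = uk−1 + L ek−1 (with the error convention e = yd − y) to a scalar plant ẋ = −x + u once with a manual state reset at each trial (ILC-like) and once without any reset (RC-like). The plant, gain, period, and signal are arbitrary choices made here only for illustration.

```python
# Illustrative sketch (not from the book): ILC vs RC with the update u_k = u_{k-1} + L*e_{k-1}
# on the scalar plant x' = -x + u, y = x, tracking yd(t) = sin(2*pi*t/T).
# All numerical values (T, L, dt, trial count) are chosen only for illustration.
import numpy as np

T, dt, L_gain, trials = 2.0, 0.001, 0.5, 8
n = int(T / dt)                      # samples per trial / per period
t = np.arange(n) * dt
yd = np.sin(2 * np.pi * t / T)       # desired T-periodic trajectory

def run(reset_each_trial: bool) -> list:
    """Run `trials` periods; reset_each_trial=True mimics ILC, False mimics RC."""
    u_prev = np.zeros(n)             # stored control from the previous trial/period
    e_prev = np.zeros(n)             # stored tracking error from the previous trial/period
    x = 0.0
    max_errors = []
    for _ in range(trials):
        if reset_each_trial:
            x = 0.0                  # ILC: initial state reset manually
        # RC: x simply continues from the end of the previous period (x_{k+1}(0) = x_k(T))
        u = u_prev + L_gain * e_prev # learning update u_k = u_{k-1} + L e_{k-1}
        e = np.zeros(n)
        for i in range(n):           # forward-Euler simulation of x' = -x + u
            e[i] = yd[i] - x
            x += dt * (-x + u[i])
        u_prev, e_prev = u, e
        max_errors.append(np.max(np.abs(e)))
    return max_errors

print("ILC (reset each trial):", np.round(run(True), 4))
print("RC  (no reset):        ", np.round(run(False), 4))
```

In both cases the per-trial peak error decreases as the stored control is refined; the only structural difference is whether the initial state is reset or carried over, exactly as summarized in Table 1.1.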

1.1.2 Internal Model Principle

The basic concept of RC originates from the internal model principle (IMP); this principle states that if any exogenous signal can be regarded as the output of an autonomous system, then the inclusion of this signal model, namely, the internal model, in a stable closed-loop system can assure asymptotic tracking and asymptotic rejection of the signal [11]. If a given signal is composed of a certain number of harmonics, then a corresponding number of neutrally stable internal models (one for each harmonic) should be incorporated into the closed loop based on the IMP to realize asymptotic tracking and asymptotic rejection. To further explain the IMP, the zero-pole cancelation viewpoint of the IMP is used to explain the role of the internal models for step signals, sine signals, and T-periodic signals.


Fig. 1.1 Step signal tracking

1.1.2.1 Step Signal

It is well known that integral control can track and reject any external step signal; this can be explained using the IMP given that the models of an integrator and a step signal are the same, namely, 1/s. Based on the IMP, inclusion of the internal model 1/s in a stable closed-loop system can assure asymptotic tracking and asymptotic rejection of a step signal. According to Fig. 1.1, the transfer function from the desired signal to the tracking error can be written as follows:

e(s) = [1/(1 + (1/s)G(s))] yd(s)
     = [s/(s + G(s))] (a/s)
     = a/(s + G(s)),

where yd(s) = a/s is the Laplace transformation of a step signal with amplitude a ∈ R. Therefore, it is sufficient to merely verify whether the roots of the equation s + G(s) = 0 are all in the left s-plane, which confirms the stability of the closed-loop system. If all roots are in the left s-plane, then the tracking error tends to zero as t → ∞. Therefore, the tracking problem is converted into a stability problem of the closed-loop system.
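A minimal numerical check of this statement (a sketch, assuming for simplicity a pure-gain plant G(s) = k, which is not a case taken from the book) is given below: with the internal model 1/s in the loop, the step-tracking error decays as a·exp(−kt), i.e., the single root of s + G(s) = 0 lies at s = −k in the left s-plane.

```python
# Minimal check (illustrative): with the internal model 1/s in the loop and a pure-gain
# plant G(s) = k (an assumption made here for simplicity), the tracking error of a step
# reference decays as e(t) = a*exp(-k*t), since the root of s + G(s) = 0 is s = -k.
import numpy as np

a, k, dt, t_end = 1.0, 2.0, 1e-4, 4.0
v, err_log = 0.0, []
steps = int(t_end / dt)
for i in range(steps):
    y = k * v               # plant output: G(s) = k acting on the integrator state
    e = a - y               # tracking error for the step reference yd(t) = a
    v += dt * e             # internal model 1/s integrates the error
    err_log.append(e)

print("error at t = 1, 2, 4 s:  ", [round(err_log[int(x/dt) - 1], 5) for x in (1.0, 2.0, 4.0)])
print("predicted a*exp(-k*t):   ", [round(a * np.exp(-k * x), 5) for x in (1.0, 2.0, 4.0)])
```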

1.1.2.2 Sine Signal

Suppose that the external signal is of the form a0 sin(ωt + ϕ0), where a0, ϕ0 are constants, and the Laplace transformation model of a0 sin(ωt + ϕ0) is (b1 s + b0)/(s² + ω²), where b0, b1 ∈ R. Precise tracking or complete rejection can then be achieved by incorporating the model 1/(s² + ω²) into the closed-loop system.

Fig. 1.2 Sine signal tracking

Figure 1.2 demonstrates that the transfer function from the desired signal to the tracking error can be expressed as follows:

e(s) = [1/(1 + G(s)/(s² + ω²))] yd(s)
     = [(s² + ω²)/(s² + ω² + G(s))] [(b1 s + b0)/(s² + ω²)]
     = (b1 s + b0)/(s² + ω² + G(s)).

Thus, it is adequate to only verify whether the roots of the equation s² + ω² + G(s) = 0 are all in the left s-plane, which confirms the stability of the closed-loop system. Therefore, the tracking problem is converted into a stability problem of the closed-loop system.

Based on the IMP, designing a tracking controller for a general periodic signal is challenging because any periodic signal may be a summation of finite or infinite harmonics with period T. The harmonics of a general periodic signal must first be analyzed. However, obtaining accurate harmonics is difficult or time-consuming. Second, according to the IMP, the controller will contain more neutrally stable internal models (one for each harmonic) as the number of harmonics increases. For example, a T-periodic signal consists of harmonics with frequencies 0, 2π/T, . . . , 2πN/T, where N ∈ Z+. The corresponding internal model can then be written as

I_M,fin = 1 / ( s ∏_{k=1}^{N} (1 + T²s²/(4π²k²)) ).    (1.1)

This will result in an extremely complex controller structure. Moreover, it will be time-consuming to solve these neutrally stable internal models (differential equations) to obtain the control output. However, these two drawbacks can be overcome by using the following internal model for the T -periodic signal.
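A small sketch of this complexity growth (illustrative only; the period T = 2 s and the values of N are arbitrary choices) builds the denominator polynomial of the finite-harmonic internal model (1.1) with NumPy and reports its order, which grows as 2N + 1 with the number of harmonics N.

```python
# Illustrative sketch: build the denominator of the finite-harmonic internal model (1.1),
# s * prod_{k=1..N} (1 + T^2 s^2 / (4 pi^2 k^2)), as a polynomial in s, and show how its
# order (hence the controller complexity) grows with the number of harmonics N.
import numpy as np

def internal_model_denominator(N: int, T: float) -> np.ndarray:
    den = np.array([1.0, 0.0])                                       # the factor "s"
    for k in range(1, N + 1):
        factor = np.array([T**2 / (4 * np.pi**2 * k**2), 0.0, 1.0])  # T^2 s^2/(4 pi^2 k^2) + 1
        den = np.polymul(den, factor)
    return den                                                       # coefficients, highest power first

T = 2.0
for N in (1, 5, 20):
    den = internal_model_denominator(N, T)
    print(f"N = {N:2d}: denominator order = {len(den) - 1}")         # order 2N + 1
```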

1.1.2.3 T-Periodic Signal

The Laplace transformation of a signal yd(t) delayed by T is expressed as follows:

L(yd(t − T)) = e^{−sT} L(yd(t)).

Suppose that the external signal is of the form yd(t) = yd(t − T), which can represent any T-periodic signal. Its Laplace transformation is 1/(1 − e^{−sT}) with an initial condition on the interval [−T, 0]. Based on the IMP, asymptotic tracking and asymptotic rejection can be achieved by incorporating the internal model 1/(1 − e^{−sT}) into the closed-loop system. The internal model for any T-periodic signal can be rewritten as follows [12]:

I_M,inf ≜ 1/(1 − e^{−sT}) = lim_{N→∞} 1 / ( s ∏_{k=1}^{N} (1 + T²s²/(4π²k²)) ).    (1.2)

Fig. 1.3 T-periodic signal tracking

This internal model contains the internal models of all harmonics with period T, including the step signal. However, it is interesting to note the simple structure of RC with an infinite number of harmonics. This observation validates the Chinese proverb that states “things will develop in the opposite direction when they become extreme.” Figure 1.3 shows that the transfer function from the desired signal to the tracking error can be written as follows:

e(s) = [1/(1 + G(s)/(1 − e^{−sT}))] yd(s)
     = [(1 − e^{−sT})/(1 − e^{−sT} + G(s))] [1/(1 − e^{−sT})]
     = 1/(1 − e^{−sT} + G(s)).

Therefore, it is sufficient to merely verify whether the roots of the equation 1 − e^{−sT} + G(s) = 0 are all in the left s-plane. Consequently, the tracking problem is converted into a stability problem of the closed-loop system. Based on the RC presented in Table 1.1, the Laplace transformation of the controller is expressed as follows:

u(s) = [1/(1 − e^{−sT})] L(yk−1(s) − yd(s)),

where the internal model of T-periodic signals is also incorporated. A controller that includes the internal model I_M,inf in (1.2) is called an RC, and a system that employs this controller is called an RC system [2].
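In discrete time the internal model 1/(1 − e^{−sT}) becomes a delay buffer of one period. The following sketch (illustrative only, not from the book) applies the resulting repetitive controller u[i] = u[i − N] + kr·e[i − N] to a unit-gain plant with an unknown N-periodic disturbance; the plant, gain, and signals are arbitrary choices made so that the per-period error contraction is easy to see.

```python
# Illustrative discrete-time sketch: the internal model 1/(1 - e^{-sT}) realized as a
# one-period delay buffer, u[i] = u[i-N] + kr*e[i-N].  The plant is taken, for simplicity,
# as a unit-gain (biproper) plant y = u + d with an unknown N-periodic disturbance d.
import numpy as np

N, periods, kr = 200, 6, 0.6              # samples per period, number of periods, learning gain
i_total = N * periods
th = 2 * np.pi * np.arange(i_total) / N
yd = np.sin(th)                           # N-periodic reference
d = 0.5 * np.cos(3 * th) + 0.2            # unknown N-periodic disturbance

u = np.zeros(i_total)
e = np.zeros(i_total)
for i in range(i_total):
    if i >= N:
        u[i] = u[i - N] + kr * e[i - N]   # repetitive controller: delay line + learning gain
    y = u[i] + d[i]                       # plant output
    e[i] = yd[i] - y                      # tracking error

per_period_peak = [np.max(np.abs(e[k*N:(k+1)*N])) for k in range(periods)]
print("peak |e| per period:", np.round(per_period_peak, 4))
# for this plant the peak error shrinks by roughly the factor (1 - kr) each period
```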

1.2 Brief Overview of Repetitive Control for Linear System

RC is an internal model-based control method in which the infinite-dimensional internal model I_M,inf gives rise to an infinite number of poles on the imaginary axis. It was proved in [2] that, for a class of general linear plants, the exponential stability of RC systems can be achieved only when the plant is proper but not strictly proper. Moreover, the system may be destabilized by the internal model I_M,inf. A linear RC system is a neutral-type system in a critical case [13, 14]. Consider the following simple RC system:

ẋ(t) = −x(t) + u(t)
u(t) = u(t − T) − x(t),

where x(t), u(t) ∈ R. The RC system expressed above can also be written as a neutral-type system in a critical case as follows:

ẋ(t) − ẋ(t − T) = −2x(t) + x(t − T).

Additional information is presented in Chap. 3.

Fig. 1.4 Plug-in RC system diagram

To enhance stability, a suitable filter is introduced, as shown in Fig. 1.4, forming a filtered repetitive controller (FRC, or filtered repetitive control, which is also designated as FRC), where the loop gain is reduced at high frequencies.¹ Stable results can only be achieved by compromising on high-frequency performance. However, using an appropriate design, FRC can often achieve an acceptable trade-off between tracking performance and stability. This trade-off broadens the practical applications of RC. The plug-in RC system shown in Fig. 1.4 is a widely used structure. The objective of this structure is to design and optimize the filter Q(s) and compensator B(s).

Given the developments over the past 30 years, it is evident that significant research efforts have been devoted toward developing theories and applications regarding RC for linear systems. Additional information on RC for linear systems has been included as part of [6, 10, 15–17] and the references therein. Current research mainly focuses on robust RC [18–21], spatial-based RC [22], or a combination of both [24]. Robust RC mainly includes two aspects: robustness against uncertain parameters of the considered systems [18–20] and robustness against uncertain or time-varying periods [21, 23–25]. Researchers are currently attempting to design better RCs to satisfy the increasing practical requirements.

¹ In this book, the term “modified” in [2] is replaced with the more descriptive term “filtered”.
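The effect of the filter Q(s) on the internal model can be illustrated with a short computation (a sketch, not from the book; the first-order low-pass choice Q(s) = 1/(τs + 1) and the values of T and τ are assumptions made here only for illustration). At every harmonic ωk = 2πk/T the ideal internal model 1/(1 − e^{−sT}) has infinite gain; filtering keeps the gain high at low harmonics but bounded at high ones, which is exactly the tracking-performance/stability trade-off discussed above.

```python
# Illustrative computation: compare the gain of the ideal internal model 1/(1 - e^{-sT})
# with the filtered one 1/(1 - Q(s) e^{-sT}), where Q(s) = 1/(tau*s + 1) is a simple
# first-order low-pass filter chosen here only as an example.
import numpy as np

T, tau = 2.0, 0.01
for k in (1, 5, 20, 100):
    w = 2 * np.pi * k / T          # harmonic frequency
    s = 1j * w
    Q = 1.0 / (tau * s + 1.0)
    filtered_gain = abs(1.0 / (1.0 - Q * np.exp(-s * T)))
    print(f"harmonic k={k:3d}: |1/(1 - Q e^(-sT))| = {filtered_gain:8.2f}   (ideal model: infinite)")
```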

1.3 Repetitive Control for Nonlinear System

For nonlinear systems, the concept of FRC itself is not difficult to follow; however, the relevant theories have been derived in the frequency domain, and they can be applied with difficulty, if at all, to nonlinear systems. Currently, RCs for nonlinear systems are designed using two methods, namely, the feedback linearization method and the adaptive-control-like method.

1.3.1 Major Repetitive Controller Design Method

1.3.1.1 Linearization Method

One of the design methods involves transforming a nonlinear system into a linear system and then applying the existing design methods to the transformed linear system. Earlier, researchers often considered the following nonlinear system:

ẋ(t) = Ax(t) + Bu(t) + φ(t, x)
y(t) = Cᵀx(t) + Du(t).    (1.3)

This method is related to the early stages of research on nonlinear systems. RC design is often restricted by the nonlinear term φ(t, x), through Lipschitz conditions [26] or sector conditions [27, 28]. Along with the development of feedback linearization and backstepping, RC design for nonlinear systems was further developed [29–33]. Using these new techniques, some nonlinear systems can be transformed into (1.3) with some restrictions, while some existing design methods can also be used directly. Differential geometric techniques are combined with the IMP, resulting in a nonlinear RC strategy. A formulation is presented for the case of input–state linearizable and input–output linearizable systems in continuous time [29]. Using the input–output linearized method and the approximate input–output linearized method, the applicability of the finite-dimensional RC to nonlinear tracking control problems is investigated for three different classes of nonlinear systems: (1) systems with a well-defined relative degree, (2) systems without a well-defined relative degree, and (3) linear plants with small actuator nonlinearity [30]. Using feedback linearization and output redefinition, RC was developed to achieve precise periodic signal tracking control of single-input single-output nonlinear nonminimum-phase systems [31, 32]. Using backstepping, RC design and analyses were developed for backstepping-controlled nonlinear systems [33]. Backstepping control employs both feedback and feedforward actions to render linearized I/O plants; therefore, the outer loop of the RC design can be based on the compensated linear system. Using these new techniques, some nonlinear systems can be more easily transformed into LTI systems subject to nonlinear terms. Based on these techniques, existing design methods can be used directly, thereby facilitating RC design. However, it is interesting to note that not all nonlinear systems can be transformed into a familiar form; the resulting nonlinear terms can still be difficult to handle, or the output can still exhibit a nonlinear relationship with respect to the new state.

Table 1.2 Differences between the traditional AC method and LB method

        Lyapunov function                     Controller
AC      ṽᵀ(t) ṽ(t)                            dv̂(t)/dt = h(t, e)
LB      ∫_{t−T}^{t} ṽᵀ(θ) ṽ(θ) dθ             v̂(t) = v̂(t − T) + h(t, e)
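The basic idea of the linearization method can be shown with a minimal sketch (illustrative only; the scalar plant, the nonlinearity φ, and all numerical values below are assumptions made for this example): a known state-related nonlinearity is cancelled by feedback, leaving a linear system to which the linear (filtered) RC designs apply.

```python
# Illustrative sketch of the linearization idea: for x' = a*x + b*u + phi(x) with known
# nonlinearity phi, the feedback u = (-phi(x) + w)/b cancels phi, leaving x' = a*x + w,
# so a linear RC can then be designed for the new input w.
import numpy as np

a, b, dt, steps = -1.0, 2.0, 0.001, 5000
phi = lambda x: 0.5 * np.sin(x) + 0.2 * x**3           # known state-related nonlinearity

x_nl, x_lin = 0.3, 0.3                                  # same initial condition
max_diff = 0.0
for i in range(steps):
    w = np.sin(2 * np.pi * i * dt)                      # any outer-loop input, e.g. an RC output
    u = (-phi(x_nl) + w) / b                            # feedback-linearizing control
    x_nl += dt * (a * x_nl + b * u + phi(x_nl))         # true nonlinear plant
    x_lin += dt * (a * x_lin + w)                       # target linear model x' = a*x + w
    max_diff = max(max_diff, abs(x_nl - x_lin))

print("max |x_nonlinear - x_linear| over the run:", max_diff)   # ~0 up to rounding error
```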

1.3.1.2 Adaptive-Control-Like Method

Another design method involves converting a tracking problem of nonlinear systems into a rejection problem of nonlinear error dynamics, and then applying the existing adaptive-control-like method to the converted error dynamics. This technique includes two design methods, namely, the Lyapunov-based (LB) method [34–47] and the evaluation-function-based method [48–55]. The LB method is only applicable to RC design, but the evaluation-function-based method is applicable to both RC design and ILC design. To clarify the findings of previous works, it is assumed that v̂ is a learning variable, v is the desired signal, and ṽ = v − v̂ is the learning error. The LB method is similar to the traditional adaptive-control (AC) method, where the desired signal v is a T-periodic signal for the former and a constant for the latter. Therefore, the resulting controllers are called adaptive repetitive (learning) controllers [39]. To adopt the traditional AC method or the LB method, the nonlinear error dynamics in the form of

ė(t) = f(t, e) + b(t, e) ṽ(t),    (1.4)

must be derived first, where e is the error. Given the different desired signals, the chosen Lyapunov functions and the designed controllers are different, as observed in Table 1.2. The AC method is currently used as the leading method in designing RCs for nonlinear systems. This method was first applied to control robot manipulators [34]. Finally, an intermediate result (an assumption) was obtained [35] to establish the framework for the LB method.


Intermediate Result: The functions f : [0, ∞) × Rn → Rn and b : [0, ∞) × Rn → Rn×m are bounded when e(t) is bounded. Moreover, there exist a differentiable function V : [0, ∞) × Rn → [0, ∞), a positive-definite matrix M(t) = Mᵀ(t) ∈ Rn×n with 0 < λ_M I_n < M(t), λ_M > 0, and a function h(t, e) ∈ Rm such that

V̇(t, e) ≤ −eᵀ(t) M(t) e(t) + hᵀ(t, e) ṽ(t).    (1.5)

Based on the intermediate result, the controller v̂(t) = v̂(t − T) + h(t, e) can ensure that the tracking error approaches zero. The proof must employ the Lyapunov function

V(t, e) + (1/2) ∫_{t−T}^{t} ṽᵀ(s) ṽ(s) ds

and Barbalat's Lemma [36, p. 123].

A novel learning approach has been described in [37] for asymptotic state tracking in a class of nonlinear systems. Compared to the previous methods, the proposed RC method is advantageous because it is computationally simple and does not require solving any complicated equations based on the full system dynamics. Hybrid control schemes were developed, which utilize an RC term to compensate for periodic dynamics and other methods to compensate for aperiodic dynamics [38]. An LB adaptive RC was proposed for a class of nonlinearly parameterized systems [39]. Both partially and fully saturated RCs were analyzed in detail and compared. Results for a class of periodically time-varying nonlinear systems have been presented in [40]. Given that many RC schemes require the plant to be parameterizable, an RC is integrated with an adaptive robust control based on the backstepping design for a class of cascade systems without parametrization [41]. A continuous universal RC was proposed in [42] to track periodic trajectories in a class of nonlinear dynamical systems with nonparametric uncertainty and unknown state-dependent control direction matrices. An FRC was proposed to achieve a trade-off between tracking performance and stability for a class of nonlinear systems [43]. More importantly, the proposed FRC can handle small input delays, while the corresponding RC cannot. The classical PID^{ρ−1} control combined with RC was used for output regulation in a class of minimum-phase nonlinear systems with unknown output-dependent nonlinearities, unknown parameters, and known relative degrees [44]. Local results were obtained, and further results have been presented in [45], extending the nonlinear systems considered in [44]. Researchers [44, 45] also worked on the spatial-based RC for nonlinear autonomous vehicles [46] and permanent magnet step motors [47].
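A minimal simulation of the repetitive learning law just described is sketched below (illustrative only, not an example from the book). The scalar error dynamics ė = −e + (vd(t) − v̂(t)), the choice h(t, e) = e, and all numerical values are assumptions made so that (1.5) holds with V = e²/2 and M = 1; the per-period peak error then decreases toward zero, as the Lyapunov argument predicts.

```python
# Illustrative sketch: the Lyapunov-based repetitive learning law v(t) = v(t - T) + h(t, e)
# applied to the scalar error dynamics e' = -e + (vd(t) - v(t)), where vd is an unknown
# T-periodic signal and h(t, e) = e.
import numpy as np

T, dt, periods = 2.0, 0.001, 8
N = int(T / dt)                          # samples per period (memory length of the learner)
t = np.arange(N * periods) * dt
vd = np.sin(2 * np.pi * t / T) + 0.5 * np.cos(4 * np.pi * t / T)   # unknown T-periodic signal

v_hat = np.zeros(N * periods)            # learning variable (stores one period of history)
e = 0.0
peak_per_period, peak = [], 0.0
for i in range(N * periods):
    past = v_hat[i - N] if i >= N else 0.0
    v_hat[i] = past + e                  # repetitive learning update with h(t, e) = e
    e += dt * (-e + (vd[i] - v_hat[i]))  # forward-Euler step of the error dynamics
    peak = max(peak, abs(e))
    if (i + 1) % N == 0:
        peak_per_period.append(peak)
        peak = 0.0

print("peak |e| per period:", np.round(peak_per_period, 4))
```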

The evaluation-function-based method can be used to design both ILCs [5] and RCs. The evaluation function is often formulated as follows [48–55]:

E_k = ∫₀ᵀ ṽ_kᵀ(θ) ṽ_k(θ) dθ,

where ṽ_k(θ) ≜ ṽ(kT + θ), θ ∈ [0, T], T is the period, and k = 0, 1, 2, . . . is the iteration number. The objective is often to design ILCs for the resetting condition and RCs for the alignment condition, resulting in the following relationship:

ΔE_k = E_k − E_{k+1} ≤ α(‖e_k(0)‖² − ‖e_k(T)‖²) − β ∫₀ᵀ ‖e_k(θ)‖² dθ,

where e_k(θ) ≜ e(kT + θ), θ ∈ [0, T], and α, β > 0. Under the resetting condition, e_k(0) = 0. Then,

ΔE_k ≤ −α‖e_k(T)‖² − β ∫₀ᵀ ‖e_k(θ)‖² dθ.

In this case, the following inequality can be obtained:

β lim_{i→∞} Σ_{k=0}^{i} ∫₀ᵀ ‖e_k(θ)‖² dθ ≤ E_0.

On the other hand, under the alignment condition, e_{k+1}(0) = e_k(T). In this case, the following inequality can be obtained:

Σ_{k=0}^{i} ΔE_k ≤ α Σ_{k=0}^{i} (‖e_k(0)‖² − ‖e_k(T)‖²) − β Σ_{k=0}^{i} ∫₀ᵀ ‖e_k(θ)‖² dθ
            ≤ α‖e_0(0)‖² − β Σ_{k=0}^{i} ∫₀ᵀ ‖e_k(θ)‖² dθ.

Then,

β lim_{i→∞} Σ_{k=0}^{i} ∫₀ᵀ ‖e_k(θ)‖² dθ ≤ E_0 + α‖e_0(0)‖².

0

Finally, using Barbalat's Lemma, the tracking error ∫₀ᵀ ‖e_k(θ)‖² dθ can be proved to approach zero as k → ∞. In the early years, researchers mainly considered the resetting condition based on the concept described above. The alignment condition was analyzed in [48]. This work significantly stimulated the development of both ILCs and RCs. In [51], the backstepping technique was combined with the learning control mechanism to develop a constructive control strategy to cope with nonlinear systems subject to



both structured periodic and unstructured aperiodic uncertainties. RC schemes using a proportional–derivative (PD) feedback structure were proposed in [52], where an iterative term was added to cope with unknown parameters and disturbances. The proposed adaptive ILC of robot manipulators was further improved in [54]. A fully saturated adaptive RC for trajectory tracking of uncertain robotic manipulators has been presented in [55].

The adaptive-control-like method is the leading method used to design RCs for nonlinear systems. The structures of the RCs obtained for linear and nonlinear systems are similar, but the methods for obtaining these controllers are different. As discussed in Sect. 1.1.2, the tracking problem can be converted into a stability problem of the closed-loop system. Therefore, the error dynamics do not have to be obtained for LTI systems. However, for nonlinear systems, error dynamics must often be derived to convert a tracking problem into a disturbance rejection problem as in (1.4). This follows the concept of general tracking controller design, whereas the special feature of periodic signals is underexploited. Therefore, the general tracking controller design will not only restrict the application of RC but will also fail to represent the special features and importance of RC. For nonlinear nonminimum-phase systems, ideal internal dynamics are required to derive the error dynamics as in (1.4), which is difficult and computationally expensive, particularly when the internal dynamics are subject to an unknown disturbance [56]. Therefore, this is considered the reason for which few RCs that work on these systems have been reported.

1.3.1.3 Contraction Mapping Method

Other design methods, such as the contraction mapping approach, are briefly discussed. Contraction mapping was often used in ILC during the early years when ILC was proposed. The controller design does not require additional information on the plant model. This is the biggest advantage of the contraction mapping approach over other methods. However, this tool is difficult to use for designing RCs without resetting the initial condition. Some researchers also attempted to use contraction mapping to design RCs. A formalism of ILC was used in [57] to solve an RC problem that forces a system to track a prescribed periodic reference signal. The proposed method adopts the concept of contraction mapping. However, the proposed method is only applicable to discrete-time systems. Moreover, it cannot be applied to the rejection of periodic disturbances. Based on the contraction mapping approach, a conditional learning control was proposed to track periodic signals for a class of nonlinear systems with unknown dynamics [58]. Learning is based on a steady-state output so that the updating law works only when a particular condition has been satisfied. Using this mechanism, monotonic convergence of the control sequence in the iteration domain can be achieved.
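The fixed-point idea behind such designs can be illustrated with a minimal sketch (not from the book): for an unknown static map y = g(u), the update u_{k+1} = u_k + γ(yd − g(u_k)) is a contraction whenever |1 − γ g'(u)| ≤ ρ < 1 on the relevant range, so the control sequence converges monotonically. The map g, the gain γ, and the setpoint yd below are arbitrary example choices.

```python
# Illustrative sketch of the contraction-mapping idea: iterate u_{k+1} = u_k + gamma*(yd - g(u_k)).
# The iteration map u -> u + gamma*(yd - g(u)) is a contraction when |1 - gamma*g'(u)| <= rho < 1,
# which yields monotonic convergence of the control sequence.
import numpy as np

def g(u):                      # unknown plant map (only evaluated, never differentiated)
    return 2.0 * u + 0.3 * np.sin(u)

gamma, yd, u = 0.3, 1.5, 0.0   # here g'(u) lies in [1.7, 2.3], so |1 - gamma*g'| <= 0.49 < 1
errors = []
for k in range(10):
    e = yd - g(u)
    errors.append(abs(e))
    u = u + gamma * e          # fixed-point (contraction) iteration on the control input

print("|yd - g(u_k)| per iteration:", np.round(errors, 5))
```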


1.3.2 Existing Problem in Repetitive Control

A linear RC system is a neutral-type system in a critical case [13, 14]. The characteristic equation of a neutral-type system includes an infinite sequence of roots with negative real parts approaching zero; i.e., sup{Re(s) | F(s) = 0} = 0, where F(s) is the characteristic equation. This implies that a sufficiently small uncertainty may lead to sup{Re(s) | F(s) = 0} > 0. Reference [43] shows that a nonlinear RC system loses its stability when subject to a small input delay. In practice, input delays are very common. Therefore, it is important to design an RC that can deal with small input delays. In addition to input delays, the following problems must also be considered to increase the practical applicability of RC.
(i) Most studies on LTI systems focus on designing digital RCs. However, research on digital RCs for continuous-time nonlinear systems has not yet produced the desired results. Controllers are now generally realized on digital computers. Considering the limited robustness of RC systems, it remains unknown whether digital implementation destabilizes the original systems. Moreover, a minimum-phase system may be transformed into a nonminimum-phase system after discretization. Therefore, it is important to establish theories for designing digital RCs for continuous-time nonlinear systems.
(ii) Most RC designs require the period to be known a priori. In practice, the period cannot be known exactly. A method for designing an RC that copes with an uncertain period is therefore of great practical value. For LTI systems, some researchers have attempted to improve RCs to deal with uncertain periods [21]. However, research on nonlinear RC systems subject to uncertain periods is limited.
(iii) In addition to uncertain parameters, the presence of high-frequency unmodeled dynamics is one of the main reasons a closed-loop system loses stability; this is easy to analyze for LTI systems using frequency-domain methods but difficult for nonlinear systems.
(iv) In addition to stability, transient response and convergence performance are key factors that determine the applicability of RC in practice. Therefore, optimal RC for nonlinear systems must also be investigated.
RC is a specific type of tracking control. Therefore, in addition to the problems described above, designing RCs for nonlinear nonminimum-phase systems and underactuated nonlinear systems remains challenging.

1.4 Objective and Structure of This Book

In the authors' view, developments related to RC are uneven: the theory and applications for LTI systems are well developed, whereas research on RC for nonlinear systems is limited. Currently, there are two major methods for designing RCs for nonlinear systems: the linearization method and the adaptive-control-like method. Each method has its own problems, and new design methods are expected to be developed. In addition, books on RC for nonlinear systems are scarce; many of the existing books mainly discuss LTI systems, whereas many books are available on ILC for nonlinear systems [7, 59–63]. This scarcity has motivated the authors to write this book, which comprises ten chapters, as outlined in Fig. 1.5.

Fig. 1.5 Structure of this book

In the first four chapters, the introduction (Chap. 1) and preliminaries (Chaps. 2–4) are presented. The preliminaries consist of mathematical preliminaries (Chap. 2), a brief introduction to RC for linear systems (Chap. 3), and the robustness problem of RC systems (Chap. 4). These preliminaries present an overview of RC and the reasons for using FRC. This book focuses on discussing different basic methods for solving RC problems in some special nonlinear systems. Commonly used methods such as the linearization method (Chap. 5) and the adaptive-control-like method (Chap. 6) are discussed separately. These chapters cover both the authors' work and classical work. In addition, three new methods parallel to the two methods mentioned above are proposed. For example, the additive-state-decomposition-based method in Chaps. 7 and 8 bridges LTI systems and nonlinear systems so that linear RC methods can be used for nonlinear systems. The actuator-focused design method described in Chap. 9 is based on another viewpoint of the IMP proposed by the authors. This method eliminates the need to derive error dynamics, so the RC problem for nonminimum-phase systems and periodic linear systems becomes easier. The proposed contraction mapping method is another attempt by the authors to solve the RC problem for nonlinear systems without requiring the corresponding Lyapunov functions (Chap. 10). This book provides additional methods and tools for researchers working on the development of RC.

References 1. Inoue, T., Nakano, M., Kubo, T., Matsumoto, S., & Baba, H. (1981). High accuracy control of a proton synchrotron magnet power supply. In Proceedings of the 8th IFAC World Congress, Part 3 (pp. 216–221). Oxford: Pergamon Press. 2. Hara, S., Yamamoto, Y., Omata, T., & Nakano, M. (1988). Repetitive control system: A new type servo system for periodic exogenous signals. IEEE Transactions on Automatic Control, 33(7), 659–668. 3. Quan, Q., & Cai, K.-Y. (2010). A survey of repetitive control for nonlinear systems. Science Foundation in China, 18(2), 45–59. 4. Arimoto, S., Kawamura, S., & Miyazaki, F. (1984). Bettering operation of robots by learning. Journal of Robotic Systems, 2(1), 123–140. 5. Bien, Z., & Xu, J.-X. (Eds.). (1998). Iterative learning control: Analysis, design, integration and application. Norwell: Kluwer. 6. Longman, R. W. (2000). Iterative learning control and repetitive control for engineering practice. International Journal of Control, 73(10), 930–954. 7. Xu, J.-X., & Tan, Y. (2003). Linear and nonlinear iteration learning control. Berlin: Springer. 8. Bristow, D. A., Tharayil, M., & Alleyne, A. G. (2006). A survey of iterative learning control: A learning based method for high performance tracking control. IEEE Control Systems Magazine, 26(3), 96–114. 9. Ahn, H. S., Chen, Y. Q., & Moore, K. L. (2007). Iterative learning control: Brief survey and categorization. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 37(6), 1099–1121. 10. Wang, Y., Gao, F., & Doyle, F. J, I. I. I. (2009). Survey on iterative learning control, repetitive control, and run-to-run control. Journal of Process Control, 19(10), 1589–1600. 11. Francis, B. A., & Wonham, W. M. (1976). The internal model principle of control theory. Automatica, 12(5), 457–465. 12. Yamamoto, Y. (2001). An overview on repetitive control—what are the issues and where does it lead to? In IFAC Workshop on Periodic Control Systems, Cernobbio-Como, I. http://wiener. kuamp.kyoto-u.ac.jp/~yy/Papers/RepetitiveControldocfin.pdf. 13. Rabah, R., Sklyar, G. M., & Rezounenko, A. V. (2005). Stability analysis of neutral type systems in Hilbert space. Journal of Differential Equations, 214(2), 391–428. 14. Quan, Q., Yang, D., & Cai, K.-Y. (2010). Linear matrix inequality approach for stability analysis of linear neutral systems in a critical case. IET Control Theory and Applications, 4(7), 1290– 1297. 15. Li, C., Zhang, D., & Zhuang, X. (2004). A survey of repetitive control. Proceedings of Intelligent Robots and Systems, 2, 1160–1166. 16. Tomizuka, M. (2008). Dealing with periodic disturbances in controls of mechanical systems. Annual Reviews in Control, 32, 193–199. 17. Longman, R. W. (2010). On the theory and design of linear repetitive control systems. European Journal of Control, 16(5), 447–496. 18. Osburn, A. W., & Franchek, M. A. (2004). Designing robust repetitive controllers. Journal of Dynamic Systems, Measurement, and Control, 126, 865–872. 19. Pipeleers, G., Demeulenaere, B., de Schutter, J., & Swevers, J. (2008). Robust high-order repetitive control: Optimal performance trade-offs. Automatica, 44, 2628–2634. 20. Demirel, B., & Guvenc, L. (2010). Parameter space design of repetitive controllers for satisfying a robust performance requirement. IEEE Transactions on Automatic Control, 55(8), 1893– 1899. 21. Steinbuch, M. (2002). Repetitive control for systems with uncertain period-time. Automatica, 38(12), 2103–2109. 22. Chen, C.-L., & Yang, Y.-H. (2009). 
Position-dependent disturbance rejection using spatial-based adaptive feedback linearization repetitive control. International Journal of Robust and Nonlinear Control, 19(12), 1337–1363.


23. Olm, J. M., Ramos, G. A., & Costa-Castello, R. (2010). Adaptive compensation strategy for the tracking/rejection of signals with time-varying frequency in digital repetitive control systems. Journal of Process Control, 20(4), 551–558. 24. Yao, W.-S., Tsai, M.-C., & Yamamoto, Y. (2013). Implementation of repetitive controller for rejection of position-based periodic disturbances. Control Engineering Practice, 21(9), 226– 1237. 25. Kurniawan, E., Cao, Z., & Man, Z. (2014). Design of robust repetitive control with time-varying sampling periods. IEEE Transactions on Industrial Electronics, 61(6), 2834–2841. 26. Hara, S., Omata, T., & Nakano, M. (1985). Synthesis of repetitive control systems and its applications. In Proceedings of IEEE Conference on Decision and Control (pp. 1387–1392). New York, USA. 27. Ma, C. C. H. (1990). Stability robustness of repetitive control systems with zero phase compensation. Journal of Dynamic System, Measurement, and Control, 112(3), 320–324. 28. Lin, Y. H., Chung, C. C., & Hung, T. H. (1991). On robust stability of nonlinear repetitive control system: Factorization approach. In Proceeding of American Control Conference (pp. 2646–2647). Boston, MA, USA. 29. Alleyne, A., & Pomykalski, M. (2000). Control of a class of nonlinear systems subject to periodic exogenous signals. IEEE Transactions on Control Systems Technology, 8(2), 279– 287. 30. Ghosh, J., & Paden, B. (2000). Nonlinear repetitive control. IEEE Transactions on Automatic Control, 45(5), 949–954. 31. Hu, A. P., & Sadegh, N. (2001). Nonlinear non-minimum phase output tracking via output redefinition and learning control. In Proceeding of the American Control Conference (pp. 4264–4269). Arlington, VA. 32. Hu, A. P., & Sadegh, N. (2004). Application of a Recursive minimum-norm learning controller to precision motion control of an underactuated mechanical system. In Proceeding of American Control Conference (pp. 3776–3781). Boston, Massachusetts. 33. Lee, S. J., & Tsao, T. C. (2004). Repetitive learning of backstepping controlled nonlinear electrohydraulic material testing system. Control Engineering Practice, 12(11), 1393–1408. 34. Sadegh, N., Horowitz, R., Kao, W.-W., & Tomizuka, M. (1990). A unified approach to the design of adaptive and repetitive controllers for robotic manipulators. Journal of Dynamic System, Measurement, and Control, 112(4), 618–629. 35. Messner, W., Horowitz, R., Kao, W.-W., & Boals, M. (1991). A new adaptive learning rule. IEEE Transactions on Automatic Control, 36(2), 188–197. 36. Slotine, J. J. E., & Li, W. (1991). Applied nonlinear control. New Jersey: Prentice-Hall. 37. Kim, Y.-H., & Ha, I.-J. (2000). Asymptotic state tracking in a class of nonlinear systems via learning-based inversion. IEEE Transactions on Automatic Control, 45(11), 2011–2027. 38. Dixon, W. E., Zergeroglu, E., Dawson, D. M., & Costic, B. T. (2002). Repetitive learning control: A Lyapunov-based approach. IEEE Transaction on Systems, Man, and Cybernetics, Part B-Cybernetics, 32(4), 538–545. 39. Sun, M., & Ge, S. S. (2006). Adaptive repetitive control for a class of nonlinearly parametrized systems. IEEE Transactions on Automatic Control, 51(10), 1684–1688. 40. Sun, M. (2012). Partial-period adaptive repetitive control by symmetry. Automatica, 48(9), 2137–2144. 41. Xu, J.-X., & Yan, R. (2006). On repetitive learning control for periodic tracking tasks. IEEE Transactions on Automatic Control, 51(11), 1842–1848. 42. Yang, Z., Yam, S., Li, L., & Wang, Y. (2010). 
Universal repetitive learning control for nonparametric uncertainty and unknown state-dependent control direction matrix. IEEE Transactions on Automatic Control, 55(7), 1710–1715. 43. Quan, Q., & Cai, K.-Y. (2011). A filtered repetitive controller for a class of nonlinear systems. IEEE Transaction on Automatic Control, 56(2), 399–405. 44. Marino, R., Tomei, P., & Verrelli, C. M. (2012). Learning control for nonlinear systems in output feedback form. Systems and Control Letters, 61(12), 1242–1247.


45. Verrelli, C. M. (2016). A larger family of nonlinear systems for the repetitive learning control. Automatica, 71, 38–43. 46. Consolinia, L., & Verrelli, C. M. (2014). Learning control in spatial coordinates for the pathfollowing of autonomous vehicles. Automatica, 50(7), 1867–1874. 47. Verrelli, C. M., Tomei P., Consolini, L., & Lorenzani, E. (2016). Space-learning tracking control for permanent magnet step motors. Automatica, 73, 223–230. 48. Xu, J.-X., & Qu, Z. (1998). Robust learning control for a class of nonlinear systems. Automatica, 34(8), 983–988. 49. Ham, C., Qu, Z., & Kaloust, J. (2001). Nonlinear learning control for a class of nonlinear systems. Automatica, 37(3), 419–428. 50. Xu, J.-X., & Tian, Y.-P. (2002). A composite energy function-based learning control approach for nonlinear systems with time-varying parametric uncertainties. IEEE Transactions on Automatic Control, 47(11), 1940–1945. 51. Tian, Y.-P., & Yu, X. (2003). Robust learning control for a class of nonlinear systems with periodic and aperiodic uncertainties. Automatica, 39(11), 1957–1966. 52. Tayebi, A. (2004). Adaptive iterative learning control for robot manipulators. Automatica, 40(7), 1195–1203. 53. Tayebi, A., & Chien, C.-J. (2007). A unified adaptive iterative learning control framework for uncertain nonlinear systems. IEEE Transactions on Automatic Control, 52(10), 1907–1973. 54. Chien, C.-J., & Tayebi, A. (2008). Further results on adaptive iterative learning control of robot manipulators. Automatica, 44(3), 830–837. 55. Sun, M., Ge, S. S., & Mareels, I. M. Y. (2006). Adaptive repetitive learning control of robotic manipulators without the requirement for initial repositioning. IEEE Transactions on Robotic, 22(3), 563–568. 56. Shkolnikov, I. A., & Shtessel, Y. B. (2002). Tracking in a class of nonminimum-phase systems with nonlinear internal dynamics via sliding mode control using method of system center. Automatica, 38(5), 837–842. 57. Moore, K. L. (2000). A non-standard iterative learning control approach to tracking periodic signals in discrete-time non-linear systems. International Journal of Control, 73(10), 955–967. 58. Yang, Z., & Chan, C. W. (2010). Tracking periodic signals for a class of uncertain nonlinear systems. International Journal of Robust Nonlinear Control, 20(7), 834–841. 59. Ahn, H.-S., Moore, K. L., & Chen, Y. Q. (2007). Iterative learning control: Robustness and monotonic convergence for interval systems. London: Springer. 60. Moore, K. L. (1993). Iterative learning control for deterministic systems. London: Springer. 61. Chen, Y. Q., & Wen, C. (1999). Iterative learning control: Convergence, robustness and applications. London: Springer. 62. Wang, D., Ye, Y., & Zhang, B. (2014). Practical iterative learning control with frequency domain design and sampled data implementation. Singapore: Springer. 63. Owens, D. H. (2015). Iterative learning control: An optimization paradigm. London: Springer.

Chapter 2

Preliminary

To make this book self-contained, the commonly used and necessary preliminaries are introduced. However, basic control theories, such as transfer functions, and the complete proofs of most theorems have not been included to avoid extensive preliminaries. Readers are expected to possess basic knowledge of control theories or are suggested to refer to the cited references for more information. This chapter is divided into the following four sections: system, transformation, stability, and control problem, aiming to answer the following question: What are the commonly used and necessary preliminaries for designing a repetitive controller?

2.1 System-Related Preliminary

In this section, noncausal zero-phase filters, sensitivity and the complementary sensitivity function, the small gain theorem, and positive real systems are introduced.

2.1.1 Noncausal Zero-Phase Filter

The concepts of causal and noncausal systems are first introduced. The response of a causal system does not begin before the input is applied; i.e., its response depends only on present and past inputs and not on future inputs. Given that causal systems do not require future inputs, these systems are practically achievable. In their transfer functions, the order of the denominator is not less than that of the numerator. For example, for the s-transformation, causal systems can be expressed in the following form:


Q_1(s) = e^{−0.08s},  Q_2(s) = 1/s.

For the z-transformation, causal systems can be written in the following form:

Q_1(z) = 1/z,  Q_2(z) = 1/(z² + z + 1).

A noncausal system does not satisfy the criterion of causal systems; i.e., its response depends on future values of the input. In the transfer functions of these systems, the order of the denominator is less than that of the numerator. Given that noncausal systems require future information, they are not practically realizable in real time. Therefore, how can noncausal systems be used for RC? If the inputs and outputs of past trials are stored in memory, the information at time t + 1 in trial k − 1 can be used at time t in trial k. These signals are then treated as advance, or future, signals. For example, for the s-transformation, noncausal systems can be expressed in the following form:

Q_1(s) = e^{0.08s},  Q_2(s) = s.

For the z-transformation, noncausal systems can be expressed as follows:

Q_1(z) = z,  Q_2(z) = z² + z + 1.

Here, the concept of the noncausal zero-phase filter is discussed.

Definition 2.1 ([1]) A filter is classified as zero phase when its frequency response H(jω) for the s-transformation (or H(e^{jω}) for the z-transformation) is a real and even function of the radian frequency ω ∈ R, and when H(jω) > 0 for the s-transformation (or H(e^{jω}) > 0 for the z-transformation) in the filter passband(s).

Zero-phase filters must be noncausal. For example, a simple noncausal zero-phase filter H_1(s) can be written in the following form:

H_1(s) = Σ_{k=−N}^{N} a_k e^{−sk},

where a_k = a_{−k}, a_k ∈ R, k = 1, 2, ..., N. In Fourier space, substituting s = jω into H_1(s) results in

H_1(jω) = Σ_{k=−N}^{N} a_k e^{−jωk} = a_0 + 2 Σ_{k=1}^{N} a_k cos(ωk).

It can be observed that H1 ( jω) is a real function depending on ω ∈ R. Consequently, the phase of the filter H1 (s) is zero. A noncausal zero-phase filter can also be modeled as follows:


H_2(s) = H_0(−s) + H_0(s),

where H_0(s) can be an arbitrary filter. As

H_2(jω) = H_0(−jω) + H_0(jω) = 2 Re(H_0(jω)),

the frequency response is a real function of ω ∈ R. Then, the phase of the filter H_2(s) is zero. Furthermore, a noncausal zero-phase filter can also be modeled as follows:

H_3(s) = H_0(−s)H_0(s),

where H_0(s) can be an arbitrary filter. Given that

H_3(jω) = H_0(−jω)H_0(jω) = |H_0(jω)|²,

the frequency response is a real function of ω ∈ R. Therefore, the phase of the filter H_3(s) is zero. Similarly, in the case of digital noncausal zero-phase filters, the following forms can exist:

H_1(z) = Σ_{k=−N}^{N} a_k z^{−k}

or

H_2(z) = H_0(z^{−1}) + H_0(z)

or

H_3(z) = H_0(z^{−1})H_0(z).
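To illustrate how such a filter can be used in a trial-to-trial (offline) setting, the following minimal Python sketch applies the symmetric FIR form H_1 to a stored signal. The kernel choice (a half Hann window) and the 5 Hz test signal are illustrative assumptions, not taken from the book.

```python
import numpy as np

# Sketch of the symmetric FIR zero-phase filter
#   H1(z) = sum_{k=-N}^{N} a_k z^{-k},  a_k = a_{-k},
# applied offline (noncausally) to a stored signal between trials.

def zero_phase_fir(x, a):
    """Apply a symmetric FIR filter a = [a_0, a_1, ..., a_N] noncausally.
    The kernel [a_N, ..., a_1, a_0, a_1, ..., a_N] is centered at lag 0,
    so the filter has zero phase."""
    h = np.concatenate([a[:0:-1], a])          # symmetric kernel of length 2N+1
    return np.convolve(x, h, mode="same")      # 'same' keeps the signal aligned (no delay)

N = 10
a = np.hanning(2 * N + 1)[N:]                  # a_0 ... a_N (half of a Hann window)
a = a / (a[0] + 2 * np.sum(a[1:]))             # normalize so that H1(j0) = 1

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
x = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(t.size)
y = zero_phase_fir(x, a)

# The fundamental component passes with (approximately) no phase shift:
X, Y = np.fft.rfft(x), np.fft.rfft(y)
print("phase shift at 5 Hz [rad]:", np.angle(Y[5]) - np.angle(X[5]))
```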

2.1.2 Sensitivity and Complementary Sensitivity Function

As shown in Fig. 2.1, the tracking error E is given as

E(s) = S(s)(R(s) − D(s)) + T(s)M(s),            (2.1)

where

S(s) = 1/(1 + P(s)L(s)),  T(s) = P(s)L(s)/(1 + P(s)L(s)).            (2.2)

The functions S (s) and T (s) are the closed-loop sensitivity and complementary sensitivity transfer functions, respectively. They satisfy


Fig. 2.1 Simple LTI closed-loop system

S(s) + T(s) = 1.            (2.3)

Even though S(s) and T(s) are functions of the complex variable s, their frequency responses are functions of jω, and the sum of the two functions is constant. For disturbance rejection and tracking, based on Eq. (2.1), sup_{ω∈R} |S(jω)| is expected to be as small as possible so that the term sup_{ω∈R} |S(jω)(R(jω) − D(jω))| is also small. On the other hand, sup_{ω∈R} |T(jω)| is expected to be as small as possible so that the term sup_{ω∈R} |T(jω)M(jω)| is small, which implies good sensor noise attenuation. The two objectives contradict each other because of the relationship (2.3). Therefore, a constraint on S(s) exists. If the sensor noise is ignored, then the tracking error becomes

E(s) = S(s)(R(s) − D(s)).            (2.4)

In this case, can the closed-loop sensitivity transfer function S(s) be designed freely? The answer is no; the constraint on S(s) still exists and is presented in Bode's sensitivity integral, which was discovered by Hendrik Wade Bode [2]. This formula quantifies some of the limitations in feedback control for LTI systems. The following relationship holds [3, Chap. 11]:

∫_0^∞ ln |S(jω)| dω = ∫_0^∞ ln |1/(1 + P(jω)L(jω))| dω = π Σ_k Re(p_k) − (π/2) lim_{s→∞} sP(s)L(s),            (2.5)

where p_k ∈ C are the poles of P(s)L(s) in the right half-plane (unstable poles), namely, the zeros of S(s) in the right half-plane (unstable zeros). If P(s)L(s) has at least two more poles than zeros, Eq. (2.5) simplifies to

∫_0^∞ ln |S(jω)| dω = π Σ_k Re(p_k).

Furthermore, if P(s)L(s) has no poles in the right half-plane, then

∫_0^∞ ln |S(jω)| dω = 0.

It is found that |S ( jω)| cannot be small at all frequencies regardless of the manner in which the controller L is designed; i.e., |S ( jω)| < 1 at some frequencies and |S ( jω)| ≥ 1 must hold at other frequencies. To summarize, the tracking error in (2.4) cannot be attenuated using feedback for all frequencies. This principle can also be applied to RC.
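A small numerical sketch can make the waterbed effect concrete. The open loop P(s)L(s) = 10/((s+1)(s+2)) below is an assumption chosen for illustration, not a plant from the book; since it has no right-half-plane poles and at least two more poles than zeros, the integral of ln|S(jω)| should be approximately zero.

```python
import numpy as np

# Numerical check of Bode's sensitivity integral for an illustrative stable,
# minimum-phase open loop P(s)L(s) = 10/((s+1)(s+2)).

def open_loop(s):
    return 10.0 / ((s + 1.0) * (s + 2.0))

w = np.linspace(1e-4, 1e4, 2_000_000)                # frequency grid [rad/s]
log_abs_S = np.log(np.abs(1.0 / (1.0 + open_loop(1j * w))))

# trapezoidal rule, written out to stay compatible across NumPy versions
integral = np.sum(0.5 * (log_abs_S[1:] + log_abs_S[:-1]) * np.diff(w))
print("integral of ln|S(jw)| dw ~", integral)        # close to 0: attenuation below 1
                                                     # is paid for by amplification above 1
```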

2.1.3 Small Gain Theorem

This section is mainly based on [4]. A signal is a piecewise continuous and bounded function that maps the real numbers R to Rⁿ. A set of signals can be given as

S = {f : R → Rⁿ}.

The two subspaces can be defined as follows:

S+ = {f ∈ S : f(t) = 0_{n×1} for all t < 0}
S− = {f ∈ S : f(t) = 0_{n×1} for all t > 0}.

The finite-horizon 2-norm is defined as

‖f‖_{2,[0,T]} ≜ ( ∫_0^T ‖f(t)‖² dt )^{1/2},

where ‖x‖ ≜ √(xᵀx). The set of signals for which this norm is finite is known as the finite-horizon Lebesgue 2-space:

L2[0, T] = {‖f‖_{2,[0,T]} < ∞}.

To address limitations associated with stability, the behavior of signals over infinite time intervals must be considered. The infinite-horizon Lebesgue 2-space is defined as

L2(−∞, ∞) = {‖f‖_2 < ∞},

where

‖f‖_2 ≜ ( ∫_{−∞}^{∞} ‖f(t)‖² dt )^{1/2}.

The spaces L2[0, ∞) and L2(−∞, 0] are defined as L2[0, ∞) = S+ ∩ L2(−∞, ∞) and L2(−∞, 0] = S− ∩ L2(−∞, ∞). It is convenient to introduce the


extended 2-space L2e defined as L2e = {f ∈ L2[0, T], for all T < ∞}. For any complex matrix Q ∈ C^{m×p}, there exist Y ∈ C^{m×m}, U ∈ C^{p×p}, and a real matrix Σ ∈ R^{r×r} such that

Q = Y [ Σ, 0_{r×(p−r)} ; 0_{(m−r)×r}, 0_{(m−r)×(p−r)} ] U*,            (2.6)

in which Σ = diag(σ1, ..., σr) with σ1 ≥ σ2 ≥ ... ≥ σr > 0 and min(m, p) ≥ r. Expression (2.6) is referred to as the singular value decomposition (SVD) of Q. The maximum singular value and minimum singular value of the matrix Q are denoted by σmax(Q) = σ1 and σmin(Q) = σr. Equivalently, they can also be defined as

σmax(Q) = max_{‖u‖=1} ‖Qu‖,  σmin(Q) = min_{‖u‖=1} ‖Qu‖.

Given these definitions, the following definition and theorem are obtained.

Definition 2.2 The feedback system presented in Fig. 2.2 is internally stable if each of the four transfer functions mapping w1 and w2 to e1 and e2 is asymptotically stable (see Definition 2.8).

Theorem 2.1 Suppose the systems G1 : L2e → L2e and G2 : L2e → L2e in Fig. 2.2 have finite incremental gains such that sup_ω ‖G1(jω)‖ sup_ω ‖G2(jω)‖ < 1, where ‖Gi(jω)‖ = σmax(Gi(jω)), i = 1, 2. Then, the following are valid: (1) For all w1, w2 ∈ L2e, there exist unique solutions e1, e2 ∈ L2e. (2) For all w1, w2 ∈ L2[0, ∞), there exist unique solutions e1, e2 ∈ L2[0, ∞). In other words, the closed loop is internally stable.

Proof Please refer to [4, pp. 97–98], where the contraction mapping theorem is included, for more details.

Fig. 2.2 Feedback loop of the two systems
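The small-gain condition can be checked numerically over a frequency grid. The two transfer functions below are assumptions for this sketch only; for scalar systems the maximum singular value reduces to the magnitude.

```python
import numpy as np

# Illustrative numerical small-gain check for two stable SISO systems
#   G1(s) = 0.5/(s + 1),  G2(s) = 1/(s + 2).
# For scalar systems sigma_max(Gi(jw)) = |Gi(jw)|, so the condition of
# Theorem 2.1 reduces to  sup_w |G1(jw)| * sup_w |G2(jw)| < 1.

w = np.logspace(-3, 3, 10_000)          # frequency grid [rad/s]
s = 1j * w

G1 = 0.5 / (s + 1.0)
G2 = 1.0 / (s + 2.0)

gain1 = np.max(np.abs(G1))              # ~0.5 (H-infinity norm estimate)
gain2 = np.max(np.abs(G2))              # ~0.5
print("gain product:", gain1 * gain2)   # < 1 -> small-gain condition holds
print("closed loop predicted internally stable:", gain1 * gain2 < 1.0)
```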


2.1.4 Positive Real System

Definition 2.3 A real symmetric matrix A ∈ R^{n×n} is positive semidefinite if xᵀAx ≥ 0 for any nonzero x ∈ Rⁿ. Furthermore, the matrix is called positive definite if xᵀAx > 0 for any nonzero x ∈ Rⁿ. In the following, the notation A ≥ 0 or A > 0 implies that A is a positive-semidefinite or positive-definite matrix, respectively.

Definition 2.4 The rational transfer matrix H(s) is positive real if the following conditions hold: (1) H(s) has no pole in Re(s) > 0, (2) H(s) is real for all positive real s, (3) H(s) + H*(s) > 0 for all Re(s) > 0.

Definition 2.5 A rational transfer function H(s) that is not identically zero for all s is strictly positive real if H(s − ε) is positive real for some ε > 0.

Theorem 2.2 ([5]) Let H(s) be a proper¹ rational transfer matrix. Suppose that det[H(s) + Hᵀ(−s)] is not identically zero. Then, H(s) is strictly positive real if and only if (1) all poles of H(s) have negative real parts, (2) H(jω) + Hᵀ(−jω) > 0 for all ω ∈ R, and (3) one of the following three conditions is satisfied:
(a) H(∞) + Hᵀ(∞) > 0;
(b) H(∞) + Hᵀ(∞) = 0 and lim_{ω→∞} ω²[H(jω) + Hᵀ(−jω)] > 0;
(c) H(∞) + Hᵀ(∞) ≥ 0 (but not zero or nonsingular) and there exist positive constants σ and δ such that

ω² σmin[H(jω) + Hᵀ(−jω)] ≥ σ, ∀|ω| ≥ δ.

Using the positive real concept, the positive real lemma can be stated as follows.

Theorem 2.3 (Positive Real Lemma [5]) Consider that the system

ẋ(t) = Ax(t) + Bu(t),  y(t) = Cᵀx(t) + Du(t)            (2.7)

is controllable and observable, where A ∈ R^{n×n}, B, C ∈ R^{n×m}, D ∈ R^{m×m}, x ∈ Rⁿ, y, u ∈ R^m. The transfer function H(s) = Cᵀ(sI_n − A)^{−1}B + D is positive real if and only if there exist matrices 0 < P = Pᵀ ∈ R^{n×n}, L ∈ R^{n×m}, and W ∈ R^{m×m} such that

PA + AᵀP = −LLᵀ
PB − C = −LW
D + Dᵀ = WᵀW.

¹ In control theory, a proper transfer function is a transfer function where the order of the numerator is not greater than that of the denominator. A strictly proper transfer function is a transfer function where the order of the numerator is less than that of the denominator.


Theorem 2.4 ([5]) Consider the system in (2.7). Assume that the rational transfer matrix H(s) = Cᵀ(sI_n − A)^{−1}B + D has poles that lie in Re(s) < −γ, where γ > 0, and (A, B, C, D) is a minimal realization of H(s). Then, H(s − ε) for ε > 0 is positive real if and only if there exist matrices 0 < P = Pᵀ ∈ R^{n×n}, L ∈ R^{n×m}, and W ∈ R^{m×m} such that

PA + AᵀP = −LLᵀ − 2εP
PB − C = −LW
D + Dᵀ = WᵀW.

Using the positive real property, PA + AᵀP ≤ 0 can be obtained. The matrix H(s − ε) being positive real for ε > 0 is in fact strictly positive real. Based on this property, PA + AᵀP < 0 can be obtained. Moreover, if D = 0, then PB = C.
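For a scalar transfer function, the positive real condition of Theorem 2.2 can be checked directly in the frequency domain. The example H(s) = (s + 2)/(s + 1) below is an assumption chosen for illustration.

```python
import numpy as np

# Frequency-domain check of (strict) positive realness for the illustrative
# scalar transfer function H(s) = (s + 2)/(s + 1).  With all poles in
# Re(s) < 0, the key condition reduces to Re H(jw) > 0 for all w.

def H(s):
    return (s + 2.0) / (s + 1.0)

w = np.logspace(-4, 4, 100_000)
re_H = np.real(H(1j * w))

print("min Re H(jw) over the grid:", re_H.min())   # > 0 for this H(s)
print("H(inf) + H(inf)          :", 2 * np.real(H(1e12)))  # > 0, condition (a) holds
```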

2.2 Transformation-Related Preliminary

In this section, the Schur complement, feedback linearization, and additive state decomposition are introduced.

2.2.1 Schur Complement

Let X be a symmetric matrix written as

X = [ A, B ; Bᵀ, C ].

X/A is defined as the Schur complement of A in X:

X/A = C − BᵀA^{−1}B,

and X/C is defined as the Schur complement of C in X:

X/C = A − BC^{−1}Bᵀ.

Then [6, Appendix A.5.5], [7],

X > 0 ⇔ A > 0 and X/A > 0,
X > 0 ⇔ C > 0 and X/C > 0.

Furthermore, if A > 0, then X ≥ 0 ⇔ X/A ≥ 0;

if C > 0, then X ≥ 0 ⇔ X/C ≥ 0.
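A quick numerical check of the equivalence is given below; the matrices are arbitrary assumptions chosen for this sketch.

```python
import numpy as np

# Schur-complement test: X > 0 iff A > 0 and X/A = C - B^T A^{-1} B > 0.

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[2.0]])

X = np.block([[A, B],
              [B.T, C]])
X_over_A = C - B.T @ np.linalg.inv(A) @ B          # Schur complement of A in X

def is_pd(M):
    """Positive definiteness via eigenvalues of a symmetric matrix."""
    return bool(np.all(np.linalg.eigvalsh(M) > 0))

print("X   > 0 :", is_pd(X))          # True
print("A   > 0 :", is_pd(A))          # True
print("X/A > 0 :", is_pd(X_over_A))   # True, consistent with the equivalence
```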

2.2.2 Feedback Linearization

Three types of feedback linearization, namely, input–output linearization, input–state linearization, and approximation linearization, are introduced. Most of the basic concepts on input–output linearization and input–state linearization are cited from [8–10], while the basic concepts on approximation linearization are cited from [11, 12].

2.2.2.1 Input–Output Linearization

(1) SISO Case

Consider a class of single-input single-output (SISO) systems

ẋ = f(x) + g(x)u,  y = h(x),            (2.8)

where f : Rⁿ → Rⁿ, g : Rⁿ → Rⁿ, h : Rⁿ → R, x ∈ Rⁿ, u, y ∈ R. The variables x, u, y represent the state, control input, and output, respectively. Consider a smooth scalar function h(x), whose gradient is defined as

∇h = ∂h/∂x.            (2.9)

The gradient ∇h ∈ R^{1×n} is a row vector, and the jth component is denoted by (∇h)_j = ∂h/∂x_j. The following definition is presented:

Definition 2.6 Consider a smooth scalar function h(x) : Rⁿ → R and a vector field f(x) : Rⁿ → Rⁿ. The map x → ∇h(x) · f(x) is called the Lie derivative of the function h with respect to the vector field f and is denoted by L_f h. L_f h can be interpreted as the directional derivative of the function h along the integral curves of f. Furthermore, the higher-order Lie derivatives are deduced as

L⁰_f h = h,  L^i_f h = L_f(L^{i−1}_f h) = ∇(L^{i−1}_f h) f,

where L^i_f h denotes the ith-order Lie derivative of h along f. If g is another vector field, then the scalar function L_g L_f h = ∇(L_f h) g is defined.


Using these definitions, the derivative of y is given by

ẏ = ∇h ẋ = L_f h + L_g h u.            (2.10)

If L_g h ≠ 0 in the set x ∈ Ω, then

u = (L_g h)^{−1}(−L_f h + v)

transforms Eq. (2.10) into

ẏ = v,            (2.11)

where v ∈ R is the newly defined control input. If L_g h ≡ 0 in the set x ∈ Ω, then Eq. (2.10) becomes ẏ = L_f h. In this case, the second derivative of y is given by

ÿ = L²_f h + L_g L_f h u.

If L_g h = 0, L_g L_f h = 0, ..., L_g L^{ρ−2}_f h = 0 and L_g L^{ρ−1}_f h ≠ 0 in the set Ω, the system (2.8) is said to have relative degree ρ, where ρ ∈ Z+ and ρ ≤ n. Then,

y^{(ρ)} = L^ρ_f h + L_g L^{ρ−1}_f h u,

and

u = (L_g L^{ρ−1}_f h)^{−1}(−L^ρ_f h + v)

further transforms (2.10) into

y^{(ρ)} = v.            (2.12)

It would appear that input–output linearization can transform the nonlinear control problem into a linear problem if the behavior of y(t) and u(t) is the only subject of interest. The above process is a model transformation. However, x is n-dimensional, whereas the input–output system (2.12) only has ρ ≤ n dimensions. Therefore, the state that has disappeared should be accounted for. This problem is discussed in the following sections.

(2) Problem Formulation

Generally, the input–output linearization problem is formulated as follows. Consider a multiple-input and multiple-output (MIMO) nonlinear system of the form

ẋ = f(x) + g(x)u,  y = h(x),            (2.13)


where x ∈ Rⁿ, u ∈ R^m, y ∈ R^m, and f : Ω_x → Rⁿ, g : Ω_x → Rⁿ, h : Ω_x → R^m are sufficiently smooth on a domain Ω_x ⊂ Rⁿ. The objective of input–output linearization is to find a feedback transformation

v = α(x) + β(x)u

and a state transformation

z = [zᵢ ; z_o] = ψ(x)

such that

żᵢ = fᵢ(zᵢ, z_o),  ż_o = A_o z_o + B_o v,  y = Cᵀ_o z_o,            (2.14)

where ψ : Ω_x → Rⁿ, α : Ω_x → R^m, and β : Ω_x → R^{m×m} are continuously differentiable functions, and the Jacobian of ψ(x) is nonsingular on Ω_x; A_o ∈ R^{ρ×ρ}, B_o, C_o ∈ R^{ρ×m} are constant matrices with the pair (A_o, B_o) controllable and the pair (A_o, Cᵀ_o) observable; fᵢ is a continuously differentiable function.

(3) Zero Dynamics

The system (2.14) comprises two parts, namely, the internal subsystem żᵢ = fᵢ(zᵢ, z_o) and the external subsystem ż_o = A_o z_o + B_o v. Given that the external part is a linear relationship between y and v, it is easy to design controllers, such as those for tracking problems. The result (2.12) is a special case of the transformed system (2.14), where

v = L^ρ_f h + L_g L^{ρ−1}_f h u,
z_o = [y  ẏ  ···  y^{(ρ−1)}]ᵀ ∈ R^ρ,

A_o is the ρ × ρ matrix with ones on the superdiagonal and zeros elsewhere, B_o = [0 ··· 0 1]ᵀ, and C_o = [1 0 ··· 0]ᵀ. However, if the internal subsystem is unstable, then zᵢ(t) → ∞ as t → ∞ may occur, even if the output has tracked the reference. Consequently, x(t) → ∞ and then u(t) → ∞ as t → ∞, making it meaningless to design the controller. To further explain this, the zero dynamics of the system (2.13) is defined as

żᵢ = fᵢ(zᵢ, 0).


If the zero dynamics is stable, then the nonlinear system (2.13) is minimum phase. Otherwise, the nonlinear system (2.13) is nonminimum phase. In some cases, there is no internal subsystem, such as when ρ = n in the SISO case, which gives a minimum-phase system. For SISO LTI systems in the form of transfer functions, if a transfer function has only stable zeros, then the transfer function is minimum phase; otherwise, it is nonminimum phase. If there are no zeros, then it is minimum phase. For example, the s-transfer functions

G₁(s) = (s + 1)/(s(s + 2)),  G₂(s) = (s − 1)/(s(s + 2)),  G₃(s) = 1/(s(s + 2))

are minimum-phase, nonminimum-phase, and minimum-phase functions, respectively. The following z-transfer functions

G₁(z) = (z + 1.5)/(z(z + 0.9)),  G₂(z) = (z − 0.9)/(z(z + 0.9)),  G₃(z) = z/(z + 0.9)

are nonminimum-phase, minimum-phase, and minimum-phase functions, respectively. In the context of a tracking problem, the internal subsystem is generally required to be input-to-state stable (see Definition 2.12) as

żᵢ = fᵢ(zᵢ, δ),

where the state zᵢ is bounded subject to a bounded input δ, and δ = z_o for the tracking problem. In this case, the system is minimum phase.

2.2.2.2 Input–State Linearization

The objective of input–state linearization is to obtain a (nonunique) diffeomorphism ψ : Ω_x → Rⁿ for z = ψ(x) that transforms the system (2.13) into

ż = Az + Bβ^{−1}(x)(u − α(x)),

where z ∈ Rⁿ, β(x) is nonsingular for all x ∈ Ω_x, and (A, B) is completely controllable. The following feedback controller is then designed:

u = α(x) + β(x)v

such that

ż = Az + Bv.

A problem arises in that the output becomes y = h(ψ^{−1}(z)), which is still nonlinear.

2.2.2.3 Approximation Linearization

During input–output linearization or input–state linearization, it is possible that L_g L^{(ρ−1)}_f h(x) = 0 or β(x) is singular at x = 0; i.e., L_g L^{(ρ−1)}_f h(x) or β(x) is a function of order O(x) rather than O(1). In this case, approximation linearization is performed. In the input–output linearization problem, the relative degree² of a system may not be well defined; therefore, the input–output linearizing control law is no longer valid. Given that the relative degree fails to exist, a set of functions of the state ψᵢ(x), i = 1, ..., ρ, is used to approximate the output and its derivatives in a special way. Using the SISO system (2.8) as an example, the first function ψ₁(x) is used to approximate the output function:

h(x) = ψ₁(x) + φ₁(x),

where φ₁(x) is O(x)². Differentiating ψ₁(x) along the system trajectories results in

ψ̇₁(x) = L_f ψ₁(x) + L_g ψ₁(x)u.

If L_g ψ₁(x) is O(x) or of a higher order, it is neglected in the choice of ψ₂(x) such that

ψ₂(x) = L_f ψ₁(x).

Iteratively,

L_{f+gu} ψ_ρ(x) = b(x) + a(x)u + φ_ρ(x, u),

where a(x) is O(1).

Example 2.1 ([11, 12]) Consider the following system:

ẋ₁ = x₂
ẋ₂ = B(x₁x₄² − G sin x₃)
ẋ₃ = x₄
ẋ₄ = u
y = x₁ = h(x),

where x = [x₁ x₂ x₃ x₄]ᵀ. The output y is successively differentiated three times until the input u appears algebraically for the first time in

y^{(3)} = Bx₂x₄² − BGx₄ cos x₃ + 2Bx₁x₄u.

² The number of times the output signal must be differentiated before the input appears.


However, the coefficient of u is zero whenever x₁ or x₄ is zero. Therefore, the relative degree of this system is not well defined at the point of interest for the given output. Thus, the exact input–output linearization approach is inapplicable to this problem, and the approximation linearization method is more appropriate. A nonlinear change of coordinates z = ψ(x) transforms the system as follows:

z₁ = y = ψ₁(x),  ż₁ = x₂ = ψ₂(x) = z₂
ż₂ = −BG sin x₃ + Bx₁x₄²,  with z₃ = ψ₃(x) = −BG sin x₃ and φ₃(x) = Bx₁x₄² (the ignored term)
ż₃ = −BGx₄ cos x₃ = ψ₄(x) = z₄
ż₄ = BGx₄² sin x₃ − BG cos x₃ · u ≜ b(x) + a(x)u,

with b(x) = BGx₄² sin x₃ and a(x) = −BG cos x₃. Therefore, the relative degree is well defined on the set x₃ ∈ (−π/2, π/2). The design is given as

u = (1/(BG cos x₃))(BGx₄² sin x₃ − v).

As a result, the equations are rearranged to obtain

ż = Az + Bv + φ(x),  y = Cᵀz,

where

z = [z₁ z₂ z₃ z₄]ᵀ,  A = [0 1 0 0; 0 0 1 0; 0 0 0 1; 0 0 0 0],  B = [0 0 0 1]ᵀ,  φ(x) = [0  Bx₁x₄²  0  0]ᵀ,  C = [1 0 0 0]ᵀ.
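A short simulation sketch of this design is given below. The parameter values B and G, the outer-loop gains, the initial condition, and the step size are illustrative assumptions, not values from the book; the sign of the control law follows from differentiating ψ₄(x) = −BGx₄ cos x₃ along the dynamics.

```python
import numpy as np

# Minimal simulation sketch of the approximate-linearization design of Example 2.1.

B, G = 0.7, 9.8
K = np.array([1.0, 4.0, 6.0, 4.0])            # v = -K @ z places all poles at s = -1

def f(x, u):
    """Dynamics of Example 2.1."""
    x1, x2, x3, x4 = x
    return np.array([x2, B * (x1 * x4**2 - G * np.sin(x3)), x4, u])

def control(x):
    x1, x2, x3, x4 = x
    # approximate linearizing coordinates z = psi(x); phi_3 = B*x1*x4^2 is ignored
    z = np.array([x1, x2, -B * G * np.sin(x3), -B * G * x4 * np.cos(x3)])
    v = -K @ z
    return (B * G * x4**2 * np.sin(x3) - v) / (B * G * np.cos(x3))

x = np.array([0.5, 0.0, 0.1, 0.0])            # start near the origin, |x3| < pi/2
dt = 1e-3
for _ in range(int(20.0 / dt)):               # 20 s of forward-Euler simulation
    x = x + dt * f(x, control(x))

print("state after 20 s:", np.round(x, 4))    # approaches zero: output regulated
```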

2.2.3 Additive State Decomposition

2.2.3.1 Basic Concept [13]

Consider an “original” system as follows: x˙ = f (t, x,u) , x (0) = x0 ,

(2.15)


Fig. 2.3 Additive state decomposition on a dynamical control system resulting in a primary system and secondary system

where x∈ Rn . First, a “primary” system is considered, having the same dimensions as the original system:   x˙ p = fp t, xp , up , xp (0) = xp,0 ,

(2.16)

where xp ∈ Rⁿ. From the original and primary systems, the following "secondary" system is derived:

ẋ − ẋp = f(t, x, u) − fp(t, xp, up),  x(0) = x0.

New variables xs ∈ Rⁿ are defined as follows:

xs ≜ x − xp,  us ≜ u − up.            (2.17)

Then, the secondary system can be further written as follows:

ẋs = f(t, xp + xs, up + us) − fp(t, xp, up),  xs(0) = x0 − xp,0.            (2.18)

From the definition (2.17), it follows that x (t) = xp (t) + xs (t) ,t ≥ 0.

(2.19)

The process is shown in Fig. 2.3.
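A scalar numerical check of the decomposition is sketched below. The system ẋ = −x³ + u, the choice of primary model, and the input split are assumptions made for this sketch; the original state should equal the sum of the primary and secondary states at every time step.

```python
import numpy as np

# Numerical illustration of additive state decomposition (2.15)-(2.19).

dt, steps = 1e-3, 10_000

def f(t, x, u):          # original system:  x' = -x^3 + u
    return -x**3 + u

def fp(t, xp, up):       # primary system chosen as a linear model: xp' = -xp + up
    return -xp + up

x, xp, xs = 1.0, 1.0, 0.0                 # x(0) = xp(0) + xs(0)
for k in range(steps):
    t = k * dt
    u = np.sin(2 * np.pi * t)             # total input
    up, us = u, 0.0                       # primary input; secondary input us = u - up
    x_dot = f(t, x, u)
    xp_dot = fp(t, xp, up)
    xs_dot = f(t, xp + xs, up + us) - fp(t, xp, up)   # rule (2.18)
    x, xp, xs = x + dt * x_dot, xp + dt * xp_dot, xs + dt * xs_dot

print("x       =", x)
print("xp + xs =", xp + xs)               # matches x, as stated in (2.19)
```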

2.2.3.2 Example

Example 2.2 The concept of the additive state decomposition has been implicitly mentioned in the existing literature. The tracking controller design is an existing example that often requires a reference system to derive the error dynamics. The reference system (primary system) is assumed to be expressed as follows:


ẋr = f(t, xr, ur),  xr(0) = xr,0.

Based on the reference system, the error dynamics (secondary system) are derived as follows:

ẋe = f(t, xe + xr, u) − f(t, xr, ur),  xe(0) = x0 − xr,0,

where xe ≜ x − xr. This is a commonly used step to transform a tracking problem into a stabilization problem when adaptive control is used.

Example 2.3 Consider a class of systems as follows:

ẋ(t) = (A + ΔA(t))x(t) + A_d x(t − T) + Br(t)
e(t) = −(C + ΔC(t))ᵀx(t) + r(t)
x(θ) = ϕ(θ), θ ∈ [−T, 0].            (2.20)

The formula (2.20) is selected as the original system, and the primary system is designed as follows:

ẋp(t) = Axp(t) + A_d xp(t − T) + Br(t)
ep(t) = −Cᵀxp(t) + r(t)
xp(θ) = ϕ(θ), θ ∈ [−T, 0].            (2.21)

Then, the secondary system is determined using the rule (2.18):

ẋs(t) = (A + ΔA(t))xs(t) + A_d xs(t − T) + ΔA(t)xp(t)
es(t) = −(C + ΔC(t))ᵀxs(t) − ΔCᵀ(t)xp(t)
xs(θ) = 0, θ ∈ [−T, 0].            (2.22)

Through additive state decomposition, e(t) = ep(t) + es(t). Because ‖e(t)‖ ≤ ‖ep(t)‖ + ‖es(t)‖, the tracking error e(t) can be analyzed using ep(t) and es(t) separately. If ep(t) and es(t) are bounded and small, then so is e(t). Fortunately, (2.21) is a linear time-invariant system and can be analyzed independently of the secondary system (2.22), so that many tools, such as the transfer function, are available. In contrast, the transfer function tool cannot be directly applied to the original system (2.20) because it is time-varying. For more details, please refer to [13].

Example 2.4 Consider a class of nonlinear systems represented as follows:

ẋ = Ax + Bu + φ(y) + d,  x(0) = x0
y = Cᵀx,            (2.23)

where x, y, u represent the state, output, and input, respectively, and the function φ (·) is nonlinear. The objective is to design u such that y (t) − r (t) → 0 as t → ∞.


Fig. 2.4 Additive-state-decomposition-based tracking control

The formula (2.23) is selected as the original system and the primary system is designed as follows: x˙ p = Axp + Bup + φ (r) + d, xp (0) = x0 yp = CT xp .

(2.24)

Then, the secondary system is determined by the rule (2.18):

ẋs = Axs + Bus + φ(yp + Cᵀxs) − φ(r),  xs(0) = 0
ys = Cᵀxs,            (2.25)

where us ≜ u − up. Then, x = xp + xs and y = yp + ys. Here, the task yp − r → 0 is assigned to the linear time-invariant system (2.24) (a linear time-invariant system is simpler than a nonlinear system). On the other hand, the task xs → 0 is assigned to the nonlinear system (2.25) (a stabilizing control problem is simpler than a tracking problem). If the two tasks are accomplished, then y − r = (yp − r) + ys → 0. The basic concept is to decompose an original system into two subsystems in charge of simpler subtasks. Then, the controllers for the two subtasks are designed and finally combined to achieve the original control task. For more details, please refer to [14–16]. The process is shown in Fig. 2.4. Additive state decomposition is also used for stabilizing control [17]. Furthermore, additive state decomposition has been extended to additive output decomposition [18].

2.2.3.3 Comparison with Superposition Principle

A well-known example implicitly using additive state decomposition is the superposition principle, which is widely used in physics and engineering.


Table 2.1 Relationship between superposition principle and additive state decomposition

                               | Suitable systems   | Emphasis
Superposition principle        | Linear             | Superposition
Additive state decomposition   | Linear/nonlinear   | Decomposition

Superposition Principle.3 For all linear systems, the net response at a given place and time caused by two or more stimuli is the sum of the responses that would have been caused by each stimulus. For a simple linear system, x˙ = Ax + B (u1 + u2 ) , x (0) = 0 the statement of the superposition principle is given as x = xp + xs , where x˙ p = Axp + Bu1 , xp (0) = 0 x˙ s = Axs + Bu2 , xs (0) = 0. This result can also be derived using the additive state decomposition. Moreover, the relationship of the superposition principle and additive state decomposition is presented in Table 2.1. According to Table 2.1, additive state decomposition can be applied to linear and nonlinear systems.

2.3 Stability-Related Preliminary

In this section, Barbalat's lemma and stability concepts related to ordinary differential equations and functional differential equations are introduced.

2.3.1 Barbalat's Lemma

Definition 2.7 ([24, p. 123]) Suppose g : [0, ∞) → R. The function g(t) is uniformly continuous on [0, ∞) if for any ε > 0 there exists a δ = δ(ε) > 0 such that |d| < δ implies |g(t + d) − g(t)| < ε for all t in [0, ∞).

Lemma 2.1 (Barbalat's Lemma [24, p. 123]) Suppose f : [0, ∞) → R. If the differentiable function f(t) has a finite limit as t → ∞, and if ḟ(t) is uniformly continuous, then ḟ(t) → 0 as t → ∞.

³ This principle was first discussed for dielectrics by Hopkinson [19]. Prior research [20–22] may help in understanding this concept.


Remark 2.1 Given that g(t + d) − g(t) = ġ(t + sd)d for some s ∈ [0, 1] by the mean value theorem, the function g(t) is uniformly continuous on [0, ∞) if ġ is bounded on [0, ∞). Consequently, the continuous function g(t) = sin t is uniformly continuous on [0, ∞) because ġ(t) = cos t is bounded. The continuous functions g(t) = t² and g(t) = e^t are not uniformly continuous. Moreover, f(t) → constant as t → ∞ does not imply that ḟ(t) → 0 as t → ∞; the additional condition that ḟ(t) is uniformly continuous is required. For example, the function

f(t) = (1/t) sin(t²)

vanishes, but

ḟ(t) = −(1/t²) sin(t²) + 2 cos(t²) ↛ 0.

2.3.2 Ordinary Differential Equation

For simplicity, the dynamical system in the form of

ẋ = f(t, x)            (2.26)

is further discussed for t ≥ t0 ≥ 0, where f(t, 0) = 0, so that x = 0 is a solution.

Definition 2.8 For system (2.26), the solution x = 0 is stable if, for any t0 ≥ 0 and ε > 0, there exists δ = δ(ε, t0) > 0 such that ‖x(t0)‖ < δ implies ‖x(t)‖ < ε for all t ≥ t0. Otherwise, the solution is unstable. The solution x = 0 is uniformly stable if, for any ε > 0, there exists δ = δ(ε) > 0 independent of t0 such that if ‖x(t0)‖ < δ, then ‖x(t)‖ < ε for all t ≥ t0. For system (2.26), the solution x = 0 is asymptotically stable if it is stable and there exists some δ = δ(t0) > 0 such that ‖x(t0)‖ < δ implies x(t) → 0 as t → ∞. The solution x = 0 is uniformly asymptotically stable if it is uniformly stable and there exists some δ > 0 independent of t0 such that for any ε > 0, there exists a T = T(ε) > 0 satisfying ‖x(t)‖ < ε, ∀t ≥ t0 + T(ε), ∀‖x(t0)‖ < δ. For system (2.26), an equilibrium state x = 0 is exponentially stable if there exist α, λ, δ > 0 such that ‖x(t)‖ ≤ α‖x(t0)‖e^{−λ(t−t0)} for all ‖x(t0)‖ < δ and t ≥ t0.

Definition 2.9 A solution x(t) to Eq. (2.26) with initial state x(t0) is uniformly bounded if there exists ε > 0, independent of t0, and for every a ∈ (0, ε), there is δ = δ(ε) > 0, independent of t0, such that ‖x(t0)‖ ≤ a ⇒ ‖x(t)‖ < δ, ∀t ≥ t0. The solution x(t)


to Eq. (2.26) with initial state x(t0) is uniformly ultimately bounded with ultimate bound δ > 0 if there exists ε > 0, independent of t0, and for every a ∈ (0, ε), there is T = T(ε, δ) > 0 such that ‖x(t0)‖ < a ⇒ ‖x(t)‖ ≤ δ for all t ≥ t0 + T.

Remark 2.2 The definition of stability in Definition 2.8 is also called stability in the sense of Lyapunov, or Lyapunov stability. Stability implies that the system trajectory can be kept arbitrarily close to the origin by starting sufficiently close to it. For stability, the system trajectory may never reach the origin. However, for asymptotic stability, a state starting close to the origin will eventually converge to the origin. Furthermore, exponential stability specifies how fast the system trajectory converges to the origin. The three definitions given above present the following relationships: exponential stability ⊂ asymptotic stability ⊂ stability. Local stability is related to the initial state ‖x(t0)‖ < δ, which is determined by δ > 0, whereas global stability is independent of the initial state, i.e., δ = ∞. The concept of input-to-state stability (ISS) is now introduced [10].

Definition 2.10 A continuous function α : [0, a) → [0, ∞) belongs to class K if it is strictly increasing and α(0) = 0. It belongs to class K∞ if a = ∞ and α(r) → ∞ as r → ∞.

Definition 2.11 A continuous function β : [0, a) × [0, ∞) → [0, ∞) belongs to class KL if, for each fixed s, the mapping β(r, s) belongs to class K with respect to r and, for each fixed r, the mapping β(r, s) is decreasing with respect to s and β(r, s) → 0 as s → ∞.

With these definitions, the concept of ISS can be introduced. Consider the system

ẋ = f(t, x, u)            (2.27)

where f : [0, ∞) × Rⁿ × R^m → Rⁿ.

Definition 2.12 The system (2.27) is input-to-state stable if there exist a class KL function β and a class K function γ such that, for any initial state x(t0) and any bounded input u(t), the solution x(t) exists for all t ≥ t0 and satisfies

‖x(t)‖ ≤ β(‖x(t0)‖, t − t0) + γ( sup_{t0≤τ≤t} ‖u(τ)‖ ).

The Lyapunov-like theorem given below provides a sufficient condition for ISS.

Theorem 2.5 Let V : [0, ∞) × Rⁿ → R be a continuously differentiable function such that

α1(‖x‖) ≤ V(t, x) ≤ α2(‖x‖)
∂V/∂t + (∂V/∂x) f(t, x, u) ≤ −W(x), ∀‖x‖ ≥ ρ(‖u‖) > 0


for all (t, x, u) ∈ [0, ∞) × Rⁿ × R^m, where α1, α2 are class K∞ functions, ρ is a class K function, and W(x) is a continuous positive definite function on Rⁿ. Then the system (2.27) is input-to-state stable with γ = α1^{−1} ∘ α2 ∘ ρ.
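A simple simulation can illustrate the ISS estimate. The scalar system ẋ = −x + u and the input signal below are assumptions for this sketch; with V = x²/2 and V̇ ≤ −x²/2 for |x| ≥ 2|u|, Theorem 2.5 yields the ISS gain γ(r) = 2r, and the trajectory should ultimately stay within that bound.

```python
import numpy as np

# Numerical illustration of an ISS bound for the assumed system x' = -x + u.

dt, T = 1e-3, 40.0
t = np.arange(0.0, T, dt)
u = 0.5 * np.sign(np.sin(0.7 * t))        # bounded input, sup|u| = 0.5

x = np.zeros_like(t)
x[0] = 3.0                                # start far from the origin
for k in range(t.size - 1):
    x[k + 1] = x[k] + dt * (-x[k] + u[k])

gain_estimate = 2.0 * np.max(np.abs(u))   # gamma(sup|u|) = 2 * sup|u| from Theorem 2.5
print("ISS ultimate-bound estimate :", gain_estimate)
print("max |x| over the last 10 s  :", np.max(np.abs(x[t > T - 10.0])))  # well below it
```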

2.3.3 Functional Differential Equation

Let xt(s) ≜ x(t + s), s ∈ [−τ, 0], τ > 0, where xt ∈ C([−τ, 0], Rⁿ). The dynamical system in the form of a retarded functional differential equation is given as follows:

ẋ = f(t, xt)            (2.28)

with xt0(s) = φ(s), s ∈ [−τ, 0], t0 ∈ R. The function f : R × C([−τ, 0], Rⁿ) → Rⁿ is supposed to ensure that the solution x(t0, φ)(t) through (t0, φ) is continuous in (t0, φ, t) in the domain of definition of the function. Moreover, f(t, 0_{n×1}) = 0_{n×1}, and x = 0_{n×1} is the equilibrium state.

Definition 2.13 ([23]) For system (2.28), the solution x = 0 is stable if, for any t0 ∈ R and ε > 0, there exists δ = δ(ε, t0) such that ‖φ‖_{[−τ,0]} ≤ δ implies ‖xt(t0, φ)‖_{[−τ,0]} ≤ ε for t ≥ t0. The solution x = 0 is uniformly stable if the number δ in the definition of stability is independent of t0. The solution x = 0 is asymptotically stable if it is stable and there exists δ = δ(t0) > 0 such that ‖φ‖_{[−τ,0]} ≤ δ implies x(t0, φ)(t) → 0 as t → ∞. The solution x = 0_{n×1} is uniformly asymptotically stable if it is uniformly stable and there exists δ > 0 such that for every ε > 0, there is a T = T(δ, ε) such that ‖φ‖_{[−τ,0]} ≤ δ implies ‖x(t0, φ)(t)‖ ≤ ε for t ≥ t0 + T for every t0 ∈ R. The solution x = 0 is exponentially stable if there are α, λ, δ > 0 such that ‖x(t0, φ)(t)‖ ≤ α‖φ‖_{[−τ,0]}e^{−λ(t−t0)} for all ‖φ‖_{[−τ,0]} ≤ δ and t ≥ t0.

Consider the following general perturbed time-delay equation:

ẋ(t) = f(t, xt, w)            (2.29)

with x(t0 + s) = φ(s), s ∈ [−τ, 0], t0 ∈ R, τ > 0, where xt ∈ C([−τ, 0], Rⁿ) and w(t) ∈ R^m is a piecewise continuous and bounded perturbation. Driven by w, the function f : R × C([−τ, 0], Rⁿ) × R^m → Rⁿ is supposed to ensure that the solution x(t0, φ)(t) through (t0, φ) is continuous in (t0, φ, t) in the domain of definition of the function.

Definition 2.14 A solution xt(t0, φ) to Eq. (2.29) with x(t0 + s) = φ(s), s ∈ [−τ, 0] is uniformly bounded if there exists ε > 0, independent of t0, and for every a ∈ (0, ε), there is δ = δ(ε) > 0, independent of t0, such that ‖φ‖_{[−τ,0]} ≤ a ⇒ ‖x(t0, φ)(t)‖ ≤ δ, ∀t ≥ t0. The solution xt(t0, φ) to Eq. (2.29) with


x(t0 + s) = φ(s), s ∈ [−τ, 0] is uniformly ultimately bounded with ultimate bound δ > 0 if there exists ε > 0, independent of t0, and for every a ∈ (0, ε), there is T = T(ε, δ) > 0 such that ‖φ‖_{[−τ,0]} ≤ a ⇒ ‖x(t0, φ)(t)‖ ≤ δ for all t ≥ t0 + T.

This definition proposes a method for determining the ultimate bound and convergence rate for Eq. (2.29) when the Lyapunov functionals and their derivatives along the solutions to Eq. (2.29) are available. The following preliminary result is discussed before proceeding further.

Lemma 2.2 ([26]) Let g(t) be a continuous function with g(t) ≥ 0 for all t ≥ t0 − r0 and k0 > sup_{s∈[−r0,0]} g(t0 + s). Let

ġ(t) ≤ −α1 g(t) + α2 sup_{s∈[−r0,0]} g(t + s) + β

for t ≥ t0, where r0, α1, α2, β > 0. If α1 > α2, then g(t) ≤ g0 + k0 e^{−λ0(t−t0)} for t ≥ t0, where g0 = β/(α1 − α2) and λ0 is the unique solution to the equation −λ0 = −α1 + α2 e^{λ0 r0}.

The following theorem can help determine the ultimate bounds and convergence rates for perturbed time-delay systems when the Lyapunov functionals and their derivatives are available.

Theorem 2.6 ([27]) Suppose that there exists a Lyapunov functional V(t, xt) : [t0, ∞) × C([−τ, 0], Rⁿ) → [0, ∞) such that

γ1‖x(t)‖² ≤ V(t, xt) ≤ γ2‖x(t)‖² + Σ_{i=1}^{N1} ρi ∫_{−τi}^{0} ‖xt(s)‖² ds + Σ_{i=1}^{N̄1} Σ_{j=1}^{N2} ρ̄ij ∫_{−τ̄i}^{0} ∫_{−τ̃j+s}^{0} ‖xt(ξ)‖² dξ ds + c            (2.30)

where N1, N̄1, N2 ∈ N, γ1, γ2, c, ρi, ρ̄ij, τi, τ̄i, τ̃j ≥ 0, and there exists σ ≥ 0 such that

V̇(t, xt) ≤ −γ3‖x(t)‖² + σ,  γ3 > 0,            (2.31)

where V̇(t, xt) is the derivative of V(t, xt) along the solutions to Eq. (2.29). Then,
(i) V(t, xt) satisfies

V(t, xt) ≤ v0 + k e^{−λt},            (2.32)

where k > sup_{s∈[t0−τ,t0]} V(s, xs) and λ is the unique solution to the equation

−λ = −(v1 + v2 + 1)/v3 + ((v1 + v2)/v3) e^{λτ},            (2.33)

with

r1 = max_i(τi),  r2 = max_{i,j}(τ̄i + τ̃j),  τ = max(r1, r2)
v0 = (γ2 σ)/γ3 + (σ/γ3)( Σ_{i=1}^{N1} ρi τi + Σ_{i=1}^{N̄1} Σ_{j=1}^{N2} ρ̄ij (τ̄i τ̃j + (1/2)τ̄i²) ) + c
v1 = Σ_{i=1}^{N1} ρi/γ3,  v2 = Σ_{i=1}^{N̄1} Σ_{j=1}^{N2} (ρ̄ij/γ3) τ̄i,  v3 = γ2/γ3.            (2.34)

(ii) ‖x(t0, φ)(t)‖ satisfies

‖x(t0, φ)(t)‖ ≤ ( v0/γ1 + (k/γ1) e^{−λt} )^{1/2},  γ1 > 0,            (2.35)

which implies that the solutions to Eq. (2.29) are uniformly bounded and uniformly ultimately bounded with ultimate bound √(v0/γ1) + ϵ, where ϵ > 0 is an arbitrarily small number.

Proof Let V(t) = V(t, xt) and x(t) = x(t0, φ)(t) for simplicity. From the inequality (2.31), the following is obtained:

‖x(t)‖² ≤ (σ − V̇(t))/γ3.            (2.36)

Substituting inequality (2.36) into inequality (2.30) yields

V(t) ≤ (γ2/γ3)(σ − V̇(t)) + Σ_{i=1}^{N1} ρi ∫_{−τi}^{0} (σ − V̇(t + s))/γ3 ds + Σ_{i=1}^{N̄1} Σ_{j=1}^{N2} ρ̄ij ∫_{−τ̄i}^{0} ∫_{−τ̃j+s}^{0} (σ − V̇(t + ξ))/γ3 dξ ds + c.            (2.37)

Note that

Σ_{i=1}^{N1} (ρi/γ3) ∫_{−τi}^{0} (−V̇(t + s)) ds ≤ −v1 V(t) + v1 sup_{s∈[−r1,0]} V(t + s)

and

Σ_{i=1}^{N̄1} Σ_{j=1}^{N2} (ρ̄ij/γ3) ∫_{−τ̄i}^{0} ∫_{−τ̃j+s}^{0} (−V̇(t + ξ)) dξ ds ≤ −v2 V(t) + v2 sup_{s∈[−r2,0]} V(t + s).

Then, the right-hand side of inequality (2.30) becomes


V(t) ≤ −v3 V̇(t) − v1 V(t) + v1 sup_{s∈[−r1,0]} V(t + s) − v2 V(t) + v2 sup_{s∈[−r2,0]} V(t + s) + v0,

where v0, v1, v2, v3 are defined in (2.34). Rearranging the inequality along with the above results gives

V̇(t) ≤ −((v1 + v2 + 1)/v3) V(t) + ((v1 + v2)/v3) sup_{s∈[−τ,0]} V(t + s) + v0/v3.            (2.38)

Note that V(t) is a continuous function with V(t) ≥ 0 for all t ≥ −τ. Using Lemma 2.2, inequality (2.32) is obtained, where k > sup_{s∈[t0−τ,t0]} V(s) and λ satisfies Eq.

(2.33). Furthermore, if γ1‖x(t)‖² ≤ V(t) with γ1 > 0, then inequality (2.35) is satisfied. Definition 2.14 then completes the proof. ∎

Corollary 2.1 Suppose that there exists a Lyapunov functional V(t, xt) : [t0, ∞) × C([−τ, 0], Rⁿ) → R+ ∪ {0} such that

γ1‖x(t)‖² ≤ V(t, xt) ≤ γ2‖x(t)‖² + ∫_{t−T}^{t} ‖x(s)‖² ds,            (2.39)

where γ1, γ2 > 0. If there exists σ ≥ 0 such that

V̇(t, xt) ≤ −γ3‖x(t)‖² + σ,            (2.40)

where γ3 > 0, then the solutions to Eq. (2.29) are uniformly ultimately bounded with respect to the bound √(σ(γ2 + T)/(γ1γ3)).

Proof Based on Theorem 2.6, c, ρi, ρ̄ij, τ̄i, τ̃j are set to zero except for ρ1 = 1, τ1 = T. ∎

Remark 2.3 The constant c used in (2.30) enlarges the class of functionals. For example, the constant c can represent the upper bound of some variable independent of the state or of some bounded functionals such as ∫_{t−τ}^{t} satᵀ(x(s)) sat(x(s)) ds, where sat(·) denotes a saturated term.

Remark 2.4 The existing sufficient conditions in the setting of the Lyapunov–Razumikhin theorems or the Lyapunov functionals are often of the form [28, 29]

α1(‖xt(0)‖) ≤ V(xt) ≤ α2(‖xt‖_c)            (2.41)
V̇(xt) ≤ −α3(‖xt‖_c) + σ,            (2.42)

where ‖·‖_c denotes a norm on the space C([−τ, 0], Rⁿ), and α1, α2, α3 are K∞ functions (or α3 is a K function). By eliminating ‖xt‖_c from inequality (2.42), the following is obtained:

V̇(xt) ≤ −(α3 ∘ α2^{−1})(V(xt)) + σ.            (2.43)

By employing inequality (2.43), the ISS or uniform ultimate boundedness property can be obtained. However, the Lyapunov functional with properties (2.30) and (2.31) is of the following form:

α1(‖xt(0)‖) ≤ V(xt) ≤ α2(‖xt‖_c)
V̇(xt) ≤ −α3(‖xt(0)‖) + σ,            (2.44)

where ‖·‖_c differs from ‖·‖ in that the former is defined on the space C([−τ, 0], Rⁿ), while the latter is defined on the space Rⁿ. Positive constants c1 and c2 may not exist such that c1‖xt(0)‖ ≤ ‖xt‖_c ≤ c2‖xt(0)‖. Without this relationship, inequality (2.43) is difficult to obtain based only on inequality (2.44). Consequently, the ISS or uniform ultimate boundedness property is difficult to obtain using existing methods.
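Lemma 2.2 can be probed numerically with an assumed worst-case delay inequality, simulated as an equality; the constants below are illustrative assumptions. The decay rate λ0 is obtained by bisection on the characteristic equation, and the simulated trajectory is compared against the bound g0 + k0 e^{−λ0 t}.

```python
import numpy as np

# Numerical illustration of Lemma 2.2 with assumed constants.

a1, a2, beta, r0 = 2.0, 0.5, 0.2, 0.3
g0 = beta / (a1 - a2)

# solve  -lam = -a1 + a2*exp(lam*r0)  for lam > 0 by bisection
lo, hi = 0.0, a1
for _ in range(200):
    lam = 0.5 * (lo + hi)
    lo, hi = (lam, hi) if (-lam + a1 - a2 * np.exp(lam * r0)) > 0 else (lo, lam)
lam0 = lam

dt = 1e-3
n_delay = int(r0 / dt)
hist = [1.0] * (n_delay + 1)                 # constant initial history g = 1
k0 = 1.0 + 1e-6                              # k0 > sup of the initial history
traj, bound = [], []
for k in range(int(10.0 / dt)):
    g = hist[-1]
    g_new = g + dt * (-a1 * g + a2 * max(hist[-n_delay - 1:]) + beta)
    hist.append(g_new)
    traj.append(g_new)
    bound.append(g0 + k0 * np.exp(-lam0 * (k + 1) * dt))

print("lambda0 =", round(lam0, 4), " g0 =", g0)
print("bound violated anywhere?", bool(np.any(np.array(traj) > np.array(bound) + 1e-9)))
```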

2.4 Rejection Problem and Tracking Problem

Assume the state-space representation of system P in Fig. 2.5 is as follows:

ẋ = f(x) + g(x)u,  x(0) = x0
y = h(x),            (2.45)

where f : Rⁿ → Rⁿ, f(0) = 0, g : Rⁿ → R^{n×m}, u ∈ R^m, h : Rⁿ → R^m is continuous with h(0) = 0, and x0 ∈ Rⁿ. The objective is to design controller C to obtain e(t) → 0 as t → ∞ (e is shown in Fig. 2.5) while the state x remains bounded. Figure 2.5 shows that, depending on how r ∈ R^m appears in the closed loop, the problem above can be classified into a rejection problem and a tracking problem. To clarify the problems considered, two assumptions are imposed on (2.45).

Assumption 2.1 There exists a differentiable function V : Rⁿ → [0, ∞) such that

V(x) ≥ c1‖x‖²            (2.46)

(∂V(x)/∂x)ᵀ f(x) ≤ −c2‖x‖²,            (2.47)

where c1, c2 > 0.

Assumption 2.2 The signal r is constant, i.e., ṙ = 0, or r is generated by

ẋr = 0,  r = h(xr),            (2.48)

where xr ∈ Rⁿ is the desired state.

(1) Rejection Problem In Fig. 2.5a, the state-space representation of system P is written based on (2.45) x˙ = f (x) + g (x) (v + r) y = h (x) ,

(2.49)

where v ∈ Rm and u = v + r. The control problem is formulated to design the controller v for obtaining e (t) = −y (t) → 0 as t → ∞, while the state x is bounded. This is a rejection problem. Equation (2.49) is rewritten as follows: x˙ = f (x) + g (x) u y = h (x) . To obtain the controller, the Lyapunov function is designed as 1 W (x) = V (x) + uT u. 2 Then, W˙ (x) =



∂ V (x) ∂x



T

f (x) +

∂ V (x) ∂x

T

g (x) u + uT v˙ ,

(2.50)

where r˙ = 0 is utilized (Assumption 2.2). According to (2.50), if the controller v is designed as ∂ V (x) v˙ = −gT (x) ∂x then, using (2.46) and (2.47) in Assumption 2.1, x (t) → 0 can be represented as t → ∞ in (2.45), which further implies y (t) → 0 as t → ∞. (2) Tracking Problem In Fig. 2.5b, the tracking problem is formulated to design the controller v to obtain e (t) = r (t) − y (t) → 0 as t → ∞ in (2.45), while the state x is bounded. By subtracting (2.48) from (2.45), the tracking problem is often solved based on an error system as follows:

Fig. 2.5 Tracking and rejection problems

2.4 Rejection Problem and Tracking Problem

z˙ = f˜ (z, xr ) + g (x) v e = h˜ (z, xr ) ,

45

(2.51)

where z  x − xr , h˜ (z, xr )  h (xr ) − h (z + xr ) , and f˜ (z, xr )  f (z + xr ) . From these definitions, one has h˜ (0, xr ) ≡ 0 and f˜ (0, xr ) ≡ 0. Obviously, function f˜ differs from function f. This difference implies that function V in Assumption 2.1 cannot be used directly. A new feedback control term may be needed to stabilize the error system. Moreover, xr or z cannot be obtained directly. For some classes of signals, xr is difficult to obtain from r. These are the challenges faced when designing controller C. Based on the analysis above, the tracking problem appears to be more difficult to solve than the rejection problem.

References 1. Smith, J. O. (2007). Introduction to digital filters: With audio applications. W3K Publishing. 2. Bode, H. W. (1945). Network analysis and feedback amplifier design. New York: Van Nostrand. 3. Aström, K. J., & Murray, R. M. (2010). Feedback systems: An introduction for scientists and engineers. Princeton, NJ: Princeton University Press. 4. Green, M., & Limebeer, D. J. N. (1995). Robust linear control. Englewood Cliffs: Prentice Hall. 5. Lozano, R., Brogliato, B., Egeland, O., & Maschke, B. (2000). Dissipative systems analysis and control: Theory and applications. New York: Springer. 6. Boyd, S., & Vandenberghe, L. (2004). Convex optimization. Cambridge: Cambridge University Press. 7. Zhang, F. (Ed.). (2005). The Schur complement and its applications. US: Springer. 8. Isidori, A. (1995). Nonlinear control systems. New York: Springer-Verlag. 9. Alleyne, A., & Pomykalski, M. (2000). Control of a class of nonlinear systems subject to periodic exogenous signals. IEEE Transactions on Control Systems Technology, 8(2), 279– 287. 10. Khalil, H. K. (2002). Nonlinear systems. Englewood Cliffs, NJ: Prentice-Hall. 11. Hauser, J., Sastry, S., & Kokotovic, P. (1992). Nonlinear control via approximate input-output linearization: The ball and beam example. IEEE Transaction on Automatic Control, 37(3), 392–398. 12. Ghosh, J., & Paden, B. (2000). Nonlinear repetitive control. IEEE Transactions on Automatic Control, 45(5), 949–954. 13. Quan, Q, & Cai, K.-Y. (2009). Additive decomposition and its applications to internal-modelbased tracking. In Joint 48th IEEE Conference on Decision and Control and 28th Chinese Control Conference (pp. 817–822). Shanghai, China. 14. Quan, Q., Lin, H., & Cai, K.-Y. (2014). Output feedback tracking control by additive state decomposition for a class of uncertain systems. International Journal of Systems Science, 45(9), 1799–1813. 15. Quan, Q., Cai, K.-Y., & Lin, H. (2015). Additive-state-decomposition-based tracking control framework for a class of nonminimum phase systems with measurable nonlinearities and unknown disturbances. International Journal of Robust and Nonlinear Control, 25(2), 163– 178. 16. Quan, Q., Jiang, L., & Cai, K.-Y. (2014). Discrete-time output-feedback robust repetitive control for a class of nonlinear systems by additive state decomposition. online: http://arxiv.org/abs/ 1401.1577.

46

2 Preliminary

17. Quan, Q., Du, G.-X., & Cai, K.-Y. (2016). Proportional-integral stabilizing control of a class of MIMO systems subject to nonparametric uncertainties by additive-state-decomposition dynamic inversion design. IEEE/ASME Transactions on Mechatronics, 21(2), 1092–1101. 18. Quan, Q., & Cai, K.-Y. (2012). Additive-output-decomposition-based dynamic inversion tracking control for a class of uncertain linear time-invariant systems. In The 51st IEEE Conference on Decision and Control (pp. 2866–2871). Maui, Hawaii. 19. Murnaghan, F. D. (1928). The Boltzmann-Hopkinson principle of superposition as applied to dielectrics. Journal of the AIEE, 47(1), 41–43. 20. Bromwich, T. J. (1917). Normal coordinates in dynamical systems. Proceedings of the London Society, 2(1), 401–448. 21. Carson, J. R. (1917). On a general expansion theorem for the transient oscillations of a connected system. Physical Review, 10(3), 217. 22. Carson, J. R. (1919). Theory of the transient oscillations of electrical networks and transmission systems. Transactions of the American Institute of Electrical Engineers, 38(1), 345–427. 23. Hale, J. K., & Verduyn, L. S. M. (1993). Introduction to functional differential equations. New York: Springer. 24. Slotine, J. J. E., & Li, W. (1991). Applied nonlinear control. New Jersey: Prentice-Hall. 25. Oucheriah, S. (1999). Robust tracking and model following of uncertain dynamic delay systems by memoryless linear controllers. IEEE Transactions on Automatic Control, 44(7), 1473–1577. 26. Oucheriah, S. (2001). Adaptive robust control of a class of dynamic delay systems with unknown uncertainty. International Journal of Adaptive Control and Signal Processing, 15(1), 53–63. 27. Quan, Q., & Cai, K.-Y. (2012). A new method to obtain ultimate bounds and convergence rates for perturbed time-delay systems. International Journal of Robust and Nonlinear Control, 22(16), 1873–1880. 28. Teel, A. R. (1998). Connections between Razumikhin-type theorems and the ISS nonlinear small gain theorem. IEEE Transaction on Automatic Control, 43(7), 960–964. 29. Pepe, P., & Jiang, Z.-P. (2006). A Lyapunov methodology for ISS and iISS of time-delay systems. Systems and Control Letters, 55(12), 1006–1014.

Chapter 3

Repetitive Control for Linear System

Linear time-invariant (LTI) systems are systems that are both linear and time-invariant. A system is considered linear if the output of the system is scaled by the same amount as the input given to the system. Moreover, this system follows the superposition principle, which implies that the sum of all the inputs will be the sum of the outputs of the individual inputs. Time-invariant systems are systems in which the output caused by a particular input does not change with time and only depends on when that input was applied. This class of systems is very important in the control field where many mature tools and methods exist. Before discussing nonlinear systems, the results of LTI systems must be reviewed to allow for the easy introduction of repetitive control (RC, or repetitive controller, which is also designated as RC) for nonlinear systems. On the other hand, any RC methods applicable to nonlinear systems should be first applicable to LTI systems. Therefore, it is necessary to apply these methods to LTI systems first and then move onto nonlinear systems. Moreover, some RC methods for nonlinear systems are an extended form or a combination of the methods in LTI systems. Therefore, it is important and necessary to first be aware of the RC methods for LTI systems. LTI systems often employ two types of models, i.e., transfer function models and state-space models. This chapter aims to answer the following question: How do you design a repetitive controller for LTI systems based on transfer function models and state-space models? Answering this question involves the introduction to RC based on continuoustime transfer functions and discrete-time transfer functions. For state-space models, some uncertain continuous-time linear state-delayed systems are considered typical LTI systems. An RC and a filtered RC that employ output feedback are proposed. Although the model considered here is infinite-dimensional, it can also cover its finite-dimensional LTI counterpart. A part of contents in this chapter is based on [1].

© Springer Nature Singapore Pte Ltd. 2020 Q. Quan and K.-Y. Cai, Filtered Repetitive Control with Nonlinear Systems, https://doi.org/10.1007/978-981-15-1454-8_3

47

48

3 Repetitive Control for Linear System

3.1 Repetitive Control based on Transfer Function Model The plug-in single-input single-output (SISO) RC system is considered in the form of both continuous-time transfer functions and discrete-time transfer functions. The methods for designing the dynamic compensator and the low-pass filter are discussed in detail.

3.1.1 Continuous-Time Transfer Function The design and synthesis methods for RC systems vary according to different configurations. In the case of the plug-in SISO RC system shown in Fig. 3.1, the design problem mainly involves choosing and optimizing the dynamic compensator B (s) and the low-pass filter Q (s) to make the output Y (s) track the reference R (s) while compensating for the unknown disturbance D (s). Selection of controller parameters involves a trade-off among steady-state accuracy, robustness, and transient response of an RC system. Here, P (s) is the generalized controlled plant. The controller in Fig. 3.1 is written as U (s) = E (s) + Ur (s) , where E (s) is the current feedback and Ur (s) is the filtered repetitive control (FRC, or filtered repetitive controller, which is also designated as FRC) term. Furthermore, the FRC is written as Ur (s) = or

Q (s) e−sT B (s) E (s) 1 − Q (s) e−sT

Ur (s) = Q (s) e−sT (Ur (s) + B (s) E (s)) .

Fig. 3.1 Plug-in RC system diagram

(3.1)

3.1 Repetitive Control based on Transfer Function Model

49

Fig. 3.2 Plug-in repetitive control system transformation

The tracking error is written as E (s) = =



1



−sT 1 + 1 + Q(s)e −sT B (s) P (s) 1−Q(s)e

(R (s) − D (s))

   1 1 −sT (R (s) − D (s)) , 1 − Q e (s) 1 + P (s) 1 − (1 − B (s) G (s)) Q (s) e−sT

(3.2) where G (s) = P (s) / (1 + P (s)) . The tracking error is presented in Fig. 3.2. Theorem 3.1 In the RC system shown in Fig. 3.1, suppose (i) P (s) and 1/ (1 + P (s)) , B (s) , Q (s) are stable; (ii) supω∈R |(1 − B ( jω) G ( jω)) Q ( jω)| < 1. If e1 ∈ L2 [0, ∞) , then e ∈ L2 [0, ∞), where e1 = L −1 D (s))) and e = L −1 (E (s)) .

(3.3)

  1 − Q (s) e−sT (R (s) −

Proof Based on condition (i), (1 − B (s) G (s)) Q (s) is stable, which is a map L2e → L2e . Moreover, e−sT is also a map L2e → L2e with supω∈R e jωT  = 1. Based on condition (ii) and the small gain theorem (see Theorem 2.1), the closedloop formed by(1 − B (s) G (s)) Q (s) and e−sT is internally stable. Moreover, 1/ (1 + P (s)) is stable. Therefore, the map from e1 to e is L2 [0, ∞) → L2 [0, ∞) and can be concluded using this proof.  According to Theorem 3.1, stability is dependent on the two main elements of the controller: B (s) and Q (s). The ideal design is B (s) G (s) = 1, Q (s) = 1. If so, then (3.2) becomes E (s) =

  1 1 − e−sT (R (s) − D (s)) . 1 + P (s)

(3.4)

50

3 Repetitive Control for Linear System

In this case, given that e1 = (r (t) − d (t)) − (r (t − T ) − d (t − T ))  r (t) − d (t) 0 ≤ t ≤ T = 0 T 0 is the sampling time. Then, the discrete-time RC is written as Ur (z) = or

Q (z) z −N B (z) E (z) 1 − Q (z) z −N

Ur (z) = Q (z) z −N (Ur (z) + B (z) E (z)) .

Consequently, the tracking error is written as E (z) =

   1 1 1 − Q (z) z −N (R (z) − D (z)) , −N 1 + P (z) 1 − (1 − B (z) G (z)) Q (z) z

where G (z) = P (z) / (1 + P (z)) .

52

3 Repetitive Control for Linear System

Theorem 3.2 In the RC system shown in Fig. 3.1, suppose (i) P (z) and 1/ (1+ P (z)) , B (z) , Q (z) are stable; (ii)        supω∈R  1 − B e jωT G e jωT Q e jωT  < 1.   If e1 ∈ L2 [0, ∞) , then e ∈ L2 [0, ∞), where e1 = Z −1 1 − Q (z) z −N (R (z) − D (z)) and e = Z −1 (E (z)) . 

Proof The proof is similar to the proof of Theorem 3.1.

Remark 3.3 (on the design of B (z) and Q (z)) The principle for choosing B (z) and Q (z) is similar to that in the continuous-time transfer functions. The comprehensive and detailed designs can be found elsewhere [4]. The ZPETC algorithm for z-transformation is given. Let N (z) , G (z) = D (z) where D (z) = a0 + a1 z + a2 z 2 + · · · + an z n N (z) = b0 + b1 z + b2 z 2 + · · · + bm z m (n ≥ m) . For a nonminimum-phase system, N (z) can be divided into two parts N (s) = N a (z) N u (z) , where a z m−k , N a (z) = b0a + b1a z + · · · + bm−k u N u (z) = b0u + b1u z + · · · + bm−k z k , k ∈ Z+ , k ≤ m.

Here, the solutions to N a (z) = 0 include all stable zeros, while the solutions to N u (z) = 0 include all unstable zeros. Using the ZPETC algorithm, B (z) can be obtained as   D (z) N u z −1 , B (z) = a N (z) (N u (1))2 where B (z) G (z) = Then,

  N u (z) N u z −1 (N u (1))2

.

  2  jωTs   jωTs   N u e jωTs  B e G e = . (N u (1))2

3.1 Repetitive Control based on Transfer Function Model

53

    The combination B e jωTs G e jωTs is a zero-phase combination with B (1) G (1) = 1, when ω = 0. Using this simple method, it is expected that B (z) G (z) ≈ 1 in the low-frequency band. Remark 3.4 (on the higher order RC) One of the major drawbacks of RC is the sensitivity of the control accuracy to the period variation of the external signals. It was shown in [6] that with a period variation as small as 1.5% for an LTI system, the gain of the internal model part of the RC drops from ∞ to 10. As a consequence, the tracking accuracy is far from satisfactory, especially for high-precision control. For this purpose, higher order RCs comprising several delay blocks in series were proposed to improve the robustness of the control accuracy against period variations [5–8]. The higher order RCs are expressed in the following form: Ur (z) =

Q (z) W (z) z −N B (z) E (z) , 1 − Q (z) W (z) z −N

(3.5)

where W (z) is the gain adjusted or the higher order RC function given by W (z) =

p 

wi z −(i−1)N

(3.6)

i=1

with

p  wi = 1. For the traditional RC, W (z) = 1. In higher order RC, the amplitude i=1

of the sensitivity transfer function is forced to be small in the range around the frequencies 2π k/T so that the tracking error will be insensitive to small period variations of the external signals. However, according to Bode’s sensitivity integral (see Sect. 2.1.2), the resulting sensitivity transfer function will be larger at frequencies far from 2π k/T , such as ω = (2k ± 1) π/T.

3.2 Repetitive Control Based on State-Space Model For state-space models, some uncertain continuous-time linear state-delayed systems are considered as types of typical LTI systems. An RC and FRC using output feedback are proposed. Although the model considered here is infinite-dimensional, it can also cover its finite-dimensional LTI counterpart as a special case. Most of the content in this section has been derived from another paper [1].

54

3 Repetitive Control for Linear System

3.2.1 Problem Formulation A class of linear state-delayed systems is given as follows: x˙ (t) = A0 x (t) + A1 x (t − τ ) + Bu (t) + d (t) y (t) = CT x (t)

(3.7)

with an initial condition x (θ ) = φ (θ ) , θ ∈ [−τ, 0] , where A0 , A1 ∈ Rn×n , B, C ∈ Rn×m ; x (t) ∈ Rn is the state of the system, τ > 0 is the time delay, y (t) ∈ Rm is the output of the system, u (t) ∈ Rm is the input, d (t) ∈ Rn denotes the unknown T -periodic disturbance, i.e., d (t) = d (t + T ), and φ (t) is a bounded vector-valued function representing the function of the initial condition. The control objective aims at designing u (t) to drive y (t) to track the desired trajectory yd (t) with period T . For system (3.7), the following assumption is imposed: Assumption 3.1 Matrix A1 and delay τ are unknown, but A1 AT1 < γ In and γ > 0 are known. Matrices A0 , B and C are known, and the pair (A0 , B) is controllable. The controller u (t) in system (3.7) is designed as u (t) = uf (t) + ub (t) , where uf and ub denote the learning-based feedforward control term and the feedback control term, respectively. The feedback control term ub (t) is used to stabilize A0 . Given that the pair (A0 , B) is controllable by Assumption 3.1, ub (t) always exists. Therefore, without losing generality, matrix A0 is assumed to be a stable matrix and ub (t) ≡ 0 for simplicity. The following section focuses on discussing the design of uf .

3.2.2 Repetitive Controller Design In Sect. 3.2.1, an RC is proposed to track the periodic reference signals for a class of linear state-delayed systems subject to uncertainties and T -periodic disturbances. To relax the stability condition on the closed-loop system, an FRC is further discussed in Sect. 3.2.2.

3.2 Repetitive Control Based on State-Space Model

3.2.2.1

55

Repetitive Controller Design

In this section, an assumption on the existence of the desired control input is first introduced. Assumption 3.2 There exists a bounded and continuous control ud (t) = ud (t + T ), which when substituted for u (t) in system (3.7) causes y (t) to track yd (t) perfectly. The reference system is x˙ d (t) = A0 xd (t) + A1 xd (t − τ ) + Bud (t) + d (t) yd (t) = CT xd (t) ,

(3.8)

where xd (t) ∈ Rn plays the role of the desired state. By subtracting (3.7) from (3.8), the error dynamical system is described as x˙˜ (t) = A0 x˜ (t) + A1 x˜ (t − τ ) + Bu˜ (t) y˜ (t) = CT x˜ (t) ,

(3.9)

where x˜  xd − x, y˜  yd − y and u˜  ud − uf . Lemma 3.1 Consider system (3.7) with Assumptions 3.1–3.2 under the following RC: uf (t) = uf (t − T ) + K0 (t) BT Px˜ (t) ⎧ ⎨ 0m×m t ∈ [−T, 0) uf (t) = 0, t ∈ [−T, 0) , K0 (t) = K1 (t) t ∈ [0, T ) , ⎩ K t ∈ [T, ∞)

(3.10)

where every element of the matrix K0 (t) ∈ Rm×m is continuous on [−T, ∞) and 0 < K = KT ∈ Rm×m . If there exist 0 < P = PT ∈ Rn×n and α > 0 such that

P PA0 + AT0 P + αIn P −γ −1 αIn

0 ⇔ (3.11). Then,

3.2 Repetitive Control Based on State-Space Model



t

0





57 t

x˜ T (s) R2 x˜ (s) ds ≤ −

˜ ds =⇒ W˙ (s, x˜ , u)

0 t

˜ =⇒ x˜ T (s) R2 x˜ (s) ds ≤ W (0, x˜ , u) 0  t 1 ˜x (s) 2 ds ≤ ˜ . W (0, x˜ , u) lim t→∞ 0 λmin (R2 ) ˜ ≤ 0, ˜x (t) ≤ λmin1(P) W Therefore, x˜ ∈ L2 [0, ∞). Moreover, given that W˙ (t, x˜ , u) ˜ for ∀t ∈ [0, ∞) . Then, x˜ ∈ L∞ [0, ∞) . Consequently, x˜ ∈ L2 [0, ∞) ∩ (0, x˜ , u) L∞ [0, ∞) . Furthermore, the tracking error y˜ ∈ L2 [0, ∞) ∩ L∞ [0, ∞). This concludes the proof.  Knowledge of xd (t) usually implies that the reference model is considered a priori. However, xd (t) and/or x (t) may not be accessible. In this case, a controller that only requires the output error is employed. Theorem 3.3 Consider system (3.7) with Assumptions 3.1, 3.2 under the following RC: uf (t) = uf (t − T ) + K0 (t) y˜ (t) uf (t) = 0, t ∈ [−T, 0) .

(3.17)

If there exist 0 < P = PT ∈ Rn×n and 0 < α ∈ R that satisfy the inequality (3.11) and BT P = CT , then the tracking error y˜ ∈ L2 [0, ∞) ∩ L∞ [0, ∞) . 

Proof Using Lemma 3.1 makes the proof trivial. Remark 3.5 (on Assumption 3.2) Taking the Laplace transform of (3.8) yields  −1 yd (s) = G (s) ud (s) + CT sIn − A0 − A1 e−τ s d (s) ,

 −1 B ∈ Rm×m . If G (s) is invertible, then where G (s) = CT sI − A0 − A1 e−τ s ud (s) can be written as  −1 ud (s) = G−1 (s) yd (s) − G−1 (s) CT sI − A0 − A1 e−τ s d (s) .

(3.18)

  For example, suppose G (s) = 1−e−τ1 s +s and d (s) ≡ 0; thus ud (s) = 1 − e−τ s + s yd (s) implies ud (t) = yd (t) −yd (t − τ ) +˙yd (t) . Based on this example, there exist some desired trajectories, such as triangular and square waves, which cannot be tracked perfectly by the system (3.8), regardless of the values of A0 , A1 , B,and C because y˙ d (t) does not exist. Therefore, an appropriate differentiability condition of the desired trajectory yd (t) and the disturbance d (t) is needed. For the given example, the appropriate differentiability condition indicates that yd (t) is a firstorder differentiable. For any periodic, bounded, and continuous trajectory and disturbance that satisfy the appropriate differentiability condition, if G (s) is invertible

58

3 Repetitive Control for Linear System

 −1 and G−1 (s), G−1 (s) CT sI − A0 − A1 e−τ s are stable transfer matrices, then ud (t) = L −1 (ud (s)) in (3.18) is periodic, bounded, continuous, and unique [9] (given that our focus is restricted to functions that are continuous on [0, ∞)). Remark 3.6 (on K0 (t) in (3.10)) As discussed in [10], K0 (t) is introduced to f when t ∈ [−T, ∞) . Assume that K0 (t) ≡ K. Then, ensure  that u (t) is continuous   uf 0− = 0 and uf 0+ = KBT Px˜ (0) . Given that the initial state error x˜ (0) is often not equal to zero, uf (t) is discontinuous at t = 0 and will cause the actuator to switch and excite the high-frequency   modes of the system. On the other hand, if K0 (t) is defined as in (3.10), uf 0+ = 0 holds. It implies that uf (t) is continuous when t ∈ [−T, T ) . It is easy to show that uf (t) is continuous when t ∈ [−T, ∞) by induction. Remark 3.7 (on the condition BT P = CT in Theorem 3.3) A necessary condition for Theorem 3.3 to hold is that the following system x˙ (t) = A0 x (t) + Bu (t) y (t) = CT x (t) is strictly positive real so that PA0 + AT0 P < 0 and BT P = CT . Moreover, CT B = 0m×m . Otherwise, BT PB = CT B = 0 which contradicts P, being a positive-definite matrix.

3.2.2.2

Filtered Repetitive Controller Design

As shown in Sect. 3.2.2, perfect tracking can be achieved using the proposed RC. However, the condition of Theorem 3.3 is difficult to satisfy because of the restriction BT P = CT . Therefore, an FRC is further developed. Given that the transfer function of a time-delay system is not rational, the related theory in [11] cannot be applied directly. Therefore, the first task is to extend the theory to systems represented by irrational transfer functions.  −1 ∈ Cm×n . The minimal realizaIn Fig. 3.3, Gv (s) = CT sIn − A0 − A1 e−sτ tion of the low-pass filter Q(s)Im is assumed as follows: x˙ q (t) = Aq xq (t) + Bq uq (t) yq (t) = CqT xq (t) , where Aq ∈ Rmnq ×mnq , Bq , Cq ∈ Rmnq ×m , yq (t) , uq (t) ∈ Rm , and xq ∈ Rmnq is the state variable. The structure of the FRC in Fig. 3.3 can be expressed as follows: x˙ q (t) = Aq xq (t) + Bq CqT xq (t − T ) − Bq KCT x (t) + Bq Kyd (t) uf (t) = CqT xq (t − T ) − KCT x (t) + Kyd (t) . Combining systems (3.7) and (3.19) results in

(3.19)

3.2 Repetitive Control Based on State-Space Model

59

Fig. 3.3 Filtered repetitive control system

z˙ (t) = D1 z (t) + D2 z (t − τ ) + D3 z (t − T ) + B D ξ (t) y˜ (t) = CTD z (t) + D D ξ (t) ,

(3.20)

where



yd A0 − BKCT 0 A1 0 x , D1 = , ,ξ = , D2 = z= d 0 0 xq −Bq KCT Aq



  0 BCq BK In −C D3 = , BD = , CD = , D D = Im 0 . 0 Bq CqT Bq K 0 0

Let Ger (s) ∈ Cm×(m+n) denote the transfer function matrix from ξ (t) to y˜ (t) , represented by  −1 BD + DD . Ger (s) = CTD sIn+nq − D1 − D2 e−sτ − D3 e−sT The exponential stability of the transfer function given above is equivalent to that of the following differential equation: z˙ (t) = D1 z (t) + D2 z (t − τ ) + D3 z (t − T ) .

(3.21)

Before proceeding further with the discussion, the following theorem is needed. A sequence { Q i (s)| i = 1, 2, · · · } is introduced to represent a family of filters and Q¯ (s) = lim Q i (s) . i (t) represents the fundamental solution of Eq. (3.21) with i→∞

different filters Q(s) = Q i (s). The theorem on FRC for plants represented by rational transfer functions (Theorem 2 in [11]) is extended as follows.   Theorem 3.4 Suppose (i) for an arbitrary but fixed bounded interval −ω f , ω f ,   Q¯ (s) = 1 holds on −ω f , ω f and reference signal ξ (t) with period T contains only frequencies lower than ω f ; (ii) i (t) ≤ K e−αt independent of i, where K , α > 0; (iii) Gv (s) is stable. Then, the tracking error y˜ i (t) in the modified RC system (3.20)

60

3 Repetitive Control for Linear System

with Q = Q i satisfies lim lim sup ˜yi [kT,(k+1)T ] = 0.

i→∞

(3.22)

k→∞

i Proof In Fig. 3.3, Ger (s) denotes the transfer function matrix from ξ (s) to y˜ i (s). Here, y˜ i (s) is the output with Q i (s) and can be written as follows:

y˜ i (s) = y˜ i,1 (s) + y˜ i,2 (s), i i where y˜ i,1 (s) = Ger,1 (s) yd (s) and y˜ i,2 (s) = Ger,2 (s) d (s) . Given that Gv (s) is stai −αt holds, independent of i, Ger,1 ble and i (t) ≤ K e (s) is exponentially stable and independent of i. Consequently, i i Ger,2 (s) = −Ger,1 (s) Gv (s)

is also exponentially stable and independent of i. For simplicity, only consider i Ger,1 (s) , which is written as follows: −1  −1  i G (s) Ger,1 (s) = Im + K Im − Q i (s) e−T s Im   −1 = 1 − Q i (s) e−T s Im − Q i (s) e−T s Im + KG (s) . i Every element of Ger,1 (s) possesses zeros represented by {s ∈ C |1 − Q i (s)  i e−T s = 0 . G ip1 , p2 (s) is the element in the p1 th row and p2 th column of Ger,1 (s) , i 1 ≤ p1 , p2 ≤ m. Then, G p1 , p2 (s) is also exponentially stable, independent of i, and    possesses zeros represented by s ∈ C 1 − Q i (s) e−T s = 0 . Let r (t) be any T any periodic reference with frequencies lower than ω f , which is used to represent  element of reference signal ξ (t) . Suppose y˜ ip1 , p2 (t) = L −1 G ip1 , p2 (s) ∗ r (t) with Q = Q i . If the following equation holds

  lim lim sup  y˜ ip1 , p2 [kT,(k+1)T ] = 0

i→∞

(3.23)

k→∞

then the conclusion (3.22) is validated because every element of y˜ i (t) is a linear combination of y˜ ip1 , p2 (t) , 1 ≤ p1 , p2 ≤ m. Therefore, in the remainder of the proof, only the result (3.23) needs to be considered. k/T. Given Let N be the largest integer such that |ω N | < ω f , where   ωk = 2π i = −βki of any 0 < ε < 1, there exist zeros αki ± jβki , k = 0, 1, · · · N β0 = 0, β−k G ip1 , p2 (s) such that   i α + jβ i − jωk  < ε, k = 0, ±1, · · · ± N k k for all sufficiently large i. This implies that

(3.24)

3.2 Repetitive Control Based on State-Space Model

61

 i 2  i 2 αk + βk − ωk < ε2 , k = 0, ±1, · · · ± N .     Given that G ip1 , p2 α0i = 0 and G ip1 , p2 αki + jβki = 0, G ip1 , p2 (s) can be written as   G ip1 , p2 (s) = G˜ ip1 , p2, 0 (s) s − α0i  2  2  , k = 1, · · · , N . G ip1 , p2 (s) = G˜ ip1 , p2, k (s) s − αki + βki Because r (t) contains only frequencies lower than ω f , it can be written as r (t) =

N 

(ak sin (ωk t) + bk cos (ωk t)) .

k=0

Then, y˜ ip1 , p2 (t) in (3.23) can be written as follows: y˜ ip1 , p2 (t) =

N     ak L −1 G ip1 , p2 (s) ωk / s 2 + ωk2 k=0

+

N     bk L −1 G ip1 , p2 (s) s/ s 2 + ωk2 k=0

=

N    2  2   2  / s + ωk2 ak L −1 G˜ ip1 , p2, k (s) ωk s − αki + βki k=0

+

N    2  2   2  / s + ωk2 . (3.25) bk L −1 s G˜ ip1 , p2, k (s) s − αki + βki k=0

Without loss in generality, consider r (t) = sin (ωk t) . In this case, y˜ ip1 , p2 (t) is represented as follows:   2  2   2  / s + ωk2 y˜ ip1 , p2 (t) = L −1 G˜ ip1 , p2, k (s) ωk s − αki + βki   = ωk L −1 G˜ ip1 , p2, k (s)   2  2    + L −1 G˜ ip1 , p2, k (s) ωk αki + βki − ωk2 − 2αki s / s 2 + ωk2 . As a result,     y˜ ip1 , p2 (t) = ωk L −1 G˜ ip1 , p2 ,k (s) ∗ δ (t) + L −1 G˜ ip1 , p2 ,k (s) ∗ u˜ (t) ,

(3.26)

    2 2 where δ (t) denotes the Dirac delta function, u˜ (t) = αki + βki − ωk2 sin (ωk t) − 2αki ωk cos (ωk t). Given that G˜ ip , p (s) is exponentially stable and independent of 1

2

62

3 Repetitive Control for Linear System

i, G˜ ip1 , p2, k (s) is also exponentially stable and independent of i. Then, Eq. (3.26) is further written as follows [11]:  i  y˜

p1 , p2

 ¯ + C1 sup |u˜ (t)| (t) ≤ C0 e−αt t∈[0,∞)

≤ C0 e

−αt ¯

+ C1 C2 ε.

(3.27)

¯ C0 , C1 , C2 > 0 are independent of i. It is easy to Here, sup |u˜ (t)| ≤ C2 ε, and α, t∈[0,∞)

verify that y˜ ip1 , p2 (t) in Eq. (3.25) also has a form similar to inequality (3.27). Using inequalities (3.24), ε can be made arbitrarily small for a sufficiently large i. It follows that the conclusion (3.23) holds and this concludes the proof.  The filter Q(s) should be appropriately selected for a trade-off between good tracking performance and stability margin. When the bandwidth of Q(s) increases, the tracking performance also improves, but the stability margin decreases, and vice versa [12]. In particular, if Ger (s) is stable with Q(s) = 1 and Gv (s) is stable, then precise tracking can be achieved, which is discussed in Theorem 3.3. To achieve better performance, Q(s) can be selected as a zero-phase low-pass filter. Based on the conclusion of Theorem 3.4, only the internal stability of the system needs to be considered, as shown in Fig. 3.3, i.e., the stability of Eqs. (3.21) and (3.28). Consider (3.28) x˙ (t) = A0 x (t) + A1 x (t − τ ) . The stability of Eq. (3.28) implies the stability of Gv (s). Furthermore, the stability of (3.28) can be determined by inequality (3.11). For (3.21) and (3.28), note that uniform asymptotic stability is equivalent to exponential stability. Till now, the tracking problem was converted from uniform asymptotic stability of a neutral time-delay system1 to uniform asymptotic stability of the more familiar retarded time-delay systems (3.21) and (3.28). In general, the latter is easier to handle. The following theorem gives a sufficient condition for uniform asymptotic stability of system (3.21) using the frequency-domain approach. Theorem 3.5 Under Assumption 3.1, for (3.21), suppose (i) D1 has no eigenvalues in the closed left half-plane; (ii) λmax (M1 ( jω)) < 0.5, ∀ω ∈ R. Then, z (t) = 0 is uniformly asymptotically stable where −1   −1  . D3 DT3 + γ In+nq − jωIn+nq − DT1 M1 ( jω) = jωIn+nq − D1 Proof Let z1 (t) = z (t − T ) , z2 (t) = z (t − τ ) , z3 (t) = z (t). Then, (3.21) can be rewritten as ⎞ ⎛ ⎞ ⎛ ⎞⎛ 0 0 In+nq z1 (t + T ) z1 (t) ⎝ z2 (t + τ ) ⎠ = ⎝ 0 0 In+nq ⎠ ⎝ z2 (t) ⎠ . D3 D2 D1 z˙ 3 (t) z3 (t) 1A

pure RC system is a neutral time-delay system in a critical case. Please refer to Chapter 4 for detail.

3.2 Repetitive Control Based on State-Space Model

63

Using [13, Lemma 4], z (t) = 0 is uniformly asymptotically stable in (3.21) if and only if D1 has no eigenvalues  in the closed left half-plane, namely, condition (i), and det I2(n+nq ) − zM2 ( jω) = 0 holds in { z ∈ C| |z| ≤ 1} for ω ∈ R, where  −1 − D1 D sI −1 3 M2 (s) =  n+nq sIn+nq − D1 D3

 −1  sIn+nq − D1 D  −1 2 . sIn+nq − D1 D2

  The condition σmax (M2 ( jω)) < 1 implies that det I2(n+nq ) − zM2 ( jω) = 0 in { z ∈ C| |z| ≤ 1} for ω ∈ R. Therefore, the remainder of the proof focuses on σmax (M2 ( jω)) < 1. The matrix M2 ( jω) M2T (− jω) can be written as  M2 ( jω) M2T (− jω) =

 M3 ( jω) M3 ( jω) , M3 ( jω) M3 ( jω)

−1   −1  D3 DT3 + D2 DT2 − jωIn+nq − DT1 . Using where M3 ( jω) = jωIn+nq − D1 Assumption 3.1,

A1 AT1 0 ≤ γ In+nq . D2 DT2 = 0 0 Then,

 M2 ( jω) M2T (− jω) ≤

M1 ( jω) M1 ( jω) M1 ( jω) M1 ( jω)

 (3.29)

From the definition, M1 ( jω) = M1 (− jω) . Therefore, M1 ( jω) is a real matrix. Given that     M1 ( jω) M1 ( jω) −1 2M1 ( jω) M1 ( jω) =N N M1 ( jω) M1 ( jω) 0 0   0 In+nq N= −In+nq In+nq then,    λmax M2 ( jω) M2T (− jω)  ≤ 2λmax (M1 ( jω))

σmax (M2 ( jω)) =

using inequality (3.29). Therefore, if condition (ii) holds, then σmax (M2 ( jω)) < 1, which concludes the proof. 

64

3 Repetitive Control for Linear System

Although the stability of differential equation (3.21) can be determined by its characteristic function and the criterion may be less conservative, the stability criterion will become increasingly difficult to verify as the system dimensions increase. In the following section, the delay-independent criterion of differential equation (3.21) is derived in terms of linear matrix inequalities (LMIs). This makes the criterion quite feasible to define using a computer. Theorem 3.6 Under Assumption 3.1, for (3.21), if there exist 0 < Qi = QiT ∈ R(n+nq )×(n+nq ) , i = 1, 2, α > 0 and a matrix K ∈ Rm×m such that ⎤ ⎡ Q1 D1 + DT1 Q1 + αIn+nq + Q2 Q1 Q1 D3 ⎣ Q1 −γ −1 αIn+nq 0 ⎦ < 0 (3.30) 0 −Q2 DT3 Q1 then z (t) = 0 is uniformly asymptotically stable. Proof Select the Lyapunov functional to be  V2 (t,z) = zT (t) Q1 z (t) + α

t

 zT (s) z (s) ds +

t−τ

t

zT (s) Q2 z (s) ds. t−T

Taking the derivative of V2 (t, z) along with the solution of (3.21) yields V˙2 (t, z) = zT (t) R3 z (t) − H1T H1 − H2T H2 ≤ zT (t) R3 z (t) ,

(3.31)

where T R3 = DT1 Q1 + Q1 D1 + αIn+nq + Q2 + α −1 Q1 D2 DT2 Q1 + Q1 D3 Q−1 2 D3 Q1 √ −1 T √ H1 = α D2 Q1 z (t) − αz (t − τ )  1 −1 1 H2 = Q22 DT3 Q1 z (t) − Q22 z (t − T ) .

Assumption 3.1 implies that D2 DT2 ≤ γ In+nq . Thus, (3.31) becomes V˙2 (z, t) ≤ zT (t) R4 z (t) , T where R4 = D1T Q1 + Q1 D1 + αI + Q2 + γ α −1 Q1 Q1 + Q1 D3 Q−1 2 D3 Q1 . Using the Schur complement, the following inequalities are made equivalent to each other:

R4 < 0 ⇔ (3.30). If condition (3.30) holds, then z (t) = 0 is uniformly asymptotically stable in (3.21). 

3.2 Repetitive Control Based on State-Space Model

65

The stability criteria in Theorems 3.5, 3.6 are delay-independent. Furthermore, delay-dependent stability criteria [14, 15] can also be developed for (3.21) when a bound on the time delay τ is known. Readers can refer to other literature for less conservative stability criteria on (3.21).

3.2.3 Numerical Simulation 3.2.3.1

Model

Consider system (3.7) with parameter matrices as follows:





−1 1 0.1 0.2 1 1 ,B = ,C = , A1 = −3 −3 0.15 0.1 1 1



−1 1 0.1 0.2 0 1 ,B = , Case 2: A0 = ,C = , A1 = −3 −3 0.15 0.1 1 1 Case 1: A0 =

where A0 , B, C are known a priori, whereas A1 is assumed unknown except A1 AT1 < 0.3I2 ; τ = 5 s is the unknown time delay. The unknown periodic disturbance is d (t) = [sin (t) sin2 (t)]T , and the desired trajectories for Case 1 and Case 2 are yd,1 (t) = sin (t) and yd,2 (t), respectively, where yd,2 (t) is a triangular waveform with the first period in the following form:  yd,2 (t) =

2 t T

2−

0 ≤ t < T2 . ≤t 0 is the cutoff frequency of Q(s). Therefore, the controller (3.19) is written as follows: x˙ q (t) = −ωc xq (t) + ωc xq (t − T ) + k y˜ (t) u f (t) = ωc xq (t − T ) + k y˜ (t) .

(3.34)

Given that the stability of (3.28) is ensured in Case 1, only the stability of (3.21) needs to be considered. In Case 2, when ωc = 60rad/s, the positive-definite solution to inequality (3.30) is solved using the LMI control toolbox as follows:

3.2 Repetitive Control Based on State-Space Model

67

Fig. 3.5 Change of maximum absolute value of error at the ith period



⎤ 0.0413 0.0033 0.0012 Q1 = ⎣ 0.0033 0.0043 −0.0024 ⎦ , k = 3 0.0012 −0.0024 0.0864 ⎡ ⎤ 0.0464 −0.0038 0.0175 Q2 = ⎣ −0.0038 0.0064 −0.0152 ⎦ , α = 0.0192. 0.0175 −0.0152 4.7121 In Case 2, when ωc = 60rad/s is selected, the tracking performance of the controller (3.34) with different desired trajectories yd,1 (t), yd,2 (t) is shown in Fig. 3.4. Given that the desired trajectories yd,1 (t) and yd,2 (t) have the same amplitude and are independent of C1 in inequality (3.27), the tracking performance is determined by ε in inequality (3.27). In this case, given that the triangular waveform yd,2 (t) has more high-frequency components than sinusoid yd,1 (t), the value ε is smaller for yd,1 (t). Therefore, the tracking performance under the desired trajectory yd,1 (t) is better than that under the desired trajectory yd,2 (t) . This conclusion coincides with the observation in Fig. 3.5. Figure 3.6 depicts the tracking performance of the controller (3.34) with different cutoff frequencies ωc = 10, 50, 90 rad/s in Case 2, when using the desired trajectory yd,1 (t). Remark 3.8 Figure 3.6 shows that the best tracking performance is achieved when ωc = 90 rad/s. It is noteworthy that the difference between the tracking performance under ωc = 50 rad/s and ωc = 90 rad/s is not substantial. In this simulation, the stability margin decreases as ωc in the filter (3.33) increases, which leads to a decrease in ε and increase in C1 C2 in (3.27). Therefore, C1 C2 ε in (3.27) may change slightly when ωc is large enough. In practice, it is difficult to determine the variation of

68

3 Repetitive Control for Linear System

Fig. 3.6 Change of maximum absolute value of error at the ith period

C1 C2 ε and the tracking performance with different ωc . Nevertheless, the parameter ωc can be adjusted by observing the tracking performance in practice. In terms of tracking performance, the RC presented in Theorem 3.3 outperforms the FRC (3.19). However, FRC has fewer restrictions and is applicable to more general systems.

References 1. Quan, Q., Yang, D., Cai, K.-Y., & Jun, J. (2009). Repetitive control by output error for a class of uncertain time-delay systems. IET Control Theory and Applications, 3(9), 1283–1292. 2. Park, H.-S., Chang, P. H., & Lee, D. Y. (1999). Continuous zero phase error tracking controller with gain error compensation. In Proceedings of the American Control Conference (pp. 3554– 3558). San Diego, California. 3. Yao, W.-S., & Tsai, M.-C. (2005). Analysis and estimation of tracking errors of plug-in type repetitive control systems. IEEE Transactions on Automatic Control, 50(8), 1190–1195. 4. Longman, R. W. (2010). On the theory and design of linear repetitive control systems. European Journal of Control, 16(5), 447–496. 5. Chang, W. S., Suh, I. H., & Kim, T. W. (1995). Analysis and design of two types of digital repetitive control systems. Automatica, 31(5), 741–746. 6. Steinbuch, M. (2002). Repetitive control for systems with uncertain period-time. Automatica, 38(12), 2103–2109. 7. Steinbuch, M., Weiland, S., & Singh, T. (2007). Design of noise and period-time robust high order repetitive control with application to optical storage. Automatica, 43(12), 2086–2095. 8. Pipeleers, P., Demeulenaere, B., & Sewers, S. (2008). Robust high order repetitive control: Optimal performance trade offs. Automatica, 44(10), 2628–2634. 9. Schiff, J. L. (1999). The laplace transform: Theory and applications. Berlin: Springer. 10. Xu, J.-X., & Yan, R. (2006). On repetitive learning control for periodic tracking tasks. IEEE Transactions on Automatic Control, 51(11), 1842–1848.

References

69

11. Hara, S., Yamamoto, Y., Omata, T., & Nakano, M. (1988). Repetitive control system: A new type servo system for periodic exogenous signals. IEEE Transactions on Automatic Control, 33(7), 659–668. 12. Doh, T. Y., & Chung, M. J. (2003). Repetitive controller design for linear systems with timevarying uncertainties. IEE Proceedings-Control Theory and Applications, 150(4), 427–432. 13. Agathoklis, P., & Foda, S. (1989). Stability and the matrix Lyapunov equation for delay differential systems. International Journal of Control, 49(2), 417–432. 14. Kolmanovskii, V. B., Niculescu, S. I., & Richard, J. P. (1999). On the Liapunov-Krasovskii functionals for stability analysis of linear delay systems. International Journal of Control, 72(4), 374–384. 15. Li, T., Guo, L., & Lin, C. (2007). A new criterion of delay-dependent stability for uncertain time-delay systems. IET Control Theory and Applications, 1(3), 611–616.

Chapter 4

Robustness Analysis of Repetitive Control Systems

The main problem in control theory is controlling the output of a system to achieve the asymptotic tracking of desired signals and/or asymptotic rejection of disturbances. Among the existing approaches to asymptotic tracking and rejection, tracking via an internal model, which can handle the exogenous signal1 from a fixed family of functions of time, is one of the important approaches [1]. The basic concept of tracking via the internal model originated from the internal model principle (IMP) [2, 3]. The IMP states that if any exogenous signal can be regarded as the output of an autonomous system, then the inclusion of this signal model, i.e., the internal model, in a stable closed-loop system can ensure asymptotic tracking and asymptotic rejection of the signal. Given that the exogenous signals under consideration are often nonvanishing, the characteristic roots of these autonomous systems that generate these exogenous signals are neutrally stable. To produce asymptotic tracking and asymptotic rejection, if a given signal has a certain number of harmonics, then a corresponding number of neutrally stable internal models (one for each harmonic) should be incorporated into the closed-loop based on the IMP. Repetitive control (RC, or repetitive controller, also abbreviated as RC) is a specialized tracking method via an internal model for the asymptotic tracking and rejection of general T -periodic signals [4]. The works of [4–6] analyzed the robustness of RC systems in the frequency domain. They reported that the RC system, which is an internal model system2 with infinite neutrally stable internal models (one for each harmonic), lacks robustness. With this in mind, the following question arises intuitively:

1 The

term “exogenous signal” is used to refer to both the desired signal and the disturbance when there is no need to distinguish them. 2 A system incorporating internal models is called an internal model system for simplicity. © Springer Nature Singapore Pte Ltd. 2020 Q. Quan and K.-Y. Cai, Filtered Repetitive Control with Nonlinear Systems, https://doi.org/10.1007/978-981-15-1454-8_4

71

72

4 Robustness Analysis of Repetitive Control Systems

How robust is an internal model system when the number of neutrally stable internal models (one for each harmonic) increases? To answer this question, the stability margin, robustness analysis of internal model systems, and robustness analysis of RC systems must be considered.

4.1 Measurement of Stability Margin The tolerance of a modeling error in the broad term is called the stability margin, a classical gain and phase margin of the two familiar measures of stability margin for single feedback loops [7]. Meanwhile, systems that can tolerate plant variability and uncertainty are called robust. Therefore, the stability margin problem is a robustness problem. To measure the stability margin of a general closed-loop system, the convergence rate of solutions is proposed. Then, the proposed measure is proven to be reasonable. Consider a general system subject to an exogenous signal r (t) as follows: x˙ = f (t, x) + r (t) .

(4.1)

Let x = 0 be an equilibrium point of the system x˙ = f (t, x) ,

(4.2)

where f : [0, ∞) × Rn → Rn is continuously differentiable, and the Jacobian matrix satisfies ∂f /∂x  ≤ l, ∀t ∈ [0, ∞). Assume that there exist k, η > 0 such that the solutions of the system (4.2) satisfy x (t) ≤ k x (0) e−ηt , where η is convergence rate. If system (4.2) is exponentially stable, then any bounded exogenous signal r (t) will lead system (4.1) to uniform ultimate boundedness. In this way, the stability margin problem of system (4.1) is reduced to that of system (4.2). Consider system (4.2) subject to the uncertainty f (t, x) as follows: x˙ = f (t, x) + f (t, x) ,

(4.3)

where f (t, x) ≤ b f x, b f > 0 is a measure of the uncertainty. In the following sections, the larger convergence rate η of the solutions of system (4.2) implies that the system can tolerate the uncertainty f (t, x) with a larger b f , i.e., system (4.2) is still exponentially stable in the presence of an uncertainty. This indicates that the stability margin increases as the convergence rate increases. Using the converse Lyapunov theorem in [8, pp. 162–165], there exists a function V : [0, ∞) × Rn → R that satisfies the inequality

4.1 Measurement of Stability Margin

73

c1 x2 ≤ V (t, x) ≤ c2 x2 2 ∂V + ∂∂xV f (t, x) ∂t  ≤ −c3 x ,  ∂ V  ≤ c4 x ∂x where   1  k2  1 − e−2lδ , c2 = 1 − e−2ηδ , 2l 2η  η−l 2k  2 −ηδ 1 − e− 2η δ . c3 = 1 − k e , c4 = η−l c1 =

Taking the derivative of V along the solutions of (4.3) yields   ∂V ∂V ∂V + f (t, x) + f (t, x) ≤ −c3 + c4 b f x2 . ∂t ∂x ∂x If b f < c3 /c4 , then (4.3) is still exponentially stable. A larger c3 /c4 . implies that system (4.2) can tolerate the uncertainty f (t, x) with a larger b f . Suppose g (η) denotes the function of η: g (η) =

If

c3 = c4

1 − k 2 e−ηδ  . − η−l 2k 2η δ 1 − e η−l

∂g (η) > 0, ∀η > 0 ∂η

then the stability margin increases as η increases. Therefore, it is reasonable to use the convergence rate of the solutions of a closed-loop system to measure its robustness. Proposition 4.1 For k, l > 0, δ > 0 always exists such that ∂g (η) /∂η. > 0, ∀η > 0. Proof The effect of changes in η on g (η) can be evaluated as follows:     η−l ∂g (η) = α 1 − k 2 e−ηδ + (η − l) δk 2 e−ηδ 1 − e− 2η δ ∂η   lδ − η−l δ − α (η − l) 1 − k 2 e−ηδ e 2η , 2η2 2  η−l where α = 1/2k 1 − e− 2η δ > 0. Rearranging the above equation yields

74

4 Robustness Analysis of Repetitive Control Systems

   η−l ∂g (η) = α 1 − k 2 e−ηδ 1 − e− 2η δ ∂η   η−l + α (η − l) δk 2 e−ηδ 1 − e− 2η δ  lδ − η−l δ  e 2η − α (η − l) 1 − k 2 e−ηδ 2η2   η−l = α (η − l) δk 2 e−ηδ 1 − e− 2λ δ 

 Term 1   η−l lδ + α 1 − k 2 e−ηδ 1 − 1 + (η − l) 2 e− 2η δ . 2η  



 Term 2 Term 3

Then, the signs of the three terms on the right-hand side of the above equation are checked. (i) Term 1. For any δ > 0,   η−l (η − l) δk 2 e−ηδ 1 − e− 2η δ > 0. (ii) Term 2. If δ > − η1 ln

1 , k2

then 1 − k 2 e−ηδ > 0. η−l

(iii) Term 3. If η − l ≥ 0 and δ are sufficiently large, then 1 + (η − l) 2ηlδ2 ≤ e 2η δ ; if η−l

η − l < 0 and δ are sufficiently large, then 1 + (η − l) 2ηlδ2 < 0 < e 2η δ . Therefore, if δ is sufficiently large, then η−l lδ 1 − 1 + (η − l) 2 e− 2η δ ≥ 0. 2η Based on the above analysis, it can be concluded that, for k, l > 0, δ > 0 always exists such that ∂g (η) /∂η > 0 for all η > 0.  Remark 4.1 Consider a simple linear system x˙ = Ax. If A is a stable matrix without Jordan blocks, then k > 0 exists such that x (t) ≤ k x (0) e−ηt , where η = − maxRe(λ (A)) is the convergence rate. The convergence rate η is also the closest distance between the eigenvalues and the imaginary axis. Figure 4.1 shows that if matrix A is marginally stable, then η = 0; and if matrix A is stable, then η > 0. The latter is more stable than the former, where the larger convergence rate implies better robustness. This observation is consistent with the conclusion of Proposition 4.1.

4.2 Robustness Analysis of Internal Model System

75

Fig. 4.1 Convergence rate and its stability

4.2 Robustness Analysis of Internal Model System The robustness of an internal model system is analyzed using the convergence rate. Figures 1.1, 1.2 and 1.3 in Chap. 1 show that a general internal model system is depicted as shown in Fig. 4.2. In Fig. 4.2, the state differential equation of the internal model, yi (s) = Gi (s) ui (s) , is described as x˙ i = Ai xi + Bi ui yi = CTi xi ,

(4.4)

where xi ∈ Rn i , yi ∈ Rm i , ui ∈ Rm p , Ai ∈ Rn i ×n i , and Bi , Ci ∈ Rn i ×m p . The state differential equation of the plant, namely, yp (s) = Gp (s) up (s) , is described as x˙ p = Ap xp + Bp up yp = CTp xp ,

Fig. 4.2 General internal model system

(4.5)

76

4 Robustness Analysis of Repetitive Control Systems

where xp ∈ Rn p , yp ∈ Rm p , up ∈ Rm i , Ap ∈ Rn p ×n p , and Bp , Cp ∈ Rm i ×n p . The closed-loop system formed by (4.3) and (4.4) is given as x˙ = Ax,

where A=

Ai Bp CTi Bi CTp Ap



(4.6)

,x =

xi xp

.

(4.7)

Then, the robustness of the internal model system is analyzed as the number of neutrally stable internal models increases. During the increase in the neutrally stable internal models, matrix A is always assumed to be stable; otherwise, it is meaningless to discuss robustness. Suppose λ1 , λ2 , . . . , λn p +n i are the eigenvalues of matrix A and λ¯ =

n p +n i

1 λk . n p + n i k=1

Given that matrix A is real, λ¯ is real as well λ¯ = =

n p +n i

1 Re (λk ) n p + n i k=1

1 tr (A) . np + ni

From the above equation, it is easy to verify that λ¯ ≤

max

k=1,...,n p +n i

Re (λk ) ,

  where “=” implies that Re(λ1 ) =Re(λ2 ) = · · · =Re λn p +n i . For a general system, − max Re(λk ) represents the convergence rate. Therefore, if λ¯ → 0, then k=1,...,n p +n i

maxRe(λk ) will approach zero, i.e., the stability margin decreases as λ¯ approaches zero. Before analyzing max Re(λk ) as n i increases formally, the assumption below k=1,...,n p +n i

is required. Assumption 4.1 The sum of the diagonal elements of Ai is nonnegative. Proposition 4.2 Under Assumption 4.1, for any given Bi , Ci , Ap , Bp and Cp , if A is always stable, then max Re (λk ) = 0. lim n i →∞k=1,...,n p +n i

4.2 Robustness Analysis of Internal Model System

77

Proof Given that λ¯ =

1 tr (A) , np + ni

recalling the form of A in (4.7), one has λ¯ =

  1 1 tr (Ai ) + tr Ap . np + ni np + ni

Given that the sum of the diagonal elements of Ai is nonnegative under Assumption 4.1,   1 tr Ap ≤ λ¯ (n i ) ≤ max Re (λk ) . k=1,...,n p +n i np + ni Given that A is always stable,

max

k=1,...,n p +n i

so

lim

n i →∞ n p

Consequently, lim

max

n i →∞k=1,...,n p +n i

  Re(λk ) < 0. Note that tr Ap is a constant,

  1 tr Ap = 0. + ni

Re(λk ) = 0.



Based on the IMP, if the exogenous signals are composed of more frequency components, the internal model-based controller will contain a corresponding number of neutrally stable internal models to achieve the asymptotic tracking and rejection of the exogenous signals. On the other hand, using Proposition 4.2, the stability margin decreases as the number of neutrally stable internal models increases. Therefore, the tracking accuracy of an internal model system contradicts its robustness. This conclusion suggests that to improve the robustness of the system, the internal model-based controller should contain few neutrally stable internal models or major internal models rather than both. Given that the major frequency band is dominant in the desired signals, practical demands satisfied. For example, although   are virtually both internal models 1/s and 1/ 1 − e−sT can achieve asymptotic tracking and asymptotic rejection of the unit step signal, the internal model 1s should be chosen to increase the robustness of the closed-loop system. Remark 4.2 (on Assumption 4.1) If G i (s) = then,

1   s s 2 + ω2



⎞ 0 0 0 Ai = ⎝ 0 0 ω ⎠ . 0 −ω 0

The sum of the diagonal elements of Ai is zero. If

78

4 Robustness Analysis of Repetitive Control Systems

 N   2  2 s + ωk G i (s) = 1/ s 

k=1

then

⎞ 0 ⎟ ⎜ E1 ⎟ ⎜ ⎟ ⎜ E2 Ai = ⎜ ⎟, ⎟ ⎜ .. .. ⎠ ⎝ . . EN ⎛

where in Ai denotes a term not used in the development and Ek =

0 ωk −ωk 0

, k = 1, . . . , N .

In this case, the sum of the diagonal elements of Ai is zero. Therefore, Assumption 4.1 is easily satisfied.

4.3 Robustness Analysis of Repetitive Control System 4.3.1 Stability Margin of Repetitive Control System Many engineering signals, such as robotics and servo mechanisms, are periodic or can at least be well approximated by a periodic signal over a large time interval either in the form of desired signals or disturbances. Based on the IMP, if a periodic exogenous signal has a finite number of harmonics, then a finite number of internal models (one for each harmonic) can be used to produce asymptotic tracking and/or asymptotic rejection. Similarly, given that a general T -periodic signal has an infinite number of harmonics, an infinite number of internal models are required for exact tracking. A general T -periodic signal can be regarded as the output of an autonomous system as 1 y (s) = 1 − e−sT with an appropriate initial value. Then, the internal model-based control with an infinite-dimensional internal model 1−e1−sT , i.e., an RC, can achieve asymptotic tracking and/or asymptotic rejection for a general T -periodic signal. A number of authors have observed that RC lacks robustness [4, 6]. Moreover, [4] proved that for a class of general linear plants, the exponential stability of RC systems can be achieved only when the plant is proper but not strictly proper. Various modifications of RC were proposed to achieve a trade-off between tracking accuracy and robustness. Let G i (s) = 1−e1−sT in Fig. 4.2. Then, G i (s) can be written as

4.3 Robustness Analysis of Repetitive Control System

1  n i  n i →∞ 1+ s

G i (s) = lim

k=1

79

T 2s2 4π 2 k 2

.

Under Proposition 4.2, for the above internal model, lim

max

n i →∞k=1,...,n p +n i

Re (λk ) = 0.

This implies that there is no positive real number η such that x (t) ≤ ke−ηt , even if linear RC systems are asymptotically stable, i.e., the convergence rate is η = 0 for this system.

4.3.2 Limitation of Repetitive Control System When designing an RC, knowledge of the exact time period of the external signals is often required. However, the practical time period is different from the time period in the design because the exact time period is difficult to measure, and the discretization of systems results in uncertainties in the period. These uncertainties can be modeled as ε (t), which satisfy w∗ (t) = w (t) + ε (t) , where w∗ ∈ C P0 T ∗ ([0, ∞) , Rn ) is the practical signal and w ∈ C P0 T ([0, ∞) , Rn ) . In general, ε ∈ L∞ [0, ∞) . The uncertainties ε (t) can also be considered as consisting of other disturbances, such as sensor noises. In the following section, the disturbance rejection performance of RC systems will be discussed. 4.3.2.1

Linear System

From Fig. 4.3, y (s) = G (s) u (s) can be written as x˙ (t) = Ax (t) + Bu (t) y (t) = CT x (t) ,

(4.8)

where x (t) ∈ Rn , y (t) , u (t) ∈ R, A ∈ Rn×n , and B, C ∈ Rn . The controller u (s) = G i (s) (y (s) + ε(s)) can be written as follows: Fig. 4.3 RC system in the presence of disturbance

80

4 Robustness Analysis of Repetitive Control Systems

u (t) = u (t − T ) + CT x (t) + ε (t) u (θ ) = 0, θ ∈ [−T, 0) ,   where G i (s) = 1/ 1 − e−sT and ε ∈ L∞ [0, ∞). Given that x˙ (t − T ) = Ax (t − T ) + Bu (t − T ) , the closed-loop system can be described as x˙ (t) − Hx˙ (t − T ) = A (x (t) − x (t − T )) + B (u(t) − u (t − T ))   = A + BCT x (t) − Ax (t − T ) + Bε (t) ,

(4.9)

where H = In . Typically, the spectral radius satisfies ρ (H) < 1. Therefore, the neutral-type equation (4.9) is a critical case [9, 10]. The above analysis is consistent with the existing results demonstrating that RC systems lack robustness. Linear RC systems are neutral-type time-delay systems in a critical case, which have an infinite number of poles tending to the imaginary axis as shown in Fig. 4.4 [4], [11, Lemma 7.1]. For these systems, the asymptotic stability is not equivalent to the exponential stability. As a result, the stability of this system is difficult to establish when subject to a persistent disturbance. Let ε ∈ L∞ [0, ∞) and x (t) be viewed as the input and state, respectively. The state can be written as  t  (t − s) Bε (s) ds, x (t) = 0

where  (t) is the fundamental solution of   x˙ (t) − Hx˙ (t − T ) = A + BCT x (t) − Ax (t − T ) . If there exists α > 0 such that  (t) ≤ e−αt , then  x (t) ≤ 0

t

e−α(t−s) ds · B ε∞ =

1 B ε∞ . α

If  (t) is only asymptotic convergence rather than exponential convergence, such as  (t) ≤ 1t , then 

t

1 ds · B ε∞ t − s 0 = ∞ · B ε∞ .

x (t) ≤

Therefore, the input-to-state stability of the perturbed system is difficult to establish (4.9).

4.3 Robustness Analysis of Repetitive Control System

81

Fig. 4.4 Eigenvalues of neutral-type time-delay systems in critical case

4.3.2.2

Nonlinear System

To clarify the previous work, it is assumed that vˆ is the learning variable, v is the desired signal, and v˜ = v − vˆ is the learning error. The Lyapunov-based approach is similar to the traditional adaptive- control approach, where v is a T -periodic signal for the former and is a constant for the latter. Consequently, an intermediate result (an assumption) is given in [12] to form the framework of the Lyapunov-based method. Intermediate Result: The functions f : [0, ∞) × Rn → Rn and b : [0, ∞) × Rn → Rn×m are bounded when e (t) is bounded. Moreover, there exist a differentiable function V0 : [0, ∞) × Rn → [0, ∞) , a positive-definite matrix M (t) = MT (t) ∈ Rn×n with 0 < λ M In < M (t) , λ M > 0, and a matrix h (t, e) ∈ Rm such that V˙0 (t, e) ≤ −λ M e (t)2 + hT (t, e) v˜ (t) .

(4.10)

Based on the intermediate result, the controller v (t) = v (t − T ) + h (t, e) .

(4.11)

82

4 Robustness Analysis of Repetitive Control Systems

The proof needs to employ a Lyapunov Rn × C ([−T, 0], Rm ) → [0, ∞) as follows: 1 V (t, e,˜vt ) = V0 (t,e) + 2



0

−T

functional

V : [0, ∞) ×

˜vt (θ )2 dθ.

The derivative of V (t, e,˜vt ) along the solutions of (4.10) and (4.11) is V˙ (t, e,˜vt ) ≤ −λ M e (t)2 . Integrating both sides of V˙ (t, e,˜vt ) ≤ −λ M e (t)2 from 0 to t results in  λM

t

e (s)2 ds ≤ V (0, e (0) ,˜v0 ) − V (t, e,˜vt )

0

≤ V (0, e (0) ,˜v0 ) . Therefore, e ∈ L2 [0, ∞); however, e (t) is nonexponentially stable. Similarly, if disturbance ε ∈ L∞ [0, ∞) exists in the feedback, v (t) = v (t − T ) + h (t, e) + ε (t) . Then, based on inequality (4.10), the derivative of V (t, e,˜vt ) along the solutions of (4.11) is V˙ (t, e,˜vt ) ≤ −λ M e (t)2 + v˜ T (t) ε (t) . In the presence of ε ∈ L∞ [0, ∞), the convergence performance of the tracking error e and the learning error v˜ are not easy to determine. In theory, the tracking error e and learning error v˜ could tend to infinity.

4.3.3 Filtered Repetitive Control

A nonperiodic disturbance may destabilize an RC system because the RC system is not exponentially stable. This motivates the design of a controller that achieves a trade-off between robustness and tracking accuracy. To improve robustness, a suitable filter is introduced as shown in Fig. 4.5, resulting in a filtered repetitive controller (FRC), in which the loop gain is reduced at high frequencies [4]. (In this book, the term "modified" used in [4] is replaced with the more descriptive term "filtered".) Stability is achieved at the expense of high-frequency performance. However, with appropriate design, an FRC can often achieve an acceptable trade-off between tracking performance and stability, which broadens the application of RC in practice.


Fig. 4.5 A suitable filter Q(s) is introduced into an RC resulting in an FRC

The filter Q(s) selects the dominant internal models so as to improve the robustness of the system. For simplicity, let Q(s) = 1/(εs + 1), where ε > 0. Then,

u(s) = [1/(1 − Q(s)e^(−sT))] y(s).

Let x_c = Q(s)u = u/(εs + 1); then

u(s) = x_c(s)e^(−sT) + y(s).

Therefore,

εẋ_c(t) = −x_c(t) + u(t)
u(t) = x_c(t − T) + y(t).      (4.12)

The closed-loop system formed by (4.5) and (4.12) is given as

[ε, 0; 0, Iₙ][ẋ_c(t); ẋ(t)] = [−1, C_pᵀ; 0, A_p + B_pC_pᵀ][x_c(t); x(t)] + [1, 0; B_p, 0][x_c(t − T); x(t − T)].

As discussed above, an FRC system is a retarded-type system, for which asymptotic stability is equivalent to exponential stability [11, Lemma 5.3]. This implies that there exists a positive real number λ_M such that ||x(t)|| ≤ ke^(−λ_M t) for some k > 0 if the FRC system is asymptotically stable. In the presence of a persistent disturbance, the FRC system is bounded-input bounded-output stable. Therefore, an FRC system is more robust than its corresponding RC system.
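The robustness mechanism can be visualized by comparing the gain of the pure internal model 1/(1 − e^(−sT)) with its filtered counterpart 1/(1 − Q(s)e^(−sT)). The following sketch evaluates both on a frequency grid; the period T and the filter constant ε are illustrative values, not taken from a specific example in this chapter.

```python
import numpy as np

T, eps = 2.0 * np.pi, 0.05          # illustrative period and filter constant
w = np.linspace(0.05, 20.0, 4000)   # rad/s; avoid w = 0 where the RC gain is infinite
s = 1j * w

rc  = 1.0 / (1.0 - np.exp(-s * T))                 # pure repetitive controller
q   = 1.0 / (eps * s + 1.0)                        # low-pass Q-filter
frc = 1.0 / (1.0 - q * np.exp(-s * T))             # filtered repetitive controller

# At high frequencies |Q(jw)| -> 0, so the FRC gain approaches 1, while the RC gain
# keeps peaking at every harmonic 2*pi*k/T.
print("max |RC|  above 10 rad/s:", np.max(np.abs(rc[w > 10])))
print("max |FRC| above 10 rad/s:", np.max(np.abs(frc[w > 10])))
```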


4.4 Summary

In this chapter, to compare the robustness of closed-loop systems containing different numbers of neutrally stable internal models, the convergence rate of the solutions of the closed-loop system was proposed as a measure of robustness. For a linear system, the convergence rate is determined by the least upper bound of the real parts of the characteristic roots. By investigating the location of these characteristic roots, it was proven that incorporating more neutrally stable internal models (one for each harmonic) results in a less robust closed-loop system, which answers the question posed at the beginning of the chapter. The robustness and limitations of RC systems were also analyzed. Finally, based on these results, the rationale for using FRC was explained.

References

1. Isidori, A., Marconi, L., & Serrani, A. (2003). Robust autonomous guidance: An internal model approach. London: Springer.
2. Francis, B. A., & Wonham, W. M. (1976). The internal model principle of control theory. Automatica, 12(5), 457–465.
3. Wonham, W. M. (1976). Towards an abstract internal model principle. IEEE Transactions on Systems, Man, and Cybernetics, 6(11), 735–740.
4. Hara, S., Yamamoto, Y., Omata, T., & Nakano, M. (1988). Repetitive control system: A new type servo system for periodic exogenous signals. IEEE Transactions on Automatic Control, 33(7), 659–668.
5. Hillerström, G. (1994). On repetitive control. Ph.D. thesis, Luleå University of Technology, Sweden.
6. Lee, R. C. H., & Smith, M. C. (1998). Robustness and trade-offs in repetitive control. Automatica, 34(7), 889–896.
7. Safonov, M., & Athans, M. (1981). A multiloop generalization of the circle criterion for stability margin analysis. IEEE Transactions on Automatic Control, 26(2), 415–422.
8. Khalil, H. K. (2002). Nonlinear systems. Upper Saddle River, NJ: Prentice-Hall.
9. Rabah, R., Sklyar, G. M., & Rezounenko, A. V. (2005). Stability analysis of neutral type systems in Hilbert space. Journal of Differential Equations, 214(2), 391–428.
10. Quan, Q., Yang, D., & Cai, K.-Y. (2010). Linear matrix inequality approach for stability analysis of linear neutral systems in a critical case. IET Control Theory and Applications, 4(7), 1290–1297.
11. Hale, J. K., & Lunel, S. M. V. (1993). Introduction to functional differential equations. New York: Springer.
12. Messner, W., Horowitz, R., Kao, W.-W., & Boals, M. (1991). A new adaptive learning rule. IEEE Transactions on Automatic Control, 36(2), 188–197.

Chapter 5

Filtered Repetitive Control with Nonlinear Systems: Linearization Methods

A direct method for solving the repetitive control (RC) problem for nonlinear systems is to transform them into linear time-invariant (LTI) systems using feedback linearization. Existing RC design methods can then be applied directly to the transformed systems; however, some nonlinear systems cannot be exactly linearized. Transformed systems are therefore often subject to residual nonlinearities, which in this chapter are assumed to be weak in the sense of satisfying Lipschitz or sector conditions. This chapter aims to answer the following question: How can RC be applied to linearized systems subject to weak nonlinearities? To answer this question, RC for systems with state-related nonlinearities and RC for systems with input-related nonlinearities are discussed. Conventional analysis tools, such as sector conditions, are employed in this study.

5.1 Repetitive Control for System with State-Related Nonlinearity

Using exact input–output linearization or input–state linearization, a nonlinear system can be converted into a linear system so that conventional RC becomes applicable; readers may refer to Chap. 3 for details. When approximate linearization is used, a state-related nonlinearity remains in the system model in the form of (5.1), and the methods proposed in this section can then be applied.


5.1.1 Problem Formulation

Consider a class of multiple-input and multiple-output (MIMO) uncertain nonlinear systems as follows [1, 2]:

ẋ = Ax + Bu + φ(x) + d,  x(0) = x₀
y = Cᵀx,      (5.1)

where A ∈ R^(n×n) is a stable constant matrix, and B, C ∈ R^(n×m) are constant matrices. The function φ : Rⁿ → Rⁿ is a nonlinear function vector, x(t) ∈ Rⁿ is the state vector, y(t) ∈ Rᵐ is the output, u(t) ∈ Rᵐ is the control input, and d(t) ∈ Rⁿ is a bounded T-periodic disturbance. It is assumed that only y(t) is available from measurement. The reference yd(t) ∈ Rᵐ is a known and sufficiently smooth T-periodic signal, t ≥ 0. In the following discussion, the time argument t is omitted except where necessary. For system (5.1), the following assumptions are considered.

Assumption 5.1 There exist positive-definite matrices 0 < P = Pᵀ ∈ R^(n×n), 0 < Q = Qᵀ ∈ R^(n×n) and a sufficiently small ε > 0 such that

PA + AᵀP = −Q,
P(∂φ/∂x) + (∂φ/∂x)ᵀP ≤ Q − εIₙ, ∀x ∈ Rⁿ.

Assumption 5.2 The function φ : [0, ∞) × Rⁿ → Rⁿ belongs to the sector [K₁, K₂]. (An introduction to sector nonlinearities is given in Sect. 5.1.3.)

Assumption 5.3 There exists a T-periodic ud such that

ẋd = Axd + Bud + φ(xd) + d
yd = Cᵀxd.      (5.2)

Assumption 5.4 The desired state xd(t) ∈ Rⁿ is a known and sufficiently smooth T-periodic signal, where yd = Cᵀxd.

Under Assumptions 5.1–5.4, the objective here is to design a tracking controller u such that e ∈ L₂[0, ∞) ∩ L∞[0, ∞), where e ≜ yd − y. For the nonlinear function φ, either Assumption 5.1 or Assumption 5.2 will be used later. In the following discussion, an example shows how to transform the dynamic equation of a robot manipulator into the form of (5.1).

Example 5.1 ([2]) The dynamic equation of a robot manipulator is described by

J(θ)θ̈ = τ + h(θ, θ̇),


where h(θ, θ̇) ∈ R³ denotes the term with respect to gravity, the centrifugal force, etc., and J(θ) ∈ R^(3×3) represents the inertia matrix. The control input τ ∈ R³ is given as

τ = J(θ)(−k₁θ − k₂θ̇ + bu) − h̄(θ, θ̇),      (5.3)

where u ∈ R³ is the new control input and h̄(θ, θ̇) is a nominal term used to compensate for the term h(θ, θ̇). Then, the system above reduces to

θ̈ = −k₁θ − k₂θ̇ + bu + J⁻¹(θ)h̃(θ, θ̇),

where h̃(θ, θ̇) = h(θ, θ̇) − h̄(θ, θ̇). By setting x = [θᵀ θ̇ᵀ]ᵀ, the system above takes the form ẋ = Ax + Bu + φ(x), where

A = [0, I₃; −k₁I₃, −k₂I₃],  B = [0; bI₃],
φ(x) = [0; φ̄(θ, θ̇)],  φ̄(θ, θ̇) = J⁻¹(θ)h̃(θ, θ̇).

Now the original problem is formulated in the form of (5.1). In the following discussion, positive-definite matrices P, Q satisfying Assumption 5.1 are found by solving the Riccati-type equation PA + AᵀP = −Q. By setting Q = I₆ for simplicity,

P = [p₁₁I₃, p₁₂I₃; p₁₂I₃, p₂₂I₃]

is obtained, where

p₁₂ = p₂₁ = (−k₁ + √(k₁² + b²))/b²,
p₂₂ = (−k₂ + √(k₂² + b²(1 − 2p₁₂)))/b².

Hence, given that


∂φ/∂x = [0, 0; ∂φ̄/∂θ, ∂φ̄/∂θ̇],

it follows that

P(∂φ/∂x) + (∂φ/∂x)ᵀP =
[ p₁₂(∂φ̄/∂θ + (∂φ̄/∂θ)ᵀ),            p₁₂(∂φ̄/∂θ̇) + p₂₂(∂φ̄/∂θ)ᵀ ;
  p₂₂(∂φ̄/∂θ) + p₁₂(∂φ̄/∂θ̇)ᵀ,          p₂₂(∂φ̄/∂θ̇ + (∂φ̄/∂θ̇)ᵀ) ].      (5.4)

To satisfy P(∂φ/∂x) + (∂φ/∂x)ᵀP ≤ Q − εI₆, the following two methods are often used:

• (1) High-gain feedback: If b is set sufficiently large, then p₁₂, p₂₂ become small, and the matrix (5.4) approaches the zero matrix.
• (2) Nonlinear compensation: Find a nominal term h̄(θ, θ̇) that is as close to h(θ, θ̇) as possible, so that the gains of ∂φ̄/∂θ and ∂φ̄/∂θ̇ are reduced to sufficiently small values; then the matrix (5.4) approaches the zero matrix.

The output matrix can be designed as

C = PB = [p₁₂I₃ p₂₂I₃]ᵀ.

If the objective is to drive θ to θd, then the desired output is set to be yd = p₁₂θd + p₂₂θ̇d.
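A candidate P for Assumption 5.1 can also be obtained numerically instead of via the closed-form entries above. The sketch below solves PA + AᵀP = −Q for the block matrices of this example with SciPy and checks positive definiteness; the gains k₁, k₂, b are hypothetical values chosen only for illustration, not taken from the book.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

k1, k2, b = 4.0, 3.0, 2.0                      # hypothetical gains (not from the book)
I3 = np.eye(3)
A = np.block([[np.zeros((3, 3)), I3],
              [-k1 * I3,        -k2 * I3]])
B = np.vstack([np.zeros((3, 3)), b * I3])
Q = np.eye(6)

# solve_continuous_lyapunov(a, q) solves a X + X a^T = q, so passing A^T and -Q
# yields A^T P + P A = -Q, i.e., P A + A^T P = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

print("P symmetric:", np.allclose(P, P.T))
print("P positive definite:", np.all(np.linalg.eigvalsh(P) > 0))
print("candidate output matrix C = P B =\n", P @ B)
```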

5.1.2 Repetitive Controller Design Under Assumption 5.1

Before introducing the main result, the mean value theorem is first recalled.

Theorem 5.1 (Mean Value Theorem [4, p. 74]) Let φ : Rⁿ → Rⁿ be continuously differentiable in an open convex set D ⊂ Rⁿ. For any x, x + p ∈ D,

φ(x + p) − φ(x) = ( ∫₀¹ ∂φ/∂z |_(z=x+tp) dt ) · p.

With the mean value theorem in hand, the designed RC and its analysis are stated in the following theorem.

Theorem 5.2 Under Assumptions 5.1, 5.3, 5.4, the RC is designed for system (5.1) as

u(t) = u(t − T) + BᵀPx̃(t),      (5.5)

where x̃ = xd − x. Then, x̃ ∈ L₂[0, ∞) ∩ L∞[0, ∞) and e ∈ L₂[0, ∞) ∩ L∞[0, ∞).

Proof Subtracting system (5.1) from the reference system (5.2) and using Theorem 5.1 yields

ẋ̃ = Ax̃ + Bũ + ( ∫₀¹ ∂φ/∂z |_(z=x+s x̃) ds ) x̃
e = Cᵀx̃,      (5.6)

where ũ ≜ ud − u. Given that ud is T-periodic, the RC (5.5) can be rewritten as

ũ(t) = ũ(t − T) − BᵀPx̃(t).      (5.7)

Design a Lyapunov functional as

V₁(t) = x̃ᵀ(t)Px̃(t) + ∫_(t−T)^t ũᵀ(s)ũ(s) ds.

Taking the derivative of V₁ along the controller (5.7) and the solutions of the error system (5.6) yields

V̇₁ = x̃ᵀP[ Ax̃ + Bũ + ( ∫₀¹ ∂φ/∂z |_(z=x+s x̃) ds ) x̃ ] + [ Ax̃ + Bũ + ( ∫₀¹ ∂φ/∂z |_(z=x+s x̃) ds ) x̃ ]ᵀ Px̃ + ũᵀũ − ũᵀ(t − T)ũ(t − T).

Recalling (5.7), ũ(t − T) = ũ(t) + BᵀPx̃(t). Then,

V̇₁ = −x̃ᵀQx̃ + x̃ᵀ[ P( ∫₀¹ ∂φ/∂z |_(z=x+s x̃) ds ) + ( ∫₀¹ ∂φ/∂z |_(z=x+s x̃) ds )ᵀP ]x̃ + 2x̃ᵀPBũ + ũᵀũ − (ũ + BᵀPx̃)ᵀ(ũ + BᵀPx̃)
    ≤ −x̃ᵀQx̃ + 2x̃ᵀPBũ + x̃ᵀ(Q − εIₙ)x̃ − x̃ᵀPBBᵀPx̃ − 2x̃ᵀPBũ
    = −ε||x̃||² − x̃ᵀPBBᵀPx̃
    ≤ −ε||x̃||².

Consequently,

ε||x̃(t)||² ≤ −V̇₁(t)  ⇒  ∫₀ᵗ ||x̃(s)||² ds ≤ −(1/ε)V₁(t) + (1/ε)V₁(0).

This implies that ∫₀^∞ ||x̃(s)||² ds ≤ (1/ε)V₁(0) < ∞, i.e., x̃ ∈ L₂[0, ∞). Moreover, V̇₁(t) ≤ 0, which leads to λ_min(P)||x̃(t)||² ≤ V₁(t) ≤ V₁(0), ∀t ≥ 0,


namely,

||x̃(t)|| ≤ √( V₁(0)/λ_min(P) ).

1 V1 (0). λmin (P)

It can be concluded that x˜ ∈ L2 [0, ∞) ∩ L∞ [0, ∞) . Given that e = CT x˜ , e ∈  L2 [0, ∞) ∩ L∞ [0, ∞) . If only y (t) is available from the measurement, the following result is obtained. Corollary 5.1 Under Assumptions 5.1, 5.3, 5.4, if C = PB and the RC are designed for system (5.1) as u (t) = u (t − T ) + e (t) (5.8) then x˜ ∈ L2 [0, ∞) ∩ L∞ [0, ∞) and e ∈ L2 [0, ∞) ∩ L∞ [0, ∞) , where e=ed −e. Proof This is a direct result of Theorem 5.2.



For a strictly positive real system (see Sect. 2.1.4), there exists a matrix 0 < P = PT ∈ Rn×n such that PA + AT P < 0 and PB = C.

5.1.3 Repetitive Controller Design Under Assumption 5.2 5.1.3.1

Sector Nonlinearity

Consider a scale function h (t, u) that satisfies the following inequalities: αu 2 ≤ uh (t, u) ≤ βu 2

(5.9)

for any t ∈ R and u ∈ R, where β ≥ α and α, β ∈ R. The function h belongs to the sector [α, β] , which is shown in Fig. 5.1. This sector nonlinearity is equivalent to (h (t, u) − αu) (βu − h (t, u)) ≤ 0.

(5.10)

Definition 5.1 ([3]) A memoryless function h : [0, ∞) × Rn → Rn is said to belong to the sector (1) [0, ∞) , if uT h (t, u) ≥ 0; (2) [K1 , ∞) , if uT (h (t, u) − K1 u) ≥ 0, with K1 = K1T > 0; (3) [0, K2 ] with K2 = K2T > 0 if hT (t, u) (h (t, u) − K2 u) ≤ 0; (4) [K1 , K2 ] with K = K2 − K1 = KT > 0 if (h (t, u) − K1 u)T (h (t, u) − K2 u) ≤ 0. Proposition 5.1 If φ ∈ [K1 , K2 ] , then

5.1 Repetitive Control for System with State-Related Nonlinearity

91

Fig. 5.1 Sector nonlinearity

ψ T (x) ψ (x) ≤ ψ T (x) (K2 − K1 ) x

(5.11)

x (K2 − K1 ) ψ (x) ≤ x (K2 − K1 ) (K2 − K1 ) x,

(5.12)

T

T

T

T

where ψ (x) = K2 x − φ (t, x) . Proof Given that φ ∈ [K1 , K2 ] , we have (φ (x) − K1 x)T (φ (x) − K2 x) ≤ 0. Then, ψ T (x) (ψ (x) − (K2 − K1 ) x) ≤ 0.

(5.13)

Consequently, inequality (5.11) holds. Inequality (5.13) is further written as (ψ (x) − (K2 − K1 ) x + (K2 − K1 ) x)T (ψ (x) − (K2 − K1 ) x) ≤ 0.

(5.14)

Excluding the term (ψ (x) − (K2 − K1 ) x)T (ψ (x) − (K2 − K1 ) x) ≥ 0 from (5.14) yields xT (K2 − K1 )T ψ (x) − xT (K2 − K1 )T (K2 − K1 ) x ≤ 0. Therefore, (5.12) holds.   Define v = (K2 − K1 ) u and ψ¯ (v) = ψ (K2 − K1 )−1 v . Then, u = (K2 − K1 )−1 v

  ψ¯ (v) = K2 (K2 − K1 )−1 v − φ (K2 − K1 )−1 v . Proposition 5.2 If φ ∈ [K1 , K2 ] , then



92

5 Filtered Repetitive Control with Nonlinear Systems … T ψ¯ (v) ψ¯ (v) ≤ vT ψ¯ (v) ≤ vT v.



Proof This is a direct result of Proposition 5.1.

5.1.3.2

(5.15)

Repetitive Controller Design Based on Time-Domain Analysis

Theorem 5.3 Under Assumptions 5.2, 5.3, 5.4, the RC is designed for the error system (5.1) as follows: u (t) = u (t − T ) + BT Px˜ (t) and there exist 0 < P = PT ∈ Rn×n and α > 0 such that   −P (A + K2 ) − (A + K2 )T P − α K 2 In P > 0, P αIn

(5.16)

(5.17)

where x˜  xd − x. Then, x˜ ∈ L2 [0, ∞) ∩ L∞ [0, ∞) and e ∈ L2 [0, ∞) ∩ L∞ [0, ∞) . Proof Subtracting system (5.1) from the reference system (5.2) yields the error system as x˙˜ = A˜x + Bu˜ + (φ (xd ) − φ (x)) e = CT x˜ ,

(5.18)

˜ u − u. This equation is further written as where u  d x˙˜ = (A + K2 ) x˜ + Bu˜ − (ψ (x) − ψ (xd )) , where ψ (x) = K2 x − φ (x) . Given that ud is T -periodic, the RC (5.16) can be rewritten as (5.19) u˜ (t) = u˜ (t − T ) − BT Px˜ (t) . ˜ as Design a Lyapunov functional V2 (t) = V2 (t, x˜ , u)  V2 (t) = x˜ T (t) Px˜ (t) +

t

u˜ T (s) u˜ (s) ds.

t−T

Taking the derivative of V along the solutions of the error system (5.18) and the controller (5.19) yields

5.1 Repetitive Control for System with State-Related Nonlinearity

93

V˙2 =˜xT P ((A + K2 ) x˜ + Bu˜ + (ψ (x) − ψ (xd ))) + ((A + K2 ) x˜ + Bu˜ + (ψ (x) − ψ (xd )))T Px˜ + u˜ T u˜ − u˜ T (t − T ) u˜ (t − T ) . Given that

ψ (x) − ψ (xd ) ≤ K ˜x

then,

2˜xT P (ψ (x) − ψ (xd )) ≤ α −1 x˜ T PPx˜ + α K 2 x˜ T x˜

for any α > 0. Then,   V˙2 ≤˜xT P (A + K2 ) + (A + K2 )T P + α −1 PP + α K 2 x˜ + 2˜xT PBu˜ + u˜ T u˜ − u˜ T (t − T ) u˜ (t − T )   =˜xT P (A + K2 ) + (A + K2 )T P + α −1 PP + α K 2 In − PBBT P x˜   ≤˜xT P (A + K2 ) + (A + K2 )T P + α −1 PP + α K 2 In x˜ . If P (A + K2 ) + (A + K2 )T P + α −1 PP + α K 2 In < −εIn , then V˙2 (t) ≤ −ε ˜x (t) 2 . Consequently, ε ˜x (t) 2 ≤ −V˙2 (t) ⇒



t 0





This implies that 0

1 1

˜x (s) 2 ds ≤ − V2 (t) + V2 (0) . ε ε

˜x (s) 2 ds ≤ 1ε V2 (0) < ∞, namely, x˜ ∈ L2 [0, ∞) . More-

over, V˙2 (t) ≤ 0. This leads to λmin (P) ˜x (t) 2 ≤ V2 (t) ≤ V2 (0) , ∀t ≥ 0. 

In other words

˜x (t) ≤

1 V2 (0). λmin (P)

It can be concluded that x˜ ∈ L2 [0, ∞) ∩ L∞ [0, ∞) . Given that e = CT x˜ , e ∈ L2 [0, ∞) ∩ L∞ [0, ∞) . Using the Schur complement (see Sect. 2.2.1), the following inequalities are found to be equivalent to each other:

94

5 Filtered Repetitive Control with Nonlinear Systems …

−P(A + K₂) − (A + K₂)ᵀP − α⁻¹PP − α||K||²Iₙ > 0
⇔
[ −P(A + K₂) − (A + K₂)ᵀP − α||K||²Iₙ,   P ;
  P,                                      αIₙ ] > 0.

This concludes the proof.
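For a given candidate pair (P, α), the matrix inequality (5.17) can be checked numerically by inspecting the eigenvalues of the assembled block matrix. The following sketch does this for hypothetical low-order data (a stable A and sector bounds K₁ = 0, K₂); the data and the candidate P (taken here from a Lyapunov equation) and α are illustrative choices, not from the book, and a full LMI synthesis would instead use a semidefinite programming solver.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical data (not from the book)
A  = np.array([[0.0, 1.0],
               [-2.0, -3.0]])
K2 = 0.1 * np.eye(2)
K1 = np.zeros((2, 2))
K_norm = np.linalg.norm(K2 - K1, 2)        # ||K|| = ||K2 - K1||
n = A.shape[0]

def lmi_5_17(P, alpha):
    """Assemble the block matrix of condition (5.17) for a candidate P and alpha."""
    M11 = -P @ (A + K2) - (A + K2).T @ P - alpha * K_norm**2 * np.eye(n)
    return np.block([[M11, P], [P, alpha * np.eye(n)]])

# Candidate P from P(A + K2) + (A + K2)^T P = -I, with alpha chosen by hand.
P = solve_continuous_lyapunov((A + K2).T, -np.eye(n))
alpha = 5.0

eigs = np.linalg.eigvalsh(lmi_5_17(P, alpha))
print("candidate P positive definite:", np.all(np.linalg.eigvalsh(P) > 0))
print("smallest eigenvalue of the (5.17) block matrix:", eigs.min())
print("condition (5.17) satisfied:", eigs.min() > 0)
```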

Furthermore, the following theorem holds even if only y (t) is available from measurement. However, the condition will be more conservative because PB = C has to be satisfied additionally. Corollary 5.2 Under Assumptions 5.2, 5.3, 5.4, the RC is designed for (5.1) as follows: u (t) = u (t − T ) + e (t) , (5.20) and there exist 0 < P = PT ∈ Rn×n and α > 0 such that   −P (A + K2 ) − (A + K2 )T P − α K 2 I P >0 P αI PB = C,

(5.21)

where e  ed − e. Then, x˜ ∈ L2 [0, ∞) ∩ L∞ [0, ∞) and e ∈ L2 [0, ∞) ∩ L∞ [0, ∞) . Proof This is a direct result of Theorem 5.3.

5.1.3.3



Repetitive Controller Design Based on Frequency-Domain Analysis

The RC system is shown in Fig. 5.2a. Theorem 5.4 Under Assumptions 5.2, 5.3, 5.4, the RC is designed for system (5.1) as follows: u (s) = F (s) e (s) , (5.22) where

−1  Q (s) e−T s + L. F (s) = M (s) Im − Q (s) e−T s

Suppose

−1  (K2 − K1 ) sIn − A − K2 + BF (s) CT

is stable and  −1     sup (K2 − K1 ) jωIn − A − K2 + BF ( jω) CT  < 1.

ω∈R

(5.23)

5.1 Repetitive Control for System with State-Related Nonlinearity

95

Fig. 5.2 Repetitive control for a system with state-related nonlinearity

Then, the closed-loop system is internally stable. Proof The closed-loop system is depicted in Fig. 5.2a. The transformation of the closed-loop system is depicted in Fig. 5.2b, wherethe external inputs are ignored.   ¯ Using the inequalities in (5.15) in Proposition 5.2, ψ (v) ≤ v for any v. There    fore, ψ¯  ≤ 1. Then, based on the condition (5.23), this proof is concluded by using the small gain theorem (see Theorem 2.1).  Given that e−T s in F (s) will make it complex to check stability and perform norm computation under the condition (5.23), another theorem is given to solve this problem; this theorem considers only rational polynomials. Theorem 5.5 Under Assumptions 5.2, 5.3, 5.4, the RC is designed for the system (5.1) as in (5.22). Suppose H (s) , Q (s) , M (s) are stable, and sup T1 ( jω) < 1,

(5.24)

ω∈R

where 

Q −QCT T1 = 0 K



0 In H−1 BM −H−1



96

5 Filtered Repetitive Control with Nonlinear Systems …

−1  H (s) = sIn − A − K2 + BLCT . Then, the closed-loop system is internally stable. Proof Five marked points a, b, c, e and f are shown in Fig. 5.2a, which satisfy b = Q (a + e) u = Ma + Le x = (sIn − A)−1 (K2 x − c + Bu) f = (K2 − K1 ) x e = −CT x. Then,

      0 a b In Q −QCT . = H−1 BM −H−1 c f 0 K    T1

On the other hand,

   −T s   e Im 0 a b = . c f 0 ψ¯ (·)    

The transformation of the closed-loop system is depicted in Fig. 5.2c, where the external inputs are ignored. From the definition and Proposition 5.2,  ≤ 1 holds. Then, using the small gain theorem (see Theorem 2.1), this proof is concluded.  Theorems 5.4, 5.5 only consider the stability of the closed-loop. The RC is expected to attenuate the periodic component of the tracking error. Furthermore, the reason this system can track T -Periodic signals is partly explained in Chap. 9.
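Conditions such as (5.23) and (5.24) are H∞-type small-gain bounds and can be screened numerically by sweeping a frequency grid and taking the largest singular value at each point. The sketch below does this for a transfer matrix of the form appearing in (5.23); the plant data, the constant feedback gain F, and the frequency range are placeholders for illustration only, not a system from this chapter.

```python
import numpy as np

def sup_gain(tf, omegas):
    """Approximate sup over the grid of the largest singular value of tf(jw)."""
    return max(np.linalg.svd(tf(1j * w), compute_uv=False)[0] for w in omegas)

# Placeholder data: T(jw) = (K2 - K1) (jw*I - A - K2 + B F(jw) C^T)^{-1} as in (5.23),
# with a constant feedback F used only for illustration.
A  = np.array([[0.0, 1.0], [-2.0, -3.0]])
B  = np.array([[0.0], [1.0]])
C  = np.array([[1.0], [0.0]])
K1 = np.zeros((2, 2))
K2 = 0.1 * np.eye(2)
F  = np.array([[2.0]])                     # placeholder controller gain

def closed_loop(s):
    return (K2 - K1) @ np.linalg.inv(s * np.eye(2) - A - K2 + B @ F @ C.T)

omegas = np.logspace(-2, 3, 2000)
gain = sup_gain(closed_loop, omegas)
print("sup_w sigma_max(T(jw)) ~= %.3f -> small-gain condition %s"
      % (gain, "holds" if gain < 1 else "fails"))
```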

5.2 Repetitive Control for Systems with Input-Related Nonlinearity

In the following discussion, systems with input-related nonlinearities are also considered because they are often encountered in practice.

5.2.1 Problem Formulation Consider the following nonlinear system: x˙ = Ax + Bh (t, u) + d, x (0) = x0 y = CT x + Du,

(5.25)

5.2 Repetitive Control for Systems with Input-Related Nonlinearity

97

where A ∈ Rn×n is a constant matrix, B, C ∈ Rn×m and D ∈ Rm×m are constant matrices, function h : [0, ∞) × Rm → Rm belongs to the sector [K1 , K2 ] . The vector x (t) ∈ Rn is the state vector, y (t) ∈ Rm is the output, u (t) ∈ Rm is the control, and d (t) ∈ Rn is an unknown bounded T -periodic disturbance. The objective here is to design a tracking controller u such that e ∈ L2 [0, ∞) ∩ L∞ [0, ∞), where e  yd − y. Example 5.2 A common input-related nonlinearity is a class of saturation functions, which can be expressed as ⎧ ⎪ ⎨ sign (x1 ) min (a, |x1 |) .. , sat (x) = . ⎪ ⎩ sign (xn ) min (a, |xn |) where x = [x1 x2 · · · xn ]T . The saturation function belongs to the sector [0, In ].

5.2.2 Repetitive Controller Design The nonlinear system (5.25) can be further written as x˙ = Ax + B (K2 u − ψ (u)) + d, x (0) = x0 y = CT x + Du,

(5.26)

where ψ (u) = K2 u − h (t, u) . Define v = (K2 − K1 ) u and ψ¯ (v) = ψ ((K2 − K1 )−1 v . Then, u = (K2 − K1 )−1 v

  ψ¯ (v) = K2 (K2 − K1 ) −1 v − h t, (K2 − K1 )−1 v . With the new definition,   x˙ = Ax + B B1 v − ψ¯ (v) + d, x (0) = x0 y = CT x, where B1 = K2 (K2 − K1 )−1 . With the newly defined system, the following theorem is given. Theorem 5.6 The RC is designed for system (5.25) as follows: u (s) = (K2 − K1 )−1 F (s) E (s) ,

(5.27)

98

5 Filtered Repetitive Control with Nonlinear Systems …

Fig. 5.3 Repetitive control for a system with input-related nonlinearity

−1  where F (s) = L (s) Im − Q (s) e−T s Q (s) e−T s +K. Suppose (Im + FPB1 )−1 F and P are stable and   sup (Im + FPB1 )−1 FP ( jω) < 1,

ω∈R

(5.28)

where P = CT (sIm − A)−1 B + D. Then, the closed-loop system is internally stable. Proof The closed-loop system isdepicted in Fig. 5.3a, b. Using the inequalities in     ¯ (5.15) in Proposition 5.2, ψ (v) ≤ v for any v. Therefore, ψ¯  ≤ 1. Then, by combining the condition (5.28) with the small gain theorem Theorem 2.1, this proof is concluded.  The result of Theorem 5.6 is similar to that of [5]. However, given that e−T s in F (s) will complicate checking stability and norm computation in (5.28), another theorem is given to solve this problem, where only rational polynomials need to be considered. Theorem 5.7 The RC is designed for system (5.25) as in (5.27). Suppose that (Im + PB1 K)−1 , Q, L and P are stable, and sup T2 ( jω) < 1,

ω∈R



where T2 =

QQ LK



 0 Im . − (Im + PB1 K)−1 PB1 L (Im + PB1 K)−1 P

Then, the closed-loop system is internally stable.

(5.29)

5.2 Repetitive Control for Systems with Input-Related Nonlinearity

99

Proof Five marked points a, b, c, d, e are shown in Fig. 5.3a c, which satisfy b = Q (a + e) d = K1 e + La e = −P (B1 d − c) . Then,       0 Im b QQ a = . d LK c − (Im + PB1 K)−1 PB1 L (Im + PB1 K)−1 P    T2

On the other hand,

     −T s e Im 0 a b = . c d 0 ψ¯ (·)    

Using Proposition 5.2,  ≤ 1 holds. Then, using the small gain theorem (Theorem 2.1), this proof is concluded. The closed-loop system is depicted in Fig. 5.3c, where external inputs are ignored. 

5.3 Summary

The RC design for nonlinear systems was developed along with feedback linearization and backstepping. Differential geometric techniques were combined with the IMP to develop a nonlinear RC strategy. Systems with both state-related nonlinearities and input-related nonlinearities were considered, where Lyapunov methods and the small gain theorem are the main tools. To use the small gain theorem, the basic idea is to transform all nonlinear terms into a lumped term with a gain smaller than or equal to 1. A remaining difficulty is that, after feedback linearization, the disturbance may become state-dependent through the nonlinearities; in the modeling of this chapter, these difficulties were ignored, since otherwise the problem becomes considerably more complex. Further methods are introduced in the following chapters.

References

1. Hara, S., Omata, T., & Nakano, M. (1985). Synthesis of repetitive control systems and its applications. In Proceedings of the IEEE Conference on Decision and Control (pp. 1387–1392), New York, USA.
2. Omata, T., Hara, S., & Nakano, M. (1987). Nonlinear repetitive control with application to trajectory control of manipulators. Journal of Robotic Systems, 4(5), 631–652.


3. Khalil, H. K. (2002). Nonlinear systems. Englewood Cliffs, NJ: Prentice-Hall.
4. Dennis, J. E., & Schnabel, R. B. (1996). Numerical methods for unconstrained optimization and nonlinear equations. Philadelphia: SIAM.
5. Lin, Y. H., Chung, C. C., & Hung, T. H. (1991). On robust stability of nonlinear repetitive control system: Factorization approach. In Proceedings of the American Control Conference (pp. 2646–2647), Boston, MA, USA.

Chapter 6

Filtered Repetitive Control with Nonlinear Systems: An Adaptive-Control-Like Method

An adaptive-control-like method is the leading repetitive control (RC) method for nonlinear systems. Before using this method, a tracking problem for nonlinear systems needs to be converted into a rejection problem for nonlinear error dynamics subject to a periodic disturbance. This conversion reduces the difficulty of the original problem. Furthermore, the periodic disturbance is treated as an unknown parameter that needs to be learned. As discussed in Chap. 4, stability is enhanced by introducing a low-pass filter into RC for linear systems, resulting in a filtered repetitive controller (FRC). Similarly, it is also necessary to introduce a filter into RC for nonlinear systems. However, the theory on FRC proposed in [1] is derived in the frequency domain and can be applied only with difficulty, if at all, to nonlinear systems. For this reason, the tracking performance needs to be analyzed in the time domain. This chapter aims to answer the following question: How is the filtered repetitive controller designed for nonlinear systems using the adaptive-control-like method? To answer this question, the problem is formulated as a rejection problem for nonlinear error dynamics subject to a periodic disturbance. Moreover, a simple example is given to show the similarity between classical adaptive control and the adaptive-control-like method for RC. A new model of periodic signals is established for the adaptive-control-like method, which is a key step in FRC design. Applications to robotic manipulator tracking and attitude control of a quadcopter are given to help in better understanding the method. Most contents of this chapter are based on [2, 3, 5].


101

102

6 Filtered Repetitive Control with Nonlinear Systems …

6.1 Problem Formulation To illustrate the generality of FRC, the following error dynamics is considered [6, 7]:   e˙ (t) = f (t, e (t)) + b (t, e (t)) v (t) − vˆ (t) . (6.1) Here, e (t) ∈ Rn is a tracking error vector, v (t) = [v1 · · · vm ]T ∈ Rm is a disturbance, vˆ (t) ∈ Rm is a signal designed to compensate for v (t) . For simplicity, the initial time is set to t0 = 0. Throughout this chapter, the following assumptions are imposed on the dynamics (6.1). Assumption 6.1 ([6, 7]) The functions f : [0, ∞) × Rn → Rn and b : [0, ∞) × Rn → Rn×m are bounded when e (t) is bounded on R+ . Moreover, there exists a differentiable function V0 : [0, ∞) × Rn → [0, ∞) , a positive-definite matrix M (t) = MT (t) ∈ Rn×n with 0 < λ M In < M (t) , λ M > 0, and a matrix R (t) ∈ Rn×m such that V0 (t, e (t)) ≥ c0 e (t)2   V˙0 (t, e (t)) ≤ −eT (t) M (t) e (t) + eT (t) R (t) v (t) − vˆ (t) ,

(6.2) (6.3)

where c0 > 0. Assumption 6.2 The disturbance v satisfies v ∈ C P0 T ([0, ∞) , Rm ). Moreover, v (t) =sat(v (t))  [satβ1 (v1 ) ... satβm (vm )]T , where  satβi (ξi ) =

for |ξi | ≤ βi ξi , ∀ξi ∈ R, sgn (ξi ) βi for |ξi | > βi

(6.4)

where sgn(·) denotes the standard signum function. Assumption 6.3 The compensation signal vˆ (t) is subject to saturation sat(·) . Under Assumptions 6.1, 6.2, our objective is to design a single FRC with the following two properties: (i) with certain filter parameters, lim e (t) = 0; (ii) with t→∞

another set of appropriate filter parameters, for any value of e (0), e (t) is uniformly ultimately bounded. Furthermore, under Assumptions 6.1–6.3, our objective is to design a saturated FRC with the following two properties: (i) with certain filter parameters, lim e (t) = 0; (ii) for any value of e (0), e (t) is uniformly ultimately t→∞ bounded.

6.2 Simple Example

103

6.2 Simple Example For simplicity, simple error dynamics is first considered as   e˙ (t) = −e (t) + φ (e (t)) v (t) − vˆ (t) ,

(6.5)

where e (t) , φ (e (t)) , v (t) , vˆ (t) ∈ R. The signal v (t) is unknown. A nonnegative function V1 (e (t)) is selected to be V1 (e (t)) =

1 2 e (t) . 2

Taking the time derivative V1 along the error dynamics (6.5) yields V˙1 (e (t)) = e (t) (−e (t) + φ (e (t)) v˜ (t)) = −e2 (t) + e (t) φ (e (t)) v˜ (t) , where v˜ (t) = v (t) − vˆ (t) . The adaptive term is introduced to eliminate the cross term e (t) φ (e (t)) v˜ (t) . The method to design a controller vˆ will be discussed in two cases in the following section. (1) Case 1: v (t) ≡ v0 , Classical Adaptive Control To eliminate the cross term e (t) φ (e (t)) v˜ (t) , a nonnegative function V2 (e (t) , v˜ (t)) is selected as 1 V2 (e (t) , v˜ (t)) = V1 (e (t)) + v˜ 2 (t) 2 whose time derivative is   V˙2 (e (t) , v˜ (t)) = −e2 (t) + e (t) φ (e (t)) v˜ (t) + v˜ (t) v˙ (t) − v˙ˆ (t) . Given that v (t) ≡ v0 , v˙ (t) ≡ 0. Then,   V˙2 (e (t) , v˜ (t)) = −e2 (t) − v˜ (t) vˆ˙ (t) − e (t) φ (e (t)) . Therefore, v˙ˆ (t) = e (t) φ (e (t)) will make V˙2 (e (t) , v˜ (t)) = −e2 (t) . Then,  t lim e2 (s)ds ≤ V1 (e (0) , v˜ (0)).

t→∞ 0

(2) Case 2: v(t) = v(t − T), Repetitive Control
To eliminate the cross term e(t)φ(e(t))ṽ(t), a nonnegative function V₃(t, e, ṽ) is selected as

V₃(t, e, ṽ) = V₁(e(t)) + (1/2) ∫_(t−T)^t ṽ²(s) ds


Table 6.1 Classical adaptive control versus repetitive control

                    | Classical adaptive control        | Repetitive control
Unknown signal      | v̇(t) ≡ 0                          | v(t) = v(t − T)
Learning law        | v̂̇(t) = e(t)φ(e(t))               | v̂(t) = v̂(t − T) + e(t)φ(e(t))
Lyapunov function   | (1/2)e²(t) + (1/2)ṽ²(t)           | (1/2)e²(t) + (1/2)∫_(t−T)^t ṽ²(s) ds

whose time derivative is

V̇₃(t, e, ṽ) = −e²(t) + e(t)φ(e(t))ṽ(t) + (1/2)[ṽ²(t) − ṽ²(t − T)].

A learning law is designed as v̂(t) = v̂(t − T) + e(t)φ(e(t)). Given that v(t) = v(t − T), it follows that ṽ(t) = ṽ(t − T) − e(t)φ(e(t)). Consequently,

V̇₃(t, e, ṽ) = −e²(t) − (1/2)e²(t)φ²(e(t)).

Therefore, lim_(t→∞) ∫₀ᵗ e²(s) ds ≤ V₃(0, e(0), ṽ₀).

Based on the above, the design ideas are very similar, as summarized in Table 6.1.
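The repetitive-control column of Table 6.1 can be reproduced in simulation. The sketch below discretizes the scalar error dynamics (6.5) with a forward-Euler step and implements the learning law v̂(t) = v̂(t − T) + e(t)φ(e(t)) through a delay buffer; the step size, the choice φ(e) ≡ 1, and the particular T-periodic disturbance are illustrative values of this sketch, not taken from the book.

```python
import numpy as np

dt, T, t_end = 0.001, 2.0, 20.0                 # step size, period, horizon (illustrative)
N = int(round(T / dt))                          # one period of samples
steps = int(round(t_end / dt))

phi = lambda e: 1.0                             # illustrative choice of phi(e)
v = lambda t: np.sin(2 * np.pi * t / T) + 0.5   # unknown T-periodic disturbance

e = 1.0                                         # initial tracking error
v_hat_buf = np.zeros(N)                         # v_hat over the last period (zero for t < 0)

for k in range(steps):
    t = k * dt
    v_hat = v_hat_buf[k % N] + e * phi(e)       # v_hat(t) = v_hat(t - T) + e*phi(e)
    v_hat_buf[k % N] = v_hat                    # this slot is read again one period later
    e += dt * (-e + phi(e) * (v(t) - v_hat))    # Euler step of (6.5)

print("tracking error after %.0f periods: %.2e" % (t_end / T, abs(e)))
```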

6.3 New Model of Periodic Signal A new model to describe periodic signals is proposed. To show the difference, the commonly used model of periodic signals is given first. Any v ∈ C P0 T ([0, ∞) , Rm ) can be generated by the model [1] x (t) = x (t − T ) v (t) = x (t) x (θ ) = v (T + θ ) , θ ∈ [−T, 0] ,

(6.6)

where t ≥ 0 and x (t) ∈ Rm is the state. Using the internal model principle (IMP) [8], the asymptotic rejection of a periodic disturbance is expected to be achieved by incorporating the above model (6.6), i.e., 1−e1−sT Im (the transfer function of (6.6)), into the closed-loop system, which is also the basic concept of RC [1]. A new model to describe v ∈ C P0 T ([0, ∞) , Rm ), which will help in the design of the FRC for nonlinear system (6.1), is given in Lemma 6.1. Lemma 6.1 If A0,ε > 0 and A1,ε < 1, then, for any v ∈ C P0 T ([0, ∞) , Rm ) , there exists a signal δ ∈ C P0 T ([0, ∞) , Rm ) such that

6.4 Filtered Repetitive Controller Design

105

A0,ε x˙ p (t) = −xp (t) + A1,ε xp (t − T ) + δ (t) v (t) = xp (t) + δ (t) xp (θ ) = v (T + θ ) , θ ∈ [−T, 0] ,

(6.7)

where A0,ε , A1,ε ∈ Rm×m . If A0,ε = 0 and A1,ε = Im , then any v ∈ C P0 T ([0, ∞) , Rm ) can be generated by (6.7) with δ (t) ≡ 0. 

Proof See Appendix. Without loss of generality, assume that   xp = sat xp .

(6.8)

The new model (6.7) is a time-delay system driven by an external signal δ(t), t ≥ 0. The time-delay system can be designed appropriately to achieve a trade-off between its stability and the bound on δ(t), t ∈ [0, ∞). Let A₀,ε = εIₘ and A₁,ε = (1 − ε)Iₘ. Next, the relation between δ(t) and ε is examined by the frequency response method. The Laplace transform of (6.7) gives

δ(s) = G(s)v(s),   G(s) = (εs + 1 − (1 − ε)e^(−sT)) / (εs + 2 − (1 − ε)e^(−sT)),

where δ(s) and v(s) are the Laplace transforms of δ(t) and v(t), respectively. The frequency responses of G(s) with T = 2π and ε = 0.01, 0.001 are shown in Fig. 6.1. As shown there, G(s) has a comb shape with notches matching the harmonic frequencies ω = 2πk/T, k = 0, 1, 2, . . ., of the periodic signal v(t), so the frequency response of G(s) is close to zero at these frequencies. As the parameter ε decreases, the periodic components, especially in the low-frequency band, are attenuated by G(s) more strongly. Given that the low-frequency band is dominant in most periodic signals, this results in a smaller upper bound on δ(t). In particular, δ(t) ≡ 0 when ε = 0.

6.4 Filtered Repetitive Controller Design 6.4.1 Controller Design Based on the new model (6.7), to compensate for the disturbance v (t) , the FRC is designed as follows: A0,ε v˙ˆ (t) = −ˆv (t) + A1,ε vˆ (t − T ) + ke RT (t) e (t)

(6.9)

106


Fig. 6.1 Frequency responses of G (s) . The upper plot shows ε = 0.01, the lower plot shows ε = 0.001
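The comb-like attenuation in Fig. 6.1 can be reproduced directly from the expression of G(s). The sketch below evaluates 20 log₁₀|G(jω)| at the harmonic frequencies and midway between them for the two values of ε used in the figure, with T = 2π as in the text; only the evaluation grid is a choice of this sketch.

```python
import numpy as np

T = 2.0 * np.pi

def G(s, eps):
    return (eps * s + 1.0 - (1.0 - eps) * np.exp(-s * T)) / \
           (eps * s + 2.0 - (1.0 - eps) * np.exp(-s * T))

k = np.arange(1, 6)
w_harm = 2.0 * np.pi * k / T                 # harmonic frequencies of a T-periodic signal
w_mid  = 2.0 * np.pi * (k + 0.5) / T         # frequencies between harmonics

for eps in (0.01, 0.001):
    at_harm = 20.0 * np.log10(np.abs(G(1j * w_harm, eps)))
    at_mid  = 20.0 * np.log10(np.abs(G(1j * w_mid, eps)))
    print(f"eps = {eps}:")
    print("  |G| at harmonics [dB]:      ", np.round(at_harm, 1))
    print("  |G| between harmonics [dB]: ", np.round(at_mid, 1))
```

As expected, the gain at the harmonics drops further as ε decreases, while the gain between harmonics stays close to 2/3 (about −3.5 dB).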

with vˆ (θ ) = 0, for θ ∈ [−T, 0] , where t > 0, A0,ε , A1,ε ∈ Rm×m , ke > 0, and R (t) is as in (6.3). The FRC (6.9) can be considered not only as an RC, but also as a “modified” RC as described in [1]. (1) In particular, the controller (6.9) with A0,ε = 0 and A1,ε = Im reduces to vˆ (t) = vˆ (t − T ) + ke RT (t) e (t) .

(6.10)

Therefore, the controller (6.9) with A0,ε = 0 is an RC. (2) The Laplace transforms of vˆ (t) and ke RT (t) e (t) are denoted by vˆ (s) and re (s), respectively. Then, the Laplace transform from re (s) to vˆ (s) is defined by −1  B (s) re (s) . vˆ (s) = Im − Q (s) e−sT

(6.11)

−1  A1,ε is called the Q-filter in [1] and B (s) = Here, Q (s) = sA0,ε + Im −1  sA0,ε + Im . Therefore, this controller is a “modified repetitive controller” as described in [1]. Subtracting Eqs. (6.9) from (6.7) yields

6.4 Filtered Repetitive Controller Design

107

A0,ε v˙˜ (t) = −˜v (t) + A1,ε v˜ (t − T ) − ke RT (t) e (t) + δ (t) ,

(6.12)

where v˜  xp − vˆ . In Eq. (6.12), the initial condition on v˜ is bounded. The exact value of the initial condition is not relevant as the following results hold globally. Combining the error dynamics (6.1) and Eq. (6.12) yields closed-loop error dynamics in the form E˙z (t) = fa (t, z (t)) + fd (z (t − T )) + ba (t, z (t)) σ (t) .

(6.13)

Here     δ v˜ m+n ,σ = z= ∈ Rm+n ∈R 0 e   −˜v (t) − ke RT (t) e (t) ∈ Rm+n fa (t, z (t)) = f (t, e (t)) + b (t, e (t)) v˜ (t)   A1,ε v˜ (t − T ) ∈ Rm+n fd (z (t − T )) = 0   A0,ε 0 ∈ R(m+n)×(m+n) E= 0 In   0 Im ∈ R(m+n)×2m . ba (t, z (t)) = 0 b (t, e (t)) The closed-loop error dynamics (6.13) is a retarded-type system (a type of timedelay system) if A0,ε is nonsingular; But it will become a neutral-type system (another type of time-delay system) [9, p. 38, 59] if A0,ε = 0.

6.4.2 Stability Analysis The closed-loop error dynamics (6.13) will be analyzed with the help of a Lyapunov functional using the results stated in Theorem 6.1. Theorem 6.1 . For the error dynamics (6.1), suppose Assumptions 6.1, 6.2 hold, and vˆ (t) is generated by the FRC (6.9), in which the parameters satisfy ke > 0, A0,ε = AT0,ε ≥ 0 and A1,ε = Im − αA0,ε . Then, (i) if A0,ε = 0, then lim e (t) = t→∞ 0; (ii) if A0,ε > 0, α ensures A1,ε < 1, and if V0 (t, e) in (6.2) further satisfies V0 (t, e) ≤ c1 e (t)2 with c1 > 0, then, for any value of z (0), z (t) is uniformly ultimately bounded. Proof For (6.13), a nonnegative function V (t, zt ) is selected as follows:  V (t, zt ) = 2ke V0 (t, e (t)) + v˜ T (t) A0,ε v˜ (t) +

t

t−T

v˜ T (s) v˜ (s) ds,

(6.14)

108

6 Filtered Repetitive Control with Nonlinear Systems …

where V0 satisfies Assumption 6.1 and A0,ε = AT0,ε ≥ 0. Under Assumptions 6.1, 6.2, taking the time derivative V (t, zt ) along (6.13) yields V˙ (t, zt ) ≤ −2ke eT (t) M (t) e (t) + v˜ T (t) Hv˜ (t) + g (t) + 2˜vT (t) δ (t) , (6.15) where H = −Im + AT1,ε A1,ε g (t) = −˜vT (t) AT1,ε A1,ε v˜ (t) + 2˜vT (t) AT1,ε v˜ (t − T ) − v˜ T (t − T ) v˜ (t − T ) . Using g (t) ≤ 0, inequality (6.15) becomes V˙ (t, zt ) ≤ −2ke eT (t) M (t) e (t) + v˜ T (t) Hv˜ (t) + 2 ˜v (t) δ (t) .

(6.16)

Based on the above results, conclusions (i) and (ii) will be proven in detail in the Appendix. 

6.5 Filtered Repetitive Controller Design with Saturation According to (6.9), an FRC with saturation is designed as   vˆ (t) = sat vˆ ∗ (t)

  A0,ε v˙ˆ ∗ (t) = −ˆv∗ (t) + A1,ε sat vˆ ∗ (t − T ) + ke RT (t) e (t)

(6.17)

with vˆ (θ ) = 0, for θ ∈ [−T, 0] , where t  0, A0,ε , A1,ε ∈ Rm×m , ke > 0, and R (t) is as in (6.3). For saturation, the following preliminary results are obtained. Subtracting (6.17) from (6.7) yields A0,ε v˙˜ ∗ (t) = −˜v∗ (t) + A1,ε v˜ (t − T ) − ke RT (t) e (t) + δ (t) ,

(6.18)

where v˜ ∗  xp − vˆ ∗ and v˜  xp − vˆ . Lemma 6.2 If A0,ε = 0 and A1,ε = Im , then v˜ T (t) v˜ (t) − v˜ T (t − T ) v˜ (t − T ) ≤ −2ke v˜ T (t) RT (t) e (t) . Proof See Appendix.



Theorem 6.2 . For the error dynamics (6.1), suppose that Assumptions 6.1–6.3 hold, and vˆ (t) is generated by the FRC (6.17) in which the parameters satisfy ke > that (i) if A0,ε = 0, then 0, A0,ε = AT0,ε ≥ 0 and A1,ε = Im − αA0,ε . It is claimed lim e (t) = 0; (ii) if A0,ε > 0, α ensures A1,ε < 1, and if V0 (t, e) in (6.2)

t→∞

6.5 Filtered Repetitive Controller Design with Saturation

109

further satisfies V0 (t, e) ≤ c1 e (t)2 and R (t) ≤ c2 with c1 , c2 > 0, then, for any value of e (0) , e (t) is uniformly ultimately bounded. Proof (i) First, consider A0,ε = 0 and A1,ε = Im . In this case, for the closed-loop error dynamics (6.1) and (6.17), a nonnegative function V (t, zt ) is chosen as follows:  V (t, zt ) = 2ke V0 (t, e (t)) +

t

v˜ T (s) v˜ (s) ds,

t−T

where V0 satisfies Assumptions 6.1. Under Assumptions 6.1–6.3, taking the time derivative V (t, zt ) along (6.1) and (6.17) yields V˙ (t, zt ) ≤ −2ke eT (t) M (t) e (t) + 2ke eT (t) R (t) v˜ (t)   + v˜ T (t) v˜ (t) − v˜ T (t − T ) v˜ (t − T ) . Using Lemma 6.2, V˙ (t, zt ) ≤ −2ke eT (t) M (t) e (t) + 2ke eT (t) R (t) v˜ (t) − 2ke v˜ T (t) RT (t) e (t) ≤ −2ke eT (t) M (t) e (t) . Then,

V˙ (t, zt ) ≤ −2ke λ M e (t)2 .

From the above inequality, V˙ (t, zt ) ≤ 0. Therefore, V (t, zt ) ≤ V (0, z0 ) . Further, 2ke λ M e (t)2 ≤ −V˙ (t, zt ) ⇒



t 0

e (s)2 ds ≤

1 V (0, z0 ) . 2ke λ M

Therefore, e ∈ L2 [0, ∞) ∩ L∞ [0, ∞) . Because e, v ∈ L2 [0, ∞) and vˆ ∈ L2 [0, ∞) by saturation, e˙ ∈ L2 [0, ∞) . Consequently, e (t) is uniformly continuous. Therefore, lim e (t) = 0 by Barbalat’s Lemma (Lemma 2.1). t→∞ (ii) Next, consider A0,ε > 0. Under Assumptions 6.1–6.3, taking the time derivative V0 (t, e (t)) along (6.1) yields V˙0 (t, e (t)) ≤ −eT (t) M (t) e (t) + eT (t) R (t) v˜ (t) .     √ Given that v˜ =sat xp −sat vˆ ∗ by (6.8), then ˜v (t) ≤ 2 mβ. Furthermore, given that R (t) ≤ c2 , it follows that

110

6 Filtered Repetitive Control with Nonlinear Systems …

√ V˙0 (t, e (t)) ≤ −eT (t) M (t) e (t) + 2 mβc2 e (t) √ ≤ −λ M e (t)2 + 2 mβc2 e (t) . Then,

√ 4 mβc2 λM 2 ˙ e (t) , ∀ e (t) ≥ V0 (t, e (t)) ≤ − . 2 λM

On the other hand, c0 e (t)2 ≤ V0 (t, e (t)) ≤ c1 e (t)2 . Using Theorem 2.5, system (6.1) is input-to-state stable with respect to v˜ . Moreover, e (t) is uniformly ultimately bounded (this can be observed from the proof of Theorem 2.5).

6.6 Application 1: Robotic Manipulator Tracking This example is from previous works [2, 3].

6.6.1 Problem Formulation The dynamics of an n-degree-of-freedom manipulator are described by the following differential equation: D (q (t)) q¨ (t) + C (q (t) , q˙ (t)) q˙ (t) + G (q (t)) = τ (t) + w (t) ,

(6.19)

where q (t) ∈ Rn denotes the vector of generalized displacements in robot coordinates, τ (t) ∈ Rn denotes the vector of generalized control input forces in robot coordinates; D (q (t)) ∈ Rn×n is the manipulator inertial matrix, C (q (t) , q˙ (t)) ∈ Rn×n is the vector of centripetal and Coriolis torques and G (q (t)) ∈ Rn is the vector of gravitational torques; w ∈ C P1 T ([0, ∞) , Rn ) is the T -periodic and persistent nonperiodic disturbances. It is assumed that only q (t) and q˙ (t) are available from measurements. The filtered tracking error is defined as e (t) = q˙˜ (t) + μq˜ (t) ,

(6.20)

where μ > 0, q˜ (t) = qd (t) − q (t) , and qd (t) is a desired trajectory. The following assumptions are required, which are common to robot manipulators.

6.6 Application 1: Robotic Manipulator Tracking

111

(A1) The inertial matrix D (q (t)) is symmetric and uniformly positive definite and bounded, i.e., 0 < λ D In ≤ D (q (t)) ≤ λ¯ D In < ∞, ∀q (t) ∈ Rn ,

(6.21)

where λ D , λ¯ D > 0. ˙ (q (t)) − 2C (q (t) , q˙ (t)) is skew-symmetric; hence, (A2) The matrix D   ˙ (q (t)) − 2C (q (t) , q˙ (t)) x = 0, ∀x ∈ Rn . xT D (A3) The linear-in-the-parameters property [10] is written as follows: ˙ q˙ e , q¨ e , t) p, (6.22) D (q (t)) q¨ e (t) + C (q (t) , q˙ (t)) q˙ e (t) + G (q (t)) = (q, q, where q˙ e (t) = q˙ d (t) + μq˜ (t), q¨ e (t) = q¨ d (t) + μq˙˜ (t) , p ∈ Rl is the vector of ˙ q˙ e , q¨ e , t) ∈ Rn×l is a known matrix, unknown constant parameters, and (q, q, denoted by (t) for brevity. For a given desired trajectory qd ∈ C P2 T ([0, ∞) , Rn ), our objective is to design a controller such that lim e (t) = 0. t→∞ From the definition of the filtered tracking error (6.20), both q˜ (t) and q˜˙ (t) can be viewed as outputs of a stable system with e (t) as an input, which means that q˜ (t) and q˙˜ (t) are bounded or approach zero if e (t) is bounded or approaches zero. Assumptions (A1), (A2) are very common to a robot manipulator; and (A3) illustrates the separation of the unknown parameters and the known functions, which is often used in the literature on adaptive control [10–13].

6.6.2 Controller Design Design τ (t) as follows: ˆ (t) , τ (t) = Pe (t) + (t) pˆ (t) + w

(6.23)

where P ∈ Rn×n is a positive-definite matrix, pˆ (t) ∈ Rl is the estimate of p in (A3), ˆ (t) ∈ Rn is the estimate of w (t) . By employing the property (6.22) and conand w troller (6.23), the filtered error dynamics are obtained as follows: D (q (t)) e˙ (t) + C (q (t) , q˙ (t)) e (t)     ˆ (t) . = −M (t) e (t) + (t) p − pˆ (t) + w (t) − w Furthermore, it follows that   D (q (t)) e˙ (t) + C (q (t) , q˙ (t)) e (t) = −Pe (t) + R (t) v (t) − vˆ (t) ,

(6.24)

112

6 Filtered Repetitive Control with Nonlinear Systems …

where T T     ˆ T (t) , vˆ (t) = pˆ T (t) w ˆ T (t) . R (t) = (t) In , v (t) = pT w Then, τ (t) in (6.23) is rewritten as τ (t) = Pe (t) + R (t) vˆ (t) .

(6.25)

  Here, the unknown signal v ∈ C P1 T [0, ∞) , Rn+l contains the periodic disturbance w ∈ C P1 T ([0, ∞) , Rn ) and the unknown parameter p ∈ Rl , and vˆ (t) ∈ Rn is an estimate of v (t), which is provided by the designed FRC. Furthermore, the above system can be written in the form of (6.1) with   f (t, e (t)) = −D−1 (q (t)) C (q (t) , q˙ (t)) + M (t) e (t)   b (t, e (t)) = D−1 (q (t)) (t) In     p pˆ (t) v (t) = , vˆ (t) = . ˆ (t) w (t) w

(6.26)

6.6.3 Assumption Verification The positive-definite function is defined as V0 (t, e (t)) =

1 T e (t) D (q (t)) e (t) . 2

Then, c0 e (t)2 ≤ V0 (t, e (t)) ≤ c1 e (t)2 , where c0 = 21 λ D and c1 = 21 λ¯ D are positive real numbers. Using the skew-symmetry ˙ (q (t)) − 2C (q (t) , q˙ (t)) , the time derivative of V0 (t, e) along (6.1) of matrix D with (6.26) is evaluated as (6.3) with R (t) = [ (t) In ]. Therefore, Assumption 6.1 is satisfied. Given that p is a vector of constant parameters  and τ d (t) is the vector of T -periodic disturbances, v (t) in (6.26) satisfies v ∈ C P0 T [0, ∞) , Rn+l . Therefore, Assumption 6.2 is satisfied.

6.6.4 Numerical Simulation In this chapter, a three-degree-of-freedom manipulator is considered, where ⎡

⎤ I1 + l3 q32 0 0 0 l2 + l3 0 ⎦ D (q) = ⎣ 0 0 l3

6.6 Application 1: Robotic Manipulator Tracking

113



⎤ l3 q3 q˙3 0 l3 q3 q˙1 0 0 0 ⎦ ˙ =⎣ C (q, q) −l3 q3 q˙1 0 0 ⎡ ⎤ 0 G = ⎣ (l2 + l3 ) g ⎦ 0

(6.27)

with parameters I1 = 0.8 kg/m2 , l2 = 2 m, l3 = 1 m, and g = 9.8 m2 /s The parameters I1 , l2 and l3 are assumed to be unknown for the controller design. The general expression for the desired position trajectories, with continuous bounded position, velocity, and acceleration, is given by [10]  ⎧  0 1 ⎪ ⎪ ⎨ cd t, 0, t1 , qd , qd 0 ≤ t < t1 qd (t) = qd1   t1 ≤ t ≤ t 2 ⎪ ⎪ ⎩ c t, t , T, q0 , q1 t2 < t ≤ T d 2 d d      (t − t0 )3 (t − t0 )4 (t − t0 )5  f 0 f 0 q − q0 . cd t, t0 , t f , q , q = q + 10  3 − 15  4 + 6  5 t f − t0 t f − t0 t f − t0

Simulation studies are conducted for cases with and without learning using the following sets of data: T = 3 s, t1 = 1 s, t2 = 2 s  T  T qd0 = 0 0.5 0.5 , qd1 = 1 1 1   2πt  ⎤ ⎡ 0.5 1 − cos  2πt 3 ⎦ v (t) = ⎣ 0.5  sin 3 t  − 1 0.5 cos 2πt 3  T q0 (0) = 0.3 0.4 0.6  T q˙ 0 (0) = 0 0 0 . The disturbance is assumed as   2πt  ⎤ 0.5 1 − cos  2πt 3 ⎦ w (t) = ⎣ 0.5  sin 2πt3 t  0.5 cos 3 − 1 ⎡

  which is unknown for the controller design except for τd ∈ C P0 T [0, ∞) ; R3 . According to the dynamics in [10], the following expressions are obtained: T  p = I1 l 2 l 3

114

6 Filtered Repetitive Control with Nonlinear Systems …

⎤ q32 q¨e,1 + q3 q˙3 q˙e,1 0 q ¨ ⎥ ⎢ e,1 +q3 q˙1 q˙e,3 ⎥,

=⎢ ⎦ ⎣ 0 q¨e,2 + g q¨e,2 + g 0 0 q¨e,3 − q3 q˙1 q˙e,1 ⎡

where qi and qe,i are the ith elements of qi and qe,i , respectively. In (6.25), select P = 10I3 and design vˆ (t) according to (6.9) as εv˙ˆ (t) = −ˆv (t) + (1 − 0.0001ε) vˆ (t − T ) + RT (t) e (t) For tracking performance comparison, the performance index is introduced as Jk =

sup

q˜ (t) ,

t∈[(k−1)T,kT ]

where k = 1, 2, . . . .

6.6.4.1

Without Input Delay

If ε = 0, then the above parameters satisfy the conditions of Theorem 6.1. As shown in Fig. 6.2, the performance index Jk approaches 0 as k increases, which implies that q˜ (t) approaches 0 as t → ∞. This result is consistent with conclusion (i) in Theorem 6.1. If ε = 2, 1, 0.5, the chosen parameters satisfy the conditions of Theorem 6.1. As shown in Fig. 6.2, the performance index Jk is bounded. This result is consistent with conclusion (ii) in Theorem 6.1. As shown in Fig. 6.2, the tracking performance improves as ε decreases. This is consistent with the conclusion for linear systems that, recalling (6.11), as the increases, i.e., ε decreases, the tracking performance bandwidth of Q (s) = 1−2ε εs+1 improves, and vice versa.

6.6.4.2

With Input Delay

Consider the robotic manipulator with the following input delay as follows: D (q (t)) q¨ (t) + C (q (t) , q˙ (t)) q˙ (t) + G (q (t)) + τ d (t) = τ (t − dε ) , where dε > 0 is the input delay; the controllers are designed as above. When RC is adopted (ε = 0) and dε = 0.001 s, it is observed from Fig. 6.3 that a state of the closed-loop system diverges at approximately 30 s, which confirms that the stability of the RC systems is insufficiently robust. Using the FRC with ε = 0.5 and prolonging the input delay from dε = 0.001 s to dε = 0.01 s, it is observed from Fig. 6.3 that the closed-loop system can still track the desired trajectory. This shows that the FRC system is more robust than its corresponding RC system. From the simulations, the

6.7 Application 2: Quadcopter Attitude Control

115

Fig. 6.2 Tracking performance with different values of ε

FRC provides the flexibility to select the filter parameters to achieve a trade-off between tracking performance and stability. More importantly, FRC is shown to be effective because input delay is common in real-world applications.

6.7 Application 2: Quadcopter Attitude Control The proposed method is further applied to the attitude control of a quadcopter [4] subject to disturbances. This example is from a previous work [5].

6.7.1 Problem Formulation The unit quaternion is a vector denoted by (q0 q) , where q0 (t) ∈ R, q (t) ∈ R3 are the scalar and vector parts of the unit quaternion, respectively, and q02 (t) + q (t)2 = 1. The unit quaternion, which is free of singularity, is used to represent the attitude kinematics of a quadcopter as follows [14]:

116

6 Filtered Repetitive Control with Nonlinear Systems …

Fig. 6.3 Time responses of qd,3 (t) and q3 (t) with input delay. The solid lines represent response, and dotted lines represent desired trajectory

1 (q (t) × ω (t) + q0 (t) ω (t)) 2 1 q˙0 (t) = − qT (t) ω (t) , 2 q˙ (t) =

(6.28) (6.29)

where ω (t) ∈ R3 denotes the angular velocity of the airframe in the body-fixed frame; × is the cross product. For simplicity, the dynamic equation of attitude motion is assumed as (6.30) ω˙ (t) = −J−1 ω (t) × Jω (t) + J−1 τ (t) + v (t) , where J ∈ R3×3 is the inertial matrix, τ (t) ∈ R3 is the control torque, and v (t) ∈ R3 is the disturbance vector. By the coordinate transformation x = ω + 2q, systems (6.28)–(6.30) can be converted as follows: 1 (q (t) × x (t) + q0 (t) x (t)) − q0 (t) q (t) 2 1 q˙0 (t) = − qT (t) x (t) + qT (t) q (t) 2 q˙ (t) =

(6.31)

6.7 Application 2: Quadcopter Attitude Control

117

x˙ (t) = −J−1 ω (t) × Jω (t) + J−1 τ (t) + (q (t) × x (t) + q0 (t) x (t) − 2q0 (t) q (t)) + v (t) .   Design τ = J J−1 ω × Jω − (q × x + q0 x) + 2q0 q − 2x − vˆ . Then 1 (q (t) × x (t) + q0 (t) x (t)) − q0 (t) q (t) 2 1 q˙0 (t) = − qT (t) x (t) + qT (t) q (t) 2 x˙ (t) = −2x (t) + v (t) − vˆ (t) . q˙ (t) =

(6.32)

The above system can be rewritten in the form of (6.1) with T  e = qT x T  1 (q (t) × x (t) + q0 (t) x (t)) − q0 (t) q (t) 2 f (t, e (t)) = −2x (t)   0 . b= I3 Here, q0 (t) is generated by q˙0 (t) = − 21 qT (t) x (t) + qT (t) q (t).

6.7.2 Assumption Verification For the system e˙ (t) = f (t, e (t)), the Lyapunov function is selected as follows: V0 (t, e (t)) = (1 − q0 (t))2 + qT (t) q (t) + xT (t) x (t) .   The derivative of V0 (t) along e˙ (t) = f (t, e (t)) + b v (t) − vˆ (t) is   ∂ V0 ∂ V0 + f (t, e (t)) + 2xT (t) v (t) − vˆ (t) V˙0 (t) = ∂t ∂e   = qT (t) (−2q (t) + x (t)) − 4xT (t) x (t) + 2xT (t) v (t) − vˆ (t)   ≤ − e (t)2 + 2xT (t) v (t) − vˆ (t) . Let qT (t) q (t) = sin2

θ 2

(6.33)

and q0 (t) = cos θ2 , where 0 ≤ θ ≤ π [15, pp. 198]. Then

(1 − q0 (t))2 + qT (t) q (t) = 2 − 2q0 (t) θ = 2 − 2 cos 2 θ = 4 sin2 . 4

118

6 Filtered Repetitive Control with Nonlinear Systems …

Given that sin2 θ4 ≤ sin2 4 q (t)2 . Consequently,

θ 2

when 0 ≤ θ ≤ π, (1 − q0 (t))2 + qT (t) q (t) ≤

1 e (t)2 ≤ V0 (t, e (t)) ≤ 4 e (t)2 . 2

(6.34)

From (6.34) and (6.33), Assumption 6.1 is satisfied.

6.7.3 Controller Design and Numerical Simulation The simulation parameters are selected as follows: the inertia matrix J of a quadcopter is similar to that in [14], namely, J =diag(0.16, 0.16, 0.32) kg·m2 . The initial conditions of (6.28)–(6.30) are q0 (0) = 0.707, q (0) = [−0.4 −0.3 0.5]T and ω (0) = [0 0 0]T rad/s. The external signal v (t) = [sin (t) sin (t + 1) cos (t) sin2 (t + 1)]T N·m is assumed to be the vector of T -periodic disturbances (T = 2π ). Assumption 6.2 is satisfied. The FRC is designed as εv˙ˆ (t) = −ˆv (t) + (1 − ε) vˆ (t − T ) + 2x (t) . Under Assumptions 6.1, 6.2, Theorem 6.1 implies that e (t) → 0 as t → ∞ if ε = 0. If ε > 0, the tracking error e (t) is uniformly ultimately bounded. Define E (t)  e (t) . As shown in Fig. 6.4, E (t) approaches 0 as t increases with ε = 0 and E (t) is bounded with ε = 0.1 or 0.05. These results are consistent with conclusions (i), (ii) in Theorem 6.1.

6.8 Summary The adaptive-control-like method relies on the model of the parameters required for estimation. Therefore, a new model of periodic signals was established, which is a key step in FRC design. By incorporating this model in the controller, approximate tracking can be achieved. The resulting controller is an FRC. Compared with existing RCs, the FRC provides flexibility to select different filter parameters, such as A0,ε , to satisfy the performance requirements and tolerate some small uncertainties, such as input delay. If a periodic disturbance is present, the proposed controller with A0,ε = 0 causes the tracking error to approach zero as t → ∞. To demonstrate its effectiveness, the proposed method is applied to periodic disturbance rejection in a class of robotic manipulators and attitude control of a quadcopter. Simulation data show that a trade-off between tracking performance and stability can be achieved by tuning the filter parameters. Furthermore, the simulation data show that FRC can deal with a small input delay while the corresponding RC cannot, which demonstrates the effectiveness of FRC.

6.9 Appendix

119

Fig. 6.4 Evolution of E (t) with different values of ε under T -periodic disturbances

6.9 Appendix 6.9.1 Proof of Lemma 6.1 Let (6.6) be the original system. To apply additive state decomposition (see Sect. 2.2.3), select the primary system as follows: A0,ε x˙ p (t) = −xp (t) + A1,ε xp (t − T ) + δ (t) xp (θ ) = v (T + θ ) , θ ∈ [−T, 0] ,

(6.35)

where A0,ε , A1,ε ∈ Rn×n , δ (t) ∈ Rn . From the original system (6.6) and the primary system (6.35), the secondary system is determined by additive state decomposition as follows:   −A0,ε x˙ p (t) = −xs (t) + xs (t − T ) + I − A1,ε xp (t − T ) − δ (t) xs (θ ) = 0, θ ∈ [−T, 0] .

(6.36)

By additive decomposition, x (t) = xp (t) + xs (t) . Let δ (t) ≡ xs (t − T ). Then

(6.37)

120

6 Filtered Repetitive Control with Nonlinear Systems …

v (t) = xp (t) + δ (t + T ) . Furthermore, using (6.35), model (6.6) can be written as follows: A0,ε x˙ p (t) = −xp (t) + A1,ε xp (t − T ) + δ (t) v (t) = xp (t) + δ (t + T ) xp (θ ) = v (T + θ ) , θ ∈ [−T, 0] .

(6.38)

In the following sections, the bound on δ (t) will be discussed, t ∈ [0, ∞) . From (6.36) and (6.37),   A0,ε x˙ p (t) = −xp (t) − I − A1,ε xp (t − T ) + v (t)

(6.39)

  If A0,ε > 0 and A1,ε < 1, then A0,ε x˙ p (t) = −xp (t) − In − A1,ε xp (t − T ) is exponentially stable [16]. Consequently, there exists a bounded and T -periodic function xp (t) that satisfies (6.39) [17]. Note that δ (t) ≡ v (t − T ) − xp (t − T ) is also a bounded and T -periodic function. Consequently, (6.38) can be rewritten in the form of (6.7). If A0,ε = 0 and A1,ε = Im , then (6.6) can be rewritten as (6.7) with δ (t) ≡ 0.

6.9.2 Detailed Proof of Theorem 6.1 (i) If A0,ε = 0, then δ (t) ≡ 0 by Lemma 6.1. Thus, (6.16) becomes V˙ (t, zt ) ≤ −2ke eT (t) M (t) e (t) . Then,

(6.40)

V˙ (t, zt ) ≤ −2c3 e (t)2 .

From the above inequality, V (t, zt ) ≤ V (0, z0 ) . Further,



t

e (s)2 ds ≤

0

(6.41)

1 V (0, z0 ) . 2c3

Therefore, e ∈ L∞ [0, ∞) ∩ L2 [0, ∞). Next, it is proved that e (t) is uniformly continuous. If this is true, then lim e (t) = 0 by Barbalat’s Lemma. Given that e ∈ t→∞

L∞ [0, ∞) , e (t) ∈ K ⊂ Rm for a compact set K . By Assumption 6.1, there exists b f , bb > 0 such that f (t, e (t)) ≤ b f , b (t, e (t)) for all (t, e (t)) ∈ [0, ∞) × K . Let t1 and t2 be any real numbers such that 0 < t2 − t1 ≤ h. Then,

6.9 Appendix

121

 t2 e (t2 ) − e (t1 ) = ˙ e ds (s) t1  t2 = (f (s, e (s)) + b (s, e (s)) v˜ (s)) ds t1



t2

≤ b f (t2 − t1 ) + bb

˜v (s) ds.

(6.42)

t1

Note that, by (6.41), V (t, zt ) is bounded for all t > 0 with respect to the bound bV > 0. Hence,  t

˜v (s)2 ds ≤ bV

t−T

for all t > 0. Thus,



t2

˜v (s)2 ds ≤ N bV ,

(6.43)

t1

where N = (t2 − t1 ) /T  + 1 and (t2 − t1 ) /T  represents the floor integer of (t2 − t1 ) /T . Using the Cauchy–Schwarz inequality, 

t2



t2

˜v (s) ds ≤

t1

 21 

2

t2

1 ds t1

˜v (s) ds 2

 21

.

t1

Consequently, ‖e (t2) − e (t1)‖ in (6.42) is further bounded by

‖e (t2) − e (t1)‖ ≤ bf h + h1 √h,

where (6.43) is utilized and h1 = √(N bV) bb. Therefore, for any ε > 0, there exists

h = ((−h1 + √(h1² + bf ε))/(2bf))² > 0

such that ‖e (t2) − e (t1)‖ < ε for any 0 < t2 − t1 < h. This implies that e (t) is uniformly continuous.

(ii) By considering the form of V (t, zt) in (6.14), the inequality

γ1 ‖z (t)‖² ≤ V (t, zt) ≤ γ2 ‖z (t)‖² + ∫_{t−T}^{t} ‖z (s)‖² ds   (6.44)

is satisfied with γ1 = min{2ke c1, λmin(A0,ε)} and γ2 = max{2ke c2, λmax(A0,ε)}. If A0,ε > 0, then δ ∈ L∞[0, ∞) by Lemma 6.1. Then, inequality (6.16) becomes

V̇ (t, zt) ≤ −2κ1 ‖z (t)‖² + 2κ2 ‖z (t)‖,

where κ1 = min{λM ke, ½ λmin(A0,ε)} and κ2 = ‖δ‖∞. Given that κ1 > 0, the following inequality holds: −κ1 ‖z (t)‖² + 2κ2 ‖z (t)‖ − κ1⁻¹ κ2² ≤ 0. Then,

V̇ (t, zt) ≤ −γ3 ‖z (t)‖² + σ   (6.45)

is satisfied with γ3 = κ1 and σ = κ1⁻¹ κ2². Then, for any value of z0, the solutions of Eq. (6.13) are uniformly ultimately bounded according to Corollary 2.1. This concludes the proof by noting that ‖e (t)‖ ≤ ‖z (t)‖ based on the definition of z (t).

6.9.3 Proof of Lemma 6.2

Before proving Lemma 6.2, Lemma 6.3 is first introduced.

Lemma 6.3 Given vectors a, b ∈ Rm, suppose that a = sat(a). Then

(a − sat (b))T (b − sat (b)) ≤ 0.   (6.46)

Proof Let a = [a1 · · · am]T and b = [b1 · · · bm]T. Then,

(a − sat (b))T (b − sat (b)) = Σ_{i=1}^{m} (ai − satβi (bi)) (bi − satβi (bi)).   (6.47)

In the following, the term (ai − satβi (bi)) (bi − satβi (bi)) is considered. It is claimed that

(ai − satβi (bi)) (bi − satβi (bi)) ≤ 0.   (6.48)

With it, inequality (6.46) holds according to (6.47). Next, inequality (6.48) is proved. For the case |bi| ≤ βi, equality in (6.48) holds because bi − satβi (bi) = 0. There is a strict inequality when |bi| > βi. For the case bi < −βi, it follows that

(ai − satβi (bi)) (bi − satβi (bi)) = (ai + βi) (bi + βi).

Given that ai + βi > 0 according to a = sat(a) and bi + βi < 0, it follows that (ai − satβi (bi)) (bi − satβi (bi)) < 0. For the case bi > βi, it follows that

(ai − satβi (bi)) (bi − satβi (bi)) = (ai − βi) (bi − βi).

Given that ai − βi < 0 according to a = sat(a) and bi − βi > 0, it follows that (ai − satβi (bi)) (bi − satβi (bi)) < 0. Therefore, (6.48) holds regardless of bi. ∎

With Lemma 6.3, the proof of Lemma 6.2 is now possible. For simplicity, denote ṽ (t − T) by ṽ−T (t). If A0,ε = 0 and A1,ε = Im, then xp = v and δ = 0. By (6.17),

0 = −v̂* (t) + v̂ (t − T) + ke RT (t) e (t).

On the other hand, 0 = xp (t) − xp (t − T). Then,

ṽ−T = ṽ* + ke RT e = ṽ + (v̂ − v̂* + ke RT e).

Consequently,

ṽ−TT ṽ−T = ṽT ṽ + (v̂ − v̂* + ke RT e)T (v̂ − v̂* + ke RT e) + 2ṽT (v̂ − v̂* + ke RT e)
= ṽT ṽ + (v̂ − v̂* + ke RT e)T (v̂ − v̂* + ke RT e) − 2 (v − sat (v̂*))T (v̂* − sat (v̂*)) + 2ke ṽT RT e.

Furthermore, given that (v − sat (v̂*))T (v̂* − sat (v̂*)) ≤ 0 by Lemma 6.3, it follows that

ṽT ṽ − ṽ−TT ṽ−T ≤ −2ke ṽT RT e. ∎
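The inequality (6.46) is easy to check numerically. The following short Python sketch (illustrative, not part of the book; the saturation levels and sample counts are assumptions) draws random vectors a inside the saturation bounds, so that a = sat(a) holds, and arbitrary vectors b, and confirms that the quantity in Lemma 6.3 never becomes positive.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
beta = np.array([1.0, 0.5, 2.0, 1.5])   # element-wise saturation levels (illustrative)

def sat(x):
    # Element-wise saturation sat_beta_i(x_i) used in Lemma 6.3
    return np.clip(x, -beta, beta)

worst = -np.inf
for _ in range(100000):
    a = rng.uniform(-beta, beta)         # a = sat(a) holds by construction
    b = rng.normal(scale=5.0, size=m)    # arbitrary b
    val = (a - sat(b)) @ (b - sat(b))
    worst = max(worst, val)

print("largest observed value of (a - sat(b))^T (b - sat(b)):", worst)  # expected <= 0
```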

References

1. Hara, S., Yamamoto, Y., Omata, T., & Nakano, M. (1988). Repetitive control system: A new type servo system for periodic exogenous signals. IEEE Transactions on Automatic Control, 33(7), 659–668.
2. Quan, Q., & Cai, K.-Y. (2011). A filtered repetitive controller for a class of nonlinear systems. IEEE Transactions on Automatic Control, 56(2), 399–405.
3. Quan, Q., & Cai, K.-Y. (2011). Filtered repetitive control of robot manipulators. International Journal of Innovative Computing, Information and Control, 7(5(A)), 2405–2415.
4. Quan, Q. (2017). Introduction to multicopter design and control. Singapore: Springer.
5. Quan, Q., & Cai, K.-Y. (2013). Internal-model-based control to reject an external signal generated by a class of infinite-dimensional systems. International Journal of Adaptive Control and Signal Processing, 27(5), 400–412.
6. Messner, W., Horowitz, R., Kao, W.-W., & Boals, M. (1991). A new adaptive learning rule. IEEE Transactions on Automatic Control, 36(2), 188–197.
7. Dixon, W. E., Zergeroglu, E., Dawson, D. M., & Costic, B. T. (2002). Repetitive learning control: A Lyapunov-based approach. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 32(4), 538–545.
8. Francis, B. A., & Wonham, W. M. (1976). The internal model principle of control theory. Automatica, 12(5), 457–465.
9. Hale, J. K., & Lunel, S. M. V. (1993). Introduction to functional differential equations. New York: Springer.
10. Sun, M., Ge, S. S., & Mareels, I. M. Y. (2006). Adaptive repetitive learning control of robotic manipulators without the requirement for initial repositioning. IEEE Transactions on Robotics, 22(3), 563–568.
11. Ge, S. S., Lee, T. H., & Harris, C. J. (1998). Adaptive neural network control of robotic manipulators. London, U.K.: World Scientific.
12. Tayebi, A. (2004). Adaptive iterative learning control for robot manipulators. Automatica, 40(7), 1195–1203.
13. Lewis, F. L., Dawson, D. M., & Abdallah, C. T. (2004). Robot manipulator control: Theory and practice. New York: Marcel Dekker.
14. Zhang, R., Quan, Q., & Cai, K.-Y. (2011). Attitude control of a quadrotor aircraft subject to a class of time-varying disturbances. IET Control Theory and Applications, 5(9), 1140–1146.
15. Isidori, A., Marconi, L., & Serrani, A. (2003). Robust autonomous guidance: An internal model-based approach. London: Springer.
16. Dugard, L., & Verriest, E. I. (1998). Stability and control of time-delay systems. London: Springer.
17. Burton, T. A., & Zhang, S. (1986). Unified boundedness, periodicity, and stability in ordinary and functional differential equations. Annali di Matematica Pura ed Applicata, 145(1), 129–158.

Chapter 7
Continuous-Time Filtered Repetitive Control with Nonlinear Systems: An Additive-State-Decomposition Method

Error dynamics are often required for nonlinear systems when an adaptive-control-like method is used. However, this requirement restricts the applications of repetitive control (RC; the abbreviation is also used for repetitive controller) and fails to emphasize the special features and importance of RC. Moreover, it is difficult and computationally expensive to derive error dynamics for nonminimum-phase nonlinear systems, especially when the internal dynamics are subject to an unknown disturbance. Therefore, there are very few RC works on nonminimum-phase nonlinear systems. In this chapter, a new method is introduced for a class of nonminimum-phase nonlinear systems, in parallel to the previously proposed methods such as the feedback linearization method and the adaptive-control-like method. The novel RC design is based on the additive-state-decomposition-based tracking control framework [1]. This chapter aims to answer the following question:

How is the repetitive controller designed to tackle the tracking problem for a class of nonlinear systems using the additive-state-decomposition-based tracking control framework?

To answer this question, the problem formulation, an introduction to additive state decomposition, and the additive-state-decomposition-based RC framework are presented. The application of this controller to the TORA benchmark problem is reported. Most contents in this chapter are based on [2, 3].

7.1 Problem Formulation

Consider the following class of uncertain nonlinear systems similar to those in [4–7]:

ẋ (t) = Ax (t) + Bu (t) + φ (y (t), ẏ (t)) + d (t), x (t0) = x0
y (t) = CT x (t),   (7.1)

where A ∈ Rn×n is a known stable constant matrix, B, C ∈ Rn×m are known constant matrices, φ: Rm × Rm → Rn is a known nonlinear function vector, x (t) ∈ Rn is the state vector, y (t) ∈ Rm is the output, u (t) ∈ Rm is the control, and d (t) ∈ Rn is an unknown bounded T-periodic disturbance. It is assumed that only y (t), ẏ (t) ∈ Rm are available through measurements. The reference r (t) ∈ Rm is a known and sufficiently smooth T-periodic signal, t ≥ 0. In the following, for convenience, the initial time is set as t0 = 0 and the variable t is omitted except when necessary. For system (7.1), the following assumption is made:

Assumption 7.1 The pair (A, CT) is observable.

The objective here is to design a controller u such that y (t) − r (t) → 0 as t → ∞ or, with good tracking accuracy, ‖y − r‖ is ultimately bounded by a small value.

Remark 7.1 Given that the pair (A, CT) is observable by Assumption 7.1, there always exists a matrix L ∈ Rn×m such that A + LCT is stable, whose eigenvalues can be assigned freely. Consequently, (7.1) can be rewritten as ẋ = (A + LCT) x + Bu + (φ (y, ẏ) − Ly) + d. Therefore, we assume A to be stable without loss of generality.

In the following, the nonminimum-phase/minimum-phase property of the nonlinear system (7.1) is discussed.

Proposition 7.1 System (7.1) without external signals is nonminimum (minimum) phase if and only if the following linear system is nonminimum (minimum) phase:

ẋ (t) = Ax (t) + Bu (t)
y (t) = CT x (t).   (7.2)

Proof According to [8, Theorem 13.1, p. 516], for (7.2), there exists a diffeomorphism

z = [η; ξ] = T (x) = [T1 (x); T2 (x)]

and maps γ: Rn → Rm×m, α: Rn → Rm such that

η̇ = f0 (η, ξ)
ξ̇ = Ac ξ + Bc γ (x) (u − α (x))
y = CcT ξ,

where ξ ∈ Rρ, η ∈ Rn−ρ, and (Ac, Bc, Cc) is a canonical form representation of a chain of ρ ∈ N integrators, with

f0 (η, ξ) = [∂T1 (x)/∂x] (Ax + Bu) |x=T⁻¹(z)
Ac ξ + Bc γ (x) (u − α (x)) = [∂T2 (x)/∂x] (Ax + Bu) |x=T⁻¹(z).

System (7.2) is said to be minimum phase if the zero dynamics η̇ = f0 (η, 0) has an asymptotically stable equilibrium point. Otherwise, it is nonminimum phase. Using the same transformation T (x) for system (7.1) without external signals,

η̇ = f0 (η, ξ) + [∂T1 (x)/∂x] φ (CT x, CT ẋ) |x=T⁻¹(z)
ξ̇ = Ac ξ + Bc γ (x) (u − α (x)) + [∂T2 (x)/∂x] φ (CT x, CT ẋ) |x=T⁻¹(z)
y = CcT ξ,

whose zero dynamics is η̇ = f0 (η, 0) because

[∂T1 (x)/∂x] φ (CT x, CT ẋ) |x=T⁻¹(z), ξ=0 = [∂T1 (x)/∂x] φ (0, 0) |x=T⁻¹(z), ξ=0 = 0.

Based on the above analysis, the systems (7.2) and (7.1) without external signals have the same zero dynamics. According to this definition, system (7.1) without external signals is nonminimum (minimum) phase if and only if the linear system (7.2) is nonminimum (minimum) phase. ∎

Remark 7.2 If the considered nonlinear system (7.1) is a single-input single-output (SISO) nonminimum-phase system, the transfer function from u to y is

CT (sIn − A)⁻¹ B = N (s)/D (s).

Based on Proposition 7.1, system (7.1) without external signals is nonminimum (minimum) phase if N (s) has zeros in the right-half s-plane (open left-half s-plane).

Example 7.1 Consider the following simple nonminimum-phase nonlinear system:

ẋ1 = 5x1 − 10x2 + sin x2 + d1
ẋ2 = 4x1 − 8x2 + u
y = x2,   (7.3)

where u (t) ∈ R is the control signal and d1 (t) ∈ R is an unknown T-periodic signal. The output y (t) is measurable. The objective is to design a controller u such that the output y (t) − r (t) → 0 as t → ∞, while keeping the other states bounded; here, r (t) is a T-periodic reference, and its derivative is bounded. Given that the internal dynamics of (7.3) is ẋ1 = 5x1, which is unstable, the considered problem is an RC problem for a nonminimum-phase nonlinear system. The system (7.3) can be rewritten as follows:

[ẋ1; ẋ2] = [5 −10; 4 −8] [x1; x2] + [0; 1] u + [sin x2; 0] + [d1; 0]
y = [0 1] x, x (0) = x0,   (7.4)

that is, ẋ = A0 x + Bu + φ0 (y) + d and y = CT x, where A0 is unstable. Given that the pair (A0, CT) is observable, the system (7.4) can be further written as (7.1) with

A = [5 −12; 4 −9], φ (y, ẏ) = [sin y + 2y; y],

where the eigenvalues of A are −1, −3. Therefore, A is stable.
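The rewriting above can be checked with a few lines of code. The following Python sketch (illustrative; the injection gain L = [−2, −1]T is inferred from the displayed A and is an assumption of this illustration) verifies that the output injection A = A0 + LCT turns the non-Hurwitz A0 into the stable A used in (7.1).

```python
import numpy as np

# Example 7.1: move the non-Hurwitz A0 to a stable A by output injection
# A = A0 + L C^T, so that (7.4) matches the form (7.1) with a stable A.
A0 = np.array([[5.0, -10.0],
               [4.0,  -8.0]])
C = np.array([[0.0], [1.0]])       # y = C^T x = x2
L = np.array([[-2.0], [-1.0]])     # injection gain inferred from A in the text

A = A0 + L @ C.T
print("eigenvalues of A0:", np.linalg.eigvals(A0))  # not Hurwitz (one eigenvalue at the origin)
print("A = A0 + L C^T =\n", A)
print("eigenvalues of A :", np.linalg.eigvals(A))   # expected: -1 and -3

# The injected term -L y is absorbed into the nonlinearity:
#   phi(y, ydot) = phi0(y) - L y = [sin(y) + 2y, y]^T
```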

7.2 Additive-State-Decomposition-Based Repetitive Control Framework

Based on the additive state decomposition (see Sect. 2.2.3), the considered system (7.1) is decomposed into two systems: a linear time-invariant (LTI) system (7.5) including all external signals, taken as the primary system, together with the secondary system (7.6), whose equilibrium point is zero. Given that the output of the primary system and the state of the secondary system can be observed, the original RC problem for system (7.1) can be correspondingly decomposed into two problems: an output-feedback RC problem for the LTI primary system and a state-feedback stabilization problem for the remaining secondary system. Because the tracking task is assigned only to the LTI system, it is easier than that for the nonlinear system (7.1).

7.2.1 Decomposition

Consider system (7.1) as the original system. Based on the principle mentioned above, the primary system is selected as follows:

ẋp = Axp + Bup + φ (r, ṙ) + d
yp = CT xp, xp (0) = x0.   (7.5)

Then, the secondary system is determined by the original system (7.1) and primary system (7.5) using rule (2.18) such that

ẋs = Axs + Bus + φ (CT xp + CT xs, CT ẋp + CT ẋs) − φ (r, ṙ)
ys = CT xs, xs (0) = 0,   (7.6)

where us = u − up. According to (2.19),

x = xp + xs and y = yp + ys.   (7.7)

The secondary system (7.6) can be further rewritten as

ẋs = Axs + Bus + φ (r + CT xs − ep, ṙ + CT ẋs − ėp) − φ (r, ṙ)
ys = CT xs, xs (0) = 0,   (7.8)

where ep ≜ r − yp. If ep (t) ≡ 0, then (xs, us) = 0 is an equilibrium point of (7.8). The controller design for the decomposed systems (7.5), (7.6) will use their outputs or states as feedback, which are unknown. For this purpose, an observer is proposed to estimate yp and xs.

Theorem 7.1 Suppose that an observer is designed to estimate yp and xs in (7.5), (7.6) as follows:

ŷp = y − CT x̂s   (7.9)
x̂̇s = Ax̂s + Bus + φ (y, ẏ) − φ (r, ṙ), x̂s (0) = 0.   (7.10)

Then, ŷp (t) ≡ yp (t) and x̂s (t) ≡ xs (t), t ≥ 0.

Proof Subtracting (7.10) from (7.6) results in x̃̇s = Ax̃s, x̃s (0) = 0,¹ where x̃s = xs − x̂s. Then, x̃s (t) ≡ 0, t ≥ 0. This implies that x̂s (t) ≡ xs (t), t ≥ 0. Consequently, using (7.7), ŷp (t) = y (t) − CT x̂s (t) ≡ yp (t). ∎

¹ Given that the initial values xs (0), x̂s (0) are both assigned by the designer, they are both determinate.

The measurement y may be inaccurate in practice. In this case, it is expected that small uncertainties eventually result in x̂s being close to xs. Accordingly, the matrix A is required to be stable in the relationship x̃̇s = Ax̃s in the above proof. This requirement can be easily satisfied (see Remark 7.1). It is clear from (7.5)–(7.8) that if the controller up drives yp (t) − r (t) → 0 and the controller us drives ys (t) → 0 as t → ∞, then y (t) − r (t) → 0 as t → ∞. The strategy here is to assign the tracking task to the primary system (7.5) and the stabilization task to the secondary system (7.8). Given that the system (7.5) is a classical LTI system, standard design methods in either the frequency or time domain can be used to handle the output-feedback tracking problem, which is easier than dealing with (7.1) directly. If yp = CT xp = r, then zero is an equilibrium point of the secondary system (7.8). Given that the state of (7.8) can be obtained by (7.10), the design is also easier than that for a nonminimum-phase system. Notice that xs is a virtual state, not the true state x. Accordingly, additive state decomposition offers a way to simplify the original control problem.

Example 7.2 (Example 7.1 Continued) According to (7.5), the primary system is

[ẋ1,p; ẋ2,p] = [5 −12; 4 −9] [x1,p; x2,p] + [0; 1] up + [sin r + 2r + d1; r]
yp = [0 1] [x1,p; x2,p], [x1,p (0); x2,p (0)] = x0,   (7.11)

which is an LTI system. The objective of (7.11) is to drive yp (t) − r (t) → 0 as t → ∞. Then, the secondary system is determined using the original system (7.3) and the primary system (7.11) such that

[ẋ1,s; ẋ2,s] = [5 −12; 4 −9] [x1,s; x2,s] + [0; 1] us + [sin y + 2y; y] − [sin r + 2r; r]
ys = [0 1] [x1,s; x2,s], [x1,s (0); x2,s (0)] = 0,   (7.12)

where y = yp + ys. System (7.12) can be rewritten as

ẋ1,s = 5x1,s − 10x2,s + sin (x2,s + r) − sin r + ga,1
ẋ2,s = 4x1,s − 8x2,s + us + ga,2
ys = x2,s,   (7.13)

where ga,1 = 2 (yp − r) + sin (x2,s + yp) − sin (x2,s + r) and ga,2 = yp − r. From the definitions of ga,1 and ga,2, it is clear that ga,1 → 0 and ga,2 → 0 as yp − r → 0. Based on Theorem 7.1, the observer is designed to estimate yp and xs in (7.11), (7.12) as follows:

ŷp = y − CT x̂s
[x̂̇1,s; x̂̇2,s] = [5 −12; 4 −9] [x̂1,s; x̂2,s] + [0; 1] us + [sin y + 2y; y] − [sin r + 2r; r]
x̂s (0) = 0.   (7.14)

Then ŷp (t) ≡ yp (t) and x̂s (t) ≡ xs (t), t ≥ 0.

[Fig. 7.1 Additive-state-decomposition flow]
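The identity x̂s ≡ xs claimed above can be demonstrated numerically. The following Python sketch (illustrative only; the reference, disturbance, and initial state are assumptions, and the inputs are set to zero just to exercise the equations) simulates the plant (7.3), the primary system (7.11), and the observer (7.14) with forward Euler, and checks that x̂s matches xs = x − xp.

```python
import numpy as np

# Sketch (not the book's code): verify the observer (7.14) for Example 7.2.
# With up = us = 0 (so u = 0), the decomposition gives xs = x - xp, and the
# observer estimate xs_hat should match xs because both start from zero.
A  = np.array([[5.0, -12.0], [4.0, -9.0]])
T  = 2.0
r  = lambda t: 0.5 * np.sin(2 * np.pi * t / T)   # assumed T-periodic reference
d1 = lambda t: 0.2 * np.cos(2 * np.pi * t / T)   # assumed T-periodic disturbance

def f_plant(t, x):                               # original system (7.3), u = 0
    x1, x2 = x
    return np.array([5*x1 - 10*x2 + np.sin(x2) + d1(t),
                     4*x1 - 8*x2])

def f_primary(t, xp):                            # primary system (7.11), up = 0
    return A @ xp + np.array([np.sin(r(t)) + 2*r(t) + d1(t), r(t)])

def f_observer(t, xs_hat, y):                    # observer (7.14), us = 0
    return (A @ xs_hat
            + np.array([np.sin(y) + 2*y, y])
            - np.array([np.sin(r(t)) + 2*r(t), r(t)]))

dt, steps = 1e-4, 30000
x = xp = np.array([0.3, -0.1])                   # xp(0) = x(0) = x0
xs_hat = np.zeros(2)                             # xs_hat(0) = 0, as required by (7.14)
for k in range(steps):
    t, y = k * dt, x[1]
    x, xp = x + dt * f_plant(t, x), xp + dt * f_primary(t, xp)
    xs_hat = xs_hat + dt * f_observer(t, xs_hat, y)

print("max |xs - xs_hat| =", np.abs((x - xp) - xs_hat).max())  # ~0 (rounding only)
```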

7.2.2 Controller Design

The considered system is decomposed into two systems in charge of the corresponding tasks. In this section, the controller design is described in the form of problems with respect to the two component tasks. The entire process is shown in Fig. 7.1.

Problem 7.1 For (7.5), an RC

żp = αp (zp, yp, r, · · · , r(N))
up = up (zp, yp, r, · · · , r(N))   (7.15)

is designed such that ep (t) → B (0, δ) as t → ∞, where δ = δ (r, d) > 0 depends on the reference r and disturbance d, and r(k) denotes the kth derivative of r, k = 1, · · · , N.

Given that system (7.5) is a classical LTI system, some standard RC design methods in the frequency domain can be used to handle a generally bounded disturbance. Using a filtered repetitive controller (FRC; the abbreviation is also used for filtered repetitive control), the output often cannot track the desired signal asymptotically. Therefore, the case ep (t) → B (0, δ) must be taken into consideration besides ep (t) → 0 in Problem 7.1.

Problem 7.2 For (7.8) (or (7.6)), a controller

żs = αs (zs, xs, r, · · · , r(N))
us = us (zs, xs, r, · · · , r(N))   (7.16)

is designed such that (i) the closed-loop system is input-to-state stable with respect to the input ep,

‖xs (t)‖ ≤ β (‖xs (0)‖, t) + γ (sup_{0≤s≤t} ‖ep (s)‖), t ≥ 0,   (7.17)

where r(k) denotes the kth derivative of r, k = 1, · · · , N, the function β is a class KL function, and γ is a class K function; and (ii) the closed-loop system is asymptotically stable if ep (t) → 0 as t → ∞.

Remark 7.3 If ep is nonvanishing, then Problem 7.2 is a classical input-to-state stability problem. Readers can refer to [8, 9] for information on designing a controller that satisfies input-to-state stability. In particular, if ‖ep (t)‖ → 0 as t → ∞, then ‖xs (t)‖ → 0 as t → ∞ by (7.17). In addition, if ‖ep (t)‖ → 0 as t → ∞, then the input-to-state stability requirement can be relaxed as well. Let xs^{ep} and xs^{0} be the solution of (7.8) and that of (7.8) with ep (t) ≡ 0, respectively. There is a study [10] that discusses the conditions under which ‖xs^{ep} (t) − xs^{0} (t)‖ ≤ θ e^{−ηt} is satisfied for θ, η > 0. In this case, only the asymptotic stability of (7.8) with ep (t) ≡ 0 needs to be considered rather than the input-to-state stability.

Using the solutions of the two problems, the following theorem can be stated.

Theorem 7.2 Under Assumption 7.1, suppose (i) Problems 7.1, 7.2 are solved and (ii) the controller for system (7.1) is designed as

Observer:
x̂̇s = Ax̂s + Bus + φ (y, ẏ) − φ (r, ṙ), x̂s (0) = 0
ŷp = y − CT x̂s   (7.18)

Controller:
żp = αp (zp, ŷp, r, · · · , r(N)), zp (0) = 0
żs = αs (zs, x̂s, r, · · · , r(N)), zs (0) = 0
up = up (zp, ŷp, r, · · · , r(N))
us = us (zs, x̂s, r, · · · , r(N))
u = up + us.   (7.19)

Then, the output of system (7.1) satisfies y (t) − r (t) → B (0, δ + ‖C‖ γ (δ)) as t → ∞. In particular, if δ = 0, then the output of system (7.1) satisfies y (t) − r (t) → 0 as t → ∞.

Proof See Appendix. ∎

Remark 7.4 The proposed control framework has two advantages over the commonly used control framework. First, under the proposed framework, a two-degree-of-freedom controller is designed [11]. In the standard feedback framework, an intrinsic conflict exists between performance (trajectory tracking and disturbance rejection) and robustness. Therefore, the proposed framework has the potential to avoid conflicts between tracking performance and robustness. Second, the proposed framework is compatible with various methods designed in both the frequency and time domains.

Example 7.3 (Example 7.2 Continued) The primary system (7.11) is written as

yp (s) = Gp (s) up (s) + dp (s),

where Gp (s) = CT (sI2 − A)⁻¹ B = (s − 5)/(s² + 4s + 3) and dp (s) = CT (sI2 − A)⁻¹ (φ (r, ṙ) (s) + d (s) + x0). For Problem 7.1, using the zero-phase error tracking controller algorithm (see Sect. 3.1.1), the RC is designed as

up (s) = [Q (s) e^{−Ts} / (1 − Q (s) e^{−Ts})] B (s) ep (s),   (7.20)

where

B (s) = 400 (s² + 4s + 3)(−s − 5) / (25 (s + 10)(s + 5)(s + 8)), Q (s) = 1/(0.1s + 1).

The tracking performance of the primary system (7.11) driven by the controller (7.20) is shown in Fig. 7.2, where the output yp tracks r with good tracking accuracy, while keeping all states bounded.

For the secondary system, we define zs = 6x1,s − 10x2,s + sin (x2,s + r) − sin r. Then, the secondary system (7.13) is written as

ẋ1,s = −x1,s + zs + ga,1
żs = (cos (x2,s + r) − 10) (4x1,s − 8x2,s + us + ga,2) + 6 (−x1,s + zs) + cos (x2,s + r) ṙ − ṙ cos r.

The controller is designed as

us = −4x1,s + 8x2,s + [1/(cos (x2,s + r) − 10)] (−6 (−x1,s + zs) − cos (x2,s + r) ṙ + ṙ cos r − 5zs).   (7.21)

Consequently, (7.13) becomes

ẋ1,s = −x1,s + zs + ga,1
żs = −5zs + (cos (x2,s + r) − 10) ga,2.   (7.22)

[Fig. 7.2 Tracking performance of the primary system (7.11)]

Therefore, the closed-loop system (7.22) is input-to-state stable with respect to the input ep. Combining the observer (7.14) with the controllers (7.20) and (7.21) yields the final controller

up (s) = [Q (s) e^{−Ts} / (1 − Q (s) e^{−Ts})] B (s) êp (s)
us = −4x̂1,s + 8x̂2,s + [1/(cos (x̂2,s + r) − 10)] (−6 (−x̂1,s + ẑs) − cos (x̂2,s + r) ṙ + ṙ cos r − 5ẑs)
u = up + us,   (7.23)

where êp = r − ŷp and ẑs = 6x̂1,s − 10x̂2,s + sin (x̂2,s + r) − sin r. The tracking performance of system (7.3) driven by controller (7.23) is shown in Fig. 7.3, where the output y tracks r with good tracking accuracy, while keeping all states bounded.

[Fig. 7.3 Final tracking performance]
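Before moving on to the TORA application, the mechanism behind (7.20) can be visualized with a short calculation. The Python sketch below (illustrative; the period T is an assumption, since the example does not fix it numerically) evaluates the gain of the internal-model factor Q(s)e^{−Ts}/(1 − Q(s)e^{−Ts}): it peaks at the harmonics 2πk/T and is small in between, which is what produces tracking and rejection of T-periodic signals, while the finite peak height (instead of an infinite one for Q ≡ 1) is the price paid for the robustness introduced by the low-pass Q.

```python
import numpy as np

# Gain of the internal-model factor Q(s)e^{-Ts}/(1 - Q(s)e^{-Ts}) in (7.20),
# with Q(s) = 1/(0.1 s + 1). The period T is an assumption of this sketch.
T = 5.0
Q = lambda s: 1.0 / (0.1 * s + 1.0)
gain = lambda w: abs(Q(1j * w) * np.exp(-1j * w * T)
                     / (1.0 - Q(1j * w) * np.exp(-1j * w * T)))

for k in (1, 2, 3):
    w_peak  = 2 * np.pi * k / T          # harmonic of the periodic signal
    w_notch = 2 * np.pi * (k - 0.5) / T  # frequency between harmonics
    print(f"k={k}: gain at harmonic {gain(w_peak):5.2f}, "
          f"between harmonics {gain(w_notch):5.2f}")
```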

7.3 Application to TORA Benchmark Problem

The translational oscillator with a rotating actuator (TORA) system, also called the rotational–translational actuator (RTAC), has been widely used over the past several years as a benchmark problem for testing novel nonlinear control schemes [12]. As shown in Fig. 7.4, the system consists of a cart attached to a wall with a spring. The cart is affected by a disturbance force F ∈ R. An unbalanced point mass rotates around an axis at the center of the cart and is actuated by a control torque N ∈ R. The horizontal displacement of the cart is denoted by xc ∈ R, and the angular displacement of the unbalanced point mass is denoted by θ ∈ R.

Fig. 7.4 Oscillating eccentric rotor [12]

In this section, after normalization and transformation, the TORA plant is simplified using the following state-space representation [13]:

ẋ1 = x2
ẋ2 = −x1 + ε sin x3 + d2
ẋ3 = x4
ẋ4 = u, x (0) = x0,   (7.24)

where 0 < ε < 1, x3 = θ, u (t) ∈ R is the dimensionless control torque, and d2 (t) ∈ R is an unknown T-periodic signal. All states x = [x1 x2 x3 x4]T ∈ R4 are measurable. The objective is to design a controller u such that the output y (t) = x3 (t) satisfies y (t) − r (t) → 0 as t → ∞, while keeping the other states bounded, where r (t) ∈ (−π/2, π/2) is a T-periodic reference and its derivative is bounded.

Remark 7.5 TORA is not globally feedback linearizable owing to the weak, sinusoid-type nonlinear interaction ε sin x3 in (7.24). Given that the internal dynamics of (7.24),

ẋ1 = x2
ẋ2 = −x1,

are not asymptotically stable, the considered problem is a tracking problem for a nonminimum-phase nonlinear system.
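A minimal simulation sketch of the normalized model (7.24) is given below (Python, illustrative only; ε, the disturbance, the period, and the initial state are taken from the numerical-simulation section later in this chapter, and the input is held at zero just to exercise the model).

```python
import numpy as np

# Minimal rollout of the normalized TORA model (7.24).
eps = 0.2
T   = 3.0
d2  = lambda t: 0.1 * np.cos(2 * np.pi * t / T)     # T-periodic disturbance

def tora_rhs(t, x, u):
    x1, x2, x3, x4 = x
    return np.array([x2,
                     -x1 + eps * np.sin(x3) + d2(t),
                     x4,
                     u])

# Forward-Euler rollout with zero input, just to exercise the model.
dt, x = 1e-3, np.array([0.1, 0.0, 0.0, 0.0])
for k in range(int(10 * T / dt)):
    x = x + dt * tora_rhs(k * dt, x, u=0.0)
print("state after 10 periods:", x)
```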

7.3.1 Additive State Decomposition of TORA Benchmark

The nonminimum-phase nonlinear system is decomposed into two systems via additive state decomposition: an LTI primary system, leaving a secondary system that carries the nonlinearity. The tracking task is assigned only to the LTI system, while the task of handling the nonlinearity is assigned to the secondary system. These two tasks are easier than the original task. Therefore, the original tracking problem is simplified via additive state decomposition. By introducing the zero term εD (C + aB)T x − εD (y + aẏ) ≡ 0, system (7.24) becomes

ẋ = (A0 + εD (C + aB)T + KCT) x + Bu + (φ0 (y) − εD (y + aẏ) − Ky) + d,

with A ≜ A0 + εD (C + aB)T + KCT and φ (y, ẏ) ≜ φ0 (y) − εD (y + aẏ) − Ky, so that

y = C x, x (0) = x0 , T

φ(y, y˙ )

(7.25)

7.3 Application to TORA Benchmark Problem

137

where ⎡

⎤ ⎡ ⎤ ⎡ ⎤ 0 100 0 0 ⎢ −1 0 0 0 ⎥ ⎢0⎥ ⎢0⎥ ⎢ ⎥ ⎢ ⎥ ⎢ A0 = ⎣ ,B = ⎣ ⎦,C = ⎣ ⎥ , 0 0 0 1⎦ 0 1⎦ 0 000 1 0 ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ 0 0 0 ⎢ ε sin y ⎥ ⎢ d2 ⎥ ⎢1⎥ ⎢ ⎥ ⎢ ⎥ ⎥ D=⎢ ⎣ 0 ⎦ , φ 0 (y) = ⎣ 0 ⎦ , d = ⎣ 0 ⎦ . 0 0 0   Remark 7.6 Although the pair A0 , CT is unobservable, the pair (A0 + εD  (C + aB)T , CT is observable. Therefore, there always exists a K such that A is a stable matrix.

7.3.2 Filtered Repetitive Controller Design for Primary System We define a filtered tracking error as ep = y˜p + a y˙˜p ,

(7.26)

where y˜p = yp − r and a > 0. Both y˜p and y˙˜p can be viewed as outputs of a stable system with ep as its input, which means that y˜p and y˙˜p are bounded if ep is bounded. In addition, y˜p (t) → 0 and y˙˜p (t) → 0 given that ep (t) → 0. Inspired by the design in [14], the FRC u p is designed as follows:   ε z˙ p (t) = −z p (t) + 1 − ε2 z p (t − T ) + L 1 ep (t)   u p z p , xp , r = L2T xp (t) + L 3 z p (t) , z p (s) = 0, s ∈ [−T, 0] ,

(7.27)

where L 1 , L 3 ∈ R and L2 ∈ R4 will be specified later. Then, the closed-loop system corresponding to the primary system of (7.25), namely (7.5), and (7.27) is given as      z˙ p (t) z p (t) ε 0 −1 L 1 (C + aB)T = x˙ p (t) xp (t) 0 I4 BL 3 A + BLT2  



E

Aa

    −L 1 (r + ar˙ ) (t) z p (t − T ) 1 − ε2 0 + + . 0 0 xp (t − T ) φ (r (t) , r˙ (t)) + d (t)   

Aa,−T

da (t)

(7.28)

138

7 Continuous-Time Filtered Repetitive Control …

The transfer function from da to ep is ep (s) =

 −1 1 p (s) CTa s E − Aa − Aa,−T e−sT da (s) , L1 

(7.29)

G(s)

  where p (s) = εs + 1 − 1 − ε2 e−sT , and Ca = [1 0 0 0 0]T . Given that da is a T periodic signal and the closed-loop system (7.28) incorporates an internal model for any T -periodic signal, it is expected that ep has a small ultimate bound. To achieve this performance, it is first required that the closed-loop system without external signal da be stable. An easy parameter choice rule is given in Proposition 7.1. Proposition 7.1 Suppose the parameters in (7.27) are selected as 2 L 1 = κ1 , L2 = H − K, L 3 = − κ1 , a

(7.30)

where 0 < ε < 1, 0 < κ1 , κ2 and H = − a1 [0 ε κ2 1 + κ2 ]T . Then (i) without external   xp , signal da , (7.28) is exponentially stable for any 0 < ε < 1; (ii) in (7.28),    ξ  are bounded on [0, ∞). There exists a δ > 0 such that y˜p a ≤ δ and and   ˙   y˜p  ≤ δ. a



Proof See Appendix.
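The comb-like behavior promised by (7.29) can be illustrated directly from p(s). The following Python sketch (illustrative; the period and the values of the filter parameter are assumptions, and this ε is the FRC filter parameter of (7.27), not the plant parameter in (7.24)) evaluates |1/p(jω)| at the first few harmonics: the smaller the filter parameter, the larger the internal-model gain at k·2π/T, and hence the deeper the notches of G(s).

```python
import numpy as np

# Internal-model factor 1/p(s) of the FRC (7.27)-(7.29):
#   p(s) = eps*s + 1 - (1 - eps^2)*exp(-s*T)
T = 3.0
for eps in (0.2, 0.05, 0.01):
    p = lambda w: eps * 1j * w + 1.0 - (1.0 - eps**2) * np.exp(-1j * w * T)
    gains = [1.0 / abs(p(k * 2 * np.pi / T)) for k in range(1, 4)]
    print(f"eps = {eps:5.2f}: |1/p(j k*2*pi/T)| for k = 1..3 ->",
          ["%.1f" % g for g in gains])
```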

7.3.3 Stabilizing Controller Design for Secondary System       Given that  y˜p a ≤ δ and  y˙˜p  ≤ δ, it is expected that the ultimate bound of xs  a can be adjusted by δ > 0. For example, there exists γ > 0 such that xs a ≤ γ δ. Therefore,   y − r a ≤  y˜p a + C xs a ≤ (1 + C γ ) δ. The secondary system of (7.25), namely, (7.8), is rewritten as x˙1,s = x2,s

  x˙2,s = −x1,s + ε sin x3,s + r − ε sin r + g x˙3,s = x4,s x˙4,s = KT xs + u s , xs (0) = 0,

(7.31)

7.3 Application to TORA Benchmark Problem

139

    where g = ε sin yp + x3,s − ε sin r + x3,s − ε (C + aB)T xp + ε (r + ar˙ ) . Our constructive procedure was inspired by the  designin [2]. The controller design procedure starts from the marginally stable x1,s , x2,s -subsystem.   • Step 1: Consider the x1,s , x2,s -subsystem of (7.31) with x3,s viewed as the virtual 2 2 + x2,s results in control input. Differentiating the quadratic function V1 = x1,s     V˙1 = εx2,s sin x3,s + r − sin r + εx2,s g.

(7.32)

Then, the following certainty-equivalence-based virtual controller is introduced: , x3,s = −batanx2,s + x3,s

(7.33)

where b ∈ R. Consequently, x˙1,s = x2,s x˙2,s



−batanx2,s = −x1,s + ε sin 2





−batanx2,s + 2r cos 2



+ g ,

(7.34)

where     − ε sin r − batanx2,s + g. g = ε sin −batanx2,s + r + x3,s

(7.35)

  −batanx2,s +2r To ensure cos > 0, the design parameter b is selected to be 0 < 2      b < min2 1 − 2 |r (t)| π . Given that r (t) ∈ (−π 2, π 2), b always exists. t

The term CE is used here because x3,s = −batanx2,s makes V1 in (7.32) negative semidefinite when g (t) ≡ 0, t ≥ 0.   , x4,s -subsystem and a nonlinear • Step 2: Backstepping will be applied to the x3,s to the origin. With the aid of (7.31) and controller u s is designed to drive x3,s is (7.33), the time derivative of the new variable x3,s = x4,s + ψ + b x˙3,s

where ψ = b 1+x1 2 as follows: x4,s

2,s

1 g, 2 1 + x2,s

(7.36)

    −x1,s + ε sin x3,s + r − ε sin r . We define a new variable = x3,s + x4,s + ψ. x4,s

Then, (7.36) becomes x˙3,s = −x3,s + x4,s +b

1 g. 2 1 + x2,s

(7.37)

140

7 Continuous-Time Filtered Repetitive Control …

Using the definition (7.37), the time derivative of the new variable x4,s is x˙4,s = x˙3,s + x˙4,s + ψ˙ ˙ = −x3,s + x4,s + KT xs + u s + ψ.

We design u s for the secondary system (7.31) as follows:   ˙ − 2x4,s − KT xs − ψ. u s xp , xs , r = x3,s

(7.38)

  Then, the x3,s , x4,s -subsystem becomes = −x3,s + x4,s +b x˙3,s

1 g 2 1 + x2,s

x˙4,s = −x4,s .

(7.39)

     ≤ bδ1 and x  = 0 as ga ≤ δ1 , where δ1 > 0. It is easy to see that x3,s 4,s a a       Proposition 7.2 Suppose  y˜p a ≤ δ and  y˙˜p  ≤ δ, where δ is sufficiently small. a The controller u s for the secondary system (7.31) is designed as (7.38), where 0 <   b < min2 1 − 2 |r (t)| π . Then, there exists γ > 0 such that xs a ≤ γ δ and xs  t

is bounded on [0, ∞). Proof See Appendix.



7.3.4 Controller Synthesis for Original System The controller design described above is based on the condition that xp and xs are known a priori. A problem arises in which the states xp and xs cannot be measured directly but x = xp + xs can be. Considering this problem, the following observer is proposed to estimate the states xp and xs . Unlike traditional observers, this proposed observer can estimate the states of the primary system and secondary system directly, rather than asymptotically or exponentially. This is possible because unlike a real system, either the primary system or secondary system is virtual, whose initial values can be assigned freely by the designers. The observer presented in Proposition 7.3 based on the secondary system of (7.25) can estimate xs . Proposition 7.3 For the primary system (7.25) and secondary system (7.31), the following observer x˙ˆ s = Aˆxs + φ (y) − φ (r ) − εD (C + aB)T x + εD (r + ar˙ ) + Bu s xˆ p = x − xˆ s , xˆ s (0) = 0 (7.40)

7.3 Application to TORA Benchmark Problem

141

satisfies xˆ p (t) ≡ xp (t) and xˆ s (t) ≡ xs (t) , t ≥ 0. 

Proof The proof is similar to that of Theorem 7.1.

Proposition 7.4 Suppose that the conditions of Propositions 7.1–7.3 hold. Let the controller τ in system (7.24) be designed as     ε z˙ p (t) = −z p (t) + 1 − ε2 z p (t − T ) + L 1 (C + aB)T xˆ p − r − ar˙ (t)     τ = KT x + u p z p , xˆ p , r + u s xˆ p , xˆ s , r , (7.41) where z p (s) = 0, s ∈ [−T, 0] , xˆ p and xˆ s are given by (7.40), u p (·) is defined as on in (7.27), and u s (·) is defined as in (7.38). Then, (i) x and z p  are  bounded  y − r a ≤ ρδ, where  y˜p a ≤ δ and ∞); and (ii) there exists ρ > 0 such that [0,   ˙   y˜p  ≤ δ. a

Proof For the original system (7.24), its primary and secondary systems have the following relations:  xs and y = yp + ys . With controller  (7.41), for  x = xp + xp  and z p  are bounded on [0, ∞) and  y˜p  ≤ δ and the primary system, a   ˙   y˜p  ≤ δ according to Proposition 7.1. On the other hand, for the secondary sysa

tem, there exists γ > 0 such that xs a ≤ γ δ, and xs  is bounded on [0, ∞) based on Proposition 7.2. Proposition  7.3 ensures that xˆ p (t) ≡ xp (t) and xˆ s (t) ≡ xs (t) , t ≥ 0. Therefore, (i) x and z p  are bounded on [0, ∞); and (ii) thereexists  ρ>0     such that y − r  ≤ ρδ, where ρ = ρ (1 + C γ ) ,  y˜p  ≤ δ, and  y˜˙p  ≤ δ.  a

a

a

The closed-loop control system is shown in Fig. 7.5, which includes the TORA plant (7.24), observer (7.40), and controller (7.41).

Fig. 7.5 Closed-loop control system of TORA

142

7 Continuous-Time Filtered Repetitive Control …

7.3.5 Numerical Simulation T In the simulation, set ε = 0.2 and the initial value  2πx0 = [0.1 0 0 0] in (7.24). The reference r is selected to be r (t) = 0.5 sin sin T t , and disturbance d2 (t) =   0.1 cos 2π t , where period T = 3 s. The objective here is to design controller τ such T that the output y (t) − r (t) is uniformly ultimately bounded with a small ultimate bound, while the other states are bounded. The parameters are selected as a = 1 and K = [0 −ε −1 −2]T . Then, A in (7.25) satisfies maxRe(λ (A)) = −0.01 < 0. Furthermore, according to Proposition 7.1, the parameters of the controller (7.41) are selected as ε = 0.01, L 1 = 2, L2 = 0, and L3 = −4. The parameter b is selected as b = 1.5(1 − 1 π ) < min 2 1 − 2 |r | π . t∈[0,∞)

With the chosen parameters, the amplitude response of each element of the transfer function G (s) in (7.29) is plotted in Fig. 7.6. The figure shows that each element of G (s) has a comb shape with notches matching the frequencies of the considered periodic signals. This makes the amplitude of each element of G (s) close to zero , k = 0, 1, .. in the low-frequency band. Given that low frequencies dominate at k 2π T the spectrum of da (t) , it is expected that ep exists with a small ultimate bound. 50

0

Magnitude (dB)

−50

−100

2*π/T

4*π/T 6*π/T

8*π/T

−150

−200

−250

−300

−350

0

1

2

3

4

5

6

Frequency (rad/sec)

Fig. 7.6 Amplitude response of transfer function G (s) in (7.29)

7

8

9

10

7.3 Application to TORA Benchmark Problem

143

Fig. 7.7 Evolution of all states

The TORA plant (7.24) is driven by controller (7.41) with the parameters above. The evolutions of all states of (7.24) are shown in Fig. 7.7. According to the simulation, the proposed controller τ drives the tracking error y (t) − r (t) to be uniformly ultimately bounded with a small ultimate bound. Moreover, the other states are bounded.

7.4 Summary

In this chapter, the output-feedback RC problem for a class of nonminimum-phase nonlinear systems was solved under the additive-state-decomposition-based tracking control framework. The key concept is to decompose the RC problem into two well-studied control problems using additive state decomposition: an RC problem for a primary LTI system and a state-feedback stabilization problem for a secondary nonlinear system. Given that the RC problem is limited to LTI systems, existing RC methods can be applied directly and the resulting closed-loop systems can be analyzed in the frequency domain. On the other hand, the state-feedback stabilization only needs to consider the "secondary" stabilization problem, so that the nonminimum-phase behavior is avoided; this is an important feature because nonminimum-phase behavior restricts the application of basic nonlinear controllers. Finally, the RC can be combined with the stabilizing controller to achieve the original control goal. The rotational position tracking problem for the TORA plant was solved using the proposed additive-state-decomposition-based RC. The differences among the three proposed methods, namely the design processes of the feedback linearization method, the adaptive-control-like method, and the additive-state-decomposition-based method, are shown in Fig. 7.8.

[Fig. 7.8 Three RC design approaches]

7.5 Appendix 7.5.1 Proof of Theorem 7.2 The proof of Theorem 7.1shows that observer (7.18) will make yˆ p (t) ≡ yp (t) and xˆ s (t) ≡ xs (t) , t ≥ 0.

(7.42)

The remainder of the proof is composed of two parts: (i) for (7.5), the controller u p drives yp (t) − r (t) → B (0, δ) as t → ∞, and (ii) based on the result of (i), for (7.8), the controller us drives ys (t) → B (0, C γ (δ)) as t → ∞. Then, the controller u = up + us drives y (t) − r (t) → B (0, δ + C γ (δ)) as t → ∞ in system (7.1). (i) Suppose that Problem 7.1 is solved. Therefore, using (7.42), controller (7.15) can drive ep (t) = yp (t) − r (t) → B (0, δ) as t → ∞. (ii) Let us consider the secondary system (7.8). Suppose that Problem 7.2 is solved. Using (7.42), the controller

7.5 Appendix

145

  z˙ s = α s zs , xˆ s , r, · · · , r(N )   us = us zs , xˆ s , r, · · · , r(N ) drives the output ys such that ys (t) ≤ C xs (t)



≤ C β (xs (0) , t) + C γ

     sup ep (s) , t ≥ 0.

0≤s≤t

  Based on the result of (i), ep (t) → B (0, δ) as t → ∞. This implies that ep (t) ≤ δ + ε when t ≥ T1 . Then       ys (t) ≤ C β (xs (T1 ) , t − T1 ) + C γ sup ep (s) T1 ≤s≤t

≤ C β (xs (T1 ) , t − T1 ) + C γ (δ + ε) , t ≥ T1 . Given that C β (xs (T1 ) , t − T1 ) → 0 as t → ∞ and ε can be arbitrarily small, it can be concluded that ys (t) → B (0, C γ (δ)) as t → ∞. Given that y = CT xp + CT xs , it can be concluded that driven by controller (7.19), the output of system (7.1) satisfies y (t) − r (t) → B (0, δ + C γ (δ)) as t → ∞. In particular, if δ = 0, the output of system (7.1) satisfies y (t) − r (t) → 0 as t → ∞.

7.5.2 Proof of Proposition 7.1 We select a Lyapunov functional as follows:  V = εξ 2 +

t

t−T

2 1 2 1 1 2 x3, p + ax4, p , ξ 2 (θ ) dθ + x1, p + x 2, p + 2 2 2

where xp = [x1, p x2, p x3, p x4, p ]T . With the parameters (7.30), the derivative of V along (7.28) without external signal da yields    2 V˙ ≤ −ε2 2 − ε2 ξ 2 − κ2 x3, p + ax4, p ≤ 0. Consequently, without external signal da , (7.28) is asymptotically stable for any 0 < ε < 1. For the linear retarded time-delay system, asymptotic stability is equivalent to exponential stability. Then, withoutexternal signal da , (7.28) is exponentially stable  for any 0 < ε < 1. Consequently, xp and  ξ  are bounded on [0, ∞) and there   ˙    exists δ > 0 such that y˜p a ≤ δ and  y˜p  ≤ δ. a

146

7 Continuous-Time Filtered Repetitive Control …

7.5.3 Proof of Proposition 7.2 This proof is composed of three  parts.       Part 1: If y˜p a ≤ δ and  y˙˜p  ≤ δ, then, from the definition of φ (y) , ga ≤ a from (7.39), it can be observed (2 +a) εδ regardless of the value of ys . Consequently,      g x ≤ b = + a) bεδ and = 0 when the controller u s for the that x3,s (2 a 4,s a a      + secondary system (7.31) is designed as (7.38). Then, in (7.35), g a ≤ ε x3,s a ga = (bε    (2 + a) εδ.  + 1) Part 2: x2,s  ≤c (bε + 1) (2+ a) εδ and x2,s  ≤ c (bε + 1) (2 + a) εδ. Given that 0 < b < min2 1 − 2 |r (t)| π , the derivative V1 in (7.32) is negative semideft

inite when g (t) ≡ 0, i.e.,     −batanx2,s + 2r −batanx2,s cos ≤ 0, V˙1 = εx2,s sin 2 2 where the equality holds at some time instant t ≥ 0 if and only if x2,s (t) ≡ 0. Using LaSalle’s invariance principle [8, Theorem 4.4, p.128], lim x1,s (t) = 0 and t→∞   lim x2,s (t) = 0 when g (t) ≡ 0. Owing to the particular structure of x1,s , x2,s t→∞   subsystem (7.34), using [15, Lemma 3.5], it is shown that x1,s , x2,s -subsystem (7.34) is small with a linear gain, say c > 0. Suppose that δ is sufficiently  small.   g  ≤ (bε + 1) (2 + a) εδ is sufficiently small as well. Therefore, x1,s  ≤ Then, a a     c g a = c (bε + 1) (2 + a) εδ and x2,s  ≤ c (bε + 1) (2 + a) εδ. Part 3: From the definitions of x3,s and x4,s , there exists γ > 0 such that xs a ≤     -subsystem, xs (t) is bounded in γ δ. For the x1,s , x2,s -subsystem and x3,s , x4,s any finite time. With the obtained result xs a ≤ γ δ, xs (t) is bounded on [0, ∞).

References 1. Quan, Q., Cai, K.-Y., & Lin, H. (2015). Additive-state-decomposition-based tracking control framework for a class of nonminimum phase systems with measurable nonlinearities and unknown disturbances. International Journal of Robust and Nonlinear Control, 25(2), 163– 178. 2. Quan, Q., & Cai, K.-Y. (2013). Additive-state-decomposition-based tracking control for TORA benchmark. Journal of Sound and Vibration, 332(20), 4829–4841. 3. Quan, Q., & Cai, K.-Y. (2015). Repetitive control for TORA benchmark: An additive-statedecomposition-based approach. International Journal of Automation and Computing, 12(3), 289–296. 4. Marino, R., & Santosuosso, G. L. (2005). Global compensation of unknown sinusoidal disturbances for a class of nonlinear nonminimum phase systems. IEEE Transactions on Automatic Control, 50(11), 1816–1822. 5. Ding, Z. (2003). Global stabilization and disturbance suppression of a class of nonlinear systems with uncertain internal model. Automatica, 39(3), 471–479.

References

147

6. Lan, W., Chen, B. M., & Ding, Z. (2006). Adaptive estimation and rejection of unknown sinusoidal disturbances through measurement feedback for a class of non-minimum phase non-linear MIMO systems. International Journal of Adaptive Control and Signal Processing, 20(2), 77–97. 7. Marino, R., Santosuosso, G. L., & Tomei, P. (2008). Two global regulators for systems with measurable nonlinearities and unknown sinusoidal disturbances. In Honor of A. Isidori, A. Astolfi & R. Marconi (Eds.), Analysis and design of nonlinear control systems. Berlin: Springer. 8. Khalil, H. K. (2002). Nonlinear systems. Englewood Cliffs, NJ: Prentice-Hall. 9. Sontag, E. D. (2007). Input to state stability: Basic concepts and results. In P. Nistri and G. Stefani, (Eds.), Nonlinear and optimal control theory (pp. 166–220). Berlin: Springer. 10. Sussmann, H. J., & Kokotovic, P. V. (1991). The peaking phenomenon and the global stabilization of nonlinear systems. IEEE Transactions on Automatic Control, 36(4), 424–440. 11. Morari, M., & Zafiriou, E. (1989). Robust process control (1st ed.). Upper Saddle River, NJ: Prentice-Hall. 12. Wan, C.-J., Bernstein, D. S., & Coppola, V. T. (1996). Global stabilization of the oscillating eccentric rotor. Nonlinear Dynamics, 10(1), 49–62. 13. Zhao, J., & Kanellakopoulos, I. (1998). Flexible backstepping design for tracking and disturbance attenuation. International Journal of Robust and Nonlinear Control, 8(4–5), 331–348. 14. Quan, Q., & Cai, K.-Y. (2011). A filtered repetitive controller for a class of nonlinear systems. IEEE Transactions on Automatic Control, 56(2), 399–405. 15. Sussmann, H. J., Sontag, E. D., & Yang, Y. (1994). A general result on the stabilization of linear systems using bounded controls. IEEE Transactions on Automatic Control, 39(12), 2411–2425.

Chapter 8
Sampled-Data Filtered Repetitive Control with Nonlinear Systems: An Additive-State-Decomposition Method

In the previous chapter, a continuous-time filtered repetitive control (FRC; the abbreviation is also used for filtered repetitive controller) was proposed for nonlinear systems using an additive-state-decomposition-based method. In this chapter, a sampled-data FRC is proposed for the same class of nonlinear systems, but for a different problem, under the additive-state-decomposition-based tracking control framework [1].

• One of the major drawbacks of RC is the sensitivity of the control accuracy to variations in the period of the external signals. It is shown in [2] that, with a period variation as small as 1.5% for an LTI system, the gain of the internal model part of the RC drops from ∞ to 10. Consequently, the tracking accuracy may be far from satisfactory, especially for high-precision control. For this purpose, higher order RCs composed of several delay blocks in series were proposed to improve the robustness of the control accuracy against period variations [2–7]. However, these methods are not directly applicable to nonlinear systems because they are all based on transfer functions and frequency-domain analysis. The first motivation is therefore to design an RC for nonlinear systems that improves robustness against period variation.

• Although modern control systems are often implemented via digital processors, RC schemes are often designed for nonlinear continuous-time systems. Unlike LTI systems, the zero-order hold equivalent of a nonlinear continuous-time system cannot be represented by an explicit, exact sampled-data model. Only approximate controller design methods can be applied by considering the discretization error as an external disturbance, which often requires the resulting closed-loop system to be input-to-state stable (ISS; the abbreviation is also used for input-to-state stability) with respect to the discretization error [7]. However, a linear continuous-time RC system is a neutral-type system in a critical case.¹ The characteristic equation of such a neutral-type system has an infinite sequence of roots with negative real parts approaching zero. Consequently, only nonexponential stability can be guaranteed in the critical case [8], which further implies that the ISS property cannot be obtained. Therefore, in theory, a sampled-data RC cannot be obtained by discretizing a continuous-time RC directly for nonlinear systems. The second motivation is to design a sampled-data RC for continuous-time nonlinear systems.

¹ The neutral-type system in the critical case is of the form ẋ (t) − ẋ (t − T) = A1 x (t) + A2 x (t − T), where x (t) ∈ Rn, T > 0, and A1, A2 ∈ Rn×n [8, 9].

Based on the above discussion, the sampled-data robust RC problem for nonlinear systems is both challenging and practical. This chapter aims to answer the following question:

How can a sampled-data FRC be designed to tackle the period variation problem for nonlinear systems under the additive-state-decomposition-based tracking control framework?

To answer this question, the problem formulation, the additive state decomposition of the original problem, and the controller design need to be discussed, along with an illustrative example.
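The sensitivity figure quoted from [2] can be reproduced with a short calculation. The Python sketch below (illustrative; the sampling period and delay length are assumptions) evaluates the gain of the pure internal model 1/(1 − z^{−N}) at the fundamental frequency of a disturbance whose true period deviates from the modeled period NTs; at a 1.5% mismatch the gain indeed falls to roughly 10.

```python
import numpy as np

# Sensitivity of the pure internal model 1/(1 - z^-N) to a mismatch between the
# true disturbance period and the modeled period N*Ts (values are assumptions).
Ts = 0.01                 # sampling period
N  = 100                  # delay length; modeled period N*Ts = 1.0 s
for mismatch in (0.005, 0.015, 0.03, 0.05):      # relative period variation
    T_true = N * Ts * (1.0 + mismatch)
    w = 2 * np.pi / T_true                       # true fundamental frequency
    z = np.exp(1j * w * Ts)
    gain = 1.0 / abs(1.0 - z ** (-N))
    print(f"period variation {mismatch:5.1%}: internal-model gain = {gain:8.1f}")
```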

8.1 Problem Formulation

8.1.1 System Description

Consider a class of single-input single-output (SISO) nonlinear systems [10–13]:

ẋ (t) = Ax (t) + bu (t) + φ (y (t)) + d (t), x (0) = x0
y (t) = cT x (t),   (8.1)

where A ∈ Rn×n is a constant matrix, b ∈ Rn and c ∈ Rn are constant vectors, φ: R → Rn is a nonlinear function with φ(0) = 0, x (t) ∈ Rn is the state, y (t) ∈ R is the output, u (t) ∈ R is the control, and d (t) ∈ Rn is a periodic bounded disturbance with period T > 0. The reference r (t) ∈ R is sufficiently smooth with period T. In the following, for convenience, the variable t is omitted except when necessary. Two assumptions regarding the nonlinear system (8.1) are made as follows:

Assumption 8.1 The pair (A, cT) is observable, and the matrix A ∈ Rn×n is stable.²

Assumption 8.2 Only y (t) is available from measurements.

² If A is unstable, then, because of the observability of (A, cT) in Assumption 8.1, there always exists a vector p ∈ Rn such that A + pcT is stable, whose eigenvalues can be assigned freely. Then, (8.1) can be rewritten as ẋ = (A + pcT) x + bu + (φ (y) − py) + d. Therefore, without loss of generality, A is assumed to be stable.

Remark 8.1 The nonlinear function φ can be arbitrary rather than a weak nonlinear term. Here, its form is not specified. If φ(0) = a ≠ 0, then (8.1) can be rewritten as ẋ (t) = Ax (t) + bu (t) + φ′ (y (t)) + (d (t) + a), where φ′ (y (t)) = φ (y (t)) − a with φ′ (0) = 0. Furthermore, if (8.1) is subject to model uncertainties, such as ẋ = Ax + bu + φ (y) + Δφ (x) + d, and the model uncertainty Δφ (x) is not too large, then the controller can also be designed based on (8.1) by taking Δφ (x) + d as a new disturbance. The robustness analysis is further carried out based on the small gain theorem as described in [14]. Here, Δφ (x) is ignored for simplicity.

x22 − 0.5d 1 + x22

x˙2 = −5x1 − 4x2 + u − 0.5d y = x2 ,

(8.2)

where only x2 ∈ R is available from measurements, and d ∈ R is unknown to the controller designer. In the following section, (8.2) is transformed into a linear system subject to a periodic disturbance term. Let

152

8 Sampled-Data Filtered Repetitive Control …

z 2 = x2 +

x22 1 + x22

(8.3)

whose derivative is 



2x2

z˙ 2 = 1 +  2 1 + x22 By designing

(−5x1 − 4x2 + u − 0.5d) .



2x2

−1

u = 5x1 + 4x2 + 1 +  2 1 + x22 we have

v,

˜ z˙ 2 = v + d,

(8.4)

(8.5)

where v ∈ R is a virtual control input, and 

2x2



d˜ = −0.5 1 +  2 1 + x22

d∈R

(8.6)

is a new disturbance. On the other hand, substituting (8.3) into (8.2) results in x˙1 = x1 + z 2 − 0.5d

(8.7)

Combining (8.5) and (8.7) yields x˙1 = x1 + z 2 − 0.5d ˜ z˙ 2 = v + d.

(8.8)

In the process, there exist two problems: (i) In (8.4), the state x1 is used. However, only x2 (output) is available from measurements. Therefore, linearization cannot be realized directly. (ii) In addition, the new disturbance d˜ in (8.6) involves state  2 2x2 / 1 + x22 Therefore, (8.8) is in fact not a linear system. Hence, from this simple example, it is clear that it is not easy to transfer (8.1) into a linear system with a periodic disturbance term.

8.2 Sampled-Data Output-Feedback Robust Repetitive Control Using Additive State Decomposition Based on additive state decomposition, the considered system (8.1) is decomposed into two subsystems: an LTI system including all external signals as the primary

8.2 Sampled-Data Output-Feedback Robust Repetitive …

153

Fig. 8.1 Additive state decomposition and different discrete-time controller designs for the primary system (8.9) and secondary system (8.12)

system, together with the secondary nonlinear system whose equilibrium point is zero, as shown in Fig. 8.1. In the following section, the decomposition process and benefits from decomposition are introduced.

8.2.1 Additive State Decomposition 8.2.1.1

Decomposition Process

Consider the system (8.1) as the original system. The primary system is selected as follows: x˙ p = Axp + bu p + φ (r ) + d yp = cT xp , xp (0) = x0 .

(8.9)

Then, the secondary system is determined by subtracting the primary system (8.9) from the original system (8.1) as follows:   x˙ − x˙ p = Ax + bu + φ (y) + d − Axp + bu p + φ (r ) + d y − yp = cT x − cT xp , x (0) − xp (0) = 0.

(8.10)

154

8 Sampled-Data Filtered Repetitive Control …

Let xs  x − xp , ys  y − yp , u s  u − u p .

(8.11)

Then, the secondary system (8.10) is further written as   x˙ s = Axs + bu s + φ r + ys + ep − φ (r ) ys = cT xs , xs (0) = 0,

(8.12)

where ep  yp − r. If ep ≡ 0, then   x˙ s = Axs + bu s + φ r + cT xs − φ (r ) ys = cT xs , xs (0) = 0.

(8.13)

Thus, (xs , u s ) = 0 is an equilibrium point of (8.13) because it can make the left and right sides equal regardless of the value of the reference r . Based on (8.11), the following relation holds: x = xp + xs , y = yp + ys , u = u p + u s .

(8.14)

The controller design for the decomposed systems (8.9) and (8.10) will use the output yp and state xs as feedback, where an observer is proposed for this purpose. Lemma 1 ([1]) Suppose that an observer is designed to estimate yp and xs in (8.9) and (8.12) as follows: yˆp = y − cT xˆ s x˙ˆ s = Axˆ s + bu s + φ (y) − φ (r ) , xˆ s (0) = 0.

(8.15) (8.16)

Then, yˆp ≡ yp and xˆ s ≡ xs . Proof Subtracting (8.16) from (8.12) results in x˙˜ s = Ax˜ s , x˜ s (0) = 0, 3 Here, x˜ s = xs − xˆ s . Then, x˜ s ≡ 0. This implies that xˆ s ≡ xs . Consequently, using (8.14), we have  yˆp ≡ y − cT xˆ s ≡ yp .

8.2.1.2

Benefits of Decomposition

Additive state decomposition has two benefits. • First, given that the output of the primary system and state of the secondary system can be observed,4 The original tracking problem for system (8.1) is correspond(0) and xˆ s (0) are both assigned by the designer and are all determinate. 4 Using (8.15) and (8.16), yˆ and x ˆ s are obtained instead of the true state x. Meanwhile, x or xp is p still unknown. We avoid solving x using additive state decomposition. 3 Given that (8.16) and (8.12) are the only models existing in the design, the initial values x s

8.2 Sampled-Data Output-Feedback Robust Repetitive …

155

ingly decomposed into two problems: an output-feedback tracking problem for an LTI “ primary” system (yp − r → B (δ1 ),5 δ1 > 0) and a state-feedback stabilization problem for the complementary “secondary” system (ys → B (δ2 ) , δ2 > 0), which is shown in Fig. 8.1. Consequently, y − r → B (δ1 + δ2 ) based on (8.14). Given that the tracking task is only assigned to the LTI component, the task is therefore easier than that for the nonlinear system (8.1). State-feedback stabilization is also easier than output-feedback stabilization. • Second, for the two decomposed components, different discrete-time controller design methods can be employed (as shown in Fig. 8.1). It is appropriate to follow the discrete-time model design for the discrete-time RC design of the linear primary system. On the other hand, a state-feedback stabilization problem for the secondary system is independent of RC. (The ISS property cannot be obtained for a traditional RC system.) The resultant closed-loop system can be rendered ISS. Then, the emulation design will be adopted for the discrete-time controller design of the nonlinear secondary system.

8.2.2 Controller Design for Primary System and Secondary System Meanwhile, the original tracking problem for the system (8.1) is correspondingly decomposed into two problems: an output-feedback tracking problem for an LTI primary system (8.9) and a state-feedback stabilization problem for the complementary secondary system (8.12). We will attempt to solve these problems in the following section.

8.2.2.1

Problem 1 on Primary System (8.9)

Given that (8.9) is an LTI system with an exogenous additive perturbation term given by φ (r ) + d, using sample-and-hold on the input and output with sampling period Ts , (8.9) can be written as yp (z) = P (z) u p (z) + dr (z) ,

(8.17)

T where P (z) = cT (zI − F)−1 Hb, F = eATs , H = 0 s eAs ds, and dr (z) represents the contribution of φ (r ) + d to the output. Since A is stable by Assumption 8.1, namely, Re(λ (A)) < 0, |λ (F)| < 1 regardless of the value of Ts > 0. This implies that P (z) is stable. Similar to [3, 4], a sampled-data output-feedback RC can be designed for (8.17). The corresponding problem is stated as follows: 5 Here, B (δ)

0 as t → ∞.

 {ξ ∈ R ||ξ | ≤ δ } , δ > 0; the notation x (t) → B (δ) denotes min |x (t) − y| → y∈B (δ)

156

8 Sampled-Data Filtered Repetitive Control …

Problem 8.1 For (8.17) (or (8.9)), we design a sampled-data output-feedback RC as (8.18) u p (z) = C (z) ep (z) such that, in the time domain, ep (kTs ) = r (kTs ) − yp (kTs ) → B (δ) textitas k → ∞, where Q (z) W (z) z −N C (z) = 1 + L (z) 1 − Q (z) W (z) z −N and δ > 0 is expected to be as small as possible. Stability analysis of the closed-loop system corresponding to (8.17) and (8.18) is given by Proposition 8.2. Proposition 8.1 Let u p in (8.17) be designed as in (8.18). Suppose i) Pc (z) , P (z) , L (z) and Q (z) are stable, and ii)    Q (z) W (z) z −N (1 − T (z) L (z)) < 1, ∀ |z| = 1,

(8.19)

where Pc (z) = 1 / (1 + P (z)) and T (z) = P (z) Pc (z) . Then, the tracking error ep is uniformly ultimately bounded. Furthermore, if Z −1

   1 − Q (z) z −N (r (z) − dr (z)) → 0,

then ep (kTs ) = r (kTs ) − yp (kTs ) → 0 as k → ∞. Proof By substituting (8.18) into (8.17), the tracking error of the primary system is written as   ep (z) = Pc (z) K (z) 1 − Q (z) W (z) z −N (r (z) − dr (z)) ,

(8.20)

  where K (z) = 1 / 1 − Q (z) W (z) z −N (1 − T (z) L (z)) . A sufficient criterion for stability of the closed-loop system now requires Pc (z) and K (z) to be stable. The transfer function Pc (z) is stable using condition i). For the stability of K (z), to apply the small gain theorem [17, pp. 97–98], Q (z) W (z) z −N (1 − T (z) L (z)) is required to be stable first. This requires Pc (z) , P (z) , and Q (z) W (z) z −N to be stable, which are satisfied by the given conditions. Therefore, if (8.19) holds, K (z) is stable through the small gain theorem.Then, the tracking error  ep is uniformly ultimately bounded. Furthermore, taking 1 − Q (z) W (z) z −N (r (z) − dr (z)) as a new input in (8.20), given that Pc (z)K (z) is stable, ep (k) = r (k) − yp (k) → 0   if Z −1 1 − Q (z) z −N (r (z) − dr (z)) → 0 as k → ∞. From Proposition 8.1, stability depends on three main elements of controller (8.18): L (z) , Q (z) , and W (z). The ideal design is 1 − T (z) L (z) = 0, Q (z) = 1.

(8.21)

8.2 Sampled-Data Output-Feedback Robust Repetitive …

157

If so, then condition (8.19) is satisfied and (8.20) becomes   ep (z) = Pc (z) 1 − W (z) z −N (r (z) − dr (z)) .

(8.22)

  Consequently, ep (kTs ) = r (kTs ) − yp (kTs ) → 0 with Z −1 1 − W (z) z −N (r (z) − dr (z))) → 0 as k → ∞. However, (8.21) is not often satisfied. Remark 8.4 (on the design of L (z) ) In practice, the transfer function T (z) may be nonminimum phase. Therefore, a stable L (z) cannot satisfy T (z) L (z) = 1 exactly. Here, the Taylor expansions of the transfer function inverse are used to design L (z). When there are zeros outside the unit circle for T (z), it can be rewritten in the following form: Tn (z) T + (z) Tn− (z) = kT n , T (z) = k T Td (z) Td (z) where Tn+ (z) is the cancelable part containing only stable zeros, Tn− (z) is the noncancelable part containing only unstable zeros, and k T is the gain. Based on the decomposition, the filter L (z) is designed as L (z) =

1 Td (z) ˆ − T (z) , k T Tn+ (z) n,inv

(8.23)

− where Tˆn,inv (z) is the Taylor expansion of 1 /Tn− (z) . Although the designed L (z) −N is noncausal, (8.18)  can be realized  using the one-period delay term z . This design iωTs iωTs   ≈ 0 at least for the low-frequency band, L e can often ensure 1 − T e or possibly for all frequencies ω ∈ [0, π /Ts ]. The details of the above design and other related designs are presented in [15, pp. 468–470].

Remark 8.5 (on the design of Q (z)) The design of L (z) has ensured that T (z) L (z) ≈ 1 in the low-frequency band so that the stability criterion (8.19) holds in the low-frequency band. However, the stability criterion may be violated in the high-frequency band. Based on the choice of L (z) , the filter Q (z) is selected to be a zero-phase low-pass filter [15, pp. 473–475] , [16]

k=n q

Q (z) =

ak z k

k=−n q

  which aims to attenuate the term  Q (z) W (z) z −N (1 − T (z) L (z)) in the highfrequency band. Similarly, although Q (z) is noncausal, (8.18) can be realized using the one-period delay term z −N . On the other hand, using (8.20), the term 1 − Q (z) W (z) z −N will determine the tracking performance directly. The trade-off

158

8 Sampled-Data Filtered Repetitive Control …

between stability must be considered to achieve a balance.  and tracking   performance   In particular, if 1 − T eiωTs L eiωTs  ≈ 0 for all frequencies ω ∈ [0, π /Ts ], then Q (z) is chosen to be Q (z) = 1. Remark 8.6 (on the design of W (z)) W (z) is the gain adjustment or the higher order RC function, given by p

wi z −(i−1)N (8.24) W (z) = i=1

with

p

wi = 1. For a traditional RC, W (z) = 1. With redundant freedom, the approi=1

priate weighting coefficients w1 , w2 , ...w p can be designed to improve the robustness of the tracking accuracy with respect to the period variation of r − dr [2, 3]. An approach was further proposed in [4] to design higher order RCs that yield an optimal trade-off between the robustness for period-time uncertainty and sensitivity for nonperiodic inputs.

8.2.2.2

Problem 2 on Secondary System (8.12)

Further, Problem 8.1 is solved. In the following section, the design of the sampleddata controller for the nonlinear system (8.12) is discussed. Before proceeding further, a lemma is introduced. Definition 8.1 ([7, Definition 2]) The system x˙ = f (x, u (x) , dc )

(8.25)

is ISS with respect to dc if there exist β ∈ K L and γ ∈ K such that the solutions of the system satisfy x (t) ≤ β ( x (0) , t) +γ ( dc ∞ ), ∀x (0) , dc ∈ L∞ , ∀t ≥ 0. Suppose that the feedback is implemented via sample-and-hold as u (t) = u (x (kTs )) , t ∈ [kTs , (k + 1) Ts ) , k ∈ N.

(8.26)

Lemma 8.2 ([7, Corollary 5]) If the continuous-time system (8.25) is ISS, then there exist β ∈ K L and γ ∈ K such that given any triple of strictly positive numbers  x , dc , ν , there exists T ∗ > 0 such that for all Ts ∈ (0, T ∗ ) , x (0) ≤ x , dc ∞ ≤ dc , the solutions of the sampled-data system x˙ = f (x, u (x (k)) , dc ) satisfy the following condition: x (k) ≤ β ( x (0) , kTs ) + γ ( dc ∞ ) + ν, k ∈ N.

(8.27)

Lemma 8.2 states that if the continuous-time closed-loop system is ISS, then the sampled-data system with the emulated controller will be semiglobally practically ISS with a sufficiently small Ts . Using Lemma 8.2, Problem 8.2 will be introduced.

8.2 Sampled-Data Output-Feedback Robust Repetitive …

159

Problem 8.2 For (8.12), design a controller u s (t) = κ(xs (kTs )), t ∈ [kTs , (k + 1) Ts )

(8.28)

such that the closed-loop system is ISS with respect to the input ep , namely,

xs (t) ≤ β ( xs (0) , t) + γ sup ep (t) + ν,

(8.29)

t≥0

where t ≥ 0, γ is a class K function, β is a class K L function, ν > 0 can be made small by reducing the sampling period Ts . Based on the emulation design, a continuous-time controller u s (t) is first designed based on the continuous-time plant model (8.12). Then, the obtained continuous-time controller is discretized according to the sampling period. For the secondary system (8.12), a locally Lipschitz static state feedback is designed as follows: u s (t) = κ(xs (t))

(8.30)

whose discretization form is (8.28). Then, substituting (8.30) into (8.12) yields   x˙ s = f t, xs , ep ,

(8.31)

    where f t, xs , ep = Axs + bκ(xs ) + φ r + cT xs + ep − φ (r ) . With respect to the ISS problem for (8.31), the following result can be obtained. Proposition 8.2 For (8.31), suppose that there exists a continuously differentiable function V1 : [0, ∞) × Rn → R such that α1 ( xs ) ≤ V1 (t, xs ) ≤ α2 ( xs )    ∂ V1  ∂ V1 + f t, xs , ep ≤ −V2 (xs ) , ∀ xs ≥ ρ ep > 0 ∂t ∂xs   ∀ t, xs , ep ∈ [0, ∞) × Rn × R, where α1 , α2 are class K∞ functions, ρ is a class K function, and V2 (x) is a continuous function on Rn . Then, given   positive-definite any triple of strictly positive numbers xs , ep , ν , there exist a class K function γ , a class K L function β and T ∗ > 0 such that for all Ts ∈ (0, T ∗ ) , xs (0) ≤ xs , supt≥0 ep (t) ≤ ep , the solutions of the sampled-data system formed by (8.12) and (8.28) satisfy (8.29), where γ = α1−1 ◦ α2 ◦ ρ. Proof Proof of [18, Theorem 4.19, p.176] can be imitated to show that the continuoustime system (8.31) is ISS with γ = α1−1 ◦ α2 ◦ ρ. Then, based on Lemma 8.2, it can be concluded that the solutions of the sampled-data system formed by (8.12) and (8.28) are semiglobally practically ISS. 

160

8 Sampled-Data Filtered Repetitive Control …

8.2.3 Controller Synthesis for Original System The two designed controllers (8.18) and (8.28) for the two subsystems can be combined to solve the original problem. The result is stated in Theorem 8.1 Theorem 8.1 Suppose (i) Problems 8.1, 8.2 are solved; (ii) the observer–controller for system (8.1) is designed as follows: = y (kTs ) − cT xˆ s (kTs ) yˆp (kTs ) xˆ s ((k + 1) Ts ) = F xˆ s (kTs ) + Hbu s (kTs ) T + 0 s eAs (y (kTs + s) − r (kTs + s)) ds, xˆ s (0) = 0

(8.32)

and the controller for system (8.1) is designed as    u p (t) = u p (kTs ) = Z −1 C (z) r (z) − yˆp (z) u s (t) = κ(ˆxs (kTs )) u (t) = u p (t) + u s (t)

(8.33)

for t ∈ [kTs , (k + 1) Ts ) , k ∈ N. Then, the output of system (8.1) satisfies y (kTs ) − r (kTs ) → B(δ + c γ (δ) + c ν) as k → ∞, where δ is defined in Problem ˙ 8.1, and ν, γ are defined in Problem 8.2. Furthermore, if φ (r (t)) ≤ lφ˙ and d˙ (t) ≤ ld˙ , ∀t ≥ 0, then y (t) − r (t) → B(δ + c γ (δ) + c ν + c ν  ) as   T t → ∞, where ν  = 0 s eA(Ts −s) lφ˙ + ld˙ dsTs . Proof Using Lemma 8.1, the estimates in the observer (8.32) satisfy xˆ p ≡ xp and xˆ s ≡ xs . Then, the controller u p in (8.33) can drive ep (kTs ) = yp (kTs ) − r (kTs ) → B(δ) as k → ∞ using the solution to Problem 8.1. On the other hand, suppose that Problem 8.2 is solved. Based on (8.29),       xs (kTs ) ≤ β xs k  Ts , k − k  Ts + γ 



sup ep (t) + ν,

t≥k  T

where β  is a class K L function and k  ≤ k. Then, ys (kTs ) ≤ c xs (kTs )       ≤ c β  xs k  Ts , k − k  Ts

+ c γ sup ep (t) + c ν. t≥k  T

Based on Problem 8.1, ep (kTs ) → B(δ) as k → ∞. This implies that for a given ε > 0, there exists an N0 ∈ N such that supt≥k  T ep (t) ≤ δ + ε when k   N0 . Then, ys (kTs ) ≤ c β  ( xs (N0 Ts ) , (k − N0 ) Ts ) + c γ (δ + ε) + c ν, k  N0 . Given that c β  ( xs (N0 Ts ) , (k − N0 ) Ts ) → 0 as k → ∞ and ε can be

8.2 Sampled-Data Output-Feedback Robust Repetitive …

161

selected to be arbitrarily small, it can be concluded that ys (kTs ) → B( c γ (δ) + c ν) as k → ∞. Furthermore, let us consider the behavior of the sampled-data system during a sampling interval. Using (8.29), ys (t) → B( c γ (δ) + c ν) as t → ∞. In the following section, we further consider the primary system. Let x˜ p,k (t) = xp (kTs + t) − xp (kTs ) , y˜p,k (t) = cT x˜ p (t) , where t ∈ [0, Ts ]. Then, using the Lagrange mean value theorem, x˙˜ p,k (t) = Ax˜p,k (t) + φ˙ (r (ξ1 )) t + d˙ (ξ2 ) t, where kTs ≤ ξ1 , ξ2 ≤ kTs + t. Given that φ˙ (r (t)) ≤ lφ˙ and d˙ (t) ≤ ld˙ , then y˜ p,k (t) ≤ c x˜ p,k  t  A(t−s)  e l ˙ + ld˙ dst ≤ c φ 0

≤ c ν  for all k ∈ N. Therefore, y (t) − r (t) → B(δ + c γ (δ) + c ν + c ν  ).



Remark 8.7 Based on Theorem 8.1, the ultimate bound de ≤ δ + c γ (δ) + c ν + c ν  , which is determined by the properties of the original system, reference/disturbance, controller, and sampling period. A robust RC is designed for the primary system to reduce δ.

8.3 Illustrative Example 8.3.1 Problem Formulation Herein, a single-link robot arm with a revolute elastic joint rotating in the vertical plane is discussed as an application example [10]: x˙ = A0 x + bu + φ 0 (y) + d, x (0) = x0 y = cT x.

(8.34)

162

8 Sampled-Data Filtered Repetitive Control …

Here, ⎤ ⎡ ⎤ 0 1 0 0 0 ⎥ ⎢0⎥ ⎢ − KJ − FJl KJ 0 l l l ⎥,b = ⎢ ⎥, A0 = ⎢ ⎣0⎦ ⎣ 0 0 0 1 ⎦ Fm K K 1 0 − − Jm Jm Jm ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ 0 0 1 ⎢ d1 ⎥ ⎢0⎥ ⎢ − Mgl sin y ⎥ Jl ⎥,d = ⎢ ⎥, ⎢ ⎥ c=⎢ ⎦ ⎣0⎦ ⎣ 0 ⎦ , φ 0 (y) = ⎣ 0 d2 0 0 ⎡

(8.35)

where x = [x1 x2 x3 x4 ]T contains the link displacement (rad), link velocity, rotor displacement, and rotor velocity, respectively; d1 and d2 are unknown disturbances. The parameters in (8.35) are Jl = 2, Jm = 0.5, k = 0.05, M = 0.5, g = 9.8, l = 0.5, and Fl = Fm = 0.2. The control u is the torque delivered by the motor. A0 is found to be unstable. Choose p = [−2.10 −1.295 −9.36 3.044]T . Then, the system (8.34) can be formulated as (8.1) with A = A0 + pcT and φ (y) = φ 0 (y) − py, where A is stable. Assume that the desired trajectory is r (t) = 0.05 + 0.1 sin (2π t /T ), while the periodic disturbances are d1 (t) = 0.04 sin (2π t /T ) and d2 (t) = 0.02 cos (2π t /T ) sin (2π t /T ) , where T = 20π /3 s. Let the sampling period be Ts = 0.1 s. Then, N = 209.

8.3.2 Controller Design 8.3.2.1

Controller Design for Primary System

Under the sampling period Ts = 0.1 s, the discrete-time transfer function T (z) is T (z) = (z + 9.399)Tm (z) , where Tm (z) =

9.8895 ∗ 10−8 (z + 0.9493)(z + 0.09589) (z 2 − 1.819z + 0.8279)(z 2 − 1.929z + 0.9314)

and there exists an unstable zero −9.399 in T (z) . Therefore, T (z) is a nonminimum phase. According to (8.23), L (z) is designed as L (z) =

− Tˆn,inv (z)

Tm (z)

− where Tˆn,inv (z) is the Taylor expansions of

1 z+9.399

,

(8.36) and is designed as

8.3 Illustrative Example

163 The amplitude of tracking error transfer function

5 W(z)=1 −N W(z)=1.85−0.85z

Magnitude

4 3 2 1 0

0

5

10

15 Frequency (rad/sec)

20

25

30

The amplitude of tracking error transfer function for frequencies from (0 − 0.9 rad/sec) 5 W(z)=1 W(z)=1.85−0.85z−N

Magnitude

4 3 2 1 0

0

0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.9

Frequency (rad/sec)

Fig. 8.2 Amplitude of

1−Q(z)W (z)z −N 1 1+P(z) 1−Q(z)W (z)z −N (1−T (z)L(z))

z z2 1 1 − . + ≈ 2 3 9.399 9.399 9.399 z + 9.399      It is easy to check that 1 − T eiωTs L eiωTs  ≤ 0.0013 for all frequencies ω ∈ [0, π /Ts ]. Then, select Q (z) = 1. According to [3], a simpler second-order RC function is selected to improve the robustness against the period variation as follows: − Tˆn,inv (z) =

W (z) = 1.85 − 0.85z −N

(8.37)

The amplitudes of the transfer function in (8.20) with both W (z) = 1 and W (z) = 1.85 − 0.85z −N are plotted in Fig. 8.2. This figure shows that if a small variation around the given period occurs (corresponding to frequencies at 0.3, 0.6, 0.9, ...), then the magnitude variation caused by the higher order RC is less than that caused by the regular RC. Therefore, the higher order RC is less sensitive to period variation. Consequently, the higher order RC can improve the robustness of the tracking accuracy against the period variation, which will be further confirmed in the next section.

164

8.3.2.2

8 Sampled-Data Filtered Repetitive Control …

Controller Design for Secondary System

For the system (8.10), using the backstepping technique [18], we design u s (xs ) = μ1 +

Jl (v + μ2 ) , K

(8.38)

where v = −7.5xs,1 −19xs,2 −17η3 −7η4 , μ1 = −η3 + JKm xs,1 − JKm xs,3 − FJmm xs,4 ,    2   μ2 = FJll η4 + Mgl + r¨ ) cos xs,1 + r − Mgl ( xs,2 + r˙ sin xs,1 + r +¨r cos (r ) Jl (η3 Jl     −˙r 2 sin (r )), η3 = − FJll xs,2 − KJl xs,1 − xs,3 − Mgl (sin xs,1 + r − sin (r )), η4 =   Mgl    Jl  Fl K − Jl η3 − Jl xs,2 − xs,4 − Jl ( xs,2 + r˙ cos xs,1 + r −˙r cos (r )), xs = [xs,1 xs,2 xs,3 xs,4 ]T . The controller (8.38) can solve Problem 8.2.

8.3.3 Controller Synthesis and Simulation The final controller is given by (8.33), where L (z) and Q (z) in u p are selected as in Sect. 8.3.2, while u s is selected as in (8.38). The variables yp and xs are estimated by the observer (8.32) with the sensor sampling rate Tss = 0.01 s. In the controller combination, the variables yp and xs will be replaced by yˆp and xˆ s , respectively. To compare the robustness of the tracking accuracy against the period variation, both W (z) = 1 and W (z) = 1.85 − 0.85z −N are taken into consideration, and the true period is assumed to be 20π (1 + α) /3 , where α is the perturbation. First, the transient response and tracking performance are shown in Fig. 8.3. The tracking error is uniformly ultimately bounded. At first, the traditional RC and second-order RC have a big overshooting because at the first period, the RC does not work. Compared with the traditional RC, the second-order RC takes more time to adjust because it uses the previous two period tracking errors rather than one period used by the traditional RC. However, after several periods, the output corresponding to each controller tracks the reference gradually with a satisfied steady-state tracking error. To examine the robustness against period variation quantitatively, the ultimate bound is plotted as a function of the perturbation α. Figure 8.4 shows that the ultimate bound is small if α is small. This implies that the proposed sampled-data RC can drive y to track r. More importantly, the ultimate bound of the steady-state tracking error produced by the proposed second-order RC is less sensitive to the perturbation α in comparison with that produced by the traditional RC. Therefore, our initial goal is achieved where a discrete-time output-feedback RC is designed for the nonlinear system (8.1) such that y can track r and robustness against period variation is improved.

300

400

500

500 0

100

400

300

t (sec)

200

400

r y

r y

500

500

-0.2

0

0.2

0.4

0.6

-0.2

0

0.2

0.4

0.6

0

0

Fig. 8.3 Transient response and tracking performance of the period mismatch with traditional and second-order RC

t (sec)

400

-0.4

300

-0.4

200

-0.2

-0.2

100

0

0.4

0

r y 0.2

0

300

=0,W(z)=1.85-0.85z -N

0.6

200

=-0.02, W(z)=1.85-0.85z -N

100

t (sec)

0

=0,W(z)=1

t (sec)

0.2

0.4

0.6

-0.2

200

-0.2

100

0

0.4

0

r y

0.6

0.2

0

=-0.02,W(z)=1

0.2

0.4

0.6

300

t (sec)

200

400

100

300

t (sec)

200

400

=0.02.W(z)=1.85-0.85z -N

100

=0.02.W(z)=1

r y

r y

500

500

8.3 Illustrative Example 165

166

8 Sampled-Data Filtered Repetitive Control … 0.12

Ultimate bound of the steady−state tracking error

W(z)=1 W(z)=1.85−0.85z−N 0.1

0.08

0.06

0.04

0.02

0 −0.1

−0.08

−0.06

−0.04

−0.02

0

0.02

α

0.04

0.06

0.08

0.1

Fig. 8.4 Ultimate bound as a function of the period mismatch with traditional and second-order RC

8.3.4 Comparison A comparison is made to demonstrate the effectiveness of the proposed method. The considered system is (8.34) with parameters (8.35) except for ⎡



0

⎢ − Mgl (sin y+y ) ⎥ ⎥, Jl φ 0 (y) = ⎢ ⎦ ⎣ 0 0 2

(8.39)

where the new nonlinear term has the additional term y 2 to make the comparison obvious. The other parameters are the same as those in Sect. 8.3.1. The compared control is based on linearization using Taylor’s expansion. Through linearization, system (8.34) becomes x˙ = Ax + bu + d, x (0) = x0 y = cT x,

(8.40)

8.3 Illustrative Example

167

Compared control

0.4

r y

0.2

0

-0.2

0

50

100

150

200

250

300

350

400

t (sec) Proposed control

0.4

r y

0.2 0 -0.2 -0.4

0

50

100

150

200

250

300

350

400

t (sec) Fig. 8.5 Transient response and tracking performance for comparison with traditional RC

where

  ∂φ 0 cT x  A = A0 +   ∂x

. x=0

Given that only y rather than x is available and A is stable in this case, the poles of A are not assigned here via feedback. Under the sampling period Ts = 0.1s, the sampled-data RC is designed for comparison based on (8.40) following the procedure in Sect. 8.3.2.1. The proposed method is designed similar to Sect. 8.3.2, but the controller for the secondary system is designed based on (8.39). The transient response and tracking performance are shown in Fig. 8.5. The compared control based on linearization cannot track the period reference anymore, but the proposed control method can achieve a satisfying tracking performance, which demonstrates its effectiveness.

168

8 Sampled-Data Filtered Repetitive Control …

8.4 Summary In this chapter, a sampled-data output-feedback robust RC problem for a class of nonminimum-phase systems with measurable nonlinearities was solved under the additive-state-decomposition-based tracking control framework. Existing controller design methods in both the frequency and time domains were employed to make the robustness and discretization for a continuous-time nonlinear system tractable. In the given illustrative example, our initial goal was achieved where a sampleddata output-feedback RC was compared using a simpler second-order filter with a classical RC, and the output could track the reference and robustness against period variation was improved. Moreover, the proposed method outperformed the RC based on linearization. Given that the sampled-data control (sample-and-hold control) is a type of hybrid control, sampled-data controllers for secondary nonlinear systems can also be designed under the framework of hybrid control [19]. Furthermore, the observer-based sampled-data control methods can also be applied to secondary nonlinear systems [20].

References 1. Quan, Q., Cai, K.-Y., & Lin, H. (2015). Additive-state-decomposition-based tracking control framework for a class of nonminimum phase systems with measurable nonlinearities and unknown disturbances. International Journal of Robust and Nonlinear Control, 25(2), 163– 178. 2. Steinbuch, M. (2002). Repetitive control for systems with uncertain period-time. Automatica, 38(12), 2103–2109. 3. Steinbuch, M., Weiland, S., & Singh, T. (2007). Design of noise and period-time robust high order repetitive control with application to optical storage. Automatica, 43(12), 2086–2095. 4. Pipeleers, P., Demeulenaere, B., & Sewers, S. (2008). Robust high order repetitive control: optimal performance trade offs. Automatica, 44(10), 2628–2634. 5. Kurniawan, E., Cao, Z., Mahendra, O., & Wardoyo, R. (2014). A survey on robust repetitive control and applications. IEEE International Conference on Control System, Computing and Engineering, 524–529. 6. Liu, T., & Wang, D. (2015). Parallel structure fractional repetitive control for PWM inverters. IEEE Transactions on Industrial Electronics, 62(8), 5045–5054. 7. Nesi´c, D., & Teel, A. (2001). Sampled-data control of nonlinear systems: An overview of recent results. In S. Moheimani (Ed.), Perspectives in robust control. series Lecture notes in control and information sciences (Vol. 268, pp. 221–239). Berlin: Springer. 8. Quan, Q., Yang, D., & Cai, K.-Y. (2010). Linear matrix inequality approach for stability analysis of linear neutral systems in a critical case. IET Control Theory Application, 62(7), 1290–1297. 9. Hale, J. K., & Verduyn, L. S. M. (1993). Introduction to functional differential equations. New York: Springer. 10. Marino, R., & Tomei, P. (1995). Nonlinear control design: geometric, adaptive and robust. London: Prentice-Hall. 11. Ding, Z. (2003). Global stabilization and disturbance suppression of a class of nonlinear systems with uncertain internal model. Automatica, 39(3), 471–479. 12. Lee, S. J., & Tsao, T. C. (2004). Repetitive learning of backstepping controlled nonlinear electrohydraulic material testing system. Control Engineering Practice, 12(11), 1393–1408.

References

169

13. Quan, Q., & Cai, K.-Y. (2013). Additive-state-decomposition-based tracking control for TORA benchmark. Journal of Sound and Vibration, 332(20), 4829–4841. 14. Wei, Z.-B., Ren, J.-R., & Quan, Q. (2016). Further results on additive-state-decompositionbased output feedback tracking control for a class of uncertain nonminimum phase nonlinear systems. In 2016 Chinese Control and Decision Conference (CCDC), 28–30 May 2016. 15. Longman, R. W. (2010). On the theory and design of linear repetitive control systems. European Journal of Control, 16(5), 447–496. 16. Smith, J. O. (2007). Introduction to digital filters: with audio applications. W3K Publishing. 17. Green, M., & Limebeer, D. J. N. (1994). Robust linear control. Englewood Cliffs: Prentice-Hall. 18. Khalil, H. K. (2002). Nonlinear systems. Upper Saddle River, NJ: Prentice-Hall. 19. Goebel, R., Sanfelice, R. G., & Teel, A. R. (2012). Hybrid dynamical systems: modeling, stability, and robustness. Princeton: Princeton University Press. 20. Mao, J., Guo, J., & Xiang, Z. (2018). Sampled-data control of a class of uncertain switched nonlinear systems in nonstrict-feedback form. International Journal of Robust and Nonlinear Control, 28(3), 918–939.

Chapter 9

Filtered Repetitive Control with Nonlinear Systems: An Actuator-Focused Design Method

The internal model principle (IMP) was first proposed by Francis and Wonham [2, 3]. It states that if any exogenous signal can be regarded as the output of an autonomous system, then the inclusion of this signal model, namely, internal model, in a stable closed-loop system can assure asymptotic tracking or asymptotic rejection of the signal. Until now, to the best of the authors’ knowledge, there exist at least two viewpoints on IMP. In the early years, for linear time-invariant (LTI) systems, IMP implies that the internal model is to supply closed-loop transmission zeros which cancel the unstable poles of the disturbances and reference signals. This is called cancelation viewpoint here and only works for problems able to be formulated in terms of transfer functions. In the mid-1970s, Francis and Wonham proposed the geometric approach [4] to design an internal model controller [2, 3]. The purpose of internal models is to construct an invariant subspace for the closed-loop system and make the regulated output zero at each point of the invariant subspace. This is called geometrical viewpoint here. By the geometrical viewpoint, Isidori and Byrnes in the early 1990s further extended it from LTI systems to nonlinear time-invariant systems [5]. This work inspires the development of internal modelbased controller design methods greatly up to now [6–12]. Also, by the geometrical viewpoint, the finite-dimensional output regulation problem has been generalized for both infinite-dimensional systems and reference/disturbance signals generated by some infinite-dimensional exosystems [13–22]. However, the two viewpoints on IMP are difficult to handle general periodic signal tracking problems for nonlinear systems subject to periodic disturbances generated by infinite-dimensional exosystems, because the resulting closed-loop system, which contains the copy of such an exosystem, is nonlinear and infinite-dimensional. On the one hand, the cancelation viewpoint cannot be applied to nonlinear systems directly because it relies on transfer functions. On the other hand, the existing theories on geometric approach for infinite-dimensional systems cannot be applied to nonlinear systems directly because they rely on the linear operator theory. This chapter will propose a new viewpoint, namely, the actuator-focused viewpoint, on IMP. The actuator-focused viewpoint can overcome the drawbacks mentioned in both the cancelation viewpoint and the geometrical viewpoint if general periodic external signals are considered. Then, based © Springer Nature Singapore Pte Ltd. 2020 Q. Quan and K.-Y. Cai, Filtered Repetitive Control with Nonlinear Systems, https://doi.org/10.1007/978-981-15-1454-8_9

171

172

9 Filtered Repetitive Control with Nonlinear Systems …

on the new viewpoint, the actuator-focused design method is proposed. This chapter aims to answer the question as below: What is the actuator-focused viewpoint and how is it applied to repetitive control problems? The answer to this question involves the problem formulation, an introduction to actuator-focused viewpoint on IMP, actuator-focused RC design method, and numerical examples. This chapter presents a revised version of a paper that was published [1].

9.1 Motivation and Objective Unlike the general tracking controller design procedure, the internal model-based controller design only needs to consider the stability of the closed-loop system rather than the closed-loop error dynamics. This will lead to a simpler controller design procedure. For simplicity, it is assumed that, for SISO systems, the transfer functions of the system P and the controller C are P (s) and C (s), respectively. In Fig. 9.1a, the transfer function from r to e (corresponding to r and e, respectively) is written as follows: 1 r (s) . e (s) = 1 − C (s) P (s) In Fig. 9.1b, the transfer function from r to e is written as follows: e (s) =

1 P (s) r (s) . 1 − C (s) P (s)

If P (s) is stable, the controller C only needs to ensure that all poles of 1 (1 − C (s) P (s)) lie in the left s-plane and one closed-loop zero of 1 (1 − C (s) P (s)) is 0 so that the closed-loop zero can cancel the unstable pole of r (s) = a/s, where a is constant. According to IMP, the controller C should contain an integral term 1/s. Therefore, both tracking problem and rejection problem (see Chap. 2) can be reduced to a stabilization problem. Figure 9.2 is given to show the dif-

Fig. 9.1 Tracking problem and rejection problem

9.1 Motivation and Objective

173

Fig. 9.2 Comparison between two design ideas

ference between the general tracking controller design and the internal model-based controller design. In summary, the general tracking controller design needs to obtain the closed-loop error system and then analyze its stability. These will cause trouble in designing the tracking controller. However, the internal model-based controller design only needs to analyze the stability of the closed-loop system. The major difference between the two design ideas is whether or not to obtain the closed-loop error dynamics, because the closed-loop error system, in fact, has converted a tracking problem into a simpler rejection problem. According to this, if a special class of tracking signals is focused on, such as they belong to a fixed family of all solutions from an autonomous system, then the internal model-based controller design looks a more promising method. Unfortunately, existing viewpoints on IMP are difficult to support the general periodic signal (generated by an infinite-dimensional autonomous system) tracking of nonlinear systems. Objective. By taking these into account, a new viewpoint on IMP is proposed to support general periodic signal tracking of nonlinear systems. Based on the new viewpoint, it is further expected that the resulting design method will outperform general design methods when dealing with the periodic signal tracking problem.

174

9 Filtered Repetitive Control with Nonlinear Systems …

9.2 Actuator-Focused Viewpoint on Internal Model Principle The general idea of the actuator-focused viewpoint on IMP is introduced first. Then it is used to explain the role of the internal models for step signals, sine signals, and general periodic signals, respectively. Finally, the viewpoint is extended to filtered repetitive control systems. This section is only to clarify the actuator-focused viewpoint in the aspect of signals. The next section will introduce how to generate such signals required by designing controllers.

9.2.1 General Idea As shown in Fig. 9.3, some definitions are given to clarify the general idea of the actuator-focused viewpoint. The internal model is defined as v = I M (s0 , e) .

(9.1)

Here I M : S0 ⊕ E → V , where S0 is a Banach space for the initial conditions (the initial condition can have many forms, such as s0 ∈ Rn or s0 (t) = φ (t) ∈ Rn , t ∈ [−T, 0]); e (t) ∈ E = { f : R → Rm } and v (t) ∈ V = { f : R → Rm } are signal spaces, representing the input and output of the internal model. If there exists an initial condition s0 ∈ S0 such that s = I M (s0 , 0) , then it is said s ∈ I M . Similarly, the plant is defined as y = P (x0 , u) . Here P : X0 ⊕ U → Y , where X0 is a Banach space for the initial conditions; u (t) ∈ U = { f : R → Rm } and y (t) ∈ Y = { f : R → Rm } are signal spaces, representing the input and output of the plant. The steady states e∗ and v∗ of the general system shown in Fig. 9.3 imply that there exist s0∗ ∈ S0 and x0∗ ∈ X0 such that    e∗ = r − P x0∗ , d + I M s0∗ , e∗   v∗ = I M s0∗ , e∗ .

(9.2)

Example 9.1 (on the composition of internal models) Generally, the internal model I M is marginally stable so that it can generate nonvanishing and bounded signals Fig. 9.3 A general system including internal model

9.2 Actuator-Focused Viewpoint on Internal Model Principle

175

Fig. 9.4 Composition of two integral terms

   with any nonzero initial condition. However, I M s0 , I M s0 , 0 is often unbounded if s0 = 0 no matter what s0 is. As shown in Fig. 9.4, if I M is an integral term whose Laplace transform is 1/s, then   I M s0 , 0 = s0 L −1

  1 = s0 u (t) , s

where L −1 is the inverse Laplace transform, u (t) is the unit step function, and s0 ∈ R is the initial condition. However, it can be observed that    I M s0 , I M s0 , 0 = s0 L −1

    1 1 + s0 L −1 2 = s0 u (t) + s0 tu (t) . s s

   The signal I M s0 , I M s0 , 0 isunbounded when s0 = 0 no matter what s0 is. There  / I M if s0 = 0. fore, the compound I M s0 , I M s0 , 0 ∈ Theorem 9.1 As shown in Fig. 9.3, that  suppose  (i) The compound I M s0 , I M s0 , 0 ∈ / I M if s0 = 0; (ii) The steady states e∗ , v∗ ∈ I M ; (iii) For s = I M (s0 , 0) , s (t) ≡ 0 if and only if s0 = 0. Then e∗ = 0. ∗ Proof Prove it by contradiction, namely, e∗ = 0. Since   e ∈ I M by condition (ii),  ∗ there exists an initial condition s0 such that e = I M s0 , 0 according to definition of I M . Then, since e∗ = 0 as supposed, s0 = 0 according to condition (iii). Furthermore, according to (9.2), one further has

  v∗ = I M s0∗ , e∗    = I M s0∗ , I M s0 , 0 .    / I M according to condition (i). This conSince s0 = 0, v∗ = I M s0∗ , I M s0 , 0 ∈  tradicts with v∗ ∈ I M in condition (ii). So, e∗ = 0. Remark 9.1 For condition (i) in Theorem 9.1, it is reasonable    as shown in Exams s0 , 0 does not imply I M , I ple 9.1. Also, it is necessary. The form I M 0 M    s0 , I M s0 , 0  ∈ / I M according to the definition, because there may exist s0 ∈ S0      such that I M s0 , I M s0 , 0 = I M s0 , 0 . Condition (ii) in Theorem 9.1 implies states e, v will tend to steady states belonging to I M eventually. Condition (iii) is also reasonable as shown in Example 9.1. In the following section, Theorem 9.1 will be applied to three examples. Readers can obtain more intuitional explanation.

176

9 Filtered Repetitive Control with Nonlinear Systems …

Remark 9.2 In the analysis, the controller (actuator), namely, (9.1) or see the dashed box in Fig. 9.3, is focused on. For this reason, the new viewpoint is called the actuatorfocused viewpoint.

9.2.2 Three Examples Three examples will further show the general idea on the actuator-focused viewpoint in detail.

9.2.2.1

Step Signal

Since the Laplace transformation model of a unit step signal and an integral term are the same, namely, 1/s, the inclusion of the model 1/s in a stable closed-loop system can assure perfect tracking or complete rejection of the unit step signal according to IMP. Two viewpoints on IMP are given to explain the reason. • Cancelation Viewpoint: As shown in Fig. 9.5, the transfer function from the desired signal to the tracking error is written as follows:   1 1 1 s y = (s) d s + G (s) s 1 + 1s G (s) 1 . = s + G (s)

e (s) =

(9.3)

Then it only requires to verify whether or not the roots of the equation s + G (s) = 0 are all in the left s-plane, namely, whether or not the closed-loop system is stable. If all roots are in the left s-plane, then the tracking error tends to zero as t → ∞. Therefore, the tracking problem has been reduced to a stability problem of the closed-loop system. • Actuator-Focused Viewpoint: The actuator-focused viewpoint in the following will give a new explanation on IMP without using transfer functions. In this example, I M is an integral term whose Laplace transform is 1/s. The conditions (i) and (iii) of Theorem 9.1 hold as shown in Example 9.1. In the following, the condition (ii) of Theorem 9.1 is examined. Let G (s) = cT (sI − A)−1 b + d, where A ∈ Rn×n , b, c ∈ Rn , d ∈ R. The minimal realization of y = G (s) v is

Fig. 9.5 Step signal tracking

9.2 Actuator-Focused Viewpoint on Internal Model Principle

177

x˙ = Ax + bv y = cT x + dv. As shown in Fig. 9.5, the resulting closed-loop system becomes        x˙ x A b 0 = + . v˙ −cT −d v yd z˙

z

Aa

The solution is z (t) = e

Aa t

t

z (0) +

(9.4)

w

eAa (t−s) wds,

0

where w is constant. If the closed-loop system is stable, then the matrix Aa is stable, namely, the real parts of eigenvalues of Aa are negative. As a result, z (t) will tend to a constant vector, say [x∗T v∗ ]T , as t → ∞. Consequently, v (t) and e (t) = yd − cT x (t) − dv (t) will tend to constants v∗ and e∗ = yd − cT x∗ − dv∗ as t → ∞, respectively. Therefore, v∗ , e∗ ∈ I M . According to Theorem 9.1, it can be claimed that e∗ = 0. An intuitive interpretation is also given by contradiction. One has v˙ (t) = e (t) because of the integral term. If e∗ = 0, then v (t) =

(9.5)

e (s)ds will tend to infinity



as t → ∞. This contradicts with v being a constant. So, e∗ = 0. The explanation is somewhat different from Theorem 9.1, but their essential ideas are the same. As shown above, to confirm that the tracking error e (t) → 0 as t → ∞, it is only required to verify whether or not the closed-loop system without external signals is exponentially stable. This implies that the tracking problem has been reduced to a stability problem. Remark 9.3 Another way is given to explain why v (t) and e (t) will tend to constants as t → ∞ in the following. If the closed-loop system without external signals is exponentially stable, then when the system is driven by a unit step signal, the closedloop system is uniformly bounded and uniformly ultimately bounded. By the fixed point theory [24, pp. 164–182], there exists a constant solution to (9.4). Since the closed-loop system without external signals is exponentially stable, v (t) and e (t) will tend to the constant solution as t → ∞.

9.2.2.2

Sine Signal

Similarly, the actuator-focused viewpoint is applicable to explaining how the internal models work when the external signal is a sine signal. If the external signal is in the form a0 sin (ωt + ϕ0 ), where a0 , ϕ0 are constants, then perfect tracking or complete

178

9 Filtered Repetitive Control with Nonlinear Systems …

Fig. 9.6 Sine signal tracking

  rejection can be achieved by incorporating the model 1 s 2 + ω2 into the closedloop system. • Cancelation Viewpoint: As shown in Fig. 9.6, the transfer function from the desired signal to the tracking error is written as follows: e (s) =

1 1+

1 G s 2 +ω2

(s)

yd (s)

   2  1 2 b1 s + b0 = 2 s +ω s + ω2 + G (s) s 2 + ω2 b1 s + b0 , = 2 s + ω2 + G (s) 0 where the Laplace transformation model of a0 sin (ωt + ϕ0 ) is bs12s+b . Then, it is +ω2 2 2 only required to verify whether or not the roots of the equation s + ω + G (s) = 0 are all in the left s-plane, namely, whether or not the closed-loop system is stable. Therefore, the tracking problem has been reduced to a stability problem of the closed-loop system. 1 • Actuator-Focused Viewpoint: Because of the term s 2 +ω 2 , the relationship between v (t) and e (t) can be written as

e (t) = v¨ (t) + ω2 v (t) .

(9.6)

If the closed-loop system without external signals is exponentially stable, then, when the system is driven by an external signal in the form of a0 sin (ωt + ϕ0 ), it is easy to see that v (t) and e (t) will tend to signals in the form of a sin (ωt + ϕ), where a and ϕ are constants. Consequently, e (t) → (a sin (ωt + ϕ)) + ω2 (a sin (ωt + ϕ)) ≡ 0 as t → ∞ by (9.6) no matter what a and ϕ are. Therefore, to confirm that the tracking error tends to zero as t → ∞, it only requires verifying whether or not the closed-loop system without external signals is exponentially stable. This implies that the tracking problem has been reduced to a stability problem.

9.2 Actuator-Focused Viewpoint on Internal Model Principle

179

Fig. 9.7 An RC system for periodic signal tracking

9.2.2.3

General T -Periodic Signal

If the external signal is in the form of yd (t) = yd (t − T ), which can represent any T -periodic signal, then perfect  or complete rejection can be achieved by   tracking incorporating the model 1 1 − e−sT into the closed-loop system. Two viewpoints on IMP are given to explain the reason. • Cancelation Viewpoint: Similarly, as shown in Fig. 9.7, the transfer function from the desired signal to the error is written as follows: e (s) = =

1 1+

1 1−e−sT

1

G (s)

yd (s)    1 − e−sT

1 − e−sT + G (s) 1 . = −sT 1−e + G (s)

1 1 − e−sT



Then, it is only required to verify whether or not the roots of the equation 1 − e−sT + G (s) = 0 are all in the left s-plane (if the real parts of the roots are strictly less than zero, then the closed-loop system is exponentially stable [25, p. 34, Corollary 7.2.]). Therefore, the tracking problem has been reduced to a stability problem of the closed-loop system. • Actuator-Focused Viewpoint: The actuator-focused viewpoint in the following will give a new explanation on IMP without using transfer  functions. Inthis exam −sT . The term internal model whose Laplace transform is 1 1 − e ple, I M is an   I M s0 , I M s0 , 0 in Theorem 9.1 can be written as v (t) = v (t − T ) + e (t) e (t) = e (t − T ) , where v (θ ) = s0 (θ ) ∈ R, e (θ ) = s0 (θ ) ∈ R, θ ∈ [−T, 0] . Given any θ ∈ [−T, 0] , one has v (kT + θ ) = s0 (θ ) + ks0 (θ ) which is not a period signal if s0 (θ ) = 0, θ ∈ [−T, 0] . Therefore,   theconditions / I M if (i) of Theorem 9.1 holds, namely, the composition I M s0 , I M s0 , 0 ∈

180

9 Filtered Repetitive Control with Nonlinear Systems …

s0 (θ ) = 0, θ ∈ [−T, 0] . Obviously, the conditions (iii) of Theorem 9.1 also hold. If the closed-loop system without external signals is exponentially stable, then, according to the solution of functional functions [25], it can be proven that v (t) and e (t) will both tend to T -periodic signals when the system is driven by a T -periodic signal. Therefore, v∗ , e∗ ∈ I M . According to Theorem 9.1, it can be claimed that e∗ = 0. An intuitive interpretation is also given by contradiction. One has e (t) = v (t) − v (t − T ) . (9.7) Consequently, based on (9.7), it can be concluded that e (t) → 0 as t → ∞ if v∗ ∈ I M (a T -periodic signal). As shown above, to examine the tracking error tending to zero as t → ∞, it only requires verifying whether or not the closedloop system without external signals is exponentially stable. This implies that the tracking problem has been reduced to a stability problem. Remark 9.4 Another way is given to explain why v (t) and e (t) will tend to T periodic solutions as t → ∞. If the closed-loop system without external signals is exponentially stable, then when the system is driven by a T -periodic signal, the closed-loop system is uniformly bounded and uniformly ultimately bounded. By the fixed point theory [24, pp. 164–182], there exists a T -periodic solution for v (t) and e (t). Since the closed-loop system without external signals is exponentially stable, v (t) and e (t) will tend to the T -periodic solution as t → ∞.

9.2.3 Filtered Repetitive Control System Subject to T-Periodic Signal How to stabilize an RC system is not an easy problem due to the inclusion of the time-delay element in the positive feedback loop. It was proven in [13] that stability of RC systems could be achieved for continuous-time systems only  when the plants are proper but not strictly proper. Moreover, the internal model 1 1 − e−sT may lead to instability of the system. The stability of RC systems is insufficiently robust. Taking these into account, low-pass filters are introduced into RCs to enhance the stability of RC systems, resulting in filtered repetitive controllers (FRCs, or filtered repetitive control, also designated FRC) which can improve the robustness of the closed-loop systems. With an appropriate filter, the FRC can usually achieve a satisfactory trade-off between tracking performance and stability, broadens   which in turn −sT replaces itsapplication in practice. For example: the model Q 1 − Q e (s) (s)   1 1 − e−sT  resulting in the closed-loop system shown in Fig. 9.8. Furthermore, if Q (s) = 1 (1 + εs), then the relationship between v (t) and e (t) is e (t) = v (t) − v (t − T ) + ε˙v (t) .

(9.8)

9.2 Actuator-Focused Viewpoint on Internal Model Principle

181

Fig. 9.8 An FRC system for periodic signal tracking

If the closed-loop system without external signals is exponentially stable, then, when the system is driven by a periodic signal, it is easy to see that v (t) and e (t) will both tend to periodic signals as t → ∞. Because of the relationship (9.8), it can be concluded that e (t) − εv˙ (t) → 0. This implies that the tracking error can be adjusted by the filter Q (s) or say ε. Moreover, if v˙ (t) is bounded in t uniformly with respect to (w.r.t) ε as ε → 0, then lim e (t, ε) = 0. On the other hand, increasing ε can t→∞,ε→0

improve the stability of the closed-loop system. Therefore, a satisfactory trade-off between stability and tracking performance can be achieved by using the FRC. As seen above, the new viewpoint not only explains IMP in the time domain but also gives an explanation for the FRC. For the T -periodic signal tracking, it can be concluded that if a periodic signal model with tracking error as the input is incorporated into a closed-loop system all of whose states tend to periodic signals, then perfect tracking is achieved. From the actuator-focused viewpoint, the transfer function is not utilized again. Instead, some tools are needed to verify the system’s behavior as t → ∞. This enlarges the range of tools which can be chosen. For periodic signal tracking, some conditions only need to seek to verify whether or not the system states tend to periodic signals. There exist many conditions on the existence of periodic solutions, which usually rely on the stability of the closed-loop system [24, pp. 263–279]. Consequently, the stability of the closed-loop system is all that is needed. Therefore, the T -periodic signal tracking problem has been reduced to a stability problem of the closed-loop system.

9.3 Actuator-Focused Repetitive Controller Design Method By the aforementioned actuator-focused viewpoint, the actuator-focused RC design method is further proposed to establish conditions that the viewpoint requires. Then the periodic signal tracking problem for linear periodic systems and nonlinear systems is solved.

182

9 Filtered Repetitive Control with Nonlinear Systems …

9.3.1 Linear Periodic System Consider the following linear periodic system: x˙ (t) = A (t) x (t) + B (t) u (t) + d (t) y (t) = CT (t) x (t) + D (t) u (t) ,

(9.9)

where matrices A (t + T ) = A (t) ∈ Rn×n , B (t + T ) = B (t) ∈ Rn×m , C (t + T ) = C (t) ∈ Rn×m , and D (t + T ) = D (t) ∈ Rm×m are bounded; x (t) ∈ Rn is the system state, u (t) ∈ Rm is the control input, d ∈ CT0 ([0, ∞) , Rn ) is a T -periodic disturbance. The objective of the control input u is to make y (t) track a T -periodic desired signal yd ∈ CT0 ([0, ∞) ; Rm ) . For the system (9.9), similar to Eq. (9.8), an FRC is taken in the form as Aε v˙ (t) = −v (t) + (Im − αAε ) v (t − T ) + L1 (t) e (t) u (t) = L2 (t) x (t) + v (t) ,

(9.10)

where e  yd − y, Aε ∈ Rm×m is a positive semidefinite matrix, α > 0, L1 (t + T ) = L1 (t) ∈ Rm×m is nonsingular and L2 (t + T ) = L2 (t) ∈ Rm×n . Moreover, L1 (t) and L2 (t) are bounded. The introduction of variable α is used for stability analysis easier like in [26, Theorem 1] (also see the role of α in (9.43) and (9.44)), which is often a small positive value. Then   y (t) = CT (t) + D (t) L2 (t) x (t) + D (t) v (t) . Next, by combining the system (9.9) and FRC (9.10), the resulting closed-loop system is written as follows: E˙z (t) = Aa (t) z (t) + Aa,−T z (t − T ) + Ba (t) w (t) ,

(9.11)

where        v yd −Im − L1 D −L1 CT + DL2 , Aa = z= ,w = d x B A + BL2 E = diag (Aε , In ) , Aa,−T = diag (Im − αAε , 0) , Ba = diag (L1 , In ) . The following lemma states the relationship between the periodic solution and stability. Consider a general perturbed time-delay system x˙ (t) = f (t, xt , w) , t ≥ t0

(9.12)

9.3 Actuator-Focused Repetitive Controller Design Method

183

with xt0 (s) = φ (s) , s ∈ [−τ, 0] , τ > 0, where x (t) ∈ Rn , w (t) ∈ Rm is a piecewise continuous and bounded perturbation. The function f : [t0 , ∞) × C ([−τ, 0] , Rn ) × Rm → Rn is supposed to be continuous and takes bounded sets into bounded sets. Here, let initial time t0 = 0 for simplicity. Lemma 9.1 ([24, pp. 249–251]) For (9.12), suppose (i) f (t, xt , w (t)) = f (t + T, xt , w (t + T )) , (ii) f (t, xt , w) satisfies a local Lipschitz condition in xt , (iii) x (t + T ) is a solution of (9.12) whenever x (t) is a solution of (9.12). If solutions of (9.12) are uniformly bounded and uniformly ultimately bounded, then (9.12) has a T -periodic solution. Lemma 9.2 Suppose that the solution z (t) = 0 of the differential equation E˙z (t) = Aa (t) z (t) + Aa,−T z (t − T )

(9.13)

is globally exponentially stable. Then the resulting closed-loop system in (9.11) has a unique globally exponentially stable T -periodic solution z∗ . Proof Since z (t) = 0 of (9.13) is globally exponentially stable, the solutions of the resulting closed-loop system (9.11) are uniformly bounded and uniformly ultimately bounded. Then the resulting closed-loop system in (9.11) has a T -periodic solution according to Lemma 9.1. Suppose, to the contrary, that (9.11) has two solutions of (9.11), denoted by z1∗ , z2∗ , satisfying E˙z1∗ (t) = Aa (t) z1∗ (t) + Aa,−T z1∗ (t − T ) + Ba (t) w (t)

(9.14)

E˙z2∗ (t) = Aa (t) z2∗ (t) + Aa,−T z2∗ (t − T ) + Ba (t) w (t) .

(9.15)

Subtracting (9.15) from (9.14) resulting in E˙ze∗ (t) = Aa (t) ze∗ (t) + Aa,−T ze∗ (t − T ) , where ze∗ = z1∗ − z2∗ . Since the solution z (t) = 0 of (9.13) is globally exponentially stable, ze∗ = 0, which implies z1∗ = z2∗ . Therefore the resulting closed-loop system in  (9.11) has a unique stable T -periodic solution z∗ = z1∗ = z2∗ . Theorem 9.2 Suppose that (9.13) is globally exponentially stable. Then, the resulting closed-loop system in (9.11) has a T -periodic solution z∗ = [v∗T x∗T ]T . Furthermore,   e a ≤ sup L1−1 (t) Aε  ( ˙v a + α v a ) . (9.16) t∈[0,T ]

If z (t) = 0 in (9.13) is globally exponentially stable uniformly w.r.t Aε as Aε → 0, e (t, Aε ) a = 0. then lim t→∞, Aε →0

Proof By Lemma 9.2, the resulting closed-loop system in (9.11) has a unique globally exponentially stable T -periodic solution z∗ . By using (9.10), it follows that

184

9 Filtered Repetitive Control with Nonlinear Systems …

L1 (t) e (t) = Aε v˙ (t) + v (t) − (1 − αAε ) v (t − T ) . Taking · a on both sides of the equation above yields    −1   L1 (t) Aε (˙v (t) + αv (t − T ))  e a = lim sup   −1 t→∞  +L1 (t) (v (t) − v (t − T ))          ≤ lim sup L1−1 (t) Aε (˙v (t) + αv (t − T )) + lim sup L1−1 (t) (v (t) − v (t − T )) t→∞ t→∞      ≤ sup L1−1 (t) Aε  ˙v a + α v a , (9.17) t∈[0,T ]

where the condition that the solutions of (9.11) approach the T -periodic solution is used so that   lim sup L1−1 (t) (v (t) − v (t − T )) = 0. t→∞

If (9.13) is globally exponentially stable uniformly w.r.t Aε as Aε → 0, then ˙v a + α v a is bounded uniformly w.r.t Aε as Aε → 0. Consequently, Aε ( ˙v a + α v a ) → 0 as Aε → 0. This implies that e (Aε ) a → 0 as Aε → 0 by (9.17).  A sufficient condition is given in Theorem 9.3 to ensure that z (t) = 0 in (9.13) is globally exponentially stable. Theorem 9.3 If Aε > 0 and there exist matrices P = PT ∈ Rn×n , 0 < Q = QT ∈ Rm×m , λ1 > 0 such that 

PAa (t)

+ AaT (t) P T Aa,−T P

+ Q PAa,−T −Q

0 < PE + ET P  ≤ −λ1 In+m

(9.18) (9.19)

then z (t) = 0 in (9.13) is globally exponentially stable. In particular, if Aε = 0, (9.19) holds and there exists λ2 > 0 such that   sup (Im + L (t) D (t))−1  < 1 t∈[0,T ]



0 0 0 λ2 In+m

(9.20)

 ≤ PE + ET P

(9.21)

then z (t) = 0 in (9.13) is globally exponentially stable. Proof See Appendix.



Remark 9.5 By Theorem 9.2, the periodic signal tracking problem for linear periodic systems (9.9) can be converted to a stability problem (9.13) during which error dynamics are not required. It should be noted that the FRC (9.10) still works if the

9.3 Actuator-Focused Repetitive Controller Design Method

185

disturbance d in system (9.9) is unmatched. This is an advantage of the proposed actuator-focused RC design method. Remark 9.6 As for Theorem 9.3, it is only a sufficient condition for globally exponential stability of (9.13) independent of the period T , where the condition (9.18) is used for establishing a Lyapunov function. Since det (PE) = det (P) det (E) = det (P) det (Aε ) one has det(PE) = 0 if det(Aε ) = 0, where det(·) is the determinant of a matrix. In this case, (9.18) does not hold. According to this, Aε > 0 is necessary for (9.18). When Aε = 0, the condition (9.20) implies D = 0. This is consistent with the result for LTI systems. It was proved in [13] that, for a class of general linear plants, the exponential stability of RC systems could be achieved only when the plant is proper (D = 0) but not strictly proper.1 Remark 9.7 Since Aa (t) is T -periodic, the linear matrix inequalities (9.18) and (9.19) cannot be solved by commonly used tools directly. Roughly, an easy way is T ,i = to select a sufficient number of sampling points in one period, namely, ti = i M 0, . . . , M, M ∈ N, by which Aa (t1 ) , . . . , Aa (t M ) are expected to represent for the T -periodic Aa (t) well. Then, the time-varying linear matrix inequality (9.19) is replaced with M time-invariant linear matrix inequalities at the M sampling times so that commonly used tools are applicable. A brief analysis is given in the following. Given any t ∈ [0, T ] , it is supposed ti ≤ t ≤ ti+1 without loss of generality. Let  PAa (t) + AaT (t) P + Q PAd . H (t) = ATd P −Q 

    ˙ a (t) is bounded. Then H ˙ (t) ≤ κ < ∞, by confining P, Q It is assumed that A when solving the linear matrix inequalities in Theorem 3. Given ελ1 > 0, if the linear matrix inequality (9.19) satisfies   H (ti ) ≤ − λ1 + ελ1 In+m , i = 0, . . . , M

(9.22)

then

t

H (t) = H (ti ) +

˙ (s) ds H

ti

t   H ˙ (s) In+m ds ≤ H (ti ) + ti   ≤ − λ1 + ελ1 In+m + κ (t − ti ) In+m 1 In control theory, a proper transfer function is a transfer function in which the order of the numerator

is not greater than the order of the denominator. A strictly proper transfer function is a transfer function where the order of the numerator is less than the order of the denominator.

186

9 Filtered Repetitive Control with Nonlinear Systems …

  T ≤ − λ1 + ελ1 In+m + κ In+m . M If

Tκ ≤M ελ1

then H (t) ≤ −λ1 In+m .

(9.23)

This implies that if the sampling points are sufficient enough, then (9.22) implies (9.23). Remark 9.8 Generally, the convergence speed mainly depends on the choice of L1 , L2 as they are feedback gains, while the tracking error mainly depends on the choice of Aε , α according to inequality (9.16).

9.3.2 General Nonlinear System In the following, let us consider a general perturbed nonlinear system x˙ = f (t, x, u, d) y = g (x, u) ,

(9.24)

where f : [0, ∞) × Rn × Rm × Rm → Rn , g : Rn × Rm → Rm , and f (t, x, u, d (t)) = f (t + T, x, u, d (t + T )); x (t) ∈ Rn is the system state, u (t) ∈ Rm is the control input, d ∈ CT0 ([0, ∞) , Rm ) is the T -periodic disturbance. The objective of the control input u is to make y (t) track T -periodic desired signal yd ∈ CT0 ([0, ∞) ; Rm ) . For the system (9.24), similar to (9.8), an FRC is taken in the form as Aε v˙ (t) = −v (t) + (1 − αAε ) v (t − T ) + h (t, e) u (t) = ust (x (t)) + v (t) ,

(9.25)

where e  yd − y, Aε ∈ Rm×m is a positive semidefinite matrix, α > 0, h : Rm × Rm → Rm is a continuous function, and ust : Rn → Rm is a state-feedback law employed to stabilize the state of the considered plant (9.24). The functions h (·) and ust (·) are both locally Lipschitz. On the other hand, the continuous function v represents a feedforward input which will drive the output y of (9.24) to track the given desired trajectory yd . Next, the resulting closed-loop system is written as follows: (9.26) E˙z = fa (t, zt , w) ,

9.3 Actuator-Focused Repetitive Controller Design Method

187

where T T   z = vT xT , w = ydT dT E = diag (Aε , In ) , y = g (x, ust (x) + v)   −v + (1 − αAε ) v (t − T ) − h (t, e) . fa (t, zt , w) = f (t, x, ust (x) + v, d) Theorem 9.4 Suppose (i) the solutions of the resulting closed-loop system in (9.26) are uniformly bounded and uniformly ultimately bounded; (ii) h (t, e) → 0 implies e → 0. Then the resulting closed-loop system in (9.26) has a T -periodic solution z∗ = [v∗T x∗T ]T . Let ze  z − z∗ . Furthermore, if     E˙ze = fa t, zt∗ + zet , w − fa t, zt∗ , w

(9.27)

is locally (globally) exponentially stable, then the T -periodic solution z∗ is locally (globally) exponentially stable and h (·, e) a ≤ Aε ( ˙v a + α v a ) holds locally (globally). Furthermore, if ˙v (Aε ) a and v (Aε ) a are bounded uniformly w.r.t Aε as Aε → 0, then lim e (Aε ) a = 0 locally (globally). Aε →0

Proof Since w is a T -periodic function and f (t, x, u, d (t)) = f (t + T, x, u, d (t + T )), one has fa (t, zt , w (t)) = fa (t + T, zt , w (t + T )). Furthermore, fa (t, zt , w) is locally Lipschitz and the solutions of the resulting closed-loop system (9.26) are uniformly bounded and uniformly ultimately bounded. Then the resulting closed-loop system in (9.26) has a T -periodic solution according to Lemma 9.1. Since z = z∗ + ze , one has (9.27). If (9.27) is locally (globally) exponentially stable, ze → 0 as t → ∞ locally (globally). This implies z → z∗ as t → ∞ locally (globally), namely, the T -periodic solution z∗ is locally (globally) exponentially stable. By using (9.25), it follows that h (t, e) = Aε v˙ (t) + v (t) − (1 − αAε ) v (t − T ) .

(9.28)

Taking · a on both sides of the equation (9.28) yields    Aε (˙v (t) + αv (t − T ))   h (·, e) a = lim sup   +v (t) − v (t − T )  t→∞ ≤ lim sup Aε (˙v (t) + αv (t − T )) + lim sup v (t) − v (t − T ) t→∞

t→∞

≤ Aε ( ˙v a + α v a ) , where the condition that the solutions of (9.26) approach the T -periodic solution is used. If ˙v a and v a are bounded uniformly w.r.t Aε as Aε → 0, then

188

9 Filtered Repetitive Control with Nonlinear Systems …

Aε ( ˙v a + α v a ) → 0 as Aε → 0. This implies that h (·, e· ) a → 0 as Aε → 0. Note that h (t, e) → 0 implies e → 0 as t → ∞. Then lim e (Aε ) a Aε →0

= 0.



Remark 9.9 The state-feedback law ust (·) employed is to stabilize the state of the considered plant (9.24), i.e., making condition (i) of Theorem 9.4 hold. It is required that h (t, e) → 0 imply e → 0 according to condition (ii) of Theorem 9.4. The major idea of the actuator-focused RC design is to make h (t, e) as the input of the internal model, i.e., Aε v˙ (t) = −v (t) + (1 − αAε ) v (t − T ) + h (t, e) appearing in (9.25). If the closed-loop system tends to equilibrium, then the tracking error can be analyzed according to the RC itself. This is based on the actuator-focused viewpoint. Remark 9.10 The major advantage of the proposed actuator-focused RC design is to avoid the derivation of error dynamics. This facilitates the tracking controller design. Through incorporating the internal model into the closed-loop system, it is only necessary to ensure that the latter is uniformly bounded and uniformly ultimately bounded. Uniform boundedness and uniformly ultimate boundedness are often related to the exponential stability of the closed-loop system when the exogenous (reference and disturbance) signals are themselves fixed identically at zero.

9.4 Numerical Example In order to demonstrate its effectiveness, the actuator-focused RC design method is further proposed to solve three periodic signal tracking problems for a linear periodic system (time-varying), a minimum-phase nonlinear system, and a nonminimumphase nonlinear system.

9.4.1 Linear Periodic System Consider a linear periodic system (9.9) with 

   0 1 0.3 sin t A (t) = , B (t) = , −1 − 0.3 sin t −2 − 0.6 cos t 1     0 1 d (t) = , C (t) = , D (t) = 1. sin (t + 1) 0.6 cos t The objective is to design u to drive the signal y (t) − yd (t) → 0, where yd (t) = sin t for simplicity. For the system above, according to FRC (9.10), design

9.4 Numerical Example

189

-1

=0 =0.1 =1

-2

-3

-4

-5

-6

-7

0

1

2

3

4

5

6

7

t (sec)

Fig. 9.9 Function ρ (t) in one period

ε˙v (t) = −v (t) + (1 − 0.01ε) v (t − T ) + L 1 (yd (t) − y (t)) u (t) = v (t) , v (θ ) = 0, θ ∈ [−T, 0] ,

(9.29)

where L 1 = 6. Let ρ (t) be the maximal eigenvalue of the matrix at the left side of (9.19). If ρ (t) < 0, ∀t ∈ [0, 2π ], then (9.19) holds. The matrices P and Q in Theorem 9.3 can be found as ⎡ ⎤ ⎡ ⎤ 47.11 24.32 −27.13 23.39 11.63 −11.42 P = ⎣ 11.63 87.32 16.72 ⎦ , Q = ⎣ 24.31 27.87 −9.08 ⎦ . −27.13 −9.08 34.18 −11.42 16.72 56.98 With them, the curve ρ (t) is plotted in Fig. 9.9 with different values ε = 0, 0.1, 1. As shown, (9.19) holds. Meanwhile, with the same matrices P and Q, the condition (9.18) holds when ε = 0.1, 1, the conditions (9.20) and (9.21) hold when ε = 0. According to Theorem 9.3, the designed controller with ε = 0, 0.1, 1 can make the closed-loop system uniformly bounded and uniformly ultimately bounded. When ε = 0, by the actuator-focused viewpoint, the control form (9.29) is to establish an input–output relation as follows: yd (t) − y (t) =

1 (v (t) − v (t − T )) . L1

Since v approaches a T -periodic signal, it can be concluded that yd (t) − y (t) → 0 as t → ∞. When ε = 0.1, 1, the tracking error yd (t) − y (t) is uniformly ultimately bounded. With different values ε = 0, 0.1, 1, the corresponding tracking errors are shown in Fig. 9.10, where the tracking error is nearly zero after 20s when ε = 0, and

190

9 Filtered Repetitive Control with Nonlinear Systems …

1

=0 =0.1 =1

0.8 0.6 0.4 0.2 0 -0.2 -0.4

0

10

20

30

40

50

60

70

t (sec)

Fig. 9.10 Tracking error for linear periodic system with different values ε.

tracking error is greatest when ε = 1. Therefore, the simulation is consistent with our analysis in Theorem 9.2.

9.4.2 Minimum-Phase Nonlinear System The dynamics of an m-degree-of-freedom manipulator are described by the following differential equation ˙ q˙ + G (q) = u, D (q) q¨ + C (q, q) (9.30) where q ∈ Rm denotes the vector of generalized displacements in robot coordinates, u ∈ Rm denotes the vector of generalized control input forces in robot coordinates; ˙ ∈ Rm×m is the vector of D (q) ∈ Rm×m is the manipulator inertial matrix, C (q, q) m centripetal and Coriolis torques and G (q) ∈ R is the vector of gravitational torques. It is assumed that both q and q˙ are available from measurements. Because of no internal dynamics, the system (9.30) is a minimum-phase nonlinear system. Two common assumptions in the following are often made on the system (9.30) [27, 28]. (A1) The inertial matrix D (q) is symmetric, uniformly positive definite, and bounded, i.e., (9.31) 0 < λ D Im ≤ D (q) ≤ λ¯ D Im , ∀q ∈ Rm , where λ D , λ¯ D > 0. ˙ (q) − 2C (q, q) ˙ is skew-symmetric, hence (A2) The matrix D   ˙ (q) − 2C (q, q) ˙ x = 0, ∀x ∈ Rm . xT D

9.4 Numerical Example

191

For a given desired trajectory qd ∈ C P2 T ([0, ∞) , Rm ), the controller u is designed to make q track qd . Define a new state x as follows: x = q˙ + μq, where μ > 0. According to (9.25), a control law u is taken in the form as εv˙ (t) = −v (t) + (1 − αε) v (t − T ) + k ((q˙ d + μqd ) − x) (t) u (t) = v (t) − Mx (t) + G (q (t)) − μD (q (t)) q˙ (t) − μC (q (t) , q˙ (t)) q (t) , (9.32) where v (s) = 0, s ∈ [−T, 0] , 0 < M = MT ∈ Rm×m is positive-definite matrix and k > 0. Substituting the controller (9.32) into (9.30) results in εv˙ (t) = −v (t) + (1 − αε) v (t − T ) + k ((q˙ d + μqd ) − x) (t) x˙ (t) = −D−1 (q (t)) (C (q (t) , q˙ (t)) + M (t)) x (t) + D−1 (q (t)) v (t) . (9.33) The closed-loop system (9.33) can be rewritten in the form of (9.26) with T  z = vT xT , E = diag (εIm , In ) T  w = xdT 0 , xd = q˙ d + μqd (9.34)   −v (t) + (1 − αε) v (t − T ) + k ((q˙ d + μqd ) − x) (t) . fa (t, zt , w) = E−1 −D−1 (q (t)) (C (q (t) , q˙ (t)) + M) x (t) + D−1 (q (t)) v (t) The closed-loop system (9.33) can be rewritten in the form of (9.26). Suppose (i) Assumptions (A1)–(A2) hold, (ii) 0 < αε < 1, ε, α, k > 0. Then the solutions of the closed-loop system (9.33) are uniformly bounded and uniformly ultimately bounded (See Appendix). Then, the solutions of closed-loop system (9.33) are uniformly ultimately bounded. Therefore, the closed-loop system has a T -periodic solution by Lemma 9.1. By the actuator-focused viewpoint, according to (9.33), the control term v is to establish an input–output relationship as follows: xd (t) − x (t) =

1 (˙v (t) + v (t) − (1 − αε) v (t − T )) . k

Suppose qd = [sin t cos t]T with periodicity T = 2π . The parameters of manipulator are chosen as in [29, p. 642]. The controller parameters are chosen as follows: M = 100I2 , μ = 1, α = 0.1, ε = 0.01, k = 200. From the simulation, v approaches a T -periodic solution, then

192

9 Filtered Repetitive Control with Nonlinear Systems …

1.5

q (1) d

1

q(1)

0.5 0 −0.5 −1 −1.5

0

10

20

30

40

50

60

1.5

q (2) d

1

q(2)

0.5 0 −0.5 −1 −1.5

0

10

30

20

40

50

60

t (sec)

Fig. 9.11 Two-degree-of-freedom manipulator tracking

xd (t) − x (t) −

ε (˙v + αv) (t) → 0. k

Therefore, it is expected that q can track qd with good precision. As shown in Fig. 9.11, q has tracked qd with good precision in the 8th cycle. In the controller design, the stability of the closed-loop system is only considered rather than the stability of the error dynamics. As observed above, the controller design is simple, and the tracking result is satisfactory.

9.4.3 Nonminimum-Phase Nonlinear System Consider the following nonlinear system: η˙ = sin η + ξ + dη ξ˙ = u + dξ y = ξ,

(9.35)

9.4 Numerical Example

193

where η (t) , ξ (t) , y (t) ∈ R, dη , dξ ∈ CT0 ([0, ∞) , Rm ) are T -periodic disturbances. Since zero dynamics η˙ = sin η is unstable, the system (9.35) is a nonminimum-phase nonlinear system. The control is required not only to cause y to track yd but also to make the internal dynamics bounded. If existing methods are used to handle this problem, then it may be difficult to obtain the ideal internal dynamics because the disturbance in the internal dynamics is unknown. To the authors’ knowledge, general methods handle such a case only at high computational cost [30, 31]. Compared with the existing design, the proposed design method will simplify the controller design. According to (9.25), a control law u is taken in the form as (see [23] for details) ε˙v (t) = −v (t) + (1 − αε) v (t − T ) + k (yd − y) (t) u (t) = − (q1 + cos η) (−q1 η + z) (t) − ρz (t) − q2 η (t) + v (t) ,

(9.36)

where v (s) = 0, s ∈ [−T, 0] , v, α, ε, k, q1 , q2 , ρ ∈ R and z = ξ + q1 η + sin η. Substituting the controller (9.36) into (9.35) results in εv˙ (t) = −v (t) + (1 − αε) v (t − T ) − k (z − q1 η − sin η) (t) + kyd (t) η˙ (t) = −q1 η (t) + z (t) + dη (t) z˙ (t) = −kz (t) − q2 η (t) + v (t) + dξ (t) + dη (t) (q1 + cos η) (t) .

(9.37)

The stability of the closed-loop system only needs to be considered after incorporating the internal model. It can be proven that the solutions of the resulting closedloop system (9.37) are uniformly bounded and uniformly ultimately bounded (see Appendix). Then the closed-loop system has a T -periodic solution by Lemma 9.1. Suppose dη = 0.1 sin t, dξ = 0.2 sin t, and yd = sin t. The controller parameters are chosen as follows: ε = 0.1, α = 0.1, k = 5, q1 = q2 = 1, ρ = 2. From the simulation, v approaches a T -periodic solution, then (yd − y) (t) −

ε (˙v + αv) (t) → 0. k

Therefore, it is expected that y can track yd with good precision. Figure 9.12 shows the response of the closed-loop system from the given initial condition. The output tracks the desired trajectory very quickly in the second cycle. The internal state η (t) is also bounded. In the controller design, the stability of the closed-loop system is only considered rather than the stability of the error dynamics so that the derivation of the ideal internal dynamics is avoided. Remark 9.11 Let us recall the method in [30], which is used to solve the similar problem. The internal dynamics of (9.35) is η˙ = sin η + ξ + dη .

(9.38)

194

9 Filtered Repetitive Control with Nonlinear Systems … 6

η(t) y(t) y (t)

5

d

4

3

2

1

0

−1 0

10

20

30

40

50

60

t (sec)

Fig. 9.12 Periodic signal tracking

Since dη is unknown, its estimate, namely, dˆη , should be obtained by an observer first. Let ηd be the reference state of η. The nonlinear ideal internal dynamics is η˙ d = sin ηd + yd + dη ,

(9.39)

where ξ in (9.38) has been replaced by its reference yd . According to [30], (9.39) should be linearized as (9.40) η˙ d = ηd + yd + dη for computation in the following. Furthermore, by assuming that the forcing yd + dˆη can be piecewise modeled by a linear exosystem with known characteristic polynomial, the estimate of ηd , namely, ηˆ d , is generated by a designed differential function related to (9.40). With ηˆ d obtained, the tracking problem can be converted to be a stabilizing control problem. As shown, solving the observer and the differential function is time-consuming, especially when the dimension of the whole system is large. Moreover, the error will be generated because of twice approximations, namely, dˆη and then ηˆ d . Therefore, the proposed method here facilitates the controller design and simplifies the designed controllers.

9.5 Summary

195

9.5 Summary A new viewpoint, namely, actuator-focused design, on IMP is proposed in this chapter. It can be used to explain how internal models work in the time domain. Compared with the cancelation viewpoint, the proposed viewpoint can be applied to nonlinear systems. Compared with the geometrical viewpoint, the proposed viewpoint is more suitable to explain how the periodic internal model, an infinite-dimensional internal model, works in the time domain. Guided by the actuator-focused viewpoint, the actuator-focused RC design method is further proposed for periodic signal tracking. In the controller design, the stability of the closed-loop system needs to be considered rather than that of the error dynamics. In order to demonstrate its effectiveness, the proposed design method is applied to RC problems for a linear periodic system (time-varying), a minimum-phase nonlinear system and a nonminimum-phase nonlinear system. From the given examples, the controller design is simple, and the tracking result is satisfactory. Furthermore, from the nonminimum-phase nonlinear system tracking example, the proposed design method provides a possible way to deal with some currently difficult problems.

9.6 Appendix 9.6.1 Proof of Theorem 9.3 Choose a Lyapunov functional as V (zt ) =

 1 T  z (t) PE + ET P z (t) + 2

0 −T

ztT (θ ) Qzt (θ ) dθ.

Taking its derivative along (9.13) yields   V˙ (zt ) = zT (t) PE + ET P z˙ (t) + zT (t) Qz (t) − zT (t − T ) Qz (t − T ) ≤ −λ1 z (t) 2 − λ1 z (t − T ) 2 ≤ 0,

(9.41)

where (9.18) is utilized. Based on (9.41), two conclusions are proven in the following: (i) z (t) = 0 in (9.13) is globally exponentially stable when Aε > 0. For system (9.13), there exist γ1 , γ2 , ρ > 0 such that

γ1 z (t) 2 ≤ V (zt ) ≤ γ2 z (t) 2 + ρ V˙ (zt ) ≤ −λ1 z (t) 2 .

0 −T

zt (θ ) 2 dθ

196

9 Filtered Repetitive Control with Nonlinear Systems …

According to [32, Theorem 1], (9.13) is globally exponential convergence. Furthermore, z (t) = 0 is globally exponentially stable according to the stability definition.   (ii) If supt∈[0,T ] (Im + L1 (t) D (t))−1  < 1, then z (t) = 0 in (9.13) is globally exponentially stable when Aε = 0. For system (9.13), there exist γ2 , ρ > 0 such that

λ2 x (t) ≤ V (zt ) ≤ γ2 z (t) + ρ 2

0

2

V˙ (zt ) ≤ −λ1 z (t) 2 .

−T

zt (θ ) 2 dθ

Similar to the proof in [32, Theorem 1], x (t) is globally exponential convergence. Arranging (9.10) results in v (t) = (Im + L1 (t) D (t))−1 v (t − T )   − (Im + L1 (t) D (t))−1 L1 (t) CT (t) + D (t) L2 (t) x (t) .

(9.42)

  Since supt∈[0,T ] (Im + L (t) D (t))−1  < 1 and x (t) is globally exponential convergence, then v (t) = 0 is globally exponential convergence as well. Consequently, according to the stability definition, z = [vT xT ]T = 0 is globally exponentially stable.

9.6.2 Uniformly Ultimate Boundedness Proof for Minimum-Phase Nonlinear System Design a Lyapunov functional to be k ε 1 V (zt ) = xT (t) D (q (t)) x (t) + vT (t) v (t) + 2 2 2

0

−T

vtT (θ ) vt (θ ) dθ,

where (A1) is utilized. Taking the derivative of V along the solutions of (9.33) results in  1 T v (t) v (t) − vT (t − T ) v (t − T ) V˙ (zt ) = −kxT (t) Mx (t) + kxT (t) v (t) + 2 + vT (t) (−v (t) + (1 − αε) v (t − T )) + kvT (t) (xd (t) − x (t)) αε (2 − αε) T v (t) v (t) + kvT (t) xd (t) , ≤ −kxT (t) Mx (t) − 2 where (A2) is utilized. Since 1 εvT v + 2kvT xd + k 2 xdT xd ≥ 0 ε

9.6 Appendix

197

for any ε > 0, one has V˙ ≤ −kxT Mx −



 αε (2 − αε) 1 − ε vT v + k 2 xdT xd . 2 ε

Here, ε is chosen sufficiently small so that Lyapunov functional satisfies

αε(2−αε) 2

(9.43)

− ε > 0. Therefore, the given

1 0 zt (θ ) 2 dθ γ0 z(t) 2 ≤ V (zt ) ≤ γ1 z(t) 2 + 2 −T   2 2 ˙ V (zt ) ≤ −γ2 z(t) + χ sup xd (t) , t∈[0,T ]

      ¯ kλ where γ0 = min 2D , 2ε , γ1 = max k λ2D , 2ε , γ2 = min kλmin (M) , αε(2−αε) −ε 2 and function χ belongs to class K [29, Definition 4.2, p. 144]. According to [32, Theorem 1], the solutions of (9.33) are uniformly bounded and uniformly ultimately bounded.

9.6.3 Uniformly Ultimate Boundedness Proof for Nonminimum-Phase Nonlinear System Design a Lyapunov functional to be 1 1 ε 1 V (zt ) = p1 η2 (t) + p2 z 2 (t) + v2 (t) + 2 2 2 2

0 −T

vt2 (θ ) dθ,

where z  [η z v]T and p1 , p2 , ε > 0. Taking the derivative of V along the solutions of (9.37) results in  1 2 v (t) − v2 (t − T ) V˙ (zt ) = p1 η (t) η˙ (t) + p2 z (t) z˙ (t) + εv (t) v˙ (t) + 2 αε (2 − αε) 2 v (t) = − p1 q1 η2 (t) − p2 kz 2 (t) − 2 + ( p1 − p2 q2 ) η (t) z (t) + ( p2 − ρ) z (t) v (t) + v (q1 η (t) + sin η (t))   + p1 η(t)dη (t) + p2 dξ (t) − dη (t) (q1 (t) + cos η (t)) z(t) + ρv(t)yd (t). By fixing p1 , q1 and choosing p2 = ρ and 0 < αε < 1, if k is chosen sufficiently large, then

198

9 Filtered Repetitive Control with Nonlinear Systems …

αε (2 − αε) 2 v + ( p1 − p2 q2 ) ηz 2 + ( p2 − ρ) zv + v (q1 η + sin η) ≤ −θ1 η2 − θ2 z 2 − θ3 v2 , − p1 q1 η2 − p2 kz 2 −

(9.44)

where θ1 , θ2 , θ3 > 0. Furthermore, there exists a class K function χ : [0, ∞) → [0, ∞) [29, Definition 4.2, p. 144] such that  V˙ ≤

−θ1 η2



θ2 z 2



θ3 v2



   2  2  2 sup yd (t) + dη (t) + dξ (t) ,

t∈[0,T ]

where θ1 , θ2 , θ3 > 0. Therefore, the given Lyapunov functional satisfies

1 0 zt (θ ) 2 dθ γ0 z(t) 2 ≤ V (zt ) ≤ γ1 z(t) 2 + 2 −T     2  2  2 2     ˙ V (zt ) ≤ −γ2 z(t) + χ sup yd (t) + dη (t) + dξ (t) , t∈[0,T ]

      where γ0 = min 21 p1 , 21 p2 , 2ε , γ1 = max 21 p1 , 21 p2 , 2ε and γ2 = min θ1 , θ2 , θ3 . According to [32, Theorem 1], the solutions of (9.37) are uniformly bounded and uniformly ultimately bounded. Furthermore, ξ is also uniformly ultimately bounded by using the relationship ξ = z − q1 η − sin η.

References 1. Quan, Q., & Cai, K.-Y. (2019). Repetitive control for nonlinear systems: an actuator-focussed design method. International Journal of Control,. https://doi.org/10.1080/00207179.2019. 1639077. 2. Francis, B. A., & Wonham, W. M. (1976). The internal model principle of control theory. Automatica, 12(5), 457–465. 3. Wonham, W. M. (1976). Towards an abstract internal model principle. IEEE Transaction on Systems, Man, and Cybernetics, 6(11), 735–740. 4. Wonham, W. M. (1979). Linear multivariable control: A geometric approach. New York: Springer. 5. Isidori, A., & Byrnes, C. I. (1990). Output regulation of nonlinear systems. IEEE Transactions on Automatic Control, 35(2), 131–140. 6. Isidori, A., Marconi, L., & Serrani, A. (2003). Robust autonomous guidance: An internal model-based approach. London: Springer. 7. Huang, J. (2004). Nonlinear output regulation: Theory and applications. Philadelphia: SIAM. 8. Memon, A. Y., & Khalil, H. K. (2010). Output regulation of nonlinear systems using conditional servocompensators. Automatica, 46(7), 1119–1128. 9. Wieland, P., Sepulchre, R., & Allgower, F. (2011). An internal model principle is necessary and sufficient for linear output synchronization. Automatica, 47(5), 1068–1074. 10. Knobloch, H. W., Isidori, A., & Flockerzi, D. (2014). Disturbance attenuation for uncertain control systems. London: Springer.

References

199

11. Chen, Z., & Huang, J. (2014). Stabilization and regulation of nonlinear systems: A robust and adaptive approach. London: Springer. 12. Trip, S., Burger, M., & De Persis, C. (2016). An internal model approach to (optimal) frequency regulation in power grids with time-varying voltages. Automatica, 64, 240–253. 13. Hara, S., Yamamoto, Y., Omata, T., & Nakano, M. (1988). Repetitive control system: A new type servo system for periodic exogenous signals. IEEE Transactions on Automatic Control, 33(7), 659–668. 14. Weiss, G., & Häfele, M. (1999). Repetitive control of MIMO systems using H∞ design. Automatica, 35(7), 1185–1199. 15. Byrnes, C., Laukó, I., Gilliam, D., & Shubov, V. (2000). Output regulation for linear distributed parameter systems. IEEE Transactions on Automatic Control, 45(12), 2236–2252. 16. Hämäläinen, T. (2005). Robust low-gain regulation of stable infinite-dimensional systems. Doctoral dissertation, Tampere University of Technology, Tampere, Finland. 17. Immonen, E. (2007). Practical output regulation for bounded linear infinite-dimensional state space systems. Automatica, 43(5), 786–794. 18. Immonen, E. (2007). On the internal model structure for infinite-dimensional systems: two common controller types and repetitive control. SIAM Journal on Control and Optimization, 45(6), 2065–2093. 19. Hämäläinen, T., & Pohjolainen, S. (2010). Robust regulation of distributed parameter systems with infinite-dimensional exosystems. SIAM Journal on Control and Optimization, 48(8), 4846–4873. 20. Paunonen, L., & Pohjolainen, S. (2010). Internal model theory for distributed parameter systems. SIAM Journal on Control and Optimization, 48(7), 4753–4775. 21. Natarajan, V., Gilliam, D. S., & Weiss, G. (2014). The state feedback regulator problem for regular linear systems. IEEE Transactions on Automatic Control, 59(10), 2708–2723. 22. Xu, X., & Dubljevic, S. (2017). Output and error feedback regulator designs for linear infinitedimensional systems. Automatica, 83, 170–178. 23. Quan, Q., & Cai, K.-Y. (2010). A new viewpoint on the internal model principle and its application to periodic signal tracking. In The 8th World Congress on Intelligent Control and Automation, Jinan, Shandong (pp. 1162–1167). 24. Burton, T. A. (1985). Stability and periodic solutions of ordinary and functional differential equations. London: Academic Press. 25. Hale, J. K., & Lunel, S. M. V. (1993). Introduction to functional differential equations. New York: Springer. 26. Quan, Q., & Cai, K.-Y. (2011). A filtered repetitive controller for a class of nonlinear systems. IEEE Transaction on Automatic Control, 56, 399–405. 27. Spong, M. W., & Vidyasagar, M. (1989). Robot dynamics and control. New York: Wiley. 28. Lewis, F. L., Abdallah, C. T., & Dawson, D. M. (1993). Control of robot manipulators. New York: Macmillan. 29. Khalil, H. K. (2002). Nonlinear systems. Englewood Cliffs: Prentice-Hall. 30. Shkolnikov, I. A., & Shtessel, Y. B. (2002). Tracking in a class of nonminimum-phase systems with nonlinear internal dynamics via sliding mode control using method of system center. Automatica, 38(5), 837–842. 31. Quan, Q., & Cai, K.-Y. (2017). A new generator of causal ideal internal dynamics for a class of unstable linear differential equations. International Journal of Robust and Nonlinear Control, 27(12), 2086–2101. 32. Quan, Q., & Cai, K.-Y. (2012). A new method to obtain ultimate bounds and convergence rates for perturbed time-delay systems. International Journal of Robust Nonlinear Control, 22, 1873–1880.

Chapter 10

Repetitive Control with Nonlinear Systems: A Contraction Mapping Method

The adaptive-control-like method utilizes Lyapunov-based design techniques to develop feedforward to compensate for unknown disturbance. Moreover, the designed controllers depend on the concrete forms of the Lyapunov functions. The Lyapunovbased design technique is a good choice for nonlinear RC problems. However, in most cases, no general method exists for constructing Lyapunov functions for ordinary problems. Contraction mapping is a widely used tool for iterative learning control (ILC, or iterative learning controller, which is also designated as ILC). Using this tool, the ILC design is straightforward and uniform without requiring Lyapunov functions. Inspired by this concept, a contraction mapping-based RC design is proposed for a class of nonlinear systems in this chapter. This chapter aims to answer the following question: How is contraction mapping used for the analysis of nonlinear RC systems? To answer this question, preliminary lemmas related to contraction mapping, problem formulation, D-type RC design, convergence analysis, and numerical examples must be discussed. This chapter presents a revised version of a paper that was published [1].

10.1 Preliminary Lemma for Contraction Mapping Define ξi+1 (τ )  ξi (τ + T ) , where ξ = x, y and xi , yi : [0, T ] → R+ . They have the following relationship: xi (τ ) ≤ e−a1 τ xi (0) + a2



τ

e−a1 (τ −s) yi (s) ds

(10.1)

0

yi+1 (τ ) ≤ a3 yi (τ ) + a4 xi (τ ) , © Springer Nature Singapore Pte Ltd. 2020 Q. Quan and K.-Y. Cai, Filtered Repetitive Control with Nonlinear Systems, https://doi.org/10.1007/978-981-15-1454-8_10

(10.2) 201

202

10 Repetitive Control with Nonlinear Systems …

where a1 , a2 , a3 , a4 ∈ R+ , i = 0, 1, . . . . The inequalities (10.1) and (10.2) are rewritten as follows:  T e−a1 (T −s) yi (s) ds xi+1 (0) ≤ e−a1 T xi (0) + a2 0  τ −a1 τ xi (0) + a3 yi (τ ) + a2 a4 e−a1 (τ −s) yi (s) ds. yi+1 (τ ) ≤ a4 e 0

Define the following operators: 

T

Q0 (u)  a2 

0

Q1 (u)  a2

τ

e−a1 (T −s) u (s) ds, u ∈ C ([0, T ] , R) e−a1 (τ −s) u (s) ds, u ∈ C ([0, T ] , R)

0

Q2 (u) (τ )  e−a1 τ u, u ∈ R+ 

and zi =

xi (0) yi

 ∈ R × C ([0, T ] , R) .

Then, inequalities (10.1) and (10.2) can be written in a compact form as follows: zi+1 ≤ Qzi , 

where Q=

Q0 e−a1 T a4 Q2 a3 + a4 Q1

(10.3)  .

(10.4)

It is easy to verify that R × C ([0, T ] , R) is a Banach space and Q is a compact linear map. Lemma 10.1 If and only if a3 < 1 and

a2 a4 0,

10.1 Preliminary Lemma for Contraction Mapping

203

where · B is induced Let z¯ i = Qi z0 . If rQ < 1, then  i  by ·c [3, section15.1]. −ωi zi c ≤ ¯zi c ≤ Q  B z0 c < Me z0 c , where zi c = yi [0,T ] + |xi (0)| .  Therefore, if rQ < 1, then yi [0,T ] , |xi (0)| converge to zero exponentially.

10.2 Problem Formulation To illustrate the generality of the proposed RC scheme, we consider the following error dynamics examined in [4, 5]:   e˙ (t) = f (t, e) + b (t, e) v (t) − vˆ (t) ,

(10.6)

where e (t) ∈ Rn is an error vector; v (t) = [v1 · · · vm ]T ∈ Rm is an unknown continuous-time vector function; vˆ (t) ∈ Rm is a subsequently designed learningbased estimate of v (t) and f (t; e) ∈ Rn ; and b (t, e) ∈ Rn×m are continuous vector functions with respect to the arguments t and e. Moreover, we make the following assumption. Assumption 10.1 The unknown continuous-time function vector v (t) is periodic, i.e., v (t + T ) = v (t) , where T is a known period. Moreover, v (t) is within the upper and lower bounds of a saturation function, namely, v (t) = sat (v (t)) , where sat (ξ )  [satβ1 (ξ1 ) ...satβm (ξm )]T  for |ξi | ≤ βi ξi satβi (ξi ) = , ∀ξi ∈ R sgn (ξi ) βi for |ξi | > βi and sgn(·) denotes the standard signum function. A lemma on saturation is needed for the analysis in the next section, which is stated as follows. Lemma 10.3 ([5]) For any a, b ∈ Rm , the following inequality always holds: sat (a) − sat (b) ≤ a − b . Although the Lyapunov functions are certainly useful for this problem, they have particular difficulties at the same time. In most cases, although there exists a Lyapunov function suitable to a general system such as (10.7), it is often difficult to find it. For

204

10 Repetitive Control with Nonlinear Systems …

example, under some conditions, if the trajectories of the system

satisfy

e˙ (t) = f (t, e)

(10.7)

e (t) ≤ k e (t0 ) e−λ(t−t0 ) , ∀t ≥ t0 ≥ 0,

(10.8)

where k, λ ∈ R+ , then there exists a Lyapunov function based on the converse Lyapunov theorem [6, pp. 163–165], which is formulated in an assumption given as follows. Assumption 10.2 The origin of the error system is globally exponentially stable for e˙ (t) = f (t, e) .

(10.9)

Furthermore, there exists a first-order differentiable, positive-definite function V (t,e) and positive constants ci , i = 1, . . . , 4, such that c1 e (t)2 ≤ V (t,e) ≤ c2 e (t)2   ∂V T ∂V + f (t, e) ≤ −c3 e (t)2 ∂t ∂e   ∂V     ∂e  ≤ c4 e (t) .

(10.10) (10.11) (10.12)

Assumption 10.3 There exist ςl , ςh ∈ R+ such that 0 < ςl ≤ b (t, e) ≤ ςh , ∀e ∈ Rn , ∀t ∈ R+ . Assumption 10.4 The function f (t, e) is a continuously differentiable function with respect to the arguments t and e. Furthermore,    ∂f (t, e)     ∂e  ≤ l, ∀t ∈ R+ .

(10.13)

Under Assumptions 10.1–10.4, the objective is to design a vˆ (t) ∈ Rm subject to the saturation sat(·) for the system (10.6) to make e (t) tend to zero exponentially. Remark 10.1 Assumption 10.2 is used to replace the inequality (10.8). The positivedefinite function V (t,e) need not be known in the RC design. If the origin of the error dynamics (10.9) is globally exponentially stable and the trajectories of the system satisfy (10.8), then using the converse Lyapunov theorem, there exists V (t, e) such that ci , i = 1, . . . , 4 can be selected as [6, pp. 163–165]

10.2 Problem Formulation

205

  k 2 1 − e−2λδ 1 − e−2lδ c1 (δ) = , c2 (δ) = 2l 2λ   2k 1 − e−(λ−l)δ , c3 (δ) = 1 − k 2 e−2λδ , c4 (δ) = λ−l

(10.14)

where δ is a positive constant to be chosen and l is defined in (10.13). Remark 10.2 This remark shows that Assumption 10.4 Under  is reasonable.

T    Assumption 10.1, v (t) ≤ bβ , ∀t ∈ R+ , where bβ =  β1 β2 · · · βm  . Given that the learning term vˆ (t) ∈ Rm is designed subject to the saturation sat(·) , the term satisfies    b (t, e) v (t) − vˆ (t)  ≤ 2ςh bβ (10.15) ∀e ∈ Rn , ∀t ∈ R+ , where Assumption 10.3 is utilized. Based on (10.15) and Assumption 10.2, taking the derivative of V defined in Assumption 10.2 along with (10.6) results in V˙ ≤ −c3 e (t)2 + 2c4 e (t) ςh bβ . Using [6, Theorem 4.18, p. 172], e (t) is uniformly bounded. Accordingly, l in (10.13) depends upon the bound on e (t). In this case, the term f (t, e) is only required to satisfy the local Lipschitz condition rather than the global Lipschitz condition. In particular, if the term f (t, e) satisfies the global Lipschitz condition, then l depends on the Lipschitz constant directly.

10.3 Saturated D-Type Repetitive Controller Design and Convergence Analysis 10.3.1 Saturated D-Type Repetitive Controller Design The saturated D-type RC is proposed as follows:   vˆ (t) = sat vˆ (t − T ) + h (t − T ) vˆ (t) = 0, ∀t ∈ [−T, 0) .

(10.16)

  Here, h (t) =  (t) e˙ (t) − f¯ (t, e) , where f¯ (t, e) is assumed to satisfy   f (t, e) − f¯ (t, e) ≤ γ e (t) , γ ∈ R+ .

(10.17)

The function  (t) ∈ Rm×m , a continuous-time matrix on [−T, ∞) with supt  (t) ≤ k < ∞, is developed to make vˆ (t) continuous on [0, ∞) .

206

10 Repetitive Control with Nonlinear Systems …

Remark 10.3 In most cases, f (t, e) and b (t, e) can be written as f (t, e) = f¯ (t, e) + ¯ b¯ and f, b denote certain f (t, e) and b (t, e) = b¯ (t, e) + b (t, e) , where f, and uncertain terms, respectively. Moreover, f¯ (t, e) can also be considered as an approximate function vector of f (t, e) with γ denoting the approximate degree between them. Remark 10.4 In the following design, the parameters in Assumptions 10.2–10.4 and k, λ in (10.8) will be taken as conditions for convergence analysis and will not appear in the controller. The given controller is simple with fewer parameters and can be tuned directly in practice. Remark 10.5 It is easier to estimate e˙ (t − T ) than e˙ (t) at time t. The term e˙ (t) can only use e(t − i Ts ) to estimate e˙ (t), where Ts > 0 is the sampling period, i = 0, 1, . . . . Since the Laplace transform of the derivative is s, which is not a proper transfer function,1 it cannot be realized physically. Therefore, approximation is often needed. However, the term e˙ (t − T ) can be estimated using e(t − T − i Ts ), i = 0, ±1, . . . , ± T /Ts , which can be realized physically. Adequate calculation can be performed to obtain the derivative of e˙ (t − T ) with good precision [7, p. 4], [8].

10.3.2 Convergence Analysis Using Assumptions 10.2–10.3 and the dynamics (10.6), the following inequality is obtained: ∂V + V˙ (t, e) = ∂t



∂V ∂e

T

 f (t, e) +

∂V ∂e

T b (t, e) v˜ (t)

 ∂V T b (t, e) v˜ (t) ≤ −c3 e (t) + ∂e c3 c4 ≤ − V (t, e) + √ ςh V (t, e) ˜v (t) , c2 c1 

2

√ where v˜  v − vˆ . By defining U (t, e)  V (t, e) and using the above inequality, it can be easily verified that [4], [6, p. 203] D + U (t, e) ≤ −b1 U (t, e) + b2 ˜v (t) , where b1 = 2cc32 and b2 = 2√c4c1 ςh . Furthermore, the following inequality results from the comparison lemma [6, pp. 102–103]: U (t, e) ≤ e

−b1 (t−t0 )

 U (t0 , e) + b2

t

e−b1 (t−s) ˜v (s) ds.

(10.18)

t0 1A

proper transfer function is a transfer function where the order of the numerator is not greater than the order of the denominator.

10.3 Saturated D-Type Repetitive Controller Design and Convergence Analysis

207

On the other hand, let φ (t) = vˆ (t − T ) + h (t − T ) . Using Assumption 10.1and the definition of h (t), one has v (t) − φ (t) = v˜ (t − T ) − h (t − T ) = (Im −  (t − T ) b (t − T, e)) v˜ (t − T )   −  (t − T ) f (t − T, e) − f¯ (t − T, e) .

(10.19)

Taking the norm · on both sides of the above equation yields v (t) − φ (t) ≤ b3 ˜v (t − T ) + γ k e (t − T ) ,

(10.20)

where b3 = supt Im −  (t) b (t, e). Given that v (t) =sat(v (t)) according to Assumption 10.1 and vˆ (t) =sat(φ (t)) according to (10.16), ˜v (t) ≤ v (t) − φ (t) based on Lemma 10.3. Then, using inequality (10.20 ), the inequality given above is further bounded as ˜v (t) ≤ b3 ˜v (t − T ) + γ k e (t − T )

(10.21)

using (10.19). Using Assumption 10.2, the inequality (10.21) further becomes ˜v (t) ≤ b3 ˜v (t − T ) + b4 U (t − T ,e) ,

(10.22)

γ k . Define ei (τ )  e (i T + τ ) , v˜ i (τ )  v˜ (i T + τ ) , and Ui (τ ,e)  where b4 = √ c1 U (i T + τ ,e) ,where i = 1, 2, .... Then, inequalities (10.18 ) and (10.22) become

Ui (τ, e) ≤ e−b1 τ Ui (0, e) + b2



τ

e−b1 (τ −s) ˜vi (s) ds

0

˜vi+1 (τ ) ≤ b3 ˜vi (τ ) + b4 Ui (τ, e) .

(10.23)

Based on Lemma 2, the following theorem is obtained. Theorem 10.1 For the system (10.6) under Assumptions 10.1–10.4, suppose the saturated RC is designed as in (10.16) with the parameters satisfying sup Im −  (t) b (t, e) < 1 t

ςh γ k  c1 c3 < . 1 − supt Im −  (t) b (t, e) c2 c4

(10.24)

Then, e(t) = 0 in (10.6) is globally exponentially stable. Proof Based on (10.23), e(t) = 0 in (10.6) is globally exponentially stable if

208

10 Repetitive Control with Nonlinear Systems …

b2 b4 < 1. b1 (1 − b3 )

b3 < 1 and Meanwhile,

b3 < 1 ⇔ sup Im −  (t) b (t, e) < 1 t

b2 b4 < 1 ⇔ (10.24). b1 (1 − b3 )  The next step is to replace cc21 cc34 in (10.24) with k, λ, l, δ. Before proceeding further with this work, the following lemma is required. Lemma 10.4 Under Assumptions 10.2, 10.4, if the trajectories of system (10.9) satisfy (10.8), then 0 < λ ≤ l. Proof This proof can be given by contradiction. Using Assumption 10.4, e (t) , the solution of (10.9) satisfies e−l(t−t0 ) e (t0 ) ≤ e (t) ≤ e (t0 ) el(t−t0 ) [6, p. 107]. Given that the trajectories of system (10.9) satisfy (10.8), e(λ−l)(t−t0 ) ≤ k. If l < λ, 1 ln k. This contradicts e(λ−l)(t−t0 ) ≤ k. Thus, then k < e(λ−l)(t−t0 ) when t ≥ t0 + λ−l λ ≤ l.  Theorem 10.2 The origin of the error system (10.9) is globally exponentially stable with the trajectories satisfying (10.8). Considering the system (10.6) with Assumptions 10.1–10.4 under the control law (10.16), if sup Im −  (t) b (t, e) < 1 t

  ςh γ k  < g δ∗ 1 − supt Im −  (t) b (t, e)

(10.25)

then e(t) = 0 in (10.6) is globally exponentially stable. Here, δ ∗ = arg max g (δ) , δ> lnλk

where g (δ) =

c1 (δ) c3 (δ) c2 (δ) c4 (δ)

(10.26)

with c1 , . . . , c4 defined as in (10.14). Proof The free parameter δ in (10.6) can be selected through the solution to an optimization problem as follows: max g (δ) = δ

s.t.

c1 (δ)c3 (δ) c2 (δ)c4 (δ)

c2 (δ) − c1 (δ) ≥ 0 . ci (δ) > 0 i = 1, . . . , 4 δ>0

(10.27)

10.3 Saturated D-Type Repetitive Controller Design and Convergence Analysis

209

Given that k ≥ 1 (in (10.8) when t = t0 ) and 0 < λ ≤ l according to Lemma 4, the inequality c1 (δ) ≤ c2 (δ) and inequalities ci (δ) > 0, i = 1, 2, 3, 4 will always be satisfied for δ > 0. If δ satisfies δ > lnλk , then c3 (δ) > 0. Therefore, the optimization problem (10.27) is simplified as follows: max g (δ) =

δ> lnλk

c1 (δ) c3 (δ) . c2 (δ) c4 (δ)

Let δ ∗ = arg max g (δ) . If (10.25) holds, then there exist ci (δ ∗ ) , i = 1, 2, 3, 4 satisδ> lnλk

fying (10.24). The proof is now concluded by discussing an application of Theorem 10.1.  Remark 10.6 Considering the concept in [9], the saturated D-type RC control law (10.16) can also be updated based only on the steady-state output so that the convergence condition is more relaxed and monotonic convergence in the repetitive domain can be achieved. In this case, only the condition supt Im −  (t) b (t, e) < 1 needs to be retained. However, there is an increase in the number of repetitive trials. Interested readers can refer to [9] for details.

10.4 Robotic Manipulator Tracking Example 10.4.1 Problem Formulation The dynamics of an m-degree-of-freedom manipulator are described by the following differential equation: D (q (t)) q¨ (t) + C (q (t) , q˙ (t)) q˙ (t) + G = τ (t) + v (t) ,

(10.28)

where q (t) ∈ Rm denotes the vector of generalized displacements in robot coordinates, τ (t) ∈ Rm denotes the vector of generalized control input forces in robot coordinates; D (q (t)) ∈ Rm×m is the manipulator inertial matrix, C (q (t) , q˙ (t)) ∈ Rm×m is the vector of centripetal and Coriolis torques, and G (q (t)) ∈ Rm is the vector of gravitational torques; v ∈ C P1 T ([0, ∞) ; Rm ) is a T -periodic disturbance. It is assumed that only q (t) and q˙ (t) are available from measurements. Define a filtered tracking error as e (t) = q˙˜ (t) + q˜ (t) , (10.29) where q˜ (t) = qd (t) − q (t), and qd (t) is a desired trajectory. The following assumptions common to robot manipulators are required [4–10]. (A1) The inertial matrix D (q (t)) is symmetric, uniformly positive definite, and bounded; i.e.,

210

10 Repetitive Control with Nonlinear Systems …

0 < λ D Im ≤ D (q (t)) ≤ λ¯ D Im < ∞, ∀q (t) ∈ Rm ,

(10.30)

where λ D and λ¯ D are positive real numbers. ˙ (q (t)) − 2C (q (t) , q˙ (t)) is skew-symmetric; hence, (A2) The matrix D   ˙ (q (t)) − 2C (q (t) , q˙ (t)) x = 0, ∀x ∈ Rm . xT D For a given desired trajectory qd ∈ C P2 T ([0, ∞) ; Rm ), our objective is to design a controller such that lim e (t) = 0. t→∞

Remark 10.7 From (10.29), it is known that both q˜ (t) and q˙˜ (t) can be viewed as outputs of a stable system with e (t) as the input, implying that q˜ (t) and q˙˜ (t) are bounded or approach zero if e (t) is bounded or approaches zero. Assumptions (A1)-(A2) are commonly satisfied using a robot manipulator.

10.4.2 Model Transformation We design τ (t) as τ (t) = D (q (t)) q¨ e (t) + C (q (t) , q˙ (t)) q˙ e (t) + G + Pe (t) + vˆ (t) ,

(10.31)

where q˙ e (t) = q˙ d (t) + q˜ (t) , q¨ e (t) = q¨ d (t) + q˙˜ (t) , P ∈ Rm×m is a positivedefinite matrix, and vˆ (t) ∈ Rm is the estimate of v (t) . By employing (10.31), the filtered error dynamics are obtained as follows:   D (q (t)) e˙ (t) + C (q (t) , q˙ (t)) e (t) = −Pe (t) + v (t) − vˆ (t) .

(10.32)

Furthermore, the above system can be written in the form of (10.6) with f (t, e) = −D−1 (q (t)) (C (q (t) , q˙ (t)) e (t) + Pe (t)) b (t, e) = D−1 (q (t)) .

(10.33)

10.4.3 Assumption Verification Given that v ∈ C P1 T ([0, ∞) ; Rm ) is a T -periodic disturbance, Assumption 10.1 holds. Define a Lyapunov function V (t, e) = Then,

1 T e (t) D (q (t)) e (t) . 2

(10.34)

10.4 Robotic Manipulator Tracking Example

211

1 1 λ D e (t)2 ≤ V (t, e) ≤ λ¯ D e (t)2 . 2 2 ˙ (q (t)) − 2C (q (t) , q˙ (t)) , the time derivative Using the skew-symmetry of matrix D of V (t, e) along (10.6) with (10.33) is evaluated as V˙ (t, e) ≤ −eT (t) Pe (t)   ∂V    ¯  ∂e  ≤ λ D e (t) .

(10.35) (10.36)

Therefore, Assumption 10.2 is satisfied. Given that 1 1 ≤ b (t, e) ≤ λD λ¯ D Assumption 10.3 holds. If e (t) is bounded, then q (t) and q˙ (t) are bounded by (10.29). In this case, Assumption 10.4 holds.

10.4.4 Controller Design Design τ (t) as ¯ (q (t) , q˙ (t)) q˙ e (t) + G ¯ + Pe (t) + vˆ (t) ¯ (q (t)) q¨ e (t) + C τ (t) = D   (10.37) vˆ (t) = sat vˆ (t − T ) + h (t − T ) , vˆ (t) = 0, ∀t ∈ [−T, 0) ,    ¯ −1 (q (t)) C ¯ (q (t) , q˙ (t)) e (t) + Pe (t) . Here, D, ¯ C ¯, where h (t) =  (t) e˙ (t) + D and A G¯ are the plant parameters known to the designer.

10.4.5 Numerical Simulation The robot, initial condition, tracking task, and disturbance used for a three-degreeof-freedom manipulator are similar to those in [11], where parameters J p = 0.8, l2 = 2, l3 = 1, and g = 9.8. However, they are only known approximately by the designer as follows: l¯2 = l2 + 0.2 ∗ randn, l¯3 = 1 + 0.2 ∗ randn, J¯p = J p + 0.2 ∗ randn, g¯ = 9.8, (10.38) where randn is a normally distributed random number with mean 0 and standard ¯ C,and ¯ ¯ are obtained. In (10.37), deviation 1 for simulation purposes. Using these, D, G select P to be P ≡ 10I3 and design vˆ (t) according to (10.16) as

212

10 Repetitive Control with Nonlinear Systems …

  vˆ (t) = sat vˆ (t − T ) + h (t − T ) , where    ¯ −1 (q (t)) e˙ (t) + D ¯ −1 (q (t)) C ¯ (q (t) , q˙ (t)) + Pe (t) h (t) = 0.3ke (t) D ⎧ ⎨ 0 t ∈ [−T, 0) t ∈ [0, T ) , β1 = β2 = β3 = 10. ke (t) = 2πt ⎩ 3 1 t ∈ [T, ∞) For tracking performance comparison, the performance index is introduced as Jk  q˜ (t) , where k = 1, 2, . . . . Figure 10.1 shows that the performance sup t∈[(k−1)T,kT ]

index Jk approaches 0 as k increases with three different sets of parameters (l¯2 , l¯3 and J¯p ). These imply that q˜ (t) approaches 0 as t → ∞. This result is consistent with the conclusion in Theorem 10.1. 0

10

(1.8400,1.2760,1.1262) (2.2848,1.5161,1.0674) (2.4763,0.5190,0.7921) (1.9373,0.3584,0.9029)

−1

Jk

10

−2

10

−3

10

−4

10

5

10

15

20

25

30 k

35

40

45

50

55

60

Fig. 10.1 Maximum Euclidean norm of error with respect to the ith period with four different sets of parameters (l¯2 , l¯3 , J¯p )

10.4 Robotic Manipulator Tracking Example

213

10.4.6 Discussion For the considered problem or some linear systems with weak nonlinear terms, (high gain) feedback control with nonlinear compensation can often make the error dynamics (10.9) exponentially stable (based on Lyapunov analysis). Readers can also refer to [12] for a similar example. Although Assumptions 10.1–10.4 in Sect. 10.4.3 are verified individually based on the Lyapunov function (only used for analysis), information concerning the Lyapunov function does not appear in the designed controller (10.37). The Lyapunov analysis cannot be replaced to prove exponential stability in theory. However, in practice, the data-based method can be used to find more real parameters related to exponential convergence. Moreover, the parameters in the proposed controller can be adjusted online, where f¯ (t, e) can be identified with increasing accuracy using the data. In the following section, the conditions in Theorem 10.2 will be verified using only the data acquired from the system. Inequality (10.8) can be written as ln

e (t) ≤ κ − λt, t ≥, 0 e (0)

(10.39)

where t0 = 0 and κ = ln k. For the dynamics of the m-degree-of-freedom manipulator, 50 trajectories of the corresponding error dynamics (10.9) are simulated and plotted in Fig. 10.2 with random initial conditions. This figure shows that the term ln (e (t) / e (0) ) finally decreases linearly with respect to time t. This roughly implies that the origin of the error dynamics (10.9) is exponentially stable for this example. Figure 10.2 shows that the parameters are selected as k = 1.05 and λ = 3. Through simulations, ςh = 1, l = 10.1 and supt I3 − 0.3b (t, e) = 0.9. With the parameters given above, the function g (δ) defined in (10.26) is plotted in Fig. 10.3. This figure shows that g (δ) reaches its maximum g (δ ∗ ) = 0.966 at δ ∗ = 0.0463. Condition (10.25) is satisfied when γ < 0.322. This gives the requirement regarding the nominal system f¯ (t, e)

10.5 Summary This chapter presented a saturated D-type RC for a class of nonlinear systems. To bypass the difficulties in finding Lyapunov functions, a contraction mapping method based on spectral theory was proposed to analyze the closed-loop system. The proposed convergence condition for the closed-loop system depends on the parameters of trajectories (refer to the definitions of c1 , c2 , c3 ,and c4 in (10.14)) rather than the concrete forms of Lyapunov functions. The feasibility of our work was demonstrated through an example involving robotic manipulator tracking. Given that the analysis for a nonlinear RC system is similar to that of the ILC scheme, the gap between RC and ILC has been bridged.

214

10 Repetitive Control with Nonlinear Systems … 0

−5

y= κ−λ*t −10

y −15

y=ln (||e(t)||/||e(0)||)

−20

−25

−30

0

1

2

3 t (sec)

4

5

6

Fig. 10.2 Term ln e (t) / e (0) finally decreases linearly with respect to time t

10.6 Appendix (1) Condition Conversion: if and only if (10.5) holds, then  (z) = 0, ∀ |z| ≥ 1, z ∈ C, where  (z) is defined in (10.48). From the compact maps of spectral theory [3, p. 238, Sect. 21.2], [13], it can be concluded that either rQ < 1 or ∃z 1 ∈ σ (Q) , |z 1 | ≥ 1 is satisfied. If the case of |z| ≥ 1 is excluded, then rQ < 1 holds. This implies rQ < 1 if and only if the solution w to the equation (Q − zI) w = 0, ∀ |z| ≥ 1

(10.40)

is unique zero, where  w=

w1 w2

 ∈ R × C ([0, T ] , R)

and I and 0 are the unit operator and zero in R × C ([0, T ] , R) , respectively. Before proceeding further, (10.40) is converted into an equivalent form that is more convenient for the proofs of sufficiency and necessity. Writing out (10.40) yields

10.6 Appendix

215

1 0.9667 0.9

0.8

0.7

g (δ)

0.6

0.5

0.4

0.3

0.2

0.1

0

0 0.0463

0.6

0.3 0.4

1.2

1

0.8

1.4

1.6

1.8

2

δ

Fig. 10.3 Function g (δ) varies with respect to δ

e

−a1 T



T

w1 − zw1 + a2 

0

a4 e−a1 τ w1 + a3 w2 (τ ) − zw2 (τ ) + a2 a4

τ

e−a1 (T −s) w2 (s) ds = 0

(10.41)

e−a1 (τ −s) w2 (s) ds = 0,

(10.42)

0

where τ ∈ [0, T ] . Let μ (τ ) = e

−a1 τ



τ

w1 + a2

e−a1 (τ −s) w2 (s) ds, τ ∈ [0, T ] .

0

Then, μ˙ (τ ) = −a1 μ (τ ) + a2 w2 (τ ) .

(10.43)

Using the definition of μ (τ ) and (10.41), it is easy to obtain μ (0) = w1 , μ (T ) = λw1 . On the other hand, (10.42) becomes

(10.44)

216

10 Repetitive Control with Nonlinear Systems …

w2 (τ ) =

a4 μ (τ ) . z − a3

(10.45)

Combining (10.43) and (10.45) results in  μ˙ (τ ) = − a1 − a2 Consequently, μ (τ ) = e

a4 z − a3

  a − a1 −a2 z−a4 τ 3

 μ (τ ) .

μ (0) .

(10.46)

Finally, using (10.44), Eq. (10.46) further becomes  (z) w1 = 0, where

(10.47)

  a a1 −a2 z−a4 T

1 −  (z)  1 − e z

3

.

(10.48)

w1 .

(10.49)

Using (10.44), (10.45), and (10.46) yields w2 (τ ) =

  a a1 −a2 z−a4 τ

a4 − e z − a3

3

According to (10.47), if  (z) = 0, ∀ |z| ≥ 1, then the solution w1 to equation (10.47) is zero and consequently, w2 (τ ) ≡ 0 , according to (10.49). This implies rQ < 1 if and only if  (z) = 0, ∀ |z| ≥ 1. Therefore, the lemma is rephrased equivalently to show that if and only if (10.5) holds, then  (z) = 0, ∀ |z| ≥ 1. (2) Proof of sufficiency: if (10.5) holds, then  (z) = 0, ∀ |z| ≥ 1. If a3 < 1, then the definition (10.48) is valid as z − a3 = 0, ∀ |z| ≥ 1. Let z = α + βi, where α, β ∈ R. For ∀T > 0, |z| ≥ 1,    1 −a1 −a2 a4 T  z−a3   | (z)| ≥ 1 −  e  z ≥ 1 − e−q(α,β)T ,

(10.50)

α−a3 . It is easy to verify that 1 − e−q(α,β)T achieves where q (α, β) = a1 − a2 a4 (α−a )2 +β 2 3





a minimum at α ∗ = 1, β ∗ = 0. Thus, the inequality 1 − e−q(α ,β )T > 0 implies ∗ ∗  (z) = 0 based on (10.50). Simplifying 1 − e−q(α ,β )T > 0 results in a2 a4 < 1. a1 (1 − a3 )

(10.51)

This concludes the proof of sufficiency. (3) Proof of necessity: if  (z) = 0, ∀ |z| ≥ 1, then (10.5) holds. This necessary condition is proved by contradiction; i.e., there exists z ∗ such that |z ∗ | ≥ 1 and

10.6 Appendix

217

a2 a4 a2 a4  (z ∗ ) = 0, if a3 ≥ 1 or a1 (1−a ≥ 1. The condition (a3 ≥ 1 or a1 (1−a ≥ 1) is 3) 3) a2 a4 covered by two cases, namely, Case 1: a3 ≥ 1 and Case 2: a3 < 1 and a1 (1−a ≥ 1. 3) As a result, the proof of necessity can be divided into the proof of Case 1 and proof of Case 2. 1) Proof of Case 1: a3 ≥ 1. Let z 1 = a3 + a3 , z 2 = 1a , z 1 , z 2 ∈ R+ . For any 3 T > 0, if a3 > 0 is sufficiently small, then

 (z 1 ) = 1 −

  a a1 −a2 a4 T

1 − e a3 + a3

3





 (z 2 ) = 1 − a3 e

− a1 −a2

0.

Given that  (z) is continuous with respect to z ∈ R and 1 < z 1 < z 2 , there always exists z ∗ ∈ (z 1 , z 2 ) such that  (z ∗ ) = 0 according to the mean value theorem. a2 a4 ≥ 1 and a3 < 1. In this case,  (1) = 1 − 2) Proof of Case 2: a1 (1−a 3)  − a −a

a4



T

 − a −a

a4



T

e 1 2 1−a3 ≤ 0. On the other hand,  (z 0 ) = 1 − z10 e 1 2 z0 −a3 > 0 with a sufficiently large z 0 ∈ R+ . Therefore, there always exists a z ∗ ∈ [1, z 0 ) such that  (z ∗ ) = 0. Proofs of Cases 1–2 conclude the proof of necessity.

References 1. Quan, Quan, & Cai, K.-Y. (2018). Saturated repetitive control for a class of nonlinear systems: A contraction mapping method. Systems and Control Letters, 2018(122), 93–100. 2. van Neerven, J. M. A. M. (1995). Exponential stability of operators and operator semigroups. Journal of Functional Analysis, 130(2), 293–309. 3. Lax, P. D. (2002). Functional analysis. New York: Wiley. 4. Kim, Y.-H., & Ha, I.-J. (2000). Asymptotic state tracking in a class of nonlinear systems via learning-based inversion. IEEE Transactions on Automatic Control, 45(11), 2011–2027. 5. Dixon, W. E., Zergeroglu, E., Dawson, D. M., & Costic, B. T. (2002). Repetitive learning control: A Lyapunov-based approach. IEEE Transaction on Systems, Man, and Cybernetics, Part B-Cybernetics, 32(4), 538–545. 6. Khalil, H. K. (2002). Nonlinear systems. Englewood Cliffs, NJ: Prentice-Hall. 7. Bien, Z., & Xu, J.-X. (1998). Iterative Learning Control: Analysis, Design, Integration and Application. Norwell: Kluwer. 8. Quan, Q., & Cai, K.-Y. (2012). Time-Domain analysis of the Savitzky-Golay filters. Digital Signal Processing, 22(2), 238–245. 9. Yang, Z., & Chan, C. W. (2010). Tracking periodic signals for a class of uncertain nonlinear systems. International Journal of Robust Nonlinear Control, 20(7), 834–841. 10. Sun, M., & Ge, S. S. (2006). Adaptive repetitive control for a class of nonlinearly parametrized systems. IEEE Transactions on Automatic Control, 51(10), 1684–1688. 11. Quan, Q., & Cai, K.-Y. (2011). A filtered repetitive controller for a class of nonlinear systems. IEEE Transactions on Automatic Control, 56, 399–405. 12. Omata, T., Hara, S., & Nakano, M. (1987). Nonlinear repetitive control with application to trajectory control of manipulators. Journal of Robotic Systems, 4, 631–652. 13. Lin, H., & Wang, L. (1998). Iterative learning control theory. Xi’an (In Chinese): Northwestern Polytechnical University Press.