Observability and controllability of general linear systems. 9781138353152, 1138353159


Table of contents :
Cover
Half Title
Title
Copyright
Contents
Preface
Part I SYSTEM CLASSES
Chapter 1 Introduction
1.1 Time
1.2 Notational preliminaries
1.3 Compact, simple, and elegant calculus
1.4 System behavior
1.5 Control
Chapter 2 IO systems
2.1 IO system mathematical model
2.1.1 Time domain
2.1.2 Complex domain
2.2 IO plant desired regime
2.3 Exercises
Chapter 3 ISO systems
3.1 ISO system mathematical model
3.1.1 Time domain
3.1.2 Complex domain
3.2 ISO plant desired regime
3.3 Exercises
Chapter 4 EISO systems
4.1 EISO system mathematical model
4.1.1 Time domain
4.1.2 Complex domain
4.2 EISO plant desired regime
4.3 Exercises
Chapter 5 HISO systems
5.1 HISO system mathematical model
5.1.1 Time domain
5.1.2 Complex domain
5.2 The HISO plant desired regime
5.3 Exercises
Chapter 6 IIO systems
6.1 IIO system mathematical model
6.1.1 Time domain
6.1.2 Complex domain
6.2 IIO plant desired regime
6.3 Exercises
Part II OBSERVABILITY
Chapter 7 Mathematical preliminaries
7.1 Linear independence and matrices
7.2 Matrix range, null space, and rank
7.3 Linear independence and scalar functions
7.4 Linear independence and matrix functions
7.5 Polynomial matrices. Matrix polynomials
7.6 Rational matrices
Chapter 8 Observability and stability
8.1 Observability and system regime
8.2 Observability definition in general
8.3 Observability criterion in general
8.4 Observability and stability
8.4.1 System stability
8.4.2 System stability and observability
Chapter 9 Various systems observability
9.1 IO systems observability
9.2 ISO systems observability
9.3 EISO systems observability
9.4 HISO systems observability
9.5 IIO systems observability
Part III CONTROLLABILITY
Chapter 10 Controllability fundamentals
10.1 Controllability and system regime
10.2 Controllability concepts
10.3 Controllability definitions in general
10.4 General state controllability criteria
10.5 General output controllability criteria
Chapter 11 Various systems controllability
11.1 IO system state controllability
11.1.1 Definition
11.1.2 Criteria
11.2 IO system output controllability
11.2.1 Definition
11.2.2 Criteria
11.3 ISO system state controllability
11.3.1 Definition
11.3.2 Criterion
11.4 ISO system output controllability
11.4.1 Definition
11.4.2 Criteria
11.5 EISO system state controllability
11.5.1 Definition
11.5.2 Criterion
11.6 EISO system output controllability
11.6.1 Definition
11.6.2 Criteria
11.7 HISO system state controllability
11.7.1 Definition
11.7.2 Criterion
11.8 HISO system output controllability
11.8.1 Definition
11.8.2 Criteria
11.9 IIO system state controllability
11.9.1 Definition
11.9.2 Criterion
11.10 IIO system output controllability
11.10.1 Definition
11.10.2 Criteria
Part IV APPENDIX
Appendix A Notation
A.1 Abbreviations
A.2 Indexes
A.2.1 SUBSCRIPTS
A.2.2 SUPERSCRIPT
A.3 Letters
A.3.1 CALLIGRAPHIC LETTERS
A.3.2 FRAKTUR LETTERS
A.3.3 GREEK LETTERS
A.3.4 ROMAN LETTERS
A.4 Name
A.5 Symbols, vectors, sets and matrices
A.6 Units
Appendix B Example
B.1 IO system example
Appendix C Transformations
C.1 Transformation of IO into ISO system
C.2 ISO and EISO forms of IIO system
Appendix D Proofs
D.1 Proof of Lemma 97
D.2 Proof of Theorem 110
D.3 Proof of Theorem 121
D.4 Proof of Theorem 128
D.5 Proof of Theorem 130
D.6 Proof of Theorem 141
D.7 Proof of Theorem 147
D.8 Proof of Theorem 148
D.9 Proof of Theorem 149
D.10 Proof of Theorem 161
D.11 Proof of Theorem 163
D.12 Proof of Theorem 173
D.13 Proof of Theorem 181
D.14 Proof of Theorem 185
D.15 Proof of Theorem 215
Part V INDEX
Author Index
Subject Index


Control of Linear Systems
Observability and Controllability of General Linear Systems

Lyubomir T. Gruyitch

CRC Press Taylor & Francis Group 6000 Broken Sound Parkway NW, Suite 300 Boca Raton, FL 33487-2742 © 2019 by Taylor & Francis Group, LLC CRC Press is an imprint of Taylor & Francis Group, an Informa business No claim to original U.S. Government works Printed on acid-free paper Version Date: 20180927 International Standard Book Number-13: 978-1-138-35315-2 (Hardback) This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint. Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers. For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged. Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe. Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com



Preface

On the state of the art

This book concerns observability and controllability, two fundamental topics of time-invariant continuous-time linear control systems, in the sequel called control systems or, for short, just systems. It shows that within the framework of linear systems there are still problems that have remained untouched yet are solvable.

All dynamical systems have (internal and/or output) dynamics that determines their (internal and/or output) dynamical situation called the state, with its state variables, i.e., the (internal and/or output) state vector, regardless of the existence or nonexistence of input derivatives. The basis of the existing concept of state is that the input vector derivatives do not influence the system (internal and/or output) dynamics, or at least that they do not appear in the system mathematical model. It has proved very useful to develop effective mathematical machinery for the related studies. However, the theory and the practice have been well developed for only one class of physical control systems. To treat other classes of physical control systems, their mathematical models must be transformed formally mathematically, with a complete loss of the physical sense of the new (mathematical) variables.

The concepts of the state, the state variables, the state vector, and the state space are well defined and widely, effectively, and directly used only in the framework of dynamical systems in general, and of control systems in particular, that are described by a first-order vector linear differential state equation and an algebraic vector linear output equation. These are called Input-State-Output (abbreviated: ISO), or state-space, (control) systems. Their mathematical models do not contain any derivative of the input vector function I(.).

There is a fundamental lacuna in control theory due to the nonexistence of a clear, well-defined concept of the state also for systems subjected to the influence of the input vector derivatives so that the physical


meaning and sense of the system variables is preserved.

On the book

The author, in addition to the analysis of the scientific papers listed in the bibliography, consulted in particular the books by the following authors: B. D. O. Anderson and J. B. Moore [1], P. J. Antsaklis and A. N. Michel [2], [3], S. Barnett [4], A. Benzaoiua, F. Mesquine and M. Benhayoun [5]; further by L. D. Berkovitz [6], L. D. Berkovitz and N. G. Medhin [7], S. P. Bhattacharyya, A. Datta and L. H. Keel [9], D. Biswa [10], P. Borne, G. Dauphin-Tanguy, J.-P. Richard, F. Rotella and I. Zambettakis [11], W. L. Brogan [13], F. M. Callier and C. A. Desoer [15], [16], C.-T. Chen [18], H. Chestnut and R. W. Mayer [19], M. J. Corless and A. E. Frazho [20], J. J. D'Azzo and C. H. Houpis [21], J. J. D'Azzo, C. H. Houpis and S. N. Sheldon [22], C. A. Desoer [24], C. A. Desoer and M. Vidyasagar [25], F. W. Fairman [27], F. R. Gantmacher [30], [31], G. C. Goodwin [34], Ly. T. Gruyitch [40], [41], [42], M. Haidekker [46], J. P. Hespanh [50], C. H. Houpis and S. N. Sheldon [52], D. G. Hull [53], M. K.-J. Johansson [55], T. Kaczorek [56], T. Kailath [57], D. E. Kirk [64], B. Kisačanin and G. C. Agarwal [65], B. C. Kuo [66], [67], H. Kwakernaak and R. Sivan [68], P. Lancaster and M. Tismenetsky [69], J. M. Maciejowski [71], J. L. Melsa and D. G. Schultz [75], R. K. Miller and A. N. Michel [77], K. Ogata [81], [82], D. H. Owens [83], H. M. Power and R. J. Simpson [86], H. H. Rosenbrock [88], A. Sinha [89], R. E. Skelton [90], D. M. Wiberg [95], R. L. Williams II and D. A. Lawrence [96], W. A. Wolovich [97], W. M. Wonham [98] and B.-T. Yazdan [99]. This book is complementary to them and/or extends, broadens, and generalizes those of their parts that are related to the dynamical system state concept, observability, and controllability.

The book treats observability and controllability for the following five classes of systems.

• The Input-Output (IO) systems, described by the ν-th order linear vector differential equation expressed in terms of the output vector Y ∈ R^N. This class of control systems has been only partially studied, namely by formally mathematically transforming its mathematical model into the form of the ISO systems. The variables of such transformations lose the physical sense of the system variables if the system is subjected to actions of the derivative(s) of the input vector. The book resolves this lacuna of the control theory.

• The (first order) Input-State-Output (ISO) systems, determined by the first order linear vector differential equation expressed in terms of


the state vector X, which is the state equation, X ∈ R^n, and by the algebraic vector equation expressed in terms of the output Y, which is the output equation. They are well known as the state-space systems. They contain only one derivative, the first derivative of the state vector, and no derivative of the input vector.

• The (first order) Extended-Input-State-Output (EISO) systems, determined by the first order linear vector differential equation expressed in terms of the state vector X, which is the state equation, X ∈ R^n, and by the algebraic vector equation expressed in terms of the output Y, which is the output equation. They contain the first order derivative X^(1) of the state vector X and µ derivatives of the input vector I, the latter only in the state equation, µ > 0 (because for µ = 0 the EISO system becomes the ISO system). The EISO systems have not been studied so far.

• The Higher order Input-State-Output (HISO) systems, characterized by the α-th order linear vector differential equation expressed in terms of the vector R, which is the state equation, R ∈ R^ρ, and by the linear vector algebraic equation of the output vector Y, which is the output equation. This class of control systems has not been studied so far.

• The Input-Internal and Output state (IIO) systems, characterized by the α-th order linear vector differential equation expressed in terms of the internal dynamics vector R, which is the internal dynamics (i.e., internal state) equation, and by the ν-th order linear vector (differential if ν > 0, algebraic if ν = 0) equation expressed in terms of the output vector Y, which is the output (state, if ν > 0) equation. The books [36], [40] introduced and initiated the study of this class of dynamical systems. However, the IIO control systems have not been studied so far.

The existence of the actions of the input vector derivatives on the system is the reason, the justification, and the need to extend and generalize the state concept of dynamical, hence of control, systems.

Various subsidiary statements, results, exercises, and rigorous detailed proofs form the Appendix.

The goal of the book is to contribute to the advancement of linear control systems theory and the corresponding university courses, and to open new directions for research and for applications in the framework of time-invariant continuous-time linear control systems. It represents a further


development of the existing linear control systems theory, which is not repeated herein. The contributions of the book go largely and crucially beyond the existing control theory.

In gratitude

The author expresses his gratitude to: Ms. Nora Konopka, Global Editorial Director – Engineering, for her formidable, exceptionally careful leadership of the publication process, during which she proposed to divide the original manuscript of 633 pages into two books, this one and the accompanying book [37], under the titles under which they are published; Ms. Michele Dimont, Project Editor, for her devoted leadership of the book's editing; Ms. Vanessa Garrett, Editorial Assistant – Engineering, for her careful and effective administrative work; and all of CRC Press/Taylor & Francis.

The author is grateful to Mr. George Pearson with MacKichan Company for his very kind and effective assistance in improving the author's usage of the excellent Scientific Work Place software for scientific works.

Belgrade, September 28, 2017; March 13, April 23, May 18-20, 2018.

Lyubomir T. Gruyitch


Part I

SYSTEM CLASSES


Chapter 1
Introduction

1.1 Time

All processes, motions and movements, all behaviors of systems and their responses, as well as all external actions on systems, occur and propagate in time. It is natural from the physical point of view to study systems directly in the temporal domain. This requires us to be clear about what time is and what its properties are, which we explain briefly as follows (for a more complete analysis see: [39], [40], [41], [42], [43], [44]).

Definition 1 Time
Time (i.e., the temporal variable), denoted by t or by τ, is an independent scalar physical variable such that:
- Its value, called an instant or moment, determines uniquely when somebody or something began or ceased to exist,
- Its values determine uniquely since when and until when somebody or something existed/exists or will exist,
- Its values determine uniquely how long somebody or something existed/exists or will exist,
- Its values determine uniquely whether an event E1 occurs when another event E2 has not yet happened, or the event E1 takes place exactly when the event E2 happens, or the event E1 occurs when the event E2 has already happened,
- Its value occupies (covers, encloses, imbues, impregnates, is over and in, penetrates) equally everybody and everything (i.e., beings, objects, energy, matter, and space) everywhere and always, and


- Its value has been, is, and will be permanently changing smoothly, strictly monotonously and continuously, equally in all spatial directions and their senses, in and around everybody and everything, independently of everybody and everything (i.e., independently of beings, objects, energy, matter, and space), independently of all other variables, and independently of all happenings, movements, and processes.

Time is a basic and elementary constituent of the existence of everybody and of everything [42], [43], [44]. All human attempts over millennia have failed to explain or express the nature, the phenomenon, of time in terms of other well-defined notions, in terms of other physical variables and phenomena [42], [43, Axiom 25, p. 52], [44, Axiom 25, p. 53]. The nature of time, its physical content, cannot be explained in terms of the other basic constituents of existence (energy, matter, space) or in terms of other physical phenomena or variables. Time has its own, original nature that we can only call the nature of time, i.e., the temporal nature or the time nature [40], [41], [42], [43], [44].

An arbitrary value of time t (τ), i.e., an arbitrary instant or moment, is also denoted by t (or by τ), respectively. It is an instantaneous (momentous) and elementary time value. It can happen exactly once and is then the same everywhere for, and in, everybody and everything (i.e., for, and in, beings, energy, matter, objects, and space), for all other variables, for all happenings, for all movements, for all processes, for all biological, economical, financial, physical, and social systems. It is not repeatable. Nobody and nothing can influence the flow of instants [42], [43], [44].

The physical dimension of time is denoted by [T], where T stands for time: t [T]. It cannot be expressed in terms of the physical dimension of another variable. Its physical dimension is one of the basic physical dimensions. It is used to express the physical dimensions of most physical variables. A time unit 1_t can be arbitrarily chosen and then fixed. If it is the second s, then 1_t = s, which we denote by t⟨1_t⟩ = t⟨s⟩.

Exactly one real number (which is denoted by ∃!) can be assigned to every moment (instant), and vice versa. The numerical value num t of the moment t is a real, dimensionless number, num t ∈ R and num t [−], where R is the set of all real numbers.

Theorem 2 Universal time speed law, [42], [43], [44]
Time is the unique physical variable such that the speed v_t (v_τ) of the evolution (of the flow) of its values and of its numerical values:


a) is invariant with respect to the choice of a relative zero moment t_zero, of an initial moment t_0, of a time scale, and of a time unit 1_t, i.e., invariant relative to the choice of a time axis, invariant relative to the selection of spatial coordinates, and invariant relative to everybody and everything, and

b) its value (its numerical value) is invariant and equals one arbitrary time unit per the same time unit (equals one), respectively,

v_t = 1 [TT^{-1}] 1_t 1_t^{-1} = 1 [TT^{-1}] 1_τ 1_τ^{-1} = v_τ,  num v_t = num v_τ = 1,   (1.1)

relative to arbitrary time axes T and T_τ, i.e., its numerical value equals 1 (one) with respect to all time axes (with respect to any accepted relative zero instant t_zero, any chosen initial instant t_0, any time scale, and any selected time unit 1_t), with respect to all spatial coordinate systems, and with respect to all beings and all objects.

The uniqueness of time and the constancy and invariance of the time speed determine that time itself is not relative and cannot be relative [38], [39], [42], [43], [44], [80].

The time set T is the set of all moments. It is an open, unbounded, and connected set. It is in biunivoque (one-to-one) correspondence with the set R of all real numbers,

T = {t : num t ∈ R, dt > 0, t^{(1)} ≡ 1},
∀t ∈ T, ∃! x ∈ R such that x = num t, and ∀x ∈ R, ∃! t ∈ T such that num t = x,
num inf T = num t_inf = −∞ ∉ T, num sup T = num t_sup = ∞ ∉ T.   (1.2)

The rule of the correspondence determines an accepted relative zero numerical time value t_zero, a time scale, and a time unit denoted by 1_t (or by 1_τ). The time unit can be ..., millisecond, second, minute, hour, day, ..., which Newton explained by clarifying the sense of relative time [80, I of Scholium, p. 8]. Unfortunately, this fact has been ignored in modern physics and science.

Note 3 Choice of the relative zero moment t_zero and the initial moment t_0
We accept herein the relative zero moment t_zero to have the zero numerical value, num t_zero = 0, because we deal with time-invariant systems. Besides, we adopt t_zero to be also the initial moment t_0, t_0 = t_zero,


num t_0 = 0, in view of the time-invariance of the systems to be studied. This determines the subset T_0 of the time set T,

T_0 = {t : t ∈ T, num t ∈ [0, ∞[}.

Sometimes we will denote the initial moment explicitly by t_0, but it will mean that num t_0 = 0.

Note 4
We usually use the letters t and τ to designate time itself and an arbitrary moment, as well as the numerical value of the arbitrary moment with respect to the chosen zero instant; e.g., t = 0 is used in the sense num t = 0. From the physical point of view this is incorrect. The numerical value num t of the instant t is a real number without a physical dimension, while the instant t is a temporal value that has a physical dimension, the temporal dimension T of time. We overcome this by using the normalized, dimensionless, mathematical temporal variable, defined by

t = t / 1_t  [−],

so that the time set T is to be replaced by

T = {t [−] : t = num t ∈ R, dt > 0, t^{(1)} ≡ 1}.

With this in mind, we will use in the sequel the letter t also for the normalized temporal variable, and T also for the normalized time set. Hence, t [−] = num t [−]. Between any two different instants t_1 ∈ T and t_2 ∈ T there is a third instant t_3 ∈ T, with either t_1 < t_3 < t_2 or t_2 < t_3 < t_1. The time set T is a continuum. It is also called the continuous-time set. This book is on continuous-time systems and their control.

1.2 Notational preliminaries

Lower case ordinary letters denote scalars, bold (lower case and capital, Greek and Roman) letters signify vectors, capital italic letters stand for matrices, and we use capital Fraktur letters for sets and spaces. For example, the identity matrix of dimension i is denoted by I_i,

I_i = diag{1 1 ... 1} ∈ R^{i×i},  I_n = I ∈ R^{n×n}.   (1.3)

The variables in the mathematical models are dimensionless because their values are normalized relative to their characteristic values. Throughout the book we accept the following condition to hold:


Condition 5 Normalized variables
The value of every variable Z appearing in a system mathematical model is the dimensionless, normalized value of the physical variable Z_Ph relative to some characteristic value Z_PhCh of it (e.g., the nominal value Z_PhN or the unit value 1_Z):

Z [−] = Z_Ph [Z_Ph] / Z_PhCh [Z_Ph].   (1.4)
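A minimal numerical sketch of the normalization (1.4), together with the time normalization of Note 4, is given below; all numbers and variable names are illustrative assumptions, not values taken from the book.

```python
# Sketch of Condition 5 (normalization (1.4)) and of Note 4 (time normalization).
# All values below are illustrative assumptions.
U_ph = 12.0           # a physical control voltage, in volts
U_ph_nominal = 24.0   # chosen characteristic (nominal) value Z_PhCh, in volts
U = U_ph / U_ph_nominal   # dimensionless model variable, U[-] = 0.5

t_ph = 3.0e-3         # a physical instant, in seconds
one_t = 1.0e-3        # chosen time unit 1_t (here: 1 ms)
t = t_ph / one_t      # dimensionless model time, t[-] = 3.0
```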

Note 6 Useful simple vector notation [36], [40]
Instead of using, for example,

Y^∓(s) = F(s) [ I^{∓T}(s)  I^T(0^∓) ... I^{(µ−1)T}(0^∓)  Y^T(0^∓) ... Y^{(ν−1)T}(0^∓) ]^T,

the following simple vector notation enabled us to define and use effectively the system full transfer function matrix F(s):

Y^∓(s) = F(s) V(s),
V(s) = [ I^∓(s) ; C_0^∓ ],  C_0^∓ = [ I^{µ−1}(0^∓) ; Y^{ν−1}(0^∓) ],
I^{µ−1}(0^∓) = [ I(0^∓) ; I^{(1)}(0^∓) ; ... ; I^{(µ−1)}(0^∓) ],  Y^{ν−1}(0^∓) = [ Y(0^∓) ; Y^{(1)}(0^∓) ; ... ; Y^{(ν−1)}(0^∓) ]

(a semicolon denotes vertical stacking), by introducing the general compact vector notation

Y^k = [ Y^{(0)} ; Y^{(1)} ; ... ; Y^{(k)} ] ∈ R^{(k+1)N},  k ∈ {0, 1, ...},  Y^0 = Y.   (1.5)

It is different from the k-th derivative Y^{(k)} of Y:

Y^{(k)} = d^k Y / dt^k ∈ R^N,  k ∈ {1, ...},  Y^k ≠ Y^{(k)}.

This permits us to express Σ_{i=0}^{ν} A_i Y^{(i)}(t), A_i ∈ R^{N×N}, as follows,

Σ_{i=0}^{ν} A_i Y^{(i)}(t) = [ A_0  A_1  ...  A_ν ] [ Y^{(0)}(t) ; Y^{(1)}(t) ; ... ; Y^{(ν)}(t) ],


CHAPTER 1. INTRODUCTION

i.e., in the compact form obtained by introducing the extended system matrix A^{(ν)} composed of the system matrices A_i ∈ R^{N×N}, i ∈ {0, 1, ..., ν},

A^{(ν)} = [ A_0  A_1  ...  A_ν ] ∈ R^{N×(ν+1)N},   (1.6)

A^{(ν)} ≠ A^ν = AA...A (ν times) ∈ R^{N×N},   (1.7)

so that

Σ_{i=0}^{ν} A_i Y^{(i)}(t) = A^{(ν)} Y^ν(t).   (1.8)

Other notation is defined at its first appearance in the text and in Appendix A.
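To make the compact notation concrete, here is a minimal numerical sketch (not from the book; the dimensions, matrices, and names are illustrative assumptions) that builds the extended vector Y^ν of (1.5) and the extended matrix A^(ν) of (1.6) and checks the identity (1.8).

```python
import numpy as np

rng = np.random.default_rng(0)
N, nu = 2, 3                       # assumed output dimension N and order nu

A_list = [rng.standard_normal((N, N)) for _ in range(nu + 1)]   # A_0, ..., A_nu
A_ext  = np.hstack(A_list)         # extended matrix A^(nu) in R^{N x (nu+1)N}, cf. (1.6)

Y_derivs = [rng.standard_normal(N) for _ in range(nu + 1)]      # Y(t), Y^(1)(t), ..., Y^(nu)(t)
Y_ext    = np.concatenate(Y_derivs)  # extended vector Y^nu(t) in R^{(nu+1)N}, cf. (1.5)

lhs = sum(A_list[i] @ Y_derivs[i] for i in range(nu + 1))       # sum_i A_i Y^(i)(t)
rhs = A_ext @ Y_ext                                             # A^(nu) Y^nu(t), cf. (1.8)
assert np.allclose(lhs, rhs)       # the two expressions coincide, as (1.8) states
```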

1.3 Compact, simple, and elegant calculus

The introduction and definition of:

• the extended vector Y^k ∈ R^{(k+1)N} (1.5), which is composed of the vector Y and its derivatives up to the order k, and

• the extended matrix A^{(ν)} ∈ R^{N×(ν+1)N} (1.6), the entries of which are the submatrices A_i, i = 0, 1, 2, ..., ν,

enable us to develop a compact, simple, and elegant calculus. The matrix differential equation

Σ_{i=0}^{ν} A_i Y^{(i)}(t) = Σ_{i=0}^{µ≤ν} B_i I^{(i)}(t),  A_i ∈ R^{N×N}, Y ∈ R^N, B_i ∈ R^{N×M}, I ∈ R^M,   (1.9)

has the equivalent compact form in the time domain [36], [40]:

A^{(ν)} Y^ν(t) = B^{(µ)} I^µ(t),  A^{(ν)} ∈ R^{N×(ν+1)N}, Y^ν ∈ R^{(ν+1)N}, B^{(µ)} ∈ R^{N×(µ+1)M}, I^µ ∈ R^{(µ+1)M}.   (1.10)


9

Comment 7 Compact form of the linear differential equation
Equation (1.10) is a differential, not algebraic, equation that is the compact form of the original differential Equation (1.9).

If N ≤ M and rank A^{(ν)} = N, then the matrix A^{(ν)} (A^{(ν)})^T is nonsingular and the right inverse (A^{(ν)})^T [A^{(ν)} (A^{(ν)})^T]^{-1} of A^{(ν)} is well defined. If Equation (1.10) had been algebraic and treated as algebraic, then we would have been formally able to solve it for Y^ν(t):

Y^ν(t) = (A^{(ν)})^T [A^{(ν)} (A^{(ν)})^T]^{-1} B^{(µ)} I^µ(t),

but this would not have been a solution to Equation (1.10) because it is a differential, not algebraic, equation. The compact, simple, and elegant calculus is the basis for all calculations in the book. It is effectively applicable not only to linear continuous-time systems [36], [40], but also to linear discrete-time systems [14] and to nonlinear dynamical systems [41].
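As a numerical illustration of Comment 7 (a sketch under assumed dimensions, not the book's own computation), the right inverse of a full-row-rank extended matrix can be formed exactly as written above. The check below confirms that A^(ν) times its right inverse is the identity; as the comment stresses, this purely algebraic manipulation still would not solve the differential equation (1.10).

```python
import numpy as np

rng = np.random.default_rng(1)
N, nu = 2, 3                        # assumed dimensions
A_ext = np.hstack([rng.standard_normal((N, N)) for _ in range(nu + 1)])  # A^(nu), generically rank N

# Right inverse (A^(nu))^T [A^(nu) (A^(nu))^T]^{-1}, well defined when rank A^(nu) = N.
A_right_inv = A_ext.T @ np.linalg.inv(A_ext @ A_ext.T)
assert np.allclose(A_ext @ A_right_inv, np.eye(N))   # A^(nu) times its right inverse = I_N
```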

1.4 System behavior

Time is a basic constituent of the environment of every dynamical physical system. The time field is the temporal environment, i.e., the time environment, of the system [42], [43], [44]. A time-dependent variable will be denoted for short by the corresponding letter, e.g., scalar variables by D, I, R, S, U , Y, ... and vector variables by D, I, R, S, U, Y, .... From the mathematical point of view they are functions, e.g., D = D (.) : T −→ R1 , D = D (.) : T −→ Rd . A variation of the value of every time-dependent variable is in time. As usual, R+ is the set of all nonnegative real numbers, R+ is the set of all positive real numbers, Rk is the k-dimensional real vector space, the elements of which are k-dimensional real valued vectors, where k is any natural number. Notice that R1 6= R. There are three substantial characteristic groups of the variables that are associated with the dynamical system in general. Their definitions follow by referring to [18, Definition 3-6, p. 83], [40], [41], [66, p. 105], [78, 2. Definition, p. 380], [79, 2. Definition, p. 380], [81, p. 4], [82, p. 664]. Note 8 The capital letters D, I, R, S, U , Y (and D, I, R, S, U, Y) denote the total scalar (vector) values of the variables D (.), I (.), R (.), S (.), U (.),


CHAPTER 1. INTRODUCTION

Y (.) (of the vector variables D (.), I (.), R (.), S (.), U (.), Y (.)) relative to their total zero scalar (vector) value, if it exists, or relative to their accepted zero scalar (vector) value, respectively. A characteristic of the dynamical systems is their dynamical behavior. The dynamical system can possess the explicit internal dynamics and the implicit output dynamics or explicit both the internal and output dynamics. A special family of the dynamical systems is that of plants (objects). Definition 9 Plant (object) A plant P (i.e., an object O) is a system that should under specific conditions called nominal (nonperturbed) realize its demanded dynamical behavior and under other (nonnominal, perturbed, real) conditions should realize its dynamical behavior sufficiently close to its demanded dynamical behavior over some (bounded or unbounded) time interval. The physical nature of a plant can be anyone. Definition 10 Input variables, input vector, and input space A variable that acts on the system and its influence is essential for the system behavior is the system input variable denoted by I ∈ R. The system can be under the action of several mutually independent input variables I1 , I2 , ..., IM . They compose the system input vector (for short, input) I = [I1 I2 ... IM ]T ∈RM ,

(1.11)

which is an element of the input space RM . The instantaneous values of the variables Ii and I at an instant t ∈ T are Ii (t) and I (t), respectively. The capital letters I and I denote the total (scalar, vector) values of the variable I and the vector I relative to their total zero (scalar, vector) value, if it exists, or relative to their accepted zero (scalar, vector) value, respectively. Definition 11 Disturbance variable and disturbance vector An input variable D of a system that acts on the system without using any information about the system demanded dynamical behavior or by using it in order to perturb the system behavior is the disturbance variable (for short: disturbance) for the system. If there are several, e.g., d, disturbance variables D1 , D2 , ... , Dd , then they are entries of the disturbance vector (for short: disturbance) D,  T .. .. .. (1.12) D = D1 . D2 . ... . Dd ∈Rd .


11

The instantaneous values of the variables Di and D at an instant t ∈ T are Di (t) and D (t), respectively. A disturbance action on a system most often is not rejectable. The disturbance acts on the system at best independently of the system behavior, because if the disturbance exploits the information about the system demanded behavior in order to perturb the system behavior then it is an enemy disturbance. In order to interrupt its action on the system its source should be destroyed, which is rarely possible. The physical nature of disturbances can be anyone. The system output behavior is determined by the temporal evolution of its output variables and their derivatives, in the sense of the following definitions: Definition 12 Output variables, output vector, output space, and response A variable Y ∈ R is an output variable of the system if and only if its values result from the system behavior, they are (directly or indirectly) measurable, and we are interested in them. The number N is the maximal number of linearly independent output variables Y1 , Y2 , . . ., YN on T of the system. They form the output vector Y of the system, which is an element of the output space RN : Y = [Y1 Y2 ...YN ]T ∈RN .

(1.13)

The time evolution Y (t) of the output vector Y takes place, i.e., the output vector Y propagates, in the integral output space I, I = T × RN .

(1.14)

The instantaneous values of the variables Yi and Y at an instant t ∈ T are Yi (t) and Y (t), respectively. The time variation Y (t) of the system output vector Y is the system (output) response. The plant desired output behavior is denoted by Yd (t). Note 13 There are systems, the output variable of which is fed back to the system input. Such output variable is also the system input variable, and such system has its own (local) feedback.


CHAPTER 1. INTRODUCTION

A (physical and a mathematical) dynamical system can be subjected to the action of the input vector derivatives I(l) (t), l ∈ {1, 2, ...} . The system internal and output dynamical behavior depend then not only on the input vector I(t) but also on all its derivatives acting on the system. This is reality that inspires us, justifies and demands us to generalize the concept of the dynamical system state as follows. Definition 14 State of a dynamical system The (internal, output) state of a physical dynamical system at a moment τ ∈ T is, respectively, the system (internal, output) dynamical physical situation at the moment τ, which, together with the input vector and its derivatives acting on the system at any moment (t ≥ τ ) ∈ T, determines uniquely the system behavior, [i.e., the system (internal, output) state and the system output response], for all (t > τ ) ∈ T, respectively. The (internal, output) state of a mathematical dynamical system at a moment τ ∈ T is, respectively, the minimal amount of information about the system at the moment τ , which, together with information about the action on the system (about the system input vector and its derivatives acting on the system) at any moment (t ≥ τ ) ∈ T, determines uniquely the system behavior (i.e., the system (internal, output) state and its output response) for all (t > τ ) ∈ T, respectively. The minimal number n(.) of linearly independent variables S(·)i on T, i = 1, 2, ... , n(.) , the values S(·)i (τ ) of which are at every moment τ ∈ T in the biunivoque correspondence with the system (internal: (·) = I, output: (·) = O) state at the same moment τ, is the state dimension and the variables S(·)i , i = 1, 2,. . ., n(.) , are, respectively, the (internal: (·) = I, output: (·) = O) state variables of the system. They compose, respectively, the (internal: (·) = I, output: (·) = O) state vector S (·) of the system, h iT S(·) = S(·)1 S(·)2 ...S(·)n(.) ∈ Rn , (.) = , I, O.

(1.15)

The space Rn(.) is, respectively, the (internal: (.) = I, output: (·) = O) state space of the system. The state vector function S (.) : T −→ Rn is the motion of the system. The instantaneous value of the (internal, output) state vector function S(·) (.) at an instant t ∈ T is the instantaneous (internal, output) state vector S(·) (t) at the instant t, respectively The plant desired state behavior is denoted by Sd (t).


13

This definition broadens and generalizes the well known and commonly accepted definition of the state of the dynamical in general, control in particular, systems. In what follows the term mathematical system denotes the accepted mathematical model (description) of the corresponding physical system. The system explicit internal dynamics variable is its internal (dynamics) state variable SI . This is typical for the ISO, EISO, and HISO systems. The IO and IIO systems possess the explicit output dynamics, too. The internal dynamics of the IO systems has not been well-studied directly. The system output dynamics variable is its output (dynamics) state variable SO . The IO system internal dynamics is simultaneously its output dynamics so that SI = SO = S, where S is the system full state variable SF , SF = S. The internal dynamics of the ISO, EISO, and HISO systems determines completely their output dynamics in the free regime so that for them SI = SO = SF = S, too. The IIO system internal dynamics and output dynamics are explicit and different so that SI 6= SO and the full state variable is the vector variable   .. T T T SF = S = SI . SO . The properties of the system determine the form and the character of the system state vector S: • The Input-Output (IO) systems are described by the ν-th order linear vector differential input-output, i.e., the output state, equation of the output vector Y ∈RN ,  T .. .. .. Y = Y1 . Y2 . ... . YN ∈ RN , Yi ∈ R, i = 1, 2, ..., N. (1.16) Their extended output vector Yν−1 ,   .. (ν−1)T T ν−1 T .. (1)T .. Y = Y .Y . ... . Y ∈ RνN , n = νN,

(1.17)

is their state vector SIO , which is also their internal state vector SIOI , their output state vector SIOO , and their full state vector SF , SIOI = SIOO = SIOF = SIO =Yν−1 ∈ Rn , n = νN.

(1.18)


i

CHAPTER 1. INTRODUCTION • The Input-State-Output (ISO) systems are determined by the first order linear vector differential equation in the vector X (1.20), which is the (internal) state equation, by the algebraic output vector equation of the output vector Y, and the only derivative in them is the first derivative of the state vector. Their state vector SISO is the vector X, which is also their internal state vector SISOI and their full state vector SISOF : SISOI = SISOF = SISO =X ∈ Rn .

(1.19)

They do not possess the output state vector SO because they do not have an independent output dynamics. Their output equation does not contain any derivative of the output vector. • The Extended Input-State-Output (EISO) systems are determined by the first order linear vector differential equation in the vector X (1.20), 

. . . X = X1 .. X2 .. ... .. Xn

T

∈ Rn , Xi ∈ R, ∀i = 1, 2, ..., n,

(1.20)

which is the (internal) state equation, by the algebraic output vector equation of the output vector Y, and, in addition to the first derivative of the state vector, there are derivatives of the input vector only in the state equation. Their state vector SEISO is the vector X (1.20) that is also the internal state vector SEISOI , and the full state vector SEISOF : SEISOI = SEISOF = SEISO = X ∈ Rn ,

(1.21)

They do not possess the output state vector SO for the same reason for which the ISO systems do not have the output state vector. Note 15 On the highest derivative of the input vector In order to avoid the problem of the appearance of impulse discontinuities in the system behavior the systems theory and the control theory restrict the order of the highest derivative of the input vector to be at most equal to the system order. However, the problem of the appearance of impulse discontinuities in the system behavior does not exist if the input vector function is defined and continuously differentiable µ−times, where µ is the order of the highest input vector derivative acting on the system. For its physical origin see in the sequel Note 50 (Subsection 2.1.1).


15

• The Higher Order-Input-State-Output (HISO) systems are characterized by the α-th order linear vector differential equation, i.e., the α-th order (internal) state equation, in the substate vector R, 

. . . R = R1 .. R2 .. ... .. Rρ

T

∈ Rρ , Ri ∈ R, i = 1, 2, ..., ρ,

(1.22)

and are additionally determined by the algebraic output vector equation of the output vector Y. Their internal state vector SHISOI is the extended vector Rα−1 , α−1

R



T . T . . = R .. R(1) .. ... .. R(α−1)

T

T

∈ Rαρ , n = αρ,

(1.23)

which is also their full state vector Sf , SHISOI = SHISOf = SHISO = Rα−1 ∈ Rn , n = αρ.

(1.24)

They do not possess the output state vector SO . The derivatives of the input vector can exist only in the state equation. • The Input-Internal and Output state (IIO) systems are characterized by the α-th order linear vector differential equation, i.e., by the α-th order internal state equation, in the substate vector R, and by the linear output vector ν-th order differential equation, i.e., by the output state equation of the output vector Y. Their extended vector Rα−1 (1.24) is their internal state vector SIIOI (1.25), SIIOI = Rα−1 ∈ RnI , nI = αρ,

(1.25)

and their output state vector SIIOO is the extended output vector Yν−1 , SIIOO = Y

ν−1

  .. (ν−1)T T T .. (1)T .. = Y .Y . ... . Y ∈ RnO , nO = νN.

(1.26) Their full state vector SIIOf , which is their state vector SIIO , is composed of their internal state vector SIIOI = Rα−1 and of their output state vector SIIO = Yν−1 ,    α−1  SIIOI R SIIOf = = = SIIO ∈ Rn , n = αρ + νN. (1.27) SIIOO Yν−1


CHAPTER 1. INTRODUCTION

Comment 16 The state variables and the state vectors defined by (1.16)(1.27) have the full physical sense (for more details, see Note 28 in Section 2.1 and Note 42 in Section 3.1). Definition 17 System state, motion, and response The system state vector S(t) at a moment t ∈ T is the vector value of the system motion S(.; t0 ;S 0 ; I) at the same moment t: S (t) ≡ S(t; t0 ; S0 ; I) =⇒ S (t0 ) ≡ S(t0 ; t0 ; S0 ; I) ≡ S0 .

1.5 Control

Definition 18 Control variable and control vector An input variable U of a system (e.g., of a plant) that acts, together with its µ derivatives, on the system by using information about the system demanded behavior in order to force the system to realize its demanded behavior under the system nominal conditions and to force the system real behavior to be sufficiently close to the system demanded behavior under perturbed conditions is the control variable for the system. If and only if there are several, e.g., r, control variables U1 , U2 , ... , Ur , then they form the control vector (for short: control) U, 

. . . U = U1 .. U2 .. ... .. Ur

T

∈Rr ,

(1.28)

and together with their µ derivatives that act on the system form the extended control vector Uµ , Uµi

  .. (µ) T (1) .. i .. = Ui . Ui . ... . Ui ∈Rµ+1 , i = 1, 2, ..., r, µ

U =



(µ) . U1 ..

(µ) U2

.. . . ... .. U(µ) r

T

∈R(µ+1)r .

(1.29) (1.30)

The instantaneous values of the control variables Ui and of the control vector U at an instant t ∈ T are Ui (t) and U (t), respectively. A system that creates, generates, the control for the given system is the controller C for the given system. Its output vector YC is the control vector U, YC = U. The physical nature of a control variable can be anyone.


17

Note 19 Rejection or compensation? In this book we accept to use the term “compensation (for disturbance action)” rather than the term “rejection (the disturbance action)” for the reasons explained in [41, Remark 134, p. 62] and [45, Remark 234, pp. 169, 170]. Definition 20 Control system The system composed of a plant and of its controller is the control system CS of the plant. Note 21 Plant input and output vectors The plant P input vectors are in general the disturbance vector D and the control vector U, so that the plant input vector IP has two subvectors: IP =

DT



UT

T

∈ Rd+r .

(1.31)

The plant P output vector Yp is in general denoted by Y, Yp = Y ∈ RN .

(1.32)

Note 22 Controller input and output vectors The controller C input vectors are in general the disturbance vector D, the plant output vector Y, and the plant desired output vector Yd , so that the controller input vector in general IC =



DT

YT

T

YdT

∈ Rd+2N ,

(1.33)

The feedback controller Cf input vectors are in general the plant output vector Y and the plant desired output vector Yd so that the feedback controller input vector ICf =



YT

YdT

T

∈ R2N ,

(1.34)

Usually we treat mathematically the output error vector e, e = Yd − Y,

(1.35)

as the feedback controller input vector, ICf = e,

(1.36)


CHAPTER 1. INTRODUCTION

although the controller receives the signals on Yd and Y, determines their difference, i.e., the output error vector e = Yd − Y, and creates the error signal ξe, usually proportional to e, ξe = ke e. The controller C output vector YC in general, and the feedback controller Cf output vector YCf in particular, are the control vector U, YC = YCf = U ∈ Rr .

(1.37)
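A minimal sketch of the proportional error signal described in Note 22 is given below; the gain value and the signal values are illustrative assumptions, not data from the book.

```python
import numpy as np

k_e = 2.0                            # proportional gain k_e (illustrative assumption)
Y_d = np.array([1.0, 0.0])           # plant desired output vector Y_d
Y   = np.array([0.8, -0.1])          # measured plant output vector Y

e    = Y_d - Y                       # output error vector e = Y_d - Y, cf. (1.35)
xi_e = k_e * e                       # error signal xi_e = k_e * e (Note 22)
U    = xi_e                          # a purely proportional controller output Y_C = U (illustration only)
```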

Note 23 Control system input and output vectors The control system CS input vectors and the closed loop, i.e., feedback, control system CSf input vectors are in general the disturbance vector D and the plant desired output vector Yd so that the control system input vector  T ICS = ICSf = DT YdT ∈ Rd+2N . (1.38) The control system CS output vector YC and the closed loop, i.e., feedback, control system CSf output vector YCf are the same and are the plant output vector YP , YCS = YCf = YP ∈ RN .

(1.39)


i

Chapter 2
IO systems

2.1 IO system mathematical model

2.1.1 Time domain

This section deals with physical dynamical systems in general, and control systems in particular, that are mathematically described directly in the form of a time-invariant linear vector Input-Output (IO) differential equation of the classical form (2.1),

Σ_{k=0}^{ν} A_k Y^{(k)}(t) = Σ_{k=0}^{η} D_k D^{(k)}(t) + Σ_{k=0}^{µ} B_k U^{(k)}(t) = Σ_{k=0}^{ξ} H_k I^{(k)}(t), ∀t ∈ T_0,

ν ≥ 1,  ξ = max(η, µ),  Y^{(k)}(t) = d^k Y(t)/dt^k,  0 ≤ η ≤ ν,  0 ≤ µ ≤ ν,
A_k ∈ R^{N×N}, D_k ∈ R^{N×d}, B_k ∈ R^{N×r}, k = 0, 1, ..., ν,  det A_ν ≠ 0,
η < ν ⟹ D_i = O_{N,d}, i = η + 1, η + 2, ..., ν,
µ < ν ⟹ B_i = O_{N,r}, i = µ + 1, µ + 2, ..., ν.   (2.1)
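A minimal numerical sketch of an IO model of the form (2.1) follows, assuming N = 1, ν = 2, µ = 1 and no disturbance; the coefficient and signal values are illustrative assumptions. It forms the extended matrices of the compact notation from Section 1.2 and uses det A_ν ≠ 0, as required in (2.1), to solve for the highest output derivative.

```python
import numpy as np

# Illustrative IO plant (2.1) with N = 1, nu = 2, mu = 1 and no disturbance:
#   A2 Y''(t) + A1 Y'(t) + A0 Y(t) = B0 U(t) + B1 U'(t),  det A2 != 0.
A2, A1, A0 = np.array([[1.0]]), np.array([[0.5]]), np.array([[2.0]])
B0, B1     = np.array([[1.0]]), np.array([[0.3]])

A_ext = np.hstack([A0, A1, A2])   # extended matrix A^(2) = [A0 A1 A2] of Section 1.2
B_ext = np.hstack([B0, B1])       # extended matrix B^(1) = [B0 B1]

# Because det A_nu != 0, the model can be solved for the highest output derivative:
Y, dY = np.array([0.1]), np.array([-0.2])   # assumed instantaneous values of Y, Y'
U, dU = np.array([1.0]), np.array([0.0])    # assumed instantaneous values of U, U'
ddY = np.linalg.solve(A2, B0 @ U + B1 @ dU - A1 @ dY - A0 @ Y)   # Y''(t)
```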

Note 24 System, plant, and control system
If and only if there is k ∈ {0, 1, ..., µ} such that B_k ≠ O_{N,r}, then the IO system (2.1) becomes the IO plant (2.1) (Definition 9, Section 1.4). Otherwise, the IO system (2.1) represents the IO control system, ξ = η and H_k ≡ [ D_k  O_{N,r} ].


CHAPTER 2. IO SYSTEMS

The disturbance vector D (1.12) (Section 1.4) and the control vector U (1.28) (Section 1.5) compose the system input vector I (2.4) (Section 1.4):   D I = IIO = ∈Rd+r , M = d + r. (2.2) U We accept the following: Condition 25 The matrix Aν of the IO system (2.1) is nonsingular, i.e., it obeys detAν 6= 0. (2.3) Note 26 Throughout this book we accept the validity of Condition 25. Note 27 The condition on the nonsingularity of the matrix Aν imposed in Condition 25 guarantees ! k=ν X k Ak s 6= 0, ∃s ∈ C =⇒ det k=0

and permits the solvability of the Laplace transform of (2.1) for Y(s) [40]. Besides, the condition detAν 6= 0 is a sufficient condition, but not necessary condition, for all the output variables of the system (2.1) to have the same order ν of their highest derivatives. Rk is the k-dimensional real vector space, k ∈ {1, 2, ...}, (Section 1.4). denotes the k-dimensional complex vector space (Section 1.4). OM xN is the zero matrix in RM xN , and ON is the zero matrix in RN xN , ON = ON xN . The vector 0k ∈ Rk is the zero vector in Rk and 1k ∈ Rk is the unit vector in Rk ,, (Section 1.2). The total input vector Ck

I = [I1 I2 ... IM ]T ∈RM ,

(2.4)

D = [D1 D2 ... Dd ]T ∈Rd ,

(2.5)

its subvectors T

r

(2.6)

Y = [Y1 Y2 ... YN ]T ∈RN ,

(2.7)

U = [U1 U2 ... Ur ] ∈R , (Definition 10), and the total output vector

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 21 — #32

i

2.1. IO SYSTEM MATHEMATICAL MODEL

i

21

(Definition 12) (Section 1.4). The values Ii , Dj , Uk , and Yl are the total values of the input and the output variables, respectively. The total value of a variable signifies that its value is measured with respect to its total zero, if it has the total zero value, and if it does not have the total zero value then an appropriate value is accepted to play the role of the total zero value. The form of the system mathematical model (2.1) is too complex and makes the system study unreasonably cumbersome. We simplify it by applying the elegant and simple compact notation for the extended matrices proposed in [35] and in brief explained in Note 6 (Section 1.2). At first we introduce the extended matrices A(ν) , B (µ) , and D(η) ,   .. .. .. (ν) A = A0 . A1 . ... . Aν ∈ RN x(ν+1)N ,   . . . B (µ) = B0 .. B1 .. ... .. Bµ ∈ RN x(µ+1)r ,   .. .. .. (η) D = D0 . D1 . ... . Dη ∈ RN x(η+1)d , (2.8) and then the very simple extended vectorsDη (t), Iξ (t), Uµ (t), and Yν (t) : 

. . . T T D (t) = D (t) .. D(1) (t) .. ... .. D(η) (t) η

T

T

∈ R(η+1)d ,

 T .. (1)T .. .. (ξ)T T I (t) = I (t) . I (t) . ... . I (t) ∈ R(ξ+1)M ξ

(2.9)

(2.10)

 T .. (1)T .. .. (µ)T T U (t) = U (t) . U (t) . ... . U (t) ∈ R(µ+1)r

(2.11)

 T .. (1)T .. .. (ν)T T Y (t) = Y (t) . Y (t) . ... . Y (t) ∈ R(ν+1)N ,

(2.12)

µ

ν

They induce the corresponding initial vectors Dη−1 = Dη−1 (0), Iξ−1 = 0 0 µ−1 ν−1 ξ−1 µ−1 ν−1 I (0), U0 = U (0), and Y0 =Y (0). We repeat that the upper index ν in the parentheses in A(ν) makes A(ν) essentially different from the ν-th power Aν of A,   .. .. .. (ν) (2.13) A = A0 . A1 . ... . Aν 6= Aν = AA....A | {z } . ν times

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 22 — #33

i

22

i

CHAPTER 2. IO SYSTEMS

Notice also that for the extended vector Yv the superscript ν is not in the parentheses in order to distinguish it from the ν-th derivative dν Y(t)/dtν of Y(t),  T .. (1)T .. .. (ν)T dν Y(t) v T . (2.14) Y (t) = Y (t) . Y (t) . ... . Y (t) 6= Y(ν) (t) = dtν The application of the above compact notation (2.8)-(2.12) to the IO vector differential equation (2.1) transforms it into the following simple, elegant, and compact form: A(ν) Yν (t) = D(η) Dη (t) + B (µ) Uµ (t) = H (µ) Iµ (t), ∀t ∈ T0 ,    T .  T T (µ) (µ) .. (µ) µ (µ) (µ) . H = D .B , I (t) = D . U . Note 28 The state vector SIO (Section 1.4) by:  T . ν−1 SIO = Y = YT .. Y(1)

(2.15)

of the IO system (2.15) is defined in (1.18) T .. . . ... .. Y(ν−1)

T

∈ Rn , n = νN,

(2.16)

This new vector notation Yν−1 has permitted us to define the state of the IO system (2.15) by preserving the physical sense. It enabled us to establish in [40] the direct link between the definitions of the Lyapunov and of BI stability properties with the corresponding conditions for them in the complex domain. It enables us to discover in what follows the complex domain criteria for observability, controllability, and trackability directly from their definitions. Such criteria possess the complete physical meaning. The state variables and the state vector SIO = Yν−1 , Equation (2.16), have the well-known form called phase form, i.e., SIO = Yν−1 is the phase (state) vector.

2.1.2

Complex domain

The following complex matrix functions [35], [36], [40], the first one of which (k) is Si (.) : C −→ C i(k+1)xi , essentially simplify the system study via the complex domain,   .. 1 .. 2 .. .. k T (k) 0 Si (s) = s Ii . s Ii . s Ii . ... . s Ii ∈ C i(k+1)xi , (k, i) ∈ {(µ, M ) , (ν, N )} .

(2.17)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 23 — #34

i

i

2.1. IO SYSTEM MATHEMATICAL MODEL

23

The matrix Ii is the i-th order identity matrix, Ii ∈ Rixi . Another complex (ς−1) function is Zk (.) : C → C(ς+1)kxςk ,   Ok Ok Ok ... Ok  s0 Ik Ok Ok ... Ok  (ς−1)  , ς ≥ 1, Zk (s) =   ... ... ... ... ...  sς−1 Ik sς−2 Ik sς−3 Ik ... s0 Ik (1−1)

ς = 1 =⇒ Zk (ς−1)

Zk

(0)

(s) = Zk (s) = s0 Ik = Ik ,

(s) ∈ C(ς+1)kxςk , (ς, k) ∈ {(µ, M ) , (ν, N )} , (ς−1)

where the final entry of Zk

(2.18)

(s) is always s0 Ik . (ς−1)

(−1)

Note 29 [35], [36], [40] If ς = 0 then the matrix Zk (s) = Zk (s) is not defined and should be completely omitted rather than to be replaced by (ζ−1) the zero matrix. Because the matrix Zk (s) is not defined for ζ ≤ 0, i.e., it does not exist for ζ ≤ 0. Derivatives exist only for natural numbers, i.e., (ζ−1) Y(ς) (t) can exist only for ς ≥ 1. Matrix function Zk (.) is related to the Laplace transform of derivatives only. It is well known that the Laplace transform of k=ν X

Ak Y(k) (t)

(2.19)

k=0

contains the Laplace transform Y(s) of Y(t) multiplied by a matrix polynomial in s and a double sum containing the products of powers of the complex variable s and initial values of Y(t) and of its derivatives up to the order of ν − 1, all multiplied by the corresponding system matrices. The references [35], [36], and [40] contain the proof that the simple, compact, and elegant form of the Laplace transform of (2.19) reads: (k=ν ) X (ν) (ν−1) (k) L Ak Y (t) = A(ν) SN (s)Y(s) − A(ν) ZN (s)Y0ν−1 , (2.20) k=0 (ν)

(ν−1)

where the matrices SN (s) and ZN (s) are defined in (2.17) and in (2.18), respectively. Analogously, (k=η ) X (η) (η−1) (k) L Dk D (t) = D(η) Sd (s)D(s) − D(η) Zd (s)Dη−1 (0), (2.21) k=0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 24 — #35

i

24

i

CHAPTER 2. IO SYSTEMS

L

(k=µ X

) (k)

Bk U

(t)

= B (µ) Sr(µ) (s)U(s) − B (µ) Zr(µ−1) (s)Uµ−1 (0).

(2.22)

k=0

Equations (2.20), (2.21) and (2.22) determine the simple compact form of the Laplace transform of (2.1), hence of (2.15): (ν)

(ν−1)

A(ν) SN (s)Y(s) − A(ν) ZN (η)

(s)Yν−1 (0) =

(η−1)

= D(η) Sd (s)D(s) − D(η) Zd

(s)Dη−1 (0)+

+B (µ) Sr(µ) (s)U(s) − B (µ) Zr(µ−1) (s)Uµ−1 (0).

(2.23)

This equation determines Y(s) :  −1 (ν) Y(s) = A(ν) SN (s) •   .. (µ) (µ) .. .. .. (ν) (ν−1) (η) (η) (η) (η−1) (µ) (µ−1) • D Sd (s).B Sr (s). − D Zd (s). − B Zr (s).A ZN (s)   D(s)  U(s)   η−1   • (2.24)  D (0)  = FIO (s) VIO (s) ,  Uµ−1 (0)  Yν−1 (0) (ν)

since the inverse of A(ν) SN (s) exists due to Condition 25. The plant full transfer function matrix results from (2.24)  −1 (ν) FIO (s) = A(ν) SN (s) •   .. (µ) (µ) .. .. .. (ν) (ν−1) (η) (η−1) (µ) (µ−1) (η) (η) (s). − B Zr (s).A ZN (s) • D Sd (s).B Sr (s). − D Zd (2.25) The inverse Laplace transform of FIO (s) is the IO system full fundamental matrix ΨIO (t) , ΨIO (t) = L−1 {FIO (s)}, and the inverse Laplace transform of   (ν)    −1 adj A(ν) SN (s) (ν) (ν) (ν) = p−1 (s) adj A S (s) (2.26) A(ν) SN (s) = IO N pIO (s) is the IO system fundamental matrix ΦIO (t) [40]:  −1  −1 (ν) (ν) ΦIO (t) = L A SN (s) .

(2.27)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 25 — #36

i

2.1. IO SYSTEM MATHEMATICAL MODEL Equation (2.25) discovers that the polynomial pIO (s) ,   (ν) pIO (s) = det A(ν) SN (s) ,

i

25

(2.28)

is the characteristic polynomial of the IO system (2.15) and the denominator polynomial of all its transfer function matrices due to Equation (2.25) as shown also in what follows. Equation (2.25) induces also the matrix polynomial LIO (s) defined by   (ν) LIO (s) = adj A(ν) SN (s) B (µ) Sr(µ) (s), LIO (s) ∈ CN ×r . (2.29) It is the numerator matrix polynomial of the plant transfer function matrix GIOU (s) relative to the control vector U:  −1 (ν) GIOU (s) = A(ν) SN (s) B (µ) Sr(µ) (s) =   (ν) (µ) adj A(ν) SN (s) B (µ) Sr (s) LIO (s) = , = pIO (s) pIO (s)

(2.30)

Equation (2.24) determines also all other specific transfer function matrices of the IO system (2.15): - With respect to the disturbance D:   (ν) (η) adj A(ν) SN (s) D(η) Sd (s) , (2.31) GIOD (s) = pIO (s) - With respect to the initial conditions Dη−1 (0) of the disturbance D:   (ν) (η−1) (ν) adj A SN (s) D(η) Zd (s) GIOD0 (s) = − , (2.32) pIO (s) - With respect to the initial conditions Uµ−1 (0) of the control U:   (ν) (µ−1) adj A(ν) SN (s) B (µ) Zr (s) GIOU0 (s) = − , (2.33) pIO (s) - With respect to the initial conditions Yν−1 (0) of the output Y:   (ν) (ν−1) adj A(ν) SN (s) A(ν) ZN (s) GIOY0 (s) = . (2.34) pIO (s)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 26 — #37

i

26

i

CHAPTER 2. IO SYSTEMS

The Laplace transform VIO (s) of the IO system action vector VIO (t) reads   D(s)  U(s)      η−1 I (s) IO  (2.35) VIO (s) =   D (0)  = CIO0 .  Uµ−1 (0)  Yν−1 (0) The Laplace transform IIO (s) of the IO system input vector IIO (t) reads:   D(s) . (2.36) IIO (s) = U(s) The vector CIO0 of all IO system initial conditions has the following form:  η−1  D (0) CIO0 =  Uµ−1 (0)  . (2.37) ν−1 Y (0)  The IO system output response Y t; Y0ν−1 ; D; U is the inverse Laplace transform of (2.24):  (2.38) Y t; Y0ν−1 ; D; U = L−1 {FIO (s) VIO (s)} . Example 30 Appendix B.1 contains an IO system Example 227. What follows shows an important physical meaning of the system full transfer function matrix F (s). For the definition, types and properties of the Dirac unit impulse δ(.), see [40]. Definition 31 [40, Definition 181, p. 171, Note 182, p. 172] A matrix function ΨIO (.) : T −→ RN x[(µ+1)M +νN ] is the full fundamental matrix function of the IO system (2.15) if and only if it obeys both (i) and (ii) for an arbitrary input vector function I(.), Equation (2.2), and for arbitrary initial conditions Iµ−1 and Y0ν−1 − , 0− (i)    Z t  I(t − τ )   dτ = Y(t; Y0ν−1 ΨIO (τ )  δ(t − τ )Iµ−1 − ; I) = − 0  0−  δ(t − τ )Y0ν−1 −    Z t  I(τ )   = ΨIO (t − τ )  δ(τ )Iµ−1 dτ, (2.39) 0−  ν−1 0−  δ(τ )Y0−

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 27 — #38

i

i

2.1. IO SYSTEM MATHEMATICAL MODEL

27

equivalently Z

Y(t; Y0ν−1 − ; I) Z

t

ν−1 ΓIO (τ )I(t − τ )dτ + ΓIOi0 (t)Iµ−1 − + ΓIOy0 (t)Y0− , 0 0− Z t [ΓIO (t − τ )I(τ )dτ ] , [ΓIO (τ )I(t − τ )dτ ] =

=

t

0−

0−

I(t − τ ) = I(t, τ ), ΓIO (t − τ ) = ΓIO (t, τ ) ,

(2.40)

and 

 .. .. ΨIO (t) = ΓIO (t) . ΓIOi0 (t) . ΓIOy0 (t) , ΓIO (t) ∈ RN xM , ΓIOi0 (t) ∈ RN xµM , ΓIOy0 (t) ∈ RN xνN ,

(2.41)

(ii)   .. ΓIOi0 (0 ) = ΓIOi0 1 . ON ,(µ−1)M where −

Z



ΓIOi0 1 (0 )i0− = − −

0−

0−



ΓIOy0 (0 ) ≡ IN

[ΓIO (τ )i(t − τ )dτ ] ,  .. . ON ,(ν−1)N .

(2.42)

Note 32 [40, Equation (10.4), p. 172] The second Equation (2.39) under (i) of Definition 31 results from its first equation and from the properties of δ(.): Y(t; Y0ν−1 − ; I) = Z

t

= 0−

ΓIO (τ )I(t, τ )dτ + ΓIOi0 (t)Iµ−1 + ΓIOy0 (t)Y0ν−1 − , t ∈ T0 . 0−

(2.43)

The matrix ΓIO (t) is the output fundamental matrix of the IO system (2.15), Theorem 33 [40, Theorem 183, pp. 172, 173] (i) The full fundamental matrix function ΨIO (.) of the IO system (2.15) is the inverse of the left Laplace transform of the system full transfer function matrix FIO (s), ΨIO (t) = L−1 {FIO (s)} .

(2.44)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 28 — #39

i

28

i

CHAPTER 2. IO SYSTEMS

(ii) The full transfer function matrix FIO (s) of the IO system (2.15) is the left Laplace transform of the system full fundamental matrix ΨIO (t), FIO (s) = L − {ΨIO (t)} .

(2.45)

(iii) The submatrices ΓIO (t), ΓIOi0 (t) and ΓIOy0 (t) are the inverse Laplace transforms of GIO (s), GIOi0 (s) and GIOy0 (s), respectively, n h io (µ) ΓIO (t) = L−1 {GIO (s)} = L−1 ΦIO (s) B (µ) SM (s) , n h io (µ−1) ΓIOi0 (t) = L−1 {GIOi0 (s)} = −L−1 ΦIO (s) B (µ) ZM (s) , n h io (ν−1) ΓIOy0 (t) = L−1 {GIOy0 (s)} = L−1 ΦIO (s) A(ν) ZN (s) , (2.46) where ΦIO (s) is the left Laplace transform of the IO system fundamental matrix function ΦIO (.) : T −→ RN ×N ,, ΦIO (s) = L − {ΦIO (t)} , ΦIO (t) = L−1 {ΦIO (s)} ,

(2.47)

and ΦIO (s) =



−1 (ν) A(ν) SN (s) ,

ΦIO (t) = L

−1



−1 (ν) A(ν) SN (s)

 .

(2.48)

(iv) The IO system full fundamental matrix ΨIO (t) and its fundamental matrix ΦIO (t) are linked as follows:    . . (µ) (µ−1) (ν−1) ΨIO (t) = L−1 ΦIO (s) B (µ) SM (s) .. − B (µ) ZM (s) ..A(ν) ZN (s) ,   .. .. (ν) (ν−1) (µ) (µ) (µ) (µ−1) ΨIO (s) = ΦIO (s) B SM (s) . − B ZM (s) .A ZN (s) . (2.49)

2.2

IO plant desired regime

We accept the following definition of a desired regime by following [36], [40]: Definition 34 Desired regime A system (plant) is in a desired (called also: nominal or nonperturbed) regime on T0 (for short: in a desired regime) if and only if it realizes its desired (output) response Yd (t) all the time on T0 , Y(t) = Yd (t), ∀t ∈ T0 .

(2.50)

The terms nominal and nonperturbed are meaningful in general, i.e., for any system, e.g., for plants, controllers and control systems; while the term desired has the full sense only for plants.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 29 — #40

i

2.2. IO PLANT DESIRED REGIME

i

29

Proposition 35 [40] In order for the plant to be in a desired (nominal, nonperturbed) regime, i.e., Y(t) = Yd (t), ∀t ∈ T0 , it is necessary that the initial real output vector is equal to the initial desired output vector, Y0 = Yd0 . The system cannot be in a nominal regime (on T0 ) if its initial real output vector is different from the initial desired output vector: Y0 6= Yd0 =⇒ ∃σ ∈ T0 =⇒ Y(σ) 6= Yd (σ). The real initial output vector Y(0) = Y0 is most often different from the desired initial output vector Yd (0) = Yd0 . The system is most often in a nondesired (non-nominal, perturbed, disturbed ) regime. Definition 36 Nominal control UN (.)] relative to [D(.), Yd (.)] of the IO plant (2.15) A control vector function U∗ (.) of the IO plant (2.15) is nominal relative to [D(.), Yd (.)], which is denoted by UN (.), if and only if U(.)] = U∗ (.) ensures that the corresponding real response Y(.) = Y*(.) to the input action of D(.) on the plant obeys Y*(t) = Yd (t) all the time as soon as all the internal and the output system initial conditions are desired (nominal, nonperturbed). This definition and (2.15) imply the following theorem: Theorem 37 [36, Theorem 50, pp. 46-48], [40, Theorem 56, pp. 49-52] In order for a control vector function U*(.) to be nominal for the IO plant (2.15) relative to [D(.), Yd (.)]: U*(.) = UN (.), it is necessary and sufficient that 1) and 2) hold: (µ) 1) rank rankB (µ) Sr (s) = N ≤ r, i.e., rankB (µ) = N ≤ r, and 2) any one of the following equations is valid: µ

B (µ) U* (t) = −D(η) Dη (t) + A(ν) Ydν (t), ∀t ∈ T0 ,

(2.51)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 30 — #41

i

30

i

CHAPTER 2. IO SYSTEMS

or, equivalently in the complex domain:   T   T −1 (µ) (µ) (µ) (µ) (µ) (µ) U*(s) = B Sr (s) B Sr (s) B Sr (s) • h i + * (µ) (µ−1) µ−1 (ν) (ν−1) B Zr (s)U* (0) + A(ν) SN (s)Yd (s) − ZN (s)Ydν−1 (0) − h i • . (η) (η−1) −D(η) Sd (s)D(s) − Zd (s)Dη−1 (0) (2.52) This theorem holds for all IO plants (2.15). Condition 38 The desired output response Yd (t) of the IO system (2.15) is realizable, i.e., N ≤ r. Both it and the nominal input IN (.) are known. The compact form of the IO plant (2.15) in terms of the deviations follows from (2.53), (2.55), (9.3), d = D − DN ,

(2.53)

i = I − IN

(2.54)

u = U − UN ,

(2.55)

y = Y − Yd ,

(2.56)

and (2.15): A(ν) yν (t) = D(η) dη (t) + B (µ) uµ (t) = H (µ) iµ (t), ∀t ∈ T0 .

(2.57)

Note 39 Equation (2.57) is the IO system model determined in terms of the deviations of all variables. It has exactly the same form, the same order, and the same matrices as the system model expressed in total values of the variables (2.15). They possess the same characteristics and properties by noting once more that y = 0N represents Y = Yd . For example, they have the same transfer function matrices, and the stability properties of yγ−1 = 0νN of (2.57) are simultaneously the same stability properties of Ydγ−1 (t) of (2.15).

2.3

Exercises

Exercise 40 1. Select an IO physical plant. 2. Determine its time domain IO mathematical model.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 31 — #42

i

i

Chapter 3

ISO systems 3.1

ISO system mathematical model

3.1.1

Time domain

The dynamical systems theory and the control theory have been mainly developed for the linear Input-State-Output (ISO) (dynamical, control) systems. Their mathematical models contain the state vector differential equation (3.1) and the output algebraic vector equation (3.2), dX(t) = AX(t) + DD(t) + BU(t) = AX(t) + P I(t), ∀t ∈ T0 , dt   .. n×n n×d n×r A∈R ,D ∈ R , B ∈ R , P = D . B ∈ Rn×(d+r) , (3.1) Y(t) = CX(t) + V D(t) + U U(t) = CX(t) + QI(t), ∀t ∈ T0 ,   .. N ×n N ×d N ×r C∈R , V ∈R ,U ∈R , Q = V . U ∈ RN ×(d+r) .

(3.2)

The ISO mathematical model (3.1), (3.2) is well-known also as the statespace system (description). Note 41 System, plant, and control system If and only if B 6= On,r then the ISO system (3.1), (3.2) becomes the ISO plant (3.1), (3.2) (Definition 9, Section 1.4). Otherwise, the ISO system (3.1), (3.2) represents the ISO control system. The state vector SISO of the ISO system (3.1), (3.2) is the vector X (1.19). 31 i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 32 — #43

i

32

i

CHAPTER 3. ISO SYSTEMS The fundamental matrix function ΦISO (., t0 ) ≡ Φ (., t0 ) : T −→ Rn×n

(3.3)

of the system (3.1), (3.2), Φ (t, t0 ) = eAt eAt0

−1

= eA(t−t0 ) ∈ Rn×n ,

(3.4)

has the following well-known properties: detΦ (t, t0 ) 6= 0, ∀t ∈ T0 , ∀t0 ∈ T,

(3.5)

Φ (t, t0 ) Φ (t0 , t) = eA(t−t0 ) eA(t0 −t) ≡ eA0 = In =⇒ Φ (t0 , t) = Φ−1 (t, t0 ) ,

(3.6)

Φ(1) (t, t0 ) = AΦ (t, t0 ) = Φ (t, t0 ) A.

(3.7)

By applying the classical method to solve the state equation (3.1) by its integration we determine its solution: Zt X (t; t0 ; X0 ; I) = Φ (t, t0 ) X0 +

Φ (t, τ ) P I (τ ) dτ = t0

Zt



 Φ (t0 , τ ) P I (τ ) dτ  , ∀t ∈ T0 .

= Φ (t, t0 ) X0 +

(3.8)

t0

This and the system output Equation (3.2) determine the system response: Zt Y (t; t0 ; X0 ; I) = CΦ (t, t0 ) X0 +

CΦ (t, τ ) P I (τ ) dτ + QI(t) = t0



Zt

 Φ (t0 , τ ) P I (τ ) dτ  + QI(t), ∀t ∈ T0 ,

= CΦ (t, t0 ) X0 +

(3.9)

t0

Note 42 The IO system (2.1), Section 2, can be formally mathematically transformed into the equivalent ISO system (3.1), (3.2) (for such transformation in the general case of the IO system (2.15) see Appendix C.1 and for more details: [40, Appendix C.1, pp. 417-420]). The obtained state variables are without any physical meaning if µ > 0 in the IO system (2.15). Also, the ISO system (3.1), (3.2) can be transformed into the IO system (2.15) (for the transformation in the general case see [40, Appendix C.2, p. 421]).

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 33 — #44

i

3.1. ISO SYSTEM MATHEMATICAL MODEL

3.1.2

i

33

Complex domain

We recall (1.3) (Section 1.2) that I is the identity matrix of the dimension n: In = I. The application of the Laplace transform to the ISO system (3.1), (3.2) gives its complex domain description: sX(s) − X0 = AX(s) + DD(s) + BU(s), Y(s) = CX(s) + V D(s) + U U(s). We determine first X(s) from the first equation, and then replace the solution into the second equation to get the well-known result for Y(s) : X(s) = (sI − A)−1 [DD(s) + BU(s) + X0 ] , Y(s) = C (sI − A)

−1

[DD(s) + BU(s) + X0 ] + V D(s) + U U(s),

(3.10) (3.11)

which we can set into the following forms:     D(s) . . X(s) = (sI − A)−1 D .. B .. I  U(s)  = FISOIS (s) VISO (s) , (3.12) X0   . . Y(s) = C (sI − A)−1 D + V .. C (sI − A)−1 B + U .. C (sI − A)−1 ·   D(s) ·  U(s)  = FISO (s) VISO (s) , (3.13) X0 where: - FISO (s) , FISO (s) =  . . = C (sI − A)−1 D + V .. C (sI − A)−1 B + U .. C (sI − A)−1 , 

(3.14)

is the ISO plant (3.1), (3.2) input to output (IO) full transfer function matrix, the inverse Laplace transform of which is the plant IO full fundamental matrix ΨISO (t) [40], ΨISO (t) = L−1 {FISO (s)} ,

(3.15)

pISO (s) = det (sI − A) ,

(3.16)

- pISO (s) ,

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 34 — #45

i

34

i

CHAPTER 3. ISO SYSTEMS

is the characteristic polynomial of the ISO plant (3.1), (3.2) and the denominator polynomial of all its transfer function matrices, - GISOD (s) , GISOD (s) = C (sI − A)−1 D + V = = p−1 ISO (s) [Cadj (sI − A) D + pISO (s) V ] ,

(3.17)

is the ISO plant (3.1), (3.2) transfer function matrix relative to the disturbance D, - GISOU (s) , GISOU (s) = C (sI − A)−1 B + U = LISO (s) = p−1 , ISO (s) [Cadj (sI − A) B + pISO (s) U ] = pISO (s)

(3.18)

is the ISO plant (3.1), (3.2) transfer function matrix relative to the control U, -GISOX0 (s) , GISOX0 (s) = C (sI − A)−1 = p−1 ISO (s) Cadj (sI − A) ,

(3.19)

is the ISO plant (3.1), (3.2) transfer function matrix relative to the initial state X0 , - VISO (s) and CISO0 ,     IISO (s) D(s) , CISO0 = X0 , (3.20) VISO (s) = , IISO (s)= CISO0 U(s) are the Laplace transform of the action vector VISOP (t) and the vector CISO0 of all plant initial conditions, respectively. Example 43 Let the ISO system (3.1), (3.2) be defined by  dX1         −2 0 1 X1 7 5 dt  dX2  =  3 2 2  X2  +  10 −6  U1 , dt U2 dX3 0 4 −3 X3 −4 3 | {z } dt {z }| {z } | {z } U | {z } | A

X(1)

B

X





   X    Y1 −1 1 4  1  6 7 U1 X2 + = .. Y2 0 2 3 5 8 U2 X | {z } | {z } | {z }| {z } 3 | {z } C U Y U 

(3.21)

(3.22)

X

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 35 — #46

i

i

3.1. ISO SYSTEM MATHEMATICAL MODEL

35

The resolvent matrix −1 s+2 0 −1 =  −3 s − 2 −2  = 0 −4 s + 3 

(sI3 − A)−1   =

s2 +s−14 s3 +3s2 −12s−40 3s+9 s3 +3s2 −12s−40 s+2 12 (s+2)(s3 +3s 2 −12s−40)

4 s3 +3s2 −12s−40 s2 +5s+6 s3 +3s2 −12s−40 4s+8 s3 +3s2 −12s−40

s−2 s3 +3s2 −12s−40 2s+7 s3 +3s2 −12s−40 s2 −4 s3 +3s2 −12s−40

 (3.23)

 

The equations (3.12) for D = O3,d yields: FISOIS (s) =   =

s2 +s−14 s3 +3s2 −12s−40 3s+9 s3 +3s2 −12s−40 s+2 12 (s+2)(s3 +3s 2 −12s−40)

4

s−2

s3 +3s2 −12s−40 s2 +5s+6 s3 +3s2 −12s−40

s3 +3s2 −12s−40

4s+8 s3 +3s2 −12s−40



2s+7 s3 +3s2 −12s−40 s2 −4 s3 +3s2 −12s−40

 •





  7 5 1 0 0 ..   • 10 −6 0 1 0 = GISOISU (s) . GISOXoIS (s) , −4 3 0 0 1

(3.24)

where   GISOISU (s) = 

s2 +s−14 s3 +3s2 −12s−40 3s+9 s3 +3s2 −12s−40 s+2 12 (s+2)(s3 +3s 2 −12s−40)

4 s3 +3s2 −12s−40 2 s +5s+6 s3 +3s2 −12s−40 4s+8 s3 +3s2 −12s−40

s−2 s3 +3s2 −12s−40 2s+7 s3 +3s2 −12s−40 s2 −4 s3 +3s2 −12s−40

  •



 7 5 •  10 −6  =⇒ −4 3   GISOISU (s) =    GISOXoIS (s) = 

7s2 +3s−50 s3 +3s2 −12s−40 10s2 +63s+95 s3 +3s2 −12s−40 −4s2 +124s+264 (s+2)(s3 +3s2 −12s−40)

s2 +s−14 s3 +3s2 −12s−40 3s+9 s3 +3s2 −12s−40 s+2 12 (s+2)(s3 +3s 2 −12s−40)

5s2 +8s−100 s3 +3s2 −12s−40 −6s2 −9s+30 s3 +3s2 −12s−40 3s2 +36s+60 s3 +3s2 −12s−40

4 s3 +3s2 −12s−40 2 s +5s+6 s3 +3s2 −12s−40 4s+8 s3 +3s2 −12s−40

  ,

(3.25)

s−2 s3 +3s2 −12s−40 2s+7 s3 +3s2 −12s−40 s2 −4 s3 +3s2 −12s−40

  .

(3.26)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 36 — #47

i

36

i

CHAPTER 3. ISO SYSTEMS

Finally, FISOIS (s) = 



     =       

 

7s2 +3s−50 s3 +3s2 −12s−40 10s2 +63s+95 s3 +3s2 −12s−40 −4s2 +124s+264 (s+2)(s3 +3s2 −12s−40)

s2 +s−14 s3 +3s2 −12s−40 3s+9 s3 +3s2 −12s−40 12s+24 (s+2)(s3 +3s2 −12s−40)

5s2 +8s−100 s3 +3s2 −12s−40 −6s2 −9s+30 s3 +3s2 −12s−40 3s2 +36s+60 s3 +3s2 −12s−40

4 s3 +3s2 −12s−40 2 s +5s+6 s3 +3s2 −12s−40 4s+8 s3 +3s2 −12s−40

  

s−2 s3 +3s2 −12s−40 2s+7 s3 +3s2 −12s−40 s2 −4 s3 +3s2 −12s−40

T             

(3.27)

Equation (3.14), (Subsection 3.1.2), becomes for D = O3,d and V = O2,d ,   .. FISO (s) = GISO (s) . GISOX0 (s) =                          



−1 1 4 0 2 3



 T

•       2   4 s−2 s +s−14   3 2 3 2 3 2 s +3s −12s−40 s +3s −12s−40 s +3s −12s−40     3s+9 s2 +5s+6 2s+7   • s3 +3s2 −12s−40 s3 +3s2 −12s−40 s3 +3s2 −12s−40  •   2   4s+8 s −4 s+2 12 (s+2)(s3 +3s   2 −12s−40) s3 +3s  2 −12s−40 s3 +3s2 −12s−40        7 5   6 7   •  10 −6  + 5 8 −4 3     −1 1 4 •   0 2 3     2 s +s−14 4 s−2     s3 +3s2 −12s−40 s3 +3s2 −12s−40 s3 +3s2 −12s−40 2     3s+9 s +5s+6 2s+7  •   s3 +3s2 −12s−40 s3 +3s2 −12s−40 s3 +3s2 −12s−40 s+2 4s+8 s2 −4 12 (s+2)(s3 +3s2 −12s−40) s3 +3s2 −12s−40 s3 +3s2 −12s−40

             .            (3.28)

The system transfer function matrices and the vector VISO (s) are, in view

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 37 — #48

i

i

3.1. ISO SYSTEM MATHEMATICAL MODEL

37

of (3.18)-(3.20), (Subsection 3.1.2):

             

   •

            

GISOU (s) =  −1 1 4 • 0 2 3

s2 +s−14 s3 +3s2 −12s−40 3s+9 s3 +3s2 −12s−40 s+2 12 (s+2)(s3 +3s 2 −12s−40) 

4 s3 +3s2 −12s−40 2 s +5s+6 s3 +3s2 −12s−40 4s+8 s3 +3s  2 −12s−40

  •

(7s3 +19s2 +52s+105)(s+2)

(6s3 +2s2 +476s+1121)(s+2) 3

2

(s3 +3s2 −12s−40)(s+2)

+3s −12s−40)(s+2) GISO (s) =  (5s(s 3 +23s2 +638s+512 (s+2) )

(8s3 +21s2 −6s−80)(s+2)

(s3 +3s2 −12s−40)(s+2)

      

 

   •    

 GISOX0 (s) = 

4 s3 +3s2 −12s−40 2 s +5s+6 s3 +3s2 −12s−40 4s+8 s3 +3s2 −12s−40

(−s2 +50s+119)(s+2) (s+2)(s3 +3s2 −12s−40) (42s+90 )(s+2) (s+2)(s3 +3s2 −12s−40)

  , (3.29)

(s3 +3s2 −12s−40)(s+2)

GISOX0 (s) =  −1 1 4 • 0 2 3

s2 +s−14 s3 +3s2 −12s−40 3s+9 s3 +3s2 −12s−40 s+2 12 (s+2)(s3 +3s 2 −12s−40)

=⇒

            

  7 5 6 7   • 10 −6 + 5 8 −4 3



=

s−2 s3 +3s2 −12s−40 2s+7 s3 +3s2 −12s−40 s2 −4 s3 +3s2 −12s−40

             

s−2 s3 +3s2 −12s−40 2s+7 s3 +3s2 −12s−40 s2 −4 s3 +3s2 −12s−40

s2 +21s+34 s3 +3s2 −12s−40 2s2 +22s+36 s3 +3s2 −12s−40

       

=⇒

       

4s2 +3s−7 s3 +3s2 −12s−40 3s2 +4s+2 s3 +3s2 −12s−40

 . (3.30)

Finally,

        

   T T . GISO (s) FISO (s) = GISO (s) .. GISOX0 (s) = GTISOX0   (6s3 +2s2 +476s+1121)(s+2) (7s3 +19s2 +52s+105)(s+2) T 3 2 3 2 (s +3s −12s−40)(s+2) (s +3s −12s−40)(s+2)   (5s3 +23s2 +638s+512)(s+2) (8s3 +21s2 −6s−80)(s+2) (s3 +3s2 −12s−40)(s+2) (s3 +3s2 −12s−40)(s+2)  T 2 (−s +50s+119)(s+2) s2 +21s+34 4s2 +3s−7 2 −12s−40 s3 +3s2 −12s−40   (s+2)(s3 +3s2 −12s−40) s3 +3s 2 2 (42s+90 )(s+2) (s+2)(s3 +3s2 −12s−40)

2s +22s+36 s3 +3s2 −12s−40

3s +4s+2 s3 +3s2 −12s−40

= T        

(3.31)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 38 — #49

i

38

i

CHAPTER 3. ISO SYSTEMS

together with  VISO (s) =

3.2

U(s) CISO0



 , U(s) =

U1 (s) U2 (s)



 , CISO0

 X10 = X0 =  X20  . X30 (3.32)

ISO plant desired regime

The following definition clarifies the meaning of the nominal input control vector UN and of the nominal state vector XN with respect to a chosen or given disturbance vector function D(.) and the desired output function Yd (.) of the ISO plant (3.1), (3.2). Definition 44 A functional vector control-state pair [U*(.), X*(.)] is nominal for the ISO plant (3.1), (3.2) relative to the functional pair [D(.), Yd (.)] , which is denoted by [UN (.), XN (.)], if and only if [U(.), X(.)] = [U*(.), X*(.)] ensures that the corresponding real response Y(.) = Y*(.) of the plant obeys Y*(t) = Yd (t) all the time, [U*(.), X*(.)] = [UN (.), XN (.)] ⇐⇒ hY*(t) = Yd (t), ∀t ∈ T0 i . The nominal motion XN (.; XN 0 ; D; UN ), XN (0; XN 0 ; D; UN ) ≡ XN 0 , is the desired motion Xd (.; Xd0 ; D; UN ) of the ISO plant (3.1), (3.2) relative to the functional vector pair [D(.), Yd (.)] , for short: the desired motion of the system, Xd (.; Xd0 ; D; UN ) ≡ XN (.; XN 0 ; D; UN ), Xd (0; Xd0 ; D; UN ) ≡ Xd0 ≡ XN 0 .

(3.33)

Notice that the full system matrix [40, Section 11.2, pp. 192-199]   −B sI − A ∈ C(n+N )x(r+n) (3.34) U C is a rectangular matrix in general. Definition 44 and (3.1), (3.2) imply the following theorem:

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 39 — #50

i

3.2. ISO PLANT DESIRED REGIME

i

39

Theorem 45 In order for the vector control-state pair [U*(.), X*(.)] to be nominal for the ISO plant (3.1), (3.2) relative to the functional vector pair [D(.), Yd (.)] , [U*(.), X*(.)] = [UN (.), XN (.)], it is necessary and sufficient that it obeys the following equations: dX*(t) − AX*(t) = DD(t), ∀t ∈ T0 , dt U U*(t) + CX*(t) = Yd (t) − V D(t), ∀t ∈ T0 ,

−BU*(t) +

or equivalently in the complex domain,      X∗0 + DD(s) U*(s) −B sI − A . = Yd (s) − V D(s) X*(s) U C

(3.35) (3.36)

(3.37)

Let us consider the existence of the solutions of the equations (3.35), (3.36), or equivalently of (3.37). There are (n + r) unknown variables and (N +n) equations. The unknown variables are the entries of U*(s) ∈ Cr and of X*(s) ∈ Cn . There are (n + r) unknown variables and (N + n) equations. The unknown variables are the entries of U*(s) ∈ Cr and of X*(s) ∈ Cn . Claim 46 In order to exist a nominal functional vector control-state pair [UN (.), XN (.)] for the ISO plant (3.1), (3.2) relative to its desired response Yd (.) it is necessary and sufficient that N ≤ r. Then, the functional vector controlstate pair [UN (.), XN (.)] is nominal relative to the desired response Yd (.) of the plant (3.1), (3.2) in view of Theorem 45. Proof. The dimension of the matrix (3.34) is (n + N ) × (r + n) . It is well-known (e.g., [2, p. 115]) that for Equation (3.37) to have a solution it is necessary and sufficient that the rank of the matrix (3.34) is equal to n + N, which is possible if and only if n + N ≤ r + n, i.e., if and only if N ≤ r. Claim 46 resolves completely the problem of the existence of a nominal functional vector control-state pair [UN (.), XN (.)] for the ISO plant (3.1), (3.2) relative to the functional vector pair [D(.), Yd (.)]. Condition 47 The desired output response of the ISO plant (3.1), (3.2) is realizable, i.e., N ≤ r. The nominal control-state pair [UN (.), XN (.)] is known.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 40 — #51

i

40

i

CHAPTER 3. ISO SYSTEMS The ISO plant description in terms of the deviations (3.38), x = X − XN = X − Xd ,

(3.38)

(9.3), and (2.53), (2.55) (Section 2.2) reads: dx(t) = Ax(t) + Dd(t) + Bu(t), ∀t ∈ T0 , dt y(t) = Cx(t) + V d(t) + U u(t), ∀t ∈ T0 ,

3.3

(3.39) (3.40)

Exercises

Exercise 48 1. Select a physical ISO plant. 2. Determine its time domain ISO mathematical model.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 41 — #52

i

i

Chapter 4

EISO systems 4.1 4.1.1

EISO system mathematical model Time domain

A slightly more general class than the ISO systems (3.1), (3.2) is the family of the Extended Input-State-Output systems (EISO systems) described in terms of the total coordinates by dX(t) = AX(t) + D(µ) Dµ (t) + B(µ) Uµ (t) = AX(t) + P (µ) Iµ (t), ∀t ∈ T0 , dt A ∈ Rnxn , D(µ) ∈ Rnx(µ+1)d , B(µ) ∈ Rnx(µ+1)r , P (µ) ∈ Rnx(µ+1)M , (4.1) Y(t) = CX(t) + V D(t) + U U(t) = CX(t) + QI(t), ∀t ∈ T0 . C ∈ RN xn , V ∈ RN xd , U ∈ RN xr , Q ∈ RN xM .

(4.2)

The overall input mathematical data of the EISO system are the input vector I and the matrix P (µ) related to the extended input vector Iµ :  T I = IEISO = DT UT ∈Rd+r , M = d + r,   .. .. .. (µ) P = P0 . P1 . ... . Pµ ∈ Rn×(µ+1)M ,     .. (µ) (µ) .. (µ) nx(µ+1)(d+r) P = D .B ∈R , Q = V . U ∈RN x(d+r) ,     .. .. .. .. .. .. (µ) n×(µ+1)r (µ) B = B0 .B1 .....Bµ ∈ R , D = D0 .D1 .....Dµ ∈ Rn×(µ+1)d . (4.3) If µ = 0 then the EISO system (4.1), (4.2) becomes the ISO system (3.1), (3.2) (Section 3.1). In order to present one physical origin of the EISO 41 i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 42 — #53

i

42

i

CHAPTER 4. EISO SYSTEMS

system (4.1), (4.2) we discover it in the physical IO systems (2.15), (Section 2.1), as shown in the following: Theorem 49 The The IO system (4.2) by preserving distinguish the case which

EISO form (4.1), (4.2) of the IO system (2.15) (2.15) can be transformed into the EISO form (4.1), the physical meaning of all variables, where we should ν > 1 from the case ν = 1 in the IO system (2.15), for

νN = n, (4.4) with the following choice of the system physical substate vectors Xi of the system physical state vector X :   ν > 1 =⇒ Xi = Y(i−1) ∈ RN , ∀i = 1, 2, ..., ν, i.e.,           T T . . . . . . T . . . . . . T T T T (1)T (ν−1) X = X1 . X2 . ... . Xν = Y .Y . ... . Y =       = Yν−1 ∈ Rn ν = 1 =⇒ X1 = X = Y ∈ RN ,

(4.5)

and with the following matrices of the EISO form (4.1), (4.2) in terms of the matrices Ak , k = 0, 1, ..., ν, of the IO system (2.15):        

ON ON ON ... ON −A−1 ν A0

ν > 1 =⇒ A = IN ... ON ON ON ... ON ON ON ... IN ON ... ... ... ... ON ... ON IN −1 −1 −A−1 ν A1 ... −Aν Aν−2 −Aν Aν−1 −1 ν = 1 =⇒ A = −A1 A0 ∈ RN ×N ,

     ∈ RνN ×νN   

,

(4.6) P (µ)

     O(ν−1)N,(µ+1)M n×(µ+1)M , ν > 1,  ∈ R (µ) = , A−1 ν H   −1 (µ) N ×(µ+1)M A1 H ∈ R , ν = 1,

(4.7)

or, equivalently, P (µ)

Pinv

    O(ν−1)N,N (µ) , ν > 1, A−1 ν H IN =  (µ) , ν = 1, IN A−1 1 H     O(ν−1)N,N ∈ Rn×N , ν > 1, = IN  IN ∈ RN ×N , ν = 1,

 

,

(4.8)

,

(4.9)

   

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 43 — #54

i

4.1. EISO SYSTEM MATHEMATICAL MODEL

i

43

i.e., (µ) P (µ) = Pinv A−1 ∈ Rn×(µ+1)M ν H

C = [IN ON ON ON ... ON ] ∈ RN ×n , Q = ON,M ∈ RN ×M .

(4.10) (4.11)

Proof. of Theorem 49, Section 4.1. Let Equation (4.4) hold and let X be defined by (4.5) so that ν > 1 =⇒ X1 (t) = Y(t), (1)

X2 (t) = Y(1) (t) = X1 (t) (1)

X3 (t) = Y(2) (t) = X2 (t) . . . . (1)

Xν−1 (t) = Y(ν−2) (t) = Xν−2 (t) (1)

Xν (t) = Y(ν−1) (t) = Xν−1 (t) ν = 1 =⇒ X(t) = X1 (t) = Y(t).

(4.12)

We can solve the preceding equations for the derivatives: ν > 1 =⇒ (1) X1 (t) (1) X2 (t)

= X2 (t), = X3 (t),

. . . . (1) Xν−2 (t) (1) Xν−1 (t)

= Xν−1 (t) = Xν (t)

ν = 1 =⇒ (1) X1 (t)

= Y(1) (t).

(4.13)

Equations (4.12) and (4.13) transform IO Equation (2.1) into

ν > 1 =⇒ Aν X(1) ν (t) +

k=ν−1 X

Ak Xk+1 (t) = H (µ) Iµ (t),

k=0

ν = 1 =⇒

(1) A1 X1 (t)

. . + A0 X1 (t) = H (µ) Iµ ..(t) = H (1) I1 ..(t),

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 44 — #55

i

44

i

CHAPTER 4. EISO SYSTEMS

which implies the following due to detAν 6= 0 determined in (2.1): ν > 1 =⇒ "k=ν−1 # X (1) −1 (µ) µ .. Xν (t) = −Aν Ak Xk+1 (t) − H I .(t) , k=0

ν = 1 =⇒

(1) A1 X1 (t)

. + A0 X1 (t) = H (µ) Iµ ..(t),

i.e., ν > 1 =⇒   −1 −A−1 (1) ν A0 X1 (t) − Aν A1 X2 (t) − ... , Xν (t) = −1 (β) Iµ (t) ... − A−1 ν Aν−1 Xν (t) + Aν H ν = 1 =⇒ (1) −1 (µ) µ .. X1 (t) = −A−1 I .(t). 1 A0 X1 (t) + A1 H

(4.14)

This and (4.13) yield ν > 1 =⇒ (1) X1 (t) (1) X2 (t)

= X2 (t), = X3 (t), · ··

(1) Xν−2 (t) (1) Xν−1 (t)

= Xν−1 (t)

= Xν (t) ( ) −1 −A−1 ν A0 X1 (t) − Aν A1 X2 (t) − ... (1) , Xν (t) = . −1 (µ) Iµ ..(t) ... − A−1 ν Aν−1 Xν (t) + Aν H ν = 1 =⇒

(1) −1 (µ) µ .. X1 (t) = −A−1 I .(t), 1 A0 X1 (t) + A1 H

(4.15)

or,      ν > 1 =⇒     

(1)

X1 (t) (1) X2 (t) ... (1) Xν−2 (t) (1) Xν−1 (t) (1) Xν (t)

     =    

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 45 — #56

i

4.1. EISO SYSTEM MATHEMATICAL MODEL      =    

block  ON  ON   ON •  ...   ON A0

diag IN ON ON ... ON A1

|

I(ν−1)N − A−1 • ν ON ON ... ON IN ON ... ON ON IN ... ON ... ... ... ... ON ON ... IN A2 A3 ... Aν−1 {z



       

A

    +    |

ON,(µ+1)M ON,(µ+1)M ON,(µ+1)M ... ON,(µ+1)M (µ) A−1 ν H {z P

i

45           

X1 (t) X2 (t) ... Xν−2 (t) Xν−1 (t) Xν (t) | {z } X

    +    }

    µ I (t),    }

(1) −1 (µ) µ .. I .(t), ν = 1 =⇒ X1 (t) = −A−1 1 A0 X1 (t) + A1 H

which imply (4.6)-(4.11). Note 50 The substate vectors Xi and the state vector X composed of them and all defined by (4.5) have the full physical meaning. They are the system output vector Y and its derivatives. The EISO system (4.1), (4.2) determined by (4.4)-(4.11) retains the full physical sense as the original IO system (2.15). They have the same properties. The EISO form (4.1), (4.2) of the original IO system (2.15) differs from the well-known ISO form (3.1), (3.2), i.e., (C.2), (C.3) (Appendix C.1), of the IO system (2.15) for the preservation of the derivatives of the input vector in the state equation (4.1), which has not been accepted so far: Equation (3.1). The physical nature of the IO system (2.15) introduces the derivatives of the input vector in the state equation. The formal mathematical transformation given by Equations (C.4)-(C.10) (Section C.1) ignores the explicit action of the input vector derivatives on the physical state of the IO system (2.15). The existing formal mathematical transformation of the IO system (2.15) into the ISO form (3.1), (3.2) loses the physical sense if µ > 0 so that the chosen state variables and the state vector are physically meaningless. This book develops the state theory for the IO systems (2.15) by exploiting their EISO form (4.1), (4.2), (4.4)-(4.11) in order to preserve the full

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 46 — #57

i

46

i

CHAPTER 4. EISO SYSTEMS

physical sense of the original IO system (2.15). A useful tool to achieve this is the new simple compact calculus based on the compact notation   .. (ν−1)T T T .. (1)T .. Y .Y . ... . Y = Yν−1 , which enabled us to define the physical (and mathematical) state vector of the IO systems (2.15) in the form X = Yν−1 . (µ)

Note 51 The matrix Pinv (4.7), (4.8), (4.10) is the invariant submatrix of the matrix P (µ) . It is invariant relative to both all matrices Ai , i = 0, 1, 2, ..., ν, and all submatrices Hk of H (µ) , k = 0, 1, 2, ..., µ, of the original IO system (2.15). In other words, the matrix Pinv is independent of both all matrices Ai , i = 0, 1, 2, ..., ν, and all matrices Hk , k = 0, 1, 2, ..., µ. Note 52 Let ν > 1.Then O(ν−1)N,M is (ν − 1) N × M zero matrix. If and only if ν = 1, then the matrix O(ν−1)N,M becomes formally O0,M that does not exist. Then it should be simply omitted. Conclusion 53 For the existence of the (ν − 1) N × M zero matrix O(ν−1)N,M to exist it is necessary and sufficient that the natural number ν obeys ν > 1 : ∃O(ν−1)N,M ∈ R(ν−1)N ×M ⇐⇒ ν ∈ {2, 3, ..., n, ...} .

(4.16)

By referring to the well-known form of the solution of the ISO systems (3.1), (3.2) we easily show that the solution of the EISO system (4.1), (4.2) is determined by Z t X(t; t0 ; X0 ; Iµ ) = eA(t−t0 ) X0 + eA(t−τ ) P (µ) Iµ (τ ) dτ, (4.17) t0   Z t A(t−t0 ) A(t0 −τ ) (µ) µ =e X0 + e P I (τ ) dτ , ∀t ∈ T0 , (4.18) t0

or equivalently by Z

µ

t

X(t; t0 ; X0 ; I ) = Φ (t, t0 ) X0 + Φ (t, τ ) P (µ) Iµ (τ ) dτ = t0   Z t = Φ (t, t0 ) X0 + Φ (t0 , τ ) P (µ) Iµ (τ ) dτ , ∀t ∈ T0 ,

(4.19) (4.20)

t0

for Φ (t, t0 ) (3.3) (Section 3.1), i.e., Φ (t, t0 ) = Φ (t, 0) Φ−1 (t0 , 0) = eAt eAt0

−1

= eA(t−t0 ) ∈ Rn×n .

(4.21)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 47 — #58

i

4.1. EISO SYSTEM MATHEMATICAL MODEL

i

47

These equations and Equation (4.2) determine the EISO system response to the initial state vector X0 and to the extended input vector function Iµ (.): Z t µ Y(t; t0 ; X0 ; I ) = CΦ (t, t0 ) X0 + CΦ (t, τ ) P (µ) Iµ (τ ) dτ + QI(t) = t0

(4.22) 

Z

t



Φ (t0 , τ ) P (µ) Iµ (τ ) dτ + QI(t), ∀t ∈ T0 .

= CΦ (t, t0 ) X0 +

(4.23)

t0

4.1.2

Complex domain

The Laplace transform of the EISO system (4.1), (4.2) relative to D(t) and U(t) reads: ) ( (µ) (µ−1) + D(µ) Sd (s)D(s) − D(µ) Zd (s)Dµ−1 0 , sX(s) − X0 = AX(s) + (µ) (µ−1) +B(µ) Sr (s)U(s) − B(µ) Zr (s)Uµ−1 0 (4.24) Y(s) = CX(s) + V D(µ) (s) + V U(s).

(4.25)

These equations lead to: 

 . . (µ) S (µ) (s) ..B(µ) S (µ) (s) ..− D r d  X(s) = (sI − A)−1  .. . (µ−1) (µ−1) (µ) (µ) D Zd (s) . − B Zr (s) .. I   .. T .. (µ) µ−1T .. µ−1T .. T T T • D (s) . U (s) . D 0 . U0 . X0 = = FEISOIS (s) VEISO (s) ,          Y(s) =       

        

(4.26) .

(µ) C (sI − A) D(µ) Sd (s) + V .. .. . (µ) . C (sI − A)−1 B(µ) Sr (s) + U .. −1

.. (µ−1) . − C (sI − A)−1 D(µ) Zd (s) .. −1 (µ) (µ−1) . − C (sI − A) B Zr (s) .. . C (sI − A)−1

.. . .. .





    •    

       =      

  .. T .. µ−1T .. µ−1T .. T T T • D (s) . U (s) . D0 . U0 . X0 = FEISO (s) VEISO (s) ,

(4.27)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 48 — #59

i

48

i

CHAPTER 4. EISO SYSTEMS

where - FEISOIS (s) , FEISOIS (s) = (sI − A)−1 •  .. (µ) (µ) .. .. .. (µ) (µ) (µ) (µ−1) (µ) (µ−1) • D Sd (s) .B Sr (s) . − D Zd (s) . − B Zr (s) .I , 

(4.28) is the EISO system (4.1), (4.2) input to state (IS) full transfer function matrix, the inverse Laplace transform of which is the plant IS full fundamental matrix ΨEISOIS (t), ΨEISOIS (t) = L−1 {FEISOIS (s)} ,

(4.29)

- FEISO (s) . (µ) C (sI − A)−1 D(µ) Sd (s) + V ..  . .  . (µ) −1  . C (sI − A) B(µ) Sr (s) + U ..  . . (µ−1) −1 FEISO (s) =  (s) ..  .. − C (sI − A) D(µ) Zd  .  .. − C (sI − A)−1 B(µ) Z (µ−1) (s) ... r  .. −1 . C (sI − A) 

     ,    

(4.30)

is the EISO plant (4.1), (4.2) input to output (IO) full transfer function matrix FEISO (s) relative to the input pair [ D(µ) (t), U(t)] and the initial vectors D(µ) µ−1 , Uµ−1 and X0 . 0 0 The inverse Laplace transform of FEISO (s) is the plant IO full fundamental matrix ΨEISO (t) [40], ΨEISO (t) = L−1 {FEISO (s)} ,

(4.31)

- pEISO (s) , for short p (s) , p (s) = pEISO (s) = det (sI − A) ,

(4.32)

is the characteristic polynomial of the EISO plant (4.1), (4.2) and the denominator polynomial of all its transfer function matrices, - GEISOD (s) , for short GD(µ) (s), (µ)

GD(µ) (s) = GEISOD (s) = C (sI − A)−1 D(µ) Sd (s) + V = h i (µ) = p−1 (s) Cadj (sI − A)−1 D(µ) Sd (s) + p (s) V ,

(4.33)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 49 — #60

i

4.1. EISO SYSTEM MATHEMATICAL MODEL

i

49

is the EISO plant (4.1), (4.2) transfer function matrix relating the output to the disturbance D(µ) , - GEISOU (s) , for short GU (s), GU (s) = GEISOU (s) = C (sI − A)−1 B(µ) Sr(µ) (s) + U = h i = p−1 (s) Cadj (sI − A) B(µ) Sr(µ) (s) + p (s) U ∈ CN ×r ,

(4.34)

is the EISO plant (4.1), (4.2) transfer function matrix relating the output to the control U, - GEISOD0 (s) , for short GD0 (s), (µ−1)

GD0 (s) = GEISOD0 (s) = −C (sI − A)−1 D(µ) Zd (s) = h i (µ−1) = −p−1 (s) Cadj (sI − A) D(µ) Zd (s) ,

(4.35)

is the EISO plant (4.1), (4.2) transfer function matrix relating the output to the initial extended disturbance vector D(µ) µ−1 , 0 - GEISOU0 (s) , for short GU0 (s), GU0 (s) = GEISOU0 (s) = −C (sI − A)−1 B(µ) Zr(µ−1) (s) = h i = −p−1 (s) Cadj (sI − A) B(µ) Zr(µ−1) (s) ,

(4.36)

is the EISO plant (4.1), (4.2) transfer function matrix relating the output , to the extended initial control Uµ−1 0 - GEISOX0 (s), for short GX0 (s) , GEISOX0 (s) = GX0 (s) = C (sI − A)−1 = = p−1 (s) Cadj (sI − A) ,

(4.37)

is the EISO plant (4.1), (4.2) transfer function matrix relating the output to the initial state X0 . - VEISO (s) and CEISO0 ,     IEISO (s) D(s) VEISO (s) = , I (s) = IEISO (s)= , CEISO0 U(s)   T T  T µ−1 µ−1 T CEISO0 = , (4.38) D0 U0 X0 are the Laplace transform of the action vector VEISO (t) and the vector CEISO0 of all initial conditions, respectively.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 50 — #61

i

50

i

CHAPTER 4. EISO SYSTEMS

If we consider the whole extended vectors Dµ (t) and Uµ (t) as the system input vectors then other forms of the system transfer function matrices result. To show that let   . . . V (µ) = V .. ON,d .. · · · .. ON,d ∈ RN ×(µ+1)d , (4.39)   .. .. .. (µ) (4.40) U = U . ON,r . · · · . ON,r ∈ RN ×(µ+1)r . The Laplace transform of the EISO system (4.1), (4.2) relative to the extended vectors D(µ)µ (t) and Uµ (t) reads: sX(s) − X0 = AX(s) + D(µ) L {Dµ (t)} + B(µ) L {Uµ (t)} , Y(s) = CX(s) + V (µ) L {Dµ (t)} + U (µ) L {Uµ (t)} , so that the Laplace transform Y(s) of the system output vector Y(t) can be set also in the following form:     .   −1 (µ) . (µ)   C (sI − A) D + V  .     .    .. C (sI − A)−1 B(µ) + U (µ) ...  •      = Y(s) =  ..     . C (sI − A)−1   T    . . • L {Dµ (t)}T .. L {Uµ (t)}T .. XT0 = FEISOU µ (s) VEISOU µ (s) ,

(4.41)

where: - FEISOU µ (s) ,    FEISOµ (s) =  

−1

D(µ)

(µ)

C (sI − A) +V .. . C (sI − A)−1 B(µ) + U (µ) .. . C (sI − A)−1

 .. .  ..  , .  

(4.42)

is the EISO plant (4.1), (4.2) input to output (IO) full transfer function matrix FEISOµ (s) relative to the extended input pair [ D(µ)µ (t), Uµ (t)] and the initial state vector X0 , - GEISODµ (s) , for short GD(µ)µ (s), GD(µ)µ (s) = GEISODµ (s) = C (sI − A)−1 D(µ) + V (µ) = h i = p−1 (s) Cadj (sI − A)−1 D(µ) + p (s) V (µ) ∈ CN ×d , | {z }

(4.43)

ND(µ) (s)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 51 — #62

i

i

4.1. EISO SYSTEM MATHEMATICAL MODEL

51

is the EISO plant (4.1), (4.2) transfer function matrix relating the output to the extended disturbance D(µ)µ , - and GEISOU µ (s), for short GU µ (s), GU µ (s) = GEISOU µ (s) = C (sI − A)−1 B(µ) + U (µ) = h i = p−1 (s) Cadj (sI − A) B(µ) + p (s) U (µ) ∈ CN ×(µ+1)r =⇒ GU (s) = GU µ (s) Sr(µ) (s) ,

(4.44) (4.45) (4.46)

is the EISO plant (4.1), (4.2) transfer function matrix GEISOU µ (s), for short GU µ (s), relating the output to the extended control Uµ . Theorem 54 Properties of the EISO system (4.1), (4.2), (4.4)(4.11) The EISO system (4.1), (4.2), (4.4)-(4.11) possesses the following properties:   .. I) a) If ν = 1 and 0 ≤ µ ≤ 1, the matrix (sI − A) . P has inv

the full rank n for every complex number s and for every matrix A (4.6) including every eigenvalue si (A) of the matrix A (4.6), and for every matrix A (4.6):   .. rank (sI − A) . Pinv = n, ∀ (s, A) ∈ C × Rn×n . (4.47)   .. (µ) is invariant and full relative to The rank n of the matrix (sI − A) . P every (s, A) ∈ C × Rn×n , A given by (4.6).



 .. (µ) b) If ν = 1 and 0 ≤ µ ≤ 1 then for the matrix (sI − A) . P to

have the full rank n for every complex number s including every eigenvalue si (A) of the matrix A, and for every matrix A , it is necessary and sufficient that the matrix H (1) has the full rank n = N : rankH (1) = n = N.

(4.48)  .. a) If ν > 1 and 0 ≤ µ < ∞, then the matrix (sI − A) . Pinv 

II)

has the full rank n for every complex number s including every eigenvalue si (A) of the matrix A and for every matrix A ∈ Rn×n :   .. rank (sI − A) . Pinv = n, ∀ (s, A) ∈ C × Rn×n . (4.49)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 52 — #63

i

52

i

CHAPTER 4. EISO SYSTEMS 

. The rank of the matrix (sI − A) .. Pinv

 is invariant and full relative to

every (s, A) ∈ C × Rn×n .



. b) If ν > 1 and 0 ≤ µ < ∞, then for the matrix (sI − A) .. P (µ)



to have the full rank n for every complex number s including every eigenvalue si (A) of the matrix A and for every matrix A ∈ Rn×n :   .. (µ) rank (sI − A) . P = n, ∀ (s, A) ∈ C × Rn×n . (4.50) it is necessary and sufficient that the extended matrix H (µ) has the full rank N, rankH (µ) = N. (4.51) Proof. The matrix A of the EISO system (4.1), (4.2), (4.4)-(4.11) is determined by (4.6) . I)  a) Let ν = 1 and  0 ≤ µ ≤ 1 due to (2.1). Then N = n and the .. matrix (sI − A) . P has the following form due to (4.9): n

inv

    .. .. −1 rank (sIn − A) . Pinv = rank sIn + A1 A0 .In = = rankIn = n =⇒ ∀s = si (A) ∈ C, ∀i = 1, 2, ..., n; i.e., ∀s ∈ C, ∀Ak ∈ Rn×n , k = 0, 1. This proves the statement under I-a). b) Let ν = 1 and 0 ≤ µ ≤ 1 due to (2.1). Let the statement under I-b) we use the matrix (sIn − A)

(4.48)be valid. For .. (µ) .P that has the

following form due to (4.8):     . .. −1 (µ) (sIn − A) .. P (µ) = sIn + A−1 A . A H =⇒ 0 1 1     .. (µ) .. −1 (µ) −1 rank (sIn − A) . P = rank sIn + A1 A0 . A1 H .   .. (µ) Necessity. Let A0 = On and s = 0 in (sIn − A) . P :     .. (µ) .. −1 (µ) (0In − On ) . P = On . A1 H =⇒

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 53 — #64

i

i

4.1. EISO SYSTEM MATHEMATICAL MODEL

53

Let the matrix 

   .. −1 (µ) .. (µ) = On . A1 H (sIn − A) . P

have the full rank n :     .. (µ) .. −1 (µ) n = rank (sIn − A) . P = rank On . A1 H = (µ) = rankA−1 = rankH (µ) due to detA−1 1 H 1 6= 0.

The rank of H (µ) equals n. Equation (4.48) holds, which proves its necessity. Sufficiency. Let rankH (µ) = N = n due to (4.48). This and detA−1 1 6= 0 yield (µ) = N = n = rankH (µ) = rankA−1 1 H    .. (µ) .. −1 (µ) −1 = rank (sIn − A) . P , = rank sIn + A1 A0 . A1 H



0 ≤ µ ≤ 1, ∀s ∈ C, ∀Ak ∈ Rn×n , k = 0, 1,   .. (µ) This proves that for the rank of the matrix (sIn − A) . P to be full, i.e., to be equal to n, it is sufficient that the rank of the matrix H (1) is full, i.e., equal to n. Hence,   .. (µ) (1) rankH = n =⇒ rank (sIn − A) . P ≡ n. This proves the statement under I-b). II) Let ν > 1 and µ ≥ 0, µ < ∞. . a) The matrix (sI − A) .. P n

 inv

has the following form due to

(4.9):  ν > 1, µ ≥ 0 =⇒  . = sIn − A ..     =   

sIN −IN ON ON sIN −IN ON ON sIN ... ... ... ON ON ON −1 A −1 A A−1 A A A 0 1 2 ν ν ν

 .. (sIn − A) . Pinv =  O(ν−1)N,N = IN

... ON ON ... ON ON ... ON ON ... ... ... ... sIN −IN ... A−1 A sI + A−1 ν−2 N ν ν Aν−1

ON ON ON ... ON IN

    .   

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 54 — #65

i

54

i

CHAPTER 4. EISO SYSTEMS

This implies   . rank (sIn − A) .. Pinv =    .. O(ν−1)N,N = rank sIn − A . IN     = rank    

    = rank    

sIN −IN ON sIN ON ON ... ... ON ON −1 A A−1 A A 0 1 ν ν −IN ON sIN −IN ON sIN ... ... ON ON −1 A A−1 A A 1 2 ν ν

... ON ON ... ON ON ... ON ON ... ... ... ... sIN −IN ... A−1 A sI + A−1 ν−2 N ν ν Aν−1 ... ON ... ON ... ON ... ... ... sIN ... Aν−1 Aν−2 −sIN

ON ON ON ... ON IN

ON ON ON ... −IN + A−1 ν Aν−1

ON ON ON ... ON IN

    =   

    =   

= νN = n, ∀ (s, A) ∈ C × Rn×n .   . This proves the invariance of the matrix (sIn − A) .. Pinv relative to every (s, A) ∈ C × Rn×n . The first statement under II) is true. b) Let ν > 1 and 0 ≤ µ < ∞. Necessity. Let   Ak = ON, ∀k = 0, 1, ..., ν − 1 and s = 0 in .. (µ) . (sI − A) .P and let rank (sI − A) .. P (µ) = n : n

n

 

    .. (µ) (sIn − A) . P =   

ON ON ON ... ON ON

−IN ON ON ... ON ON

... ... ... ... ... ...

ON ON ON ... ON ON

ON ON ON ... −IN ON

ON ON ON ... ON (µ) A−1 ν H

     =⇒   



 .. (µ) n = νN = rank (sIn − A) . P =

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 55 — #66

i

4.1. EISO SYSTEM MATHEMATICAL MODEL ON −IN  ON ON   ON ON = rank   ... ...   ON ON ON ON  −IN ...  ON ...   ON ... = rank   ... ...   ON ... ON ... 

... ... ... ... ... ...

ON ON ON ... ON ON

ON ON ON ... ON ON

ON ON ON ... −IN ON

ON ON ON ... −IN ON

i

55

 ON  ON   ON =  ...   ON (µ) A−1 ν H  ON  ON   ON =  ...   ON

(µ) A−1 ν H

(µ) = (ν − 1) N + rankA−1 =⇒ ν H (µ) N = rankA−1 = rankH (µ) due to detA−1 ν H ν .

This proves the validity of the condition (4.51), i.e., its necessity. Sufficiency. Let the condition (4.51) hold. The matrix     .. (µ) .. O(ν−1)N,(µ+1)M (sIn − A) . P = sIn − A . (µ) A−1 ν H has the following form in view  sIn − A     =   

sIN −IN ON ON sIN −IN ON ON sIN ... ... ... ON ON ON −1 −1 A−1 ν A0 Aν A1 Aν A2

of (4.7)-(4.9): .. .

O(ν−1)N,(µ+1)M (µ) A−1 ν H

 =

... ON ON ... ON ON ... ON ON ... ... ... ... sIN −IN ... Aν−2 sIN + A−1 ν Aν−1

ON,(µ+1)M ON,(µ+1)M ON,(µ+1)M ... ON,(µ+1)M (µ) A−1 ν H

    .   

Having in mind that for the matrix   .. O(ν−1)N,(µ+1)M (sIn − A) . (µ) A−1 ν H

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 56 — #67

i

56

i

CHAPTER 4. EISO SYSTEMS

to have the full rank n for every eigenvalue si (A) of the matrix si (A) it is sufficient that its following submatrix has the full rank n:        

−IN ON sIN −IN ON sIN ... ... ON ON −1 A−1 ν A1 Aν A2

... ON ON ... ON ON ... ON ON ... ... ... ... sIN −IN −1 ... A−1 ν Aν−2 sIN + Aν Aν−1

ON,(µ+1)M ON,(µ+1)M ON,(µ+1)M ... ON,(µ+1)M (µ) A−1 ν H

       

which is true because the matrix H (µ) has the rank N due to rankH (µ) = N (µ) due to detA−1 6= 0: (4.51) and implies N = rankH (µ) = rankA−1 ν ν H    . O(ν−1)N,(µ+1)M rank sIn − A .. = (µ) A−1 ν H      = rank     

−IN sIN ON ... ON

ON −IN sIN ... ON

−1 A−1 ν A1 Aν A2



−IN  sIN  = rank   ON  ... ON

ON −IN sIN ... ON

.. .. .. .. ..

ON ON ON ... sIN

.

A−1 ν Aν−2 ... ... ... ... ...

ON ON ON ... sIN

ON ON ON ... −IN sIN + +A−1 ν Aν−1 

ON ON ON ... −IN

ON,(µ+1)M ON,(µ+1)M ON,(µ+1)M ... ON,(µ+1)M (µ) A−1 ν H

     =    

  (µ)  + rankA−1 = ν H  

(µ) = (ν − 1) N + rankA−1 = (ν − 1) N + N = νN = n, ν H

∀si (A) ∈ C, i.e., ∀s ∈ C, ∀A ∈ Rn ,

(4.52)

This proves the second statement under II) and completes the proof. Comment 55 If detAν 6= 0 then the statements of this theorem largely simplify the verification of the rank of the complex valued matrices     .. .. (µ) (sI − A) . Pinv and (sI − A) . P .

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 57 — #68

i

4.1. EISO SYSTEM MATHEMATICAL MODEL

i

57

  .. While the rank of the matrix (sI − A) . Pinv is invariantly equal to n,   .. (µ) the rank of the matrix (sI − A) . P is not invariant. (µ) is full, rankH (µ) = N , then the However, if the rank of the matrix  H . rank of the matrix (sI − A) .. P (µ) is independent of both A and s ∈ C,

i.e., it is also invariant. Example 56 Let the EISO system (4.1), (4.2) be defined by 

dX1 dt dX2 dt dX3 dt





 −2 0 1   =  3 2 2  0 4 −3 {z }| | {z } | A

X(1)

 X1 X2 + X3 {z } X



  " #   7 5 0 1 (1) U U 1 1 + 10 −6  +  1 −1  (1) , U2 U 2 −4 3 −2 2 | {z } } | {z } U | {z }| {z (1) B0

B1

(4.53)

I



    X1 6 7 U1 Y1 −1 1 4   X2 + = .. 5 8 U2 Y2 0 2 3 X | {z }| {z } {z } | {z } | 3 | {z } C U Y U 







(4.54)

X

In this example n = 3,  (1) S2 (s) = s0 I2

M = 2, µ = 1, r = 2.    .. 1 T O2 (1−1) . s I2 , Z 2 (s) = I2

The extended matrix P (µ)(1) and I1 follow from (4.53):  

B

(1)



   7 5 0 1 U  1   10 −6 1 −1 , U = = = (1) U  −4 3 −2 2

U1 U2 (1) U1 (2) U2

   . 

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 58 — #69

i

58

i

CHAPTER 4. EISO SYSTEMS

The resolvent matrix (sI3 − A)−1 : −1 s+2 0 −1 =  −3 s − 2 −2  = 0 −4 s + 3 

(sI3 − A)−1   =

s2 +s−14 s3 +3s2 −12s−40 3s+9 s3 +3s2 −12s−40 12 (s3 +3s2 −12s−40)

4 s3 +3s2 −12s−40 2 s +5s+6 s3 +3s2 −12s−40 4s+8 s3 +3s2 −12s−40

s−2 s3 +3s2 −12s−40 2s+7 s3 +3s2 −12s−40 s2 −4 s3 +3s2 −12s−40

 (4.55)

 

Equation (4.30) yields: −1



FEISOIS (s) = (sI − A)

. (1) B(1) S2 (s) ..

−B

(1)

. (0) Z2 (s) ..I3

 ,

for  1 7 5 0 1 0 (1) B(1) S2 (s) =  10 −6 1 −1  s −4 3 −2 2 0

  0 7 5+s 1 =  10 + s −6 − s  , 0 −4 − 2s 3 + 2s s





0 7 5 0 1  0 (0) B(1) Z2 (s) =  10 −6 1 −1    1 −4 3 −2 2 0 

  FISOIS (s) =  



s2 +s−14 s3 +3s2 −12s−40 3s+9 s3 +3s2 −12s−40 12 (s3 +3s2 −12s−40)

   0 0 1 0   =  1 −1  , 0  −2 2 1

4

s−2

s3 +3s2 −12s−40 s2 +5s+6 s3 +3s2 −12s−40

s3 +3s2 −12s−40

4s+8 s3 +3s2 −12s−40

7 5+s 0  10 + s −6 − s 1 • −4 − 2s 3 + 2s −2  . = GEISOISU (s) .. GEISOISU0 (s)

2s+7 s3 +3s2 −12s−40 s2 −4 s3 +3s2 −12s−40

 1 1 0 0 −1 0 1 0  = 2 0 0 1  .. . GEISOISX0 (s) ,

  •

(4.56)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 59 — #70

i

i

4.1. EISO SYSTEM MATHEMATICAL MODEL

59

where s2 +s−14 s3 +3s2 −12s−40

  GEISOISU (s) = 

3s+9 s3 +3s2 −12s−40 12 (s3 +3s2 −12s−40)

4

s−2

s3 +3s2 −12s−40 s2 +5s+6 s3 +3s2 −12s−40

s3 +3s2 −12s−40

4s+8 s3 +3s2 −12s−40

2s+7 s3 +3s2 −12s−40 s2 −4 s3 +3s2 −12s−40

  •



 7 5+s •  10 + s −6 − s  =⇒ −4 − 2s 3 + 2s   GEISOISU (s) = 

  GEISOISU0 (s) = 

75s2 +11s−42 s3 +3s2 −12s−40 s3 +15s2 +77s+123 s3 +3s2 −12s−40 −2s3 +56s+180 s3 +3s2 −12s−40

s2 +s−14 s3 +3s2 −12s−40 3s+9 s3 +3s2 −12s−40 12 s3 +3s2 −12s−40

s3 +8s2 −8s−82 s3 +3s2 −12s−40 −s3 −7s2 −10s+79 s3 +3s2 −12s−40 2s3 −s2 −28s s3 +3s2 −12s−40

  ,

4

s−2

s3 +3s2 −12s−40 s2 +5s+6 s3 +3s2 −12s−40

s3 +3s2 −12s−40

4s+8 s3 +3s2 −12s−40

2s+7 s3 +3s2 −12s−40 s2 −4 s3 +3s2 −12s−40

(4.57)

  •



 0 1 •  1 −1  =⇒ −2 2   GEISOISU0 (s) = 

  GEISOISXo (s) = 

−2s s3 +3s2 −12s−40 s2 +s−8 s3 +3s2 −12s−40 −2s2 +4s+16 s3 +3s2 −12s−40

s2 +s−14 s3 +3s2 −12s−40 3s+9 s3 +3s2 −12s−40 12 s3 +3s2 −12s−40

s2 +3s−22 s3 +3s2 −12s−40 −s2 +2s+17 s3 +3s2 −12s−40 2s2 −4s−4 s3 +3s2 −12s−40

4 s3 +3s2 −12s−40 2 s +5s+6 s3 +3s2 −12s−40 4s+8 s3 +3s2 −12s−40

  ,

(4.58)

s−2 s3 +3s2 −12s−40 2s+7 s3 +3s2 −12s−40 s2 −4 s3 +3s2 −12s−40



s−2 s3 +3s2 −12s−40 2s+7 s3 +3s2 −12s−40 s2 −4 s3 +3s2 −12s−40



 •

1 0 0 • 0 1 0 =⇒ 0 0 1   GEISOISXo (s) = 

s2 +s−14 s3 +3s2 −12s−40 3s+9 s3 +3s2 −12s−40 12 s3 +3s2 −12s−40

4 s3 +3s2 −12s−40 2 s +5s+6 s3 +3s2 −12s−40 4s+8 s3 +3s2 −12s−40

 . (4.59)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 60 — #71

i

60

i

CHAPTER 4. EISO SYSTEMS

Finally, FEISOIS (s) =           =           

T s3 +8s2 −8s−82 s3 +3s2 −12s−40  −s3 −7s2 −10s+79   s3 +3s2 −12s−40  2s3 −s2 −28s s3 +3s2 −12s−40  T s2 +3s−22 −2s s3 +3s2 −12s−40 s3 +3s2 −12s−40   s2 +s−8 −s2 +2s+17  s3 +3s 2 −12s−40 s3 +3s2 −12s−40  −2s2 +4s+16 12−4ss−8 s3 +3s2 −12s−40 s3 +3s2 −12s−40 4 s−2 s2 +s−14 s3 +3s2 −12s−40 s3 +3s2 −12s−40 s3 +3s2 −12s−40 2 3s+9 s +5s+6 2s+7 s3 +3s2 −12s−40 s3 +3s2 −12s−40 s3 +3s2 −12s−40 12 4s+8 s2 −4 s3 +3s2 −12s−40 s3 +3s2 −12s−40 s3 +3s2 −12s−40 

T

75s2 +11s−42 s3 +3s2 −12s−40 s3 +15s2 +77s+123 s3 +3s2 −12s−40 −2s3 +56s+180 s3 +3s2 −12s−40

T  

                 

(4.60)

Equation (4.27) yields:   .. .. FEISO (s) = GEISOU (s) . GEISOU0 (s) . GEISOX0 (s) =                 •     |                                    •    |

 75s2 +11s−42 s3 +3s2 −12s−40 s3 +15s2 +77s+123 s3 +3s2 −12s−40 −2s3 +56s+180 s3 +3s2 −12s−40

−1 1 4 0 2 3

 •

s3 +8s2 −8s−82 s3 +3s2 −12s−40 −s3 −7s2 −10s+79 s3 +3s2 −12s−40 2s3 −s2 −28s s3 +3s2 −12s−40

   +

6 7 5 8



{z

GEISOI (s)



         •

−1 1 4 0 2 3

3s+9 s3 +3s2 −12s−40 12 s3 +3s2 −12s−40

12−4ss−8 s3 +3s2 −12s−40

        

{z

}

GEISOII0 (s)

 s2 +s−14 s3 +3s2 −12s−40



s2 +3s−22 s3 +3s2 −12s−40 −s2 +2s+17 s3 +3s2 −12s−40

−2s s3 +3s2 −12s−40 2 s +s−8 s3 +3s2 −12s−40 −2s2 +4s+16 s3 +3s2 −12s−40

|

T



−1 1 4 0 2 3

 •

4

s−2

s3 +3s2 −12s−40 s2 +5s+6 s3 +3s2 −12s−40

s3 +3s2 −12s−40

4s+8 s3 +3s2 −12s−40

{z

GEISOIX0 (s)

2s+7 s3 +3s2 −12s−40 s2 −4 s3 +3s2 −12s−40

  

T T                }            .        T                  } 

(4.61)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 61 — #72

i

i

4.1. EISO SYSTEM MATHEMATICAL MODEL

61

The system transfer function matrices and the vector VISO (s) are, in view of (4.34)-(3.20), (Subsection 3.1.2):



     =    •

GEISOU (s) =  −1 1 4 • 0 2 3

75s2 +11s−42 s3 +3s2 −12s−40 s3 +15s2 +77s+123 s3 +3s2 −12s−40 −2s3 −28s−72 (s+2)(s3 +3s2 −12s−40)

" GEISOU (s) =

       =⇒ 6 7    + 5 8

s3 +8s2 −8s−82 s3 +3s2 −12s−40 −s3 −7s2 −10s+79 s3 +3s2 −12s−40 2s3 +11s2 +60s−48 s3 +3s2 −12s−40

−7s3 −60s2 +284s+42+885 + s3 +3s2 −12s−40 8s3 +27s2 +70s+246 +5 s3 +3s2 −12s−40



6s3 −19s2 −114s+161 +7 s3 +3s2 −12s−40 4s3 −17s2 −104s+1582 +8 s3 +3s2 −12s−40

6

# , (4.62)



     GEISOU0 (s) =     •

−2s s3 +3s2 −12s−40 s2 +s−8 s3 +3s2 −12s−40 −2s2 +4s+16 s3 +3s2 −12s−40

" GEISOU0 (s) =



     =    •

s2 +s−14 s3 +3s2 −12s−40 3s+9 s3 +3s2 −12s−40 12 s3 +3s2 −12s−40

" GEISOX0 (s) =

−1 1 4 0 2 3

−7s2 +19s+56 s3 +3s2 −12s−40 −4s2 +14s+32 s3 +3s2 −12s−40







s2 +3s−22 s3 +3s2 −12s−40 −s2 +2s+17 s3 +3s2 −12s−40 2s2 −4s−4 s3 +3s2 −12s−40

     =⇒    

6s2 −17s+23 s3 +3s2 −12s−40 4s2 −8s+22 s3 +3s2 −12s−40

GEISOX0 (s) =  −1 1 4 • 0 2 3 4 s3 +3s2 −12s−40 2 s +5s+6 s3 +3s2 −12s−40 4s+8 s3 +3s2 −12s−40

−s2 +5s+71 s3 +3s2 −12s−40 6s+18+36 s3 +3s2 −12s−40

# ,

(4.63)

 s−2 s3 +3s2 −12s−40 2s+7 s3 +3s2 −12s−40 s2 −4 s3 +3s2 −12s−40

s2 +21s+34 s3 +3s2 −12s−40 2s2 +22s+36 s3 +3s2 −12s−40

     =⇒    

4s2 +3s−7 s3 +3s2 −12s−40 3s2 +4s+2 s3 +3s2 −12s−40

# . (4.64)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 62 — #73

i

62

i

CHAPTER 4. EISO SYSTEMS

Altogether,   . . FEISO (s) = GEISOU (s) .. GEISOU0 (s) .. GEISOX0 (s) = T GTEISOU (s) =  GTEISOU0  = GTEISOX0 

 "           

#T T

3 2 −114s+161 −7s3 −60s2 +284s+42+885 + 6 6ss−19s +7 3 +3s2 −12s−40 s3 +3s2 −12s−40 8s3 +27s2 +70s+246 4s3 −17s2 −104s+1582 +5 +8 s3 +3s2 −12s−40 s3 +3s2 −12s−40 " #T 2 2 −7s +19s+56 6s −17s+23 s3 +3s2 −12s−40 s3 +3s2 −12s−40 2 4s2 −8s+22 −4s +14s+32 s3 +3s2 −12s−40 s3 +3s2 −12s−40 " #T −s2 +5s+71 s2 +21s+34 4s2 +3s−7 s3 +3s2 −12s−40 s3 +3s2 −12s−40 s3 +3s2 −12s−40 6s+18+36 2s2 +22s+36 3s2 +4s+2 s3 +3s2 −12s−40 s3 +3s2 −12s−40 s3 +3s2 −12s−40

          

(4.65)

together with  VEISOU (s) =

U(s)



 , U(s) =

CEISOU 0

  CEISO0 =

4.2

U10 X0



  =  

U10 U20 X10 X20 X30

U1 (s) U2 (s)

 ,

   .  

(4.66)

EISO plant desired regime

Definition 44, (Section 3.2), slightly changes its formulation as follows. Definition 57 A functional vector control-state pair [U*(.), X*(.)] is nominal for the EISO plant (4.1), (4.2) relative to the functional pair [D(.), Yd (.)] , which is denoted by [UN (.), XN (.)], if and only if [U(.), X(.)] = [U*(.), X*(.)] ensures that the corresponding real response Y(.) = Y*(.) of the plant obeys Y*(t) = Yd (t) all the time, [U*(.), X*(.)] = [UN (.), XN (.)] ⇐⇒ hY*(t) = Yd (t), ∀t ∈ T0 i .

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 63 — #74

i

i

4.2. EISO PLANT DESIRED REGIME

63

The nominal motion XN (.; XN 0 ;Dµ ; UµN ), XN (0; XN 0 ;Dµ ; UµN ) ≡ XN 0 , is the desired motion Xd (.; Xd0 ;Dµ ; UµN ) of the EISO plant (4.1), (4.2) relative to the functional vector pair [D(.), Yd (.)] , for short: the desired motion of the system, Xd (.; Xd0 ; Dµ ; UµN ) ≡ XN (.; XN 0 ; Dµ ; UµN ), Xd (0; Xd0 ; Dµ ; UµN ) ≡ Xd0 ≡ XN 0 .

(4.67)

Definition 57 and the system description (4.1), (4.2) imply the following theorem: Theorem 58 In order for the functional vector pair [U*(.), X*(.)] to be nominal plant (4.1), (4.2) relative to the functional vector  for the EISO  pair D(µ) (.), Yd (.) , [U*(.), X*(.)] = [UN (.), XN (.)], it is necessary and sufficient that it obeys the following equations: dX*(t) − AX*(t) = D(µ) Dµ (t), ∀t ∈ T0 , dt U U*(t) + CX*(t) = Yd (t) − V D(t), ∀t ∈ T0 ,

−B(µ) U*µ (t) +

(4.68) (4.69)

or equivalently, "

  X* + = 0

(µ)

−B(µ) Sr (s) sI − A U C (

(µ−1) B(µ) Zr (s)Uµ−1 0

#

+ D(µ)

U*(s) X*(s)

 =

(µ)

Sd (s)D(s)− (µ−1) −Zd (s)Dµ−1 0

)   .

(4.70)

Yd (s) − V D(s) This theorem opens the problem of the conditions for the existence of the solutions of the equations (4.68), (4.69), or equivalently of (4.70). There are (r + n) unknown variables U*(s) ∈ Cr and X*(s) ∈ Cn and (N + n) equations so that the following holds: Claim 59 In order to exist a nominal functional vector pair [UN (.), XN (.)] for the EISO system (4.1), (4.2) relative to the functional vector pair [D(.), Yd (.)] it is necessary and sufficient that N ≤ r.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 64 — #75

i

64

i

CHAPTER 4. EISO SYSTEMS

The proof of this Claim is analogous to the proof of Claim 46 (Section 3.2). Claim 59 provides the full solution to the problem of the existence of a nominal functional vector pair [UN (.), XN (.)] for the EISO plant (4.1), (4.2) relative to the functional vector pair [D(.), Yd (.)]. Condition 60 The desired output response of the EISO system (4.1), (4.2) is realizable, i.e., N ≤ r. The nominal control-state pair [UN (.), XN (.)] is known. The EISO plant description in terms of the deviations (3.38), (2.53), (9.3) and (2.55) reads: dx(t) = Ax(t) + D(µ) dµ (t) + B(µ) uµ (t), ∀t ∈ T0 , dt y(t) = Cx(t) + V d(t) + U u(t), ∀t ∈ T0 .

4.3

(4.71) (4.72)

Exercises

Exercise 61 1. Select a physical EISO plant. 2. Determine its time domain EISO mathematical model.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 65 — #76

i

i

Chapter 5

HISO systems 5.1 5.1.1

HISO system mathematical model Time domain

The linear Higher order Input-State-Output (HISO) (dynamical, control) systems have not been studied so far. Their mathematical models contain the α-th order linear differential state vector equation (5.1) and the linear algebraic output vector equation (5.2), A(α) Rα (t) = D(µ) Dµ (t) + B (µ) Uµ (t) = H (µ) Iµ (t), ∀t ∈ T0 , A(α) ∈ Rρx(α+1)ρ , Rα ∈ R(α+1)ρ , D(µ) ∈Rρx(µ+1)d , B (µ) ∈Rρx(µ+1)r ,    T (µ) (µ) .. (µ) ρx(µ+1)(d+r) µ µ T .. µ T H = D .B ∈R , I = (D ) . (U ) ∈R(µ+1)(d+r) , (5.1) Y(t) = R(α) Rα (t) + V D(t) + U U(t) = R(α) Rα (t) + QI(t), ∀t ∈ T0 ,   .. N xd N xr V ∈ R , U ∈R , Q = V . U ∈RN x(d+r) ,   . . . . R(α) = R0 ..R1 ......Rα−1 ..ON,ρ , Rα = ON,ρ . (5.2) The zero matrix value of Ryα , Ryα = ON,ρ , ensures that the highest derivative R(α) of the vector R does not act on the system output vector Y. The output vector Y depends only linearly and algebraically on the state vector S = Rα−1 ∈ Rαρ and on the output vector I. It does not depend on the vector R(α) because it does not depend on the whole extended vector Rα due to Ryα = ON,ρ . 65 i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 66 — #77

i

66

i

CHAPTER 5. HISO SYSTEMS

Note 62 The state vector SHISO of the HISO system (5.1), (5.2), is defined in (1.22), (1.24) (Section 1.4) by:   .. (α−1)T T α−1 T .. (1)T .. SHISO = R = R .R . ... . R ∈ Rn , n = αρ, (5.3) This new vector notation Rα−1 has permitted us to define the state of the HISO system (5.1), (5.2), by preserving the physical sense. It enables us to discover in what follows the complex domain criteria for observability, controllability and trackability directly from their definitions. Such criteria possess the complete physical meaning.

5.1.2

Complex domain

After the application of the Laplace transform to Equations (5.1) and (5.2) they are transformed into:  −1 R(s) = A(α) Sρ(α) (s) •   .. (µ) (µ) .. .. (α) (α−1) .. (µ) (µ−1) (µ) (µ) D Sd (s) . B Sr (s) . − D Zd (s) . A Zρ (s) .  • • .. (µ−1) (µ) . − B Zr (s)  T T   .. T ..  µ−1 T .. µ−1 α−1 T .. T • D (s) . U (s) . D0 . R0 . U0 = = FHISOIS (s) VHISOIS (s) ,

(5.4)

Y(s) = FHISO (s) VHISO (s) =           =        

 T  −1 .. (α) (α) (µ) (α) (α) (µ) R Sρ (s) A Sρ (s) D Sd (s) + V .  T  −1 .. .. (µ) (α) (α) B (µ) Sr (s) + U . . R(α) Sρ (s) A(α) Sρ (s) T   −1 .. .. (α) (α) (µ−1) (µ) (α) (α) . −R Sρ (s) A Sρ (s) D ZM (s) .   T  −1 (α−1) (α) S (α) (s) A(α) S (α) (s) .. R A(α) Zρ (s)−  .. ρ ρ .  . (α−1) −R(α) Zρ (s)  T  −1 .. (µ−1) (α) (α) (α) (α) (µ) B Zr (s) . −R Sρ (s) A Sρ (s)

  T T  .. T ..  µ−1 T .. µ−1 α−1 T .. T • D (s) . U (s) . D0 . R0 . U0 ,

T           •        

(5.5)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 67 — #78

i

5.1. HISO SYSTEM MATHEMATICAL MODEL

i

67

where: - FHISOIS (s) ,       −1   FHISOIS (s) = A(α) Sρ(α) (s)      

h iT . (µ) .. D(µ) Sd (s) .. h (µ) (µ) iT .. . B Sr (s) . h i T .. .. (µ−1) . −D(µ) Zd (s) . .. h (α) (α−1) iT .. (s) . A Zρ . h i T .. (µ−1) . −B (µ) Zr (s)

T        ,     

(5.6)

is the HISO plant (5.1), (5.2) input to state (IS) full transfer function matrix, the inverse Laplace transform of which is the plant IS full fundamental matrix ΨHISOIS (t) [40], ΨHISOIS (t) = L−1 {FHISOIS (s)} ,

(5.7)

 −1 (α) and the inverse Laplace transform of A(α) Sρ (s) is the HISO system fundamental matrix Φ (t) ,,  −1  A(α) Sρ(α) (s) Φ (t) = L−1 , (5.8) - FHISO (s) ,

                  

FHISO (s) =  T  −1 .. (µ) (α) (α) (α) (α) (µ) D Sd (s) + V R Sρ (s) A Sρ (s) .   T  −1 .. .. (µ) (α) (α) B (µ) Sr (s) + U . . R(α) Sρ (s) A(α) Sρ (s)  T  −1 .. .. (α) (α) (µ−1) . −R(α) Sρ (s) A(α) Sρ (s) D(µ) Zd (s) . T   −1 (α) S (α) (s) A(α) S (α) (s) (α) Z (α−1) (s)− .. . R A ρ ρ ρ  .. .  (α−1) −R(α) Zρ (s)  T  −1 .. (α) (α) (µ−1) (α) (α) (µ) . −R Sρ (s) A Sρ (s) B Zr (s)

T           ,        

(5.9)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 68 — #79

i

68

i

CHAPTER 5. HISO SYSTEMS

is the HISO plant (5.1), (5.2) input to output (IO) full transfer function matrix, the inverse Laplace transform of which is the system IO full fundamental matrix ΨHISO (t), ΨHISO (t) = L−1 {FHISO (s)} ,

(5.10)

  pHISO (s) = det A(α) Sρ(α) (s) ,

(5.11)

- pHISO (s) , is the characteristic polynomial of the HISO plant (5.1), (5.2) and the denominator polynomial of all its transfer function matrices, - GHISOD (s) ,  −1 (µ) GHISOD (s) = R(α) Sρ(α) (s) A(α) Sρ(α) (s) D(µ) Sd (s) + V =   " # (α) S (α) (s)adj A(α) S (α) (s) D (µ) S (µ) (s)+ R ρ ρ d = p−1 , (5.12) HISO (s) +pHISO (s) V is the HISO plant (5.1), (5.2) transfer function matrix relative to the disturbance D, - GHISOU (s) ,    −1 (α) (α) (α) (α) (µ) (µ) GHISOU (s) = R Sρ (s) A Sρ (s) B Sr (s) + U =   " # (α) S (α) (s)adj A(α) S (α) (s) B (µ) S (µ) (s)+ R ρ ρ r = p−1 = HISO (s) • +pHISO (s) U = p−1 HISO (s) LHISO (s) ,

(5.13)

is the HISO plant (5.1), (5.2) transfer function matrix relative to the control U, - GHISOD0 (s) , GHISOD0 (s) = −p−1 HISO (s) • h   i (µ−1) • R(α) Sρ(α) (s)adj A(α) Sρ(α) (s) D(µ) Zd (s)

(5.14)

is the HISO plant (5.1), (5.2) transfer function matrix relative to the initial extended disturbance Dµ−1 , 0 - GHISOR0 (s) ,



GHISOR0 (s) = p−1 (s) •  HISO ! (α) (α) (α−1) R(α) Sρ (s)adj A(α) Sρ (s) A(α) Zρ (s)− (α−1)

−pHISO (s) R(α) Zρ

(5.15)

(s)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 69 — #80

i

5.1. HISO SYSTEM MATHEMATICAL MODEL

i

69

is the HISO plant (5.1), (5.2) transfer function matrix relative to the initial state vector Rα−1 , 0 - GHISOU0 (s) , GHISOU0 (s) = −p−1 HISO (s) • h   i • R(α) Sρ(α) (s)adj A(α) Sρ(α) (s) B (µ) Zr(µ−1) (s)

(5.16)

is the HISO plant (5.1), (5.2) transfer function matrix relative to the ex, tended initial control vector Uµ−1 0 - VHISO (s) and CHISO0 , VHISO (s) = VHISOIS (s) = CHISO0 = CHISOIS0 =

 



DT (s) UT (s) CHISO0

Dµ−1 0

T

T Rα−1 0



Uµ−1 0

T

,

T  T

(5.17) ,

(5.18)

are the Laplace transform of the action vector VHISOP (t) and the vector CHISO0 of all initial conditions, respectively. Example 63 Let the beginning (5.2) be defined by #   " (3)  2 R1 (t) 1 2 + (3) 1 3 −4 R2 (t) #   " (2)  1 I1 (t) 5 3 + = (2) 0 4 1 I2 (t)



Y1 (t) Y2 (t)



and the end of the HISO system (5.1),

3 0

"

#    (1) R1 (t) 4 0 R1 (t) = + (1) R2 (t) 1 1 R2 (t) #     " (1) I1 (t) 4 2 I1 (t) 0 , + (1) I2 (t) 3 6 3 I2 (t) (5.19)

#   " (1)   1 0 R1 (t) 1 2 I1 (t) = + (1) 0 2 2 1 I2 (t) R2 (t) | {z } | {z } 

Ry1

(5.20)

Q

In this case α = 3, ρ = 2, N = 2, Ry0 = Ry2 = Ry3 = O2 , M = 2, µ = 2 < 3 = α, (5.21)     . . . 0 0 1 0 0 0 0 0 (α) (3) . . . Ry = Ry = Ry0 . Ry1 . Ry2 . Ry3 = , 0 0 0 2 0 0 0 0 (5.22)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 70 — #81

i

70

i

CHAPTER 5. HISO SYSTEMS

  . . 0 0 1 0 0 0 0 0 = • s0 I2 .. s1 I2 .. s2 I2 0 0 0 2 0 0 0 0   s 0 (3) Ry(3) S2 = , 0 2s  O2 O2   3−3  0 0 1 0 0 0 0 0  s I2 O2 (3−1) Ry(3) Z2 = • 0 0 0 2 0 0 0 0  s3−2 Ik s3−3 Ik s3−1 I2 s3−2 I2   1 0 0 0 0 0 (3−1) . Ry(3) Z2 = 0 2 0 0 0 0 (3) Ry(3) S2



.. 3 . s I2

T =⇒ (5.23)

 O2 O2   =⇒ O2  s3−3 I2 (5.24)

The equation (5.19) is the equation (B.1) (Example 227) so that the equation (B.14) (Example 227, Subsection 2.1.2) holds for Y replaced by R, FIO (s) ∓ ∓ (s), and it is denoted (s) replaced by VHISOIS replaced by FHISOIS (s), VIO as (5.25): ∓ (s), (5.25) R∓ (s) = FHISOIS (s) VHISOIS where in this context: - FHISOIS (s), which is equal to FIO (s) (B.15) (Example 227), " 3 3 FHISOIS (s) =

4s −1

2s +3s

10s6 +19s4 +17s3 +3s2 +s−4 3s3 +s+1 10s6 +19s4 +17s3 +3s2 +s−4

10s6 +19s4 +17s3 +3s2 +s−4 s3 +2s+4 10s6 +19s4 +17s3 +3s2 +s−4



 T 4 + s + 5s2 2 + 3s2   3 + 4s2 6 + 3s + s2       −1 − 5s −3s −5 −3  •   −4s −3 − s −4 −1     2 2   2+s 3 + 2s s 2s 1 2 2 2 1 + 3s 3 − 4s 3s −4s 3 −4   .. .. = GHISOIS (s) . GHISOISI0 (s) . GHISOISY0 (s) , 

# •



(5.26)

is the IS full transfer function matrix of the HISO system (5.19), (5.20), - The system IS transfer function GHISOIS (s) relative to the input vec-

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 71 — #82

i

i

5.1. HISO SYSTEM MATHEMATICAL MODEL

71

tor I : " GHISOIS (s) =

4s3 −1 10s6 +19s4 +17s3 +3s2 +s−4 3s3 +s+1 10s6 +19s4 +17s3 +3s2 +s−4

 • " =





4 + s + 5s2 2 + 3s2 2 3 + 4s 6 + 3s + s2

28s5 +4s4 +34s3 −5s2 +8s−4 10s6 +19s4 +17s3 +3s2 +s−4 11s5 +3s4 +6s3 −10s2 −s−8 10s6 +19s4 +17s3 +3s2 +s−4

#

2s3 +3s 10s6 +19s4 +17s3 +3s2 +s−4 s3 +2s+4 10s6 +19s4 +17s3 +3s2 +s−4

 =

14s5 +6s4 +23s3 +6s2 +18s−2 10s6 +19s4 +17s3 +3s2 +s−4 8s5 −3s4 +s3 −7s2 −12s−22 10s6 +19s4 +17s3 +3s2 +s−4

# ,

(5.27)

- The system IS transfer function GHISOISI0 (s) relative to the extended : initial input vector I0µ−1 ∓ " GHISOISI0 (s) =

4s3 −1 10s6 +19s4 +17s3 +3s2 +s−4 3s3 +s+1 10s6 +19s4 +17s3 +3s2 +s−4

 •

#

2s3 +3s 10s6 +19s4 +17s3 +3s2 +s−4 3 +2s+4 − 10s6 +19ss4 +17s 3 +3s2 +s−4

−1 − 5s −3s −5 −3 −4s −3 − s −4 −1



 =⇒

GHISOISI0 (s) =    = 

1+5s−12s2 −4s3 −28s4 10s6 +19s4 +17s3 +3s2 +s−4 −6s−3s2 −6s3 −14s4 10s6 +19s4 +17s3 +3s2 +s−4 5−12s −28s3 10s6 +19s4 +17s3 +3s2 +s−4 3−3s−14s3 10s6 +19s4 +17s3 +3s2 +s−4

−1+10s+3s2 −3s3 −3s4 10s6 +19s4 +17s3 +3s2 +s−4 12+7s−s2 +3s3 −8s4 10s6 +19s4 +17s3 +3s2 +s−4 11+3s+4s3 10s6 +19s4 +17s3 +3s2 +s−4 1−s−8s3 10s6 +19s4 +17s3 +3s2 +s−4

T    , 

(5.28)

- The system IS transfer function GHISOISR0 (s) relative to the extended initial output vector R0α−1 : ∓ " GHISOISR0 (s) =  •

4s3 −1 10s6 +19s4 +17s3 +3s2 +s−4 3s3 +s+1 10s6 +19s4 +17s3 +3s2 +s−4 + s2 3 + 2s2 s 2s

2s3 +3s 10s6 +19s4 +17s3 +3s2 +s−4 3 +2s+4 − 10s6 +19ss4 +17s 3 +3s2 +s−4

2 1 2 1 + 3s2 3 − 4s2 3s −4s 3 −4

# •

 =⇒

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 72 — #83

i

72

i

CHAPTER 5. HISO SYSTEMS GHISOISR0 =      =    

10s5 +19s3 −s2 +3s−2 10s6 +19s4 +17s3 +3s2 +s−4 −4s5 +6s3 −2s2 +9s−3 10s6 +19s4 +17s3 +3s2 +s−4 10s4 +9s2 −s 10s6 +19s4 +17s3 +3s2 +s−4 −12s2 −2s 10s6 +19s4 +17s3 +3s2 +s−4 10s3 +9s−1 10s6 +19s4 +17s3 +3s2 +s−4 −12s−2 10s6 +19s4 +17s3 +3s2 +s−4

−11s2 −2 10s6 +19s4 +17s3 +3s2 +s−4 5 10s +16s3 +18s2 −3s−9 10s6 +19s4 +17s3 +3s2 +s−4 −5s2 −11s 10s6 +19s4 +17s3 +3s2 +s−4 2s4 −6s2 −14s 10s6 +19s4 +17s3 +3s2 +s−4 −5s−11 10s6 +19s4 +17s3 +3s2 +s−4 10s3 +10s+18 10s6 +19s4 +17s3 +3s2 +s−4

T      ,    

(5.29)

∓ (s) of the system action vector VHISOIS (t) - The Laplace transform VHISOIS reads  ∓    I (s) I∓ (s) ∓ 1   I0∓ , (5.30) VHISOIS (s) = = CHISOIS0∓ R20∓

- And the vector CHISOIS0∓ of all the initial conditions is found to be  1  I0∓ CHISOIS0∓ = . (5.31) R20∓ In order to determine the system IO transfer function matrices we refer to Equation (5.9), FHISO (s) =  T  −1 (α) (α) (α) (µ) (α) (µ) Ry Sρ (s) A Sρ (s) H SM (s) + Q  T  −1 (α) (α) (α) (µ−1) (α) (µ) −Ry Sρ (s) A Sρ (s) H ZM (s)



    =    T  −1  (α) (α) (α) (α−1) (α) (α−1) (α) (α) Ry Sρ (s) A Sρ (s) A Zρ (s) − Ry Zρ (s)

T      ,    (5.32)

to the system (5.19), (5.20):   .. .. FHISO (s) = GHISO (s) . GHISOI0 (s) . GHISOR0 (s) =    =   



T (s) + Q  T (α) (α) Ry Sρ (s)GHISOISI0 (s)

(α) (α) Ry Sρ (s)GHISOIS

T (α) (α) (α) (α−1) Ry Sρ (s)GHISOISR0 (s) − Ry Zρ (s)

T    ,  

(5.33)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 73 — #84

i

i

5.1. HISO SYSTEM MATHEMATICAL MODEL

73

where: - FHISO (s) is the IO full transfer function matrix of the HISO system (5.19), (5.20), which together with (5.23), (5.24), determines: - GHISOI (s) , (3)

GHISOI (s) = Ry(3) S2 GHISOIS (s) + Q =   s 0 =Q+ GHISOIS (s) =⇒ 0 2s  GHISOI (s) = " •

−4+8s−5s2 +34s3 +4s4 +28s5 10s6 +19s4 +17s3 +3s2 +s−4 16+11s+22s2 +28s3 +3s4 +19s5 10s6 +19s4 +17s3 +3s2 +s−4

1 2 2 1



 +

s 0 0 2s

 •

−2+18s+6s2 +23s3 +6s4 +14s5 10s6 +19s4 +17s3 +3s2 +s−4 26+26s+13s2 +17s3 +3s4 +10s5 − 10s6 +19s4 +17s3 +3s2 +s−4

# =⇒

GHISOI (s) = " =

−4−3s+11s2 +12s3 +53s4 +4s5 +38s6 −4+s+3s2 +17s3 +19s4 +10s6 −8+34s+28s2 +78s3 +94s4 +6s5 +58s6 −4+s+3s2 +17s3 +19s4 +10s6



−8+24s2 +40s3 +61s4 +6s5 +34s6 −4+s+3s2 +17s3 +19s4 +10s6 4+51s+49s2 +9s3 +15s4 +6s5 +10s6 −4+s+3s2 +17s3 +19s4 +10s6

# , (5.34)

GHISOI (s) is the IO transfer function matrix of the HISO system (5.19), (5.20) relative to the input vector I, - GHISOI0 (s) , (3)

GHISOI0 (s) = Ry(3) S2 GHISOISI0 (s) =   s 0 = GHISOISI0 (s) =⇒ 0 2s  GHISOI0 (s) =    • 

1+5s−12s2 −4s3 −28s4 10s6 +19s4 +17s3 +3s2 +s−4 −6s−3s2 −6s3 −14s4 10s6 +19s4 +17s3 +3s2 +s−4 5−12s −28s3 10s6 +19s4 +17s3 +3s2 +s−4 3−3s−14s3 10s6 +19s4 +17s3 +3s2 +s−4

s 0 0 2s

 •

−1+10s+3s2 −3s3 −3s4 10s6 +19s4 +17s3 +3s2 +s−4 12+7s−s2 +3s3 −8s4 10s6 +19s4 +17s3 +3s2 +s−4 11+3s+4s3 10s6 +19s4 +17s3 +3s2 +s−4 1−s−8s3 10s6 +19s4 +17s3 +3s2 +s−4

T    =⇒ 

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 74 — #85

i

74

i

CHAPTER 5. HISO SYSTEMS GHISOI0 (s) =    = 

s+5s2 −12s3 −4s4 −28s5 −4+s+3s2 +17s3 +19s4 +10s6 −6s2 −3s3 −6s4 −14s5 −4+s+3s2 +17s3 +19s4 +10s6 5s−12s2 −28s4 −4+s+3s2 +17s3 +19s4 +10s6 3s−3s2 −14s4 −4+s+3s2 +17s3 +19s4 +10s6

−2s+20s2 +6s3 −6s4 −6s5 −4+s+3s2 +17s3 +19s4 +10s6 24s+14s2 −2s3 +6s4 −16s5 −4+s+3s2 +17s3 +19s4 +10s6 22s+6s2 +8s4 −4+s+3s2 +17s3 +19s4 +10s6 2s−2s2 −16s4 −4+s+3s2 +17s3 +19s4 +10s6

T    , 

(5.35)

is the IO transfer function matrix of the HISO system (5.19), (5.20) rel, ative to the extended initial input vector Iµ−1 0∓ - The IO transfer function matrix GHISOR0 (s) of the HISO system (5.19), (5.20) relative to the extended initial state vector Rα−1 : 0∓ (3)

=

                                    

(3−1)

GHISOR0 (s) = Ry(3) S2 GHISOISR0 (s) − Ry(3) Z2   s 0 • 0 2s      •    

−2+3s−s2 +19s3 +9s5 10s6 +19s4 +17s3 +3s2 +s−4 −2−11s2 −2s5 10s6 +19s4 +17s3 +3s2 +s−4 −s +9s2 +10s4 10s6 +19s4 +17s3 +3s2 +s−4 −2s−12s2 10s6 +19s4 +17s3 +3s2 +s−4 −1+9s+10s3 10s6 +19s4 +17s3 +3s2 +s−4 −2−12s 10s6 +19s4 +17s3 +3s  2 +s−4



−3+9s−2s2 +6s3 10s6 +19s4 +17s3 +3s2 +s−4 2−3s+18s2 +16s3 +18s5 10s6 +19s4 +17s3 +3s2 +s−4 −11s−5s2 10s6 +19s4 +17s3 +3s2 +s−4 18s+10s2 +10s4 10s6 +19s4 +17s3 +3s2 +s−4 −11−5s 10s6 +19s4 +17s3 +3s2 +s−4 18+10s+10s3 10s6 +19s4 +17s  3 +3s2 +s−4

1 0 0 0 0 0 0 2 0 0 0 0

(s)

T

T                  

     − =⇒                      

GHISOR0 (s) =      =    

5.2

4−3s−18s3 −s6 −4+s+3s2 +17s3 +19s4 +10s6 −2s−11s3 −2s6 −4+s+3s2 +17s3 +19s4 +10s6 −s2 +9s3 +10s5 −4+s+3s2 +17s3 +19s4 +10s6 −2s2 −12s3 −4+s+3s2 +17s3 +19s4 +10s6 −s+9s2 +10s4 −4+s+3s2 +17s3 +19s4 +10s6 −2s−12s2 −4+s+3s2 +17s3 +19s4 +10s6

−6s+18s2 −4s3 +12s4 −4+s+3s2 +17s3 +19s4 +10s6 8+2s−12s2 +2s3 −6s4 +16s6 −4+s+3s2 +17s3 +19s4 +10s6 −22s2 −10s3 −4+s+3s2 +17s3 +19s4 +10s6 36s2 +20s3 +20s5 −4+s+3s2 +17s3 +19s4 +10s6 −22s−10s2 −4+s+3s2 +17s3 +19s4 +10s6 36s+20s2 +30s4 −4+s+3s2 +17s3 +19s4 +10s6

T      .    

(5.36)

The HISO plant desired regime

Definition 44, (Section 3.2), takes the special form for the HISO plant (5.1), (5.2).

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 75 — #86

i

5.2. THE HISO PLANT DESIRED REGIME

i

75

h i α−1 Definition 64 The functional vector control-state pair U*(.), R* (.) is nominal for the HISO plant (5.1), (5.2) relative to the functional vector pair [D(.), Yd (.)], which is denoted by UN (.), Rα−1 N (.) , if and only if i   h α−1 U(.), Rα−1 (.) = I*(.), R* (.) ensures that the corresponding real response Y(.) = Y*(.) of the system obeys Y*(t) = Yd (t) all the time, h i   α−1 U*(.), R* (.) = UN (.), Rα−1 N (.) ⇐⇒ hY*(t) = Yd (t), ∀t ∈ T0 i . α−1 α−1 α−1 α−1 The system motion Rα−1 N (.; RN 0 ; D; UN ), RN (0; RN 0 ; D; UN ) ≡ RN 0 , is the desired motion Rα−1 (.; Rα−1 d d0 ; D; UN ) of the HISO plant (5.1), (5.2) relative to the functional vector pair [D(.), Yd (.)] , for short: the system desired motion, α−1 α−1 (t; Rα−1 Rα−1 d0 ; D; UN ) ≡ RN (t; RN 0 ; D; UN ), d α−1 α−1 (0; Rα−1 Rα−1 d0 ; D; UN ) ≡ Rd0 ≡ RN 0 . d

(5.37)

Let ( v1 (s) =

(α−1)

(µ−1)

− − A(α) Zρ (s)Rα−1 B (µ) Zr (s)Uµ−1 0 0 (µ) (µ−1) µ−1 (ν) (ν) −D Sd (s)D (s) + D Zd (s)D0

) ,

− V D(s). v2 (s) = Yd (s) + Ry(α) Zρ(α−1) (s)Rα−1 0

(5.38) (5.39)

Definition 64, the system description (5.1), (5.2), Equations (5.38) and (5.39) imply: Theorem 65 In order for a functional vector pair [I*(.), R*(.)] to be nominal for the HISO plant (5.1), (5.2) relative to the functional vector pair [D(.), Yd (.)], h i   α−1 U*(.), R* (.) = UN (.), Rα−1 N (.) , it is necessary and sufficient that it obeys the following equations: µ

α

B (µ) U* (t) − A(α) R* (t) = −D(µ) Dµ (t), ∀t ∈ T0 ,

(5.40)

U U*(t) + Ry(α) R*α (t) = Yd (t) − V D(t), ∀t ∈ T0 ,

(5.41)

or equivalently, " #    (µ) (α) B (µ) Sr (s) −A(α) Sρ (s) U*(s) v1 (s) = . (α) (α) R*(s) v2 (s) U Ry Sρ (s)

(5.42)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 76 — #87

i

76

i

CHAPTER 5. HISO SYSTEMS

Let us consider the existence of the solutions of Equations (5.40), (5.41), i.e., of (5.42). The HISO plant (5.1), (5.2) contains (r+ρ) unknown variables and (N + ρ) equations. The unknown variables are the entries of U*(s) ∈ Cr and of X*(s) ∈ Cn . Claim 66 In order to exist a nominal functional vector control-state pair   UN (.), Rα−1 (.) N for the HISO plant (5.1), (5.2), relative to the functional vector pair disturbance - desired output [D(.), Yd (.)] , it is necessary and sufficient that N ≤ r. The proof of this claim is analogous to the proof of Claim 46 (Section 3.2). Claim 66 presents the complete solution to the problem of the existence   of a nominal functional vector pair UN (.), Rα−1 (.) for the HISO plant N (5.1), (5.2) relative to the functional vector pair [D(.), Yd (.)]. Condition 67 The desired output response of the HISO plant (5.1), (5.2) is realizable, i.e.,, N ≤ r. The nominal functional vector pair   α−1 UN (.), RN (.) is known. The HISO plant description in terms of the deviations (5.43), r = R − RN ,

(5.43)

A(α) rα (t) = D(µ) dµ (t) + B (µ) uµ (t), ∀t ∈ T0 ,

(5.44)

y(t) = R(α) Rα (t) + V d(t) + U u(t), ∀t ∈ T0 ,

(5.45)

(2.53), (9.3), and (2.55) reads:

due to Equations (5.1) and (5.2).

5.3

Exercises

Exercise 68 1. Select a physical HISO plant. 2. Determine its time domain HISO mathematical model.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 77 — #88

i

i

Chapter 6

IIO systems 6.1 6.1.1

IIO system mathematical model Time domain

The general description, in terms of the total vector coordinates, of timeinvariant continuous-time linear Input-Internal and Output state systems, for short IIO systems, without a delay, has the following general form: A(α) Rα (t) = D(µ) Dµ (t) + B (µ) Uµ (t) = H (µ) Iµ (t) , ∀t ∈ T0 , (6.1) ( ) (α−1) α−1 Ry R (t) + V (µ) Dµ (t) + U (µ) Uµ (t) = E (ν) Yν (t) = , ∀t ∈ T0 , (α−1) α−1 = Ry R (t) + Q(µ) Iµ (t) (6.2) where  T I = IIIO = DT UT ∈Rd+r , M = d + r,     .. .. ρx(d+r) H = D . B ∈R , Q = V . U ∈RN x(d+r) ,   .. .. .. (α−1) Ry = Ry0 . Ry1 . ... . Ry,α−1 ∈ RN xαρ .

(6.3)

Note 69 If ν = 0 then the IIO system (6.1), (6.2) reduces to the HISO system (5.1), (5.2), Chapter 5. We continue to treat the IIO system (6.1), (6.2) with ν > 0.

77 i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 78 — #89

i

78

i

CHAPTER 6. IIO SYSTEMS

Condition 70 The matrices Aα and Eν obey: detAα 6= 0, which implies ∃s ∈ C=⇒det detEν 6= 0, which implies ∃s ∈ C=⇒det

"k=α X

k=0 "k=ν X k

# k

s Ak 6= 0, #

s Ek 6= 0,

(6.4)

k=0

and ν ∈ {1, 2, ....} .

(6.5)

Note 71 We accept the validity of Condition 70 in the sequel. The left-hand side of Equation (6.1) describes the internal dynamics of the system, i.e. the internal state of the system (Definition 14, Section 1.4), and the left-hand side of Equation (6.2) describes the output dynamics, i.e., the output state of the system if and only if ν > 0. Note 72 The state vector SIIO of the IIO system (6.1), (6.2) is defined in Equation (1.27) (Section 1.4) by:  SIIO =

Rα−1 Yν−1





.. (1)T .R  = . T YT .. Y(1) RT

.. . ... .. . ...

n = αρ + νN,

 .. (α−1)T T .R  ∈ Rn , .. (ν−1)T .Y (6.6)

The new vector notation Rα−1 and Yν−1 has permitted us to define the state of the IIO system (6.1), (6.2) by preserving the physical sense. It enabled us to establish in [40] the direct link between the definitions of the Lyapunov and of BI stability properties with the corresponding conditions for them in the complex domain. It enables us to discover in what follows the complex domain criteria for observability, controllability and trackability directly from their definitions. Such criteria possess the complete physical meaning. The extended vector Rα−1 is the IIO system internal state vector SIIOI . The extended vector Yν−1 is the IIO system output state vector SIIOO . They compose the IIO system (full) state vector SIIO ,  α−1    R SIIOI SIIO = = . (6.7) Yν−1 SIIOO

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 79 — #90

i

6.1. IIO SYSTEM MATHEMATICAL MODEL

6.1.2

i

79

Complex domain

We transform Equations (6.1), (6.2) by applying the Laplace transform, into R(s) = FIIOIS (s) VIIOIS (s) ,

(6.8)

Y(s) = FIIO (s) VIIO (s) ,

(6.9)

where: - FIIOIS (s) ,    FIIOIS (s) =   

GTIIOISD (s) GTIIOISU (s) GTIIOISD0 (s) GTIIOISR0 (s) GTIIOISU0 (s)

T     

(6.10)

is the IIO system (6.1), (6.2) input to state (IS) full transfer function matrix, the inverse Laplace transform of which is the plant IS full fundamental matrix ΨIIOIS (t) [40], ΨIIOIS (t) = L−1 {FIIOIS (s)} ,

(6.11)

 −1 (α) and the inverse Laplace transform of A(α) Sρ (s) is the IIO plant IS  −1  (α) −1 (α) fundamental matrix ΦIIOIS (t) , ΦIIOIS (t) = L A Sρ (s) [40], - GIIOISD (s) ,  −1 (µ) GIIOISD (s) = A(α) Sρ(α) (s) D(µ) Sd (s),

(6.12)

is the IIO plant (6.1), (6.2) disturbance to internal state (IS) transfer function matrix, - GIIOISU (s) ,  −1 GIIOISU (s) = A(α) Sρ(α) (s) B (µ) Sr(µ) (s),

(6.13)

is the IIO plant (6.1), (6.2) control to internal state (IS) transfer function matrix, - GIIOISD0 (s) ,  −1 (µ−1) GIIOISD0 (s) = − A(α) Sρ(α) (s) D(µ) Zd (s),

(6.14)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 80 — #91

i

80

i

CHAPTER 6. IIO SYSTEMS

is the IIO plant (6.1), (6.2) initial disturbance to internal state (IS) transfer function matrix, - GIIOISR0 (s) ,  −1 A(α) Zρ(α−1) (s), GIIOISR0 (s) = A(α) Sρ(α) (s)

(6.15)

is the IIO plant (6.1), (6.2) initial internal state to internal state (IS) transfer function matrix, - GIIOISU0 (s) ,  −1 GIIOISU0 (s) = A(α) Sρ(α) (s) B (µ) Zr(µ−1) (s),

(6.16)

is the IIO plant (6.1), (6.2) initial control to internal state (IS) transfer function matrix, VIIOIS (s) , 

. . VIIOIS (s) = D (s) .. UT (s).. CTIIOIS0 T

T ,

(6.17)

is the Laplace transform of the action vector VIIOIS (t) , and CIIOIS0 , CIIOIS0 =



Dµ−1 0

T .  T  .. Rα−1 T ... Uµ−1 T , 0 0

(6.18)

is the vector of all initial conditions acting on the system internal state, - FIIO (s) ,  T T GIIOD (s)  GTIIOU (s)   T   GIIOD (s)  0   , (6.19) FIIO (s) =  T   GIIOR0 (s)   GT (s)  IIOU0

GTIIOY0 (s) is the IIO plant (6.1), (6.2) input to output (IO) full transfer function matrix, the inverse Laplace transform of which is the plant IO full fundamental matrix ΨIIO (t), ΨIIO (t) = L−1 {FIIO (s)} , (6.20) - pIIO (s) ,     (ν) pIIO (s) = det E (ν) SN (s) det A(α) Sρ(α) (s) ,

(6.21)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 81 — #92

i

i

6.1. IIO SYSTEM MATHEMATICAL MODEL

81

is the characteristic polynomial of the IIO plant (6.1), (6.2) and the denominator polynomial of all its transfer function matrices - GIIOD (s) ,

  =

GIIOD (s) =  −1  (ν) (α−1) (α) (α) (µ) E (ν) SN (s) Ry Sρ (s) A(α) Sρ (s) H (µ) Sd (s) =  −1 (ν) (µ) + E (ν) SN (s) V (µ) Sd (s) −1

= p−1 IIO (s) •       (ν) (α−1) (α) (α) (µ) adj E (ν) SN (s) Ry Sρ (s)adj A(α) Sρ (s) H (µ) Sd (s) h  i   , • (α) (ν) (µ) + det A(α) Sρ (s) adj E (ν) SN (s) V (µ) Sd (s) (6.22) is the IIO plant (6.1), (6.2) IO transfer function matrix relative to the disturbance D, - GIIOU (s) ,

  =

GIIOU (s) = −1  −1  (ν) (α−1) (α) (α) (µ) E (ν) SN (s) Ry Sρ (s) A(α) Sρ (s) B (µ) Sr (s)+ =  −1 (ν) (µ) + E (ν) SN (s) U (µ) Sr (s), p−1 IIO (s) •



     (ν) (α−1) (α) (α) (µ) adj E (ν) SN (s) Ry Sρ (s)adj A(α) Sρ (s) B (µ) Sr (s) h  i    • (α) (ν) (µ) + det A(α) Sρ (s) adj E (ν) SN (s) U (µ) Sr (s), (6.23) is the IIO plant (6.1), (6.2) IO transfer function matrix relative to the control U, - GIIOD0 (s) , GIIOD0 (s) = −p−1 (s) •    IIO   (ν) (α−1) (α) (α) (µ−1) adj E (ν) SN (s) Ry Sρ (s)adj A(α) Sρ (s) D(µ) Zd (s)      • (α) (ν) (µ−1) + det A(α) Sρ (s) adj E (ν) SN (s) V (µ) Zd (s), (6.24) is the IIO plant (6.1), (6.2) IO transfer function matrix relative to the initial extended disturbance Dµ−1 , 0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 82 — #93

i

82

i

CHAPTER 6. IIO SYSTEMS - GIIOR0 (s) ,

GIIOR0 (s) = p−1 IIO (s) •       (ν) (α−1) (α) (α) (α−1) adj E (ν) SN (s) Ry Sρ (s)adj A(α) Sρ (s) A(α) Zρ (s) h  i    •  (α) (ν) (α−1) (α−1) − det A(α) Sρ (s) adj E (ν) SN (s) Ry Zρ (s), (6.25) is the IIO plant (6.1), (6.2) IO transfer function matrix relative to the , initial internal state vector Rα−1 0 - GIIOU0 (s) , GIIOU0 (s) = p−1 IIO (s) •       (ν) (α−1) (α) (α) (µ−1) −adj E (ν) SN (s) Ry Sρ (s)adj A(α) Sρ (s) B (µ) Zr (s) h  i   , •  (α) (ν) (µ−1) − det A(α) Sρ (s) adj E (ν) SN (s) U (µ) Zr (s) (6.26) is the IO transfer function matrix relative to the initial extended control vector Uµ−1 of the IIO plant (6.1), (6.2), 0 - GIIOY0 (s) ,  −1 (ν) (ν−1) GIIOY0 (s) = E (ν) SN (s) E (ν) ZN (s) (6.27) is the IIO plant (6.1), (6.2) IO transfer function matrix relative extended initial output state vector Y0ν−1 , - VIIO (s) and CIIO0 ,  µ−1 D0      Rα−1 IIIO (s) D(s) 0 VIIO (s) = , IIIO (s) = , CIIO0 =   Uµ−1 CIIO0 U(s) 0 Y0ν−1

to the

  , 

(6.28) are the Laplace transform of the action vector VIIO (t) and the vector CIIO0 of all initial conditions, respectively. Equations (6.9), (6.19), (6.21)-(6.23), (6.24)-(6.27) determine the Laplace transform Y(s) of the output vector Y(t), Y(s) = GIIOD (s) D(s) + GIIOU (s) U(s) + GIIOD0 (s) Dµ−1 + 0 +GIIOR0 (s) Rα−1 + GIIOU0 (s) Uµ−1 + GIIOY0 (s) Y0ν−1 = 0 0 = FIIO (s) VIIO (s) .

(6.29)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 83 — #94

i

i

6.1. IIO SYSTEM MATHEMATICAL MODEL

83

This can be set in a more compact form. Equations (6.28) and (6.29), together with     .. .. GIIO (s) = GIIOD (s) . GIIOU (s) , GIIOI0 (s) = GIIOD0 (s) . GIIOU0 (s) , give the compact form to the Laplace transform Y∓ (s) of the system response Y(t; Rα−1 ; Y0ν−1 ; Iµ ), 0− Y∓ (s) = GIIO (s)I(s) + GIIOI0 (s)Iµ−1 + GIIOR0 (s) Rα−1 + GIIOY0 (s) Y0ν−1 ∓ . 0 0− (6.30) The inverse Laplace transform of this equation determines the IIO system ; Y0ν−1 ; Iµ ), (6.1), (6.2) response Y(t; Rα−1 0− ; Y0ν−1 ; Iµ ) Y(t; Rα−1 0−

=L

−1





Y (s) =

Z

t

0−

ΓIIO (τ )I(t − τ )dτ +

+ΓIIOI0 (t)Iµ−1 + ΓIIOR0 (t)Rα−1 + ΓIIOY0 (t)Y0ν−1 − , 0− 0−

(6.31)

∀t ∈ T0 , where ΓIIO (t) = L−1 {GIIO (s)} =     −1 (α) (α) (α) (µ)   −1 (α) (µ) E Ry Sρ (s) A Sρ (s) H SM (s)+  = L−1 ΘIIO (s)  ν , (µ)   +E −1 Q(µ) S (s) ν

M

(6.32) ΓIIOI0 (t) = L−1 {GIIOI0 (s)} =     −1 (α) (α) (α) (µ−1)   −1 (α) (µ) −Eν Ry Sρ (s) A Sρ (s) H ZM (s)−  = L−1 ΘIIO (s)  , (µ−1)   −E −1 Q(µ) Z (s) ν

M

(6.33) ΓIIOR0 (t) = L−1 {GIIOR0 (s)} =     −1 (α−1) (α) (α) (α)   (α) −1 (α) (s)−  E Ry Sρ (s) A Sρ (s) A Zρ = L−1 ΘIIO (s)  ν , (α) (α−1)   −E −1 R Z (s) ν

y

ρ

(6.34)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 84 — #95

i

84

i

CHAPTER 6. IIO SYSTEMS

n o (ν−1) ΓIIOY0 (t) = L−1 {GIIOY0 (s)} = L−1 ΘIIO (s)Eν−1 E (ν) ZN (s) . (6.35) Equations (6.32)-(6.35) define well the matrices ΓIIO (t), ΓIIOI0 (t), ΓIIOR0 (t) and ΓIIOY0 (t) in terms of the system transfer function matrices GIIO (s), GIIOI0 (s), GIIOR0 (s) , and GIIOY0 (s), respectively. Example 73 Let the IIO system (6.1), (6.2) be defined by #  #    " (3)  " (1)   1 2 R1 (t) 2 3 R1 (t) 4 0 R1 (t) + + = (3) (1) 3 −4 1 0 1 1 R2 (t) R2 (t) R2 (t) #  #    " (2)  " (1)   I1 (t) 5 3 I1 (t) 1 0 I1 (t) 4 2 , = + + (2) (1) I2 (t) 4 1 0 3 3 6 I2 (t) I2 (t) (6.36) " #  " #  " #   (2) (2) (1) Y1 (t) 1 0 1 2 R1 (t) I1 (t) = + (6.37) (2) (2) (1) 0 2 2 1 Y2 (t) R2 (t) I2 (t) In this case N = 2, α = 3, ρ = 2, Ry0 = Ry1 = Ry3 = O2 , M = 2, ν = 2, E0 = E1 = O2 , E2 = I2 , Q0 = Q2 = O2 , µ = 2 < 3 = α, Ry(α)

  .. .. .. = = Ry0 . Ry1 . Ry2 . Ry3 =   0 0 0 0 1 0 0 0 , = 0 0 0 0 0 2 0 0

(6.38)

Ry(3)



0 0 0 0 1 0 0 0 0 0 0 0 0 2 0 0   .. 1 .. 2 .. 3 T 0 • s I2 . s I2 . s I2 . s I2 =⇒

(3)

Ry(3) S2 (s) =

(3) Ry(3) S2 (s)

(3−1)

Ry(3) Z2



 (s) =

O2  s3−3 I2 •  s3−2 I2 s3−1 I2

 =

s2 0 0 2s2

(6.39)

 •

 ,

0 0 0 0 1 0 0 0 0 0 0 0 0 2 0 0  O2 O2 O2 O2   =⇒ 3−3 s I2 O2  s3−2 I2 s3−3 I2

(6.40)  •

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 85 — #96

i

6.1. IIO SYSTEM MATHEMATICAL MODEL (3−1) Ry(3) Z2 (s)

 =

s 0 1 0 0 0 0 2s 0 2 0 0

i

85  .

(6.41)

Equation (6.36) is Equation (B.1) (Example 227, Subsection 2.1.2) so that Equation (B.14), (Example 227), holds for Y replaced by R, FIO (s) re∓ ∓ placed by FIIOIS (s), VIO (s) replaced by VIIOIS (s), and it is denoted as (6.42): ∓ R∓ (s) = FIIOIS (s) VIIOIS (s), (6.42) where in this context: - FIIOIS (s), which is equal to FIO (s) (B.15) (Example 227),   . . GIIOIS (s) .. GIIOISI0 (s) .. GIIOISY0 (s)



 .. .. FIIOIS (s) = GIIOIS (s) . GIIOISI0 (s) . GIIOISY0 (s) = # " 3 3 =

4s −1 10s6 +19s4 +17s3 +3s2 +s−4 3s3 +s+1 10s6 +19s4 +17s3 +3s2 +s−4

2s +3s 10s6 +19s4 +17s3 +3s2 +s−4 3 +2s+4 − 10s6 +19ss4 +17s 3 +3s2 +s−4  T 2



4 + s + 5s2 2 + 3s 2   3 + 4s 6 + 3s + s2       −1 − 5s −3s −5 −3  ,  •  −4s −3 − s −4 −1     2 2   2+s 3 + 2s s 2s 1 2 2 2 1 + 3s 3 − 4s 3s −4s 3 −4 



(6.43)

is the IS full transfer function matrix of the IIO system (6.36), (6.37), - The system IS transfer function GIIOIS (s) = GIO (s) (B.16) relative to the input vector I : GIIOIS (s) = "

−4+8s−5s2 +34s3 +4s4 +28s5 10s6 +19s4 +17s3 +3s2 +s−4 −8−s+6s2 +6s3 +3s4 +11s5 10s6 +19s4 +17s3 +3s2 +s−4

−2+18s+6s2 +23s3 +6s4 +14s5 10s6 +19s4 +17s3 +3s2 +s−4 2 +17s3 +3s4 +10s5 − 26+26s+13s 10s6 +19s4 +17s3 +3s2 +s−4

# ,

(6.44)

- The system IS transfer function GIIOISI0 (s) = GIOI0 (s) (B.17) relative

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 86 — #97

i

86

i

CHAPTER 6. IIO SYSTEMS

to the extended initial input vector Iµ−1 : 0∓ GIIOISI0 (s) =    = 

1+5s−12s2 −4s3 −28s4 10s6 +19s4 +17s3 +3s2 +s−4 −6s−3s2 −6s3 −14s4 10s6 +19s4 +17s3 +3s2 +s−4 5−12s −28s3 10s6 +19s4 +17s3 +3s2 +s−4 3−3s+10s3 10s6 +19s4 +17s3 +3s2 +s−4

−1+10s+3s2 −3s3 −11s4 10s6 +19s4 +17s3 +3s2 +s−4 12+7s−s2 +3s3 −8s4 10s6 +19s4 +17s3 +3s2 +s−4 11+3s−11s3 10s6 +19s4 +17s3 +3s2 +s−4 1−s−8s3 10s6 +19s4 +17s3 +3s2 +s−4

T    , 

(6.45)

- The system IS transfer function GIIOISR0 (s) = GIOY0 (s) (B.18) relative : to the extended initial output vector Rα−1 0∓ GIIOISR0 =      =    

−2+3s−s2 +19s3 +10s5 10s6 +19s4 +17s3 +3s2 +s−4 −3+9s−2s2 +6s3 10s6 +19s4 +17s3 +3s2 +s−4 −s+9s2 +10s4 10s6 +19s4 +17s3 +3s2 +s−4 −2s−12s2 10s6 +19s4 +17s3 +3s2 +s−4 −1+9s+10s3 10s6 +19s4 +17s3 +3s2 +s−4 −2−12s 10s6 +19s4 +17s3 +3s2 +s−4

−2−11s2 10s6 +19s4 +17s3 +3s2 +s−4 −9−3s+18s2 +16s3 +20s5 10s6 +19s4 +17s3 +3s2 +s−4 −11s −5s2 10s6 +19s4 +17s3 +3s2 +s−4 −14s−6s2 +2s4 10s6 +19s4 +17s3 +3s2 +s−4 −11−5s 10s6 +19s4 +17s3 +3s2 +s−4 −14−6s+2s3 10s6 +19s4 +17s3 +3s2 +s−4

T      ,    

(6.46)

∓ (s) (B.19) of the system action - The Laplace transform VIIOIS (s) = VIO vector VIIOIS (t) reads  ∓    I (s) ∓ (s) I 1 VIIOIS (s) =  I0∓  = , (6.47) CIIOIS0∓ R20∓

- And the vector CIIOIS0∓ = CIO0∓ (B.20) of all the initial conditions is found to be  1  I0∓ CIIOIS0∓ = . (6.48) R20∓ Equation (6.37) determines     .. .. 0 0 0 0 1 0 (ν) (2) E = E = E0 . E1 . E2 = , 0 0 0 0 0 1 so that (2)

E (2) S2 =



0 0 0 0 1 0 0 0 0 0 0 1



  2  s0 I2 0  s1 I2  = s , 0 s2 s2 I2 

(6.49)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 87 — #98

i

6.1. IIO SYSTEM MATHEMATICAL MODEL 

(2−1)

E (2) Z2

 (2) −1 E (2) S2



 =

1 s2

0

0 0 0 0 1 0 0 0 0 0 0 1

=

E

(2)

(2−1) Z2

 =

s 0

0 1 s2

 =

i

87 1 I2 , s2

(6.50)

 O2 O2  s2−2 I2 O2  =⇒ 2−1 2−2 s I2 s Ik  0 1 0 . (6.51) s 0 1 



From Equation (6.37) follow:  . . (3) R = Ry0 .. Ry1 .. Ry2  0 0 0 0 1 0 = 0 0 0 0 0 2

 .. . Ry3 =  0 0 , 0 0

(6.52)

 0 0 0 0 1 0 0 0 • = 0 0 0 0 0 2 0 0  T . . . • s0 I2 .. s1 I2 .. s2 I2 .. s3 I2 =⇒

(3) R(3) S2



(3) R(3) S2



 =

s2 0 0 2s2

 ,

0 0 0 0 1 0 0 0 0 0 0 0 0 2 0 0  O2 O2 O2  s3−3 I2 O2 O2   =⇒ • 3−2 3−3  s Ik s Ik O2  s3−1 I2 s3−2 I2 s3−3 I2 (3−1)

R(3) Z2 

(2)

 •

 s 0 1 0 0 0 R = , 0 2s 0 2 0 0   0 0 1 2 0 0 Q(2) = , 0 0 2 1 0 0     s0 I2   0 0 1 2 0 0  1  s 2 s I2 = = 2 s 0 0 2 1 0 0 s2 I2 (3)

Q(2) S2

=

(6.53)

(3−1) Z2



(6.54)

(6.55)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 88 — #99

i

88

i

CHAPTER 6. IIO SYSTEMS

In order to determine the system IO transfer function matrices we apply Equations (6.19), (6.23)-(6.27), and (6.28) to the system (6.36), (6.37):   .. .. .. (6.56) FIIO (s) = GIIO (s) . GIIOI0 (s) . GIIOR0 (s) . GIIOY0 (s) , where: - FIIO (s) is the IO full transfer function matrix of the IIO system (6.36), (6.37), which, together with (6.49)-(6.55), determines: - GIIO (s) ,  −1 (2) GIIO (s) = E (2) S2 (s) • h i (3) (2) = R(3) S2 (s)GIIOIS (s) + Q(2) S2 (s) =⇒



   "   = •   

1 s2

0

GIIO (s) =   2 0 s 0 • 1 0 2s2 s2

−4+8s−5s2 +34s3 +4s4 +28s5 10s6 +19s4 +17s3 +3s2 +s−4 −8−s+6s2 +6s3 +3s4 +11s5 10s6 +19s4 +17s3 +3s2+s−4 1 + s2

0



−2+18s+6s2 +23s3 +6s4 +14s5 10s6 +19s4 +17s3 +3s2 +s−4 2 +17s3 +3s4 +10s5 − 26+26s+13s 6 +19s4 +17s3 +3s2 +s−4 10s  

s 2 2 s

0

1 s2

 # +     =⇒   

GIIO (s) =  −4s − + 11s3 +     +12s4 + 53s5   6 7 +4s + 38s

      =    



3s2

   

−8 + 2s + 4s2 + +52s3 44s4 + 23s5 + +26s6 + 14s7

   

8 6 5 4 3 −4s2  10s +19s +17s +3s +s  2

2 6 4 3 2 +s−4) s (10s +19s +17s +3s  2

  

  

−8 + 2s − 10s + +32s3 + 50s4 + 12s5 + +26s6 + +22s7

10s8 +19s6 +17s5 +3s4 +s3 −4s2

  

−4s − 51s − −49s3 − 9s4 − 15s5 − −6s6 − 10s7

  

       , (6.57)    

10s8 +19s6 +17s5 +3s4 +s3 −4s2

GIIO (s) is the IO transfer function matrix of the IIO system (6.36), (6.37) relative to the input vector I, - GIIOI0 (s) , (3)

GIIOI0 (s) = Ry(3) S2 GIIOISI0 (s) =   s 0 = GIIOISI0 (s) =⇒ 0 2s

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 89 — #100

i

i

6.1. IIO SYSTEM MATHEMATICAL MODEL  GIIOI0 (s) =    • 

s 0 0 2s

89

 •

−1+10s+3s2 −3s3 −3s4 10s6 +19s4 +17s3 +3s2 +s−4 12+7s−s2 +3s3 −8s4 10s6 +19s4 +17s3 +3s2 +s−4 11+3s+4s3 10s6 +19s4 +17s3 +3s2 +s−4 1−s−8s3 10s6 +19s4 +17s3 +3s2 +s−4

1+5s−12s2 −4s3 −28s4 10s6 +19s4 +17s3 +3s2 +s−4 −6s−3s2 −6s3 −14s4 10s6 +19s4 +17s3 +3s2 +s−4 5−12s −28s3 10s6 +19s4 +17s3 +3s2 +s−4 3−3s−14s3 10s6 +19s4 +17s3 +3s2 +s−4

T    =⇒ 

GIIOI0 (s) =    = 

−2s+20s2 +6s3 −6s4 −6s5 −4+s+3s2 +17s3 +19s4 +10s6 24s+14s2 −2s3 +6s4 −16s5 −4+s+3s2 +17s3 +19s4 +10s6 22s+6s2 +8s4 −4+s+3s2 +17s3 +19s4 +10s6 2s−2s2 −16s4 −4+s+3s2 +17s3 +19s4 +10s6

s+5s2 −12s3 −4s4 −28s5 −4+s+3s2 +17s3 +19s4 +10s6 −6s2 −3s3 −6s4 −14s5 −4+s+3s2 +17s3 +19s4 +10s6 5s2 −12s3 −28s5 −4+s+3s2 +17s3 +19s4 +10s6 3s−3s2 −14s4 −4+s+3s2 +17s3 +19s4 +10s6

T    , 

(6.58)

is the system IO transfer function matrix of the IIO system (6.36), (6.37) relative to the extended initial input vector Iµ−1 , 0∓ - GIIOR0 (s) , (3)

=

                                    

(3−1)

GIIOR0 (s) = Ry(3) S2 GIIOISR0 (s) − Ry(3) Z2   s 0 • 0 2s  −3+9s−2s2 +6s3 −2+3s−s2 +19s3 +9s5     •    

10s6 +19s4 +17s3 +3s2 +s−4 −2−11s2 −2s5 10s6 +19s4 +17s3 +3s2 +s−4 −s +9s2 +10s4 10s6 +19s4 +17s3 +3s2 +s−4 −2s−12s2 10s6 +19s4 +17s3 +3s2 +s−4 −1+9s+10s3 10s6 +19s4 +17s3 +3s2 +s−4 −2−12s 10s6 +19s4 +17s3 +3s  2 +s−4



(s) =

10s6 +19s4 +17s3 +3s2 +s−4 2−3s+18s2 +16s3 +18s5 10s6 +19s4 +17s3 +3s2 +s−4 −11s−5s2 10s6 +19s4 +17s3 +3s2 +s−4 18s+10s2 +10s4 10s6 +19s4 +17s3 +3s2 +s−4 −11−5s 10s6 +19s4 +17s3 +3s2 +s−4 18+10s+10s3 10s6 +19s4 +17s  3 +3s2 +s−4

1 0 0 0 0 0 0 2 0 0 0 0

T

T                  

     −                      

=⇒

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 90 — #101

i

90

i

CHAPTER 6. IIO SYSTEMS GIIOR0 (s) = 

    =    

4−3s−18s3 −s6 −4+s+3s2 +17s3 +19s4 +10s6 −2s−11s3 −2s6 −4+s+3s2 +17s3 +19s4 +10s6 −s2 +9s3 +10s5 −4+s+3s2 +17s3 +19s4 +10s6 −2s2 −12s3 −4+s+3s2 +17s3 +19s4 +10s6 −s+9s2 +10s4 −4+s+3s2 +17s3 +19s4 +10s6 −2s−12s2 −4+s+3s2 +17s3 +19s4 +10s6

6.2

−6s+18s2 −4s3 +12s4 −4+s+3s2 +17s3 +19s4 +10s6 10−5s+12s2 −18s3 −38s4 +18s5 −20s6 −4+s+3s2 +17s3 +19s4 +10s6 −11s−5s2 −4+s+3s2 +17s3 +19s4 +10s6 18s+10s2 +10s4 −4+s+3s2 +17s3 +19s4 +10s6 −11−5s −4+s+3s2 +17s3 +19s4 +10s6 18+10s+10s3 −4+s+3s2 +17s3 +19s4 +10s6

T      . (6.59)    

IIO plant desired regime

We adjust Definition 44, (Section 3.2), to the IIO plant (6.1), (6.2): h i α−1 Definition 74 A functional control-state pair U*(.), R* (.) is nominal for the IIO plant (6.1), (6.2) relative to the functional vec  α−1 tor pair [D(.), Yd (.)], which is denoted by UN (.), RN (.) , if and only i   h α−1 α−1 if I(.), R (.) = I*(.), R* (.) ensures that the corresponding real response Y(.) = Y*(.) of the system obeys Y*(t) = Yd (t) all the time as soon ν−1 , as Y0ν−1 = Yd0 h i   α−1 I*(.), R* (.) = IN (.), Rα−1 N (.) ⇐⇒

ν−1 =⇒ Y*(t) = Yd (t), ∀t ∈ T0 . ⇐⇒ Y0ν−1 = Yd0 Let ( w1 (s) =

(µ)

(µ−1)

−D(µ) Sd (s)D(s) + D(µ) Zd (s)Dµ−1 + 0 µ−1 α−1 (µ−1) (α) (α−1) (µ) ∗ ∗ +B Zr (s)U0 − AP Z ρ (s)R0

) ,

(6.60)

 (ν) (ν−1) ν−1 E (ν) SN (s)Yd (s) − E (ν) ZN (s)Yd0 +   α−1 (α−1) (α−1) (µ) w2 (s) = . +Ry Zρ (s)R∗0 − V (µ) Sd (s)D(s)+   µ−1   (µ−1) (ν−1) µ−1 (µ) (µ) ∗ +V Zd (s)D0 + U Zr (s)U0

(6.61)

  

Definition 74 and the plant description (6.1), (6.2) imply the following: h i α−1 Theorem 75 In order for a functional vector pair U*(.), R* (.) to be nominal for the IIO plant (6.1), (6.2) relative to the functional vector pair [D(.), Yd (.)], h i   α−1 U*(.), R* (.) = UN (.), Rα−1 N (.) ,

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 91 — #102

i

i

6.3. EXERCISES

91

it is necessary and sufficient that it obeys the following equations: µ

α

B (µ) U* (t) − A(α) R* (t) = −D(µ) Dµ (t), ∀t ∈ T0 , µ

α

U (µ) U* (t) + R(α) R* (t) = E (ν) Ydν (t) − V (µ) Dµ (t), ∀t ∈ T0 , or equivalently, " #    (µ) (α) B (µ) Sr (s) −A(α) Sρ (s) U*(s) w1 (s) = . (µ) (α) R*(s) w2 (s) U (µ) Sr (s) R(α) Sρ (s)

(6.62) (6.63)

(6.64)

What are the conditions for the existence of the solutions of the equations (6.62), (6.63), i.e., of (6.64)? There are (r+ρ) unknown variables and (N +ρ) equations. The unknown variables are the entries of U*(s) ∈ Cr and of R*(s) ∈ Cρ . Claim 76 In order to exist a nominal functional vector pair   UN (.), Rα−1 N (.) for the IIO plant (6.1), (6.2) relative to the functional pair [D(.), Yd (.)] it is necessary and sufficient that N ≤ r. The proof of this claim follows the proof of Claim 46 (Section 3.2). Condition 77 The desired output response of the IIO plant (6.1), (6.2) is realizable, i.e., N ≤ r. The nominal control-state pair UN (.), Rα−1 N (.) is known. The time domain description of the IIO plant in terms of the deviations reads: A(α) rα (t) = D(µ) dµ (t) + B (µ) uµ (t), ∀t ∈ T0 , E

6.3

(ν) ν

y (t) =

Ry(α−1) rα−1 (t)

+V

(µ) µ

d (t) + U

(µ) µ

u (t), ∀t ∈ T0 .

(6.65) (6.66)

Exercises

Exercise 78 1. Select a physical IIO plant. 2. Determine its time domain IIO mathematical model.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 92 — #103

i

i

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 93 — #104

i

i

Part II

OBSERVABILITY

93 i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 94 — #105

i

i

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 95 — #106

i

i

Chapter 7

Mathematical preliminaries 7.1

Linear independence and matrices

This section gives a short outline on linear independence of (column and row, respectively) vectors (ki ∈ Rν and ci ∈ Cν , i = 1, 2, ..., µ, rj ∈ R1×µ and zj ∈ C1×µ , j = 1, 2, ..., ν). The brief reminder of the linear (in)dependence and of the rank of the matrices is helpful to emphasize the subtle but crucial differences between the matrices and matrix functions. Definition 79 Linear independence and dependence of vectors A) Real valued µ vectors ki ∈ Rν are linearly independent if and only if their linear combination (7.1), α1 k1 + α2 k2 + ... + αµ kµ

(7.1)

α1 k1 + α2 k2 + ... + αµ kµ = 0ν

(7.2)

vanishes, i.e., only for zero values of all the real valued scalars α1 , α2 , ..., αµ , i.e., α1 k1 + α2 k2 + ... + αµ kµ = 0ν ⇐⇒ α1 = α2 =... = αµ = 0.

(7.3)

Otherwise, i.e., if and only if (7.2) holds for at least one nonzero real number αk , i.e., ∃ (αk 6= 0) ∈ R, k ∈ {1, 2, ..., µ} =⇒ α1 k1 + α2 k2 + ... + αµ kµ = 0ν (7.4) real valued µ vectors ki ∈ Rν are linearly dependent.

95 i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 96 — #107

i

96

i

CHAPTER 7. MATHEMATICAL PRELIMINARIES

Real valued ν vectors rj ∈ R1×µ are linearly independent if and only if their linear combination (7.5), b1 r1 + b2 r2 + ... + bν rν

(7.5)

b1 r1 + b2 r2 + ... + bν rν = 0Tµ

(7.6)

vanishes, i.e., only for zero values of all the real valued scalars b1 , b2 , ..., bν , i.e., b1 r1 + b2 r2 + ... + bν rν = 0Tµ ⇐⇒ b1 = b2 =... = bν = 0.

(7.7)

Otherwise, i.e., if and only if (7.5) holds for at least one nonzero real number bm , i.e., ∃ (bm 6= 0) ∈ R, m ∈ {1, 2, ..., µ} =⇒ b1 r1 + b2 r2 + ... + bν rν = 0Tµ

(7.8)

real valued ν vectors rj ∈ R1×µ are linearly dependent. B) Complex valued µ vectors ci ∈ Cν are linearly independent if and only if their linear combination (7.9), γ1 c1 + γ2 c2 + ... + γµ cµ

(7.9)

γ1 c1 + γ2 c2 + ... + γµ cµ = 0ν

(7.10)

vanishes, i.e., only for zero values of all complex valued scalars γ1 , γ2 , ..., γµ , i.e., γ1 c1 + γ2 c2 + ... + γµ cµ = 0ν ⇐⇒ γ1 = γ2 =... = γµ = 0.

(7.11)

Otherwise, i.e., if and only if (7.10) holds for at least one nonzero complex number γj , i.e., ∃ (γj 6= 0) ∈ C, j ∈ {1, 2, ..., µ} =⇒ γ1 c1 + γ2 c2 + .. + γj cj + . + γµ cµ = 0ν ,

(7.12)

complex valued µ vectors ci ∈ Cν are linearly dependent. Complex valued ν vectors zj ∈ C1×µ are linearly independent if and only if their linear combination (7.13), γ1 z1 + γ2 z2 + ... + γν zν

(7.13)

γ1 z1 + γ2 z2 + ... + γν zν = 0Tµ

(7.14)

vanishes, i.e.,

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 97 — #108

i

7.2. MATRIX RANGE, NULL SPACE, AND RANK

i

97

only for zero values of all complex valued scalars γ1 , γ2 , ..., γν , i.e., γ1 z1 + γ2 z2 + ... + γν zν = 0Tµ ⇐⇒ γ1 = γ2 =... = γν = 0.

(7.15)

Otherwise, i.e., if and only if (7.6) holds for at least one nonzero complex number γl , i.e., ∃ (γl 6= 0) ∈ R, l ∈ {1, 2, ..., µ} =⇒ γ1 z1 + γ2 z2 + ... + γν zν = 0Tµ

(7.16)

complex valued ν vectors zj ∈ C1×µ are linearly dependent. We can form a matrix K ∈ Rν×µ of µ (column) vectors ki ∈ Rν and the matrix C ∈ Cν×µ of µ (column) vectors ci ∈ Cν :     K = k1 k2 ... kµ , C = c1 c2 ... cµ . (7.17) Let



   k1i c1i ki =  :  and ci =  :  , i = 1, 2, ..., µ, kνi cνi

then, e.g., for ν < µ :   k11 k12 ... k1i ... k1µ : : ... : : : ... : :  ∈ Rν×µ , K= : kν1 kν2 ... kνi ... kνµ  c11 c12 ... c1i ... c1µ : : ... : : : ... : :  ∈ Cν×µ . C= : cν1 cν2 ... cνi ... cνµ

(7.18)

(7.19)



(7.20)

Equations (7.17) through (7.20) establish the link between the vectors and the matrices. The vectors ki and ci represent the i−th column of the matrices K and C, respectively, for every i = 1, 2, .., µ. The number ν is the number of the rows of the matrices K and C, and the number µ is the number of the columns of the matrices K and C.

7.2

Matrix range, null space, and rank

Let a matrix M ∈ {Rν×µ , Cν×µ } . The following sets are determined by the properties of the matrix M :

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 98 — #109

i

98

i

CHAPTER 7. MATHEMATICAL PRELIMINARIES

The set denoted by R(M ) called the range of the matrix M is the set of all vectors y ∈ {Rν , Cν } for which there exists a vector x ∈ {Rµ , Cµ } such that y =M x, R (M ) = {y :∃x ∈ {Rµ , Cµ } =⇒ y =M x ∈ {Rν , Cν }} ⊆ .Rν . The dimension denoted by dimR(M ) of R(M ) is the largest number of linearly independent vectors in R(M ) such that every vector in R(M ) can be represented as their linear combination. The set denoted by N (M ) called the null space of the matrix M is the set of all vectors x ∈ {Rµ , Cµ } such that M x = 0ν , N (M ) = {x ∈ {Rµ , Cµ } : M x = 0ν } ⊆ Rµ . The dimension of N (M ) denoted by dimN (M ) and called the nullity of M is the largest number of linearly independent vectors in N (M ) such that every vector in N (M ) can be represented as their linear combination. dimR(M ) and dimN (M ) obey dimR (M ) + dimN (M ) = µ. Definition 80 Rank of a matrix The following definitions of the rank of a ν×µ matrix M ∈ {Rν×µ , Cν×µ } are equivalent: The rank of a ν × µ matrix M ∈ {Rν×µ , Cν×µ } is the number ρ such that its minor Mρ of the order ρ is nonzero and every its minor Mk of the order k bigger than ρ is zero,   Mk 6= 0, ∀k = 1, 2, ..., ρ, rank M = ρ ⇐⇒ (7.21) Mk 6= 0, ∀k = ρ + 1, ρ + 2, ..., min (ν, µ) . The rank of a ν × µ matrix M ∈ {Rν×µ , Cν×µ } is the dimension ρ of the largest nonsingular square submatrix of the matrix M . The rank of a ν × µ matrix M ∈ {Rν×µ , Cν×µ } is the maximal number ρ of the linearly independent rows of the matrix M . This is also called the row rank of the matrix M . The rank of a ν × µ matrix M ∈ {Rν×µ , Cν×µ } is the maximal number ρ of the linearly independent columns of the matrix M . This is also called the column rank of the matrix M . Claim 81 Maximal numbers of linearly independent columns, of linearly independent rows and rank of a ν × µ matrix M

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 99 — #110

i

7.2. MATRIX RANGE, NULL SPACE, AND RANK

i

99

The maximal number nir max of linearly independent rows, the maximal number nic max of linearly independent columns, and the rank ρ of ν × µ matrix M ∈ {Rν×µ , Cν×µ } are all equal and are not bigger than min(ν, µ) : nic max = nir max = rankM = ρ ≤ min (ν, µ) .

(7.22)

By applying the elementary transformations to any ν × µ matrix M ∈ it can be transformed into its equivalent (˜) matrix of the following structure (e.g., [2, pp. 107, 108]):   1 0 0 0  0 1 0 0   M˜   0 0 1 0  =⇒ 0 0 0 0

{Rν×µ , Cν×µ }

=⇒ nic max = nir max = rankM = 3.

(7.23)

This explains and illustrates Claim 81. Example 82 Let  2 1 M =  −2 −1  ∈ R3×2 =⇒ ν = 3, µ = 2. 4 3 

2,2 composed of the entries of the second row and the third row The minor M3,2 of the matrix M is nonsingular, which implies rank M = 2 because there is not a third order minor of the matrix M : −2 −1 2,2 = (−2) 3 − (−1) 4 = −2 6= 0 =⇒ rankM = 2, M3,2 = 4 3

In order to determine the number of linearly independent columns by definition we test the existence of at least one nonzero number between α1 and α2 such that the linear combination of the columns of the matrix M vanishes:           2 1 2 1 0 α 1 α1  −2  + α2  −1  =  −2 −1  =  0  ⇐⇒ α2 4 3 4 3 0 2α1 + α2 = 0 α2 = −2α1 ⇐⇒ −2α1 − α2 = 0 ⇐⇒ α2 = −2α1 ⇐⇒ α1 = α2 = 0. α2 = − 34 α1 4α1 + 3α2 = 0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 100 — #111

i

100

i

CHAPTER 7. MATHEMATICAL PRELIMINARIES

The linear combination of the columns of the matrix M vanishes only if the numbers α1 and α2 are equal to zero. The number of linearly independent rows is 2, and the number of linearly independent columns is 2. We apply the sequence of elementary transformations to the matrix M in order to verify its rank:         2 1 2 1 2 2 2 2 M =  −2 −1  ˜  −2 −1  ˜  −2 −2  ˜  0 0  ˜ 4 3 −2 0 −2 0 −2 0 

       2 2 1 1 0 1 1 0 ˜  −2 0  ˜  1 0  ˜  1 0  ˜  0 1  =⇒ rankM = 2. 0 0 0 0 0 0 0 0 The number of linearly independent columns is 2, and the number of linearly independent rows is also 2. They are equal to the rank of M . Lemma 83 Linear dependence of columns/rows of the matrix K 1) In order for columns ki ∈ Rν , i = 1, 2, ..., µ, of the matrix K (7.19) to be linearly dependent it is necessary and sufficient that there exists an µ × 1  nonzero vector a= [α1 α2 ...αµ ]T , a 6= 0µ ∈ Rµ , such that the product Ka is zero vector: Linear dependence of the columns ki (.) of K (.) (7.19)   ⇐⇒ ∃ a 6= 0µ ∈ Rµ =⇒ Ka = 0ν .

(7.24)

2) In order for rows ri , ri ∈ R1×µ , i = 1, 2, ..., ν, of the matrix K (7.19) to be linearly dependent it is necessary and sufficient that there exists an 1 × ν nonzero vector g = [γ1 γ2 ...γν ] , g 6= 0Tν ∈ R1×ν , such that the product gK is zero vector: Linear dependence of the rows ri (.) of K (7.19)   ⇐⇒ ∃ g 6= 0Tν ∈ R1×ν =⇒ gK = 0Tµ .

(7.25)

3) The statements under 1) and 2) analogously hold for the matrix C ∈ Cν×µ . Proof. 1) The necessity and sufficiency follow directly from A-1) of Definition 79 when (7.4) is written in the vector-matrix form: Ka = 0ν f or a =



α1 ... αµ

T

,

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 101 — #112

i

7.2. MATRIX RANGE, NULL SPACE, AND RANK

i

101

and from Definition 80 of the matrix rank in terms of the linear dependence of the columns of the matrix K (7.19). 2) The necessity and sufficiency follow directly from A-2) of Definition 79 when (7.8) is written in the vector-matrix form:   bK = 0Tµ f or b = b1 ... bν , and from Definition 80 of the matrix rank in terms of the linear dependence of the rows of the matrix K (7.19). 3) The proof of 3) is literally analogous to the proofs of 1) and 2). Lemma 84 Linear independence of column vectors of a matrix For the columns ki (.) , i = 1, 2, ..., µ, of the matrix K (7.19) to be linearly independent it is necessary and sufficient that for every µ × 1 nonzero vector a,  a = [α1 α2 ...αµ ]T , a 6= 0µ ∈ Rµ , the product Ka is nonzero vector: Linear independence of the columns ki (.) of K (7.19)  ⇐⇒ ∀ a 6= 0µ ∈ Rµ =⇒ Ka 6= 0µ .

(7.26)

The preceding statement analogously holds for the matrix C (7.20), i.e., for its columns ci ∈ Cν×1 , i = 1, 2, ..., µ. Proof. The statement of this lemma follows directly from (7.3) written in the vector-matrix form. Lemma 85 Linear independence of row vectors of a matrix For the rows ri , i = 1, 2, ..., ν, of the matrix K (7.19) to be linearly independent it is necessary  and sufficient that for every 1 × ν nonzero vector g = [γ1 γ2 ...γν ] , g 6= 0Tν ∈ R1×ν , the product gK is nonzero vector: Linear independence of the rows ri (.) of K (7.19)  ⇐⇒ ∀ g 6= 0Tν ∈ R1×ν =⇒ gK 6= 0Tν .

(7.27)

The preceding statement analogously holds for the matrix C (7.20), i.e., for its rows zi ∈ C1×µ , i = 1, 2, ..., ν. Proof. The statement of this lemma follows directly from (7.7) written in the vector-matrix form.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 102 — #113

i

102

i

CHAPTER 7. MATHEMATICAL PRELIMINARIES

Theorem 86 Linear independence of the rows of a matrix product Let M ∈ Cp×p be nonsingular, detM 6= 0, and N ∈ Cp×q , where p and q are natural numbers. Let k be a natural number not bigger than min (p, q) , 0 < k ≤ min (p, q) . a) In order for k rows of the matrix product M N to be linearly independent it is necessary and sufficient that k rows of the matrix N are linearly independent. b) In order for all p rows of the matrix product M N to be linearly independent it is necessary and sufficient that all p rows of the matrix N are linearly independent, equivalently that rankN = p ≤ q. Proof. Let M (t) ∈ Cp×p be nonsingular at every t ∈ T0 , detM (t) 6= 0, ∀t ∈ T0 , which implies the linear independence of its rows (and of its columns) and rankM (t) = p at every t ∈ T0 . Let N ∈ Cp×q , where p and q are natural numbers. Let any τ ∈ T0 be chosen and fixed. The condition detM (t) 6= 0, ∀t ∈ T0 , implies detM (τ ) 6= 0. Necessity. a) Let k be a natural number not bigger than p, 0 < k ≤ p. Let rankM (t) N = k at every t ∈ T0 . Let at most k − 1 rows of the matrix N be assumed linearly independent. Then rankN ≤ k − 1. Since M (t) is nonsingular at every t ∈ T0 , its rankM (τ ) = p 6= 0. The assumption that rankN ≤ k − 1 and the well-known rule on the rank of the matrix product of a nonsingular matrix M (τ ) multiplying the rectangular matrix N, which reads rankM (τ ) N = rankN, yield rankM (τ ) N = rankN ≤ k − 1. This contradicts rankM (τ ) N = k. The contradiction is the result of the assumption on the linear independence of at most k − 1 rows of the matrix N . The assumption fails, which implies that k rows of the matrix N are linearly independent, i.e.,, rankN = k. b) The necessity of the statement under b) results from the necessity of a) for k = p. Sufficiency. a) Let rankN = k. The well-known rule on the rank of the matrix product in which the premultiplying matrix M is nonsingular ensures rankM (τ ) N = rankN = k. This implies the linear independence of the k rows of the matrix product M (τ ) N . b) The sufficiency of the statement under b) results from the sufficiency of a) for k = p, which completes the proof by recalling an arbitrary choice of τ ∈ T0 .


Example 87 Let
$$ M = \begin{bmatrix} 1 & 3 \\ 2 & 2 \end{bmatrix}, \quad N = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad \operatorname{rank} M = 2, \ \operatorname{rank} N = 1, $$
and
$$ M N = \begin{bmatrix} 3 \\ 2 \end{bmatrix} \Longrightarrow \operatorname{rank} M N = 1 = \operatorname{rank} N. $$

This agrees with Theorem 86.

Example 88 Let
$$ M = \begin{bmatrix} 1 & 0 & 3 \\ 6 & 7 & 11 \\ 0 & 9 & 0 \end{bmatrix}, \quad N = \begin{bmatrix} 2 & 4 & 7 & 1 \\ 3 & 3 & 6 & 8 \\ 4 & 8 & 14 & 2 \end{bmatrix}, \quad \operatorname{rank} M = 3, \ \operatorname{rank} N = 2, $$
and
$$ M N = \begin{bmatrix} 1 & 0 & 3 \\ 6 & 7 & 11 \\ 0 & 9 & 0 \end{bmatrix} \begin{bmatrix} 2 & 4 & 7 & 1 \\ 3 & 3 & 6 & 8 \\ 4 & 8 & 14 & 2 \end{bmatrix} = \begin{bmatrix} 14 & 28 & 49 & 7 \\ 77 & 133 & 238 & 84 \\ 27 & 27 & 54 & 72 \end{bmatrix} \Longrightarrow \operatorname{rank} M N = 2. $$
This verifies Theorem 86.

Note 89 Theorem 86 fails in the framework of matrix functions (Note 103 in the sequel).

Comment 90 If a matrix M ∈ {Rν×µ, Cν×µ}:
Has linearly independent columns, then ν ≥ µ,
Has linearly independent rows, then ν ≤ µ,
Has all columns linearly independent, then rankM = µ ≤ ν,
Has all rows linearly independent, then rankM = ν ≤ µ.

Definition 91 Pseudo inverse matrix of a given matrix
If a matrix M ∈ {Rν×µ, Cν×µ}, then:
A matrix L ∈ {Rµ×ν, Cµ×ν} is its left pseudo inverse matrix, denoted by ML†, L = ML†, if and only if LM = Iµ,
A matrix R ∈ {Rµ×ν, Cµ×ν} is its right pseudo inverse matrix, denoted by MR†, R = MR†, if and only if MR = Iν.
Comment 90 and Definition 91 imply:


Lemma 92 Pseudo inverse matrix and matrix rank
If a matrix M ∈ {Rν×µ, Cν×µ}:
Has the full column rank, rankM = µ ≤ ν, then it has the left pseudo inverse matrix ML†:
$$ M_{L}^{\dagger} = \left( M^{T} M \right)^{-1} M^{T} \in \mathbb{R}^{\mu \times \nu}, \tag{7.28} $$
Has the full row rank, rankM = ν ≤ µ, then it has the right pseudo inverse matrix MR†:
$$ M_{R}^{\dagger} = M^{T} \left( M M^{T} \right)^{-1} \in \mathbb{R}^{\mu \times \nu}, \tag{7.29} $$
Is right (left) invertible, then its rows (columns) are linearly independent, respectively,
Has both the left and the right pseudo inverse, then ν = µ, the matrix M is square and nonsingular, and its pseudo inverse matrices are equal to its inverse matrix M−1:
$$ M \in \left\{ \mathbb{R}^{\nu \times \mu}, \mathbb{C}^{\nu \times \mu} \right\}: \ \exists M_{L}^{\dagger}, \ \exists M_{R}^{\dagger} \implies \nu = \mu, \ M_{L}^{\dagger} = M_{R}^{\dagger} = M^{-1}. $$
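A minimal NumPy sketch of the pseudo inverse formulas (7.28) and (7.29); the matrix M here is an assumed full-column-rank example, not one taken from the book:

```python
import numpy as np

# Left pseudo inverse (7.28) for a full-column-rank M (nu >= mu).
M = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])                      # 3x2, rank 2 = mu

ML = np.linalg.inv(M.T @ M) @ M.T               # (M^T M)^{-1} M^T, shape 2x3
print(np.allclose(ML @ M, np.eye(2)))           # True: ML M = I_mu

# Right pseudo inverse (7.29) for a full-row-rank matrix (here M^T).
R = M.T                                         # 2x3, rank 2 = nu
RR = R.T @ np.linalg.inv(R @ R.T)               # R^T (R R^T)^{-1}, shape 3x2
print(np.allclose(R @ RR, np.eye(2)))           # True: R RR = I_nu
```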

7.3 Linear independence and scalar functions

The definitions of the linear (in)dependence of constant vectors and of constant matrices are not directly applicable to the linear (in)dependence of scalar functions.

Definition 93 Linear dependence and independence of scalar functions
A) Real valued scalar functions κi(.) : T −→ R, i = 1, 2, ..., µ, are:
A-1) Linearly dependent on T0 if and only if there exist real numbers αi, i = 1, 2, ..., µ, at least one of which is nonzero,
$$ |\alpha_1| + |\alpha_2| + \cdots + |\alpha_\mu| \neq 0, \tag{7.30} $$
such that the linear combination (7.31),
$$ \alpha_1 \kappa_1(t) + \alpha_2 \kappa_2(t) + \cdots + \alpha_\mu \kappa_\mu(t), \tag{7.31} $$
of the scalar functions κi(.) vanishes at every t ∈ T0,
$$ \alpha_1 \kappa_1(t) + \alpha_2 \kappa_2(t) + \cdots + \alpha_\mu \kappa_\mu(t) = 0, \quad \forall t \in T_0. \tag{7.32} $$


A-2) Linearly independent on T0 if and only if their linear combination (7.31) vanishes at every t ∈ T0 only when all numbers αi are equal to zero,
$$ \forall t \in T_0: \ \alpha_1 \kappa_1(t) + \alpha_2 \kappa_2(t) + \cdots + \alpha_\mu \kappa_\mu(t) = 0 \iff |\alpha_1| + |\alpha_2| + \cdots + |\alpha_\mu| = 0. \tag{7.33} $$

B) Complex valued scalar functions ξi(.) : C −→ C, i = 1, 2, ..., µ, are:
B-1) Linearly dependent on C if and only if there exist complex numbers βi, i = 1, 2, ..., µ, at least one of which is nonzero,
$$ |\beta_1| + |\beta_2| + \cdots + |\beta_\mu| \neq 0, \tag{7.34} $$
such that the linear combination (7.35) of the functions ξi(.),
$$ \beta_1 \xi_1(s) + \beta_2 \xi_2(s) + \cdots + \beta_\mu \xi_\mu(s), \tag{7.35} $$
vanishes for every s ∈ C,
$$ \beta_1 \xi_1(s) + \beta_2 \xi_2(s) + \cdots + \beta_\mu \xi_\mu(s) = 0, \quad \forall s \in \mathbb{C}. \tag{7.36} $$

B-2) Linearly independent on C if and only if their linear combination (7.35) vanishes at every s ∈ C only when all complex numbers βi are equal to zero,
$$ \forall s \in \mathbb{C}: \ \beta_1 \xi_1(s) + \beta_2 \xi_2(s) + \cdots + \beta_\mu \xi_\mu(s) = 0 \iff |\beta_1| + |\beta_2| + \cdots + |\beta_\mu| = 0. \tag{7.37} $$

Example 94 Let κ1(t) = 1, κ2(t) = 2t and κ3(t) = et. Their linear combination satisfies
$$ \alpha_1 \kappa_1(t) + \alpha_2 \kappa_2(t) + \alpha_3 \kappa_3(t) = \alpha_1 + 2\alpha_2 t + \alpha_3 e^{t} = 0, \ \forall t \in T_0 \iff \alpha_1 = \alpha_2 = \alpha_3 = 0. $$
The functions κ1(.), κ2(.) and κ3(.) are linearly independent on T0.

Example 95 Let κ1(t) = t, κ2(t) = 2t and κ3(t) = −5t. Their linear combination
$$ \alpha_1 t + 2\alpha_2 t - 5\alpha_3 t = (\alpha_1 + 2\alpha_2 - 5\alpha_3)\, t = 0, \ \forall t \in T_0, $$
vanishes at every t ∈ T0 whenever α1 = −2α2 + 5α3, for arbitrary α2, α3 ∈ R not both zero. The functions κ1(.), κ2(.) and κ3(.) are linearly dependent on T0.
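A short SymPy sketch of this kind of check (a sketch only, SymPy assumed): the functions are sampled at a few time instants and the resulting matrix is tested for full column rank, which certifies linear independence; a persistent rank drop is consistent with dependence, as in Example 95.

```python
import sympy as sp

t = sp.symbols('t', real=True)

def independent_on_samples(funcs, samples):
    """Full column rank of the sampled-value matrix certifies linear independence
    of the functions; a rank drop at the chosen samples is consistent with dependence."""
    A = sp.Matrix([[f.subs(t, tau) for f in funcs] for tau in samples])
    return A.rank() == len(funcs)

print(independent_on_samples([sp.Integer(1), 2*t, sp.exp(t)], [0, 1, 2]))  # True  (Example 94)
print(independent_on_samples([t, 2*t, -5*t], [0, 1, 2]))                   # False (Example 95)
```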


Definition 96 Wronskian of (n-1) times differentiable scalar functions
Let n scalar functions κi(.) : R −→ R be (n-1) times differentiable on a connected subset T∗ of T with the nonempty interior, InT∗ ≠ φ, κi(t) ∈ Cn−1(T∗), for every i = 1, 2, ..., n. Their Wronskian W(t; κ1, κ2, ..., κn) at t ∈ T∗ is the determinant whose first row contains the values of the functions κi(.), i = 1, 2, ..., n, at t ∈ T∗, and every next row is the elementwise derivative of the preceding row at t ∈ T∗, ending with the (n-1)-th derivatives:
$$ W(t; \kappa_1, \kappa_2, \ldots, \kappa_n) = \begin{vmatrix} \kappa_1(t) & \kappa_2(t) & \cdots & \kappa_n(t) \\ \kappa_1^{(1)}(t) & \kappa_2^{(1)}(t) & \cdots & \kappa_n^{(1)}(t) \\ \vdots & \vdots & \ddots & \vdots \\ \kappa_1^{(n-1)}(t) & \kappa_2^{(n-1)}(t) & \cdots & \kappa_n^{(n-1)}(t) \end{vmatrix}. \tag{7.38} $$

The following theorem is very useful for testing the linear independence of the functions κi(.), i = 1, 2, ..., n:

Theorem 97 Wronskian of n (n-1) times differentiable scalar functions and their (in)dependence
Let n scalar functions κi(.) : R −→ R be (n-1) times differentiable on a connected subset T∗ of T with the nonempty interior, InT∗ ≠ φ, κi(t) ∈ Cn−1(T∗), for every i = 1, 2, ..., n.
1) For the functions κi(t) ∈ Cn−1(T∗), i = 1, 2, ..., n, to be linearly independent on the set T∗ it is sufficient that there is τ ∈ T∗ such that their Wronskian (7.38) is different from zero at t = τ:
$$ \exists \tau \in T^{*}: \ W(\tau; \kappa_1, \kappa_2, \ldots, \kappa_n) \neq 0. \tag{7.39} $$

2) If the functions κi(t) ∈ Cn−1(T∗), i = 1, 2, ..., n, are linearly dependent on the set T∗, then their Wronskian is equal to zero at every t ∈ T∗.
Proof. Let n scalar functions κi(.) : R −→ R be (n-1) times differentiable on a connected subset T∗ of T with the nonempty interior, InT∗ ≠ φ, κi(t) ∈ Cn−1(T∗), for every i = 1, 2, ..., n.
1) Assume the statement is false, i.e., let there be τ ∈ T∗ such that (7.39) holds while the functions κi(.), i = 1, 2, ..., n, are linearly dependent. This means that there is a nonzero vector v,
$$ (v \neq 0_n) \in \mathbb{R}^{n}, \quad v = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}^{T}, \tag{7.40} $$


107

such that κ1 (t) v1 + κ2 (t) v1 + .. + κn (t) v1 = 0, ∀t ∈ T∗ . After differentiating this equation (n − 1) times the following system of the equations results: κ1 (t) v1 + κ2 (t) v1 + ... + κn (t) v1 = 0, ∀t ∈ T∗ (1)

(1)

∗ κ1 (t) v1 + κ2 (t) v1 + ... + κ(1) n (t) v1 = 0, ∀t ∈ T

... (n−1) κ1 (t) v1

+

(n−1) κ2 (t) v1

+ ... + κ(n−1) (t) v1 = 0, ∀t ∈ T∗ , n

or in the matrix-vector form: V (t) v = 0n , ∀t ∈ T∗ ,

(7.41)

where    V (t) =  

κ1 (t) κ2 (t) (1) (1) κ1 (t) κ2 (t) ... ... (n−1) (n−1) κ1 (t) κ2 (t)

... κn (t) (1) ... κn (t) ... ... (n−1) ... κn (t)

    , ∀t ∈ T∗ . 

Since the solution vector v of the homogeneous linear algebraic equation (7.41) is a nonzero vector, the matrix V(t) is singular on T∗, detV(t) = 0, ∀t ∈ T∗, which contradicts the condition (7.39) because
$$ \det V(t) = W(t; \kappa_1, \kappa_2, \ldots, \kappa_n), \quad \forall t \in T^{*}, \tag{7.42} $$

and proves the statement under 1).
2) If the functions κi(t) ∈ Cn−1(T∗), i = 1, 2, ..., n, are linearly dependent on the set T∗, then there is a nonzero vector v, (7.40), that obeys (7.41), which implies detV(t) = 0, ∀t ∈ T∗. This and (7.42) prove the statement under 2).

7.4 Linear independence and matrix functions

We introduce the matrix function R (.) : T −→ Rν×µ of µ column vector functions κi (.) : T −→ Rν , i = 1, 2, ..., µ, and the matrix function C (.) :

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 108 — #119

i

108

i

CHAPTER 7. MATHEMATICAL PRELIMINARIES

C −→ Cν×µ of µ column vector functions ξi (.) : C −→ Cν , i = 1, 2, ..., µ :   R (t) = κ1 (t) κ2 (t) ... κµ (t) ,   C (s) = ξ1 (s) ξ2 (s) ... ξµ (s) . (7.43) Let 

   κ1i (t) ξ1i (s)  and ξi (s) =   , i = 1, 2, ..., µ, : : κi (t) =  κνi (t) ξνi (s)

(7.44)

The vector functions κi (.) : T −→ Rν and ξi (.) : C −→ Cν represent the i − th column of the matrix functions R (.) : T −→ Rν×µ and C (.) : C −→ Cν×µ , respectively, for every i = 1, 2, .., µ. The number ν is the number of the rows of the matrix functions R (.) and C (.), and the number µ is the number of their columns. The matrix functions R (.) and C (.) can be represented also in terms of their row vector functions ρk (.) : T −→ R1×µ and ζk (.) : C −→ C1×µ , k = 1, 2, ..., ν, respectively, e.g., for ν < µ :     ρ11 (t) ... ρ1i (t) ... ρ1µ (t) ρ1 (t)  =  : , : : ... : : : ... : : R (t) =  ρν1 (t) ... ρνi (t) ... ρνµ (t) ρν (t)   1×µ ρk (t) = ρk1 (t) ... ρki (t) ... ρkµ (t) ∈ R , k = 1, 2, ..., ν, (7.45)  ζ1 (s) ζ11 (s) ... ζ1i (s) ... ζ1µ (s)    =  ζ2 (s)  , : : ... : : : ... : : C (s) =   :  ζν1 (s) ... ζνi (s) ... ζνµ (s) ζν (s)   ζk (s) = ζk1 (s) ... ζki (s) ... ζkµ (s) ∈ C1×µ , k = 1, 2, ..., ν. (7.46) 





Definition 79 does not specify the linear independence of (column) vector functions κi (.) : T −→ Rν or ξi (.) : C −→ Cν , i = 1, 2, ..., µ. Definition 98 Linear dependence and independence of vector functions and of rows/columns of a matrix function A) Real valued row vector functions ρi (.) : T −→ R1×µ , i = 1, 2, ..., ν, which are rows of a matrix function R (.) : T −→ Rν×µ , are: A-1) Linearly dependent on T0 if and only if their exist real numbers γi , i = 1, 2, ..., ν, at least one of which is nonzero, |γ1 | + |γ2 | + ... + |γν | 6= 0,

(7.47)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 109 — #120

i

7.4. LINEAR INDEPENDENCE AND MATRIX FUNCTIONS

i

109

such that the linear combination (7.48), γ1 ρ1 (t) + γ2 ρ2 (t) + ... + γν ρν (t)

(7.48)

of the vector functions ρi (.) vanishes at every t ∈ T0 , γ1 ρ1 (t) + γ2 ρ2 (t) + ... + γν ρν (t) = 0µ , ∀t ∈ T0 .

(7.49)

A-2) Linearly independent on T0 if and only if their linear combination (7.48) vanishes at every t ∈ T0 if and only if all numbers γi are equal to zero, ∀t ∈ T0 : γ1 ρ1 (t) + γ2 ρ2 (t) + ... + γν ρν (t) = 0µ ⇐⇒ ⇐⇒ |γ1 | + |γ2 | + ... + |γν | = 0.

(7.50)

B) Complex valued row vector functions ζi (.) : C −→ Cν , i = 1, 2, ..., ν, which are rows of a matrix function C (.) : C −→ Cν×µ , are: B-1) Linearly dependent on C if and only if there exist complex numbers θi , i = 1, 2, ..., ν, at least one of which is nonzero, |θ1 | + |θ2 | + ... + |θν | 6= 0,

(7.51)

such that the linear combination (7.52), θ1 ζ1 (s) + θ2 ζ2 (s) + ... + θν ζν (s)

(7.52)

of the vector functions ζi (.) vanishes at every s ∈ C, θ1 ζ1 (s) + θ2 ζ2 (s) + ... + θν ζν (s) = 0ν , ∀s ∈ C.

(7.53)

B-2) Linearly independent on C if and only if their linear combination (7.52) vanishes at every s ∈ C if and only if all complex numbers θi are equal to zero, ∀s ∈ C : θ1 ζ1 (s) + θ2 ζ2 (s) + ... + θν ζν (s) = 0ν ⇐⇒ ⇐⇒ |θ1 | + |θ2 | + ... + |θν | = 0.

(7.54)

C) Real valued column vector functions κi (.) : T −→ Rν , i = 1, 2, ..., µ, which are columns of a matrix function R (.) : T−→ Rν×µ , are: C-1) Linearly dependent on T0 if and only if their exist real numbers αi , i = 1, 2, ..., µ, at least one of which is nonzero, |α1 | + |α2 | + ... + |αµ | 6= 0,

(7.55)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 110 — #121

i

110

i

CHAPTER 7. MATHEMATICAL PRELIMINARIES

such that the linear combination (7.56), α1 κ1 (t) + α2 κ2 (t) + ... + αµ κµ (t)

(7.56)

of the vector functions κi (.) vanishes at every t ∈ T0 , α1 κ1 (t) + α2 κ2 (t) + ... + αµ κµ (t) = 0ν , ∀t ∈ T0 .

(7.57)

C-2) Linearly independent on T0 if and only if their linear combination (7.56) vanishes at every t ∈ T0 if and only if all numbers αi are equal to zero, ∀t ∈ T0 : α1 κ1 (t) + α2 κ2 (t) + ... + αµ κµ (t) = 0ν ⇐⇒ ⇐⇒ |α1 | + |α2 | + ... + |αµ | = 0.

(7.58)

D) Complex valued column vector functions ξi (.) : C −→ Cν , i = 1, 2, ..., µ, which are columns of a matrix function C (.) : C−→Cν×µ , are: D-1) Linearly dependent on C if and only if there exist complex numbers βi , i = 1, 2, ..., µ, at least one of which is nonzero, |β1 | + |β2 | + ... + |βµ | 6= 0,

(7.59)

such that the linear combination (7.60), β1 ξ1 (s) + β2 ξ2 (s) + ... + βµ ξµ (s)

(7.60)

vanishes at every s ∈ C, β1 ξ1 (s) + β2 ξ2 (s) + ... + βµ ξµ (s) = 0ν , ∀s ∈ C.

(7.61)

D-2) Linearly independent on C if and only if their linear combination (7.60) vanishes at every s ∈ C if and only if all complex numbers βi are equal to zero, ∀s ∈ C: β1 ξ1 (s) + β2 ξ2 (s) + ... + βµ ξµ (s) = 0ν ⇐⇒ ⇐⇒ |β1 | + |β2 | + ... + |βµ | = 0.

(7.62)

Example 99 illustrates Definition 98. Example 99 Let    et 6et  , κ2 (t) =  . −6et −et κ1 (t) =  t −2t t −2t 6e − 2e 7e + 2e 

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 111 — #122

i

i

7.4. LINEAR INDEPENDENCE AND MATRIX FUNCTIONS

111

Their linear combination reads: 

   6et et  + α2  . −6et −et α1 κ1 (t) + α2 κ2 (t) = α1  t −2t t −2t 6e − 2e 7e + 2e

We test the existence of at least one real number αi such that the linear combination α1 κ1 (t)+α2 κ2 (t) is equal to the zero vector 03 at every moment t ∈ T for |α1 | + |α2 | 6= 0 : ∃α1 , α2 ∈ R, |α1 | + |α2 | 6= 0 =⇒    6et et  + α2   = 03 , ∀t ∈ T? −6et −et α1  t −2t t −2t 6e − 2e 7e + 2e 

If they exist then 

   6α1 et + α2 et 0   =  0  , ∀t ∈ T −6α1 et − α2et 6α1 et − 2α1 e−2t + 7α2 et + 2α2 e−2t 0

holds. However, the first two equations demand 2 − 6e3t 6et − 2e−2t = = α2 (t) . 7et + 2e−2t 2 + 7e3t The result is that α1 and α2 should be either equal to zero or should be time dependent functions in order for the linear combination of the vector functions κ1 (.) and κ2 (.) to be equal to the zero vector 03 at every moment t ∈ T. There do not exist real numbers α1 and α2 obeying |α1 | + |α2 | 6= 0 so that the linear combination of the vector functions κ2 (.) and κ2 (.) is equal to the zero vector 03 at every moment t ∈ T. In view of A) of Definition 98, (Section 7.4), the vector functions κ1 (.) and κ2 (.) are linearly independent on T. α2 = −6α1 and α2 = −

Lemma 100 Linear dependence of rows/columns of the matrix function R (.) on T0 or the matrix function C (.) on C 1) In order for rows ρi (.) , ρi (.) : T −→ T1×µ , i = 1, 2, ..., ν, of the matrix function R (.) (7.45) to be linearly dependent on T0 it is necessary and sufficient that there exists an 1 × ν nonzero vector g = [g1 g2 ...gν ] ,  T 1×ν g 6= 0ν ∈ R , such that for every t ∈ T0 the product gR (t) is zero vector: Linear dependence of the rows ρi (.) of R (.) (7.45)   on T0 ⇐⇒ ∃ g 6= 0Tν ∈ R1×ν =⇒ gR (t) = 0Tµ , ∀t ∈ T0 .

(7.63)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 112 — #123

i

112

i

CHAPTER 7. MATHEMATICAL PRELIMINARIES

2) In order for columns κi (.) , κi (.) : T −→ Tν , i = 1, 2, ..., µ, of the matrix function R (.) (7.43) to be linearly dependent on T0 it is necessary and sufficient that there exists an µ × 1 nonzero vector a= [α1 α2 ...αµ ]T ,  a 6= 0µ ∈ Rµ×1 , such that for every t ∈ T0 the product R (t) a is zero vector: Linear dependence of the columns κi (.) of R (.) (7.45)   on T0 ⇐⇒ ∃ a 6= 0µ ∈ Rµ×1 =⇒ R (t) a = 0ν , ∀t ∈ T0 .

(7.64)

3) If C (.) : C −→ Cν×µ then the statements under 1) and 2) analogously hold. Proof. 1) The necessity and sufficiency follow directly from A-1) of Definition 98, i.e.,, from Equation 7.49 of the linear dependence of the rows of the matrix function R (.) (7.45) when it is written in the vector-matrix form gR (t) = 0Tµ , ∀t ∈ T0 . 2) The necessity and sufficiency follow directly from C-1) of Definition 98, i.e.,, from Equation 7.57 of the linear dependence of the columns of the matrix function R (.) (7.45) written in the matrix-vector form R (t) a = 0ν ,∀t ∈ T0 . 3) The proof of 3) is literally analogous to the proofs of 1) and 2). Lemma 101 Linear independence of row vectors of a matrix function on T0 or on C In order for the rows ρi (.) , i = 1, 2, ..., ν, of the matrix function R (.) (7.45) to be linearly independent on T0 it is necessaryand sufficient that for every 1 × ν nonzero vector g = [γ1 γ2 ...γν ] , g 6= 0Tν ∈ R1×ν , there exists σ ∈ T0 for which gR (σ) is nonzero vector: Linear independence of the rows ρi (.) of R (.) (7.45)  on T0 =⇒ ∀ g 6= 0Tν ∈ R1×ν , ∃σ ∈ T0 =⇒ gR (σ) 6= 0Tν .

(7.65)

If C (.) : C −→ Cν×µ , i.e.,, its rows ζi (.) : C −→ C1×µ , i = 1, 2, ..., ν, then the statement analogously holds. Proof. The necessity and sufficiency follow directly from A-2) of Definition 98, i.e.,, from the condition 7.50 of the linear dependence of the rows of the matrix function R (.) (7.45) when it is written in the vector-matrix form gR (t) = 0Tµ , ∀t ∈ T0 . Appendix D.1 provides the more detailed proof of Lemma 101.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 113 — #124

i

7.4. LINEAR INDEPENDENCE AND MATRIX FUNCTIONS

i

113

Lemma 102 Linear independence of column vectors of a matrix function on T0 or on C In order for the columns κi (.) , i = 1, 2, ..., µ, of the matrix function R (.) (7.45) to be linearly independent on T0 it is necessary and sufficient that for every µ × 1 nonzero vector  a = [α1 α2 ...αµ ]T , a 6= 0µ ∈ Rµ×1 , there exists σ ∈ T0 for which R (σ) a is nonzero vector: Linear independence of the columns κi (.) of R (.) (7.45)  on T0 =⇒ ∀ a 6= 0µ ∈ Rµ×1 , ∃σ ∈ T0 =⇒ R (σ) a 6= 0µ .

(7.66)

If C (.) : C −→ Cν×µ , i.e.,, its columns ξi (.) : C −→ Cν×1 , i = 1, 2, ..., µ, then the preceding statement analogously holds. The proof of this theorem is the full analogy of the proof of Theorem 101. Note 103 Theorem 86 and the nonsingular matrix function Theorem 86 is not valid on T0 despite the nonsingular matrix is a matrix function M (.) : T −→ Rp×p or M (.) : C −→ Cp×p . In this regard we present the following: Example 104 Let      −t  1 e−t 0 e M (t) = , N= , rankN = 1, , M (t)N = =⇒ 2t 2 1 2 a1 e−t + a2 2 = 0 ⇐⇒ a1 = −a2 2et 6= const. ⇐⇒ a1 = a2 = 0. The rows of M (t)N are linearly independent on T0 . Notice that rankM (t)N ≡ 1 < 2 , ∀t ∈ T0 , in spite of the linear independence of the rows of M (t)N on T0 . At any fixed moment τ ∈ T0 ,

a1 e−τ

a1 6= 0 and a2 = −2−1 a1 e−τ =⇒  + a2 2 = a1 e−τ + −2−1 a1 e−τ 2 = a1 e−τ − a1 e−τ = 0,

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 114 — #125

i

114

i

CHAPTER 7. MATHEMATICAL PRELIMINARIES

the rows of M (t)N are linearly dependent. However, the value of the coefficient a2 = a2 (τ ) is not a number, but it depends on τ ∈ T0 , which implies the linear independence of the rows of M (t)N on T0 . Notice also that 1 e−t  = 2 1 − te−t 6= 0, ∀t ∈ T. detM (t) = 2t 2 The following theorem explains the difference between the rank and the independence of the rows of a time varying matrices product: Theorem 105 Linear independence of the rows and the rank of a time varying matrix product Let M (t) ∈ Cp×p be nonsingular at every t ∈ T0 , detM (t) 6= 0, ∀t ∈ T0 , and N ∈ Cp×q , where p and q, q ≥ p, are natural numbers. Let k be a natural number not bigger than p, 0 < k ≤ p. a) In order for the rank of the matrix product M (t) N to be k at every t ∈ T0 it is necessary and sufficient that k rows of the matrix N are linearly independent, equivalently, rankN = k. b) In order for the rank of the matrix product M (t) N to be full, i.e.,, p, at every t ∈ T0 it is necessary and sufficient that all p rows of the matrix N are linearly independent, equivalently rankN = p. Proof. Let M (t) ∈ Cp×p be nonsingular at every t ∈ T0 , detM (t) 6= 0, ∀t ∈ T0 , which implies the linear independence of its rows (and of its columns) and rankM (t) = p at every t ∈ T0 . Let N ∈ Cp×q , where p and q are natural numbers. Let any τ ∈ T0 be chosen and fixed. The condition detM (t) 6= 0, ∀t ∈ T0 , implies detM (τ ) 6= 0. Necessity. a) Let k be a natural number not bigger than p, 0 < k ≤ p. Let rankM (t) N = k at every t ∈ T0 . Let at most k − 1 rows of the matrix N be assumed linearly independent. Then rankN ≤ k − 1. Since M (t) is nonsingular at every t ∈ T0 , its rankM (τ ) = p 6= 0. The assumption that rankN ≤ k − 1 and the well-known rule on the rank of the matrix product of a nonsingular matrix M (τ ) multiplying the rectangular matrix N, which reads rankM (τ ) N = rankN, yield rankM (τ ) N = rankN ≤ k − 1. This contradicts rankM (τ ) N = k. The contradiction is the result of the assumption on the linear independence of at most k − 1 rows of the matrix N . The assumption fails, which implies that k rows of the matrix N are linearly independent, i.e.,, rankN = k.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 115 — #126

i

7.4. LINEAR INDEPENDENCE AND MATRIX FUNCTIONS

i

115

b) The necessity of the statement under b) results from the necessity of a) for k = p. Sufficiency. a) Let rankN = k. The well-known rule on the rank of the matrix product in which the premultiplying matrix M is nonsingular ensures rankM (τ ) N = rankN = k. This implies the linear independence of the k rows of the matrix product M (τ ) N . b) The sufficiency of the statement under b) results from the sufficiency of a) for k = p, which completes the proof by recalling an arbitrary choice of τ ∈ T0 . Let us consider an illustrative example: Example 106 Let     4 2 3 1 e−t =⇒ rankN = 1 < 2, , N= M (t) = 8 4 6 2t 2 1 detM (t) = 2t    4 2 1 e−t M (t)N = 8 4 2t 2

 e−t −t = 2 1 − te 6= 0, ∀t ∈ T, 2    4 + 8e−t 2 + 4e−t 3 + 6e−t 3 =⇒ = 8t + 16 4t + 8 6t + 12 6

1. The rows of N are linearly dependent, i.e.,, its rank is defective. 2. The rows of M (t)N are linearly independent on T but 3. rankM (t)N = 1 < 2, ∀t ∈ T. What can we say about the linear (in)dependence of rows of a matrix product in which a matrix function P (.) : T −→ Rν×µ is postmultiplied by a matrix function Q (.) : T −→ Rµ×ξ ? Lemma 107 Linear (in)dependence of the matrix rows and a matrix product Let the matrix functions P (.) : T −→ Rν×µ and Q (.) : T −→ Rµ×ξ form the matrix function product P (.) Q (.) : T −→ Rν×ξ . 1) If the rows of the matrix function product P (.) Q (.) : T −→ Rν×ξ are linearly independent on a nonempty connected subset T∗ of T, T∗ ⊆ T, with the nonempty interior, InT∗ 6= φ, then the rows of the matrix function P (.) are also linearly independent on the set T∗ .

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 116 — #127

i

116

i

CHAPTER 7. MATHEMATICAL PRELIMINARIES

2) If the rows of the matrix function P (.) are linearly dependent on the set T∗ then the rows of the matrix function product P (.) Q (.) : T −→ Rν×ξ are also linearly dependent on the set T∗ . 3) Let M ∈ Rµ×µ be nonsingular constant matrix, detM 6= 0. If the rows of the matrix function P (.) are linearly independent on the set T∗ then the rows of the matrix function product P (.) M are also linearly independent on the set T∗ . Proof. Let the matrix functions P (.) : T −→ Rν×µ and Q (.) : T −→ Rµ×ξ form the matrix function product P (.) Q (.) : T −→ Rν×ξ . Let the set T∗ be a nonempty connected subset of T, T∗ ⊆ T, with the nonempty interior, InT∗ 6= φ. 1) Let the rows of the matrix function product P (.) Q (.) : T −→ Rν×ξ be linearly independent on the set T∗ . Let be assumed that the rows of the matrix functionP (.) are linearly dependent on the set T∗ . Then there is a vector a 6= 0Tν ∈R1×ν such that aP (t) = 0Tµ for every t ∈ T∗ . The multiplication of the matrix function product P (t) Q (t) by the vector a results in the following: aP (t) Q (t) = [aP (t)] Q (t) = 0Tµ Q (t) = 0Tξ , ∀t ∈ T∗ .

(7.67)

This, by the definition (Definition 98), i.e.,, due to Lemma 100, means that the rows of the matrix function product P (t) Q (t) are linearly dependent on T∗ , which contradicts to their linear independence. The contradiction is the consequence of the assumption on the linear dependence of the rows of P (t) on T∗ . The failure of the assumption proves that the rows of P (t) are linearly independent on T∗ . 2) The proof of the statement under 1), i.e.,, Equations (7.67) prove the statement under 2). 3) Let M ∈ Rµ×µ be nonsingular constant matrix, detC 6= 0, and the rows of the matrix function P (.) be linearly independent on the set T∗ . Let be assumed that the rows of the matrix function product P (.) M are linearly dependent on the set T∗ . There exist 1 × ν constant non zero vector  T 1×ν a, a 6= 0ν ∈R , such that aP (t) M = 0Tµ , ∀t ∈ T∗ . We multiply this equation on the right by M −1 . The result reads: aP (t) = 0Tµ , ∀t ∈ T∗ .

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 117 — #128

i

7.4. LINEAR INDEPENDENCE AND MATRIX FUNCTIONS

i

117

This means that the rows of the matrix P (t) are linearly dependent on the set T∗ , which contradicts their linear independence on the set T∗ . The  assumed constant non zero vector a, a 6= 0Tν ∈ R1×ν , does not exist. This proves that the rows of the matrix function product P (.) M are also linearly independent on the set T∗ . Lemma 108 Linear (in)dependence of the matrix columns and a matrix product Let the matrix functions P (.) : T −→ Rν×µ and Q (.) : T −→ Rµ×ξ form the matrix function product P (.) Q (.) : T −→ Rν×ξ . 1) If the columns of the matrix function product P (.) Q (.) : T −→ Rν×ξ are linearly independent on a nonempty connected subset T∗ of T, T∗ ⊆ T, with the nonempty interior, InT∗ 6= φ, then the columns of the matrix function Q (.) are also linearly independent on the set T∗ . 2) If the columns of the matrix function Q (.) are linearly dependent on the set T∗ then the columns of the matrix function product P (.) Q (.) : T −→ Rν×ξ are also linearly dependent on the set T∗ . 3) Let D ∈ Rν×ν be nonsingular constant matrix, detD 6= 0. If the columns of the matrix function P (.) are linearly independent on the set T∗ then the columns of the matrix function product DP (.) are also linearly independent on the set T∗ . The proof is essential repetition of the proof of Theorem 107. Note 109 Lemma 100 through Lemma 108 hold for complex matrix functions with exactly one change that is the replacement of the argument t by s. Definition 110 Rank of a matrix function at a particular value of its argument The rank of a ν × µ matrix function M (.) ,  M (.) ∈ T −→ Rν×µ , C −→ Cν×µ , at σ ∈ {t, s} is the integer ρ such that the matrix minor Mk (σ) is nonzero and every matrix minor Mk (σ) of the order k bigger than ρ is zero, rank M (σ) = ρ, σ ∈ {t, s} ⇐⇒  ⇐⇒

Mk (σ) 6= 0, ∀k = 1, 2, ..., ρ, Mk (σ) = 0, ∀k = ρ + 1, ρ + 2, ..., min (ν, µ) .

 (7.68)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 118 — #129

i

118

i

CHAPTER 7. MATHEMATICAL PRELIMINARIES

This definition specifies that the rank of a matrix function depends on its independent variable σ ∈ {t, s} . The consequence is the invalidity of Claim 81 for the matrix functions on any nonsingleton nonempty set I ∈ {T, C} because Definition 110 determines the rank of a matrix function at any fixed value of its argument, but not on a nonsingleton nonempty set I ∈ {T, C} . Example 111 Let  6et et 1 . −6et −et R (t) =  3 t −2t t −2t 6e − 2e 7e + 2e 

R (t) has the second order minor R2 (t) , 1 −6et −et R2 (t) = t −2t t 7e + 2e−2t 3 6e − 2e

= −16e2t 6= 0, ∀t ∈ T.

which is nonsingular. The rank of R (t) is 2 for every t ∈ T. Its rank on T is equal to 2. The equivalent form of R (t) follows after multiple application of the elementary transformations,     6et et 0 0 1  ˜1  ˜ −6et −et 0 −et 3 3 t −2t t −2t −2t t −2t 6e − 2e 7e + 2e −6e 7e + 2e       t 0 0 −e 0 −et 0 1 1 1 0 −et  ˜  −6e−2t 2e−2t  ˜  0 2e−2t  . ˜  3 3 3 −6e−2t 2e−2t 0 0 0 0 and it reads:

  −et 0 1 0 2e−2t  , ∀t ∈ T. R (t) ˜ 3 0 0

This verifies that R(t) has two linearly independent columns on T and shows that its rank is 2 on T. In this example the rank of the matrix function on T and the number of its linearly independent columns and rows on T are equal (to 2).

Example 112 Let
$$ \operatorname{sign} t = \frac{t}{|t|} \iff t \neq 0, \qquad \operatorname{sign} t = 0 \iff t = 0, $$





and
$$ R(t) = \begin{bmatrix} t \operatorname{sign} t & 0 & 0 \\ 0 & \operatorname{sign}^2 t & 0 \\ 0 & 0 & t \end{bmatrix} \Longrightarrow $$
$$ t < 0 \Longrightarrow R(t) = \begin{bmatrix} -t & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & t \end{bmatrix} \Longrightarrow 3 \text{ linearly independent rows and columns}, \ \operatorname{rank} R(t) = 3, $$
$$ t = 0 \Longrightarrow R(0) = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \Longrightarrow 0 \text{ linearly independent rows and columns}, \ \operatorname{rank} R(0) = 0, $$
$$ t > 0 \Longrightarrow R(t) = \begin{bmatrix} t & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & t \end{bmatrix} \Longrightarrow 3 \text{ linearly independent rows and columns}, \ \operatorname{rank} R(t) = 3. $$

This analysis shows that R(t) has 3 linearly independent rows and columns on T in view of Definition 98 (Section 7.4). Its rank varies with t ∈ T: it is either 0 (at t = 0) or 3 (for t ≠ 0). Its rank is therefore time-varying on T, but the number of linearly independent columns and rows on T is constant (3) in view of Definition 110 (Section 7.4).

Example 113 Let
$$ R(t) = \begin{bmatrix} -2 & e^{t} & e^{-t} \end{bmatrix} \Longrightarrow t = 0 \Longrightarrow R(0) = \begin{bmatrix} -2 & 1 & 1 \end{bmatrix}. $$
R(0) has one linearly independent row, one linearly independent column, and rankR(0) = 1. Since
$$ -2 a_1 + e^{t} a_2 + e^{-t} a_3 = 0, \ \forall t \in T \iff a_1 = a_2 = a_3 = 0, $$
R(t) has three linearly independent columns on T, one linearly independent row on T, and rankR(t) = 1, ∀t ∈ T.


Let a matrix function G(.) : T −→ Rν×ν be the definite integral of the matrix function R(.) : T −→ Rν×µ (7.45) and of its transpose:
$$ G(t) = \int_{0}^{t} R(\tau)\, R^{T}(\tau)\, d\tau. \tag{7.69} $$

0

The matrix G(t) (7.69) is the Gram matrix (or Gramian) of the matrix R(.) (7.45). It enables us to test the linear independence of the rows ρi(.) : T −→ R1×µ of the matrix function R(.) (7.45).

Theorem 114 Gram matrix criterion for the linear independence of the rows of the integrable matrix function R(.) (7.45)
For the rows ρi(.) : T −→ R1×µ, i = 1, 2, ..., ν, of the integrable matrix function R(.) (7.45), R(t) ∈ C(T0), to be linearly independent on T0 it is necessary and sufficient that their Gram matrix G(t) ∈ Rν×ν (7.69) is nonsingular on T0.
For the proof see Appendix D.2.
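A rough numerical sketch of this criterion (SciPy/NumPy assumed; the matrix function R(t) below is a hypothetical 2×2 example, not one of the book's): the Gram matrix G(t) of (7.69) is approximated by quadrature and its nonsingularity is checked through its determinant.

```python
import numpy as np
from scipy.integrate import quad

def R(t):
    """Hypothetical 2x2 matrix function whose rows are linearly independent on [0, 1]."""
    return np.array([[1.0, t],
                     [np.exp(t), 1.0]])

def gram(t_end, n=2):
    """G(t) = integral_0^t R(tau) R(tau)^T dtau, computed entrywise (Equation (7.69))."""
    G = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            G[i, j], _ = quad(lambda tau, i=i, j=j: (R(tau) @ R(tau).T)[i, j], 0.0, t_end)
    return G

G = gram(1.0)
print(abs(np.linalg.det(G)) > 1e-9)   # True: the rows of R(.) are linearly independent on [0, 1]
```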

7.5 Polynomial matrices. Matrix polynomials

A matrix function P(.) : C −→ CN×r,
$$ P(s) = [\, p_{j,k}(s) \,] \in \mathbb{C}^{N \times r}, \tag{7.70} $$
is a polynomial matrix if and only if every entry pj,k(s) is a polynomial in the complex variable s ∈ C,
$$ p_{j,k}(s) = \sum_{i=0}^{\mu} p_{j,k}^{i}\, s^{i}, \quad p_{j,k}^{i} \in \mathbb{R}, \ \forall j = 1, 2, ..., N, \ \forall k = 1, 2, ..., r. \tag{7.71} $$
The polynomial matrix P(s) can be set in the form of the matrix polynomial
$$ P(s) = \sum_{i=0}^{\mu} P_i\, s^{i}, \quad P_i \in \mathbb{R}^{N \times r}, \ \forall i = 0, 1, ..., \mu, \tag{7.72} $$
or in the compact form by applying P(µ),
$$ P^{(\mu)} = \begin{bmatrix} P_0 & \vdots & P_1 & \vdots & \cdots & \vdots & P_\mu \end{bmatrix} \in \mathbb{R}^{N \times (\mu+1) r}, \tag{7.73} $$

i

i i

i

i

i


by using Sr(µ)(s),
$$ S_{r}^{(\mu)}(s) = \begin{bmatrix} I_r & \vdots & s I_r & \vdots & s^{2} I_r & \vdots & \cdots & \vdots & s^{\mu} I_r \end{bmatrix}^{T} \in \mathbb{C}^{(\mu+1) r \times r} \tag{7.74} $$
$$ \implies \operatorname{rank} S_{r}^{(\mu)}(s) \equiv r, \tag{7.75} $$
and the identity matrix Ir ∈ Rr×r, so that
$$ P(s) = \sum_{i=0}^{\mu} P_i\, s^{i} = P^{(\mu)} S_{r}^{(\mu)}(s). \tag{7.76} $$

i=0

The matrix P (µ) (7.73) is the generating matrix of both the matrix polynomial (7.72) and the polynomial matrix P (s) (7.70). A square polynomial matrix is nonsingular if and only if it has the full rank. By referring to [34, Definition B.6] and to [84] we will use the following definition: Definition 115 Rank of a polynomial matrix The normal rank (for short: rank) of the polynomial matrix P (s) is the number of its linearly independent rows (columns). The rank of a polynomial matrix P (s) (7.70) on C is the rank of the matrix almost everywhere in s ∈ C. The polynomial matrix P (s) has full column /row/ rank on C if it has full column /row/ rank everywhere in the complex plane C except at a finite number of points s ∈ C, respectively. The full column and the full row rank of the polynomial matrix P (s) on C are mutually equal and equal to its full rank on C : f ull column rank P (s) on C = f ull row rank P (s) on C = = f ull rank P (s) on C = ρ = min (N, r) . The rank of a polynomial matrix P (s) obeys rankP (s) = max [rankP (s) : s ∈ C] ≤ min (N, r) . A square polynomial matrix is nonsingular if and only if it has full rank. The expression “on C” will be sometimes omitted in the sequel.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 122 — #133

i

122

i

CHAPTER 7. MATHEMATICAL PRELIMINARIES

The roots or zeros of a polynomial matrix P (s) are those points s ∈ C where P (s) loses rank. If P (s) is square then its roots are the roots of its determinant det P (s), including multiplicity. A square polynomial matrix P (s) is unimodular if its determinant det P (s) is a nonzero constant. The inverse of a unimodular polynomial matrix P (s) is again a polynomial matrix. Matrix pencils are matrix polynomials of degree 1, such as P (s) = P0 + P1 s. Matrix pencils are often represented as polynomial matrices of the special form, e.g., the characteristic matrix sI − A of the matrix A. Elementary operations hold for polynomial matrices.  Let s0 be any complex number s, s0 ∈ C, for which rankP s0 < min (N, r) . Comment 116 Throughout this book rankP (s) signifies “rankP (s) on C” meaning “for almost every s ∈ C”,   rankP (s) = rankP (s) , ∀ s 6= s0 ∈ C = {rankP (s) on C} . (7.77) Theorem 117 Rank of a polynomial matrix Let N ≤ r. 1) In order for the polynomial matrix P (s) (7.70), P (s) ∈ CN ×r , to have the full rank ρ = min (N, r) = N it is necessary and sufficient that there is s∗ ∈ C such that rankP (s∗ ) = N : f ull rankP (s) = N on C ⇐⇒ ∃s∗ ∈ C, rankP (s∗ ) = N,

(7.78)

2) If the polynomial matrix P (s) (7.70), P (s) ∈ CN ×r , has the full rank ρ = min (N, r) = N then its generating matrix P (µ) ∈ RN ×(µ+1)r has also the full rank ρ = N , f ull rankP (s) = N =⇒ rankP (µ) = f ull rankP (µ) = N.

(7.79)

Proof. Let N ≤ r. 1) Necessity. Let the polynomial matrix P (s) ∈ CN ×r have the full rank ρ = N. There is a ρ×ρ polynomial submatrix Pρ,ρ (s) of P (s) that is nonsingular on C almost for all s ∈ C, i.e.,, for any s ∈ C except for finite number of values of s0 ∈ C for which detPρ,ρ s0 = 0. There exist infinitely many s∗ 6= s0 ∈ C for which rankP (s∗ ) = ρ = N . This proves the necessity of the condition under 1). Sufficiency. Let there exist s∗ ∈ C such that rankP (s∗ ) = ρ = N. The polynomial matrix P (s) has the full rank for s = s∗ . There is a ρ ×

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 123 — #134

i

7.5. POLYNOMIAL MATRICES. MATRIX POLYNOMIALS

i

123

ρ polynomial submatrix Pρ,ρ (s∗ ) of P (s∗ ) that is nonsingular. It can be singular, i.e.,, P (s) can have a defective rank, only for a finite number of s-values s0 ∈ C. For all other complex numbers s it has the same rank as for s∗ , i.e.,, it has the full rank ρ = N = min(N, r) on C. This proves the sufficiency of the condition under 1). 2) Equation (7.76) implies the following: rankP (s) = ρ = N = min (N, r) = min (N, (µ + 1) r) = rankP (µ) =⇒ rankP (µ) = N. This completes the proof. The condition 2) of this theorem has the following consequence: Corollary 118 1) For the polynomial matrix P (s) (7.70) to have the full rank N it is necessary (but not sufficient) that its generating matrix P (µ) has the full rank N . 2) The polynomial matrix P (s) (7.70) cannot have the full rank N if its generating matrix P (µ) does not have the full rank N : rankP (µ) < N =⇒ rankP (s) < N on C.

(7.80)

It is easier to test rankP (µ) than rankP (s) . If P (µ) has a defective rank, i.e.,, the rank less than N, then there is not a need to test whether P (s) has the full rank. Example 119 Let a 2 × 3 polynomial matrix P (s) be  2  2s + 3s 4s − 1 s3 P (s) = ∈ C2×3 . 0 4s3 + s2 + 3 6s − 6 It has the full rank 2, rankP (s) = 2 = f ull rankP (s) . It induces the following matrix polynomial  2  2s + 3s 4s − 1 s3 P (s) = 0 4s3 + s2 + 3 6s − 6         0 −1 0 3 4 0 2 0 0 0 0 1 2 = + s+ s + s3 = 0 3 −6 0 0 6 0 1 0 0 4 0   0 −1 0 3 4 0 2 0 0 0 0 1 (12) (12) = S3 (s) = P (12) S3 (s) . 0 3 −6 0 0 6 0 1 0 0 4 0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 124 — #135

i

124

i

CHAPTER 7. MATHEMATICAL PRELIMINARIES

The generic matrix P (12) of the matrix polynomial is found to read   0 −1 0 3 4 0 2 0 0 0 0 1 (12) P = =⇒ 0 3 −6 0 0 6 0 1 0 0 4 0 rankP (12) = 2 = f ull rankP (12) . This illustrates the statement under 1) of Corollary 118 that the full rank of the polynomial matrix P (s) implies the full rank of the generic matrix P (12) of the matrix polynomial P (s) . Equivalently, the full rank of P (12) is necessary for P (s) to have the full rank. Example 120 Let another 2 × 3 polynomial matrix P (s) be   2 2s + 3s 4s + 6 0 =⇒ P (s) = 0.5s 1 0 rankP (s) = 1 < 2 = f ull rank of P (s) . The matrix polynomial form of the polynomial matrix P (s) reads:       2 0 0 3 4 0 0 6 0 s2 = s+ + P (s) = 0 0 0 0.5 0 0 0 1 0   0 6 0 3 4 0 2 0 0 (9) = S3 (s) . 0 1 0 0.5 0 0 0 0 0 Its generic matrix P (9) follows:   0 6 0 3 4 0 2 0 0 (9) =⇒ P = 0 1 0 0.5 0 0 0 0 0 rankP (9) = 2 = f ull rankP (9) . This illustrates that the full rank of P (9) does not imply the full rank of P (s). Problem 121 Unsolved matrix polynomial problem Under what conditions the full rank N of the generic matrix P (µ) of the (µ) matrix polynomial P (s) = P (µ) Sr (s), Equation (7.76), implies the full rank N of the matrix polynomial P (s), Equations (7.70) and (7.71), i.e.,, (µ) of the polynomial matrix P (s) = P (µ) Sr (s) , Equation (7.76)?

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 125 — #136

i

i

7.6. RATIONAL MATRICES

7.6

125

Rational matrices

A matrix function R (.) : C −→ Cm×n , R (s) = [fj,k (s)] ,

(7.81)

is rational matrix (function) if and only if every its entry rj,k (s) is a rational function, i=µj,k

X rj,k (s) =

aij,k si ,

i=0 i=ηj,k

X i=0 bij,k

, aij,k ∈ R, ∀j = 1, 2, .., m, ∀k = 1, 2, .., n bij,k si ∈ R, ∀j = 1, 2, .., m, ∀k = 1, 2, .., n.

Definition 122 Rank of a rational matrix The rank of a rational matrix R (s) (7.81) on C is the rank of the matrix almost everywhere in s ∈ C. The rational matrix R (s) has the full column/row/ rank on C if it has the full column/row/rank everywhere in the complex plane C except at a finite number of points s ∈ C, respectively. The full column and the full row rank of the rational matrix R (s) on C are mutually equal and equal to its full rank on C : f ull column rank R (s) on C = f ull row rank R (s) on C = = f ull rank R (s) on C = min (m, n) . The rank of the rational matrix R (s) obeys rankR (s) = max [rankR (s) : s ∈ C] ≤ min (m, n) . A square rational matrix is nonsingular if and only if it has the full rank. The expression “on C” will be sometimes omitted in the sequel. Let M (s) ∈ Cm×m , detM (s) 6= 0 on C, N (s) ∈ C

n×n

, detN (s) 6= 0 on C.

(7.82) (7.83)

be rational matrices and R (s) = M (s) P (s) N (s) ,

(7.84)

where P (s) is the matrix polynomial, Equation (7.70). Definition 98 and Definition 122 imply directly the following:

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 126 — #137

i

126

i

CHAPTER 7. MATHEMATICAL PRELIMINARIES

Theorem 123 Linear independence of rows (columns) of a rational matrix and its rank If a rational matrix R (s) has the full row (column) rank then it has the maximal number of linearly independent rows (columns) equal to its full row (column) rank, respectively, but the inverse statement does not hold. This theorem is important for the relationship between trackability and output controllability. The following well-known result is also important for trackability conditions. Theorem 124 Rank of the matrix product (7.82)-(7.84) The rank of the rational matrix R (s) (7.82)-(7.84) is equal to the rank of the polynomial matrix P (s) (7.70), rankR (s) = rankM (s) P (s) N (s) = rankP (s) ≤ min (m, n) .

(7.85)

Proof. Rank of every nonsingular matrix does not influence the rank of the product of that matrix and another matrix, i.e.,, the rank of the matrix product is equal to the rank of that another matrix. In the case of the rational matrix R (s) (7.84) that rule yields the following: rankR (s) = rankM (s) P (s) N (s) = rankP (s) due to the nonsingularity of the square matrices M (s) (7.82) and N (s) (7.83). The resolvent matrix (sI − A)−1 of a square matrix A ∈ Rn×n is rational matrix, (sI − A)−1 =

adj (sI − A) , det [adj (sI − A)] = [det (sI − A)]n−1 , (7.86) det (sI − A) h i det (sI − A)−1 = [det (sI − A)]−1 (7.87)

where the adjoint matrix adj (sI − A) of the matrix A is matrix polynomial, and the determinant det (sI − A) is the characteristic polynomial of the matrix A. Equations (7.86), (7.87) imply the following: rank (sI − A)−1 = rank adj (sI − A) .

(7.88)

Let C ∈ RN ×n , A ∈ Rn×n , and B ∈ Rn×r and U ∈ RN ×r . They define the rational matrix R (s) by h i R (s) = C (sI − A)−1 B + U ∈ CN ×r . (7.89)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 127 — #138

i

7.6. RATIONAL MATRICES

i

127

Let p (s) = det (sI − A) ,

(7.90)

L (s) = Cadj (sI − A) B + p (s) U, L (s) ∈ CN ×r .

(7.91)

and L (s) is polynomial matrix, L (s) =

=

                  

i=n X

Li

si

=

(n) L(n) Sn (s) ,

Li ∈

RN ×r ,

i=0

∀i = 0, 1, ..., n, ⇐⇒ U 6= ON,r i=n−1 X

Li

si

=

(n−1) L(n−1) Sn (s) ,

Li ∈

RN ×r ,

i=0

∀i = 0, 1, ..., n − 1, ⇐⇒ U = ON,r ,

         

(7.92)

        

Equations (7.89)-(7.92) simplify the definition of R (s) : R (s) =

L (s) = p−1 (s) L (s) . p (s)

(7.93)

The following theorem results directly form Theorem 117: Theorem 125 Rank of a rational matrix The rank of the rational matrix R (s) (7.89) is the rank of its numerator polynomial matrix L (s) (7.91), h i rankF (s) = rank C (sI − A)−1 B + U =   = rank p−1 (s) L (s) = rankL (s) ≤ min (N, r) . (7.94) Proof. Let Equation (7.89) hold. It can be set into the form (7.93) determined by Equations (7.89)-(7.92): R (s) =

L (s) , L (s) = Cadj (sI − A) B + p (s) U. p (s)

The scalar polynomial p (s) does not influence the rank of p−1 (s) L (s) . This fact and the preceding equations, i.e.,, Equations (7.93), furnish rankF (s) = rankL (s) , which proves (7.94).

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 128 — #139

i

128

i

CHAPTER 7. MATHEMATICAL PRELIMINARIES

Notice that Theorem 125 is a special case of Theorem 124, which is characterized with M (s) = p−1 (s) Im , detM (s) = p−m (s) 6= 0, and N (s) = In , rankP (s) = rankL (s) . Example 126 Let R (s) = C (sI2 − A)−1 B + U = p−1 (s)L(s) ∈ C 1×3 for  A=

0 1 −1 −2



     .. 1 2 0 , U= 1 0 3 . , C= 1.2 , B= 2 1 2 

The matrix A is stable matrix with two eigenvalues, both equal matrix A does not have an eigenvalue at the origin. The above data yield:   s+2 1   −1 s s −1 =⇒ (sI2 − A)−1 = 2 sI2 − A = 1 s+2 s + 2s + 1  s+2 −1 2 2 p(s) = s + 2s + 1 = (s + 1) , adj (sI2 − A) = −1

to −1. The

=⇒ 1 s

 ,

so that R (s) = C (sI2 − A)−1 B + U =   s+2 1       −1 s .. 1 2 0 = 1.2 + 1 0 3 = 2 2 1 2 s + 2s + 1   .. .. 2 2 s + 11s + 1. 4s + 1. 3s + 10s + 5 = = p−1 (s)L(s) =⇒ s2 + 2s + 1  1 + 11s + s2 1 + 4s 5 + 10s + 3s2 ∈ C1×3 =⇒       L(s) = 1 1 5 + 11 4 10 s + 1 0 3 s2 = | {z } | {z } | {z }

L(s) =



L0

L1

=

(2) L(2) S3 (s)

L2

=⇒

rankR(s) = rankL(s) = 1,

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 129 — #140

i

7.6. RATIONAL MATRICES

i

129

T 1 0 0 s 0 0 s2 0 0 (2) (2) S3 (s) =  0 1 0 0 s 0 0 s2 0  ∈ C9×3 , rankS3 (s) = 3, 0 0 1 0 0 s 0 0 s2 

L(2) =

L 

(2)

rankL

  .. .. = L0 . L1 . L2 ∈ R1×9 ,

1 1 5 11 4 10 1 0 3 (2)



=⇒

= 1 = rankL (s) = rankR (s) .

This illustrates Theorem 117. Notice that in this example, s∗ = 0 is not an eigenvalue of the matrix A, and we test rankR (0):   rankR (0) = rank −CA−1 B + U =   = rankL(0) = rankL0 = rank 1 1 5 = 1 = f ull rankF (s) , and rankL(2) = f ull rankL(2) = 1.

Example 127 Let R (s) = C (sI3 − A)−1 B + U = p−1 (s)L(s) ∈ C 2×12 for 

0 1  0 0 A= −3 −7  1 −1  B= 3 2 1 3

  0 1  1 , C= 3 −5   2 1  −2 , U = 3 1

0 2 1 2 2 3 2 1

 ,

 .

The matrix A is stable matrix, i.e.,, Hurwitz matrix, λ1 (A) = λ2 (A) = −1 < 0, λ3 (A) = −3 < 0. The matrices A, C, B and U imply:   s −1 0 −1  , sI3 − A =  0 s 3 7 s+5

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 130 — #141

i

130

i

CHAPTER 7. MATHEMATICAL PRELIMINARIES  s2 + 5s + 7 s+5 1  −3 s2 + 5s s  −3s −7s − 3 s2 (sI3 − A)−1 = =⇒ s3 + 5s2 + 7s + 3 p(s) = s3 + 5s2 + 7s + 3 = (s + 1)2 (s + 3), 

R (s) = C (sI3 − A)−1 B + U =     s2 + 5s + 7 s+5 1 1 0 2  −3 s2 + 5s s    3 1 2 −3s −7s − 3 s2 1 2 3 , + = 3 2 1 s3 + 5s2 + 7s + 3 so that R (s) = C (sI3 − A)−1 B + U = " = 

s3 +6s2 +6s+10 s3 +5s2 +7s+3 3s3 +18s2 +30s+27 s3 +5s2 +7s+3   3 s + 6s2 +

2s3 +10s2 +s+5 s3 +5s2 +7s+3 2s3 +11s2 +8s+15 s3 +5s2 +7s+3  3 2s + 10s2 +

#

3s3 +17s2 +21s+10 s3 +5s2 +7s+3 = s3 +7s2 +8s+6 s3 +5s2 +7s+3    3 3s + 17s2 +

 +s + 5   +6s + 10     2s3 + 11s2 + 3s3 + 18s2 + +8s + 15 +30s + 27 = 3 s + 5s2 + 7s + 3 has the full rank 2: rankR (s) = 2.



+ 10     +21s 3  s + 7s2 + +8s + 6

This leads to the following results: L(s) =   3  + 2s3 + 10s2 + 3s + 17s2 +  + 10   +s + 5 + 10   +6s   +21s = 3 2 3 2 3  3s + 18s + 2s + 11s + s + 7s2 + +30s + 27 +8s + 15 +8s + 6  

s3

6s2 +





  , 

L(s) ∈ C2×3 . Let us verify the nonsingularity of the minor M2,2 (s) of L(s) : s3 + 6s2 + 6s + 10 2s3 + 10s2 + s + 5 M2,2 (s) = 3 = 3s + 18s2 + 30s + 27 2s3 + 11s2 + 8s + 15 = −4s6 − 43s5 − 157s4 − 238s3 − 142s2 − 7s + 15 6= 0, ∀s ∈ / {0.25, − 1.0, − 3.0, − 5.0} =⇒

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 131 — #142

i

7.6. RATIONAL MATRICES

i

131

rankL(s) = 2 = ρ = min (2, 3) =⇒ rankR(s) = 2. This illustrates Theorem 117. We accept s = 0 because it is not eigenvalue of the matrix A in order to check whether s = 0 can be adopted for s∗ of the statement under 1) of Theorem 117. Hence,   10 5 10 L(0) = =⇒ rankL(0) = 2 = min (2, 3) = ρ. 27 15 6 L(0) has the full rank ρ = 2, which permits accept s∗ = 0. Verification,  rankR (0) = rank U − GA−1 B = p−1 (0)rankL (0) =   10 5 10 27 15 6 = rank = 2 = rankR (s) . 3 This example illustratively verifies Theorem 117 and Theorem 125.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 132 — #143

i

i

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 133 — #144

i

i

Chapter 8

Observability and stability

8.1 Observability and system regime

Kalman introduced the concept of system observability [59], [61] in the framework of time-invariant linear systems. It has become one of the fundamental dynamical systems and control concepts [2], [13], [15], [18], [48]-[52], [67], [79], [82], [90]. System observability reflects the system property that permits the determination of its initial state S0, i.e., of its state S(t) at the initial moment t = t0 = 0, from the system response Y(t; S0). The relationship between the system response and the initial state is independent of the input actions on the system over the time interval T0. This is due to the fact that the initial state is independent of the input action that starts at the initial moment. This enables us to consider the system in the free regime, i.e., for the zero input vector. This is fully meaningful if we use the system mathematical model in terms of the deviations. The zero input deviation means that the total input vector is nominal if we use the system mathematical model in terms of the total coordinates. This assumes that we also know the total initial nominal state vector SN0. Once we determine the initial state deviation vector s0 = S0 − SN0 from the system response in the free regime, we easily determine the total initial state vector from S0 = SN0 + s0, and vice versa.
Let S ∈ Rn, A ∈ Rn×n, P(µ) ∈ Rn×(µ+1)M, Y ∈ RN, C ∈ RN×n, and let Iµ ∈ R(µ+1)M be the extended input vector of the system described in terms of the


total coordinates in the ISO form (3.1), (3.2) by (8.1) and (8.2):
$$ \frac{dS(t)}{dt} = A\, S(t) + P^{(\mu)} I^{\mu}(t), \tag{8.1} $$
$$ Y(t) = C\, S(t) + Q\, I(t). \tag{8.2} $$

The fundamental matrix function Φ(., t0) : T −→ Rn×n of the system (8.1), (8.2) is determined in (3.3) by
$$ \Phi(t) = e^{A t} \in \mathbb{R}^{n \times n}, \quad \forall t \in T, \tag{8.3} $$
and has the following property:
$$ \det \Phi(t) \neq 0, \ \forall t \in T, \qquad \Phi(0) = e^{A \cdot 0} = I. \tag{8.4} $$
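A small numerical sketch of (8.3) and (8.4) (SciPy assumed; the 2×2 matrix A is used only for illustration and is not asserted to come from any particular system in the book):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, -2.0]])        # illustrative state matrix

def Phi(t):
    """Fundamental matrix Phi(t) = e^{At}, Equation (8.3)."""
    return expm(A * t)

print(np.allclose(Phi(0.0), np.eye(2)))          # Phi(0) = I, Equation (8.4)
print(np.linalg.det(Phi(1.7)) != 0)              # det Phi(t) never vanishes
print(np.isclose(np.linalg.det(Phi(1.7)),
                 np.exp(np.trace(A) * 1.7)))     # Liouville: det e^{At} = e^{tr(A) t}
```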

The system motion results from (8.1) after its integration:
$$ S(t; S_0; I^{\mu}) = e^{A t} S_0 + \int_{0}^{t} e^{A (t - \tau)} P^{(\mu)} I^{\mu}(\tau)\, d\tau, \quad S_0 = S(0). \tag{8.5} $$

0

This and (8.2) determine the system response:
$$ Y(t; S_0; I^{\mu}) = C e^{A t} S_0 + C \int_{0}^{t} e^{A (t - \tau)} P^{(\mu)} I^{\mu}(\tau)\, d\tau + Q\, I(t). \tag{8.6} $$

0

Let us recall Conditions 38, 47, 60, 67, and 77. They are accepted and summarized as:

Claim 128 The nominal data are known
For any given desired total output Yd(t) of the system, the nominal state vector SN(t) and the nominal input vector IN(t) are known at every t ∈ T0. This assumption is valid in all that follows in the book.

8.2 Observability definition in general

We present the observability definitions in a form valid for all types of the systems studied herein.

Definition 129 Observability of the linear time-invariant continuous-time dynamical system under the action of the known input vector

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 135 — #146

i

8.2. OBSERVABILITY DEFINITION IN GENERAL

i

135

An initial state S0 ∈ Rn at t0 = 0 of a linear time-invariant continuoustime dynamical system is observable if and only if the system output response Y(.; S0 ; Iµ ) on T0 to the known extended input vector Iµ (t) on T0 uniquely determines S0 . The system is observable if and only if every its state is observable. Since the definition of the system observability does not care about the input action then it permits us to accept that I(t) is known at every t ∈ T0 , which we do. Then the left hand sides of the following equalities obtained from (8.6): Z t µ µ e Y(t; S0 ; I ) = Y(t; S0 ; I ) − C eA(t−τ ) P (µ) Iµ (τ )dτ − QI(t) = CeAt S0 , 0

(8.7) is also known as soon as the response Y(t; S0 ; Iµ ) is known for a given I(t), e S0 ; Iµ ) is known. Its relationship with S0 is defined exclusively by i.e., Y(t; CeAt independently of I(t). The sense of the left hand side of Equation (8.7) is that the system complete response is decreased by that part of the response resulted from the input action. Their difference expresses the part of the response resulting from the initial state influence on the response. It is shown by the right hand side of Equation (8.7). If the input I(t) is known at every t ∈ T0 then all what holds for Y(t; S0 ; 0(µ+1)M ) holds also e S0 ; Iµ ). for Y(t; We accepted the validity of Claim 128 that the nominal vector functions SN (t) and IN (t) are determined for a given nominal Yd (t). We can select I(t) ≡ IN (t). Then the knowledge of the deviations s, i and y, s = S − SN , i = I − IN , y = Y − Yd ,

(8.8)

determines the total values S, I and Y. The system (8.1), (8.2) in the nominal regime takes the following form: dSN (t) = ASN (t) + P (µ) IµN (t), dt Yd (t) = CSN (t) + QIN (t),

(8.9) (8.10)

In terms of the deviations s, v and y Equations (8.1), (8.2), (8.5), and (8.6) become, respectively, for I(t) ≡ IN (t) that implies i(t) ≡ 0M : ds(t) = As(t), dt y(t) = Cs(t),

(8.11) (8.12)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 136 — #147

i

136

i

CHAPTER 8. OBSERVABILITY AND STABILITY s(t; s0 ; 0(µ+1)M ) = Φ (t) s0 = eAt s0 , s0 = s(0), At

y(t; s0 ; 0(µ+1)M ) = CΦ (t) s0 = Ce s0 .

(8.13) (8.14)

The comparison of (8.11) and (8.12) with (8.1) and (8.2), or (8.13), (8.14) with (8.5), (8.6), respectively, verifies that they have the same matrices, they are of the same order, structure and form so that they have the same qualitative dynamical properties in general, and the same observability property in particular. Only the matrices A and C determine the relationship between the system response and the initial state. This analysis justifies to consider the system in terms of the deviations in the free regime if we analyze the system observability. With this in mind we will mainly study the system observability by using the system model in the free regime and expressed in terms of the deviations. Definition 130 Observability of the linear time-invariant continuoustime dynamical system An initial state s0 ∈ Rn at t0 = 0 of a linear time-invariant continuoustime dynamical system is observable if and only if the system output response Y(.; s0 ; 0(µ+1)M ) in the free regime on T0 uniquely determines the initial state vector s0 . The system is observable if and only if every its state is observable.
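As a preview of the rank criteria established in the next section (Theorem 131), here is a hedged NumPy sketch that tests observability of a pair (A, C) through the observability matrix; the matrices used are hypothetical illustrations, not examples taken from the book:

```python
import numpy as np

def is_observable(A, C):
    """Rank test on the observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    blocks = [C @ np.linalg.matrix_power(A, k) for k in range(n)]
    O = np.vstack(blocks)                     # nN x n observability matrix
    return np.linalg.matrix_rank(O) == n

A = np.array([[0.0, 1.0], [-1.0, -2.0]])
print(is_observable(A, np.array([[1.0, 0.0]])))   # True: this output determines the whole state
print(is_observable(A, np.array([[1.0, 1.0]])))   # False: [1, 1] is a left eigenvector of A, so one mode is hidden
```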

8.3

Observability criterion in general

The general observability criterion reads: Theorem 131 Observability criterion in general For the linear dynamical system (8.1), (8.2) to be observable it is necessary and sufficient that any of the following equivalent conditions holds: 1. The observability Gram matrix GOB (t) , Zt GOB (t) =

ΦT (t) C T CΦ (t) dt,

(8.15)

0

is nonsingular for any t ∈ InT0 . 2. All columns of CΦ (t) are linearly independent on [0,t] for any t ∈ InT0 . 3. All columns of C(sIN − A)−1 are linearly independent on C.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 137 — #148

i

8.3. OBSERVABILITY CRITERION IN GENERAL 4. The nN × n observability matrix OOBS ,   C  CA    2  ∈ RnN ×n , CA OOBS =     ···  CAn−1

i

137

(8.16)

has the full rank n, rankOOBS = n.

(8.17)

5. For every complex number s ∈ C, equivalently for every eigenvalue si (A) of the matrix A, ∀i = 1, 2, ..., n, the (n + N ) × n complex matrix OO (s) ,   sI − A OO (s) = ∈ R(n+N )×n , (8.18) C has the full rank n, rankOO (s) = n, ∀s ∈ C.

(8.19)

The proof of this theorem is given in Appendix D.3. Note 132 The form of Theorem 131 has been well known. The novelty of this theorem is its validity proved in the sequel for all five classes of the systems studied herein instead of only for the ISO systems and for the IO systems formally transformed into ISO systems by loosing the physical sense of the state variables if µ > 0 as it has been known so far. For more details see in the sequel Comment 144, Section 9.1. Comment 133 Let the columns of the matrix C be linearly dependent. Let at a moment τ ∈ T0 all columns of CΦ (τ ) be linearly independent. The linearity and the time invariance of the linear time-invariant continuoustime system imply the continuity of the system motion and of the system response in the free regime. This implies the existence of ε > 0 such that all columns of CΦ (t) are linearly independent at any moment t ∈]τ − ε, τ + ε[. However, this does not guarantee their independence at every t ∈ T0 . Since all columns of the matrix C are not linearly independent then all columns of CΦ (t) at the initial instant, i.e., for t = t0 = 0 they are not linearly independent because Φ (0) = I and CΦ (0) = CI = C. In spite of that, all columns of CΦ (τ ) are linearly independent on T0 .

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 138 — #149

i

138

i

CHAPTER 8. OBSERVABILITY AND STABILITY

Comment 134 System response and its derivatives Let the system (8.1), (8.2) be in the free regime: ds(t) = As(t), dt y(t) = Cs(t).

(8.20) (8.21)

Its motion s (t; s0 ; 0M ) and its output response y (t; s0 ; 0M ) have the following well known forms in view of (8.3):  s t; s0 ; 0(µ+1)M = eAt s0 = Φ (t, t0 ) s0 , (8.22)  At y t; s0 ; 0(µ+1)M = Cs(t) = Ce s0 = CΦ (t, t0 ) s0 . (8.23) The system response (D.8) determines directly and uniquely its derivatives:  y(i) t; s0 ; 0(µ+1)M = CΦ(i) (t, t0 ) s0 = CAi eAt s0 , ∀i = 0, 1, ..., n − 1, ..., ∀t ∈ [t0 , t1 ] ,

(8.24)

It is now clear that for any initial state vector s0 ∈ Rn the knowledge of the matrices A and C determines completely, in the free regime,   the system motion s t; s0 ; 0(µ+1)M , the system response y t; s0 ; 0(µ+1)M , all   their derivatives s(k) t; s0 ; 0(µ+1)M and y(k) t; s0 ; 0(µ+1)M at every moment t ∈ T0 . This analysis permits the following: Claim 135 Simple proof of the condition 4. of Theorem 131, Proof. Let the system response y (t; s0 ; 0M ) = CeAt s0 = CΦ (t) s0 , ∀ (t, s0 ) ∈ T0 × Rn , in (D.8) be known at every t ∈ T0 . Necessity. Let the system (8.1), (8.2), (Sections 8.1), be observable. Definition 130, (Section 8.2), holds. The vector form of Equations (8.24) at the initial moment t = t0 = 0 reads for every s0 ∈ Rn :     y (0; s0 ; 0M ) C  y(1) (0; s0 ; 0M )      = yn−1 =  CA s0 = OOBS s0 . (8.25) 0     ... ... CAn−1 y(n−1) (0; s0 ; 0M ) {z } | OOBS

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 139 — #150

i

8.4. OBSERVABILITY AND STABILITY

i

139

The system observability guarantees both the knowledge of the left-hand side of this vector equation and the existence of its unique solution s_0; hence the system observability guarantees the full rank n of the matrix O_OBS ∈ R^{nN×n}, which proves the necessity of condition 4 of Theorem 131.
Sufficiency. Let condition 4 of Theorem 131 hold. The full rank n of the matrix O_OBS guarantees that the matrix O_OBS^T O_OBS ∈ R^{n×n} also has the full rank n:

\[ \operatorname{rank}\left(O_{OBS}^{T} O_{OBS}\right) = n, \quad \text{i.e.,} \quad \det\left(O_{OBS}^{T} O_{OBS}\right) \neq 0. \]

This justifies introducing and using the left inverse

\[ O_{OBS}^{+} = \left(O_{OBS}^{T} O_{OBS}\right)^{-1} O_{OBS}^{T} \in \mathbb{R}^{n \times nN} \qquad (8.26) \]

of the matrix O_OBS. It has the following property:

\[ O_{OBS}^{+}\, O_{OBS} = I. \qquad (8.27) \]

After the multiplication of Equation (8.25) on the left by O_OBS^{+} (8.26), the equation transforms into

\[ O_{OBS}^{+} O_{OBS}\, s_0 = O_{OBS}^{+}\, y_0^{n-1}, \quad \forall s_0 \in \mathbb{R}^{n}, \]

the final form of which reads

\[ s_0 = O_{OBS}^{+}\, y_0^{n-1}, \quad \forall s_0 \in \mathbb{R}^{n}, \]

due to (8.27). The preceding equation proves the observability of every initial state vector s_0 ∈ R^n. The system is observable in view of Definition 130.

Theorem 131 resolves completely the problem of the necessary and sufficient conditions for the observability of all systems that can be set in the form (8.1), (8.2), as shown in the sequel.
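The construction (8.25)-(8.27) can be mimicked numerically. A minimal sketch, assuming NumPy and an illustrative observable pair (A, C) that is not taken from the text:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]
s0 = np.array([0.7, -1.3])                       # "unknown" initial state

# Stack y(0), y'(0), ..., y^(n-1)(0) in the free regime: y^(i)(0) = C A^i s0.
y_derivs = np.concatenate([C @ np.linalg.matrix_power(A, i) @ s0 for i in range(n)])

O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])   # O_OBS
O_plus = np.linalg.inv(O.T @ O) @ O.T            # left inverse (8.26)
print(np.allclose(O_plus @ y_derivs, s0))        # True: s0 recovered via (8.27)
```

The printed True confirms that, when O_OBS has full rank, the left inverse recovers the initial state exactly from the stacked output derivatives.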

8.4 Observability and stability

8.4.1 System stability

The Lyapunov stability theory is established and developed directly only for the ISO systems (3.1), (3.2). The book [40, Part III, Chapter 13: Lyapunov stability] broadens the Lyapunov stability concept, properties, method, methodology and theorems directly to the IO systems (2.1) and to the IIO


systems (6.1), (6.2). They incorporate the direct application of the Lyapunov stability theory also to the HISO systems (5.1), (5.2) and to the EISO systems (4.1), (4.2). This is due to the facts that in the free regime the state equation of the HISO system (5.1), (5.2) coincides with the state equation of the IO system (2.1), and that in the free regime the state equation of the EISO system (4.1), (4.2) coincides with the state equation of the ISO system (3.1), (3.2). The Lyapunov stability properties concern the system behavior in the nominal regime in the total coordinates, i.e., in the free regime in terms of the deviations of all variables. The mathematical models of all these systems can be transformed into the system (8.1), (8.2), i.e., into

\[ \frac{dS(t)}{dt} = AS(t) + P^{(\mu)} I_N^{\mu}(t), \qquad (8.28) \]
\[ Y_d(t) = CS(t) + Q I_N(t), \qquad (8.29) \]

in terms of the total coordinates, and into the system (8.11), (8.12),

\[ \frac{ds(t)}{dt} = As(t), \qquad (8.30) \]
\[ y(t) = Cs(t), \qquad (8.31) \]

in terms of the deviations of all variables in the free regime. The solution of the system (8.32), (8.33),

\[ \frac{dS_N(t)}{dt} = AS_N(t) + P^{(\mu)} I_N^{\mu}(t), \qquad (8.32) \]
\[ Y_d(t) = CS_N(t) + Q I_N(t), \qquad (8.33) \]

for both the nominal initial state S_N(0) = S_{N0} and the nominal input vector I_N(t), is the system nominal motion. The system forms (8.28), (8.29) and (8.30), (8.31) describe the system under the action of the nominal input I_N(t) and under the influence of arbitrary initial conditions. They are appropriate for the analysis both of the stability properties of the system nominal motion and of the stability properties of the system zero equilibrium state s_e = 0_n. The nominal motion S_N(t) of the system (8.1), (8.2) and the zero equilibrium state s_e = 0_n of the system (8.30), (8.31) have the same stability properties.

Definition 136 A square matrix A ∈ R^{n×n} is a stable (or stability, or Hurwitz) matrix if and only if the real parts of all its eigenvalues are negative.


The fundamental result of the Lyapunov stability theory for the time-invariant continuous-time systems (8.28), (8.29) and (8.30), (8.31) is the well-known Lyapunov matrix theorem [40, Part III, Chapter 13: Lyapunov stability]:

Theorem 137 Lyapunov matrix theorem
For the zero equilibrium state s_e = 0_n of the system (8.30), (8.31) to be (globally) asymptotically stable, or equivalently, for the square matrix A ∈ R^{n×n} to be stable, it is both necessary and sufficient that for any positive definite symmetric matrix G, G = G^T ∈ R^{n×n}, the matrix solution H of the Lyapunov matrix equation

\[ A^{T} H + H A = -G \qquad (8.34) \]

is also a positive definite symmetric matrix and the unique solution to (8.34).

8.4.2 System stability and observability

The condition of positive definiteness of the matrix G in the Lyapunov matrix equation (8.34) can sometimes be restrictive. Kalman relaxed it for the observable systems (8.28), (8.29) and (8.30), (8.31). The pair (A, C) is an observable pair if and only if the system (8.28), (8.29), equivalently the system (8.30), (8.31), is observable.

Theorem 138 Kalman relaxation of the Lyapunov matrix theorem
For the zero equilibrium state s_e = 0_n of the observable system (8.30), (8.31) to be (globally) asymptotically stable, or equivalently, for the square matrix A ∈ R^{n×n} to be stable in the case that (A, C) is an observable pair, it is both necessary and sufficient that the matrix solution H of the relaxed Lyapunov matrix equation (8.35),

\[ A^{T} H + H A = -C^{T} C, \qquad (8.35) \]

is a positive definite symmetric matrix and the unique solution to (8.35).

The proof of this theorem is in Appendix D.4.

Comment 139 While in Theorem 137 the matrix G is a positive definite symmetric matrix, in Theorem 138 the matrix C^T C is only a positive semidefinite symmetric matrix. Theorem 138 thus slightly relaxes the condition of Theorem 137. The relaxation is possible due to the system observability.
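A minimal numerical sketch of Theorem 138, assuming NumPy and SciPy are available; the matrices A and C below are illustrative placeholders, not taken from the text:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative data: a Hurwitz A (eigenvalues -1, -2) and an output matrix C
# such that (A, C) is an observable pair.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

# Relaxed Lyapunov equation  A^T H + H A = -C^T C   (8.35)
H = solve_continuous_lyapunov(A.T, -C.T @ C)

# H should be symmetric positive definite when A is stable and (A, C) observable.
print(np.allclose(H, H.T))                    # symmetry
print(np.all(np.linalg.eigvalsh(H) > 0))      # positive definiteness
```

Both checks print True for this placeholder pair, in agreement with the theorem.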


Chapter 9

Various systems observability

9.1 IO systems observability

In order to facilitate reading and following the text and formulae we will use the compact form (2.15),

\[ A^{(\nu)} Y^{\nu}(t) = H^{(\mu)} I^{\mu}(t), \quad \forall t \in T_0, \ \nu \ge 1, \ 0 \le \mu \le \nu, \qquad (9.1) \]

of the IO system (2.15) in terms of the total coordinates, or the compact form

\[ A^{(\nu)} y^{\nu}(t) = H^{(\mu)} i^{\mu}(t), \quad \forall t \in T_0, \ \nu \ge 1, \ 0 \le \mu \le \nu, \qquad (9.2) \]

in terms of the deviations (2.54), (9.3), of all variables,

\[ i = I - I_N, \qquad y = Y - Y_d. \qquad (9.3) \]

For the observability study we can consider the system in the free regime (Section 8.1), in which (9.2) becomes

\[ A^{(\nu)} y^{\nu}(t) = 0_N, \quad \forall t \in T_0, \ \nu \ge 1, \ 0 \le \mu \le \nu. \qquad (9.4) \]

In view of Note 28, Equation (2.16) (Section 2.1), the state vector deviation s ∈ R^n of the system (9.2), hence of its free-regime form (9.4), has the following form:

\[ s = y^{\nu-1} = \begin{bmatrix} y^{T} & y^{(1)T} & \cdots & y^{(\nu-1)T} \end{bmatrix}^{T} \in \mathbb{R}^{n}, \quad n = \nu N. \qquad (9.5) \]


This permits us to apply directly Definition 129 to the observability of the system (9.1), equivalently, Definition 130 to the system (9.2). The observability criterion for the IO system (9.1), equivalently for the system (9.2), has the simplest possible form, which is the same for both because they have the same properties:

Theorem 140 Observability criterion for the IO system (2.15), equivalently for (9.2), (9.4)
The IO system (2.15), equivalently (9.2), in total coordinates, and its form (9.2), i.e., (9.4), in terms of the deviations of all variables, is invariably observable; i.e., it is observable independently of all its matrices A_k, k = 0, 1, ..., ν, and B_k, k = 0, 1, ..., µ, i.e., independently of its matrices A^{(ν)} and B^{(µ)}.

The proof is in Appendix D.5.

Comment 141 The statement of Theorem 140 is physically clear. System observability signifies the system property that every initial state vector s_0 = y_0^{ν−1} can be determined from the system response y(t; s_0; 0_{(µ+1)M}),

\[ y(t; s_0; 0_{(\mu+1)M}) = Ce^{At} s_0 = Ce^{At} y_0^{\nu-1}, \quad \forall t \in T_0. \]

The knowledge of the system response y(t; s_0; 0_{(µ+1)M}) determines all its derivatives y^{(k)}(t; s_0; 0_{(µ+1)M}), ∀k = 1, 2, ..., ∀t ∈ T_0; hence the system response determines y^{ν−1}(t; s_0; 0_{(µ+1)M}), ∀t ∈ T_0, and therefore the initial state vector y_0^{ν−1} = s_0, fully independently of all the matrices A_k, k = 0, 1, ..., ν, and B_k, k = 0, 1, ..., µ, i.e., independently of the matrices A^{(ν)} and B^{(µ)}. The result is that every IO system is invariably observable.

Conclusion 142 Every IO system (2.15), equivalently (9.2) and (9.4), is (invariably) observable. The problem of the observability test does not exist in the framework of the IO systems (2.15), equivalently (9.2) and (9.4).

Note 143 Let us remind ourselves that Definition 129, Definition 130, and Theorem 140 concern the real, physical initial total state vector S_0 = Y_0^{ν−1}, i.e., its deviation s_0 = y_0^{ν−1}, and the system real, physical output response Y(t; Y_0^{ν−1}; 0_M), i.e., its deviation y(t; y_0^{ν−1}; 0_M), in the free regime.
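A minimal numerical illustration of Conclusion 142, assuming NumPy; the coefficient matrices A_0, A_1 are arbitrary random placeholders, precisely because the conclusion is that they do not matter:

```python
import numpy as np

# Free-regime state-space form of an IO system with nu = 2, N = 2 (so n = nu*N = 4):
# the physical state is s = [y; y'], A is the block companion matrix, C selects y.
N, nu = 2, 2
A0 = np.random.randn(N, N)          # arbitrary: observability must not depend on it
A1 = np.random.randn(N, N)          # arbitrary as well (A_nu = I here)
A = np.block([[np.zeros((N, N)), np.eye(N)],
              [-A0,              -A1     ]])
C = np.hstack([np.eye(N), np.zeros((N, N))])   # y = C s

n = nu * N
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
print(np.linalg.matrix_rank(O))   # always n = 4, whatever A0 and A1 are
```

Already the first two block rows of O equal the identity matrix, so the full rank is guaranteed for every choice of A_0, A_1.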


Comment 144 The formal mathematical algorithm defined by Equations (C.4)-(C.9) transforms (see Section C.1 herein or, for more details, Appendix B in [40]) the IO system (2.1) into the ISO system (3.1), (3.2), i.e., into

\[ \frac{dX(t)}{dt} = AX(t) + HI(t), \quad \forall t \in T_0, \qquad (9.6) \]
\[ Y(t) = CX(t) + QI(t), \quad \forall t \in T_0, \qquad (9.7) \]

in which the state vector X is determined by (C.4)-(C.10) (Section C.1):

\[ X = \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_{\nu-1} \\ X_\nu \end{bmatrix} = \begin{bmatrix} Y - H_\nu I \\ \dot X_1 + A_{\nu-1} Y - H_{\nu-1} I \\ \vdots \\ \dot X_{\nu-2} + A_2 Y - H_2 I \\ \dot X_{\nu-1} + A_1 Y - H_1 I \end{bmatrix}, \qquad (9.8) \]

which is different from the state vector (4.5) (Section 4.1):

\[ X = \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_{\nu-1} \\ X_\nu \end{bmatrix} = \begin{bmatrix} Y \\ Y^{(1)} \\ \vdots \\ Y^{(\nu-2)} \\ Y^{(\nu-1)} \end{bmatrix}. \]

The state vector X (9.8) depends on the system output vector Y and on the system input vector I, although their physical natures are most often very different, so that the state vector X and its subvectors X_1, X_2, ..., X_ν are physically meaningless if there exists k ∈ {1, 2, ..., ν} such that H_k ≠ O_{N,M}. Equations (C.4)-(C.10) together with (2.15) define the matrices A (C.11), B = H (C.12), C (C.13) and Q (C.14) (Section C.1). They determine the (n + N) × n complex matrix O_O(s) (D.62),

\[ O_O(s) = \begin{bmatrix} sI_n - A \\ C \end{bmatrix} \in \mathbb{C}^{(n+N)\times n}, \qquad (9.9) \]


which takes the following form:

\[ O_O(s) = \begin{bmatrix}
sI_N + A_{\nu-1} & -I_N & O_N & \cdots & O_N & O_N & O_N \\
A_{\nu-2} & sI_N & -I_N & \cdots & O_N & O_N & O_N \\
\vdots & \vdots & \vdots & & \vdots & \vdots & \vdots \\
A_3 & O_N & O_N & \cdots & -I_N & O_N & O_N \\
A_2 & O_N & O_N & \cdots & sI_N & -I_N & O_N \\
A_1 & O_N & O_N & \cdots & O_N & sI_N & -I_N \\
A_0 & O_N & O_N & \cdots & O_N & O_N & sI_N \\
I_N & O_N & O_N & \cdots & O_N & O_N & O_N
\end{bmatrix}. \]

Its rank is obtained from

\[ \operatorname{rank} O_O(s) = \operatorname{rank}\begin{bmatrix} sI_n - A \\ C \end{bmatrix}. \]

The last block row of O_O(s) equals [ I_N  O_N  ⋯  O_N ]. Selecting it together with the first ν − 1 block rows yields the n × n submatrix

\[ \begin{bmatrix}
I_N & O_N & O_N & \cdots & O_N \\
sI_N + A_{\nu-1} & -I_N & O_N & \cdots & O_N \\
A_{\nu-2} & sI_N & -I_N & \cdots & O_N \\
\vdots & & & \ddots & \vdots \\
A_2 & O_N & \cdots & sI_N & -I_N
\end{bmatrix}, \]

which is block lower triangular with the nonsingular diagonal blocks I_N, −I_N, ..., −I_N. Hence

\[ \operatorname{rank} O_O(s) = \operatorname{rank}\begin{bmatrix} sI_n - A \\ C \end{bmatrix} = \nu N = n, \quad \forall (s, A) \in \mathbb{C} \times \mathbb{R}^{n \times n}. \]
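A minimal numerical check of this rank computation, assuming NumPy; the choice ν = 3, N = 2 and the random coefficient matrices A_0, A_1, A_2 are illustrative placeholders:

```python
import numpy as np

N, nu = 2, 3
n = nu * N
A_blocks = [np.random.randn(N, N) for _ in range(nu)]    # A_0, A_1, A_2 (arbitrary)

# Observer-type companion form used in the transformation (C.11), (C.13).
A = np.zeros((n, n))
for i, Ak in enumerate(reversed(A_blocks)):              # rows carry -A_{nu-1}, ..., -A_0
    A[i*N:(i+1)*N, :N] = -Ak
    if i < nu - 1:
        A[i*N:(i+1)*N, (i+1)*N:(i+2)*N] = np.eye(N)
C = np.hstack([np.eye(N), np.zeros((N, n - N))])

for s in np.concatenate([np.linalg.eigvals(A), [0.0, 1.0 + 2.0j]]):
    O_O = np.vstack([s * np.eye(n) - A, C])
    assert np.linalg.matrix_rank(O_O) == n               # full rank for every tested s
print("rank O_O(s) = n for all tested s")
```

The assertion never fails, regardless of the random coefficients, in agreement with the block-triangular argument above.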

The ISO system (9.6), (9.7), (9.8) is also observable (Theorem 131, Section 8.3) independently of its matrices A_j and B_k, i.e., it is invariably observable. The result seems the same as that expressed in Theorem 140, but it is not, as explained in what follows. The explanation lies in the fact that the obtained ISO system (9.6), (9.7), (9.8) is also considered in the free regime, i.e., for the zero input vector, I(t) = 0_M, ∀t ∈ T_0, which implies I^{(k)}(t) = 0_M, ∀k = 0, 1, 2, ..., ν, i.e., I^ν(t) = 0_{(ν+1)M}, ∀t ∈ T_0.


The state subvectors then become, due to the free regime:

\[ X_1 = Y, \qquad (9.10) \]
\[ X_2 = Y^{(1)} + A_{\nu-1} Y, \qquad (9.11) \]
\[ X_3 = Y^{(2)} + A_{\nu-1} Y^{(1)} + A_{\nu-2} Y, \qquad (9.12) \]
\[ \vdots \qquad (9.13) \]
\[ X_{\nu-2} = Y^{(\nu-3)} + A_{\nu-1} Y^{(\nu-4)} + \cdots + A_3 Y, \qquad (9.14) \]
\[ X_{\nu-1} = Y^{(\nu-2)} + A_{\nu-1} Y^{(\nu-3)} + \cdots + A_2 Y, \qquad (9.15) \]
\[ X_{\nu} = Y^{(\nu-1)} + A_{\nu-1} Y^{(\nu-2)} + \cdots + A_1 Y, \qquad (9.16) \]

and the state vector of the IO system (2.15) transformed into the ISO system (3.1), (3.2) is the vector X defined by (9.8) and by (9.10)-(9.16). It is not the physical state vector S_IO = Y^{ν−1} of the original IO system (2.1): X ≠ Y^{ν−1} = S_IO. It is very rare for the mathematical subvectors X_k, (9.10)-(9.16), of the state vector X to have a physical sense, i.e., for the initial state vector X_0 to be physically meaningful. The formal mathematical approach therefore leaves open the problem of the observability of the real, physical initial state vector S_0 = Y_0^{ν−1} of the original IO system (2.1). This illustrates the advantage of using the new notation (Section 1.2) and of the generalization of the state concept (Section 1.4, Definition 14), which enabled Theorem 140, i.e., the complete solution of the physical state observability problem for all IO systems (2.1).

Exercise 145 Test the observability of the selected IO physical plant in Exercise 40, Section 2.3.

9.2 ISO systems observability

Definition 129 and Definition 130 hold for the ISO systems. The observability criterion for the ISO systems (3.1), (3.2), equivalently for the ISO systems (3.39), (3.40), is the same for both of them because they have the same matrices, orders, and structures. The following theorem is well known:

Theorem 146 Observability criterion for the ISO system (3.1), (3.2)
For the ISO system (3.1), (3.2) to be observable it is necessary and sufficient that any of the following equivalent conditions holds:


1. The observability Gram matrix G_OB(t),

\[ G_{OB}(t) = \int_0^t \Phi^{T}(\tau)\, C^{T} C\, \Phi(\tau)\, d\tau, \qquad (9.17) \]

is nonsingular for any t ∈ InT_0.

2. All columns of CΦ(t) are linearly independent on [0, t] for any t ∈ InT_0.

3. All columns of C(sI_n − A)^{−1} are linearly independent on C.

4. The nN × n observability matrix O_OBS,

\[ O_{OBS} = \begin{bmatrix} C \\ CA \\ CA^{2} \\ \vdots \\ CA^{n-1} \end{bmatrix} \in \mathbb{R}^{nN \times n}, \qquad (9.18) \]

has the full rank n,

\[ \operatorname{rank} O_{OBS} = n. \qquad (9.19) \]

5. For every complex number s ∈ C, equivalently for every eigenvalue s_i(A) of the matrix A, ∀i = 1, 2, ..., n, the (n + N) × n complex matrix O_O(s),

\[ O_O(s) = \begin{bmatrix} sI_n - A \\ C \end{bmatrix} \in \mathbb{C}^{(n+N)\times n}, \qquad (9.20) \]

has the full rank n,

\[ \operatorname{rank} O_O(s) = n, \quad \forall s \in \mathbb{C}. \qquad (9.21) \]

Proof. The state vector S of the ISO system (3.1), (3.2) is its vector X, and the state vector s of the ISO system (3.39), (3.40) is its vector x. The input vectors of these systems are I and i, respectively. When we replace S by X and set µ = 0, which reduces I^µ to I^0 = I in (8.1), (8.2), then (8.1), (8.2) become (3.1), (3.2). When we replace s by x in (8.11), (8.12) for v = 0_m, then they become (3.39), (3.40). This proves that Theorem 131 holds also for the ISO systems (3.1), (3.2), equivalently for the ISO systems (3.39), (3.40). Theorem 146 is Theorem 131, which ends the proof.

Unlike the observability test for the IO systems, which is trivial, the observability test for the ISO systems is not trivial at all. The observability property of the latter depends essentially on the properties of the system matrices A and C, as shown by the above theorem, while it is independent of them for the IO systems (Theorem 140, Comment 141, and Conclusion 142).
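A minimal numerical sketch of condition 1 of Theorem 146 (the observability Gramian), assuming NumPy and SciPy; the pair (A, C) is an illustrative placeholder:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative observable ISO pair (A, C), not taken from the book.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

# G_OB(t) = int_0^t Phi^T(tau) C^T C Phi(tau) dtau,  Phi(tau) = e^{A tau}   (9.17)
taus = np.linspace(0.0, 1.0, 401)
integrand = np.array([expm(A * tau).T @ C.T @ C @ expm(A * tau) for tau in taus])
G_OB = np.trapz(integrand, taus, axis=0)

print(np.linalg.matrix_rank(G_OB))     # 2 = n, i.e., nonsingular => observable
```

The trapezoidal quadrature is only a rough approximation of the integral, but it suffices to expose singularity or nonsingularity of the Gramian for such a small example.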


Exercise 147 Test the observability of the selected ISO physical plant in Exercise 48, Section 3.3.

9.3 EISO systems observability

Definition 129 and Definition 130 are directly applicable in this framework. The observability criterion for the EISO system (4.1), (4.2), equivalently for the EISO system (4.71), (4.72), is the same as for the linear dynamical system (8.1), (8.2):

Theorem 148 Observability criterion for the EISO system (4.1), (4.2), equivalently for the EISO system (4.71), (4.72)
For the EISO system (4.1), (4.2), equivalently for the EISO system (4.71), (4.72), to be observable it is necessary and sufficient that any of the following equivalent conditions holds:

1. The observability Gram matrix G_OB(t),

\[ G_{OB}(t) = \int_0^t \Phi^{T}(\tau)\, C^{T} C\, \Phi(\tau)\, d\tau, \qquad (9.22) \]

is nonsingular for any t ∈ InT_0.

2. All columns of CΦ(t) are linearly independent on [0, t] for any t ∈ InT_0.

3. All columns of C(sI_n − A)^{−1} are linearly independent on C.

4. The nN × n observability matrix O_OBS,

\[ O_{OBS} = \begin{bmatrix} C \\ CA \\ CA^{2} \\ \vdots \\ CA^{n-1} \end{bmatrix} \in \mathbb{R}^{nN \times n}, \qquad (9.23) \]

has the full rank n,

\[ \operatorname{rank} O_{OBS} = n. \qquad (9.24) \]

5. For every complex number s ∈ C, equivalently for every eigenvalue s_i(A) of the matrix A, ∀i = 1, 2, ..., n, the (n + N) × n complex matrix O_O(s),

\[ O_O(s) = \begin{bmatrix} sI_n - A \\ C \end{bmatrix} \in \mathbb{C}^{(n+N)\times n}, \qquad (9.25) \]

has the full rank n,

\[ \operatorname{rank} O_O(s) = n, \quad \forall s \in \mathbb{C}. \qquad (9.26) \]


Proof. The EISO system (4.1), (4.2) is the system (8.1), (8.2). Theorem 148 is Theorem 131, which is valid for the system (8.1), (8.2).

Exercise 149 Test the observability of the selected EISO physical plant in Exercise 61, Section 4.3.

9.4 HISO systems observability

The state vector S (5.3) of the HISO system (5.1), (5.2) (Section 5.1) is the extended vector R^{α−1} ∈ R^n, n = αρ, and the state vector s of the HISO system (5.44), (5.45) is the extended vector r^{α−1} ∈ R^n:

\[ S = R^{\alpha-1} \in \mathbb{R}^{n}, \quad n = \alpha\rho; \qquad s = r^{\alpha-1} \in \mathbb{R}^{n}. \qquad (9.27) \]

Let us recall that the HISO system description (5.1), (5.2) is in the compact form. Definition 129 and Definition 130 (Section 8.2) are valid for the HISO system (5.1), (5.2), i.e., (5.44), (5.45), respectively.

Definition 150 Observability of the HISO systems (5.1), (5.2), i.e., of the HISO systems (5.44), (5.45)
An initial state S_0 ∈ R^n at t_0 = 0 of the HISO system (5.1), (5.2) is observable if and only if the output system response Y(·; S_0; I^µ) on T_0 to the known extended input vector I^µ(t) on T_0 uniquely determines S_0. An initial state s_0 ∈ R^n at t_0 = 0 of the HISO system (5.44), (5.45) is observable if and only if the output system response Y(·; s_0; 0_{(µ+1)M}) in the free regime on T_0 uniquely determines the initial state vector s_0. The system is observable if and only if every one of its states is observable.

Let

\[ A = \begin{bmatrix}
O_\rho & I_\rho & O_\rho & \cdots & O_\rho \\
O_\rho & O_\rho & I_\rho & \cdots & O_\rho \\
\vdots & \vdots & \vdots & & \vdots \\
O_\rho & O_\rho & O_\rho & \cdots & I_\rho \\
-A_\alpha^{-1}A_1 & -A_\alpha^{-1}A_2 & -A_\alpha^{-1}A_3 & \cdots & -A_\alpha^{-1}A_{\alpha-1}
\end{bmatrix} \qquad (9.28) \]


\[ P^{(\mu)} = \begin{bmatrix} O_{\rho,\rho\mu} \\ O_{\rho,\rho\mu} \\ \vdots \\ O_{\rho,\rho\mu} \\ A_\alpha^{-1} H^{(\mu)} \end{bmatrix} \in \mathbb{R}^{n \times \rho(\mu+1)}, \qquad (9.29) \]

\[ H^{(\mu)} = \begin{bmatrix} H_0 & H_1 & \cdots & H_\mu \end{bmatrix} \in \mathbb{R}^{\rho \times (\mu+1)M}, \qquad (9.30) \]

\[ I^{\mu} = \begin{bmatrix} I \\ I^{(1)} \\ \vdots \\ I^{(\mu-1)} \\ I^{(\mu)} \end{bmatrix} \in \mathbb{R}^{(\mu+1)M}, \qquad (9.31) \]

and

\[ C = R_y^{\alpha-1} = \begin{bmatrix} R_{y0} & R_{y1} & \cdots & R_{y(\alpha-1)} \end{bmatrix} \in \mathbb{R}^{N \times n}. \qquad (9.32) \]

Theorem 151 Observability criterion for the HISO system (5.1), (5.2), equivalently for the HISO system (5.44), (5.45)
For the HISO system (5.1), (5.2), equivalently for the HISO system (5.44), (5.45), to be observable it is necessary and sufficient that any of the following equivalent conditions holds for the matrices A, P^{(µ)}, H^{(µ)}, C and the vector I^µ defined by Equations (9.28)-(9.32), respectively:

1. The observability Gram matrix G_OB(t),

\[ G_{OB}(t) = \int_0^t \Phi^{T}(\tau)\, C^{T} C\, \Phi(\tau)\, d\tau, \qquad (9.33) \]

is nonsingular for any t ∈ InT_0.

2. All columns of CΦ(t) are linearly independent on [0, t] for any t ∈ InT_0.

3. All columns of C(sI_n − A)^{−1} are linearly independent on C.

4. The nN × n observability matrix O_OBS,

\[ O_{OBS} = \begin{bmatrix} C \\ CA \\ CA^{2} \\ \vdots \\ CA^{n-1} \end{bmatrix} \in \mathbb{R}^{nN \times n}, \qquad (9.34) \]

has the full rank n,

\[ \operatorname{rank} O_{OBS} = n. \qquad (9.35) \]

5. For every complex number s ∈ C, equivalently for every eigenvalue s_i(A) of the matrix A, ∀i = 1, 2, ..., n, the (n + N) × n complex matrix O_O(s),

\[ O_O(s) = \begin{bmatrix} sI_n - A \\ C \end{bmatrix} \in \mathbb{C}^{(n+N)\times n}, \qquad (9.36) \]

has the full rank n,

\[ \operatorname{rank} O_O(s) = n, \quad \forall s \in \mathbb{C}. \qquad (9.37) \]

For the proof of the theorem see Appendix D.6.

Note 152 In view of Equation (9.27) we conclude that Definition 150 and Theorem 151 deal with the physical initial total state vector S_0 = R_0^{α−1}, i.e., with its deviation s_0 = r_0^{α−1}, and with the system real, physical output response Y(t; R_0^{α−1}; I^µ), i.e., with its deviation y(t; r_0^{α−1}; 0_{(µ+1)M}) in the free regime.

Exercise 153 Test the observability of the selected HISO physical plant in Exercise 68, Section 5.3.
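A minimal sketch of how the rank test of Theorem 151 can be run once the pair (A, C) of (9.28), (9.32) has been assembled; NumPy is assumed, and the block data below (ρ = 2, α = 3, the lower-order coefficients, A_α and the output blocks R_yk) are illustrative placeholders only:

```python
import numpy as np

rho, alpha = 2, 3
n = alpha * rho

# Placeholder block data: the lower-order coefficients (in the order they enter the
# last block row of (9.28)), the leading coefficient A_alpha, and R_y0, ..., R_y(alpha-1).
A_low   = [np.random.randn(rho, rho) for _ in range(alpha)]
A_alpha = np.eye(rho) + 0.1 * np.random.randn(rho, rho)
Ry      = [np.random.randn(1, rho) for _ in range(alpha)]

# Block companion matrix A of (9.28): identity super-diagonal blocks, last block row
# built from -A_alpha^{-1} A_k.
A = np.zeros((n, n))
for i in range(alpha - 1):
    A[i*rho:(i+1)*rho, (i+1)*rho:(i+2)*rho] = np.eye(rho)
for k, Ak in enumerate(A_low):
    A[-rho:, k*rho:(k+1)*rho] = -np.linalg.solve(A_alpha, Ak)

C = np.hstack(Ry)                                              # (9.32), here N = 1

O_OBS = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
print(O_OBS.shape, np.linalg.matrix_rank(O_OBS))               # observable iff rank = n
```

The sketch only demonstrates the mechanics of condition 4; for a concrete HISO plant the blocks must of course be taken from its model (5.1), (5.2).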

9.5 IIO systems observability

Unlike all previously treated (IO, ISO, EISO, and HISO) systems, the IIO systems (6.1), (6.2) (Section 6.1), i.e., the IIO systems (6.65), (6.66), have the internal state vector S_I = R^{α−1} (1.25), (6.1), and the output state vector S_O = Y^{ν−1} (1.26) (Section 1.4). Their full state vector S_f,

\[ S_f = \begin{bmatrix} S_I \\ S_O \end{bmatrix} = S = \begin{bmatrix} S_1^{T} & S_2^{T} & \cdots & S_\alpha^{T} & S_{\alpha+1}^{T} & \cdots & S_{\alpha+\nu}^{T} \end{bmatrix}^{T} \in \mathbb{R}^{\alpha\rho+\nu N}, \qquad (9.38) \]

is their state vector S (1.27),

\[ S = \begin{bmatrix} R^{\alpha-1} \\ Y^{\nu-1} \end{bmatrix} = \begin{bmatrix} S_I \\ S_O \end{bmatrix} = S_f, \qquad S_I = \begin{bmatrix} S_1 \\ \vdots \\ S_\alpha \end{bmatrix}, \quad S_O = \begin{bmatrix} S_{\alpha+1} \\ \vdots \\ S_{\alpha+\nu} \end{bmatrix}. \qquad (9.39) \]

Its deviation s from the nominal state vector S_N is

\[ s = S - S_N. \qquad (9.40) \]


It is composed of the internal state deviation vector r^{α−1} and of the output state deviation vector y^{ν−1}:

\[ s = \begin{bmatrix} s_I \\ s_O \end{bmatrix} = \begin{bmatrix} s_1^{T} & s_2^{T} & \cdots & s_\alpha^{T} & s_{\alpha+1}^{T} & \cdots & s_{\alpha+\nu}^{T} \end{bmatrix}^{T} = \begin{bmatrix} r^{\alpha-1} \\ y^{\nu-1} \end{bmatrix}. \qquad (9.41) \]

Definition 154 Full state observability of the IIO systems (6.1), (6.2), i.e., of the IIO systems (6.65), (6.66)
An initial total state vector S_0 ∈ R^n at t_0 = 0 of the IIO system (6.1), (6.2) is observable if and only if the output system response Y(·; S_0; I^µ) on T_0 to the known extended input vector I^µ(t) on T_0 uniquely determines S_0. An initial state deviation vector s_0 ∈ R^n at t_0 = 0 of the IIO system (6.65), (6.66) is observable if and only if the output system response Y(·; s_0; 0_{(µ+1)M}) in the free regime on T_0 uniquely determines s_0. The system is observable if and only if every one of its states is observable. This is the system initial full state observability.

This definition determines the full state, i.e., the state, observability of the IIO systems (6.1), (6.2), i.e., of the IIO systems (6.65), (6.66). An essential characteristic of these systems is that they also have the internal state S_I = R^{α−1} and the output state S_O = Y^{ν−1}. This poses the question whether it is meaningful to speak of their observability and, if yes, in which sense. The following definitions reply affirmatively to that question. They open the problem of the criteria for their observability.

Definition 155 Internal state observability of the IIO systems (6.1), (6.2), i.e., of the IIO systems (6.65), (6.66)
An initial total internal state vector S_{I0} ∈ R^{αρ} at t_0 = 0 of the IIO system (6.1), (6.2) is observable if and only if the output system response Y(·; S_{I0}; S_{O0}; I^µ) on T_0 to the known initial total output state vector S_{O0} and the known extended input vector I^µ(t) on T_0 uniquely determines S_{I0}. An initial internal state deviation vector s_{I0} ∈ R^{αρ} at t_0 = 0 of the IIO system (6.65), (6.66) is observable if and only if the output system response Y(·; s_{I0}; s_{O0}; 0_{(µ+1)M}) in the free regime on T_0 to the known initial output state deviation vector s_{O0} uniquely determines s_{I0}. The system is internal state observable if and only if every one of its internal states is observable.

Analogously:


Definition 156 Output state observability of the IIO systems (6.1), (6.2), i.e., of the IIO systems (6.65), (6.66)
An initial total output state vector S_{O0} ∈ R^{νN} at t_0 = 0 of the IIO system (6.1), (6.2) is observable if and only if the output system response Y(·; S_{I0}; S_{O0}; I^µ) on T_0 to the known total internal state vector S_I(t) and the known extended input vector I^µ(t) on T_0 uniquely determines S_{O0}. An initial output state deviation vector s_{O0} ∈ R^{νN} at t_0 = 0 of the IIO system (6.65), (6.66) is observable if and only if the output system response Y(·; 0_{αρ}; s_{O0}; 0_{(µ+1)M}) in the free regime on T_0 to the known internal state vector deviation s_I(t) uniquely determines s_{O0}. The system is output state observable if and only if every one of its output states is observable.

The observability criterion for the IIO systems (6.1), (6.2) and for the IIO systems (6.65), (6.66) is the same for both of them because they have the same matrices, orders, and structures.

Theorem 157 Observability criterion for the IIO system (6.1), (6.2), equivalently for the IIO system (6.65), (6.66)
The IIO system (6.1), (6.2), equivalently the IIO system (6.65), (6.66), is not full state observable.

Appendix D.7 contains the proof of this theorem. We have solved the problem of the observability of the initial full state S_0, equivalently s_0. What are the conditions for the observability of the internal state S_{I0}? The following theorem replies to this question:

Theorem 158 Internal state observability criterion for the IIO system (6.1), (6.2), equivalently for the IIO system (6.65), (6.66)
The IIO system (6.1), (6.2), equivalently the IIO system (6.65), (6.66), is internal state unobservable.

For the proof see Appendix D.8. Since the system is internal state unobservable (Theorem 158), it is also full state unobservable (Theorem 157). We have solved the problem of the initial full state s_0 observability in Theorem 157 and the problem of the initial internal state s_{I0} observability in Theorem 158. These theorems show that every IIO system (6.1), (6.2), equivalently the IIO system (6.65), (6.66), is both internal state and full state unobservable. The criterion for the output state observability has a very simple form.


Theorem 159 Output state observability criterion for the IIO system (6.1), (6.2), equivalently for the IIO system (6.65), (6.66)
The IIO system (6.1), (6.2), equivalently the IIO system (6.65), (6.66), is invariably output state observable, i.e., output state observable independently of the system matrices.

Appendix D.9 presents the proof of this theorem. It signifies that every IIO system (6.1), (6.2), equivalently the IIO system (6.65), (6.66), is output state observable independently of its matrices.

Exercise 160 Test the output state observability of the selected IIO physical plant in Exercise 78, Section 6.3.


Part III

CONTROLLABILITY


Chapter 10

Controllability fundamentals

10.1 Controllability and system regime

Kalman’s concept of the state controllability has become a fundamental control concept, [58]-[63]. E. G. Gilbert [32] generalized it to the MIMO systems. M. L. J. Hautus [48] established for them the simple form of the controllability criterion in the complex domain. J. E. Bertram and P. E. Sarachik [8] broadened the state controllability concept to the output controllability concept. Both the state controllability concept and the output controllability concept consider the system's ability to steer a state or an output from any initial state or any initial output to another state or another output in general, or to the zero state or the zero output in particular, respectively. R. W. Brockett and M. D. Mesarović (Mesarovitch) [12] introduced the concept of functional (output) reproducibility, also called the output function controllability [2, page 313], [18, page 216], [97, pages 72 and 164], in which the target is not a particular output (e.g., the zero output) but a given function representing a reference (desired) output response.

All these concepts concern systems free of any external disturbance action: D(t) ≡ 0_d. They assume the nonexistence of any external perturbation acting on the system. The only external influences on the system are control actions. This explains why they are mainly important for plants, i.e., objects (Definition 9 in Section 1.4).

10.2 Controllability concepts

There are two main controllability concepts of linear dynamical systems:


• The concept of the system state controllability.
• The concept of the system output controllability.

The state controllability concept treats the state vector regardless of its physical sense, i.e., regardless of whether its entries are physical or purely mathematical variables. If the ISO mathematical model of a physical system results directly from the system physical properties, then the state variables (the entries of the state vector) are physical variables. Their state vector is then a physical vector rather than only a mathematical one, and the state controllability concept has both mathematical and physical sense. The concept can, however, also have a purely mathematical sense and meaning: it permits an ISO mathematical model of a physical system to be expressed in terms of either mathematical or physical variables. If the mathematical model obtained directly from the physical properties of the physical system is in the form of an IO system, then it is commonly used to introduce purely mathematical state variables without any physical sense, hence a state vector without any physical sense (Section C.1). These facts create the need to distinguish, in the framework of the state controllability,

• The physical state controllability from
• The mathematical state controllability.

The former demands that a system mathematical model be expressed in terms of physical variables only. The latter concerns system mathematical models determined mainly in terms of mathematical variables that can be without any physical sense.

The following controllability definitions clearly and precisely explain and determine the corresponding controllability concepts. They are valid both for total coordinates and for their deviations, i.e., for the total state vector S and the total output vector Y as well as for their deviation vectors s and y, respectively.

10.3 Controllability definitions in general

Definition 161 State controllability of dynamical systems


A dynamical system is mathematical state or physical state controllable if and only if for every initial mathematical state or physical state vector S_0 ∈ R^n at t_0 ∈ T and for any final mathematical state or physical state vector S_1 ∈ R^n, respectively, there exist a moment t_1 ∈ InT_0 and an extended control U^µ_{[t_0,t_1]} on the time interval [t_0, t_1] such that

\[ S(t_1; t_0; S_0; 0_d; U^{\mu}_{[t_0,t_1]}) = S_1, \qquad (10.1) \]

where µ is the order of the highest control vector derivative U^{(µ)}(t) acting on the system, µ ≥ 0.

Note 162 The zero vector 0_d ∈ R^d in the notation S(t_1; t_0; S_0; 0_d; U^µ_{[t_0,t_1]}) (10.1) signifies that the system is unperturbed, i.e., there is no external disturbance acting on the system: D(t) ≡ 0_d. For the sake of simplicity we will write S(t; t_0; S_0; U^µ_{[t_0,t_1]}) for S(t; t_0; S_0; 0_d; U^µ_{[t_0,t_1]}) in the sequel. We accept to consider the controllability of the unperturbed system, i.e., for D(t) ≡ 0_d, so that the input vector I = [D^T ⋮ U^T]^T reduces to I = [0_d^T ⋮ U^T]^T.

Comment 163 On the notions: mathematical state controllability and physical state controllability
We use the adjective "mathematical" or "physical" before the term "state controllability" explicitly only when it is necessary to emphasize the single state nature. Otherwise, i.e., when the state is both mathematical and physical, we use simply the term "state controllability" without a preceding adjective "mathematical" or "physical" (see Conclusion 229, Section C.1).

Comment 164 In a special case the zero state vector S = 0_n can be accepted for the final state vector S_1.

By the definition we are interested primarily in the system output response, which emphasizes the importance of the system output controllability.

Definition 165 Output controllability of linear dynamical systems
A linear dynamical system is output controllable if and only if for every initial output vector Y_0 ∈ R^N at t_0 ∈ T and for any final output


vector Y_1 ∈ R^N, there exist a moment t_1 ∈ InT_0 and an extended control U^µ_{[t_0,t_1]} on the time interval [t_0, t_1] such that

\[ Y(t_1; t_0; Y_0; U^{\mu}_{[t_0,t_1]}) = Y_1, \qquad (10.2) \]

where µ is the order of the highest control vector derivative U^{(µ)}(t) acting on the system, µ ≥ 0.

Claim 166 Output controllability versus physical output controllability
By definition the output variables and the output vector are physical variables and a physical vector, respectively. Therefore, the output controllability of a dynamical system is simultaneously its physical output controllability.

Note 167 Output controllability versus state controllability
The state controllability and the output controllability are, in general, independent qualitative system properties.

Comment 168 In a special case the zero output vector Y = 0_N can be accepted for the final output vector Y_1: Y_1 = 0_N.

Comment 169 The controllability concept has its full sense and its greatest importance for plants (Definition 9 in Section 1.4).

Note 170 Controllability and the control law
Definition 161 and Definition 165 do not specify the control law. They demand only the existence of an extended control that can steer an arbitrary initial state or output to an arbitrary final state or output, respectively, over a finite time interval [t_0, t_1] ⊂ T_0. They leave the selection of the control law, i.e., of the control algorithm, open and free.

10.4 General state controllability criteria

Let the system (8.1), (8.2) (Section 8.1) be unperturbed, i.e., D(t) ≡ 0_d, so that it takes the following form:

\[ \frac{dS(t)}{dt} = AS(t) + B^{(\mu)} U^{\mu}(t), \qquad U^{\mu}(t) = \begin{bmatrix} U(t) \\ U^{(1)}(t) \\ \vdots \\ U^{(\mu)}(t) \end{bmatrix} \in \mathbb{R}^{(\mu+1)r}, \qquad (10.3) \]

\[ Y(t) = CS(t) + QU(t). \qquad (10.4) \]


For the justification of allowing the existence of the input vector function derivatives in the state equation (10.3) see Note 15 (Section 1.4), and for its physical origin see Note 50 (Section 4.1).

The fundamental matrix function Φ(·, t_0): T → R^{n×n} of the system (10.3), (10.4) is the same as that of the ISO system (3.1), (3.2). It is defined in Equation (3.4), and Equations (3.5)-(3.7) express its properties. The solution of the state equation (10.3), obtained after its integration, is given by

\[ S(t; t_0; S_0; U^{\mu}) = \Phi(t, t_0) S_0 + \int_{t_0}^{t} \Phi(t, \tau)\, B^{(\mu)} U^{\mu}(\tau)\, d\tau = \Phi(t, t_0)\left[ S_0 + \int_{t_0}^{t} \Phi(t_0, \tau)\, B^{(\mu)} U^{\mu}(\tau)\, d\tau \right]. \qquad (10.5) \]

This and the system output equation (10.4) determine the system response:

\[ Y(t; t_0; S_0; U^{\mu}) = C\Phi(t, t_0) S_0 + \int_{t_0}^{t} C\Phi(t, \tau)\, B^{(\mu)} U^{\mu}(\tau)\, d\tau + QU(t) = C\Phi(t, t_0)\left[ S_0 + \int_{t_0}^{t} \Phi(t_0, \tau)\, B^{(\mu)} U^{\mu}(\tau)\, d\tau \right] + QU(t). \qquad (10.6) \]
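As a small numerical sketch of the variation-of-constants formula (10.5), assuming NumPy and SciPy; the matrices, the control signal and the horizon are illustrative placeholders (µ = 0, so B^{(µ)} = B_0):

```python
import numpy as np
from scipy.linalg import expm

A  = np.array([[0.0, 1.0],
               [-2.0, -3.0]])
B0 = np.array([[0.0],
               [1.0]])
S0 = np.array([1.0, 0.0])
u  = lambda t: np.array([np.sin(t)])      # chosen control U(t)

def state_solution(t, t0=0.0, steps=2000):
    # S(t) = Phi(t, t0) S0 + int_{t0}^{t} Phi(t, tau) B0 U(tau) dtau   (10.5)
    taus = np.linspace(t0, t, steps)
    integrand = np.array([expm(A * (t - tau)) @ B0 @ u(tau) for tau in taus])
    forced = np.trapz(integrand, taus, axis=0)
    return expm(A * (t - t0)) @ S0 + forced

print(state_solution(1.0))
```

The quadrature is deliberately crude; the point is only to show how the free and forced parts of (10.5) combine.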

The general state controllability criteria follow:

Theorem 171 Conditions for the state controllability of the system (10.3), (10.4)
For the system (10.3), (10.4) to be state controllable it is necessary and sufficient that:
a) any of the following equivalent conditions 1. through 6.a) holds if µ = 0,
b) any of the following equivalent conditions 1. through 5., 6.b) or 6.c) holds if µ > 0:

1. All rows of both matrices Φ(t_1, t)B^{(µ)} and Φ(t_0, t)B^{(µ)} are linearly independent on [t_0, t_1] for any (t_0, t_1 > t_0) ∈ InT_0 × InT_0.

2. All rows of Φ(s)B^{(µ)} = (sI − A)^{−1}B^{(µ)} are linearly independent on C.

3. The Gram matrix G_ΦB(t_1, t_0) (10.7) of Φ(t_1, t)B^{(µ)},

\[ G_{\Phi B}(t_1, t_0) = \int_{t_0}^{t_1} \Phi(t_1, \tau)\, B^{(\mu)} \left(B^{(\mu)}\right)^{T} \Phi^{T}(t_1, \tau)\, d\tau, \qquad (10.7) \]

is nonsingular for any (t_0, t_1 > t_0) ∈ InT_0 × InT_0; i.e.,

\[ \operatorname{rank} G_{\Phi B}(t_1, t_0) = n \ \text{ for any } (t_0, t_1 > t_0) \in InT_0 \times InT_0. \qquad (10.8) \]

4. The n × n(µ + 1)r controllability matrix C,

\[ \mathcal{C} = \begin{bmatrix} B^{(\mu)} & AB^{(\mu)} & A^{2}B^{(\mu)} & \cdots & A^{n-1}B^{(\mu)} \end{bmatrix} \in \mathbb{R}^{n \times n(\mu+1)r}, \qquad (10.9) \]

has the full rank n,

\[ \operatorname{rank} \mathcal{C} = n. \qquad (10.10) \]

5. For every eigenvalue s_i(A) of the matrix A, equivalently for every complex number s ∈ C, the n × (n + (µ + 1)r) matrix [sI − A ⋮ B^{(µ)}] has the full rank n,

\[ \operatorname{rank}\begin{bmatrix} sI - A & B^{(\mu)} \end{bmatrix} = n, \quad \forall s = s_i(A) \in \mathbb{C}, \ \forall i = 1, 2, ..., n, \ \text{i.e., } \forall s \in \mathbb{C}. \qquad (10.11) \]

6. a) If µ = 0 the control vector function U(·) obeys

\[ U(t) = \left(\Phi(t_1, t) B_0\right)^{T} G_{\Phi B}^{-1}(t_1, t_0)\left[S_1 - \Phi(t_1, t_0) S_0\right], \quad \forall t \in [t_0, t_1], \ t_1 \in InT_0. \qquad (10.12) \]

b) If µ > 0 the control vector function U(·) obeys either

\[ U^{\mu}(t) = T\,T^{T}\left(B^{(\mu)}\right)^{T} \Phi^{T}(t_1, t)\, G_{\Phi BT}^{-1}(t_1, t_0)\left[S_1 - \Phi(t_1, t_0) S_0\right], \quad \mu > 0, \qquad (10.13) \]

where T ∈ R^{(µ+1)r×(µ+1)r} is any nonsingular matrix, det T ≠ 0, and the matrix G_ΦBT(t_1, t_0) is the Gram matrix of Φ(t_1, t)B^{(µ)}T,

\[ G_{\Phi BT}(t_1, t_0) = \int_{t_0}^{t_1} \Phi(t_1, \tau)\, B^{(\mu)} T\, T^{T} \left(B^{(\mu)}\right)^{T} \Phi^{T}(t_1, \tau)\, d\tau, \qquad (10.14) \]

c) or the control vector function U(·) obeys

\[ B^{(\mu)} U^{\mu}(t) = \Phi^{T}(t_1, t)\, G_{\Phi}^{-1}(t_1, t_0)\left[S_1 - \Phi(t_1, t_0) S_0\right], \quad \mu > 0, \qquad (10.15) \]

where G_Φ(t_1, t_0) is the Gram matrix of Φ(t_1, t),

\[ G_{\Phi}(t_1, t_0) = \int_{t_0}^{t_1} \Phi(t_1, \tau)\, \Phi^{T}(t_1, \tau)\, d\tau. \qquad (10.16) \]


Appendix D.10 contains the proof of Theorem 171.

Comment 172 The conditions 1.-5. of Theorem 171 seem to have been known. However, in the existing state controllability conditions the matrix B ∈ R^{n×r} describes the transmission of the action of the control vector U on the system. In Theorem 171 the extended matrix B^{(µ)} ∈ R^{n×(µ+1)r} describes the transmission of the action of the whole extended control vector U^µ, i.e., of the control vector and of its derivatives up to the order µ, on the system. For µ = 0 the conditions 1.-5. of Theorem 171 become those well known so far for the ISO systems and for the IO systems formally transformed into ISO systems, a transformation that completely loses the physical meaning if µ > 0.
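A minimal numerical sketch of condition 4 of Theorem 171 with µ = 1, assuming NumPy; all matrices are illustrative placeholders, with B^{(µ)} = [B_0 ⋮ B_1] stacking the transmission of U and of U^{(1)}:

```python
import numpy as np

A  = np.array([[0.0, 1.0],
               [-2.0, -3.0]])
B0 = np.array([[0.0],
               [1.0]])
B1 = np.array([[1.0],
               [0.0]])
B_mu = np.hstack([B0, B1])          # extended input matrix B^(mu), mu = 1, r = 1
n = A.shape[0]

# Controllability matrix  C = [B^(mu)  A B^(mu)  ...  A^(n-1) B^(mu)]   (10.9)
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B_mu for k in range(n)])
print(np.linalg.matrix_rank(ctrb) == n)    # True => state controllable
```

The same rank test with B_mu replaced by B0 alone reproduces the classical µ = 0 criterion mentioned in Comment 172.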

10.5 General output controllability criteria

Equation (10.6) is the basis for the study of the system output controllability (via the combined state-space approach and the output space approach). The integrability of the Dirac impulse δ(·) (see [2, Lemma 16.1, pp. 72-75] and [40, Appendix B.2, p. 401]) enables the following:

\[ \int_{t_0}^{t} \delta(t - \tau)\, QU(\tau)\, d\tau = Q \int_{t_0}^{t} \delta(t, \tau)\, U(\tau)\, d\tau = QU(t), \quad \forall t \in T_0. \qquad (10.17) \]

This leads to the following equivalent form of Equation (10.6) if µ = 0:

\[ Y(t; t_0; S_0; U) = C\Phi(t, t_0) S_0 + \int_{t_0}^{t} \left[ C\Phi(t, \tau) B_0 + \delta(t, \tau) Q \right] U(\tau)\, d\tau. \qquad (10.18) \]

The following facts are valid if µ > 0:

\[ QU(t) = \int_{t_0}^{t} Q U^{(1)}(\tau)\, d\tau + QU(t_0) = \int_{t_0}^{t} \widetilde{Q}\, U^{\mu}(\tau)\, d\tau + Q^{(\mu-1)} U^{\mu-1}(t_0), \]
\[ \widetilde{Q} = \begin{bmatrix} O_{N,r} & Q & O_{N,r} & \cdots & O_{N,r} \end{bmatrix} \in \mathbb{R}^{N\times(\mu+1)r}, \]
\[ Q^{(\mu-1)} = \begin{bmatrix} Q & O_{N,r} & O_{N,r} & \cdots & O_{N,r} \end{bmatrix} \in \mathbb{R}^{N\times\mu r}, \qquad (10.19) \]

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 166 — #177

i

166

i

CHAPTER 10. CONTROLLABILITY FUNDAMENTALS

Equation (10.18) and Equations (10.19) permit us to present the system response, Equation (10.6), in new forms valid for µ ≥ 0: Y (t; t0 ; S0 ; Uµ ) = CΦ (t, t0 ) S0 +

+

 t Z h i    (µ) + Q e  CΦ (t, τ ) B Uµ (τ ) dτ + Q(µ−1) Uµ−1 , µ > 0,  0   t0

Zt

      

[CΦ (t, τ ) B0 + δ (t, τ ) Q] U (τ ) dτ, µ = 0. t0

       

, (10.20)

      

Equation (10.20) suggests us to introduce a new system matrix function HS (., t0 ) : T −→ClRN ×(µ+1)r , for any t0 ∈ T, defined by the following:

(  =

HS (t, t0 ) =  ) e ∈ RN ×(µ+1)r , µ > 0, CΦ(t, t0 )B(µ) + Q

(CΦ(t, t0 )B0 + δ (t, t0 ) Q) ∈ RN ×r , µ = 0.

(10.21)

Its Laplace transform HS (s) reads:

 =

HS (s) = L {HS (t, t0 )} =  e ∈ CN ×(µ+1)r , µ > 0, C (sI − A)−1 B(µ) + s−1 Q C (sI − A)−1 B0 + Q ∈ CN ×r , µ = 0.

(10.22)

HS (t, t0 ) and its Laplace transform HS (s) should be distinguished, respectively, from GS (t, t0 ) if µ > 0, GS (t, t0 ) = L−1 {GS (s)} =     e Sr(µ) (s), µ > 0,   C (sI − A)−1 B(µ) + s−1 Q   = L−1 ,   C (sI − A)−1 B0 + Q , µ = 0,

(10.23)

and from its Laplace transform GS (s),     e Sr(µ) (s), µ > 0,   C (sI − A)−1 B(µ) + s−1 Q   GS (s) = ,   C (sI − A)−1 B0 + Q , µ = 0,

(10.24)

which is the system transfer function matrix (relative to the control vector U). Thus,

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 167 — #178

i

10.5. GENERAL OUTPUT CONTROLLABILITY CRITERIA

 GS (t, t0 )

 =

6= HS (t, t0 ) if µ > 0, = HS (t, t0 ) if µ = 0,

i

167

 ,

(10.25)

HS (s) = L {HS (t, t0 )} =  e 6= GS (s) , µ > 0, C (sI − A) B(µ) + s−1 Q , C (sI − A)−1 B0 + Q = GS (s) , µ = 0, ) ( (µ) HS (s)Sr (s), µ > 0, . GS (s) = HS (s), µ = 0. −1

(10.26)

(10.27)

The introduction of HS (t, t0 ) sets the system response (10.20) in a more compact form (10.28),

+

Y (t; t0 ; S0 ; U) = CΦ (t, t0 ) S0 +  t Z     , µ > 0, HS (t, τ ) Uµ (τ ) dτ + Q(µ−1) Uµ−1  0   t0

      

Zt HS (t, τ ) U (τ ) dτ, µ = 0. t0

       

.

(10.28)

      

The general output controllability conditions for the system (10.3), (10.4) follow: Theorem 173 Conditions for the output controllability of the system (10.3), (10.4) For the system (10.3), (10.4) to be output controllable it is necessary that   .. rank C . Q = N (10.29) and if this condition is satisfied then: A) it is necessary that any of the following conditions 1.-6.a) holds and sufficient that any of the following conditions 1.-4., 6.a) holds if µ = 0, where the conditions 1.-4.,6.a) are equivalent, B) it is necessary that any of the following conditions 1.-5., 6.b) holds and sufficient that any of the following conditions 1.-4., 6.b) holds if µ > 0, where the conditions 1.-4.,6.b) are equivalent: 1. All rows of the system matrix HS (t, t0 ) (10.21) are linearly independent on [t0 , t1 ] for any (t0 , t1 > t0 ) ∈ InT0 × InT0 .

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 168 — #179

i

168

i

CHAPTER 10. CONTROLLABILITY FUNDAMENTALS

2. All rows of the system matrix HS (s) (10.22) are linearly independent on C. 3. The Gram matrix GHS (t1 , t0 ) of HS (t, t0 ) (10.21), Z

t1

GHS (t1 , t0 ) =

HS (t1 , τ ) HST (t1 , τ ) dτ,

t0

f or any (t0 , t1 > t0 ) ∈ InT0 × InT0 .

(10.30)

is nonsingular, i.e., rankGHS (t1 , t0 ) = N, any (t0 , t1 > t0 ) ∈ InT0 × InT0 .

(10.31)

4. The output N × (n + 1) (µ + 1) r controllability matrix CSout ,     .. .. .. .. e   (µ) (µ) n−1 (µ)  B . Q , µ > 0,   CB . CAB . ... . CA    (10.32) CSout = . . . .    CB0 .. CAB0 .. ... .. CAn−1 B0 .. Q , µ = 0,    has the full rank N, rankCSout = N.

(10.33)

5. For every eigenvalue si (A) of the matrix A, equivalently for every complex number s ∈ C, the N × (n + 2 (µ + 1) r) matrix CoutS ,     . .  . . (µ) e , µ > 0,      C (sI − A) . CB . Q   CoutS = (10.34) .. ..      C (sI − A) . CB0 . Q , µ = 0,  has the full rank N,

rankCoutS

   .. .. e  (µ)   C (sI − A) . CB . Q , µ > 0,   = rank . .  . .   C (sI − A) . CB0 . Q , µ = 0,

   

= N,

  

∀s = si (A) ∈ C, ∀i = 1, 2, ...n, i.e., ∀s ∈ C.

(10.35)

6. The control vector function U (.) obeys: a) The following equation if µ = 0: U (t) = (HS (t1 , τ ))T G−1 HS (t1 , t0 ) [Y1 − CΦ (t1 , t0 ) S0 ] , µ = 0,

(10.36)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 169 — #180

i

10.5. GENERAL OUTPUT CONTROLLABILITY CRITERIA

i

169

b) The following equations for any (µ + 1) r × (µ + 1) r nonsingular matrix R: ( ) (HS (t1 , t) R)T • G−1 HS R (t1 , t0 ) • i −1 µ h R U (t) = , µ > 0, (10.37) • Y1 −CΦ (t1 , t0 ) S0 − Q(µ−1) Uµ−1 0 where GHS R (t1 , t0 ) is the Gram matrix of HS (t1 , τ ) R, Zt1 GHS R (t1 , t0 ) = HS (t1 , τ ) RRT HST (t1 , τ ) dτ.

(10.38)

t0

For the proof consult Appendix D.11. Theorem 174 On the necessary output controllability condition of Theorem 173   .. If the matrix C . Q = C for Q = ON,r does not have the full rank N , i.e., if Equation (10.29) is not satisfied, then the system (10.3), (10.4) is not output controllable in view of Definition 165. Note 175 The existing literature treats the system state controllability and output controllability only for the case µ = 0.
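A minimal numerical sketch of the output controllability rank test for the case µ = 0, assuming NumPy; the matrices A, B_0, C, Q are illustrative placeholders:

```python
import numpy as np

A  = np.array([[0.0, 1.0],
               [-2.0, -3.0]])
B0 = np.array([[0.0],
               [1.0]])
C  = np.array([[1.0, 0.0]])
Q  = np.zeros((1, 1))
n, N = A.shape[0], C.shape[0]

# Necessary condition (10.29): rank [C  Q] = N.
print(np.linalg.matrix_rank(np.hstack([C, Q])) == N)

# Output controllability matrix [C B0, C A B0, ..., C A^(n-1) B0, Q], mu = 0 case of (10.32).
C_out = np.hstack([C @ np.linalg.matrix_power(A, k) @ B0 for k in range(n)] + [Q])
print(np.linalg.matrix_rank(C_out) == N)   # True => output controllable
```

For µ > 0 the same construction applies with B_0 replaced by the extended matrix B^{(µ)} and with the block Q̃ appended, as in (10.32).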

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 170 — #181

i

i

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 171 — #182

i

i

Chapter 11

Various systems controllability 11.1

IO system state controllability

11.1.1

Definition

The general Definition 161 of the state controllability takes the following form for the IO system (2.1): Definition 176 State controllability of the IO system (2.1), i.e., (2.15) The IO system (2.1), i.e., (2.15), is mathematical state or physical state controllable if and only if for every initial mathematical state or physical state vector Y0ν−1 ∈ RνN at t0 ∈ T and for any final mathematical state or physical state vector Y1ν−1 ∈ RνN , respectively, there exist a moment t1 ∈ InT0 and an extended control Uµ[t0 ,t1 ] on the time interval [t0 , t1 ] such that Yν−1 (t1 ; t0 ; Y0ν−1 ; 0d ; Uµ[t0 ,t1 ] ) = Y1ν−1 , (11.1) where µ is the order of the highest control vector derivative U(µ) (t) acting on the system, µ ≥ 0. Note 177 The zero vector 0d ∈ Rd in Yν−1 (t1 ; t0 ; Y0ν−1 ; 0d ; Uµ[t0 ,t1 ] ) (11.1) denotes that the system is unperturbed, i.e., there is not an external disturbance acting on the system: D (t) ≡ 0d . For the sake of the simplicity we will write Yν−1 (t; t0 ; Y0ν−1 ; Uµ[t0 ,t1 ] ) for Yν−1 (t; t0 ; Y0ν−1 ; 0d ; Uµ[t0 ,t1 ] ) in the sequel. This permits us to accept D(µ) = ON,d(µ+1) . 171 i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 172 — #183

i

172

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY

Comment 178 The existence of the control U[t0 ,t1 ] on the time interval [t0 , t1 ] means the existence of the extended control Uµ[t0 ,t1 ] on the time interval [t0 , t1 ] in view of (2.1), i.e., (2.15). Note 179 The equivalence among (2.1), (2.15) and (2.57) permits us to refer in the sequel to anyone of them.

11.1.2

Criteria

What follows resolves completely the problem of the state controllability; i.e., it resolves the problem of the necessary and sufficient conditions for the state controllability, of the IO system (2.15). We recall detAν 6= 0 due to (2.1) so that also detA−1 ν 6= 0. Comment 144 (Section 9.1) explains the difference between the physical b state X = Yν−1 (2.16) (Section 2.1) and the formal mathematical state X (C.4)-(C.10) (Appendix C.1) of the IO system (2.15) . We are interested in the physical state controllability. The criteria for the formal mathematical state controllability of the IO system (2.15) are well known in the control theory. The equivalent description of the IO system (2.15), which preserves the physical sense of the state is its EISO form (4.1), (4.2) specified by (4.4)(4.11) (Section 4.1). The matrix A is defined by Equation (4.6) (Section 4.1), which is repeated as (11.2):

       

ON ON ... ON ON −A−1 ν A0

ν > 1 =⇒ A = IN ... ON ON ON ... ON ON ... ... ... ... ON ... IN ON ON ... ON IN −1 −1 −1 −Aν A1 ... −Aν Aν−2 −Aν Aν−1 n = νN, N ×N , ν = 1 =⇒ A = −A−1 1 A0 ∈ R

     ∈ RνN ×νN ,   

(11.2) T . In view of Note 162, the input vector I = DT .. UT reduces to I =   .. T T T 0d . U and simplifies the product H (µ) Iµ (Equation (2.15) in Section 

2.1) to B (µ) Uµ . These explanations enable us to use the following compact

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 173 — #184

i

11.1. IO SYSTEM STATE CONTROLLABILITY

i

173

form of the unperturbed IO system (2.15): A(ν) Yν (t) = B (µ) Uµ (t), ∀t ∈ T0 . The reduction of H (µ) Iµ to B (µ) Uµ reduces P (µ) Iµ (Section 4.1) to B(µ) Uµ , where the matrices Binv ,     O(ν−1)N,N ∈ Rn×N , ν > 1, Binv = IN  IN ∈ RN ×N , ν = 1,

(11.3) in Equation (4.1)  

,

(11.4)



(µ) , A−1 ν and B

B

(µ)

  .. .. .. = B0 . B1 . ... . Bµ ∈ RN x(µ+1)r , B (0) = B0

(11.5)

compose the matrix B(µ) ,       O(ν−1)N,(µ+1)r , 0 ≤ µ ≤ ν, ν > 1, (µ) = B(µ) = A−1 B ν   −1 (µ) IN A1 B , ν = 1, 0 ≤ µ ≤ 1,      O(ν−1)N,N  −1 (µ) Aν B , 0 ≤ µ ≤ ν > 1, (µ) IN = = Binv A−1 . (11.6) ν B   −1 (µ) IN A1 B , ν = 1, 0 ≤ µ ≤ 1, The output matrices C and Q are given by (4.11) repeated as (11.7): C = [IN ON ON ON . . . ON ] ∈ RN ×n , RankC ≡ N, Q = ON,M ∈ RN ×M .

(11.7)

The matrix Binv is constant, invariant, independent of the system matrices. The order ν of the system and the dimension N of the system output vector Y determine its structure as shown in (11.4). In view of (11.2) - (11.6), the following unperturbed form (11.8), (11.9) of the EISO system (4.1), (4.2), is both physically and mathematically equivalent to the IO system (2.15): dX(t) = AX(t) + B(µ) Uµ (t), ∀t ∈ T0 , 1 ≤ µ, B(µ) ∈ Rn×(µ+1)r , dt Y(t) = CX(t), ∀t ∈ T0 .

(11.8) (11.9)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 174 — #185

i

174

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY

In view of Equations (4.19) and (4.22) (Section 4.1), the motion and response of this system are determined by the following equations: Zt

µ

X(t; t0 ; X0 ; U ) = Φ (t, t0 ) X0 +

Φ (t, τ ) B(µ) Uµ (τ ) dτ =

t0

  Z t (µ) µ Φ (t0 , τ ) B U (τ ) dτ , ∀t ∈ T0 , = Φ (t, t0 ) X0 +

(11.10)

t0

Zt

µ

Y(t; t0 ; X0 ; U ) = CΦ (t, t0 ) +

CΦ (t, τ ) B(µ) Uµ (τ ) dτ =

t0



Z

t (µ)

= CΦ (t, t0 ) X0 +

Φ (t0 , τ ) B

µ



U (τ ) dτ , ∀t ∈ T0 ,

(11.11)

t0

where (see Equations (3.3)-(3.6), Section 3.1): Φ (t, t0 ) = [φij (t, t0 )] = eA(t−t0 ) ∈ Rn×n , Φ (t0 , t0 ) ≡ I.

(11.12)

Theorem 180 State controllability criteria for the IO system (2.15) For the IO system (2.15) to be state controllable it is necessary and sufficient that: a) Any of the following equivalent conditions 1. through 6.a) holds if µ = 0, b) Any of the following equivalent conditions 1. through 5, 6.b) or 6.c) holds if µ > 0, in which Equations (11.2)-(11.6) induced by (2.15) determine the matrices: 1) All rows of matrices Φ (t1 , t) B(µ) , Φ (t1 , t) Binv and Φ (t0 , t) B(µ) are linearly independent on [t0 , t1 ] for any (t0 , t1 > t0 ) ∈ InT0 × InT0 . 2) All rows of both Φ (s) B(µ) = (sI − A)−1 B(µ) and (sIn − A)−1 Binv are linearly independent on C. 3) The Gram matrices GctB (t1 , t0 ) of Φ (t1 , t) B(µ) and GctBinv (t1 , t0 ) of Φ (t1 , t) Binv , Z t1  T GctB (t1 , t0 ) = Φ (t1 , τ ) B(µ) B(µ) ΦT (t1 , τ ) dτ, ∀t ∈ T0 , t0 t1

Z GctBinv (t1 , t0 ) =

Φ (t1 , τ ) Binv BTinv ΦT (t1 , τ ) dτ, ∀t ∈ T0 , (11.13)

t0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 175 — #186

i

11.1. IO SYSTEM STATE CONTROLLABILITY

i

175

are nonsingular for any (t0 , t1 > t0 ) ∈ InT0 × InT0 ; i.e., rankGctB (t1 , t0 ) = rank GctBinv (t1 , t0 ) = n, f or any (t0 , t1 > t0 ) ∈ InT0 × InT0 .

(11.14)

4) The n×n (µ + 1) r controllability matrix CB and n×nN controllability matrix CBinv ,   .. n−1 (µ) (µ) .. (µ) .. 2 (µ) .. CB = B . AB . A B . ... . A , B   .. .. 2 .. .. n−1 CBinv = Binv . ABinv . A Binv . ... . A Binv , (11.15) have the full rank n, rank CB = rank CBinv = n = νN.

(11.16)

5) For every eigenvalue si (A) of the matrix A, equivalently for every  .. (µ) complex number s ∈ C, the n × (n + (µ + 1) r) matrix sI − A . B and   . the n × (n + N ) matrix sI − A .. Binv have the full rank n,     .. (µ) .. rank sI − A . B = rank sI − A . Binv = n, ∀s = si (A) ∈ C, ∀i = 1, 2, ..., n, ∀s ∈ C.

(11.17)

6. a) If µ = 0 the control vector function U (.) obeys U (t) = (Φ (t1 , t) B0 )T G−1 ΦB (t1 , t0 ) [X1 − Φ (t1 , t0 ) X0 ] , ∀t ∈ [t0 , t1 ] , t1 ∈ InT0 .

(11.18)

b) If µ > 0 the control vector function U (.) obeys either  T T −1 Uµ (t) = T T B(µ) ΦT (t1 , t) G−1 ΦBT (t1 , t0 ) [X1 − Φ (t1 , t0 ) X0 ] , µ > 0, (11.19) where T ∈ R(µ+1)r×(µ+1)r is any nonsingular matrix and GΦBT (t1 , t0 ) is the Gram matrix of Φ (t1 , t) B(µ) T, Z

t1

GΦBT (t1 , t0 ) =

 T Φ (t1 , τ ) B(µ) T T T B(µ) ΦT (t1 , τ ) dτ.

(11.20)

t0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 176 — #187

i

176

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY c) or the control vector function U (.) obeys B(µ) Uµ (t) = ΦT (t1 , t) G−1 Φ (t1 , t0 ) [X1 − Φ (t1 , t0 ) X0 ] , µ > 0,

where GΦ (t1 , t0 ) is the Gram matrix of Φ (t1 , t) , Z t1 Φ (t1 , τ ) ΦT (t1 , τ ) dτ. GΦ (t1 , t0 ) =

(11.21)

(11.22)

t0

Proof. This is Theorem 171 in which B(µ) can be appropriately replaced by Binv and Q = ON,r by noting that the linear independence of all rows of Φ (t, t0 ) B(µ) on ]t0 , t1 ] for t1 ∈ InT0 , together with B(µ) = (µ) (11.6), implies the linear independence of the rows of the maBinv A−1 ν B trix product Φ (t, t0 ) Binv and of Φ (t, t0 ) on [t0 , t1 ] for t1 ∈ InT0 due to the statement under 1) of Lemma 107 in Section 7.4. Example 181 The following example illustrates Theorem 180 In this example the control is scalar variable U : (1)

2Y1 + 2Y1

+ 2Y2 = U, (1)

−Y1 + 3Y2 + Y2

= 2U,

i.e., ν = 1, n = νN = N = 2, µ = 0, M = r = 1, B (µ) = B (0) ,       2 2 1 2 0 (1) Y + Y= U. −1 3 2 0 1 | {z } | {z } | {z } A1

A0

H (0) =B (0) =B0

The preceding data specify the following:   Y1 Y= ∈ R2 , Y2 

  2 0 2 2 (1) Y + 0 1 −1 3     2 0 2 2 A1 = , A0 = , 0 1 −1 3    Y1 X=Y= = Y2



 Y=

B (0) X1 X2



1 2



U   1 = , U = [U ] , 2 ∈ R2 ,

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 177 — #188

i

11.1. IO SYSTEM STATE CONTROLLABILITY A−1 1

 =



0, 5 0 0 1



−A−1 1 A0

i

177 −1 −1 1 −3



=⇒ A = = ,   0, 5 = =⇒ rankB0 = 1 < 2 = n, 2

(0) B0 = A−1 1 B

   −1 −1 0.5 Y = Y+ U =⇒ 1 −3 2     −1 −1 0.5 X(1) = X+ U 1 −3 2   s+1 1 =⇒ s2 + 4s + 4, s1 (A) = s2 (A) = −2, sI2 − A = −1 s + 3 (1)



    .. s+1 1 0, 5 sI2 − A . B0 = =⇒ −1 s + 3 2     .. −1 1 0, 5 rank si (A) I2 − A . B0 = rank = −1 1 2   1 0, 5 = rank = 2, i = 1, 2. 1 2   .. The matrix sI2 − A . B0 has the invariant full rank n = 2. The matrix   . sI2 − A .. B0 has the full rank on C in spite rankB0 = 1 < 2 = n = N. Example 182 Let (1)

Y1 + 2Y1 Y2 +

(1) Y2

= 4U1 + 12U2 , = U1 + 3U2 ,

i.e., 

   Y1 U1 Y= , U= , Y2 U2       2 0 1 0 4 12 (1) Y + Y= U =⇒ 0 1 0 1 1 3  A1 =

2 0 0 1



 , A0 =

1 0 0 1

 , B

(0)

 =

4 12 1 3



i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 178 — #189

i

178

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY

For this system ν = 1 and µ = 0, The state vector form of the system reads     dX −0, 5 0 2 6 = X+ U, 0 −1 1 3 dt       X1 Y1 −0, 5 0 −1 X= = , A = −A1 = , X2 Y2 0 −1      0, 5 0 4 12 2 6 −1 (0) B0 = .A1 B = = 0 1 1 3 1 3 The matrix A is nonsingular. The eigenvalues si (A) of A are s1 (A) = −0, 5 and s2 (A) = −1. The controllability test follows:     .. s + 0, 5 0 2 6 , rank sI2 − A . B0 = 0 s+1 1 3    .. 0 0 2 6 , rank s1 (A) I2 − A . B0 = 0 0, 5 1 3 

its submatrix



0 2 0, 5 1



is nonsingular, and     .. −0, 5 0 2 6 rank s2 (A) I2 − A . B0 = , 0 0 1 3 yields its nonsingular submatrix 

−0, 5 2 0 1

 .

The system is state controllable. The system state controllability ensures the existence of a control U that forces the system from any initial state at the initial moment to an arbitrarily chosen state at some finite moment after the initial one. Let us consider the case of the IO system (2.15) with A0 = ON to be state controllable for arbitrary matrices Ai , i = 0, 1, ..., ν − 1. This means the system state controllability robustness relative to the matrices Ai , i = 0, 1, ..., ν − 1. It is a kind of the robust state controllability.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 179 — #190

i

11.1. IO SYSTEM STATE CONTROLLABILITY

i

179

Theorem 183 Robust state controllability criteria for the IO system (2.15) with A0 = ON relative to arbitrary matrices Ai , i = 1, ..., ν − 1 For the IO system (2.15) with A0 = ON to be invariantly (i.e., robust) state controllable relative to arbitrary matrices Ai , i = 1, ..., ν − 1, it is necessary and sufficient that any of the above equivalent conditions 1. -5. of Theorem 180 holds, or equivalently, that the matrix B (µ) has the full rank N, (11.23) rankB (µ) = N, where Equations (11.2)-(11.6) induced by (2.15) determine the matrices. Appendix D.12 contains the proof of Theorem 183. Example 184 The following example illustrates Theorem 183. (2)

Y1

(1)

+ 2Y1

(2) Y2

+





1 0 0 1

 Y

(2)

+ Y1 = U,

(1) Y2

= U (1)

Y=

Y1 Y2





2 0 0 1

(11.24)

 ,

(1)



Y + +    U 1 0 = , 0 1 U (1) | {z }| {z }



1 0 0 0



Y=

U1

B (1)



1 0 0 0

2 0 0 1





A2 = I2, A1 = , A0 = ,     U 1 0 (1) (1) 1 B = = I2 , rankB = 2, U = 0 1 U (1) We apply Equations (11.2)-(11.7):    Y1 X1  Y2   X2   X =  (1)  =   Y1   X3 (1) X4 Y2

  = 



Y Y(1)



 =

X1 X2



i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 180 — #191

i

180

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY  X



(1)

  = 

0  0 =  −1 0 |

(1)

Y1 (1) Y2 (2) Y1 (2) Y2





    =  

0 1 0 0 0 1 0 −2 0 0 0 −1 {z A

X3 X4 (1) X3 (1) X4

   (1)  Y  = = Y(2) 

      X1   +   X2 | {z } | } X

0 0 1 0

 0 0  U1 , 0  1 {z }

B(1)

  1 0 0 0 Y= X, 0 1 0 0 | {z } C



 A=

O2 I2 −1 A −A −A−1 0 2 A1 2 

B

(1)

 =

O2 −1 (1) A2 B



0  0 =  1 0

0  0 =  −1 0  0 0  , C 0  1



The rank of the matrix A is 3. It does not have The matrix B (1) has the full rank 2.  s    0 s −I2 sI4 − A = =  1 A0 sI2 + A1 0

 0 1 0 0 0 1  , 0 −2 0  0 0 −1  =

1 0 0 0 0 1 0 0

 .

the full rank 4. It is singular.  0 −1 0 s 0 −1   0 s+2 0  0 0 s+1

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 181 — #192

i

11.1. IO SYSTEM STATE CONTROLLABILITY    .. (1) .. O2 sI2 −I2 sI4 − A . B = . A0 sI2 + A1 I2  s 0 −1 0  0 s 0 −1 =   1 0 s+2 0 0 0 0 s+1  s 0 −1 0  0 s 0 −1 =   1 0 s+2 0 0 0 0 s+1

i

181  = 0 .. 0 . 1 0 0 0 1 0

 0 0  = 0  1 

0 0  . 0  1

The eigenvalues si (A) of the matrix A are s1 (A) = 0, and si (A) = −1, i = 1, 2, 3. They imply:     .. (1) .. (1) si (A) = 0 =⇒ sI4 − A . B = −I4 − A . B =   0 0 −1 0 0 0  0 0 0 −1 0 0  . =  1 0 2 0 1 0  0 0 0 1 0 1 Its submatrix



 −1 0 0 0  0 −1 0 0     2 0 1 0  0 1 0 1

has  the full rank.  For the eigenvalue si (A)  = −1, i = 1, 2, 3, the matrix .. (1) .. (1) sI4 − A . B becomes −I4 − A . B : 

 −1 0 −1 0 0 0  0 −1 0 −1 0 0   .  1 0 1 0 1 0  0 0 0 0 0 1 Its submatrix



 −1 0 0 0  0 −1 0 0     1 0 1 0  0 0 0 1

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 182 — #193

i

182

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY 

. has the full rank and the matrix sI4 − A .. B(1)

 has the full rank on C.

The system is physical state controllable. Comment 185 Comparison with the classical controllability theory for µ > 0 We will apply the existing, i.e., the classical, state controllability condition to the original system (11.24), !   (2) (1) U Y1 + 2Y1 + Y1 =⇒ = (2) (1) U (1) Y2 + Y2 I2 Y(2) + |{z} A2

       2 0 1 0 1 0 Y(1) + Y= U+ U (1) , 0 1 0 0 0 1 | {z } | {z } | {z } | {z } 

A1

A0

The physical state vector X of this system is

B0

(11.25)

B1

Y1 ,

  Y1 X1     X2  Y  Y2  = Y1 = X = =  (1) (1)  X3  Y  Y1 (1) X4 Y2 

    

(11.26)

We transform now the system (11.25) by applying the transformations (C.4), (C.5). They imply Equations (C.10)-(C.13) in which ν = 2, µ = 1, N = 2, n = νN = 4, Hk = Bk , k = 0, 1 : " # " #   b1 b1 X X Y1 4 b b X= b ∈ R , X1 = (11.27) b2 = Y = Y2 , X X2 " # b3 X b2 = b (1) + A1 Y−B1 U = X =X 1 b X4 " #      (1) Y1 2 0 Y1 0 = + − U, (11.28) (1) 0 1 Y2 1 Y2 so that Equations (C.10)-(C.13) become, respectively:        b1 Y1 X X1  b     X2   Y2 X2    b     X= = = 6 X =  b    (1)  X3  =    X3   Y1 + 2Y1  (1) b X 4 Y2 + Y2 − U X4

Y1 Y2 (1) Y1 (1) Y2

   , 

(11.29)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 183 — #194

i

11.1. IO SYSTEM STATE CONTROLLABILITY  b= A

 −A1 I2 b , B −A0 O2  −2 0  0 −1 b= A  −1 0 0 0

 = 1 0 0 0

i

183

   . B1 . b = I2 . O2 =⇒ , C B0    0 0   1  b  1  , B=  , . 0  1  0 0

b is the mathematical, but not physical, state vector of the system (11.24). X b4 depends From (11.29) follows that the fourth mathematical state variable X (1) b4 = Y + Y2 − U. It is meaningless. For on the input control variable U , X 2 example, (1) (1) X = 04 ⇐⇒ Y1 = Y2 = 0, Y1 = Y2 = 0, but b 04 ⇐⇒ Y1 = Y2 = 0, Y (1) = 0, Y (1) = U. X= 1 2 b4 , The zero value of the fourth mathematical state variable X b4 = 0 = Y (1) − U, X 2 (1)

is not the zero value of the fourth physical state variable X4 = Y2 This imposes the following:

= 0.

Claim 186 On the physical sense of the mathematical state controllability of the IO system (2.15) with µ > 0 The mathematical state controllability of the IO system (2.15) with µ > 0 does not have a physical sense. Notice that for the existence of the control vector function U (.) that obeys the condition 6.c) of Theorem 180 it is necessary and sufficient that the rank condition (11.23) holds. Theorem 183 proves that Theorem 180 guarantees the robust state controllability of the IO system (2.15) with µ ≥ 0 and A0 = ON relative to arbitrary matrices Ai , i = 1, ..., ν − 1. The above results illustrate the existence of various control algorithms that satisfy the state controllability definition. Let us present the counter Example 187 to the existing controllability criterion for the IO systems: Comment 187 Counter example to the existing controllability criterion for the IO systems

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 184 — #195

i

184

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY

The original system (11.24) of Example 184 is physical state controllable. It can be formally mathematically transformed into the following system (Section C.1):  (1)    b    b X X1 −2 0 1 0 0 1  b (1)    b     X 0 −1 0 1 1  X  2   2   [U ] ,  (1)  =   b +   b −1 0 0 0  X3  1   X  3 b4 0 0 0 0 0 b (1) X X 4   b1 X     b2  Y1 1 0 0 0  X  = (11.30)  b . Y2 0 1 0 0  X 3  b4 X The characteristic determinant det (sI4 − A) of the matrix A and its eigenvalues si (A) are: det (sI4 − A) = s4 + 3s3 + 3s2 + s = s (s + 1)3 , s1 (A) = 0, si (A) = −1, i = 2, 3, 4. We test the mathematical state controllability condition 5. for the system (11.30):   .. rank sI4 − A . B =  0 s+2 0 −1 0  0 s + 1 0 −1 −1   = 4, ∀si (A) , i = 1, 2, 3, 4? = rank   1 0 s 0 −1  0 0 0 0 s 

For the eigenvalue s1 (A) = 0 follows:   .. rank s1 (A) I4 − A . B = 

2  0 = rank   1 0

 0 −1 0 0 1 0 −1 −1   = 3 < 4. 0 0 0 −1  0 0 0 0

The system (11.30) is not (mathematical) state controllable. This contradicts the (physical) state controllability of the original system (11.24) despite the mathematical equivalence between the original system (11.24) and the formally mathematically transformed system (11.30).

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 185 — #196

i

11.2. IO SYSTEM OUTPUT CONTROLLABILITY

i

185

Conclusion 188 On the mathematical state controllability of the IO system (2.15) with µ > 0 b controllability of the IO system (2.15) The formal mathematical state X is physically meaningless if µ > 0. Exercise 189 Test the physical state controllability of the selected IO physical plant in Exercise 40, Section 2.3.

11.2

IO system output controllability

11.2.1

Definition

The general output controllability Definition 165 holds unchanged for the IO system (2.15).

11.2.2

Criteria

The output controllability of the IO system (2.15) can be studied via its state space-output space product or directly in its output space. State space-output space product approach The approach via the state space-output space product uses the IO system (2.15) equivalent EISO form (4.1), (4.2), in which Equations (11.2)-(11.6) induced by the IO system (2.15) determine the matrices (Section 11.1). Equations (10.6) (Section 10.4) determine the system output response: Zt

µ

Y (t; t0 ; X0 ; U ) = CΦ(t, t0 )X0 +

CΦ(t, τ )B(µ) Uµ (τ ) dτ =

t0



Zt

= CΦ (t, t0 ) X0 +

 Φ (t0 , τ ) B(µ) Uµ (τ ) dτ 

(11.31)

t0

The system matrix HS (t, t0 ) (10.21) (Section 10.5) takes the following form for the IO system (2.15): (µ) HIO (t, t0 ) = CΦ(t, t0 )B(µ) = CΦ(t, t0 )Binv A−1 ∈ RN ×(µ+1)r . (11.32) ν B

Its Laplace transform reads due to Q = ON,r : (µ) HIO (s) = CΦ(s)B(µ) = CΦ(s)Binv A−1 ∈ CN ×(µ+1)r ν B

(11.33)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 186 — #197

i

186

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY

Theorem 173 (Section 10.5) forms the basis for the following results. At first we present it adjusted to the IO system (2.15) by noting once more that Q = ON,r : Theorem 190 Equivalent conditions for the output controllability of the IO system (2.15): state space-output space product approach The IO system (2.15) obeys invariantly the output controllability necessary rank condition (10.29) (Section 10.5), i.e., rankC = N.

(11.34)

For the IO system (2.15) to be output controllable: A) It is necessary that any of the following conditions 1.-6.a) holds and sufficient that any of the following conditions 1.-4.,6.a) holds if µ = 0, where the conditions 1.-4.,6.a) are equivalent, B) It is necessary that any of the following conditions 1.-5., 6.b) holds and sufficient that any of the following conditions 1.-4., 6.b) holds if µ > 0, where the conditions 1.-4.,6.b) are equivalent, In A) and B) Equations (11.2)-(11.6) induced by (2.15) determine the matrices: 1. All rows of the system matrix HIO (t, t0 ) (11.32) are linearly independent on [t0 , t1 ] for any (t0 , t1 > t0 ) ∈ InT0 × InT0 . 2. All rows of the system matrix HIO (s) (11.33) are linearly independent on C. 3. The Gram matrix GHIO (t, t0 ) of HIO (t, t0 ) (11.32), Z t T HIO (t, τ ) HIO (t, τ ) dτ, GHIO (t, t0 ) = t0

f or any (t0 , t1 > t0 ) ∈ InT0 × InT0 ,

(11.35)

is nonsingular; i.e., rankGHIO (t1 , t0 ) = N, any (t0 , t1 > t0 ) ∈ InT0 × InT0 .

(11.36)

4. The output N × n (µ + 1) r controllability matrix CIOout ,   .. (µ) .. (µ) .. 2 (µ) .. n−1 (µ) CIOout = CB . CAB . CA B . .... CA B ,

(11.37)

has the full rank N, rankCIOout = N.

(11.38)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 187 — #198

i

11.2. IO SYSTEM OUTPUT CONTROLLABILITY

i

187

5. For every eigenvalue si (A) of the matrix A, equivalently for every complex number s ∈ C, the N × (n + (µ + 1) r) matrix CoutIO ,   .. (µ) ∈ RN ×(n+(µ+1)r) , µ ≥ 0, CoutIO (s) = C (sIn − A) . CB has the full rank N, rankCoutIO (s) = N, ∀s = si (A) ∈ C, ∀i = 1, 2, ...n, i.e., ∀s ∈ C.

(11.39)

6. The control vector function U (.) obeys: a) The following equation if µ = 0: U (t) = (HIO (t1 , t))T G−1 HIO (t1 , t0 ) [Y1 − CΦ (t1 , t0 ) X0 ] , µ = 0,

(11.40)

b) The following equations for any (µ + 1) r × (µ + 1) r nonsingular matrix R:   (t1 , t0 ) • (HIO (t1 , t) R)T • G−1 −1 µ HIOR , µ > 0, (11.41) R U (t) = • [Y1 −CΦ (t1 , t0 ) X0 ] where GHIOR (t1 , t0 ) is the Gram matrix of HIO (t, t0 ) R, Zt1 T GHIOR (t1 , t0 ) = HIO (t1 , τ ) RRT HIO (t1 , τ ) dτ.

(11.42)

t0

Proof. After replacing HS by HIO in Theorem 173 it is adjusted to the IO system (2.15) and becomes Theorem 190, by noting that the conditions 2. and 3. are equivalent because they are related through the Laplace transform or its inverse, which are linear transformations, and that the matrix C (11.7) has the invariant full rank N : C = [IN ON ON ON ... ON ] ∈ RN ×n , RankC = N.

(11.43)

Let GCΦBinv (t1 , t0 ) be the Gram matrix of CΦ(t1 , τ )Binv , Zt1 GCΦBinv (t1 , t0 ) = CΦ(t1 , τ )Binv (CΦ(t1 , τ )Binv )T dτ.

(11.44)

t0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 188 — #199

i

188

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY

Theorem 191 Alternative sufficient condition and control algorithm for the output controllability of the IO system (2.15): state space-output space product approach For the IO system (2.15) to be output controllable it is sufficient that the rank condition (11.45) holds, rankB (µ) = N,

(11.45)

and that the control vector function is the solution of the following equation:   (CΦ(t1 , t)Binv )T G−1 CΦBinv (t1 , t0 ) • . (11.46) B (µ) Uµ (t) = Aν • [Y1 −CΦ (t1 , t0 ) X0 ] The proof of this theorem is in Appendix D.13. Comment 192 On the insufficiency of the condition 5. of Theorem 173, i.e., of the condition 5. of Theorem 190 Let ν > 1 and µ ≥ 0. We apply the condition 5. of Theorem 190 to the IO system (2.15):     .. .. rank C(sIn − A) . CB = rankC (sIn − A) . B = 

[IN ON ON ON ... ON ] • sIN −IN ... ON ON  ON sI ... O ON N N   ... ... ... ... ...   ON O ... −I O N N N   ON ON ... sIN −IN −1 −1 −1 A−1 ν A0 Aν A1 ... Aν Aν−2 sI N + Aν Aν−1 .. O(ν−1)N,(µ+1)r . (µ) A−1 ν B   = rank sIN −IN ON ... ON = N, ∀s ∈ C.

 

           rank   •          

 



         ..    .     =              

This shows that the condition (11.39) is invariantly satisfied, i.e., independently of the system matrices Ai , i = 0, 1, ..., ν − 1, and of the control matrix B (µ) ,   .. .. .. (µ) B = B0 . B1 . ... . Bµ 6= ON,(µ+1)r .

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 189 — #200

i

11.2. IO SYSTEM OUTPUT CONTROLLABILITY

i

189

 .. The rank of C(sIn − A) . CB is independent of B (µ) if ν > 1 because for 

ν > 1 due to Equations (11.2) (Section 11.1):  CB = [IN ON ON ON

  ... ON ]   

ON,(µ+1)r ON,(µ+1)r ... ON,(µ+1)r (µ) A−1 ν B

    = ON,(µ+1)r .  

(11.47)

If ν > 1 then the rank condition (11.39) is satisfied for any B (µ) including B (µ) = ON,(µ+1)r that means the disconnection of the system from its controller, i.e., the control vector does not act on the system. This explains why the rank condition (11.39) is not sufficient (despite it is necessary) for the output controllability of the IO system (2.15). Output space approach The direct output controllability study in the output space starts with the system response fully described by Equation (2.39) or by its equivalent form in Equation (2.40) repeated as follows: Z t ν−1 ν−1 Y(t; Y0− ; U) = [ΓIOU (τ )U(t − τ )dτ ] + ΓIOU0 (t)Uµ−1 − + ΓIOY0 (t)Y0− , 0 0− Z t Z t [ΓIOU (τ )U(t − τ )dτ ] = [ΓIOU (τ )U(t, τ )dτ ] = 0− 0− Z t Z t = [ΓIOU (t − τ )U(τ )dτ ] = [ΓIOU (t, τ )U(τ )dτ ] , (11.48) 0−

0−

and by the first Equation (2.46) that reads for t0 = 0 : n  o ΓIOU (t) = L−1 {GIOU (s)} = L−1 ΦIO (s) B µ Sr(µ) (s) ,

(11.49)

the Laplace transform of which is GIOU (s) :  −1 (ν) GIOU (s) = A(ν) SN (s) B µ Sr(µ) (s) = L {ΓIOU (t)} ,

(11.50)

by Equation (2.47): ΦIO (s) = L − {ΦIO (t)} , ΦIO (t) = L−1 {ΦIO (s)} ,

(11.51)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 190 — #201

i

190

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY

and by Equation (11.52): ΦIO (s) =



−1 (ν) , A(ν) SN (s)

ΦIO (t) = L

−1



−1 (ν) A(ν) SN (s)

 .

(11.52)

(all from Section 2.1). The preceding formulae come out from the Laplace transform Y(s; Y0ν−1 − ; U) = FIO (s) UIO (s), Equation (2.24) for D(t) ≡ 0d , of the system response Y(t; Y0ν−1 − ; U) (11.48) where: - The system IO full transfer function matrix FIO (s) , FIO (s) = 

 .. .. (ν) (ν−1) (µ) (µ−1) = ΦIO (s) . − B Zr (s) . A ZN (s) =   . . = GIOU (s) .. GIOU0 (s) .. GIOY0 (s) , (11.53) B (µ) Sr(µ) (s)

- The Laplace transform VIO (s) of the system action vector VIO (t),    U(t) U(s)  , VIO (t) =  δ (t) Uµ−1  , VIO (s) =  Uµ−1 0 0 δ (t) Y0ν−1 Y0ν−1 

(11.54)

where δ (t) is Dirac impulse (for full details see [36, Section E.2, pp. 411-426], [40, Section B.2, pp. 401-416]).   (ν) The leading coefficient of the denominator polynomial det A(ν) SN (s)  −1 (ν) from the denominator of A(ν) SN (s) in general is not equal to 1, which happens if the matrix Aν is not the unity matrix IN : if Aν 6= IN . In order to make that coefficient equal to 1 we introduce the IO system normalized fundamental matrix function ΘIO (.) : T −→ RN ×N , ΘIO (t) = L−1 {ΘIO (s)} ∈ RN ×N , ΘIO (s) = L − {ΘIO (t)} ∈ CN ×N , (11.55) and its left Laplace transform ΘIO (s),  −1 (ν) (ν) ΘIO (s) = A−1 A S (s) = ΦIO (s)Aν , ΦIO (s) = ΘIO (s)A−1 ν ν , N   −1 (ν) (ν) ΘIO (t) = L−1 A−1 A S (s) = ΦIO (t)Aν . (11.56) ν N

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 191 — #202

i

11.2. IO SYSTEM OUTPUT CONTROLLABILITY

i

191

The system IO full transfer function matrix FIO (s) (2.45) (also Equations (2.41), (2.46), Section 2.1), has then the following equivalent form: FIO (s) =  = ΘIO (s)

 .. .. −1 (ν) (ν−1) −1 (µ) (µ−1) . − Aν B Z r (s) . Aν A ZN (s) =   . . (11.57) = GIOU (s) .. GIOU0 (s) .. GIOY0 (s) ,

(µ) (µ) A−1 Sr (s) ν B

The system response Y(t; Y0ν−1 − ; U) (11.48) can be then set into the following form obtained after the application of the inverse Laplace transform to Y(s; Y0ν−1 − ; U) = FIO (s) VIO (s) (11.54), (11.53), and (11.54): Y(t; Y0ν−1 − ; U) = =

  Z t 

 ΘIO (t, τ )•   n o   (µ) (µ−1) µ−1 −1 (µ) −1 Aν B L Sr (s)U(s) − Zr (s)U0∓ +  dτ  =⇒ n o •    (ν−1) ν−1 −1 (ν) −1   +Aν A L ZN (s) Y0∓

0



Y(t; Y0ν−1 − ; U) = Z t( = 0

ΘIO (t, τ )• n i h o (ν−1) −1 (µ) µ −1 dτ • Aν B U (τ ) + Aν A(ν) L−1 ZN (s) Y0ν−1 ∓

) =⇒ (11.58)

Y(t; Y0ν−1 − ; U) = Z t( =

h

0

ΦIO (t,nτ )•

• B (µ) Uµ (τ ) + A(ν) L−1

)

o i (ν−1) ZN (s) Y0ν−1 dτ ∓

.

(11.59)

Equations (11.55), Equations (11.56), and Equation (11.49) permit us to express the output fundamental matrix ΓIOU (t) of the IO system (2.15) in terms of ΘIO (s) : n  o µ (µ) ΓIOU (t) = L−1 {GIOU (s)} = L−1 ΘIO (s) A−1 B S (s) , (11.60) ν r as well as the following: -The system transfer function GIOU (s) relating the output Y to the control vector U : (µ) (µ) GIOU (s) = ΦIO (s)B (µ) Sr(µ) (s) = ΘIO (s)A−1 Sr (s), ν B

(11.61)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 192 — #203

i

192

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY

- The system transfer function GIOU0 (s) = L {ΓIOU0 (t)} relating the output to the extended initial input vector Uµ−1 : 0∓ (µ) (µ−1) Zr (s), GIOU0 (s) = −ΘIO (s)A−1 ν B

(11.62)

- The system transfer function GIOY0 (s) = L {ΓIOY0 (t)} relating the output to the extended initial output vector Y0ν−1 : ∓ (ν−1)

(ν) GIOY0 (s) = ΘIO (s)A−1 ν A ZN

(s).

(11.63)

Equations (11.49), (11.50), and (11.52) imply the following: Claim 193 The output fundamental matrix ΓIOU (t) of the IO system (2.15) is: 1. Determined, via the IO system EISO model (4.1), (4.2) (Section 11.1), in the time domain by the inverse Laplace transform of the product of the Laplace transform of the system state fundamental matrix Φ(t) = eAt induced by the matrix A (11.2) (Section 11.1) premultiplied by the matrix C (11.7), and of the matrix B (11.4)-(11.6) (all from Section 11.1) postmulti(µ) plied by Sr (s) : n o ΓIOU (t) = L−1 {GIOU (s)} = L−1 C (sIn − A)−1 BSr(µ) (s) = n o (µ) (µ) = L−1 C (sIn − A)−1 Binv A−1 B S (s) , (11.64) ν r 2. Determined in the time domain by the inverse Laplace transform of the system transfer function matrix GIOU (s) due to Equation (2.30) (Section 2.1): ΓIOU (t) = L−1 {GIOU (s)} =  −1   (ν) = L−1 A(ν) SN (s) B (µ) Sr(µ) (s) =  −1   −1 −1 (ν) (ν) −1 (µ) (µ) =L Aν A SN (s) Aν B Sr (s) ,

(11.65)

and, due to Equation (11.64): n  o (µ) (µ) ΓIOU (t) = L−1 C (sIn − A)−1 Binv A−1 B S (s) = ν r n  o (µ) (µ) = L−1 C (sIn − A)−1 Binv A−1 Sr (s) . (11.66) ν B

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 193 — #204

i

11.2. IO SYSTEM OUTPUT CONTROLLABILITY

i

193

Equations (11.65) and (11.66) lead to the following result: 

(ν)

(ν) A−1 ν A SN (s)

−1

= C (sIn − A)−1 Binv ,

(11.67)

which links the two approaches and expresses their equivalence. Equation (11.48) determines the output response of the IO system (2.15). The Gram matrix GΓIO (t1 ) of ΓIOU (t, t0 ) ∈ RN ×r reads Zt1 GΓIO (t1 , t0 ) = ΓIOU (t1 , τ )ΓTIOU (t1 , τ )dτ ∈ RN ×N .

(11.68)

t0

Equation (11.56) and Equation (11.67) imply the following equivalent definition of ΘIO (t) : n o ΘIO (t) = L−1 C (sIn − A)−1 Binv , detΘIO (t) 6= 0, ∀t ∈ T0 . (11.69) The Gram matrix GΘIO (t1 , t0 ) of ΘIO (t1 , τ ) reads: Zt1 GΘIO (t1 , t0 ) = ΘIO (t1 , τ )ΘTIO (t1 , τ )dτ ∈ RN ×N .

(11.70)

t0

Lemma 194 The relationship between ΘIO (t1 , τ ) and GΘIO (t1 , t0 ) , between GΓIO (t1 , t0 ) and GIOU (s) 1) In order for the Gramm matrix GΘIO (t1 , t0 ) (11.70) to be nonsingular: detGΘIO (t1 , t0 ) 6= 0, any (t0 , t1 > t0 ) ∈ InT0 × InT0 ,

(11.71)

it is necessary and sufficient that the rows of ΘIO (t1 , τ ) are linearly independent on the time interval [t0 , t1 ] , for any (t0 , t1 > t0 ) ∈ InT0 × InT0 . In order for the Gramm matrix GΓIO (t1 , t0 ) (11.68) to be nonsingular it is necessary and sufficient that any of the following equivalent conditions holds: 2) The rows of ΓIOU (t, t0 ) are linearly independent on the time interval [t0 , t1 ] , for any (t0 , t1 > t0 ) ∈ InT0 × InT0 . 3) The rows of GIOU (s) are linearly independent on C.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 194 — #205

i

194

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY

Proof. Since the IO system is stationary and linear then its matrices ΓIOU (t, t0 ) and ΘIO (t, t0 ) are integrable in τ ∈ T0 , t1 ∈ InT0 . They satisfy the condition of Theorem 114. 1) The statement under 1) follows directly from Theorem 114 (Section 7.4) applied to ΘIO (t, t0 ). 2) The statement under 2) follows directly from Theorem 114 (Section 7.4) applied to ΓIOU (t, t0 ) and its Gramm matrix GΓIO (t1 , t0 ) (11.68). 3) The statement under 3) follows directly from Equations (11.64), (11.66), the linearity of the Laplace transform and condition 2). Equation (11.11) (Section 11.1) determines the output response of the IO system (2.15). Theorem 195 Output controllability criterion for the IO system (2.15) In order for the IO system (2.15) to be output controllable it is necessary and sufficient that any of the following equivalent conditions holds: 1) For any (t0 , t1 > t0 ) ∈ InT0 × InT0 the rows of the system output fundamental matrix ΓIOU (t, t0 ) (11.64) are linearly independent on the time interval [t0 , t1 ] . 2) The rows of the system transfer function matrix GIOU (s) (11.61) are linearly independent on C. 3) For any (t0 , t1 > t0 ) ∈ InT0 × InT0 the Gram matrix GΓIO (t1 , t0 ) (11.68) is nonsingular, i.e., the condition (11.72), detGΓIO (t1 , t0 ) 6= 0, any (t0 , t1 > t0 ) ∈ InT0 × InT0 ,

(11.72)

holds. 4) For any (t0 , t1 > t0 ) ∈ InT0 × InT0 the rows of the matrix product (µ) ΘIO (t, t0 )A−1 ν B

and of the matrix ΘIO (t, t0 ) are linearly independent on the time interval [t0 , t1 ] . 5) For any (t0 , t1 > t0 ) ∈ InT0 × InT0 the Gram matrices GΘIOAB (t1 , t0 ) , Zt1  T (µ) −1 (µ) GΘIOAB (t1 , t0 ) = ΘIO (t1 , τ )A−1 B Θ (t , τ )A B dτ (11.73) IO 1 ν ν t0

and GΘIO (t1 , t0 ) (11.70) are nonsingular, i.e., the conditions (11.74), detGΘIOAB (t1 , t0 ) 6= 0, any (t0 , t1 > t0 ) ∈ InT0 × InT0 ,

(11.74)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 195 — #206

i

11.3. ISO SYSTEM STATE CONTROLLABILITY

i

195

and (11.71) hold. 6) The control vector function satisfies any of the following linear timeinvariant [algebraic under a) or differential under b) and c)] vector equation: a) i h µ−1 ν−1 (t )Y . U(t) = ΓIOU (t1 , t)G−1 (t )U − Γ (t , t ) Y − Γ 1 IOY0 1 IOU0 1 ΓIO 1 0 0− 0− (11.75) b)  T (µ) R−1 Uµ (t) = ΘIO (t1 , t)A−1 B R G−1 ν ΘIOABR (t1 , t0 ) • h n o i (ν−1) ν−1 (ν) −1 • Y1 − A−1 A L Z (s) Y , (11.76) ν 0 N where R ∈ R(µ+1)r×(µ+1)r is any nonsingular square matrix, detR 6= 0, and Zt1  T (µ) −1 (µ) GΘIOABR (t1 , t0 ) = ΘIO (t1 , τ )A−1 B R Θ (t , τ )A B R dτ. IO 1 ν ν t0

c) n o (ν−1) (ν) −1 Z (s) Y0ν−1 (t , t ) Y + A L B (µ) Uµ (t) = Aν ΘTIO (t1 , t)G−1 1 ∓ . N ΘIO 1 0 (11.77) Appendix D.14 contains the proof of Theorem 195. Exercise 196 Test the output controllability of the selected IO physical plant in Exercise 40, Section 2.3.

11.3

ISO system state controllability

11.3.1

Definition

The general Definition 161 of a dynamical system state controllability takes the following form for the unperturbed ISO system (3.1), (3.2), (Section 3.1), i.e., unperturbed ISO plant (3.1), (3.2), (Section 3.1), described herein by (11.78), (11.79), dX(t) = AX(t) + BU(t), ∀t ∈ T0 , A ∈ Rnxn , U ∈Rr , B ∈ Rnxr , (11.78) dt Y(t) = CX(t) + U U(t), ∀t ∈ T0 , (C 6= ON ,n ) ∈ RN xn , U ∈ RN xr , n ≥ N : (11.79)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 196 — #207

i

196

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY

Definition 197 State controllability of the ISO system defined by (11.78), (11.79) The ISO system (11.78), (11.79) is mathematical state or physical state controllable if and only if for every initial mathematical state or physical state vector X0 ∈ Rn at t0 ∈ T and for any final mathematical state or physical state vector X1 ∈ Rn , respectively, there exist a moment t1 ∈ InT0 and a control U[t0 ,t1 ] on the time interval [t0 , t1 ] such that X(t1 ; t0 ; X0 ; 0d ; U[t0 ,t1 ] ) = X1 .

11.3.2

(11.80)

Criterion

Notice that µ = 0 the ISO system (11.78), (11.79). Let the state vector S and the matrix B(µ) = B(0) = B0 of the system (10.3), (10.4) (Section 10.4) be denoted, respectively, by X, S = X, and by B, B0 = B. Then the system (10.3) becomes the ISO system (11.78), (11.79), and Theorem 171 becomes the following theorem that completes (due to the explicit statement of condition 6) the well-known conditions for the state controllability of the ISO system (11.78), (11.79) (e.g., [18, Theorem 5-7, pp. 183, 184]): Theorem 198 Equivalent conditions for the state controllability of the ISO system (11.78), (11.79) For the ISO system (11.78), (11.79) to be state controllable it is necessary and sufficient that a) Any of the following equivalent conditions 1. through 6. holds: 1. All rows of Φ (t1 , .) B and Φ (t0 , .) B are linearly independent on [t0 , t1 ] for any (t0 , t1 > t0 ) ∈ InT0 × InT0 . 2. All rows of Φ (s) B = (sI − A)−1 B are linearly independent on C. 3. The Gramm matrix GΦB (t, t0 ) , Z

t1

GΦB (t1 , t0 ) =

Φ (t1 , τ ) BB T ΦT (t1 , τ ) dτ

(11.81)

t0

is nonsingular for any (t0 , t1 > t0 ) ∈ InT0 × InT0 ; i.e., rankGΦB (t1 , t0 ) = n, f or any (t0 , t1 > t0 ) ∈ InT0 × InT0 . 4. The n × nr controllability matrix C,   .. .. 2 .. .. n−1 C = B . AB . A B . ... . A B

(11.82)

(11.83)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 197 — #208

i

11.4. ISO SYSTEM OUTPUT CONTROLLABILITY

i

197

has the full rank n, rankC = n.

(11.84)

5. For every eigenvalue si (A) of the matrix A, equivalently  for every .. complex number s ∈ C, the n × (n + r) matrix sI − A . B has the full rank n, 

 .. rank sI − A . B = n, ∀s = si (A) ∈ C, ∀i = 1, 2, ...n, i.e., ∀s ∈ C.

(11.85)

6. The control vector function U (.) obeys U (t) = (Φ (t1 , t) B)T G−1 ΦB (t1 , t0 ) [X1 − Φ (t1 , t0 ) X0 ] , ∀t ∈ [t0 , t1 ] , t1 ∈ InT0 .

(11.86)

Exercise 199 Test the state controllability of the selected ISO physical plant in Exercise 48, Section 3.3.

11.4

ISO system output controllability

11.4.1

Definition

The general output controllability Definition 165 (Section 10.3) slightly simplifies for the unperturbed ISO system (11.78), (11.79). Definition 200 Output controllability of the ISO system (11.78), (11.79) The ISO system (11.78), (11.79) is output controllable if and only if for every initial output vector Y0 ∈ RN at t0 ∈ T and for any final output vector Y1 ∈ RN there exist a moment t1 ∈ InT0 and a control U[t0 ,t1 ] on the time interval [t0 , t1 ] such that Y(t1 ; t0 ; Y0 ; U[t0 ,t1 ] ) = Y1 .

11.4.2

Criteria

The matrix GISO (t, t0 ) is the matrix GS (t, t0 ) (10.23), (Section 10.5), applied to the ISO system (11.78), (11.79) n o GISO (t, t0 ) = L−1 {GISO (s)} = L−1 C (sI − A)−1 B + U = = CΦ(t, t0 )B + δ (t, t0 ) U.

(11.87)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 198 — #209

i

198

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY

where GISO (s) is the system transfer function matrix, GISO (s) = C (sI − A)−1 B + U.

(11.88)

The matrix HS (t1 , τ ) (10.21), (Section 10.5), of the ISO system (11.78), (11.79) reads HISO (t1 , t) = CΦ(t1 , t)B + δ (t1 , t) U = GISO (t1 , t),

(11.89)

HISO (s) = C (sI − A)−1 B + U = GISO (s) .

(11.90)

State space-output space product approach The following theorem is usually referred to in textbooks without its proof if U 6= ON,r : Theorem 201 Equivalent conditions for the output controllability of the ISO system (11.78), (11.79) For the ISO system (11.78), (11.79) to be output controllable it is necessary that   .. rank C . U = N (11.91) is valid and if this condition is satisfied then it is necessary that any of the following conditions 1.-6. holds and sufficient that any of the following conditions 1.-4., 6. holds, where the conditions 1.-4.,6. are equivalent: 1. All rows of the system matrix HISO (t1 , τ ) (11.159) are linearly independent on [t0 , t1 ] , for any (t0 , t1 > t0 ) ∈ InT0 × InT0 . 2. All rows of the system transfer function matrix GISO (s) (11.88) are linearly independent on C. 3. The Gramm matrix GGISO (t1 , t0 ) of GISO (t1 , t) , Z t1 GGISO (t1 , t0 ) = GISO (t1 , τ ) GTISO (t1 , τ ) dτ, t0

f or any (t0 , t1 > t0 ) ∈ InT0 × InT0 ,

(11.92)

is nonsingular; i.e., rankGGISO (t1 , t0 ) = n, any (t0 , t1 > t0 ) ∈ InT0 × InT0 .. 4. The output N × (n + 1) r controllability matrix  . . . . CISO = CB .. CAB .. CA2 B .. ..... CAn−1 B

CISO ,  .. .U

(11.93)

(11.94)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 199 — #210

i

11.4. ISO SYSTEM OUTPUT CONTROLLABILITY

i

199

has the full rank N, rankCISO = N.

(11.95)

5. For every eigenvalue si (A) of the matrix A,  equivalently for every  .. .. complex number s ∈ C, the N × (n + 2r) matrix C (sIn − A) . CB . U has the full rank N, 

 .. .. rank C (sIn − A) . CB . U = N, ∀s = si (A) ∈ C, ∀i = 1, 2, ...n, i.e., ∀s ∈ C.

(11.96)

6. The control vector function U (.) obeys the following equation: U (t) = (GISO (t1 , τ ))T G−1 GISO (t1 , t0 ) [Y1 − CΦ (t1 , t0 ) X0 ] , µ = 0, (11.97) Proof. Let µ = 0, the state vector S, the matrix V and the control vector V of the system (10.3), (10.4) (Section 10.4) be denoted, respectively, by the vector X, the matrix B and by the control vector U. The system (10.3), (10.4) becomes the ISO system (11.78), (11.79) and Theorem 173 becomes Theorem 201 except for its condition 2., which results from its condition under 1. due to Equations (11.87)-(11.159) and the linearity of the Laplace transform. Notice that Theorem 174 is valid for the ISO system (11.78), (11.79). Output space approach The inverse Laplace transform of Equation (3.13), (Section 3.1), leads to the following for D(t) ≡ 0d :  Rt  − [ΓISOU (τ )U(t − τ )dτ ] + 0 Y(t; X0 ; U) = , +ΓISOX0 (t)X0 , Z t Z t [ΓISOU (τ )U(t − τ )dτ ] = [ΓISOU (τ )U(t, τ )dτ ] = 0− 0− Z t Z t = [ΓISOU (t − τ )U(τ )dτ ] = [ΓISOU (t, τ )U(τ )dτ ] , (11.98) 0−

0−

where - ΓISO (t) is the inverse Laplace transform of the ISO system (3.1), (3.2) transfer function matrix GISO (s) relating the output Y to the control vector U, GISO (s) = C (sI − A)−1 B + U, (11.99)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 200 — #211

i

200

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY n o ΓISOU (t) = L− {GISOU (s)} = L− C (sI − A)−1 B + U ,

(11.100)

and - ΓISOX0 (t) is the inverse Laplace transform of the ISO system (3.1), (3.2) transfer function matrix GISOX0 (s) relating the output Y to to the initial state X0 , ΓISOX0 (t) = L− {GISOX0 (s)} , GISOX0 (s) = C (sI − A)−1 .

(11.101)

Theorem 202 Output controllability criterion for the ISO system (3.1), (3.2) For the ISO system (3.1), (3.2) to be output controllable it is necessary that (11.91) is valid and if this condition is satisfied then it is necessary and sufficient that any of the following equivalent conditions holds: 1) For any (t0 , t1 > t0 ) ∈ InT0 × InT0 the rows of the system output fundamental matrix ΓISO (t, t0 ) (11.100) are linearly independent on the time interval [t0 , t1 ] . 2) The rows of the system transfer function matrix GISO (s) (11.99) are linearly independent on C. 3) For any (t0 , t1 > t0 ) ∈ InT0 × InT0 the Gram matrix GΓISO (t1 , t0 ) (11.102), Zt1 GΓISO (t1 , t0 ) = ΓISO (t1 , τ )ΓTISO (t1 , τ )dτ ∈ RN ×N .

(11.102)

t0

is nonsingular, i.e., the condition (11.103), detGΓISO (t1 , t0 ) 6= 0, any (t0 , t1 > t0 ) ∈ InT0 × InT0 ,

(11.103)

holds. 4) The control vector function satisfies the following linear timeinvariant algebraic vector equation: U(t) = ΓTISO (t, t0 )G−1 ΓISO (t1 , t0 ) [Y1 − ΓISOx0 (t1 )X0− ] .

(11.104)

The proof results directly from the proof (Appendix D.14) of Theorem 195, Subsection 11.2.2. Exercise 203 Test the output controllability of the selected ISO physical plant in Exercise 48, Section 3.3.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 201 — #212

i

11.5. EISO SYSTEM STATE CONTROLLABILITY

11.5

EISO system state controllability

11.5.1

Definition

i

201

The general state controllability Definition 161, (Section 10.3), reads as follows for the unperturbed EISO system (4.1), (4.2) (Section 4.1), which is described in the sequel by (11.105), (11.106) dX(t) = AX(t) + B (µ) Uµ (t), ∀t ∈ T0 , µ ≥ 1, B (µ) ∈ Rn×(µ+1)r , dt (11.105) Y(t) = CX(t) + U U(t), ∀t ∈ T0 .

(11.106)

Definition 204 State controllability of the EISO system (11.105), (11.106) The EISO system (11.105), (11.106) is mathematical state or physical state controllable if and only if for every initial mathematical state or physical state vector X0 ∈ Rn at t0 ∈ T and for any final mathematical state or physical state vector X1 ∈ Rn , respectively, there exist a moment t1 ∈ InT0 and an extended control Uµ[t0 ,t1 ] on the time interval [t0 , t1 ] such that X(t1 ; t0 ; X0 ; Uµ[t0 ,t1 ] ) = X1 .

11.5.2

Criterion

Let S be replaced by X in the system (10.3), (10.4) (Section 10.4), which then becomes the EISO system (11.105), (11.106) (Section 4.1) and Theorem 171 (Section 10.4) takes the following form: Theorem 205 State controllability criteria for the EISO system (11.105), (11.106) For the EISO system (11.105), (11.106) to be state controllable it is necessary and sufficient that any of the following equivalent conditions 1. through 5, 6.a) or 6.b) holds due to µ > 0: 1. All rows of both matrices Φ (t1 , t)B(µ) and Φ (t0 , t)B(µ) are linearly independent on [t0 , t1 ] for any (t0 , t1 > t0 ) ∈ InT0 × InT0 . 2. All rows of Φ (s) B(µ) = (sI − A)−1 B(µ) are linearly independent on C. 3. The Gram matrix GΦB (t1 , t0 ) (11.107) of Φ (t1 , t) B (µ) , Z

t1

GΦB (t1 , t0 ) =

 T Φ (t1 , τ ) B(µ) B(µ) ΦT (t1 , τ ) dτ,

(11.107)

t0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 202 — #213

i

202

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY

is nonsingular for any (t0 , t1 > t0 ) ∈ InT0 × InT0 ; i.e., rank GΦB (t1 , t0 ) = n, f or any (t0 , t1 > t0 ) ∈ InT0 × InT0 .

(11.108)

4. The n × n (µ + 1) r controllability matrix C,   .. n−1 (µ) (µ) .. (µ) .. 2 (µ) ∈ Rn×n(µ+1)r , C = B . AB . A B ... . A B

(11.109)

has the full rank n, rank C = n.

(11.110)

5. For every eigenvalue si (A) of the matrix A, equivalently for every  .. (µ) complex number s ∈ C, the n × (n + (µ + 1) r) matrix sI − A . B has the full rank n,   .. (µ) rank sI − A . B = n, ∀s = si (A) ∈ C, ∀i = 1, 2, ..., n, ,∀s ∈ C. (11.111) 6. a) If µ > 0 the control vector function U (.) obeys either T  T −1 Uµ (t) = T T B(µ) ΦT (t1 , t) G−1 ΦBT (t1 , t0 ) [X1 − Φ (t1 , t0 ) X0 ] , µ > 0, (11.112) where T ∈ R(µ+1)r×(µ+1)r is any nonsingular matrix, detT 6= 0, and the matrix GΦBT (t1 , t0 ) is the Gram matrix of Φ (t1 , t) B(µ) T, Z

t1

GΦBT (t1 , t0 ) =

T  Φ (t1 , τ ) B(µ) T T T B(µ) ΦT (t1 , τ ) dτ,

(11.113)

t0

b) or the control vector function U (.) obeys B(µ) Uµ (t) = ΦT (t1 , t) G−1 Φ (t1 , t0 ) [X1 − Φ (t1 , t0 ) X0 ] , µ > 0,

(11.114)

where GΦ (t1 , t0 ) is the Gram matrix of Φ (t1 , t) , Z

t1

GΦ (t1 , t0 ) =

Φ (t1 , τ ) ΦT (t1 , τ ) dτ.

(11.115)

t0

Exercise 206 Test the state controllability of the selected EISO physical plant in Exercise 61, Section 4.3.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 203 — #214

i

11.6. EISO SYSTEM OUTPUT CONTROLLABILITY

11.6

EISO system output controllability

11.6.1

Definition

i

203

The general output controllability Definition 165 (Section 10.5) preserves its form in the framework of the unperturbed EISO system (11.105), (11.106), for which µ > 0. Definition 207 Output controllability of the EISO system (11.105), (11.106) The EISO system (11.105), (11.106) is the output controllable if and only if for every initial output vector Y0 ∈ RN at t0 ∈ T and for any final output vector Y1 ∈ RN there exist a moment t1 ∈ InT0 and an extended control Uµ[t0 ,t1 ] on the time interval [t0 , t1 ] such that Y(t1 ; t0 ; Y0 ; Uµ[t0 ,t1 ] ) = Y1 .

11.6.2

Criteria

State space-output space product approach The system (10.3), (10.4) with µ > 0 becomes the EISO system (11.105), (11.106) when we replace in the former S by X and Q by U. This implies that HS (t, t0 ) (10.21), (10.22) and Theorem 173 (Section 10.5) become, respectively, Equation (11.116),   (µ) e HEISO (t, t0 ) = CΦ(t, t0 )B + U ∈ RN ×(µ+1)r ,   . . . . . . . . e = ON,r . U . ON,r . .... . ON,r ∈ RN ×(µ+1)r , U (11.116) Equation (11.117), HEISO (s) = L {HEISO (t, t0 )} =  e ∈ CN ×(µ+1)r , = C (sI − A)−1 B(µ) + s−1 U 

(11.117)

and the following theorem: Theorem 208 Output controllability of the EISO system (11.105), (11.106) For the EISO system (11.105), (11.106) to be output controllable it is necessary that   .. rank C . U = N (11.118)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 204 — #215

i

204

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY

and if this condition is satisfied then it is necessary that any of the following conditions 1.-6. holds and sufficient that any of the following conditions 1.4., 6. holds, where the conditions 1.-4., 6. are equivalent: 1. All rows of the system matrix HEISO (t, t0 ) (11.116) are linearly independent on [t0 , t1 ] for any (t0 , t1 > t0 ) ∈ InT0 × InT0 . 2. All rows of the system matrix HEISO (s) (11.117) are linearly independent on C. 3.The Gram matrix GHEISO (t1 , t0 ) of the system matrix HEISO (t, t0 ), Z t1 T GHEISO (t1 , t0 ) = HEISO (t1 , τ ) HEISO (t1 , τ ) dτ, t0

f or any (t0 , t1 > t0 ) ∈ InT0 × InT0 ,

(11.119)

is nonsingular; i.e., rankGHEISO (t1 , t0 ) = n, any (t0 , t1 > t0 ) ∈ InT0 × InT0 .

(11.120)

4. The output N × (n + 1) (µ + 1) r controllability matrix CEISOout ,   .. (µ) .. (µ) .. 2 (µ) .. n−1 (µ) .. e CEISOout = CB . CAB . CA B . .... CA B . U (11.121) has the full rank N, rankCEISOout = N.

(11.122)

5. For every eigenvalue si (A) of the matrix A, equivalently for every complex number s ∈ C, the N × (n + 2 (µ + 1) r) matrix CoutEISO ,   .. (µ) .. e CoutEISO = C (sI − A) . CB .U has the full rank N, 

rankCoutEISO

 .. (µ) .. e = rank C (sI − A) . CB . U = N,

∀s = si (A) ∈ C, ∀i = 1, 2, ...n, i.e., ∀s ∈ C.

(11.123)

6. The control vector function U (.) obeys the following equation for any nonsingular matrix R, where the matrix R ∈ R(µ+1)r×(µ+1)r : ( ) (HhEISO (t1 , t) R)T • G−1 HEISOR (t1 , t0 )i• −1 µ , (11.124) R U (t) = • Y1 −CΦ (t1 , t0 ) X0 − Q(µ−1) Uµ−1 0 where GHEISOR (t1 , t0 ) is the Gram matrix of HEISO (t1 , t) R, Zt1 T GHEISOR (t1 , t0 ) = HEISO (t1 , τ ) RRT HEISO (t1 , τ ) dτ.

(11.125)

t0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 205 — #216

i

11.6. EISO SYSTEM OUTPUT CONTROLLABILITY

i

205

Output space approach The system response Y(t; X0 ; U) results as the inverse Laplace transform of the first Equation (4.27), (Section 4.1), applied to the undisturbed system (11.105), (11.106): Y(t; X0 ; U) = Z

t

+ ΓEISOX0 (t)X0 , [ΓEISO (τ )U(t − τ )dτ ] + ΓEISOU0 (t)Uµ−1 0 Z t Z t [ΓEISO (τ )U(t, τ )dτ ] = [ΓEISO (τ )U(t − τ )dτ ] = 0− 0− Z t Z t = [ΓEISO (t − τ )U(τ )dτ ] = [ΓEISO (t, τ )U(τ )dτ ] , (11.126)

=

0−

0−

0−

where - The system output fundamental matrix ΓEISO (t) is the inverse Laplace transform of the system transfer function GU (s) , Equation (4.34), (Subsection 4.1.2), relating the output Y to the control vector U : GU (s) = C (sI − A)−1 B(µ) Sr(µ) (s) + U, (11.127) n o ΓEISO (t) = L−1 {GP U (s)} = L−1 C (sI − A)−1 B(µ) Sr(µ) (s) + U , (11.128) - ΓEISOU0 (t) is the inverse Laplace transform of the transfer function matrix GEISOU0 (s) relative to the extended initial control vector Uµ−1 , 0 (11.129) GEISOU0 (s) = −C (sI − A)−1 B(µ) Zr(µ−1) (s), n o ΓEISOU0 (t) = L−1 {GEISOU0 (s)} = L−1 −C (sI − A)−1 B(µ) Zr(µ−1) (s) , (11.130) - ΓEISOX0 (t) is the inverse Laplace transform of the transfer function matrix GEISOX0 (s) relative to the initial state vector X0 , GEISOX0 (s) = C (sI − A)−1 , n o ΓEISOX0 (t) = L−1 {GEISOX0 (s)} = L−1 C (sI − A)−1 .

(11.131) (11.132)

Theorem 209 Output controllability of the EISO system (11.105), (11.106)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 206 — #217

i

206

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY

For the EISO system (11.105), (11.106) to be output controllable it is necessary that (11.118) is valid and if it is satisfied then it is necessary and sufficient that any of the following equivalent conditions holds: 1) For any (t0 , t1 > t0 ) ∈ InT0 × InT0 the rows of the system output fundamental matrix ΓEISO (t1 , t) (11.128) are linearly independent on the time interval [t0 , t1 ] . 2) The rows of the system transfer function matrix GEISO (s) (11.127) are linearly independent on C. 3) For any (t0 , t1 > t0 ) ∈ InT0 × InT0 the Gram matrix GΓEISO (t1 , t0 ) (11.133) Zt1 GΓEISO (t1 , t0 ) = ΓEISO (t1 , τ )ΓTEISO (t1 , τ )dτ ∈ RN ×N .

(11.133)

t0

is nonsingular, i.e., the condition (11.134), detGΓEISO (t1 , t0 ) 6= 0, any (t0 , t1 > t0 ) ∈ InT0 × InT0 ,

(11.134)

holds. 4) The control vector function satisfies the following linear timeinvariant algebraic vector equation: U(t) = ΓTEISO (t1 , t)G−1 ΓEISO (t1 , t0 ) • i • Y1 − ΓEISOU0 (t1 )Uµ−1 − Γ (t )X . 1 0 EISOX − 0 0 h

(11.135)

The proof is essentially the same as the proof of Theorem 195, Subsection 11.2.2. Exercise 210 Test the output controllability of the selected EISO physical plant in Exercise 61, Section 4.3.

11.7

HISO system state controllability

11.7.1

Definition

The vector Rα−1 (5.3) is the state vector of the HISO system (5.1), (5.2), (Section 5.1). The following definition is the general Definition 161 of the state controllability adjusted to the HISO system (5.1), (5.2):

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 207 — #218

i

11.7. HISO SYSTEM STATE CONTROLLABILITY

i

207

Definition 211 State controllability of the HISO system (5.1), (5.2) The HISO system (5.1), (5.2) is mathematical state or physical state controllable if and only if for every initial mathematical state or physical state vector Rα−1 ∈ Rαρ at t0 ∈ T and for any final mathematical 0 state or physical state vector Rα−1 ∈ Rαρ , respectively, there exist a moment 1 t1 ∈ InT0 and an extended control Uµ[t0 ,t1 ] on the time interval [t0 , t1 ] such that ; 0d ; Uµ[t0 ,t1 ] ) = Rα−1 . (11.136) Rα−1 (t1 ; t0 ; Rα−1 0 1

11.7.2

Criterion

The unperturbed HISO system (5.1), (5.2) is described in the compact form by A(α) Rα (t) = B (µ) Uµ (t), ∀t ∈ T0 , Y(t) =

Ry(α) Rα (t)

+ U U(t) =

Ry(α−1) Rα−1 (t)

(11.137)

+ U U(t), ∀t ∈ T0 . (11.138)

If we replace α by ν, ρ by N and R by Y then the matrix A(α) ,   .. .. .. (α) A = A0 . A1 . ... . Aα , becomes the matrix A(ν) (2.13) (Section 2.1),   .. .. .. (α) α = ν =⇒ A = A0 . A1 . ... . Aν = A(ν) , and Equation (11.137) becomes Equation (11.3) (Section 11.3). This explains that Equation (11.137) and Equation (11.138) can be set in the following equivalent EISO forms: dX(t) = AX(t) + B(µ) Uµ (t), ∀t ∈ T0 , µ ≥ 1, B(µ) ∈ Rn×(µ+1)r , n = αρ, dt (11.139) Y(t) = CX(t) + U U(t), ∀t ∈ T0 . (11.140) where (for details see Theorem 49 in Section 4.1 and Subsection 11.1.2 of Section 11.1):   α > 1 =⇒ Xi = R(i−1) ∈ Rρ , ∀i = 1, 2, ..., α, i.e.,     T  T    . . . . . . T . . . . . . T T T T (1)T (α−1) = R .R . ... . R = X = X1 . X2 . ... . Xα       = Rα−1 ∈ Rn , n = αρ, α = 1 =⇒ X1 = X = R ∈ Rρ ,

(11.141)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 208 — #219

i

208

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY

       

α > 1 =⇒ A = Oρ Iρ ... Oρ Oρ Oρ Oρ ... Oρ Oρ Oρ Oρ ... Iρ Oρ ... ... ... ... ... Oρ Oρ ... Oρ Iρ −1 A −1 A −1 A −A−1 A −A ... −A −A 0 1 α−2 α−1 α α α α ρ×ρ , α = 1 =⇒ A = −A−1 A ∈ R 0 1

     ∈ Rαρ×αρ ,   

(11.142) Binv

     O(α−1)ρ,ρ  n×ρ ∈R , α > 1, = ∈ Rn×ρ , Iρ   Iρ ∈ Rρ×ρ , α = 1,

(µ) has its general form: A−1 α exists and B   .. .. .. (µ) B = B0 . B1 . ... . Bµ ∈ Rρx(µ+1)r , B (0) = B0 (µ) B(µ) = Binv A−1 ∈ Rn×(µ+1)r , α B

C=

Ry(α−1)

N ×n

∈R

, Q = U.

(11.143)

(11.144) (11.145) (11.146)

In view of the above explanations, Theorem 171 (Section 10.4) is applicable to the HISO system (5.1), (5.2) in the following form: Theorem 212 State controllability criteria for the HISO system (5.1), (5.2) For the HISO system (5.1), (5.2) to be state controllable it is necessary and sufficient that: a) Any of the following equivalent conditions 1. through 6.a) holds if µ = 0, b) Any of the following equivalent conditions 1. through 5, 6.b) or 6.c) holds if µ > 0. In a) and b) Equations (11.142)-(11.145) induced by (11.139) determine the matrices: 1. All rows of both matrices Φ (t1 , t) B(µ) and Φ (t0 , t) B(µ) are linearly independent on [t0 , t1 ] for any (t0 , t1 > t0 ) ∈ InT0 × InT0 . 2. All rows of Φ (s) B(µ) = (sI − A)−1 B(µ) and (sI − A)−1 Binv are linearly independent on C. 3. The Gram matrix GΦB (t1 , t0 ) (11.147) of Φ (t1 , t) B(µ) , Z t1  T GΦB (t1 , t0 ) = Φ (t1 , τ ) B(µ) B(µ) ΦT (t1 , τ ) dτ (11.147) t0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 209 — #220

i

11.7. HISO SYSTEM STATE CONTROLLABILITY

i

209

is nonsingular for any (t0 , t1 > t0 ) ∈ InT0 × InT0 ; i.e., rankGΦB (t1 , t0 ) = n, f or any (t0 , t1 > t0 ) ∈ InT0 × InT0 .

(11.148)

4. The n × n(µ + 1)r controllability matrix CHISO ,   .. n−1 (µ) (µ) .. (µ) .. 2 (µ) .. ∈ Rn×n(µ+1)r , (11.149) CHISO = B . AB . A B . ... . A B has the full rank n, rankCHISO = n.

(11.150)

5. For every eigenvalue si (A) of the matrix A, equivalently for every  .. (µ) complex number s ∈ C, the n × (n + (µ + 1) r) matrix sI − A . B has the full rank n, 

 .. (µ) rank sI − A . B = n, ∀s = si (A) ∈ C, ∀i = 1, 2, ...n, i.e., ∀s ∈ C.

(11.151)

6. a) If µ = 0 the control vector function U (.) obeys U (t) = (Φ (t1 , t) B0 )T G−1 ΦB (t1 , t0 ) [X1 − Φ (t1 , t0 ) X0 ] , ∀t ∈ [t0 , t1 ] , f or any (t0 , t1 > t0 ) ∈ InT0 × InT0 , µ = 0.

(11.152)

b) If µ > 0 the control vector function U (.) obeys either T  T −1 Uµ (t) = T T B(µ) ΦT (t1 , t) G−1 ΦBT (t1 , t0 ) [X1 − Φ (t1 , t0 ) X0 ] , µ > 0, (11.153) (µ+1)r×(µ+1)r where T ∈ R is any nonsingular matrix and GΦBT (t1 , t0 ) is the Gram matrix of Φ (t1 , t) B(µ) T, Z t1  T GΦBT (t1 , t0 ) = Φ (t1 , τ ) B(µ) T T T B(µ) ΦT (t1 , τ ) dτ. (11.154) t0

c) or the control vector function U (.) obeys B

(µ)

Uµ (t) = ΦT (t1 , t) G−1 Φ (t1 , t0 ) [X1 − Φ (t1 , t0 ) X0 ] , µ > 0,

where GΦ (t1 , t0 ) is the Gram matrix of Φ (t1 , t) , Z t1 GΦ (t1 , t0 ) = Φ (t1 , τ ) ΦT (t1 , τ ) dτ.

(11.155)

(11.156)

t0

Exercise 213 Test the state controllability of the selected HISO physical plant in Exercise 68, Section 5.3.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 210 — #221

i

210

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY

11.8

HISO system output controllability

11.8.1

Definition

The general output controllability Definition 165, (Section 10.3), adjusted to the nondisturbed HISO system (11.137), (11.138) reads: Definition 214 Output controllability of the HISO system (11.137), (11.138) The HISO system (11.137), (11.138) is output controllable if and only if for every initial output vector Y0 ∈ RN at t0 ∈ T and for any final output vector Y1 ∈ RN there exist a moment t1 ∈ InT0 and an extended control Uµ[t0 ,t1 ] on the time interval [t0 , t1 ] such that Y(t1 ; t0 ; Y0 ; Uµ[t0 ,t1 ] ) = Y1 .

11.8.2

Criteria

State space-output space product approach An equivalent form of the HISO system (11.137), (11.138) is its EISO form (11.139), (11.140), (Section 11.7), presented as follows: dX(t) = AX(t) + B(µ) Uµ (t), ∀t ∈ T0 , µ > 0, B(µ) ∈ Rn×(µ+1)r , n = αρ, dt (11.157) Y(t) = CX(t) + U U(t), ∀t ∈ T0 . (11.158) Equations (11.141)-(11.146) link the HISO system (11.137), (11.138) with its EISO form (11.157), (11.158). This makes Theorem 208 directly applicable to the HISO system (11.137), (11.138) for which h i (µ) e ∈ RN ×(µ+1)r , B(µ) = Binv A−1 HHISO (t, t0 ) = CΦ(t, t0 )B + U , α B   e = ON,r ... U ... ON,r ... .... ... ON,r ∈ RN ×(µ+1)r , U (11.159)

e= HHISO (s) = L {HHISO (t, t0 )} = C (sI − A)−1 B(µ) + s−1 U   e ∈ CN ×(µ+1)r , = CΦ(s)B(µ) + s−1 U (11.160) where Φ(t, t0 ) = eA(t−t0 ) . The matrices A, Binv , B(µ) , and C are defined, respectively, in (11.142), (11.143), (11.144), and (11.146), (Section 11.7).

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 211 — #222

i

11.8. HISO SYSTEM OUTPUT CONTROLLABILITY

i

211

Theorem 215 Output controllability of the HISO system (11.137), (11.138) For the HISO system (11.137), (11.138) to be output controllable it is necessary that   .. rank C . U = N (11.161) and if this condition is satisfied then it is necessary that any of the following conditions 1.-6. holds and sufficient that any of the following conditions 1.4., 6. holds, where the conditions 1.-4.,6. are equivalent: 1. All rows of the system matrix HHISO (t, t0 ) (11.159) are linearly independent on [t0 , t1 ] for any (t0 , t1 > t0 ) ∈ InT0 × InT0 . 2. All rows of the system matrix HHISO (s) (11.160) are linearly independent on C. 3. The Gram matrix GHISO (t1 , t0 ) of HHISO (t, t0 ), Z t1 T GHISO (t1 , t0 ) = HHISO (t1 , τ ) HHISO (t1 , τ ) dτ, t0

f or any (t0 , t1 > t0 ) ∈ InT0 × InT0 ,

(11.162)

is nonsingular; i.e., rankGHISO (t1 , t0 ) = n, any (t0 , t1 > t0 ) ∈ InT0 × InT0 .

(11.163)

4. The output N × (n + 1) (µ + 1) r controllability matrix CHISOout ,   .. .. .. .. .. e 2 n−1 CHISOout = CB . CAB . CA B . .... CA B .U (11.164) has the full rank N, rankCHISOout = N.

(11.165)

5. For every eigenvalue si (A) of the matrix A, equivalently for every complex number s ∈ C, the N × (n + 2 (µ + 1) r) matrix   .. (µ) .. e C (sI − A) . CB . U has the full rank N, 

 .. (µ) .. e rank C (sI − A) . CB . U = N, ∀s = si (A) ∈ C, ∀i = 1, 2, ...n, i.e., ∀s ∈ C.

(11.166)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 212 — #223

i

212

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY

6. The control vector function U (.) obeys the following equation for any nonsingular matrix R, where the matrix R ∈ R(µ+1)r×(µ+1)r : ( ) T −1 (H (t , t) R) • G (t , t ) • 1 1 0 HISO HISOR h i , R−1 Uµ (t) = (11.167) • Y1 −CΦ (t1 , t0 ) X0 − U (µ−1) Uµ−1 0 where GHISOR (t1 , t0 ) is the Gram matrix of HHISO (t1 , t) R, Zt1 T GHISOR (t1 , t0 ) = HHISO (t1 , τ ) RRT HHISO (t1 , τ ) dτ.

(11.168)

t0

Output space approach The system response Y(t; Rα−1 ; U) results as the inverse Laplace transform 0 of Equation (5.5), (Section 5.1), applied to the undisturbed system (5.1), (5.2): ; U) = Y(t; Rα−1 0 Z

t

, [ΓHISO (τ )U(t − τ )dτ ] + ΓHISOU0 (t)Uµ−1 + ΓHISOR0 (t)Rα−1 0 0 0− Z t Z t [ΓHISO (τ )U(t − τ )dτ ] = [ΓHISO (τ )U(t, τ )dτ ] = 0− 0− Z t Z t = [ΓHISO (t − τ )U(τ )dτ ] = [ΓHISO (t, τ )U(τ )dτ ] , (11.169)

=

0−

0−

where - The system output fundamental matrix ΓHISO (t) is the inverse Laplace transform of the system transfer function GHISO (s) relating the output Y to the control vector U :  −1 B (µ) Sr(µ) (s) + U, (11.170) GHISO (s) = Ry(α) Sρ(α) (s) A(α) Sρ(α) (s)

= L−1



ΓHISO (t) = L−1 {GHISO (s)} =  −1  (α) (α) (µ) (µ) (α) (α) B Sr (s) + U , Ry Sρ (s) A Sρ (s)

(11.171)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 213 — #224

i

11.8. HISO SYSTEM OUTPUT CONTROLLABILITY

i

213

- ΓHISOU0 (t) is the inverse Laplace transform of the system transfer function matrix GHISOU0 (s) relative to the extended initial control vector Uµ−1 , 0  −1 B (µ) Zr(µ−1) (s), GHISOU0 (s) = −Ry(α) Sρ(α) (s) A(α) Sρ(α) (s)

= L−1



ΓHISOU0 (t) = L−1 {GHISOU0 (s)} =   −1 (α) (α) (α) (α) (µ) (µ−1) B Zr (s) , −Ry Sρ (s) A Sρ (s)

(11.172)

(11.173)

- ΓHISOR0 (t) is the inverse Laplace transform of is the transfer function matrix GHISOR0 (s) relative to the initial state vector Rα−1 , 0 GHISOR0 (s) = −1 = Ry(α) Sρ(α) (s) A(α) Sρ(α) (s) A(α) Zρ(α−1) (s) − Ry(α) Zρ(α−1) (s), 

(11.174)

ΓHISOR0 (t) = L−1 {GHISOR0 (s)} =    −1 −1 (α) (α) (α) (α) (α) (α−1) (α) (α−1) =L Ry Sρ (s) A Sρ (s) A Zρ (s) − Ry Zρ (s) . (11.175) Theorem 216 Output controllability of the HISO system (11.137), (11.138) For the HISO system (11.137), (11.138) to be output controllable it is necessary that (11.161) is fulfilled and if it is satisfied then it is necessary and sufficient that any of the following equivalent conditions holds: 1) For any (t0 , t1 > t0 ) ∈ InT0 × InT0 the rows of the system output fundamental matrix ΓHISO (t1 , t) (11.171) are linearly independent on the time interval [t0 , t1 ] . 2) The rows of the system transfer function matrix GHISO (s) (11.170) are linearly independent on C. 3) For any (t0 , t1 > t0 ) ∈ InT0 × InT0 the Gram matrix GΓHISO (t1 , t0 ) (11.176) Zt1 GΓHISO (t1 , t0 ) = ΓHISO (t1 , τ )ΓTHISO (t1 , τ )dτ ∈ RN ×N .

(11.176)

t0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 214 — #225

i

214

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY

is nonsingular, i.e., the condition (11.177), detGΓHISO (t1 , t0 ) 6= 0, any (t0 , t1 > t0 ) ∈ InT0 × InT0 ,

(11.177)

holds. 4) The control vector function satisfies the following linear timeinvariant algebraic vector equation: U(t) = ΓTHISO (t1 , t)G−1 ΓHISO (t1 , t0 ) • h i α−1 • Y1 − ΓHISOU0 (t1 )Uµ−1 − Γ . (t )R 1 HISOR − 0 0 0

(11.178)

The proof repeats essentially the proof of Theorem 195, (Subsection 11.2.2). Exercise 217 Test the output controllability of the selected HISO physical plant in Exercise 68, Section 5.3.

11.9

IIO system state controllability

11.9.1

Definition

The unperturbed IIO system (6.1), (6.2), (Section 6.1), has the following form: A(α) Rα (t) = B (µ) Uµ (t), ∀t ∈ T0 , (11.179) E (ν) Yν (t) = Ry(α−1) Rα−1 (t) + Q(µ) Uµ (t), ∀t ∈ T0 , ν > 0.

(11.180)

The system has (see Note 72, Section 6.1): - The internal state vector SIIOI , SIIOI = Rα−1 ∈ Rn , n = αρ,

(11.181)

- The output state vector SIIOO ,   .. (ν−1)T T ν−1 T .. (1)T .. SIIOO = Y = Y .Y . ... . Y ∈ Rn , n = νN, (11.182) - And the full state vector SIIOf , which is its state vector SIIO . It is composed of the internal state vector SIIOI = Rα−1 , Equation (11.181), and of the output state vector SIIOO = Yν−1 , Equation (11.182),    α−1  SIIOI R SIIOf = = = SIIO ∈ Rn , n = αρ + νN. (11.183) SIIOO Yν−1

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 215 — #226

i

11.9. IIO SYSTEM STATE CONTROLLABILITY

i

215

The internal state vector SIIOI is independent of the output state vector SIIOO in view of Equation (11.179). However, the output state vector SIIOO depends on the internal state vector SIIOI . They both determine the system (full) state vector SIIO . These facts lead to the following definitions (see Definition 161, Section 10.3): Definition 218 Internal state controllability of the IIO system defined by (11.179), (11.180) The IIO system (11.179), (11.180) is the mathematical internal state or physical internal state controllable if and only if for every ini∈ Rαρ tial mathematical internal state or physical internal state vector Rα−1 0 at t0 ∈ T and for any final mathematical internal state or physical internal ∈ Rαρ , respectively, there exist a moment t1 ∈ InT0 and state vector Rα−1 1 an extended control Uµ[t0 ,t1 ] on the time interval [t0 , t1 ] such that ; 0d ; Uµ[t0 ,t1 ] ) = Rα−1 . Rα−1 (t1 ; t0 ; Rα−1 0 1

(11.184)

Definition 219 State controllability of the IIO system (11.179), (11.180) The IIO system (11.179), (11.180) is the mathematical state or physical state controllable if and only if for every initial mathematical state or physical state vector  α−1  R0 ∈ Rn , n = αρ + νN, SIIO0 = Y0ν−1 at t0 ∈ T and for any final mathematical state or physical state vector  α−1  R1 SIIO1 = ∈ Rn , Y1ν−1 respectively, there exist a moment t1 ∈ InT0 and an extended control Uµ[t0 ,t1 ] on the time interval [t0 , t1 ] such that " # α−1 (t ; t ; Rα−1 ; 0 ; Uµ R ) 1 0 d 0 [t0 ,t1 ] SIIO (t1 ; t0 ; SIIO0 ; 0d ; Uµ[t0 ,t1 ] ) = = Yν−1 (t1 ; t0 ; Rα−1 ; Y0ν−1 ; 0d ; Uµ[t0 ,t1 ] ) 0  α−1  R1 = SIIO1 = . (11.185) Y1ν−1

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 216 — #227

i

216

i

CHAPTER 11. VARIOUS SYSTEMS CONTROLLABILITY

11.9.2

Criterion

Equation (11.179) is the same as Equation (11.137) (Subsection 11.7.2 of Section 11.7). The equivalent form of the latter, hence also of the former, is the EISO form (11.139), i.e., b dX(t) b b µ (t), ∀t ∈ T0 , µ ≥ 1, B bX(t) b (µ) U b (µ) ∈ Rnb×(µ+1)r , n =A +B b = αρ, dt (11.186) b where the vector X is defined as X in (11.141) that now reads:  b i = R(i−1) ∈ Rρ , ∀i = 1, 2, ..., α, i.e., α > 1 =⇒ X   T  T   . . . . . . T . . . . . . T (1)T (α−1) T T T b b . ... . X b b .X = R . R . ... . R = X= X α 1 2       n, = Rα−1 ∈ Re b 1 = X= b R ∈ Rρ , α = 1 =⇒ X (11.187)    

and the matrices are determined in Equations (11.142)-(11.145), (Section 11.7), i.e.,

       

Oρ Oρ Oρ ... Oρ −A−1 α A0

b= α > 1 =⇒ A Iρ ... Oρ Oρ Oρ ... Oρ Oρ Oρ ... Iρ Oρ ... ... ... ... Oρ ... Oρ Iρ −1 A −1 A −A−1 A ... −A −A 1 α−2 α−1 α α α b = −A−1 A0 ∈ Rρ×ρ , α = 1 =⇒ A 1

     ∈ Rαρ×αρ ,   

(11.188) b inv B

     O(α−1)ρ,ρ  n b ×ρ ∈R , α > 1, = ∈ Rnb×ρ , Iρ   ρ×ρ Iρ ∈ R , α = 1,

(11.189)

b (µ) and B b (µ) , B b (µ)

B

  .. .. .. b (0) = B0 = B0 . B1 . ... . Bµ ∈ Rρx(µ+1)r , B b (µ) = B b inv A−1 B b (µ) ∈ Rnb×(µ+1)r . B α

(11.190) (11.191)

Theorem 212 is valid unchanged for the IIO system (11.179), (11.180) internal state controllability:

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 217 — #228

i

11.9. IIO SYSTEM STATE CONTROLLABILITY

i

217

Theorem 220 Internal state controllability criteria for the IIO system (11.179), (11.180)

For the IIO system (11.179), (11.180) to be internal state controllable it is necessary and sufficient that:
a) any of the following equivalent conditions 1 through 6.a) holds if $\mu = 0$;
b) any of the following equivalent conditions 1 through 5, 6.b), or 6.c) holds if $\mu > 0$.
In a) and b) Equations (11.187)-(11.191) induced by (11.179) determine the matrices:

1. All rows of both matrices $\hat{\Phi}(t_1,t)\hat{B}^{(\mu)}$ and $\hat{\Phi}(t_0,t)\hat{B}^{(\mu)}$ are linearly independent on $[t_0,t_1]$ for any $(t_0,\,t_1>t_0)\in \mathrm{In}\,T_0\times\mathrm{In}\,T_0$.

2. All rows of $\hat{\Phi}(s)\hat{B}^{(\mu)} = (sI-\hat{A})^{-1}\hat{B}^{(\mu)}$ and of $(sI-\hat{A})^{-1}\hat{B}_{inv}$ are linearly independent on $\mathbb{C}$.

3. The Gram matrix $G_{\hat{\Phi}\hat{B}}(t_1,t_0)$ (11.192) of $\hat{\Phi}(t_1,t)\hat{B}^{(\mu)}$,
$$G_{\hat{\Phi}\hat{B}}(t_1,t_0) = \int_{t_0}^{t_1}\hat{\Phi}(t_1,\tau)\hat{B}^{(\mu)}\left(\hat{B}^{(\mu)}\right)^T\hat{\Phi}^T(t_1,\tau)\,d\tau, \quad (11.192)$$
is nonsingular for any $(t_0,\,t_1>t_0)\in \mathrm{In}\,T_0\times\mathrm{In}\,T_0$; i.e.,
$$\operatorname{rank}G_{\hat{\Phi}\hat{B}}(t_1,t_0) = \hat{n}\ \text{ for any } (t_0,\,t_1>t_0)\in \mathrm{In}\,T_0\times\mathrm{In}\,T_0. \quad (11.193)$$

4. The $\hat{n}\times\hat{n}(\mu+1)r$ controllability matrix $\hat{\mathcal{C}}$,
$$\hat{\mathcal{C}} = \begin{bmatrix}\hat{B}^{(\mu)} & \hat{A}\hat{B}^{(\mu)} & \hat{A}^2\hat{B}^{(\mu)} & \cdots & \hat{A}^{\hat{n}-1}\hat{B}^{(\mu)}\end{bmatrix}\in R^{\hat{n}\times\hat{n}(\mu+1)r}, \quad (11.194)$$
has the full rank $\hat{n}$,
$$\operatorname{rank}\hat{\mathcal{C}} = \hat{n}. \quad (11.195)$$

5. For every eigenvalue $s_i(\hat{A})$ of the matrix $\hat{A}$, equivalently for every complex number $s\in\mathbb{C}$, the $\hat{n}\times(\hat{n}+(\mu+1)r)$ matrix $\begin{bmatrix}sI_{\hat{n}}-\hat{A} & \hat{B}^{(\mu)}\end{bmatrix}$ has the full rank $\hat{n}$,
$$\operatorname{rank}\begin{bmatrix}sI_{\hat{n}}-\hat{A} & \hat{B}^{(\mu)}\end{bmatrix} = \hat{n},\quad \forall s = s_i(\hat{A})\in\mathbb{C},\ \forall i = 1,2,\dots,\hat{n},\ \text{i.e., }\forall s\in\mathbb{C}. \quad (11.196)$$


6. a) If $\mu = 0$ the control vector function $\hat{U}(.)$ obeys
$$\hat{U}(t) = \left[\hat{\Phi}(t_1,t)\hat{B}_0\right]^T G_{\hat{\Phi}\hat{B}}^{-1}(t_1,t_0)\left[\hat{X}_1-\hat{\Phi}(t_1,t_0)\hat{X}_0\right],\quad \forall t\in[t_0,t_1],\ \text{for any } (t_0,\,t_1>t_0)\in \mathrm{In}\,T_0\times\mathrm{In}\,T_0,\ \mu = 0. \quad (11.197)$$

b) If $\mu > 0$ the control vector function $\hat{U}(.)$ obeys either
$$\hat{U}^{\mu}(t) = T\,T^T\left(\hat{B}^{(\mu)}\right)^T\hat{\Phi}^T(t_1,t)\,G_{\hat{\Phi}\hat{B}T}^{-1}(t_1,t_0)\left[\hat{X}_1-\hat{\Phi}(t_1,t_0)\hat{X}_0\right],\quad \mu>0, \quad (11.198)$$
where $T\in R^{(\mu+1)r\times(\mu+1)r}$ is any nonsingular matrix and $G_{\hat{\Phi}\hat{B}T}(t_1,t_0)$ is the Gram matrix of $\hat{\Phi}(t_1,t)\hat{B}^{(\mu)}T$,
$$G_{\hat{\Phi}\hat{B}T}(t_1,t_0) = \int_{t_0}^{t_1}\hat{\Phi}(t_1,\tau)\hat{B}^{(\mu)}T\,T^T\left(\hat{B}^{(\mu)}\right)^T\hat{\Phi}^T(t_1,\tau)\,d\tau, \quad (11.199)$$

c) or the control vector function $\hat{U}(.)$ obeys
$$\hat{B}^{(\mu)}\hat{U}^{\mu}(t) = \hat{\Phi}^T(t_1,t)\,G_{\hat{\Phi}}^{-1}(t_1,t_0)\left[\hat{X}_1-\hat{\Phi}(t_1,t_0)\hat{X}_0\right],\quad \mu>0, \quad (11.200)$$
where $G_{\hat{\Phi}}(t_1,t_0)$ is the Gram matrix of $\hat{\Phi}(t_1,t)$,
$$G_{\hat{\Phi}}(t_1,t_0) = \int_{t_0}^{t_1}\hat{\Phi}(t_1,\tau)\hat{\Phi}^T(t_1,\tau)\,d\tau. \quad (11.201)$$
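The rank condition 4 (Equations (11.194), (11.195)) is the easiest of these criteria to verify numerically once $\hat{A}$ and $\hat{B}^{(\mu)}$ are available as numbers. The following minimal sketch is not part of the book's development; the matrices in it are hypothetical placeholders standing for (11.188) and (11.191).

```python
import numpy as np

def kalman_controllability_matrix(A_hat, B_hat):
    """Assemble [B, AB, A^2 B, ..., A^(n-1) B] as in (11.194) and return it with its rank."""
    n = A_hat.shape[0]
    blocks, block = [], B_hat
    for _ in range(n):
        blocks.append(block)
        block = A_hat @ block
    C_hat = np.hstack(blocks)
    return C_hat, np.linalg.matrix_rank(C_hat)

# Hypothetical data: alpha = 2, rho = 2, hence n_hat = 4, with two control channels.
A_hat = np.array([[ 0.,  0.,  1.,  0.],
                  [ 0.,  0.,  0.,  1.],
                  [-2.,  0., -1.,  0.],
                  [ 0., -3.,  0., -1.]])
B_hat = np.array([[0., 0.],
                  [0., 0.],
                  [1., 0.],
                  [0., 1.]])

_, r = kalman_controllability_matrix(A_hat, B_hat)
print(r == A_hat.shape[0])   # True here, so (11.195) holds and the pair is internally state controllable
```

The same routine applies verbatim to the composite pair $(A, B^{(\mu)})$ of the state controllability criterion below.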

Let also
$$\nu>1 \implies \tilde{X}_i = Y^{(i-1)}\in R^{N},\ \forall i = 1,2,\dots,\nu,\ \text{i.e.,}\ \tilde{X} = \begin{bmatrix}\tilde{X}_1^T & \tilde{X}_2^T & \cdots & \tilde{X}_\nu^T\end{bmatrix}^T = \begin{bmatrix}Y^T & Y^{(1)T} & \cdots & Y^{(\nu-1)T}\end{bmatrix}^T = Y^{\nu-1}\in R^{\tilde{n}},\ \tilde{n} = \nu N,$$
$$\nu = 1 \implies \tilde{X} = \tilde{X}_1 = Y\in R^{N}, \quad (11.202)$$
$$\nu>1 \implies \tilde{A} = \begin{bmatrix} O_N & I_N & O_N & \cdots & O_N\\ O_N & O_N & I_N & \cdots & O_N\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ O_N & O_N & O_N & \cdots & I_N\\ -E_\nu^{-1}E_0 & -E_\nu^{-1}E_1 & \cdots & -E_\nu^{-1}E_{\nu-2} & -E_\nu^{-1}E_{\nu-1}\end{bmatrix}\in R^{\nu N\times\nu N},\quad \nu = 1 \implies \tilde{A} = -E_1^{-1}E_0\in R^{N\times N}, \quad (11.203)$$


$$\tilde{B}_{inv} = \begin{cases}\begin{bmatrix}O_{(\nu-1)N,N}\\ I_N\end{bmatrix}\in R^{\nu N\times N}, & \nu>1,\\ I_N\in R^{N\times N}, & \nu = 1,\end{cases} \quad (11.204)$$
$$Q^{(\mu)} = \begin{bmatrix}Q_0 & Q_1 & \cdots & Q_\mu\end{bmatrix}\in R^{N\times(\mu+1)r},\quad Q^{(0)} = Q_0, \quad (11.205)$$
$$\tilde{B}^{(\mu)} = \tilde{B}_{inv}E_\nu^{-1}Q^{(\mu)}\in R^{\nu N\times(\mu+1)r}, \quad (11.206)$$
$$\tilde{C} = \begin{bmatrix}I_N & O_N & \cdots & O_N\end{bmatrix}\in R^{N\times\nu N}, \quad (11.207)$$
$$\tilde{D}_{inv} = \begin{cases}\begin{bmatrix}O_{(\nu-1)N,N}\\ I_N\end{bmatrix}\in R^{\nu N\times N}, & \nu>1,\\ I_N\in R^{N\times N}, & \nu = 1,\end{cases} \quad (11.208)$$
$$R_y^{(\alpha-1)} = \begin{bmatrix}R_{y0} & R_{y1} & \cdots & R_{y,\alpha-1}\end{bmatrix}\in R^{N\times\alpha\rho},\quad R_y^{(0)} = R_{y0}, \quad (11.209)$$
$$\tilde{D} = \tilde{D}_{inv}E_\nu^{-1}R_y^{(\alpha-1)}\in R^{\nu N\times\alpha\rho}. \quad (11.210)$$

These equations permit us to set Equation (11.180) in the following equivalent forms:
$$\frac{d\tilde{X}(t)}{dt} = \tilde{A}\tilde{X}(t) + \tilde{B}^{(\mu)}U^{\mu}(t) + \tilde{D}\hat{X}(t),\quad \forall t\in T_0, \quad (11.211)$$
$$Y(t) = \tilde{C}\tilde{X}(t),\quad \forall t\in T_0. \quad (11.212)$$
Equations (11.186), (11.211), and (11.212) lead to the following equivalent description of the IIO system (11.179), (11.180):

$$\frac{dX(t)}{dt} = AX(t) + B^{(\mu)}U^{\mu}(t),\quad \forall t\in T_0, \quad (11.213)$$
$$Y(t) = CX(t),\quad \forall t\in T_0, \quad (11.214)$$
where
$$X = \begin{bmatrix}\hat{X}\\ \tilde{X}\end{bmatrix}\in R^{\hat{n}+\tilde{n}},\quad n = \hat{n}+\tilde{n} = \alpha\rho+\nu N, \quad (11.215)$$
$$\hat{U} = \tilde{U} = U, \quad (11.216)$$
$$A = \begin{bmatrix}\hat{A} & O_{\hat{n},\tilde{n}}\\ \tilde{D} & \tilde{A}\end{bmatrix}\in R^{n\times n} \implies \Phi(t_1,t) = e^{A(t_1-t)}, \quad (11.217)$$


$$B^{(\mu)} = \begin{bmatrix}\hat{B}^{(\mu)}\\ \tilde{B}^{(\mu)}\end{bmatrix}\in R^{n\times(\mu+1)r}, \quad (11.218)$$
$$C = \begin{bmatrix}O_{N,\hat{n}} & \tilde{C}\end{bmatrix} = \begin{bmatrix}O_{N,\hat{n}} & I_N & O_N & \cdots & O_N\end{bmatrix}\in R^{N\times n}. \quad (11.219)$$

The matrices defined in Equations (11.188)-(11.191), (11.203)-(11.210) determine the matrices defined by Equations (11.217)-(11.219). The system (11.213), (11.214) is the EISO system (11.8), (11.9) equivalent to the IIO system (11.179), (11.180). Theorem 180 for $\mu = 0$, (Section 11.1), and Theorem 205 for $\mu>0$, (Section 11.5), with the matrices determined by Equations (11.188)-(11.191), (11.203)-(11.210), (11.217)-(11.219), hold for the system (11.213), (11.214), i.e., for the IIO system (11.179), (11.180), due to their equivalence.

Theorem 221 State controllability criteria for the IIO system defined by (11.179), (11.180)

For the IIO system (11.179), (11.180) to be state controllable it is necessary and sufficient that:
a) any of the following equivalent conditions 1 through 6.a) holds if $\mu = 0$;
b) any of the following equivalent conditions 1 through 5, 6.b), or 6.c) holds if $\mu > 0$.
In a) and b) Equations (11.187)-(11.191), induced by (11.179), and Equations (11.202)-(11.218) determine the matrices:

1) All rows of the matrices $\Phi(t_1,t)B^{(\mu)}$ and $\Phi(t_0,t)B^{(\mu)}$ are linearly independent on $[t_0,t_1]$ for any $(t_0,\,t_1>t_0)\in \mathrm{In}\,T_0\times\mathrm{In}\,T_0$.

2) All rows of $\Phi(s)B^{(\mu)} = (sI-A)^{-1}B^{(\mu)}$ are linearly independent on $\mathbb{C}$.

3) The Gram matrix $G_{ctB}(t_1,t_0)$ of $\Phi(t_1,t)B^{(\mu)}$,
$$G_{ctB}(t_1,t_0) = \int_{t_0}^{t_1}\Phi(t_1,\tau)B^{(\mu)}\left(B^{(\mu)}\right)^T\Phi^T(t_1,\tau)\,d\tau, \quad (11.220)$$
is nonsingular for any $(t_0,\,t_1>t_0)\in \mathrm{In}\,T_0\times\mathrm{In}\,T_0$; i.e.,
$$\operatorname{rank}G_{ctB}(t_1,t_0) = n\ \text{ for any } (t_0,\,t_1>t_0)\in \mathrm{In}\,T_0\times\mathrm{In}\,T_0. \quad (11.221)$$

4) The $n\times n(\mu+1)r$ controllability matrix $\mathcal{C}_B$,
$$\mathcal{C}_B = \begin{bmatrix}B^{(\mu)} & AB^{(\mu)} & A^2B^{(\mu)} & \cdots & A^{n-1}B^{(\mu)}\end{bmatrix}, \quad (11.222)$$


has the full rank $n$,
$$\operatorname{rank}\mathcal{C}_B = n. \quad (11.223)$$

5) For every eigenvalue $s_i(A)$ of the matrix $A$, equivalently for every complex number $s\in\mathbb{C}$, the $n\times(n+(\mu+1)r)$ matrix $\begin{bmatrix}sI-A & B^{(\mu)}\end{bmatrix}$ has the full rank $n$,
$$\operatorname{rank}\begin{bmatrix}sI-A & B^{(\mu)}\end{bmatrix} = n,\quad \forall s = s_i(A)\in\mathbb{C},\ \forall i = 1,2,\dots,n,\ \text{i.e., }\forall s\in\mathbb{C}. \quad (11.224)$$

6. a) If $\mu = 0$ the control vector function $U(.)$ obeys
$$U(t) = \left(\Phi(t_1,t)B_0\right)^T G_{\Phi B}^{-1}(t_1,t_0)\left[X_1-\Phi(t_1,t_0)X_0\right],\quad \forall t\in[t_0,t_1],\ t_1\in \mathrm{In}\,T_0. \quad (11.225)$$

b) If $\mu > 0$ the control vector function $U(.)$ obeys either
$$U^{\mu}(t) = T\,T^T\left(B^{(\mu)}\right)^T\Phi^T(t_1,t)\,G_{\Phi BT}^{-1}(t_1,t_0)\left[X_1-\Phi(t_1,t_0)X_0\right],\quad \mu>0, \quad (11.226)$$
where $T\in R^{(\mu+1)r\times(\mu+1)r}$ is any nonsingular matrix and $G_{\Phi BT}(t_1,t_0)$ is the Gram matrix of $\Phi(t_1,t)B^{(\mu)}T$,
$$G_{\Phi BT}(t_1,t_0) = \int_{t_0}^{t_1}\Phi(t_1,\tau)B^{(\mu)}T\,T^T\left(B^{(\mu)}\right)^T\Phi^T(t_1,\tau)\,d\tau, \quad (11.227)$$

c) or the control vector function $U(.)$ obeys
$$B^{(\mu)}U^{\mu}(t) = \Phi^T(t_1,t)\,G_{\Phi}^{-1}(t_1,t_0)\left[X_1-\Phi(t_1,t_0)X_0\right],\quad \mu>0, \quad (11.228)$$
where $G_{\Phi}(t_1,t_0)$ is the Gram matrix of $\Phi(t_1,t)$,
$$G_{\Phi}(t_1,t_0) = \int_{t_0}^{t_1}\Phi(t_1,\tau)\Phi^T(t_1,\tau)\,d\tau. \quad (11.229)$$
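Conditions 3 and 6.a) can also be exercised numerically: the Gram matrix (11.220) is approximated by quadrature and then used in the control law (11.225). A minimal sketch for $\mu = 0$, with $\Phi(t_1,t) = e^{A(t_1-t)}$ as in (11.217); the system matrices below are hypothetical.

```python
import numpy as np
from scipy.linalg import expm

def controllability_gramian(A, B, t0, t1, steps=400):
    """Trapezoidal approximation of (11.220) with mu = 0:
    G(t1, t0) = int_{t0}^{t1} e^{A(t1-tau)} B B^T e^{A^T (t1-tau)} dtau."""
    taus = np.linspace(t0, t1, steps)
    vals = np.array([expm(A * (t1 - tau)) @ B @ B.T @ expm(A.T * (t1 - tau)) for tau in taus])
    return np.trapz(vals, taus, axis=0)

def steering_control(A, B, X0, X1, t0, t1):
    """Control law (11.225): U(t) = (Phi(t1, t) B)^T G^{-1}(t1, t0) [X1 - Phi(t1, t0) X0]."""
    G = controllability_gramian(A, B, t0, t1)
    c = np.linalg.solve(G, X1 - expm(A * (t1 - t0)) @ X0)
    return lambda t: (expm(A * (t1 - t)) @ B).T @ c

# Hypothetical example: steer a double integrator from X0 to the origin on [0, 2].
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])
U = steering_control(A, B, np.array([1., 0.]), np.array([0., 0.]), 0.0, 2.0)
print(U(0.0), U(2.0))
```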

Exercise 222 Test the state controllability of the selected IIO physical plant in Exercise 78, Section 6.3.


11.10  IIO system output controllability

11.10.1  Definition

The general output controllability Definition 165, (Section 10.3), holds unchanged for the IIO system (6.1), (6.2):

Definition 223 Output controllability of the IIO system (6.1), (6.2)

The IIO system (6.1), (6.2) is output controllable if and only if for every initial output vector $Y_0\in R^N$ at $t_0\in T$ and for any final output vector $Y_1\in R^N$ there exist a moment $t_1\in \mathrm{In}\,T_0$ and an extended control $U^{\mu}_{[t_0,t_1]}$ on the time interval $[t_0,t_1]$ such that
$$Y(t_1; t_0; Y_0; U^{\mu}_{[t_0,t_1]}) = Y_1.$$

11.10.2  Criteria

State space-output space product approach

The system (11.213), (11.214) is the EISO system (11.8), (11.9), which is equivalent to the unperturbed IO system (2.15), (Section 11.2), as well as to the IIO system (6.1), (6.2), with the matrices determined by Equations (11.217)-(11.219). With this in mind, the system matrix $H_{IIO}(t,t_0)$ (11.230),
$$H_{IIO}(t,t_0) = C\Phi(t,t_0)B^{(\mu)}\in R^{N\times(\mu+1)r}, \quad (11.230)$$
and its Laplace transform $H_{IIO}(s)$ (11.231),
$$H_{IIO}(s) = C\Phi(s)B^{(\mu)}\in \mathbb{C}^{N\times(\mu+1)r}, \quad (11.231)$$
correspond to $H_{IO}(t,t_0)$ (11.32) and $H_{IO}(s)$ (11.33), (Section 11.2). For the IIO system (6.1), (6.2) Theorem 190, (Section 11.2), reads:

Theorem 224 Equivalent conditions for the output controllability of the IIO system (6.1), (6.2)

The IIO system (6.1), (6.2) obeys invariantly the necessary output controllability rank condition (10.29), i.e.,
$$\operatorname{rank}C = N. \quad (11.232)$$
For the IIO system (6.1), (6.2) to be output controllable:

A) It is necessary that any of the following conditions 1.-6.a) holds, and sufficient that any of the following conditions 1.-4., 6.a) holds, if $\mu = 0$, where the conditions 1.-4., 6.a) are equivalent;


B) It is necessary that any of the following conditions 1.-5., 6.b) holds, and sufficient that any of the following conditions 1.-4., 6.b) holds, if $\mu > 0$, where the conditions 1.-4., 6.b) are equivalent.

In A) and B) Equations (11.188)-(11.191), (11.203)-(11.210) induced by Equations (6.1), (6.2) determine the matrices defined by Equations (11.217)-(11.219):

1. All rows of the system matrix $H_{IIO}(t,t_0)$ (11.230) are linearly independent on $[t_0,t_1]$ for any $(t_0,\,t_1>t_0)\in \mathrm{In}\,T_0\times\mathrm{In}\,T_0$.

2. All rows of the system matrix $H_{IIO}(s)$ (11.231) are linearly independent on $\mathbb{C}$.

3. The Gram matrix $G_{H_{IIO}}(t,t_0)$ of $H_{IIO}(t,t_0)$ (11.230),
$$G_{H_{IIO}}(t,t_0) = \int_{t_0}^{t}H_{IIO}(\tau,t_0)H_{IIO}^T(\tau,t_0)\,d\tau\ \text{ for any } (t_0,\,t_1>t_0)\in \mathrm{In}\,T_0\times\mathrm{In}\,T_0, \quad (11.233)$$
is nonsingular; i.e.,
$$\operatorname{rank}G_{H_{IIO}}(t,t_0) = N\ \text{ for any } (t_0,\,t_1>t_0)\in \mathrm{In}\,T_0\times\mathrm{In}\,T_0. \quad (11.234)$$

4. The output $N\times n(\mu+1)r$ controllability matrix $\mathcal{C}_{IIOout}$,
$$\mathcal{C}_{IIOout} = \begin{bmatrix}CB^{(\mu)} & CAB^{(\mu)} & CA^2B^{(\mu)} & \cdots & CA^{n-1}B^{(\mu)}\end{bmatrix},\quad \mu\ge 0, \quad (11.235)$$
has the full rank $N$,
$$\operatorname{rank}\mathcal{C}_{IIOout} = N. \quad (11.236)$$

5. For every eigenvalue $s_i(A)$ of the matrix $A$, equivalently for every complex number $s\in\mathbb{C}$, the matrix $\mathcal{C}_{outIIO}(s)$,
$$\mathcal{C}_{outIIO}(s) = \begin{bmatrix}C(sI-A) & CB^{(\mu)}\end{bmatrix}\in \mathbb{C}^{N\times(n+(\mu+1)r)},\quad \mu\ge 0,$$
has the full rank $N$,
$$\operatorname{rank}\mathcal{C}_{outIIO}(s) = N,\quad \forall s = s_i(A)\in\mathbb{C},\ \forall i = 1,2,\dots,n,\ \text{i.e., }\forall s\in\mathbb{C}. \quad (11.237)$$

6. The control vector function $U(.)$ obeys:

a) the following equation if $\mu = 0$:
$$U(t) = \left(H_{IIO}(t_1,t)\right)^T G_{H_{IIO}}^{-1}(t_1,t_0)\left[Y_1 - C\Phi(t_1,t_0)X_0\right],\quad \mu = 0, \quad (11.238)$$


b) the following equations for any $(\mu+1)r\times(\mu+1)r$ nonsingular matrix $R$:
$$R^{-1}U^{\mu}(t) = \left(H_{IIO}(t_1,t)R\right)^T G_{H_{IIO}R}^{-1}(t_1,t_0)\left[Y_1 - C\Phi(t_1,t_0)X_0\right],\quad \mu>0, \quad (11.239)$$
where $G_{H_{IIO}R}(t_1,t_0)$ is the Gram matrix of $H_{IIO}(t_1,t)R$,
$$G_{H_{IIO}R}(t_1,t_0) = \int_{t_0}^{t_1}H_{IIO}(t_1,\tau)RR^T H_{IIO}^T(t_1,\tau)\,d\tau. \quad (11.240)$$
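Among the conditions of Theorem 224, condition 4 (Equations (11.235), (11.236)) reduces to a single rank evaluation. A minimal numerical sketch, with hypothetical matrices $A$, $B^{(\mu)}$, $C$ standing for (11.217)-(11.219):

```python
import numpy as np

def output_controllability_matrix(A, B, C):
    """Assemble [CB, CAB, CA^2 B, ..., CA^(n-1) B] as in (11.235)."""
    n = A.shape[0]
    blocks = [C @ np.linalg.matrix_power(A, k) @ B for k in range(n)]
    return np.hstack(blocks)

# Hypothetical data: n = 3 composite states, one control channel, N = 1 output.
A = np.array([[ 0.,  1.,  0.],
              [ 0.,  0.,  1.],
              [-1., -2., -3.]])
B = np.array([[0.], [0.], [1.]])
C = np.array([[1., 0., 0.]])

M = output_controllability_matrix(A, B, C)
print(np.linalg.matrix_rank(M) == C.shape[0])   # True means the rank condition (11.236) holds
```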

Output space approach

Equations (6.31)-(6.35), (Section 6.1), determine the system response in terms of its input-output data:
$$Y(t; R_{0^-}^{\alpha-1}; Y_{0^-}^{\nu-1}; U^{\mu}) = \int_{0}^{t}\Gamma_{IIO}(t,\tau)U(\tau)\,d\tau + \Gamma_{IIOI0}(t)U_{0^-}^{\mu-1} + \Gamma_{IIOR0}(t)R_{0^-}^{\alpha-1} + \Gamma_{IIOY0}(t)Y_{0^-}^{\nu-1},\quad \forall t\in T_0. \quad (11.241)$$

Theorem 225 Output controllability criterion for the IIO system (6.1), (6.2)

In order for the IIO system (6.1), (6.2) to be output controllable it is necessary and sufficient that any of the following equivalent conditions holds:

1) For any $(t_0,\,t_1>t_0)\in \mathrm{In}\,T_0\times\mathrm{In}\,T_0$ the rows of the system output fundamental matrix $\Gamma_{IIO}(t,t_0)$ (6.32), (Section 6.1), are linearly independent on the time interval $[t_0,t_1]$.

2) The rows of the system transfer function matrix $G_{IIOU}(s)$ (6.23), (Section 6.1), are linearly independent on $\mathbb{C}$.

3) For any $(t_0,\,t_1>t_0)\in \mathrm{In}\,T_0\times\mathrm{In}\,T_0$ the Gram matrix $G_{\Gamma_{IIO}}(t_0,t_1)$ of the system matrix $\Gamma_{IIO}(t,t_0)\in R^{N\times r}$,
$$G_{\Gamma_{IIO}}(t_0,t_1) = \int_{t_0}^{t_1}\Gamma_{IIO}(t_1,\tau)\Gamma_{IIO}^T(t_1,\tau)\,d\tau\in R^{N\times N}, \quad (11.242)$$
is nonsingular, i.e., the condition (11.243),
$$\det G_{\Gamma_{IIO}}(t_0,t_1)\ne 0\ \text{ for any } (t_0,\,t_1>t_0)\in \mathrm{In}\,T_0\times\mathrm{In}\,T_0, \quad (11.243)$$
holds.


4) The control vector function obeys the following linear algebraic vector equation:
$$U(t) = \Gamma_{IIO}^T(t_1,t)\,G_{\Gamma_{IIO}}^{-1}(t_0,t_1)\left[Y_1 - \Gamma_{IIOI0}(t_1)U_{0^-}^{\mu-1} - \Gamma_{IIOR0}(t_1)R_{0^-}^{\alpha-1} - \Gamma_{IIOY0}(t_1)Y_{0^-}^{\nu-1}\right]. \quad (11.244)$$

Appendix D.15 contains the proof of Theorem 195.

Exercise 226 Test the output controllability of the selected IIO physical plant in Exercise 78, Section 6.3.


Part IV

APPENDIX


Appendix A

Notation

The meaning of the notation is explained in the text at its first use.

A.1  Abbreviations

C: controller
CS: control system
Cl: closure
HISO system: Higher order Input-State-Output system defined by (5.1), (5.2)
iff: if and only if
I: Input
II: Input-Internal (dynamics)
IIO: Input-Internal and Output state
IIO system: Input-Internal and Output state system defined by (6.1), (6.2)
In: the interior
IO: Input-Output
IO system: Input-Output system defined by (2.1)
IS: Input-State
ISO: Input-State-Output
ISO system: Input-State-Output system defined by (3.1), (3.2)
MIMO: Multiple-Input-Multiple-Output
O: object
P: plant
EISO system: Extended Input-State-Output system (4.1), (4.2)
SISO: Single-Input Single-Output


System: continuous-time time-invariant linear dynamical system

A.2  Indexes

A.2.1  SUBSCRIPTS

d: the subscript d denotes "desired"
e: equilibrium
i: the subscript i denotes "the i-th"
j: the subscript j denotes "the j-th"
P: for plant
zero: the subscript zero denotes "the zero value"
0: the subscript 0 (zero) associated with a variable (.) denotes its initial value $(.)_0$; however, if $(.)\subset T$ then the subscript 0 (zero) associated with (.) denotes the time set $T_0$, $(.)_0 = T_0$

A.2.2  SUPERSCRIPT

i ∈ {0, 1, ..., η, µ}: the highest derivative of the disturbance vector acting on the plant in general
k ∈ {0, 1, ..., µ}: the highest derivative of the control vector acting on the plant in general
l ∈ {0, 1, ..., m}: the highest order of the tracking in general
m ∈ {1, α, ν, α + ν}: the plant order in general
0: the highest derivative of both the disturbance vector and the control vector acting on the ISO plant
1: the order of the ISO and EISO plant
α: the order of the HISO and IIO plant
η: the highest derivative of the disturbance vector acting on the IO plant
µ: the highest derivative of the control vector acting on the HISO, IO, IIO and EISO plant, as well as the highest derivative of the disturbance vector acting on the IIO and the EISO plant
ν: the order of the IO plant

A.3  Letters

Lower case block or italic letters are used for scalars. Lower case bold block letters denote vectors. Upper case block letters denote matrices, or points. Upper case Fraktur letters designate sets or spaces.


The notation "; $t_{(.)0}$" will be omitted as an argument of a variable if, and only if, a choice of the initial moment $t_{(.)0}$ does not have any influence on the value of the variable.

BLACKBOARD BOLD LETTERS

$\mathbb{C}$: the set of complex numbers $s$
$\mathbb{C}^k$: the k-dimensional complex vector space

A.3.1  CALLIGRAPHIC LETTERS

$\mathcal{C}$: controller, or the controllability matrix $\mathcal{C}$,
$$\mathcal{C} = \begin{bmatrix}B & AB & A^2B & \cdots & A^{n-1}B\end{bmatrix}$$
$\mathcal{CS}$: control system
$\mathcal{I}$: the integral output space, $\mathcal{I} = T\times R^N$
$\mathcal{L}^{\mp}\{i(.)\}$: the left (-), right (+), respectively, Laplace transform of a function $i(.)$,
$$\mathcal{L}^{\mp}\{i(t)\} = I^{\mp}(s) = \int_{0^{\mp}}^{\infty}i(t)e^{-st}\,dt = \lim\left\{\int_{\mp\zeta}^{\infty}i(t)e^{-st}\,dt : \zeta\to 0^{+}\right\}$$
$\mathcal{P}$: plant
$\mathcal{S}$: state
$\mathcal{S}(.)$: motion

A.3.2  FRAKTUR LETTERS

Capital Fraktur letters are used for spaces or for sets.

$\mathfrak{A}\subseteq R^n$: a nonempty subset of $R^n$
$\mathfrak{B}\subseteq R^n$: a nonempty subset of $R^n$
$\mathfrak{C}$: the family of all functions defined and continuous on $T_0$
$\mathfrak{C}^k(\mathfrak{S})$: the family of all functions defined, continuous and k-times continuously differentiable on the set $\mathfrak{S}\subseteq T\cup R^i$, $\mathfrak{C}^k(R^i) = \mathfrak{C}^{ki}$
$\mathfrak{C}^k(T_0)$: the family of all functions defined, continuous and k-times continuously differentiable on $T_0$
$\mathfrak{C}^0(\mathfrak{S})$: the family of all functions defined and continuous on the set $\mathfrak{S}$, $\mathfrak{C}^0(R^i) = \mathfrak{C}^{0,i} = \mathfrak{C}(R^i)$
$\mathfrak{C}^{k-}(R^i)$: the family of all functions defined everywhere and k-times continuously differentiable on $R^i\setminus\{0_i\}$, which have defined and continuous derivatives at the origin $0_i$ of $R^i$ up to the order $(k-1)$, which are defined


and continuous at the origin $0_i$, and have defined the left and the right k-th order derivatives at the origin $0_i$
$\mathfrak{D}^k$: a given, or to be determined, family of all bounded, k-times continuously differentiable on $T_0$, permitted disturbance vector total functions $D(.)$, or deviation functions $d(.)$, $\mathfrak{D}^k\subset\mathfrak{C}^k$, the Laplace transforms of which are strictly proper real rational complex functions,
$$\mathfrak{D}^k = \left\{D(.): D^{(k)}(t)\in\mathfrak{C},\ \exists\zeta\in R^{+}\implies\left\|D^{k}(t)\right\|<\zeta,\ \forall t\in T_0\right\},\ \text{or}$$
$$\mathfrak{D}^k = \left\{d(.): d^{(k)}(t)\in\mathfrak{C},\ \exists\xi\in R^{+}\implies\left\|d^{k}(t)\right\|<\xi,\ \forall t\in T_0\right\}$$
$\mathfrak{D}^{k-}$: a subfamily of $\mathfrak{D}^k$, $\mathfrak{D}^{k-}\subset\mathfrak{D}^k$, such that the real part of every pole of the Laplace transform $D(s)$ of every $D(.)\in\mathfrak{D}^{k-}$ is negative, $\mathfrak{D}^{-} = \mathfrak{D}^{0-}$
$\mathfrak{D}^0 = \mathfrak{D}$: the family of all bounded continuous permitted disturbance vector total functions $D(.)$ or deviation functions $d(.)$, $\mathfrak{D}\subset\mathfrak{C}$, the Laplace transforms of which are strictly proper real rational complex functions
$\mathfrak{I}^k$: a given, or to be determined, family of all bounded and k-times continuously differentiable permitted input vector functions $I(.)$, $\mathfrak{I}^k\subset\mathfrak{C}^k\cap\mathfrak{L}$
$\mathfrak{I}^0 = \mathfrak{I}$: the family of all bounded continuous permitted input vector functions $I(.)$, $\mathfrak{I}\subset\mathfrak{C}\cap\mathfrak{L}$
$\mathfrak{I}^{k-}$: a subfamily of $\mathfrak{I}^k$, $\mathfrak{I}^{k-}\subset\mathfrak{I}^k$, such that the real part of every pole of the Laplace transform $I(s)$ of every $I(.)\in\mathfrak{I}^{k-}$ is negative, $\mathfrak{I}^{-} = \mathfrak{I}^{0-}$
$\mathfrak{L}$: the family of all strictly proper real rational complex functions, the originals of which are bounded time-dependent functions,
$$\mathfrak{L} = \left\{I(.):\ \exists\gamma(I)\in R^{+}\implies\|I(t)\|<\gamma(I),\ \forall t\in T_0;\ \mathcal{L}^{\mp}\{I(t)\} = I^{\mp}(s) = \left[I_1^{\mp}(s)\ I_2^{\mp}(s)\ \cdots\ I_M^{\mp}(s)\right]^T,\ I_k^{\mp}(s) = \frac{\sum_{j=0}^{\zeta_k}a_{kj}s^{j}}{\sum_{j=0}^{\psi_k}b_{kj}s^{j}},\ 0\le\zeta_k<\psi_k,\ \forall k = 1,2,\dots,M\right\}$$
$R$: the set of all real numbers
$R^{+}$: the set of all positive real numbers
$R_{+}$: the set of all nonnegative real numbers


$R^{\nu N}$: the extended output space of the IO system, which is simultaneously the space of its internal dynamics (its internal dynamics space)
$R^n$: an n-dimensional real vector space, the state space of the ISO system
$T$: the accepted reference time set, the arbitrary element of which is an arbitrary moment $t$ and the time unit of which is the second s, $1_t = s$, $T = \{t: t[T]\langle s\rangle,\ \mathrm{num}\,t\in R,\ dt>0\}$, $\inf T = -\infty$, $\sup T = \infty$
$T_0$: the subset of $T$, which has the minimal element $\min T_0$ that is the initial instant $t_{0^{\mp}}$, $\mathrm{num}\,t_{0^{\mp}} = 0^{\mp}$, $T_0 = \{t: t\in T,\ t\ge t_{0^{\mp}},\ \mathrm{num}\,t_0 = 0\}$, $T_0\subset T$, $\min T_0 = t_0\in T$, $\sup T_0 = \infty$
$\mathfrak{Y}_d^k$: a given, or to be determined, family of all bounded, k-times continuously differentiable, realizable desired total output vector functions $Y_d(.)$, $\mathfrak{Y}_d^k\subset\mathfrak{C}^{kN}$, the Laplace transforms of which are strictly proper real rational complex functions,
$$\mathfrak{Y}_d^k = \left\{Y_d(.): Y_d(t)\in\mathfrak{C}^{kN},\ \exists\kappa\in R^{+}\implies\left\|Y_d^{k}(t)\right\|<\kappa,\ \forall t\in T_0\right\}$$
$\mathfrak{Y}_d^{k-}$: a subfamily of $\mathfrak{Y}_d^k$, $\mathfrak{Y}_d^{k-}\subset\mathfrak{Y}_d^k$, such that the real part of every pole of the Laplace transform $Y_d(s)$ of every $Y_d(.)\in\mathfrak{Y}_d^{k-}$ is negative, $\mathfrak{Y}_{d-} = \mathfrak{Y}_{d-}^{0}$
$\mathfrak{Y}_{d0}^k$: the set of the desired output initial conditions $Y_{d0}^k = Y_d^k(t_0)$ of $Y_d^k(t)$ of every $Y_d(.)\in\mathfrak{Y}_d^k$,
$$\mathfrak{Y}_{d0}^k = \left\{Y_{d0}^k: Y_{d0}^k = Y_d^k(t_0),\ Y_d(.)\in\mathfrak{Y}_d^k\right\} \quad (A.1)$$
$\mathfrak{Y}_d = \mathfrak{Y}_d^0$: the family of all bounded continuous realizable desired total output vector functions $Y_d(.)$, $\mathfrak{Y}_d = \mathfrak{Y}_d^0\subset\mathfrak{C}_d^0$, the Laplace transforms of which are strictly proper real rational complex functions

A.3.3  GREEK LETTERS

α: a nonnegative integer
β: a nonnegative integer
η: a natural number
λ_m(H), λ_M(H)


APPENDIX A. NOTATION

µ a nonnegative integer, ν a nonnegative integer, τ a subsidiary notation for time t, Ξ = diag {1 2 ... N } ΦIO (t, t0 ) ∈ RN ×n the IO system output fundamental matrix (Equation (11.52), Section 11.2),  −1  −1 (ν) (ν) ΦIO (t, t0 ) = L A SN (s) ,  −1 (ν) ΦIO (s) = A(ν) SN (s) , ΦISO (t, t0 ) ≡ Φ (t, t0 ) ∈ Rn×n ISO system (3.1), (3.2),

the fundamental matrix (3.4) of the

Φ (t, t0 ) = eAt eAt0

−1

= eA(t−t0 ) ∈ Rn×n , n o Φ (s) = L {Φ (t, t0 )} = L eA(t−t0 ) = (sI − A)−1 , has the following well known properties, Equations (3.5)-(3.7) (all in Section 3.1): detΦ (t, t0 ) 6= 0, ∀t ∈ T0 , Φ (t, t0 ) Φ (t0 , t) = eA(t−t0 ) eA(t0 −t) ≡ eA0 = In , Φ(1) (t, t0 ) = AΦ (t, t0 ) = Φ (t, t0 ) A. ρ

A.3.4

a natural number.

ROMAN LETTERS

A ∈ Rnxn the matrix describing the internal dynamics of the ISO system Ak ∈ RN xN the matrix associated with the k − th derivative Y(k) of the output vector Y of the IO system A(ν) ∈ RN x(ν+1)N the extended matrix  describing the IO system . . . internal dynamics, A(ν) = A .. A .. ... .. A 0

1

ν

RN xM

Bk ∈ the matrix associated with the k − th derivative I(k) of the input vector I of the IO system B (µ) ∈ RN x(µ+1)r the extended matrix describing the transmission of (µ) = the influence of the   control vector U(t) on the system dynamics, B . . . B .. B .. ... .. B 0

1

µ

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 235 — #246

i

A.3. LETTERS

i

235

B(µ) ∈ Rnx(µ+1)r the extended matrix describing the transmission of the influence of the control vector U(t) on the system dynamics, B(µ) =   . . . B .. B .. ... .. B 0

1

µ

RN xn

C∈ the matrix of the ISO system, which describes the transmission of the state vector action on the system output vector Y C0 is the vector of all initial conditions acting on the system,   I0 (1)    I0     ...   (µ−1)   I0    µM +n+νN  C0 =  ,  X0  ∈ R  Y0     (1)   Y0     ...  (ν−1) Y0 d a natural number d ∈ Rd the disturbance deviation vector, (2.53), (Section 2.2), d = D − DN D ∈ Rd the total disturbance vector d DN ∈ R the nominal disturbance vector D ∈ RN xd the ISO system matrix describing the transmission of the influence of I(t) on the system output e the output error vector e ∈RN , e = Yd − Y = −y, e = [e1 e2 ... eN ]T F (.) : T0 −→ RN xN

a matrix function associated with f (.),

f = [f1 f2 . . . fN ]T =⇒ F = diag {f1 f2 . . . fN } G = GT ∈ Rpxp the symmetric matrix of the quadratic form v(w) = T w Gw G(s) is the transfer function matrix of a time-invariant continuoustime linear dynamical system h(.) the Heaviside function, i.e. the unite step function, h(.) : T →[0, 1], h(t) = 0 for t < 0, h(t) ∈ [0, 1] for t = 0, h(t) = 1 for t > 0, Figure A.1,

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 236 — #247

i

236

i

APPENDIX A. NOTATION

h(t) 1 0

t

Figure A.1: Heaviside function h (.) . H ∈ RN xr a matrix T pxp H=H ∈R the symmetric matrix of the quadratic form v(w) = wT Hw √ i an arbitrary natural number, or the imaginary unit −1, or the input deviation variable i ∈ RM the input deviation vector, i = [i1 i2 ... iM ]T , (2.54), (Section 2.2), i = I − IN 

iµ (t) ∈ R(µ+1)M the extended input vector at a moment t, iµ (t) = T . . . i(t) .. i(1) (t) .. ... .. i(µ) (t) iµ−1 ∈ RµM 0∓

ment t0 = 0, iµ−1 0∓

the initial extended  . (1) µ−1 ∓ =i (0 ) = i0(∓) .. i0(∓)

input vector at the initial mo .. .. (µ−1) T . ... . i0(∓) ∈ RµM

I the identity matrix of the n-th order, I = diag{1 1 ... 1} ∈ Rnxn , or the total input variable, In = I IN the identity matrix of the N-th order, IN = diag{1 1 ... 1} ∈ N xN R I ∈ RM the total input vector, I = [I1 I2 ... IM ]T Ii ∈ Ri is the i-th order identity matrix, IN ∈ RM the nominal input vector, IN = [IN 1 IN 2 ... IN M ]T IntT0 the interior of the set T0 , IntT0 = {t : t ∈ T0 , t > 0} J ∈ RnxM a matrix k an arbitrary natural number M (.) a complex valued matrix function of any type M (s) a complex valued matrix of any type

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 237 — #248

i

A.3. LETTERS

i

237

m a nonnegative integer n a natural number N a natural number, if N is the dimension of the output vector and if n is the dimension of the state vector then N ≤ n O the zero matrix of the appropriate order p a natural number P ∈ RnxN a matrix ρxM Pk ∈ R a matrix P (β) ∈ RρxM (β+1) an extended matrix describing the transmission of the influence of iβ (t) on the internal dynamics of the IIO system, P (α) = . . [P .. P .. ... P ] 0

1

β

q a natural number Q ∈ RN xN a matrix Qk ∈ Rρxρ a matrix Q(α) ∈ Rρxρ(α+1) the extended matrix describing the internal dynam  .. .. (α) ics of the IIO system, Q = Q0 . Q1 . ... Qα r ∈Rρ a subsidiary deviation vector, which is the internal dynamics deviation vector, (5.43), (Section 5.2), r = R − RN R ∈Rρ a subsidiary total vector, which is the internal dynamics total vector of the HISO system and of the IIO system RN ∈Rρ a subsidiary nominal vector, which is the internal dynamics nominal vector of the HISO system and of the IIO system Rk ∈ RN xρ a matrix (α−1) N xαρ Ry ∈ R the extended matrix describing the action of the extended internal dynamics vector Rα−1 on   the output dynamics of the IIO .. .. (α−1) system, Ry = R . R . ... R y0

S S

y1

y,(α−1)

a state variable a state vector, 

. . . . S = S1 .. S2 .. S3 .. ... .. SK

T

∈ RK

Re s the real part of s = σ + jω, Re s = σ s the basic time unit: second, or a complex variable or a complex number s = σ + jω

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 238 — #249

i

238

i

APPENDIX A. NOTATION sign(.) : R → {−1, 0, 1}

the scalar signum function,

sign(x) = |x|−1 x if x 6= 0, and sign(0) = 0 (k)

Si (.) : C −→ C in Subsection 1.2: (k) Si (s)

i(k+1)xi

the matrix function of s defined by (2.17)

  .. 1 .. 2 .. .. k T 0 = s Ii . s Ii . s Ii . ... . s Ii ∈ C

i(k+1)xi

,

(k, i) ∈ {(µ, M ) , (ν, N )} t time (temporal variable), or an arbitrary time value (an arbitrary moment, an arbitrary instant); and formally mathematically t denotes for short also the numerical time value numt if it does not create a confusion, t[T] hsi , numt ∈ R, dt > 0 , or equivalently: t ∈ T. It has been the common attitude to use the notation t of time and of its arbitrary temporal value also for its numerical value numt, e.g. t = 0 is used in the sense numt = 0. We do the same throughout the book if there is not −1 any confusion because we can replace t everywhere by t1−1 ∈ R, t , t1t  −1 that we denote again by t, numt = num t1t t0 a conventionally accepted initial value of time (initial instant, initial moment), t0 ∈ T, numt0 = 0, i.e., simply t0 = 0 in the sense numt0 = 0 tinf the first instant, which has not happened, tinf = −∞ tsup the last instant, which will not occur, tsup = ∞ tZeroT otal the total zero value of time, which has not existed and will not happen tzero a conventionally accepted relative zero value of time T the temporal dimension, “the time dimension”, which is the physical dimension of time T ∈ R+ the period of a periodic behavior N xM Tk ∈ R a matrix T (µ) ∈ RN xM (µ+1) the extended matrix describing the action of the µ on the output dynamics of the IIO system, T (µ) = extended input vector i   .. .. T . T . ... T 0

1

µ

Uµ[t0 ,t1 ] extended control on the time interval [t0 , t1 ] p v(.) : R → R a quadratic form, v(w) = wT Ww

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 239 — #250

i

A.4. NAME

i

239

V(s) is the Laplace transform of all actions on the system; it is composed of the Laplace transform I(s) of the input vector I(t) and of all (input and output) initial conditions,   I(s) V(s) = C0 p w∈R a subsidiary real valued vector, n o T w = [w1 w2 ... wp]T ∈ rα−1 yν−1 , x, yν−1 , p ∈ {ρ, n, N } W = W T ∈ Rpxp the symmetric matrix of the quadratic form v(w), v(w) = wT W w, W ∈ {G = GT , H = H T } x∈R a real valued scalar state deviation variable x ∈ Rn the state vector deviation of the ISO system, (3.38), (Section 3.2), x = [x1 x2 ... xn ]T , x = X − XN = X − Xd X ∈ Rn the total state vector of the ISO system, X = [X1 X2 .. Xn ]T X N ∈ Rn the total nominal state vector of the ISO system, XN = [XN 1 XN 2 ... XN n ]T y∈R a real valued scalar output deviation variable y ∈ RN a real valued vector output deviation variable - the output deviation vector of both the plant and of its control system, y = [y1 y2 ... yN ]T , (9.3), (Section 2.2), y = Y − Yd = −ε Y ∈ RN a real total valued vector output - the total output vector of both the plant and of its control system, Y = [Y1 Y2 ... YN ]T Yd ∈ RN a desired (a nominal) total valued vector output - the desired total output vector of both the plant and of its control system, Yd = [Yd1 Yd2 ... YdN ]T y0ν−1 ∈ RνN the initial extended output vector at the initial moment ∓   .. (1)T .. .. (ν−1)T T 0 ν−1 ν−1 ∓ T t0 = 0, y0∓ = y (0 ) = y0(∓) . y0(∓) . ... . y0(∓) , y0∓ = y0 (0∓ ) = y0∓ = y(0∓ )

A.4  Name

Stable (stability) matrix: a square matrix is a stable (stability) matrix if, and only if, the real parts of all its eigenvalues are negative.
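A minimal numerical check of this property, with a hypothetical matrix:

```python
import numpy as np

A = np.array([[ 0.,  1.],
              [-2., -3.]])                       # eigenvalues -1 and -2
is_stable = bool(np.all(np.linalg.eigvals(A).real < 0))
print(is_stable)                                 # True: A is a stable (stability) matrix
```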

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 240 — #251

i

240

i

APPENDIX A. NOTATION

A.5  Symbols, vectors, sets and matrices

(.)

an arbitrary variable, or an index |(.)| : R → R+ the absolute value (module) of a (complex valued) scalar variable (.) k.k : Rn → R+ an accepted norm on Rn , which is the Euclidean norm on Rn iff not stated otherwise: v u i=n uX √ T ||x|| = ||x||2 = x x = t x2i i=1

k.k1 : Rn → R+

is the taxicab norm or Manhattan norm: kxk1 =

i=n X

|xi |

i=1

˜ equivalent h1.. i shows the units 1... of a physical variable [ α, β ] ⊂ R a compact interval [α, β] = {x : x ∈ R, α ≤ x ≤ β} [ α, β [ ⊆ R a left closed, right open interval, [α, β[= {x : x ∈ R, α ≤ x < β} ] α, β ]⊆ R a left open, right closed interval, [α, β[= {x : x ∈ R, α < x ≤ β} ] α, β [ ⊆ R an open interval, ]α, β[= {x : x ∈ R, α < x < β} (σ, ∞[∈ {]σ, ∞[, [σ, ∞[} , ( α, β ) ⊆ R a general interval, ( α, β ) ∈ {[α, β], [α, β[, ]α, β], ]α, β[} λi (A) the eigenvalue λi (A) of the matrix A [ A.. ] shows the   physical dimension A... of a physical variable .. .. .. A . A . ... . A a structured matrix composed of the submatrices 1

2

ν

A1 , A2 , ..., Aν 0k = [0 0 ...0]T ∈ Rk , the elementwise zero vector, 0n = 0 1k = [1 1...1]T ∈ Rk , the elementwise unity vector, 1n = 1 w=ε the elementwise vector equality, ε = [ε1 ε2 . . . εN ]T ∈ RN , w = [w1 w2 . . . wN ]T , w = ε ⇐⇒ wi = εi , ∀i = 1, 2, ..., N w 6= ε

the elementwise vector inequality, w 6= ε ⇐⇒ wi 6= εi , ∀i = 1, 2, ..., N

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 241 — #252

i

A.6. UNITS

i

241

∀ for every adjA the adjoint matrix of the nonsingular square matrix A, detA 6= 0 =⇒ AadjA = (detA) I detA the determinant of the matrix A, detA = |A| A−1 the inverse matrix of the nonsingular square matrix A, detA 6= 0 =⇒ A−1 = adjA/detA blockdiag {· · · .... ·} block diagonal matrix, the entries of which are matrices, e.g.,   Ek ON,r blockdiag {Ek Bk } = , k = 0, 1, .., ν ON,d Bk In T0F

the interior of T0F , In T0F = {t : t ∈ ClT, 0 < t < tF }

min (δ, ∆)

(A.2)

denotes the smaller between δ and ∆,   δ, δ ≤ ∆, min (δ, ∆) = ∆, ∆ ≤ δ

Reλi (A) the real part of the eigenvalue λi (A) of the matrix A ∃ there exist(s) ∈ belong(s) to, are (is) members (a member) √ √ of, respectively −1 the imaginary unit denoted by i, i = −1 inf infimum max maximum min minimum numx the numerical value of x, if x = 50V then numx = 50, phdim x(.) the physical dimension of a variable x(.), x(.) = t =⇒ phdim x(.) = phdim t = T, but dim t = 1 sup k.k

supremum; $\|.\|$ can be any norm if not otherwise specified

A.6  Units

$1_{(.)}$: the unit of a physical variable (.)
$1_t$: the time unit of the reference time axis $T$, $1_t = s$

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 242 — #253

i

i

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 243 — #254

i

i

Appendix B

Example

B.1  IO system example

Example 227 Let the IO system (2.1) be defined by #  #   " (1)  " (3)  4 Y1 (t) 2 3 Y1 (t) 1 2 + + (1) (3) 1 1 0 3 −4 Y2 (t) Y2 (t) #  #   " (1)  " (2)  4 U1 (t) 1 0 U1 (t) 5 3 + + = (1) (2) 3 0 3 4 1 U2 (t) U2 (t)

0 1



2 6



Y1 (t) Y2 (t)



U1 (t) U2 (t)

=  . (B.1)

In this case: N = 2, ν = 3, r = 2, µ = 2 < 3 = ν,  A0 =

4 0 1 1



(B.2)



     2 3 0 0 1 2 , A1 = , A2 = = O2 , A3 = , 1 0 0 0 3 −4   4 0 2 3 0 0 1 2 (3) A = , (B.3) 1 1 1 0 0 0 3 −4 

B0 =

4 2 3 6



B (2)



1 0 0 3





, B1 = , B2 =   4 2 1 0 5 3 = , 3 6 0 3 4 1

5 3 4 1

 , (B.4)

243 i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 244 — #255

i

244

i

APPENDIX B. EXAMPLE (k)

(ς−1)

The complex functions Si (s) (2.17) and Zk

(s) (2.18) read:

T s0 0 s1 0 s2 0 s3 0 = = 0 s0 0 s1 0 s2 0 s3  T 1 0 s 0 s2 0 s3 0 = =⇒ 0 1 0 s 0 s2 0 s3

(3) S2 (s)



(3) A(3) S2 (s)

(2) S2 (s)

 =

 =

s0 0 s1 0 s2 0 s0 0 s1 0

4 + 2s + s3 1 + s + 3s3 T  1 0 = 2 0 s

3s + 2s3 1 − 4s3

(B.5)

 (B.6) T

0 s 0 s2 0 1 0 s 0 s2

, (B.7)



 O2 O2 O2  s0 U2 O2 O2  (3−1) = Z2 (s) =  1 0  s U2 s U2 O2  s3−1 U2 s3−2 U2 s3−3 U2   0 0 0 0 0 0  0 0 0 0 0 0     1 0 0 0 0 0     0 1 0 0 0 0   = Z (2) (s),  = 2   s 0 1 0 0 0   0 s 0 1 0 0     s2 0 s 0 1 0  0 s2 0 s 0 1 (3)

A

(2) Z2 (s)

 =

2 + s2 3 + 2s2 s 2s 1 2 1 + 3s2 3 − 4s2 3s −4s 3 −4

(B.8)

 ,

and     O2 O2  (2−1) 0   s I2 O2 Z2 (s) = =   s2−1 I2 s1−1 I2  

(1)

B (2) Z2 (s) =



0 0 1 0 s 0

0 0 0 1 0 s

0 0 0 0 1 0

0 0 0 0 0 1

     = Z (1) (s). 2   

1 + 5s 3s 5 3 4s 3+s 4 1

(B.9)

 (B.10)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 245 — #256

i

B.1. IO SYSTEM EXAMPLE

i

245

Equations (B.2)-(B.9) give the following specific form to the equations (2.20) and (2.22) for the specific system (B.1): (k=3 ) X (3) (2) L∓ Ak Y(k) (t) = A(3) S2 (s)Y∓ (s) − A(3) Z3 (s)Y02∓ = k=0

  ∓ Y1 (s) 4 + 2s + s3 3s + 2s3 − = Y ∓ (s) 1 + s + 3s3 1 − 4s3 {z }| 2{z } | 

Y ∓ (s)

(3)

A(3) S3 (s)

      2 2 2+s 3 + 2s s 2s 1 2   − 1 + 3s2 3 − 4s2 3s −4s 3 −4  | {z }   (2) A(3) Z (s) 2

|

Y10∓ Y20∓ (1) Y10∓ (1) Y20∓ (2) Y10∓ (2) Y20∓ {z Y 2∓

     ,    

(B.11)

}

0

i.e., (k=3 X

)

   4 + 2s + s3 Y1∓ (s) + 3s + 2s3 Y2∓ (s) − Ak Y (t) = L 1 + s + 3s3 Y1∓ (s) + 1 − 4s3 Y2∓ (s) k=0 " #   (1) (1) (2) (2) 2 + s2 Y10∓ + 3 + 2s2 Y20∓ + sY10∓ + 2sY20∓ + Y10∓ + 2Y20∓ − ,   (1) (1) (2) (2) 1 + 3s2 Y10∓ + 3 − 4s2 Y20∓ + 3sY10∓ − 4sY20∓ + 3Y10∓ − 4Y20∓ ∓

L∓

(k=2 X

(k)



) Bk U(k) (t)

(2)

(2−1)

= B (2) S2 (s)U∓ (s) − B (2) Z2

(s)U2−1 (0∓ ) =

k=0

 ∓  4 + s + 5s2 2 + 3s2 U1 (s) = − 3 + 4s2 6 + 3s + s2 U ∓ (s) | {z }| 2{z } 

U∓ (s)

(2)

B (2) S2 (s)

   1 + 5s 3s 5 3  −  4s 3+s 4 1  | {z } (1) B (2) Z2 (s) |

U10∓ U20∓ (1) U10∓ (1) U20∓ {z B1∓

    

(B.12)

}

0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 246 — #257

i

246

i

APPENDIX B. EXAMPLE

i.e., L



(k=2 X

)

   4 + s +5s2 U1∓ (s) + 2 + 3s2  U2∓ (s) Bk U (t) = − 3 + 4s2 U1∓ (s) + 6 + 3s + s2 U2∓ (s) k=0 " # (1) (1) (1 + 5s) U10∓ + 3sU20∓ + 5U10∓ + 3U20∓ − . (1) (1) 4sU10∓ + (3 + s) U20∓ + 4U10∓ + U20∓ 

(k)

Besides,  " =

−1 (3) A(3) S2 (s)

 =

4s3 −1 10s6 +19s4 +17s3 +3s2 +s−4 3s3 +s+1 10s6 +19s4 +17s3 +3s2 +s−4

4 + 2s + s3 3s + 2s3 1 + s + 3s3 1 − 4s3

−1

2s3 +3s 10s6 +19s4 +17s3 +3s2 +s−4 3 +2s+4 − 10s6 +19ss4 +17s 3 +3s2 +s−4

= # .

(B.13)

Equations (B.11), (B.13) determine (2.24) for the system (B.1) as follows: "

4s3 −1 10s6 +19s4 +17s3 +3s2 +s−4 3s3 +s+1 10s6 +19s4 +17s3 +3s2 +s−4

Y∓ (s)=

2s3 +3s 10s6 +19s4 +17s3 +3s2 +s−4 3 +2s+4 − 10s6 +19ss4 +17s 3 +3s2 +s−4   T 3s2

# •

4 + s + 5s2 2+ 2   ∓   3 + 4s 6 + 3s + s2     B (s)   −1 − 5s −3s −5 −3   B1∓  = •   0 −4s −3 − s −4 −1     Y02∓ 2 2   2+s 3 + 2s s 2s 1 2 1 + 3s2 3 − 4s2 3s −4s 3 −4 



∓ (s), = FIO (s) VIO

(B.14)

where - FIO (s) , 

 .. .. FIO (s) = GIO (s) . GIOI0 (s) . GIOY0 (s) = " =

4s3 −1 10s6 +19s4 +17s3 +3s2 +s−4 3s3 +s+1 10s6 +19s4 +17s3 +3s2 +s−4

2s3 +3s 10s6 +19s4 +17s3 +3s2 +s−4 s3 +2s+4 10s6 +19s4 +17s3 +3s2 +s−4



 T 4 + s + 5s2 2 + 3s2   3 + 4s2 6 + 3s + s2       −1 − 5s −3s −5 −3  , •   −4s −3 − s −4 −1     2 2   2+s 3 + 2s s 2s 1 2 2 2 1 + 3s 3 − 4s 3s −4s 3 −4 

# •



(B.15)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 247 — #258

i

i

B.1. IO SYSTEM EXAMPLE

247

is the full transfer function matrix of the IO system (B.1), which determines: - The system transfer function GIO (s) relative to the input vector U : # " 3 3 4s −1 10s6 +19s4 +17s3 +3s2 +s−4 3s3 +s+1 10s6 +19s4 +17s3 +3s2 +s−4  4 + s + 5s2 2

GIO (s) =



3 + 4s2

2s +3s 10s6 +19s4 +17s3 +3s2 +s−4 3 +2s+4 − 10s6 +19ss4 +17s 3 +3s2 +s−4  + 3s2

6 + 3s + s2



=⇒

GIO (s) = " =

−4+8s−5s2 +34s3 +4s4 +28s5 10s6 +19s4 +17s3 +3s2 +s−4 −8−s+6s2 +6s3 +3s4 +11s5 10s6 +19s4 +17s3 +3s2 +s−4

−2+18s+6s2 +23s3 +6s4 +14s5 10s6 +19s4 +17s3 +3s2 +s−4 26+26s+13s2 +17s3 +3s4 +10s5 − 10s6 +19s4 +17s3 +3s2 +s−4

# ,

(B.16)

- The system transfer function GIOU0 (s) relative to the extended initial : control vector U µ−1 0∓ " GIOU0 (s) =

4s3 −1 10s6 +19s4 +17s3 +3s2 +s−4 3s3 +s+1 10s6 +19s4 +17s3 +3s2 +s−4

 •

2s3 +3s 10s6 +19s4 +17s3 +3s2 +s−4 s3 +2s+4 10s6 +19s4 +17s3 +3s2 +s−4

# •



−1 − 5s −3s −5 −3 −4s −3 − s −4 −1

 =⇒

GIOU0 (s) =    = 

1+5s−12s2 −4s3 −28s4 10s6 +19s4 +17s3 +3s2 +s−4 −6s−3s2 −6s3 −14s4 10s6 +19s4 +17s3 +3s2 +s−4 5−12s −28s3 10s6 +19s4 +17s3 +3s2 +s−4 3−3s+10s3 10s6 +19s4 +17s3 +3s2 +s−4

−1+10s+3s2 −3s3 −11s4 10s6 +19s4 +17s3 +3s2 +s−4 12+7s−s2 +3s3 −8s4 10s6 +19s4 +17s3 +3s2 +s−4 11+3s−11s3 10s6 +19s4 +17s3 +3s2 +s−4 1−s−8s3 10s6 +19s4 +17s3 +3s2 +s−4

T    , 

(B.17)

- The system transfer function GIOY0 (s) relative to the extended initial output vector Y0ν−1 : ∓ " GIOY0 (s) =  •

4s3 −1 10s6 +19s4 +17s3 +3s2 +s−4 3s3 +s+1 10s6 +19s4 +17s3 +3s2 +s−4

2s3 +3s 10s6 +19s4 +17s3 +3s2 +s−4 s3 +2s+4 10s6 +19s4 +17s3 +3s2 +s−4



2 + s2 3 + 2s2 s 2s 1 2 2 2 1 + 3s 3 − 4s 3s −4s 3 −4

# •

 =⇒

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 248 — #259

i

248

i

APPENDIX B. EXAMPLE GIOY0 (s) =      =    

−2+3s−s2 +19s3 +10s5 10s6 +19s4 +17s3 +3s2 +s−4 −3+9s−2s2 +6s3 10s6 +19s4 +17s3 +3s2 +s−4 −s+9s2 +10s4 10s6 +19s4 +17s3 +3s2 +s−4 −2s−12s2 10s6 +19s4 +17s3 +3s2 +s−4 −1+9s+10s3 10s6 +19s4 +17s3 +3s2 +s−4 −2−12s 10s6 +19s4 +17s3 +3s2 +s−4

−2−11s2 10s6 +19s4 +17s3 +3s2 +s−4 −9−3s+18s2 +16s3 +20s5 10s6 +19s4 +17s3 +3s2 +s−4 −11s −5s2 10s6 +19s4 +17s3 +3s2 +s−4 −14s−6s2 +2s4 10s6 +19s4 +17s3 +3s2 +s−4 −11−5s 10s6 +19s4 +17s3 +3s2 +s−4 −14−6s+2s3 10s6 +19s4 +17s3 +3s2 +s−4

T      ,    

(B.18)

∓ (s) of the system action vector VIO (t) reads - The Laplace transform VIO

   ∓ U∓ (s) U (s) ∓ 1 , VIO (s) =  U0∓  = CIO0∓ 2 Y0∓ 

(B.19)

- And the vector CIO0∓ of all the initial conditions is found to be  1  U0∓ CIO0∓ = . (B.20) Y02∓
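The symbolic work of this example can be reproduced with a computer algebra system. The sketch below is only an illustration, not part of the book; it builds the polynomial matrices from (B.2)-(B.4) and forms the transfer function matrix of (B.16) as $\left[A^{(3)}S_2^{(3)}(s)\right]^{-1}B^{(2)}S_2^{(2)}(s)$.

```python
import sympy as sp

s = sp.symbols('s')

# Coefficient matrices read off from (B.2)-(B.4).
A0 = sp.Matrix([[4, 0], [1, 1]]);  A1 = sp.Matrix([[2, 3], [1, 0]])
A2 = sp.zeros(2, 2);               A3 = sp.Matrix([[1, 2], [3, -4]])
B0 = sp.Matrix([[4, 2], [3, 6]]);  B1 = sp.Matrix([[1, 0], [0, 3]])
B2 = sp.Matrix([[5, 3], [4, 1]])

A_s = A0 + A1*s + A2*s**2 + A3*s**3    # the polynomial matrix A^(3) S_2^(3)(s), cf. (B.6)
B_s = B0 + B1*s + B2*s**2              # the polynomial matrix B^(2) S_2^(2)(s), cf. (B.12)

G_IO = sp.simplify(A_s.inv() * B_s)    # transfer function matrix relative to U, cf. (B.16)
print(sp.expand(A_s.det()))            # up to sign, the common denominator appearing in (B.13)-(B.18)
```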

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 249 — #260

i

i

Appendix C

Transformations

C.1  Transformation of IO into ISO system

The state space theory of the linear dynamical and control systems has been mainly established and effective for the ISO systems (3.1), (3.2) (Section 3.1). In order to transform the IO system (2.1), i.e., (2.15) (Section 2.1.1) into the ISO systems (3.1), (3.2) the well-known formal mathematical transformation has been used. It has to satisfy the condition that the transformed system should not contain any derivative of the input vector despite the influence of derivatives of the input vector on the original system and the condition that the only accepted derivative is the first derivative of the state vector and only in the state equation. We will illustrate it for the IO system (2.1) subjected to the external action of the input vector I and its derivatives, i.e., subjected to the action of the extended input vector Iµ . The IO system (2.1): k=ν X k=0

Ak Y

(k)

(t) =

k=µ X

Hk I(k) (t), detAν 6= 0, ∀t ∈ T0 , ν ≥ 1, 0≤µ ≤ ν,

k=0

(C.1) can be formally mathematical transformed into the mathematically equivalent ISO system (3.1), (3.2), dX(t) = AX(t) + HI(t), ∀t ∈ T0 , A ∈ Rnxn , U ∈RM , P ∈ RnxM , (C.2) dt Y(t) = CX(t) + QI(t), ∀t ∈ T0 , C ∈ RN xn , C 6= ON ,n , Q ∈ RN xM . (C.3)

249 i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 250 — #261

i

250

i

APPENDIX C. TRANSFORMATIONS

by applying the following formal mathematical transformations: X1 = Y − Hν I,

(C.4)



X2 = X1 + Aν−1 Y−Hν−1 I,

(C.5)



X3 = X2 + Aν−2 Y−Hν−2 I,

(C.6)

....

(C.7)

Xν−1 = Xν−2 + A2 Y−H2 I

(C.8)

• •

Xν = Xν−1 + A1 Y−H1 I,

(C.9)

where Hk = ON,r for k = µ + 1, µ + 2, ..., ν if µ < ν. The vectors X1 , X2 , ... Xν ∈ RN are the mathematical state subvectors of the vector X ∈ Rn that is the mathematical state vector of the IO system (C.1) and of the equivalent ISO system (C.2), (C.3),  T ∈ Rn , n = νN. (C.10) X = XT1 XT2 ... XTν Comment 228 The state subvectors X1 , X2 , ... Xν (C.4)-(C.9) and the state vector X (C.10) do not any physical sense, i.e., they are physically meaningless, if µ > 0, equivalently if Hk = ON,r for k ∈ {1, 2, ..., ν} . This is the consequence of their definitions to be linear combinations of the input vector, the output vector and the derivative of the preceding state subvector if it exists. Their physical nature and properties are most often inherently different. The transformations (C.4)-(C.9) are formal mathematical, physically useless in general. They lead to the following matrices of the ISO system (C.2), (C.3) mathematically formally equivalent to the IO system (C.1):   −Aν−1 IN ... ON ON  −Aν−2 ON ... ON ON    , ... ... ... ... ... (C.11) A=    −A1 ON ... ON IN  −A0 ON ... ON ON   Hν−1 − Aν−1 Hν  Hν−2 − Aν−2 Hν    , .... H= (C.12)     H1 − A1 Hν H0 − A0 Hν

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 251 — #262

i

C.2. ISO AND EISO FORMS OF IIO SYSTEM C=



IN

ON

ON

... ON

ON

i

251 ON



,

Q = Hν .

(C.13) (C.14)
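A minimal sketch of the construction (C.11)-(C.14), assuming $A_\nu = I_N$ (as the transformations (C.4)-(C.9) implicitly do) and taking hypothetical numerical blocks from the caller; it is an illustration of the formal transformation only.

```python
import numpy as np

def io_to_iso(A_list, H_list):
    """Assemble the ISO matrices of (C.11)-(C.14) from the IO data
    A_list = [A_0, ..., A_{nu-1}] and H_list = [H_0, ..., H_mu],
    with A_nu = I_N assumed and H_k = O padded for mu < k <= nu."""
    nu = len(A_list)
    N = A_list[0].shape[0]
    M = H_list[0].shape[1]
    H_pad = list(H_list) + [np.zeros((N, M))] * (nu + 1 - len(H_list))
    A = np.zeros((nu * N, nu * N))
    H = np.zeros((nu * N, M))
    for i in range(nu):
        A[i * N:(i + 1) * N, :N] = -A_list[nu - 1 - i]          # first block column: -A_{nu-1}, ..., -A_0
        if i < nu - 1:
            A[i * N:(i + 1) * N, (i + 1) * N:(i + 2) * N] = np.eye(N)
        H[i * N:(i + 1) * N, :] = H_pad[nu - 1 - i] - A_list[nu - 1 - i] @ H_pad[nu]
    C = np.hstack([np.eye(N)] + [np.zeros((N, N))] * (nu - 1))
    Q = H_pad[nu]
    return A, H, C, Q

# Hypothetical scalar example (N = 1, nu = 2, mu = 0): y'' + 3 y' + 2 y = u.
A, H, C, Q = io_to_iso([np.array([[2.0]]), np.array([[3.0]])], [np.array([[1.0]])])
print(A, H, C, Q, sep="\n")
```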

Conclusion 229 The aim of the book and the transformations (C.4)-(C.9) The aim of the book to further develop and generalize the control theory with the simultaneous physical and mathematical, i.e., the full engineering, sense, excludes the use of the pure formal mathematical transformations (C.4)-(C.9) if µ > 0.

C.2

ISO and EISO forms of IIO system

The ISO and EISO forms of the IIO system The compact form of the overall mathematical model of the IIO system (6.1), (6.2), (Section 6.1), reads in terms of the total coordinates:      α−1    (α) A(α−1) Oρ,ν+1 R (t) Aα Oρ,N R (t) = + Zν−1 (t) ON,α Eν Z(ν) (t) −R(α−1) E (ν−1)  (µ)  H Iµ (t), Y(t) = Z(t), (C.15) = Q(µ) where we use a subsidiary vector Z, Z(t) = Y(t) = Sα+1 (t), Z(k) (t) = Y(k) (t) = Sα+k+1 (t), k = 0, 1, .., ν − 1, Zν−1 (t) = Yν−1 (t) = SO (t).

(C.16)

In terms of the deviations the system model is given by (6.65), (6.66) (Section 6.2), which can be set in the form of (C.15), (C.16):    (α)     α−1  Aα Oρ,N r (t) A(α−1) Oρ,ν+1 r (t) + = ON,α Eν zν−1 (t) z(ν) (t) −R(α−1) E (ν−1)  (µ)  H = iµ (t), y(t) = z(t), (C.17) Q(µ) z(t) = y(t) = sα+1 (t), z(k) (t) = y(k) (t) = sα+k+1 (t), k = 0, 1, .., ν − 1, (C.18) zν−1 (t) = yν−1 (t) = sO (t).

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 252 — #263

i

252

i

APPENDIX C. TRANSFORMATIONS

We continue to use the system model (C.17), (C.18) in terms of the deviations by recalling the fact that the system models (C.15), (C.16), and (C.17), (C.18) have the same properties. Condition 70 (Section 6.1) and (9.38) (Section 9.5) permit us to transform Equation (C.17) into " #    α−1  (1) (α−1) sα (t) A−1 Oρ,ν+1 r (t) α A + = (1) zν−1 (t) −Eν−1 R(α−1) Eν−1 E (ν−1) sα+ν (t)  −1 (µ)  Aα H iµ (t), = Eν−1 Q(µ)   ON,ρ repeats α−times ON repeats (ν−1)−times z }| { }| { z     α−1 .. .. .. .. .. .. (t)   r y(t) =  ON,ρ . ... . ON,ρ . IN . . ON . ... . ON  {z } |  |  zν−1 (t) {z } O(ν−1)N

ON,αρ =CI

(C.19) In view of Equation (9.38), the extended form of Equation (9.39) (Section 9.5) reads: s1 = r1 (1)

(1)

(1)

(1)

s2 = s1 = r(1) =⇒ s1 = s2 s3 = s2 = r(2) =⇒ s2 = s3 ... (1)

(1)

sα = sα−1 = r(α−1) =⇒ sα−1 = sα sα+1 = y sα+2 = sα+3 =

(1) sα+1 (1) sα+2

(1)

= y(1) =⇒ sα+1 = sα+2 , (1)

= y(2) =⇒ sα+2 = sα+3 , ...

sα+ν =

(1) sα+ν−1

=y

(ν−1)

(1)

=⇒ sα+ν−1 = sα+ν ,

(C.20)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 253 — #264

i

C.2. ISO AND EISO FORMS OF IIO SYSTEM

i

253

Equation (C.19) determines:  T −1 (α−1) T (µ) µ s(1) s1 (t) sT2 (t) ... sTα (t) + A−1 i (t), α (t) = −Aα A α H | {z } sI (t)



 T Eν−1 R(α−1) sT1 (t) sT2 (t) ... sTα (t) − {z } |

  sI (t)   T T (1) T −1 (ν−1) sα+ν (t) =  sα+1 (t) sα+2 (t) ... sTα+ν (t) +  −Eν E  {z } |  sO (t)

    ,   

(C.21)

+Eν−1 Q(µ) iµ (t)  T sI (t) = sT1 (t) sT2 (t) ... sTα (t) ∈ Rαρ ,  T sO (t) = sTα+1 (t) sTα+2 (t) ... sTα+ν (t) ∈ RνN .

(C.22) (C.23)

Equations (C.19) - (C.21) imply:   A11 Oαρ,νN A= ∈ Rn×n , n = αρ + νN, A21 A22  A11

  =  

Oρ Iρ Oρ Oρ : : Oρ Oρ −1 A −A−1 A −A 0 1 α α

... Oρ Oρ ... Oρ Oρ : : : ... Oρ Iρ −1 A ... −A−1 A −A α−2 α−1 α α

(C.24)    ,  

A11 ∈ Rαρ×αρ ,

(C.25)

 ON,ρ ON,ρ ... ON,ρ ON,ρ   ON,ρ ON,ρ ... ON,ρ ON,ρ , =   : : : : : −1 −1 −1 −1 Eν Ry0 Eν Ry1 ... Eν Ry,α−2 Eν Ry,α−1 

A21

A21 ∈ RνN ×αρ ,  A22

  =  

ON IN ON ON : : ON ON −Eν−1 E0 −Eν−1 E1

... ON ON ... ON ON : : : ... ON IN ... −Eν−1 Eν−2 −Eν−1 Eν−1

A22 ∈ RνN ×νN ,

(C.26)    ,   (C.27)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 254 — #265

i

254

i

APPENDIX C. TRANSFORMATIONS 

  C = ON,αρ | {z } CI

  .. .. ..  . IN . ON,(ν−1)N  = CI . CO ∈ RN ×n , {z } | CO

P

   W1 =   

(C.28)

(µ)

 =W =

W1 W2

Oρ,M Oρ,M Oρ,M Oρ,M : : Oρ,M Oρ,M −1 A−1 α H0 Aα H1



∈ Rn×(µ+1)M ,

... Oρ,M Oρ,M ... Oρ,M Oρ,M : : : ... Oρ,M Oρ,M −1 ... A−1 α Hµ−1 Aα Hµ

(C.29)

   ,  

W1 ∈ Rαρ×(µ+1)M,    W2 =   

ON,M ON,M ON,M ON,M : : ON,M ON,M Eν−1 Q0 Eν−1 Q1

... ON,M ON,M ... ON,M ON,M : : : ... ON,M ON,M ... Eν−1 Qµ−1 Eν−1 Qµ

W2 ∈ RνN ×(µ+1)M, W(t) = Iµ (t) ∈ R(µ+1)M , w(t) = iµ (t) ∈ R(µ+1)M .

(C.30)       (C.31) (C.32)

Altogether, dS(t) = AS(t) + W W(t) = AS(t) + P (µ) Iµ (t), dt Y(t) = CS(t),

(C.33) (C.34)

These equations represent the ISO form for Iµ (t) replaced by W(t), and EISO form of the IIO system (6.65), (6.66). In terms of the deviations of all variables which in the free regime, i.e., for w(t) ≡ 0m , Equations (C.33), (C.34) take the following form: ds(t) = As(t), dt y(t) = Cs(t).

(C.35) (C.36)

It is the system (8.11), (8.12) (Section 8.2).

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 255 — #266

i

i

Appendix D

Proofs D.1

Proof of Lemma 97

Proof. This is the proof of Lemma 101, (Section 7.4). For any nonzero row 1 × ν vector g = [γ1 γ2 ...γν ] , g ∈ R1×ν , and for the matrix function R (.) (7.45) the following holds:   ρ1 (t)  ρ2 (t)   gR (t) = [γ1 γ2 ...γν ]   :  = γ1 ρ1 (t) + ... + γν ρν (t) = ρν (t)     = γ1 ρ11 (t) ... ρ1µ (t) + .. + γν ρν1 (t) ... ρνµ (t) = " i=ν # i=ν i=ν i=ν X X X X = γi ρi1 (t) γi ρi2 (t) ... γi ρi,µ−1 (t) γi ρiµ (t) . (D.1) i=1

i=1

i=1

i=1

Linear independence of the rows ρi (.) of the matrix function R (.) (7.45) on T0 guarantees (Definition 98) that there exists τ ∈ T0 such that their linear combination (D.1) vanishes at τ ∈ T0 :   gR (τ ) = γ1 ρ1 (τ ) + ... + γν ρν (τ ) = 0 ... 0 ⇐⇒ " i=ν # i=ν X X ⇐⇒ γi ρi1 (τ ) = 0 ... γi ρiµ (τ ) = 0 , i=1

i=1

255 i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 256 — #267

i

256

i

APPENDIX D. PROOFS

if and only if all coefficients γi are zero: All ρi (t) are linearly independent on T0 ⇐⇒   γ1 ρ1 (τ ) + ... + γν ρν (τ ) = 0Tν ⇐⇒ ⇐⇒ ∃τ ∈ T0 =⇒ ⇐⇒ ⇐⇒ γ1 = ... = γν = 0  i=ν  X γi ρik (τ ) = 0, ∀k = 1, 2, ..., µ ⇐⇒   ⇐⇒ ∃τ ∈ T0 =⇒  (D.2) . i=1

⇐⇒ γ1 = ... = γν = 0. This implies the following: All ρi (t) are linearly independent on T0 =⇒  ∀ g 6= 0Tν ∈R1×ν , ∃σ ∈ T0 =⇒ " =

gR (σ) = γ1 ρ1 (σ) + ... + γν ρν (σ) = # i=ν i=ν X X γi ρi1 (σ) ... γi ρiµ (σ) 6= 0Tν =⇒ i=1

i=1

=⇒ ∃k ∈ {1, 2, ..., µ} =⇒

i=ν X

γi ρik (σ) 6= 0.

(D.3)

i=1

This proves Lemma 101.

D.2

Proof of Theorem 110

Lemma 230 Linear independence and the sign of the Gram matrix quadratic form 1) If the rows ri (.) of the matrix function R (.) (7.45) are linearly independent on T0 then for any 1 × ν vector a, a ∈ R1×ν , there exists t ∈ T0 for which the integrand R (t) RT (t) of the Gram matrix (7.69) satisfies the following:  ∀ a 6= 0Tν ∈R1×ν , ∃t ∈ T0 =⇒ aR (t) RT (t) aT > 0. (D.4) 2) There exists t ∈ T0 such that R (t) RT (t) is positive definite at t ∈ T0 . Proof. 1) Let the rows ri (.) of the matrix function R (.) (7.45), R (t) ∈ C (T0 ), be linearly independent on T0 .For any 1×ν vector a, a ∈ R1×ν , there exists t ∈ T0 for which aR (t) is nonzero vector, i.e., (7.65) holds (Lemma

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 257 — #268

i

D.2. PROOF OF THEOREM 110

i

257

101). The product aR (t) RT (t) aT is the norm RT (t) aT of the nonzero vector RT (t) aT = (aR (t))T , which is positive and proves Lemma 230. 2) By definition, a ν × ν matrix M (t) = R (t) RT (t) is positive definite at t ∈ T0 if and only if its quadratic form aR (t) RT (t) aT is positive for every nonzero vector a ∈ R1×ν , i.e., if and only if i) aR (t) RT (t) aT = 0 ⇐⇒ a = 0Tν, t ∈ T0 , ii) aR (t) RT (t) aT > 0, ∀ a 6= 0Tν ∈ R1×ν , t ∈ T0 . This definition, the fact that a = 0Tν implies aR (t) RT (t) aT = 0 and (D.4) prove positive definiteness of aR (t) RT (t) aT at t ∈ T0 . Lemma 230 enables us to prove Theorem 114, (Section 7.4). Proof. This is the proof of Theorem 114, (Section 7.4). If the functions ρi (.) : T −→ R, i = 1, 2, are integrable on a connected nonempty nonsingleton subset T∗ = (ta , tb ) of T, T∗ ⊆ T, then their product ρ1 (.)ρ2 (.) is also integrable on T∗ . Let R (.) (7.45) be integrable on T0 . Then the product R (t) RT (t) is also integrable on T0 , i.e., the Gram matrix (7.69) (Section 7.4) of R (t) exists on T0 . Necessity. Let the rows ri (.) of the integrable matrix function R (.) (7.45) be linearly independent on T0 . For any 1 × ν vector a, a ∈ R1×ν , there exists t ∈ T0 for which the integrand R (t) RT (t) of the Gram matrix (7.69) satisfies (D.4) (Lemma 230), which is possible if, and only if, R (t) RT (t) is positive definite at the moment t ∈ T0 . This and the fact that R (τ ) RT (τ ) is at least positive semidefinite or positive definite at any τ ∈ T0 guarantee that G (t) (7.69) is positive definite, hence nonsingular, on T0 . That can be proved also by contradiction. Let be assumed that G (t) (7.69) is singular for every t ∈ T0 in spite of the linear independence of the rows of the matrix function R (.) (7.45). There exists an 1×ν nonzero vector a, a 6= 0Tν ∈ R1×ν , such that for every t ∈ T0 the product aG (t) is zero vector: aG (t) = 0Tν , ∀t ∈ T0 =⇒ aG (t) aT = 0, ∀t ∈ T0 =⇒ t

Z

T

a

R (τ ) R (τ ) dτ Z



aT = 0, ∀t ∈ T0 =⇒

0 t

 (aR (τ )) RT (τ ) aT dt = 0, ∀t ∈ T0 ,

0

which is possible only if aR (t) = 0Tν , ∀t ∈ T0 . Since a 6= 0Tν then aR (t) = 0Tν , ∀t ∈ T0 implies the linear dependence of the rows of the matrix function R (.) (7.45) (Lemma 100), which contradicts the linear independence of the

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 258 — #269

i

258

i

APPENDIX D. PROOFS

rows of the matrix function R (.) (7.45). The contradiction rejects the assumption that the Gram matrix G (t) ∈ Rν×ν (7.69) is singular on T0 , and proves its nonsingularity on T0 . Sufficiency. Let the Gram matrix G (t) ∈ Rν×ν (7.69) be nonsingular on T0 . Let, in spite of that, the rows of the matrix function R (.) (7.45) be linearly dependent. Then there exists an 1 × ν nonzero vector a, a 6= 0Tν ∈ R1×ν , such that for every t ∈ T0 the product aR (t) is zero vector (Lemma 100). We pre-multiply the Gram matrix G (t) (7.69) by a and post-multiply it by aT : Z t  T aG (t) a = (aR (τ )) RT (τ ) aT dτ = 0Tν , ∀t ∈ T0 . 0

Since G (t) is positive definite at some t ∈ T0 then aG (t) aT = 0Tν implies a = 0Tν that contradicts a 6= 0Tν . The contradiction is the consequence of the assumed linear dependence of the rows of the matrix function R (.) (7.45) on T0 . The contradiction rejects the assumption and proves that the rows of the matrix function R (.) (7.45) are linearly independent on T0 . We can prove the sufficiency also as follows. Let, in spite of the nonsingularity of G (t) on T0 , the rows of the matrix function R (.) (7.45) be linearly dependent. Then there exists an 1 × ν nonzero vector a, a 6= 0Tν ∈ R1×ν , such that for every t ∈ T0 the product aR (t) is zero vector (Lemma 100). We premultiply the Gram matrix G (t) (7.69) by a : Z t aG (t) = (aR (τ )) RT (τ ) dτ = 0Tν , ∀t ∈ T0 . 0

The nonsingularity of G (t) on T0 implies the existence of its inverse G−1 (t) on T0 . The multiplication of the preceding equation on the righthand side by G−1 (t) results in: aG (t) G−1 (t) = a = 0Tν , ∀t ∈ T0 . The result a = 0Tν contradicts the chosen a 6= 0Tν . The contradiction disproves the assumption on the linear dependence of the rows of the matrix function R (.) (7.45) on T0 , which proves that the rows of the matrix function R (.) (7.45) are linearly independent on T0 .
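The equivalence just proved, linear independence of the rows of $R(\cdot)$ on $T_0$ if and only if the Gram matrix (7.69) is nonsingular, can be illustrated numerically. A minimal sketch with two hypothetical matrix functions, one with independent and one with dependent rows:

```python
import numpy as np

def gram_matrix(R, t0, t1, steps=400):
    """Trapezoidal approximation of the Gram matrix G = int_{t0}^{t1} R(t) R(t)^T dt of (7.69)."""
    ts = np.linspace(t0, t1, steps)
    vals = np.array([R(t) @ R(t).T for t in ts])
    return np.trapz(vals, ts, axis=0)

R_indep = lambda t: np.array([[1.0, t], [t, t**2 + 1.0]])   # rows linearly independent on [0, 1]
R_dep   = lambda t: np.array([[1.0, t], [2.0, 2.0 * t]])    # second row = 2 * first row for every t

for R in (R_indep, R_dep):
    G = gram_matrix(R, 0.0, 1.0)
    print(np.linalg.matrix_rank(G) == G.shape[0])            # True, then False
```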

D.3

Proof of Theorem 121

Definition 231 Analytic function ri (.) : T −→ R1×µ

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 259 — #270

i

i

D.3. PROOF OF THEOREM 121

259

A function ri (.) : T −→ R1×µ is analytic on [t0 , t1 ], t1 ∈ InT, if, and only if: 1. ri (.) is infinitely times differentiable on [t0 , t1 ] : ri (t) ∈ C ∞ ([t0 , t1 ]) . 2. ri (t) can be represented by the Taylor series on the ε-neighborhood of t0 for some ε ∈ R+ , ri (t) =

k=∞ X k=0

(t − t0 )k (k) ri (t0 ) , ∀t ∈ [t0 − ε, t0 + ε] . k!

The following theorem is helpful to prove observability and controllability criteria. Proof. This is the proof of Theorem 131. As explained in Section 8.1 the system observability does not depend on the input vector. The system can be considered in the free regime in terms of the variables deviations. It is described by (8.11), (8.12) repeated as ds(t) = As(t), dt y(t) = Cs(t).

(D.5) (D.6)

Its motion s (t; s0 ; 0M ) and its output response y (t; s0 ; 0M ) have the following well-known forms (8.13), (8.14): s (t; s0 ; 0M ) = eAt s0 = Φ (t) s0 ,

(D.7)

y (t; s0 ; 0M ) = CeAt s0 = CΦ (t) s0 .

(D.8)

Necessity. Let the system (8.11), (8.12), i.e., (D.5), (D.6), be observable. Definition 130 holds. Let the output response y (t; s0 ; 0M ) be known for every t ∈ T0 . 1. We multiply (D.8) by ΦT (t) C T on the left, integrate the product from 0 to any t ∈ InT0 and use (8.15):   t∈InT Z 0 0

 t∈InT   Z 0    T T Φ (t) C CΦ (t) dt s0 =⇒ Φ (t) C y (t; s0 ; 0M ) dt =     0  | {z } T

T

GOB (t)

t∈InT Z 0

ΦT (τ ) C T y (τ ; s0 ; 0M ) dτ = GOB (t) s0 .

0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 260 — #271

i

260

i

APPENDIX D. PROOFS

The left-hand side of this equation is known due to the knowledge of the system response y (t; s0 ; 0M ), the matrices C and Φ (t). The system observability means that the left-hand side of that equation uniquely determines the initial state s0 , which implies the nonsingularity of GOB (t) on [0,t] for any t ∈ InT0 . This proves the necessity of the condition 1. 2. The necessity of the nonsingularity of GOB (t) and Theorem 114 imply the linear independence of the rows of ΦT (t) C T that are the columns of CΦ (t) on [0,t] for some t ∈ InT0 .This proves the necessity of the condition 2. 3. The matrix C(sIN −A)−1 is the Laplace transform of CΦ (t, t0 ) . This, the linearity of the Laplace transform (which does not influence the rank) of the matrix, and the necessity of the condition under 2. imply the necessity of the condition under 3. 4. The fundamental matrix Φ (t) = eAt is analytic, i.e., it is infinitely times differentiable and can be represented by the Taylor series (Definition 231), which, together with the Cayley-Hamilton theorem, permits us to express Φ (t) in terms of some continuously differentiable linearly independent functions αk (.) : T −→ R, ∀k = 0, 1, ..., n − 1, and the powers Ak of the matrix A up to k = n − 1 [2, pp. 124, 125], [79, p. 489]: At

Φ (t) = e

=

k=n−1 X

αk (t) Ak .

(D.9)

k=0

Let Hk = HkT



CAk

T

T

= Ak C T ∈ Rn×N , i.e.,

= CAk ∈ RN ×n , ∀k = 0, 1, ..., n − 1.

(D.10)

Equations (D.6) and (D.9) yield y(t) = y (t; s0 ; 0M ) =

k=n−1 X k=0

αk (t) HkT s0 =

k=n−1 X

αk (t) IN HkT s0 ∈ RN ,

k=0

(D.11) or, equivalently:  H0T s0    H1T s0 . . . y(t) = α0 (t) IN .. α1 (t) IN .. ... .. αn−1 (t) IN   ... T s Hn−1 0

   ∈ RN . (D.12) 

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 261 — #272

i

D.3. PROOF OF THEOREM 121

i

261

Let   Λ (t) =   |

α0 (t) IN α1 (t) IN ... αn−1 (t) IN {z

   ∈ RnN ×N =⇒ 

Λ(t)

}

  .. .. .. Λ (t) = α0 (t) IN . α1 (t) IN . ... . αn−1 (t) IN ∈ RN ×nN T

(D.13)

This sets Equation (D.12) into  H0T s0  H1T s0   ∈ RN . y(t) = ΛT (t)    ... T Hn−1 s0 

(D.14)

Let this equation be premultiplied by Λ (t) and then be integrated from 0 to t:   H0T s0 Zt Zt  H1T s0  T   Λ (τ ) Λ (τ ) dτ  (D.15)  = Λ (τ ) y(τ )dτ. ... T s 0 0 Hn−1 0 The linear independence of the functions αk (.) : T −→ R, ∀k = 0, 1, ..., n − 1, and Theorem 114, (Section 7.4), imply the nonsingularity of the Gram matrix Gα (t) ∈ RnN ×nN of the functions αk (.) IN : T −→ RN ×N , ∀k = 0, 1, ..., n − 1 : Zt Gα (t) =

Λ (τ ) ΛT (τ ) dτ, detGα (t) 6= 0.

(D.16)

0

This permits to set (D.15) into  H0T s0  H1T s0 Gα (t)   ... T s Hn−1 0

  = 

Zt Λ (τ ) y(τ )dτ 0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 262 — #273

i

262

i

APPENDIX D. PROOFS

or, after multiplying this equation on the left by G−1 α (t, t0 ) ,     H0T s0 H0T T  T   H1T s0     =  H1  s0 = H0 ... H1 ... ... ... Hn−1 s0 =    ...  ... T T Hn−1 s0 Hn−1 Zt −1 = Gα (t) Λ (τ ) y(τ )dτ. (D.17) 0

In view of the necessity of the condition 1, Equation (8.15), and in view of Equations (D.9)-(D.14), the observability of the system implies the the unique solution s0 of Equation (D.17) that is possible if and only if:  T .. .. .. rank H0 . H1 . ... . Hn−1 = n, or, equivalently, due to Hk = CAk

T

(D.18)

, ∀k = 0, 1, ..., n − 1 :

   T .. T .. T .. n−1 T rank C . (CA) . ... . CA = rankOOBS = n.

(D.19)

This proves the necessity of the condition 4. 5. In order to prove the necessity of the condition 5) let us first prove the following: Lemma 232 Initial state and zero response of the observable system If the system (8.1), (8.12), equivalently, the system (D.5), (D.6), is observable then  ∀t ∈ InT0 , y(σ; s0 ; 0m ) = CeAσ s0 = 0N , ∀σ ∈ [0, t] ⇐⇒ s0 = 0n , (D.20)   y(s) = C (sIn − A)−1 s0 = 0N , on C ⇐⇒ s0 = 0n . (D.21) Proof. This is the proof of Lemma 232. Let the system (8.1), (8.12), equivalently, the system (D.5), (D.6), be observable. All columns of CΦ (t) are linearly independent on [0,t] for any t ∈ InT0 , i.e., on T0 , due to the necessity of the condition 2, and all columns of C(sIN − A)−1 are linearly independent on C due to the necessity of the

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 263 — #274

i

D.3. PROOF OF THEOREM 121

i

263

condition 3. This means that the rows of [CΦ (t)]T are linearly independent h iT on [0,t] for any t ∈ InT0 and rows of (sIn − A)−1 C T are linearly independent on C. These facts and Lemma 101 (Section 7.4) guarantee that for any nonzero initial state vector s0 ∈ Rn the following statements hold: ∀ (s0 6= 0n ) ∈ Rn =⇒  =⇒

∀t ∈ T0 , ∃τ ∈ [0, t] =⇒ y(τ ; s0 ; 0m ) = CΦ (τ ) s0 6= 0N , ∀ (s0 6= 0N ) ∈ C, ∃s ∈ C =⇒ y(s) = C (sIn − A)−1 s0 6= 0N ,

 .

These statements are equivalent to the statements (D.20) and (D.21) of the Lemma. This completes the proof of the Lemma. Let us continue the proof of Theorem 131. Proof. We continue with the proof of the necessity of the condition 5) of Theorem 131. The system observability and the explanation in Section 8.1 permit us to apply the Laplace transform to the system (D.5), (D.6). The result is (sIn − A) s(s) = s0

(D.22)

Cs(s) = y (s) .

(D.23)

i.e., 

sIn − A C



 s(s) =

s0 y (s)

 .

(D.24)

From Equation (8.13) follows s(t; s0 ; 0M ) = eAt s0 = 0n , ∀t ∈ T0 ⇐⇒ s0 = 0n ,

(D.25)

s(s) = s(s; s0 ; 0M ) = 0n , ∀s ∈ C ⇐⇒ s0 = 0n .

(D.26)

which implies

Equation (D.25) and Equation (D.21) (Lemma 232) show that the system response y(t; s0 ; 0m ) and its Laplace transform y(s) are equal to the zero vector for all t ∈ T0 and for all s ∈ C, respectively, if and only if the initial state vector is the zero vector, s0 = 0n , which is, therefore, observable. Consequently we should analyze the observability of any nonzero zero initial state vector s0 6= 0n for which the system response y(t; s0 ; 0m ) and its Laplace transform y(s) are different from the zero vector for some t ∈ T0 and s ∈ C, respectively, due to Lemma 232. We accept that there is t ∈ T0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 264 — #275

i

264

i

APPENDIX D. PROOFS

at which y(t; s0 ; 0m ) 6= 0N and y(s) 6= 0N for some s ∈ C. The equations (D.22), (D.23) show, equivalently the equation (D.24) shows, that for the unique determination of the nonzero initial state vector s0 from the nonzero output response it is necessary that the state s(t; s0 ; 0m ) and its Laplace transform are also uniquely determined. The system observability guarantees that the nonzero initial state s0 is uniquely determined from the nonzero output response, hence the state s(t; s0 ; 0m ) and its Laplace transform are uniquely determined from the nonzero output response. This implies the full rank of the matrix   sIn − A ∈ C(n+N )×n C in the equation (D.24), which is n because the number n of its columns is less than the number n + N of its rows:   sIn − A = n, ∀s ∈ C(n+N )×n . rank C This proves the necessity of the condition 5. Sufficiency. Let all the conditions of the theorem statement be valid. 1. Let s0 ∈ Rn be arbitrarily chosen. The multiplication of the equation (8.13) or the equation (D.7) by ΦT (t) C T on the left transforms them into ΦT (t) C T y(t; s0 ; 0m ) = ΦT (t) C T CΦ (t) s0 . The integral of this equation on the time interval [0, t] on which the observability Gram matrix GOB (t) (8.15) is nonsingular due to the condition 1., gives: Zt

T

Zt

T

Φ (τ ) C y(τ ; s0 ; 0m )dτ = 0

ΦT (τ ) C T CΦ (τ ) dτ s0 =

0

= GOB (t) s0 . The nonsingularity of GOB (t) permits to multiply the preceding equation on the left by its inverse G−1 OB (t): s0 =

G−1 OB

Zt (t)

ΦT (τ ) C T y(τ ; s0 ; 0m )dτ, ∀s0 ∈ Rn .

0

Every initial state s0 ∈ Rn is uniquely determined by the system response y(t; s0 ; 0m ). The system (8.1), (8.2) is observable in view of Definition 130.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 265 — #276

i

D.3. PROOF OF THEOREM 121

i

265

2. The linear independence of all columns of CΦ (t) on [0,t] for some t ∈ InT0 ensures the linear independence of all rows of ΦT (t) C T on [0,t] for some t ∈ InT0 , which implies the nonsingularity of the observability Gram matrix GOB (t) due to Theorem 114. This satisfies the condition 1. The system (8.1), (8.2) is observable in view of Definition 130. 3. The application of the inverse Laplace transform to C(sIN − A)−1 preserves the linear independence of its columns to the columns of its original CΦ (t). The condition 3 guarantees the validity of the condition 2. The system is observable due to the condition 2. 4. In view of (D.10),  T T Hk = CAk = Ak C T ∈ Rn×N , i.e., = CAk ∈ RN ×n , ∀k = 0, 1, ..., n − 1, (D.27)  T . . . and (8.16) we conclude that H = H0 .. H1 .. ... .. Hn−1 ∈ RnN ×n is HkT

OOBS :   H= 

H0T H1T : T Hn−1





   =   

C CA CA2 ... CAn−1

    = OOBS ∈ RnN ×n .  

This permits us to set Equation (D.17) into the following form: OOBS s0 = G−1 α (t)

Zt Λ (τ ) y(τ )dτ.

(D.28)

0

Since the number nN of the rows of OOBS is not less than the number n of its columns then the condition (8.17) means that the matrix OOBS has the T full rank n. The matrix OOBS OOBS ∈ Rn×n is nonsingular. It forms the left inverse −1 T + T OOBS = OOBS OOBS OOBS ∈ Rn×nN of the matrix OOBS . It has the following property: + OOBS OOBS = I.

(D.29)

+ We premultiply (D.28) by OOBS , + + OOBS OOBS s0 = OOBS G−1 α (t)

Zt Λ (τ ) y(τ )dτ, 0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 266 — #277

i

266

i

APPENDIX D. PROOFS

and apply (D.29) to obtain:

s0 =

+ OOBS G−1 α (t)

Zt Λ (τ ) y(τ )dτ. 0

The initial state vector s0 is uniquely determined by the system response y(t) = y(t; s0 ; 0M ). The initial state vector s0 was arbitrarily chosen, (it can be the zero initial state vector s0 = 0n that is observable). The system (8.1), (8.2) is observable in view of Definition 130. 5. Let the equations (D.22), (D.23) be set into the following form:

or

(sIn − A) s(s) − In s0 = On

(D.30)

Cs(s) + ON,n s0 = y (s) ,

(D.31)



sIn − A C





s0 y (s)

s(s) =

 (D.32)

The first form of the proof of the sufficiency of the condition under 5. The condition under 5. of the theorem statement, i.e., (8.19), implies that the matrix OO (s) (8.18),   sIn − A ∈ C(n+N )×n , (D.33) OO (s) = C has the full rank n, rankOO (s) = n, ∀s ∈ C,

(D.34)

due to the general rule: rankOO (s) ≤ min (n, n + N ) = n. T (s) , The same holds for its transpose OO T OO (s) =



sIn − AT

CT



∈ Cn×(n+N ) ,

T rankOO (s) = n, ∀s ∈ C.

(D.35)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 267 — #278

i

D.3. PROOF OF THEOREM 121

i

267

This and (D.34) imply the nonsingularity of the n × n matrix T (s) O (s) , OO O     sIn − A T T T C OO (s) OO (s) = sIn − A = C    = sIn − AT (sIn − A) + C T C ∈ Cn×n ,  T det OOO (s) OO (s) 6= 0, ∀s ∈ C. + (s) of OO (s) reads The left inverse OOL + T OOL (s) = OOO (s) OO (s)

−1

T OOO (s) ,

(D.36)

so that: + (s) OO (s) = I, ∀s ∈ C, OOL

(D.37)

and + (s) = OOL  −1   sIn − AT sIn − AT (sIn − A) + C T C   = G11 (s) G12 (s) ,

CT



= (D.38)

where  −1  sIn − AT (sIn − A) + C T C sIn − AT ∈ Cn×n ,  −1 T  C ∈ Cn×N . G12 (s) = sIn − AT (sIn − A) + C T C

G11 (s) =



(D.39) (D.40)

+ (s) and use (D.37)-(D.40): We premultiply (D.32) by OOL       s0 s0 + s(s) = OOL (s) = G11 (s) G12 (s) . y (s) y (s)

This and (D.30) furnish (sI − A) s(s) = s0 = (sIn − A) G11 (s) s0 + (sIn − A) G12 (s) y (s) , or [I − (sI − A) G11 (s)] s0 = (sIn − A) G12 (s) y (s) , ∀s0 ∈∈ Cn .

(D.41)

The eigenvalues λi of the matrix (sIn − A) G11 (s) depend on s, λi = λi (s) , and are determined as solutions of det [λIn − (sIn − A) G11 (s)] = 0.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 268 — #279

i

268

i

APPENDIX D. PROOFS

Let s∗ be such that λi = λi (s∗ ) 6= 1, ∀i = 1, 2, .., n. Then, det [I − (s∗ I − A) G11 (s∗ )] 6= 0 so that (D.41) becomes s0 = [I − (s∗ I − A) G11 (s∗ )]−1 (s∗ I − A) G12 (s∗ ) y (s∗ ) . Every initial state s0 is uniquely determined by the Laplace transform y (s) of the system response at s = s∗ . The system is observable (Definition 130). The second form of the proof of the sufficiency of the condition under 5. In order to determine s(s) from (D.32), i.e., from     s0 sIn − A s(s) = OO (s) s(s) = (D.42) C y (s) we will transform the matrix OO (s) into the equivalent matrix by introducing two nonsingular matrices P (s) and Q (s) defined as follows: ∃P (s) ∈ C(n+N )×(n+N ) , detP (s) 6= 0, ∀s ∈ C,   P11 (s) P12 (s) , P (s) = P21 (s) P22 (s) P11 (s) ∈ Cn×n , detP11 (s) 6= 0, ∀s ∈ C, P12 (s) ∈ Cn×N , P21 (s) ∈ CN ×n , P22 (s) ∈ CN ×N , detP22 (s) 6= 0, ∀s ∈ C,

(D.43)

∃Q (s) ∈ Cn×n , detQ (s) 6= 0, ∀s ∈ C=⇒

(D.44)

and

 P (s)

sI − A C



 Q (s) =

D (s) ON,n

 ,

D (s) ∈ Cn×n , detD (s) 6= 0, ∀s ∈ C. This and (D.42) imply     sI − A s0 −1 P (s) Q (s) Q (s) s(s) = P (s) =⇒ C y (s) 

D (s) ON,n



Q−1 (s) s(s) = P (s)



s0 y (s)

 =⇒

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 269 — #280

i

D.3. PROOF OF THEOREM 121 

. D (s) .. On,N T

i

269



 D (s) Q−1 (s) s(s) = DT (s) D (s) Q−1 (s) s(s) = ON,n     .. s0 T =⇒ = D (s) . On,N P (s) y (s)

    .. s0 T =⇒ s(s) = Q (s) D (s) D (s) D (s) . On,N P (s) y (s)   .. −1 s0 = (sI − A) s(s) = (sI − A) Q (s) D (s) I . On,N ·    s0 P11 (s) P12 (s) =⇒ · P21 (s) P22 (s) y (s) −1

T

−1

  .. s0 = (sI − A) Q (s) D (s) I . On,N ·   P11 (s) s0 + P12 (s) y (s) =⇒ · P21 (s) s0 + P22 (s) y (s) −1

s0 = = (sI − A) Q (s) D

−1

(s) P11 (s) s0 + (sI − A) Q (s) D−1 (s) P12 (s) y (s)

  I − (sI − A) Q (s) D−1 (s) P11 (s) s0 = = (sI − A) Q (s) D−1 (s) P12 (s) y (s) .

(D.45)

The eigenvalues λk of the matrix (sI − A) Q (s) D−1 (s) P11 (s) depend on s, λk = λk (s) . They are determined as solutions of   det λI − (sI − A) Q (s) D−1 (s) P11 (s) = 0. Let s∗ be such that λk = λk (s∗ ) 6= 1, ∀i = 1, 2, .., n. Then,   det I − (s∗ I − A) Q (s∗ ) D−1 (s∗ ) P11 (s∗ ) 6= 0 so that (D.45) becomes  −1 s0 = I − (s∗ I − A) Q (s∗ ) D−1 (s∗ ) P11 (s∗ ) · · (s∗ I − A) Q (s∗ ) D−1 (s∗ ) P12 (s∗ ) y (s∗ ) . Every initial state s0 is uniquely determined by the Laplace transform y (s) of the system response at s = s∗ . The system is observable (Definition 130). This completes the proof.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 270 — #281

i

270

i

APPENDIX D. PROOFS

D.4

Proof of Theorem 128

Proof. This is the proof of Theorem 138, (Section 8.4). This theorem can be proved by using the Lyapunov method or by following the Bellman application of the matrix infinitesimal calculus. Proof by using the Lyapunov method Necessity. Let the matrix A be stable, equivalently, let the zero equilibrium state se = 0n of the system (8.30), (8.31), (Section 8.1), be (globally) asymptotically stable. Then the zero equilibrium state se = 0n is unique. Let a function v (.) : Rn −→ Rn be the quadratic form of the symmetric matrix H, H = H T , v (s) = sT Hs, (D.46) where the matrix is the matrix solution of Equation (8.35). Let q (s) = −sT C T Cs.Observability of the system guarantees (the condition 2. of Theorem 131 and Lemma 232 in Appendix D.3) that along the system motions s (t; s0 ) ,  ∀t ∈ InT0 , y(σ; s0 ; 0m ) = CeAσ s0 = 0N , ∀σ ∈ [0, t] ⇐⇒ s0 = 0n . Let us assume that for any s0 6= 0n there is θ ∈ InT0 such that CeAθ s0 = 0N . Let θ be the first moment that obeys CeAθ s0 = 0N . This means that CeAθ s0 is in the hyperplane Cs = 0N relative to the system motions. Let us assume that the hyperplane Cs = 0N is positively invariant relative to the system motions, i.e., that s (t; s0 ) ∈ {s :Cs = 0N } , ∀ (t ≥ θ) ∈ InT0 . Then, for sθ = s (θ; s0 ) =eAθ s0 : CeAt s0 = 0N =⇒ CeA(t−θ) eAθ s0 = CeA(t−θ) st = 0N , ∀ (t < θ) ∈ InT0 . This and Equation (D.20), (Appendix D.3), imply sθ = s (θ; s0 ) =eAθ s0 = 0N . This holds if and only if s0 = 0N due to the nonsingularity of eAθ . However, the obtained result s0 = 0N contradicts the selected s0 6= 0N . The contradiction is the consequence of the assumption on the positive invariance of the hyperplane Cs = 0N relative to the system motions. The assumption fails, i.e., the hyperplane Cs = 0N is not positive invariant relative to the system motions, which guarantees that q [s (t; s0 )] vanishes for all t ∈ T0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 271 — #282

i

D.4. PROOF OF THEOREM 128

i

271

only at the zero equilibrium state se = 0n , that it is negative out of the zero equilibrium state se = 0n for almost all t ∈ InT0 and that it is never positive. Since the total derivative of the function v (.) along the motion s (t; s0 ) = Φ (t) s0 = eAt s0 of the system (8.30), (8.31) together with Equation (8.35) has the following form :  d v (s) = sT AT H + HA s = −sT C T Cs τ ) ∈ T0 for which Cs (t1 ; s0 ) 6= 0n .and −sT (t1 ; s0 ) C T Cs (t1 ; s0 ) 0 for every t ∈ T0 , ∀ (s0 6= 0n ) ∈ Rn so that v (s0 ) > 0 for every (s0 6= 0n ) ∈ Rn .This, continuity of v (s) on Rn and v (0n ) = 0 prove that the function v (.) (D.46) is positive definite that implies positive definiteness of the symmetric matrix H due to Equation (D.46). It obeys the relaxed Lyapunov matrix equation (8.35) by its above definition. The proof of the necessity is complete. Sufficiency. Let the symmetric positive definite matrix H, H = H T , be the solution of (8.35). Its quadratic form (D.46) is positive definite. From (D.47) follows that the v [s (t; s0 )] is almost strictly monotonously decreasing for every s (t; s0 ) 6= 0n , that it is never increasing and that the hyperplane Cs = 0N is not positive invariant relative to the system motions. For every s0 6= 0n , v [s (t; s0 )] −→ 0 as t −→ ∞ that further implies s (t; s0 ) −→ 0n as t −→ ∞ for every s0 6= 0n . This proves the global both attraction and, due to the system linearity and time invariance, asymptotic stability of the zero equilibrium state se = 0n of the system (8.30), (8.31), i.e., that the matrix A is stable. This completes the proof by using the Lyapunov method. Proof by applying the matrix infinitesimal calculus Necessity. Let the matrix A be stable. Let, by following Bellman, X = T X : T −→ Rn×n be the symmetric matrix solution of the following linear

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 272 — #283

i

272

i

APPENDIX D. PROOFS

time-invariant matrix differential equation: dX = AT X + XA. dt

(D.48)

It is easy to verify that T

X (t; X0 ) = eA t X0 eAt

(D.49)

is the unique and symmetric matrix solution of Equation (D.48), which obeys the initial condition: T

X (0; X0 ) = eA 0 X0 eA0 = IX0 I = X0 , ∀X0 ∈ Rn×n . Let X0 = C T C so that the solution of Equation (D.48) becomes T

X (t; X0 ) = e0A t C T CeAt = CeAt Let

Z H=



CeAt

T

T

CeAt .

(D.50)

CeAt dt = H T ∈ Rn×n .

(D.51)

0

The observability of the pair (A,C) guarantees the linear independence of the rows of CeAt on T0 (condition 2. of Theorem 131) and the existence of T τ ∈ T0 such that CeAτ CeAτ is positive definite matrix. The Gram matrix H (D.51) of CeAt is nonsingular (Theorem 114, Section 7.4) and positive definite. The integration of Equation (D.48), together with X0 = C T C, Equation D.50 and Equation D.51, leads to X (∞; X0 ) − C T C = AT H + HA.

(D.52)

The stability of the matrix A guarantees X (∞; X0 ) = On that simplifies the preceding equation to: −C T C = AT H + HA. This is the Lyapunov matrix equation (8.35). The positive definite matrix H (D.51) is the unique solution of the Lyapunov matrix equation (8.35). Sufficiency. Let the positive definite symmetric matrix H be defined by (D.51). It satisfies Equation D.52 for X0 = C T C. It is also the unique solution of the Lyapunov matrix equation (8.35) due to the condition of the theorem. Equation D.52 and Equation (8.35) furnish X (∞; X0 ) = On that implies stability of the matrix A.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 273 — #284

i

D.5. PROOF OF THEOREM 130

D.5

i

273

Proof of Theorem 130

Proof. Since the system observability is independent of the input vector Iµ (t) we consider the system (2.15), equivalently, its compact form (9.4), in the free regime characterized by Iµ (t) = 0(µ+1)M , ∀t ∈ T0 .

(D.53)

The IO system (2.15) can be transformed (Theorem 49 in Section 4.1) into the EISO form (4.1), (4.2) written herein in terms of the total coordinates for the free regime, i.e., by applying (D.53): dS(t) = AS(t), A ∈ Rn×n , ∀t ∈ T0 , n = νN, dt Y(t) = CS(t), ∀t ∈ T0 .

(D.54) (D.55)

or in terms of the deviations of all variables: ds(t) = As(t), ∀t ∈ T0 , n = νN, dt y(t) = Cs(t), ∀t ∈ T0 .

(D.56) (D.57)

where the two cases should be distinguished: Case 1 : ν > 1, and Case 2: ν = 1. In both cases, Case 1 and Case 2, we refer to Equations (4.4) through (4.11) (Section 4.1) repeated as ν ≥ 1, µ ≥ 1, n = νN,       

(D.58)

ν > 1 =⇒ Si = Y(i−1) ,    .. T .. .. T T T S = S1 . S2 . ... . Sn = YT

 ∀i = 1, 2, ..., ν, i.e.,   T  .. (1)T .. .. (ν−1)T .Y . ... . Y =⇒    ν−1 n S=Y ∈R

ν = 1 =⇒ S = S1 = Y,

(D.59)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 274 — #285

i

274

i

APPENDIX D. PROOFS

       

ν > 1 =⇒ A = ON −IN ON ON ON ON −IN ON ON ON ON −IN ... ... ... ... ON ON ON ON −1 A −1 A −1 A −A−1 A −A −A −A 0 1 2 3 ν ν ν ν −1 ν = 1 =⇒ A = −A1 A0 ,

... ON ... ON ... ON ... ... ... −IN ... −A−1 ν Aν−1

C = [IN ON ON ON ... ON ] ∈ RN ×n , Q = ON,M ∈ RN ×M .

     ,   

(D.60) (D.61)

The preceding equations (D.58) through (D.61) show that the (n+N )×n complex matrix OO (s) (8.18) (Section 8.3),   sIn − A ∈ R(n+N )×n , (D.62) OO (s) = C takes the following form for the (2.15), equivalently, for its compact form (2.57) in terms of deviations:   sIn − A = ν > 1 =⇒ OO (s) = C   sIN −IN ON ... ON  ON  sIN −IN ... ON    ON  O sI ... O N N N     ∈ C(n+N )×n , n = νN, ... ... ... ... ... =   ON  ON ON ... −IN    A−1 A0 A−1 A1 A−1 A3 ... sIN + A−1 Aν−1  ν

ν

ν

IN

ON

ON

ν

...

ON   sIN + A−1 A0 1 ν = 1 =⇒ n = N =⇒ OO (s) = . C

Case 1 : ν > 1. The rank of OO (s) is  sIN  ON     ON sIn − A rank = rank   ... C   ON IN

n due to the following: −IN sIN ON ... ON ON

ON −IN sIN ... ON ON

ON ON −IN ... ON ON

... ON ... ON ... ON ... ... ... −IN ... ON

    =   

∀ (s, Ak ) ∈ C × RN ×N , ∀k = 1, 2, ..., ν,

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 275 — #286

i

D.5. PROOF OF THEOREM 130    = N + rank   

−IN sIN ON ... ON

ON −IN sIN ... ON

ON ON −IN ... ON

i

275 ... ON ... ON ... ON ... ... ... −IN

    = (ν − 1) N = νN = n,  

∀ (s, Ak ) ∈ C × RN ×N , ∀k = 1, 2, ..., ν. The rank of OO (s) is full n and invariant, i.e., independent of the system matrices Ai, ∀i = 1, 2, ..., ν, and Bki, ∀k = 1, 2, ..., µ, i.e., independent of the system matrices A(ν) and B (µ) . Since the rank of the matrix OO (s) (D.62) is full and invariant,   sIn − A = n, ∀ (s, Ak ) ∈ C×RN ×N , ∀k = 1, 2, ..., ν, rankOO (s) = rank C then it follows from the condition 5. of Theorem 131 (Section 8.3) that the system (2.15), equivalently, its compact forms (9.2), (9.4), is observable invariably relative to its matrices Ai, ∀i = 0, 1, 2, ..., ν, and Bki, ∀k = 1, 2, ..., µ, i.e., independent of the system matrices A(ν) and B (µ) in Case 1 : ν > 1. Case 2 : ν = 1. The matrix OO (s) has the following form:   sIN + A−1 A0 1 ν = 1 =⇒ n = N =⇒ OO (s) = , ∀ (s, Ak ) ∈ C × RN ×N , IN   sIN + A−1 1 A0 = rankIN = N = n, rankOO (s) = rank IN ∀ (s, Ak ) ∈ C × RN ×N , ∀k = 0, 1. It follows from the condition 5. of Theorem 131 that the system (2.15), equivalently (2.57), is observable invariably relative to its matrices Ai, ∀i = 0, 1 = ν, and Bki, ∀k = 1, 2, ..., µ, i.e., independent of the system matrices A(1) and B (µ) in Case 2 : ν = 1.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 276 — #287

i

276

i

APPENDIX D. PROOFS

D.6

Proof of Theorem 141

Proof. The extended form of Equation (5.3), (Section 5.1), reads: S1 = R, s1 = r, (1)

(1)

S2 = R(1) = S1 , s2 = r(1) = s1 , ..... (1)

(1)

Sk = R(k−1) = Sk−1 , sk = r(k−1) = sk−1 , ..... (α−1)

Sα = R

=

(1) Sα−1 ,

(1)

sα = r(α−1) = sα−1 ,

and S(1) α

(α)

=R

(α) , s(1) . α =r

The first α equations read also S1 = R, s1 = r, and (1)

(1)

S1 = S2 , s1 = s2 , ..... (1)

(1)

Sk−1 = Sk , sk−1 = sk , ..... (1)

(1)

Sα−1 = Sα , sα−1 = sα .

(D.63)

We link these equations with the HISO system equations (5.1), (5.2) in (1) (1) order to determine Sα = R(α) and sα = r(α) : Aα S(1) α (t)

+

k=α−1 X k=0

Ak Sk+1 (t) =

k=µ X

Hk I(k) (t), ∀t ∈ T0 .

(D.64)

k=0

The compact vector-matrix form of the united equations (D.63) and (D.64) multiplied on the left by A−1 α (which is nonsingular due to (5.1)) reads:   .. T T dS (t) (µ) µ T .. T .. = AS (t) + P I (t) , S = S1 . S2 . ... .Sα ∈ Rn , dt

(D.65)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 277 — #288

i

D.7. PROOF OF THEOREM 147

i

277

where the matrices A, P (µ) , H (µ) and C are defined by Equations (9.28)(11.146), (Section 9.4), respectively. The output vector equation (5.2) gets the following form by recalling Ryα = ON,ρ : Y(t) = CS + QI(t), ∀t ∈ T0 .

(D.66)

Analogously, the HISO system (5.44), (5.45) becomes:   .. T T ds (t) (µ) µ T .. T .. = As (t) + P i (t) , ∀t ∈ T0 , s = s1 . s2 . ... .sα ∈ Rn , dt (D.67) y(t) = Cs + Qi(t), ∀t ∈ T0 .

(D.68)

Equations (D.65), (D.66) determine the (8.1), (8.2) with the matrices defined by Equations (9.28)-(9.30), (11.146). This implies the validity of Theorem 131 for the HISO system (5.1), (5.2), and for the HISO system (5.44), (5.45). Theorem 151 is the HISO form of Theorem 131.

D.7

Proof of Theorem 147

Proof. Appendix C.2 shows how the IIO system (6.1), (6.2) in the total coordinates, or in terms of the deviations of all variables (6.65), (6.66), can be transformed into the following ISO forms, respectively: dS(t) = AS(t) + P (µ) Iµ (t), dt Y(t) = CS(t), ds(t) = As(t), dt y(t) = Cs(t),

(D.69) (D.70) (D.71) (D.72)

with the matrices A, C, and V determined by (C.24)-(C.31)(Appendix C.2). The system (D.74), (D.75) is the system (8.1) and (8.2) (Section 8.1), and the system (D.76), (D.77) is the system the IIO system (6.65), (6.66) in the form of the system (8.11) and (8.12) (Section 8.2). Since Theorem 131 holds for the system (8.1) and (8.2), and for the system (8.11) and (8.12), i.e., for the IIO system (6.65), (6.66), then it is valid also for both the IIO system (D.74), (D.75) and the IIO system (D.76), (D.77). Furthermore, Condition (9.20) (Theorem 131), i.e.,   sIn − A OO (s) = ∈ R(n+N )×n , n = αρ + νN. (D.73) C

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 278 — #289

i

278

i

APPENDIX D. PROOFS

reads

  OO (s) = 



A Oαρ,νN sIαρ Oαρ,νN − 11 A21 A22 OνN,αρ sIνN .. O I .O N,αρ

N

N,(ν−1)N

due to Equations (C.24)-(C.28) (Appendix C.2), i.e.,  sIαρ − A11 Oαρ,νN  −A sI 21 νN − A22 OO (s) =  . O I .. O N,αρ

N

 

  .

N,(ν−1)N

This, Equations (C.26), (C.27) (Appendix C.2) determine that the rank of the matrix rankOO (s) is the rank of the following matrix:   sIαρ − A11 Oαρ,νN   ON,ρ . ON,ρ sIN . ON     : : : : : :    . ON,ρ . ON,ρ ON . −IN    −E −1 Ry0 . −E −1 Ry,α−1 sIN + Eν−1 E0 . sIN + Eν−1 Eν−1  ν ν   .. O I . O N,αρ

N

N,(ν−1)N

The application of the elementary matrix transformation shows that the rank of the matrix rankOO (s) is equal to the rank of the following matrix:   sIαρ − A11 Oαρ,νN   sIN −IN .. ON ON     : : : : :   O(ν−1)N,αρ   O : .. −I O N N N  ,   O : .. sI −I N N N     ..   O I . O N,αρ

N

−Eν−1 Ry0 . −Eν−1 Ry,α−1

N,(ν−1)N

sIN + Eν−1 E0 . sIN + Eν−1 Eν−1

which yields rankOO (s) = 

sIαρ − A11

    rank  O(ν−1)N,αρ    ON,αρ

sIN : ON ON . IN ..

Oαρ,νN −IN ... ON : : : : ... −IN : ... sIN ON,(ν−2)N

 ON : ON −IN .. . ON

     =⇒   

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 279 — #290

i

D.8. PROOF OF THEOREM 148

i

279

rankOO (s) =     = rank    

sIαρ − A11

OνN,αρ

sIN ON ... ON IN

Oαρ,νN −IN ... ON sIN ... ON ... ... ... ON ... sIN ON ... ON

 ON ON ... −IN ON

   =   

rankOO (s) = 

sIαρ − A11

  = N + rank   O(ν−1)N,αρ 

Oαρ,(ν−1)N −IN ... ON sIN ... ON ... ... ... ON ... sIN

 ON ON ... −IN

   =⇒  

rankOO (s) = νN + rank (sIαρ − A11 ) . From this result follows that for the matrix OO (s) (D.73), i.e., (9.20), to have the full rank n = αρ + νN it is necessary and sufficient that the following holds: rank (sIαρ − A11 ) = αρ, ∀s ∈ C, which is violated for every eigenvalue of the matrix A11 . The system violates Condition (9.20) of Theorem 131. The system is not full state observable.

D.8

Proof of Theorem 148

Proof. This is the proof of Theorem 158, (Section 9.5). Appendix C.2 shows how the internal dynamics parts of the IIO system (6.1), (6.2) in total coordinates, or in terms of the deviations of all variables (6.65), (6.66), together with the ouput equation, can be transformed into the following ISO forms, respectively: dSIIOI (t) = A11 SIIOI (t) + W1 W(t), dt Y(t) = ON,αρ SIIOI (t),

(D.74) (D.75)

and in the free regime in terms of deviations: dsIIOI (t) = A11 sIIOI (t), dt y(t) = ON,αρ sIIOI (t),

(D.76) (D.77)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 280 — #291

i

280

i

APPENDIX D. PROOFS

with the matrices A11 , C and W1 determined by (C.25), (C.28), (C.29), and with W(t) determined in Equation (C.32) (Appendix C.2). Condition (9.20) (Theorem 131) for the internal state observability demands that the matrix OOSIIOI (s) ,   sIαρ − A11 OOSIIOI (s) = ON,αρ has the full rank αρ for every s ∈ C. However, its rank is defective for every eigenvalue si (A11 ) of the matrix A11 ,   si (A11 ) Iαρ − A11 rankOOSIIOI (s) = < αρ, ON,αρ ∀si (A11 ) ∈ C, ∀i = 1, 2, ..., αρ. The system violates Condition (9.20) of Theorem 131 applied to the system internal state observability. The system is not internal state observable.

D.9

Proof of Theorem 149

Proof. Let the initial input state deviation sI (0) be arbitrarily chosen, fixed and known. Equations (C.22)-(C.34) (Appendix C.2) in the free regime (i.e., for iµ (t) ≡ 0(µ+1)M ) simplify to: "

(1)

sI (t) (1) sO (t)

#

  A11 Oαρ,νN sI (t) , = A21 A22 sO (t)   sI (t) . y(t) = C sO (t) 

(D.78) (D.79)

The Laplace transforms of these equations have the following forms in view of (C.28) (Appendix C.2):      ssI (s) − sI (0) A11 Oαρ,νN sI (s) = =⇒ A21 A22 sO (s) ssO (s) − sO (0) 

   (sIαρ − A11 ) sI (s) sI (0) = , A21 sI (s) + (sIνN − A22 ) sO (s) sO (0)      .. .. sI (s) y(s) = CI . CO , CI = ON,αρ , CO = IN . ON,(ν−1)N . sO (s)

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 281 — #292

i

D.9. PROOF OF THEOREM 149 These equations imply  ..  sIνN − A22 . − IνN . .. I ..O .O N

281







N,νN

N,(ν−1)N

i

sO (s) sO (0)



 =

−A21 sI (s) y(s)

 .

The matrix  

sIνN − A22 . I ..O N

.. . − IνN .. .O

N,(ν−1)N

  ∈ C (ν+1)N ×(ν+1)N .

N,νN

In order to test its rank we proceed as follows:   sIN IN .. ON ON   ON sIN .. ON ON     . : : : : : .. − I   νN   O O .. sI I N N N N rank  =   sIN + −1 −1 −1   E E E E .. E E 0 1 ν−2 ν ν ν   +Eν−1 Eν−1   .. .. IN . ON,(ν−1)N .ON,νN   sIN .. ON −I O .. O O N N N N   ON .. ON   O −I .. O O N N N N   . : : : .   : : : : : .   ON .. IN rank  = ON ON .. −IN IN    sIN +  E −1 E0 ..  ON ON .. ON ON  ν  +Eν−1 Eν−1   .. .. IN . ON,(ν−1)N . ON,νN   sIN −IN ... ON ON  ON ON ... ON ON     : : : : :  = rank  =  Eν−1 E0 ON ... ON −IN  IN ON ... ON ON   −IN ON ... ON  ON −IN ... ON   = N + νN = (ν + 1) N.. = N + rank   : : : :  ON ON ... −IN The matrix

 

sIνN − A22 . I ..O N

.. . − IνN .. .O

N,(ν−1)N

 

N,νN

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 282 — #293

i

282

i

APPENDIX D. PROOFS

has the full rank (ν + 1) N for every s ∈ C independently of the system matrices. Equation       ..  sIνN − A22 . − IνN  sO (s) = −A21 sI (s) . . .. sO (0) y(s) I ..O .O N

N,(ν−1)N

N,νN

has the unique solution sO (0) in terms of the vector     −A21 sI (s) −A21 (sIαρ − A11 )−1 sI (0), = y(s) y(s) for any chosen, fixed and known sI (0) so that sO (0) is well determined by the system response y(t; sI (0); sO (0)). The initial output state sO (0) is observable. The system is output state observable in view of Definition 156. The IIO system (6.1), (6.2), equivalently the IIO system (6.65), (6.66), is invariably output state observable, i.e., it is output state observable independently of the system matrices.

D.10

Proof of Theorem 161

Theorem 233 [18, Theorem 5-3, p. 174] Assume that for each i, ri is analytic function (Definition 231, Appendix D.3) on [t1 , t2 ]. Let R (.) (7.45) be the ν × µ matrix with ri as its i-th row, and let R(k) (t) be the k-th derivative of R (t). Let t0 be any fixed point in [t1 , t2 ]. Then the ri ‘s are linearly independent on [t1 , t2 ] if and only if   .. (1) .. (2) .. .. (ν−1) .. rank R (t0 ) . R (t0 ) . R (t0 ) . ... . R (t0 ) . ... = ν (D.80) Proof. This is the proof of Theorem 171. The proof is a slight modification and extension of the proof by Chen [18, Theorem 5-4, pp. 177, 178] generalized by adjusting it to the system (10.3), (10.4). Proof of the necessity of the conditions 1.-6. and the proof of the sufficiency of the conditions 1.-5. Let the system (10.3), (10.4) be state controllable. Definition 161 holds. We treat simultaneously the case a) µ = 0 and the case b) µ > 0 in the proof of the necessity. 1. The fundamental matrix Φ (t1 , τ ) = eA(t1 −τ ) and the matrix Φ (t0 , τ ) = eA(t0 −τ ) = e−A(τ −t0 ) are integrable, which ensures the existence

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 283 — #294

i

D.10. PROOF OF THEOREM 161

i

283

of the Gram matrix GΦV (t1 , t0 ) of Φ (t1 , t)B(µ) and of the Gram matrix GΦV (t0 , t1 ) of Φ (t0 , t)B(µ) . Let be assumed that the rows of Φ (t1 , t)B(µ) are linearly dependent on [t0 , t1 ] , any (t0 , t1 > t0 ) ∈ InT0 × InT0 . This guarantees the existence of a nonzero vector a ∈R1×n such that (statement under 1) of Lemma 100, Section 7.4): aΦ (t1 , t) B(µ) = 0Tr on [t0 , t1 ] , any (t0 , t1 > t0 ) ∈ InT0 × InT0 . (D.81) Let, for an accepted (t0 , t1 > t0 ) ∈ InT0 × InT0 , S(t0 ) be chosen so that S(t0 ) = S0 = Φ (t0 , t1 ) aT = Φ−! (t1 , t0 ) aT 6= 0n . The solution of the state equation (10.3) is given by Equations (10.5). We apply now the first Equation (10.5) for t = t1 : 

(µ) µ

S t1 ; t0 ; S0 ; B



Zt1 = Φ (t1 , t0 ) S0 + Φ (t1 , τ ) B(µ) Uµ (τ ) dτ

(D.82)

t0

For the chosen S0 = Φ (t0 , t1 ) aT = Φ−! (t1 , t0 ) aT the system motion becomes   µ S t1 ; t0 ; Φ (t0 , t1 ) aT ; B(µ) = −!

+Φ (t1 , t0 ) Φ

Zt1 (t1 , t0 ) a + Φ (t1 , τ ) B(µ) Uµ (τ ) dτ. T

t0

Definition 161 permits an arbitrary choice of the final state vector S (t1 ) ,  S t1 ; t0 ; Φ (t0 , t1 ) aT ; Uµ = S (t1 ) = S1 , so that it can be at the origin: S (t1 ) = 0n , which we accept. For this choice of S (t1 ) follows T

S t1 ; t0 ; Φ (t0 , t1 ) a ; U

µ



Zt1 = 0n = a + Φ (t1 , τ ) B(µ) Uµ (τ ) dτ, T

t0

or, by multiplying this equation on the left by a : Zt1 a0n = 0 = aa + aΦ (t1 , τ ) B(µ) Uµ (τ ) dτ. T

(D.83)

t0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 284 — #295

i

284

i

APPENDIX D. PROOFS

Equation (D.81) reduces Equation (D.83) to: 0 = aaT = kak . This implies a = 0Tn and contradicts the choice of a 6= 0Tn . The contradiction is the consequence of the assumption that the rows of Φ (t1 , t)B(µ) are linearly dependent on [t0 , t1 ] for any accepted (t0 , t1 > t0 ) ∈ InT0 × InT0 . This proves the linear independence of the rows of the matrix function Φ (t1 , .) on ]t0 , t1 [ for any (t0 , t1 > t0 ) ∈ InT0 × InT0 . Let us assume now that the rows of Φ (t0 , t)B(µ) are linearly dependent on [t0 , t1 ] for any (t0 , t1 > t0 ) ∈ InT0 × InT0 . This ensures the existence of a nonzero vector b ∈R1×n such that the following holds: bΦ (t0 , t) B(µ) = 0Tr , ∀t ∈ [t0 , t1 ] , (t0 , t1 > t0 ) ∈ InT0 × InT0 .

(D.84)

Let S0 be chosen so that S(t0 ) = S0 = bT 6= 0n . The solution of the state equation (10.3) is given by Equations (10.5). We apply now the second Equation (10.5):   Zt S (t; t0 ; S0 ; Uµ ) = Φ (t, t0 ) S0 + Φ (t0 , τ ) B(µ) Uµ (τ ) dτ  . (D.85) t0

For the chosen S0 = bT the system motion reads:   Zt  S t; t0 ; bT ; Uµ = Φ (t, t0 ) bT + Φ (t0 , τ ) B(µ) Uµ (τ ) dτ  , t0

so that at the moment t = t1 it becomes T

Φ (t0 , t1 ) S t1 ; t0 ; b ; U

µ



Zt1 = b + Φ (t0 , τ ) B(µ) Uµ (τ ) dτ, T

t0

Definition 161 of the state controllability permits arbitrary choice of the  final state vector S t1 ; t0 ; bT ; Uµ = S (t1 ) so that it can be at the origin: S (t1 ) = 0n , which we again accept. For this choice of S (t1 ) follows: T

µ

Φ (t0 , t1 ) S t1 ; t0 ; b ; U



Zt1 = 0n = b + Φ (t0 , τ ) B(µ) Uµ (τ ) dτ, T

t0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 285 — #296

i

D.10. PROOF OF THEOREM 161

i

285

or, by multiplying this equation on the left by b : Zt1 0 = bbT + bΦ (t0 , τ ) B(µ) Uµ (τ ) dτ. t0

Equation (D.84) simplifies the preceding equation to: 0 = bbT , which implies b = 0Tn . This is in the contradiction with the choice of b 6= 0Tn . The contradiction is the consequence of the assumption that the rows of Φ (t0 , t)B(µ) are linearly dependent on [t0 , t1 ] for any (t0 , t1 > t0 ) ∈ InT0 × InT0 . This proves that the rows of the n × (µ + 1) r matrix Φ (t0 , t)B(µ) are linearly independent on [t0 , t1 ] for any (t0 , t1 > t0 ) ∈ InT0 × InT0 and completes the proof of the necessity of 1. 2. Since the system is time-invariant it is controllable if and only if it is controllable at any t0 ∈ T. This permits t0 = 0 so that Φ (t, 0)B(µ) = eAt B(µ) = Φ (t)B(µ) . Its Laplace transform is (sIn − A)−1 B(µ) . The Laplace transform is linear one-to-one operator so that for the rows of eAt B(µ) to be linearly independent it is necessary and sufficient that the rows of (sIn − A)−1 B(µ) are linearly independent. This proves the equivalence between the criteria under 1. and 2 and the necessity of the condition 2. 3. Theorem 114 (Section 7.4) applied to the Gram matrix GΦB (t0 , t1 ) (10.7) of Φ (t0 , t)B(µ) proves that for GΦB (t0 , t1 ) to be nonsingular for any (t0 , t1 > t0 ) ∈ InT0 × InT0 ; i.e., for rankGΦB (t0 , t1 ) = n for any instants (t0 , t1 > t0 ) ∈ InT0 × InT0 , it is necessary and sufficient that the rows of Φ (t0 , t)B(µ) are linearly independent on [t0 , t1 ] for any (t0 , t1 > t0 ) ∈ InT0 × InT0 . This proves the equivalence between the conditions 1. and 3. hence between 2. and 3., and proves the necessity of the condition 3. 4. The entries of eAt B(µ) are analytic functions. Theorem 233 implies that for the rows of eAt B(µ) to be linearly independent on T0 it is necessary and sufficient that (D.80) holds, i.e.   .. n−1 (µ) .. (µ) .. (µ) .. 2 (µ) .. rank B . AB . A B . ... . A B . ... = n, (D.86) due to the fact that the k-th derivative of eAt0 B(µ) at t = t0 = 0 is Ak B(µ) . The Cayley-Hamilton theorem ensures that Am for m ≥ n can be expressed as a linear combination of IN , A, ..., An−1 (for more details see [2, Theorem 2.1, p. 124], [76, 4.5.25 Theorem, p. 167]). This implies that the columns of

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 286 — #297

i

286

i

APPENDIX D. PROOFS

Am B(µ) for m ≥ n are linearly dependent of the columns of B(µ) , AB(µ) , A2 B(µ) , ..., An−1 B(µ) , which yields:   .. n−1 (µ) .. (µ) .. (µ) .. 2 (µ) .. rank B . AB . A B . ... . A B . ... =   .. n−1 (µ) (µ) .. (µ) .. 2 (µ) .. . = rank B . AB . A B . ... . A B It follows that for all rows of eAt B(µ) to be linearly independent on T0 it is necessary and sufficient that, as shown above (Theorem 233), the condition under 4. holds. This, together with the equivalence among the conditions 1.-3., proves the equivalence among the conditions 1.-4. and the necessity of the condition 4. 5. Let at first the system (10.3), (10.4) be assumed controllable. What follows is the generalization of the original necessity proof [92] by Hautus [48] accommodated to the system (10.3), (10.4) and to the notation of this book:     .. (µ) rank sIn − A . B < n =⇒ ∃h 6= 0Tn =⇒ hA = sh, hB(µ) = 0T(µ+1)r =⇒   .. n−1 (µ) T (µ) .. (µ) .. 2 (µ) .. =⇒ ∃h 6= 0n =⇒ h B . AB . A B . .... A B = 0Tn(µ+1)r =⇒   .. n−1 (µ) (µ) .. (µ) .. 2 (µ) .. =⇒ rank B . AB . A B . .... A B < n =⇒   =⇒ A, B(µ) not controllable. The result contradicts the assume system controllability. The contradiction is the consequence of the assumption that the condition 5., i.e., (10.11) fails. The the condition 5., i.e., (10.11), is necessary for the system controllability. What follows is the slight generalization of the necessity proof by Chen [18, p. 206] to hold for the system (10.3), (10.4). Despite the system (10.3), (10.4) state controllability let be assumed that   .. (µ) rank sIn − A . B < n, f or some si = si (A) ∈ C. (D.87) This implies the existence of a nonzero row vector d ∈ C1×n such that (statement under 3) of Lemma 100, Section 7.4):   .. (µ) d s i In − A . B = 0Tn+(µ+1)r ,

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 287 — #298

i

D.10. PROOF OF THEOREM 161

i

287

or dsi = si d = dA and dB(µ) = 0T(µ+1)r , which imply dA2 = si dA = s2i d, ...dAk = ski d, k = 1, 2, ... Hence, 

 .. .. n−1 (µ) (µ) .. 2 (µ) .. d B . AB . A B . .... A B =   .. n−1 (µ) (µ) .. (µ) .. 2 (µ) .. = dB . si dB . si dB . .... si dB =   . . . . = 0T(µ+1)r .. si 0T(µ+1)r .. s2i 0T(µ+1)r .. ..... sn−1 0T(µ+1)r = .0Tn(µ+1)r . i (µ)

This means that   .. n−1 (µ) (µ) .. (µ) .. 2 (µ) .. rank B . AB . A B . .... A B < n, which implies the state noncontrollability of the system (10.3), (10.4) due to the condition under 4. This contradicts the assumed system state controllability, which is the consequence of the assumption that (D.87) holds. Therefore,   .. (µ) rank sI − A . B = n, ∀si = si (A) ∈ C, i = 1, 2, ..., n, i.e., ∀s ∈ C. This proves that the condition 4. implies the condition 5. and proves the necessity of the condition under 5. In order to complete the proof of the equivalence of the condition 5. with the conditions 1.-4. for the sufficiency  . we accept the validity of the condition 5., i.e., rank sI − A .. B(µ) < n for every s ∈ C. Let us refer at first to Hautus [48]. What follows is the generalization of the original sufficiency proof [92] by Hautus [48] accommodated

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 288 — #299

i

288

i

APPENDIX D. PROOFS

to the system (10.3), (10.4) and to the notation of this book:   A, B(µ) not controllable =⇒ ∃h 6= 0Tn =⇒   .. n−1 (µ) (µ) .. (µ) .. 2 (µ) .. = 0Tn(µ+1)r . =⇒ h B . AB . A B . .... A B Let ψ(z) be minimal degree polynomial of A such that hψ(A) = 0Tn . T hen deg(ψ) ≥ 1. F actorize ψ(z) = ϕ(z)(z − s). T hen

0Tn

= hψ(A) = hϕ(A)(A − sI). Def ine ζ = hϕ(A), ζ 6= 0Tn =⇒

ζA = sζ and ζB(µ) = hϕ(A)B(µ) = 0Tn(µ+1)r =⇒    .. (µ) .. (µ) T =⇒ ζ sI − A . B = 0n+(µ+1)r =⇒ rank sI − A . B < n. 

 .. (µ) The obtained result rank sI − A . B < n contradicts the condition   . 5., i.e., rank sI − A .. B(µ) < n for every s ∈ C. The assumed noncon

trollability of the system caused the contradiction. The condition 5., i.e.,   .. (µ) rank sI − A . B < n for every s ∈ C is sufficient for the system controllability. Let us generalize also the proof by Chen [18, p. 206]. It is accommodated to the system (10.3), (10.4) and to the notation of this book. Let us assume that the condition 5. holds but that the system is state uncontrollable, which means that every condition 1.-4. fails. The supposed system state uncontrollability guarantees the existence of the equivalence transformation T such that   AC A12 −1 T AT = = A ∈ Rn×n , AC ∈ Rk×k , 0 < k < n, On−k,k AN C " # (µ) (µ) BC (µ) TB = = B(µ) ∈ Rn×(µ+1)r , BC ∈ Rk×(µ+1)r , On−k,(µ+1)r where AC corresponds to the state controllable part of the system, and AN C ∈ R(n−k)×(n−k) is determined by the state uncontrollable part of the system. The rows of the submatrix AN C are linearly dependent. For any eigenvalue si = si (A) of the matrix A, i ∈ {1, 2, ..., n}, let a nonzero row vector b ∈R1×(n−k) , b 6= 0Tn−k , obey bAN C = si b. Let another subsidiary

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 289 — #300

i

D.10. PROOF OF THEOREM 161  1 × n nonzero vector a =

0Tk

i

289

   .. .. (µ) : . b multiply si I − A . B

     .. (µ) .. (µ) T .. = 0k . b s i I − A . B = a si I − A . B " #   (µ) .. si Ik − AC A12 BC T .. = 0k . b . = 0Tn+(µ+1)r . On−k,k si In−k − AN C On−k,(µ+1)r This yields     −1 .. −1 .. (µ) a T (si I − A) T . T V = aT (si I − A) T .B = 0Tn+(µ+1)r .   . The nonsingularity of T and a = 0Tk .. b 6= 0Tn ensure c = aT 6= 0Tn so that c (si I − A) T −1 = 0Tn implies c (si I − A) = 0Tn in view of detT 6= 0. Hence,     .. (µ) −1 .. (µ) T aT (si I − A) T .B = 0n+(µ+1)r =⇒ c (si I − A) . B = 0Tn+(µ+1)r . The last equation means that there is an eigenvalue si of the matrix A such that   .. (µ) < n. rank (si I − A) . B  .. (µ) This contradicts the condition 5. that rank (si I − A) . B = n for every 

s ∈ C, i.e., for every eigenvalue si = si (A) of the matrix A. The contradiction is the consequence  of the assumption  that the system is output .. (µ) uncontrollable despite rank (sI − A) . B = n for every s ∈ C, i.e., for every eigenvalue si = si (A) of the matrix A. It follows that the condition 5. guarantees the system state controllability and the complete equivalence of the condition 5. to the conditions 1.-4. 6. a) If µ = 0 then the first Equation (10.5) simplifies to: Zt S (t; t0 ; S0 ; U) = Φ (t, t0 ) S0 +

Φ (t, τ ) B0 U (τ ) dτ.

(D.88)

t0

The system state controllability means that there is control U (t) that an arbitrary initial state S0 steers to an arbitrary final state S1 at a moment

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 290 — #301

i

290

i

APPENDIX D. PROOFS

t1 ∈ InT0 so that Equation (D.88) gives: Zt1 Φ (t1 , τ ) B0 U (τ ) dτ = S1 − Φ (t1 , t0 ) S0 .

(D.89)

t0

The linear independence of all rows of Φ (t1 , τ ) B0 in τ ∈ [t0 , t1 ] for t1 ∈ InT0 , (the condition 1.) guarantees the nonsingularity of the grammian GΦB (t1 , t0 ) of Φ (t1 , τ ) in τ ∈ [t0 , t1 ] for t1 ∈ InT0 , Z t GΦB (t1 , t0 ) = Φ (t1 , τ ) B0 BT0 ΦT (t1 , τ ) dτ. t0

This and the condition 1. imply the following solution to Equation (D.89): U (t) = (Φ (t1 , t) B0 )T G−1 ΦB (t1 , t0 ) [S1 − Φ (t1 , t0 ) S0 ] , ∀t ∈ [t0 , t1 ] , t1 ∈ InT0 .

(D.90)

Equation (D.90) is Equation (10.12). This proves the necessity of the condition 6.a) and its equivalence with the conditions 1.-5. for the necessity. b) The condition 1. guarantees the linear independence of the rows of Φ (t1 , t)B(µ) that further implies the linear independence of the rows of Φ (t1 , t) B(µ) T on [t0 , t1 ] , for any nonsingular matrix T ∈ R(µ+1)r×(µ+1)r and for any (t0 , t1 > t0 ) ∈ InT0 × InT0 (due to the statement 3) of Lemma 107 in Section 7.4). It follows that the Gram matrix GΦBT (t1 , t0 ) , Equation (10.14), of Φ (t1 , t) B(µ) T is nonsingular (Theorem 114). The system motion, Equation (10.5), can be presented in the following form: µ

Zt

S (t; t0 ; S0 ; U ) = Φ (t, t0 ) S0 +

Φ (t, τ ) B(µ) T T −1 Uµ (τ ) dτ,

t0

which for t = t1 can be shown as Zt1 Φ (t1 , τ ) B(µ) T T −1 Uµ (τ ) dτ = S (t1 ; t0 ; S0 ; Uµ ) − Φ (t1 , t0 ) S0 , t0

or, for an arbitrary initial state S0 and final state S1 at t = t1 , S1 = S (t1 ; t0 ; S0 ; Uµ ) , becomes Zt1 Φ (t1 , τ ) B(µ) T T −1 Uµ (τ ) dτ = S1 − Φ (t1 , t0 ) S0 .

(D.91)

t0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 291 — #302

i

D.10. PROOF OF THEOREM 161

i

291

The system state controllability, the linear independence of the rows of Φ (t1 , τ ) B(µ) T on [t0 , t1 ] , for any (t0 , t1 > t0 ) ∈ InT0 × InT0 , and the nonsingularity of the Gram matrix GΦBT (t1 , t0 ) , Equation (10.14), of Φ (t1 , t) B(µ) T , imply the existence of the solution Uµ (t) to Equation (D.91) in the following form: T −1 Uµ (t) = T T B(µ)T ΦT (t1 , t) GΦBT (t1 , t0 ) [S1 − Φ (t1 , t0 ) S0 ] . This proves the necessity of Equation (10.13) and its equivalence with the conditions 1.-5. for the necessity. c) The condition 1. ensures the linear independence of the rows of Φ (t1 , τ )B(µ) on [t0 , t1 ] , for any (t0 , t1 > t0 ) ∈ InT0 × InT0 , which implies the linear independence of the rows of Φ (t1 , t) on [t0 , t1 ] , for any (t0 , t1 > t0 ) ∈ InT0 × InT0 (Lemma 108, Section 7.4). It follows that the Gram matrix GΦ (t1 , t0 ) (10.16) of Φ (t1 , t) is nonsingular (Theorem 114). The system motion, Equation (10.5), can be presented in the following form: S (t; t0 ; S0 ; Uµ ) = Φ (t, t0 ) S0 +

Zt

  Φ (t, τ ) B(µ) Uµ (τ ) dτ,

t0

which for t = t1 can be expressed as Zt1   Φ (t1 , τ ) B(µ) Uµ (τ ) dτ = S (t1 ; t0 ; S0 ; Uµ ) − Φ (t1 , t0 ) S0 , t0

and for an arbitrary initial state S0 and final state S1 at t = t1 , S1 = S (t1 ; t0 ; S0 ; Uµ ) , becomes Zt1   Φ (t1 , τ ) B(µ) Uµ (τ ) dτ = S1 − Φ (t1 , t0 ) S0 . t0

This, the system state controllability and the linear independence of the rows of Φ (t1 , t) on [t0 , t1 ] , for any (t0 , t1 > t0 ) ∈ InT0 × InT0 , and the nonsingularity of the Gram matrix GΦ (t1 , t0 ) (10.16) of Φ (t1 , t) imply the existence of the solution Uµ (t) to the preceding equation in the following form: B(µ) Uµ (t) = ΦT (t1 , t) G−1 Φ (t1 , t0 ) [S1 − Φ (t1 , t0 ) S0 ] . This proves the necessity of Equation (10.15) and completes the proof of the necessity of the condition 6c and its equivalence with the conditions 1.-5. for the necessity. It completes the proof of the necessity part.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 292 — #303

i

292

i

APPENDIX D. PROOFS

Proof of the sufficiency of condition 6. Let S0 ∈ Rn and S1 ∈ Rn be arbitrarily selected. Let any of the equivalent conditions 1. through 6a if µ = 0 or 1. through 6b or 1. through 6c if µ > 0 be valid. Then, each of them holds. We treat separately the case 6.a) µ = 0 and the case 6.b) or 6.c) in the proof of the sufficiency. Case µ = 0. 6.a) The condition 6.a) guarantees the existence of the control vector function U(.) defined by Equation (10.12): U (t) = (Φ (t1 , t) B0 )T G−1 ΦB (t1 , t0 ) [S1 + Φ (t1 , t0 ) S0 ] , ∀t ∈ [t0 , t1 ] , t1 ∈ InT0 . We use this equation to eliminate the control vector from the first Equation (10.5) for t = t1 : Zt1 S (t1 ; t0 ; S0 ; U) = Φ (t1 , t0 ) S0 + Φ (t1 , τ ) B0 U (τ ) dτ = t0

 t Z1    Φ (t , τ ) B (Φ (t , τ ) B )T dτ • 1 0 1 0 = Φ (t1 , t0 ) S0 +  t0   •G−1 ΦB (t1 , t0 ) [S1 − Φ (t1 , t0 ) S0 ]

   

=

  

= Φ (t1 , t0 ) S0 + GΦB (t1 , t0 ) G−1 ΦB (t1 , t0 ) [S1 − Φ (t1 , t0 ) S0 ] = = Φ (t1 , t0 ) S0 + S1 − Φ (t1 , t0 ) S0 = S1 . The beginning and the end of these equations read: S (t1 ; t0 ; S0 ; U) = S1 . The control U(t) (10.12) steers the system state from an arbitrarily chosen S0 ∈ Rn to any chosen S1 ∈ Rn over the finite time interval [t0 , t1 ] . The system (10.3), (10.4) is state controllable in view of Definition 161 if µ = 0. This completes the necessity and equivalency of the conditions 1.-6.a). Case µ > 0 6.b) Let any of the equivalent conditions 1. through 5., 6.b) be valid. Then, each of them holds. We apply Equation (10.13) to the system motion, Equation (10.5), for t = t1 : Zt1 S (t1 ; t0 ; S0 ; U ) = Φ (t1 , t0 ) S0 + Φ (t1 , τ ) B(µ) Uµ (τ ) dτ = µ

t0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 293 — #304

i

D.10. PROOF OF THEOREM 161

i

293

Zt1 = Φ (t1 , t0 ) S0 + Φ (t1 , τ ) B(µ) T T −1 Uµ (τ ) dτ = t0

 t  Z1     Φ (t , τ ) B(µ) T T T B(µ)T ΦT (t , τ ) dτ •   1 1 = Φ (t1 , t0 ) S0 + =   t0     •G−1 ΦBT (t1 , t0 ) [S1 − Φ (t1 , t0 ) S0 ]   GΦBT (t1 , t0 ) • = Φ (t1 , t0 ) S0 + = •G−1 ΦBT (t1 , t0 ) [S1 − Φ (t1 , t0 ) S0 ] = Φ (t1 , t0 ) S0 + S1 − Φ (t1 , t0 ) S0 = S1 . The result S (t1 ; t0 ; S0 ; Uµ ) = S1 shows that the control Uµ (t) (10.13) steers the system state from an arbitrarily chosen S0 ∈ Rn to any chosen S1 ∈ Rn over the finite time interval [t0 , t1 ] . The system (10.3), (10.4) is state controllable in view of Definition 161 if µ > 0. This completes the proof of the sufficiency and equivalency of every condition 1. -5. and of the condition 6.b) 6.c) Let Equation (10.15) hold. We replace B(µ) Uµ (t) by the righthand side of Equation (10.15) into Equation (10.5), for t = t1 : Zt1   S (t1 ; t0 ; S0 ; U ) = Φ (t1 , t0 ) S0 + Φ (t1 , τ ) B(µ) Uµ (τ ) dτ = µ

t0

= Φ (t1 , t0 ) S0 +

= Φ (t1 , t0 ) S0 +

   

Zt1

Φ (t1 , τ ) ΦT (t1 , τ ) dτ •

  

t0 •G−1 Φ (t1 , t0 ) [S1



GΦ (t1 , t0 ) • −1 •GΦ (t1 , t0 ) [S1 − Φ (t1 , t0 ) S0 ]

− Φ (t1 , t0 ) S0 ]

   

=

    =

= Φ (t1 , t0 ) S0 + S1 − Φ (t1 , t0 ) S0 = S1 . The result S (t1 ; t0 ; S0 ; Uµ ) = S1 proves that the control Uµ (t) (10.15) steers the system state from an arbitrarily chosen S0 ∈ Rn to any chosen S1 ∈ Rn over the finite time interval [t0 , t1 ] . The system (10.3), (10.4) is state controllable in view of Definition 161 if µ > 0. This completes the proof of the sufficiency and equivalency of every condition 1.-5. and 6.c) and the whole proof.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 294 — #305

i

294

D.11

i

APPENDIX D. PROOFS

Proof of Theorem 163

Proof. General necessary condition Equation (10.4) reads at the initial moment t0 :    .. X0 Y0 = CX0 + QU0 = C . Q . U0 In order to satisfy the condition of Definition 165 that the initial output   .. T T N T vector Y0 can be any vector in the output space R the vector X0 . U0   .. T T N T should span the whole output space R . For the vector X0 . U0 to N span the  whole output space R it is necessary and sufficient that the . matrix C .. Q has the full rank N , i.e., that Equation (10.29) holds. If

this condition is not satisfied then there is not a control that can satisfy Definition 165. If the rank condition (10.29) is fulfilled then there can be control that satisfies Definition 165, but the rank condition (10.29) does not guarantee its existence. The rank condition (10.29) is not sufficient, but it is necessary condition for the system output controllability. From now on it is accepted that the system obeys the rank condition (10.29). Necessity Let the system (10.3), (10.4) be output controllable. Definition 165 is fulfilled, i.e., for every initial output vector Y0 ∈ RN at any t0 ∈ T and for any final output vector Y1 ∈ RN there exist a moment t1 ∈ InT0 and an extended control Uµ[t0 ,t1 ] on the time interval [t0 , t1 ] such that Y(t1 ; t0 ; Y0 ; Uµ[t0 ,t1 ] ) = Y1 . 1. Let µ > 0. Let us assume that the rows of the system matrix HS (t, t0 ) (10.21) are linearly dependent on [t0 , t1 ] for any (t0 , t1 > t0 ) ∈ InT0 × InT0 . There is a nonzero 1xN vector w such that (Lemma 100, Section 7.4): w ∈R1×N , w 6= 0TN and wHS (t, t0 ) = 0T(µ+1)r , ∀t ∈ [t0 , t1 ] , t1 ∈ InT0 . Let the initial state vector S0 and the final output vector Y1 be chosen as follows: a) if wC 6= 0Tn =⇒ S0 = ΦT (t1 , t0 ) C T wT ∈ Rn and Y1 = Q(µ−1) Uµ−1 , 0 b) wC = 0Tn =⇒ S0 is any vector in Rn and Y1 = wT + Q(µ−1) Uµ−1 . 0

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 295 — #306

i

D.11. PROOF OF THEOREM 163

i

295

Let w multiplies on the left the system response (10.28): wY (t; t0 ; S0 ; Uµ ) = Zt = wCΦ (t, t0 ) S0 + w HS (t, τ )Uµ (τ ) dτ + wQ(µ−1) Uµ−1 , 0 t0

which at t = t1 becomes wY (t1 ; t0 ; S0 ; Uµ ) = wY1 = Zt1 = wCΦ (t1 , t0 ) S0 + w HS (t1 , τ )Uµ (τ ) dτ + wQ(µ−1) Uµ−1 =⇒ 0 t0

= In the case a) : wY1 = wQ(µ−1) Uµ−1 0 Zt1 T T T = wCΦ (t1 , t0 ) Φ (t1 , t0 ) C w + w HS (t1 , τ )Uµ (τ ) dτ + wQ(µ−1) Uµ−1 0 t0

=⇒ 0 = wCΦ (t1 , t0 ) ΦT (t1 , t0 ) C T wT 6= 0 because Φ (t1 , t0 ) ΦT (t1 , t0 ) is positive definite matrix at every t ∈ T due to its symmetricity and nonsingularity at every t ∈ T, and because C T wT 6= 0N . The result 0 6= 0 shows that the assumed vector w 6= 0T N does not exist. In the case b) : wY1 = wwT + wQ(µ−1) Uµ−1 = 0 t Z1 = wCΦ (t1 , t0 ) S0 + w HS (t1 , τ )Uµ (τ ) dτ + wQ(µ−1) Uµ−1 =⇒ 0 t0

wwT = 0 ⇐⇒ w = 0TN ; Again, the assumed vector w 6= 0T N does not exist. The conclusions for the cases a) and b) prove that the rows of the system matrix HS (t1 , τ ) (10.28) are linearly independent on [t0 , t1 ] , for any (t0 , t1 > t0 ) ∈ InT0 × InT0 . This proves the necessity of the condition 1. if µ > 0. Let now µ = 0. We slightly modify the preceding proof valid for µ > 0. Let us assume that the rows of the system matrix function HS (t1 , τ )

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 296 — #307

i

296

i

APPENDIX D. PROOFS

(10.21) are linearly dependent on [t0 , t1 ] for any (t0 , t1 > t0 ) ∈ InT0 × InT0 . There is a nonzero 1xN vector w such that w ∈R1×N , w 6= 0TN and wHS (t1 , t) = 0TN , ∀t ∈ [t0 , t1 ] , t1 ∈ InT0 . Let the initial state vector S0 and the final output vector Y1 be chosen as follows: a) if wC 6= 0Tn =⇒ S0 = ΦT (t1 , t0 ) C T wT ∈ Rn and Y1 = 0N , b) wC = 0Tn =⇒ S0 is any vector in Rn and Y1 = wT Let w multiplies on the left the system response (10.28): wY (t; t0 ; S0 ; U) = Zt = wCΦ (t, t0 ) S0 + w HS (t, τ )U (τ ) dτ, t0

which at t = t1 becomes wY (t1 ; t0 ; S0 ; U) = wY1 = Zt1 = wCΦ (t1 , t0 ) S0 + w HS (t1 , τ )U (τ ) dτ =⇒ t0

In the case a) : wY1 = w0N = 0 = Zt1 T T T = wCΦ (t1 , t0 ) Φ (t1 , t0 ) C w + w HS (t1 , τ )U (τ ) dτ t0

=⇒ 0 = wCΦ (t1 , t0 ) ΦT (t1 , t0 ) C T wT 6= 0 because Φ (t, t0 ) ΦT (t, t0 ) is positive definite matrix at every t ∈ T due to its symmetricity and nonsingularity at every t ∈ T, and because C T wT 6= 0N . The result 0 6= 0 shows that the assumed vector w 6= 0T N does not exist. In the case b) : wY1 = wwT = Zt1 = wCΦ (t1 , t0 ) S0 + w HS (t1 , τ )U (τ ) dτ =⇒ t0 T

ww = 0 ⇐⇒ w = 0TN ;

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 297 — #308

i

D.11. PROOF OF THEOREM 163

i

297

Again, the assumed vector $w\neq 0_{N}^{T}$ does not exist. The conclusions for the cases a) and b) prove that the rows of the system matrix $H_{S}(t_{1},\tau)$ (10.21) are linearly independent on $[t_{0},t_{1}]$, for any $(t_{0},t_{1}>t_{0})\in In\mathfrak{T}_{0}\times In\mathfrak{T}_{0}$, also if $\mu=0$. This completes the proof of the necessity of the condition 1.

2. $H_{S}(s)$ (10.22) is the Laplace transform of $H_{S}(t,t_{0})$ (10.21). The Laplace transform is a linear one-to-one operator, so that the rows of $H_{S}(s)$ are linearly independent if and only if the rows of $H_{S}(t,t_{0})$ are linearly independent. This and the criterion under 1. prove the necessity of the criterion under 2.

3. The matrix $H_{S}(t_{1},t)$ (10.21) is integrable on $\mathfrak{T}$ for every $\tau\in\mathfrak{T}$, which implies the existence of its Gram matrix $G_{H_{S}}(t_{1},t_{0})$ (10.30). The output controllability of the system (10.3), (10.4) implies the linear independence of the rows of the system matrix $H_{S}(t,t_{0})$ (10.21) (due to the condition 1.), which proves the nonsingularity of its Gram matrix $G_{H_{S}}(t,t_{0})$ (10.30) due to Theorem 114 (Section 7.4). The condition 3. is therefore necessary.

4. The output controllability of the system guarantees the linear independence of the rows of $H_{S}(t,t_{0})$ (10.21) on $\mathfrak{T}_{0}$ (condition 1.). It is known that the exponential matrix function $\Phi(.)=e^{A(.)}$, where $A\in\mathbb{R}^{n\times n}$, is defined in terms of an infinite series,
\[
\Phi(t,\tau)=e^{A(t-\tau)}=\sum_{i=0}^{\infty}\alpha_{i}(t,\tau)A^{i},\qquad
\alpha_{i}(t,\tau)=\frac{(t-\tau)^{i}}{i!},\; i=0,1,\ldots,\qquad
\alpha_{0}(t,\tau)\equiv 1,\;\alpha_{i}(\tau,\tau)=0,\; i\geq 1, \quad (D.92)
\]
[2, Equation (2.101), p. 125, Definition 4.1, p. 149], [76, 4.11.49 Definition, p. 251], and that it can be expressed, by applying the Cayley-Hamilton theorem [2, Theorem 2.1, p. 124], [76, 4.5.25 Theorem, p. 167], as a finite series [2, Equation (2.101), p. 125], [66, Equation (11-8), p. 491], [81, p. 386], $A\in\mathbb{R}^{n\times n}$:
\[
\Phi(t,\tau)=e^{A(t-\tau)}=\sum_{i=0}^{n-1}\beta_{i}(t,\tau)A^{i},\qquad
\beta_{i}(t,\tau)\in\mathfrak{C}^{\infty},\; i=0,1,\ldots,n-1,\qquad
\beta_{0}(t,\tau)\equiv 1,\;\beta_{i}(0,0)=0,\; i=1,\ldots,n-1. \quad (D.93)
\]
The functions $\beta_{i}(t,\tau)$ are linearly independent on $\mathfrak{T}$ for every $\tau\in\mathfrak{T}$.
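The finite-series representation (D.93) is what allows the infinite family of matrices $CA^{i}$, $i\geq 0$, to be replaced by the finitely many blocks used below. As a purely numerical illustration (the matrix `A` and the time arguments are arbitrary assumptions, not data from the book), the following Python sketch checks that $e^{A(t-\tau)}$ indeed lies in the span of $I, A, \ldots, A^{n-1}$ by solving a least-squares problem for the coefficients $\beta_{i}(t,\tau)$ and verifying that the residual is numerically zero.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))          # arbitrary test matrix (assumption)
t, tau = 1.3, 0.4

Phi = expm(A * (t - tau))                # Phi(t, tau) = e^{A(t - tau)}

# Stack vec(A^0), ..., vec(A^{n-1}) as columns and solve for beta_0, ..., beta_{n-1}
powers = [np.linalg.matrix_power(A, i) for i in range(n)]
M = np.column_stack([P.reshape(-1) for P in powers])
beta, *_ = np.linalg.lstsq(M, Phi.reshape(-1), rcond=None)

residual = np.linalg.norm(M @ beta - Phi.reshape(-1))
print("beta coefficients:", beta)
print("residual (should be ~0 by Cayley-Hamilton):", residual)
```

The near-zero residual is exactly the content of the Cayley-Hamilton reduction: every power $A^{k}$, $k\geq n$, and hence $e^{A(t-\tau)}$, is a linear combination of $A^{0},\ldots,A^{n-1}$.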

We may set $H_{S}(t,\tau)$ (10.21) into the following form:
\[
H_{S}(t,\tau)=
\begin{cases}
\Bigl[\,C\Phi(t,\tau)B^{(\mu)}\;\vdots\;\widetilde{Q}\,\Bigr]
\begin{bmatrix} I_{(\mu+1)r}\\ I_{(\mu+1)r}\end{bmatrix}, & \mu>0,\\[2ex]
\Bigl[\,C\Phi(t,\tau)B_{0}\;\vdots\;Q\,\Bigr]
\begin{bmatrix} I_{(\mu+1)r}\\ \delta(t,\tau)\,I_{(\mu+1)r}\end{bmatrix}, & \mu=0,
\end{cases}
\quad (D.94)
\]
so that the linear independence of the rows of $H_{S}(t_{1},\tau)$ implies, due to 1) of Lemma 107 (Section 7.4), the linear independence of the rows of
\[
\Bigl[\,C\Phi(t_{1},\tau)B^{(\mu)}\;\vdots\;\widetilde{Q}\,\Bigr],\ \mu>0,\qquad
\Bigl[\,C\Phi(t_{1},\tau)B_{0}\;\vdots\;Q\,\Bigr],\ \mu=0. \quad (D.95)
\]
Equations (D.93) enable us to set $\bigl[\,C\Phi(t_{1},\tau)B^{(\mu)}\;\vdots\;\widetilde{Q}\,\bigr]$ and $\bigl[\,C\Phi(t_{1},\tau)B_{0}\;\vdots\;Q\,\bigr]$ in the following forms:
\[
\mu>0 \Longrightarrow
\Bigl[\,C\Phi(t_{1},\tau)B^{(\mu)}\;\vdots\;\widetilde{Q}\,\Bigr]=
\Bigl[\,\beta_{0}(t_{1},\tau)CB^{(\mu)}\;\vdots\;\beta_{1}(t_{1},\tau)CAB^{(\mu)}\;\vdots\;\cdots\;\vdots\;\beta_{n-1}(t_{1},\tau)CA^{n-1}B^{(\mu)}\;\vdots\;\widetilde{Q}\,\Bigr], \quad (D.96)
\]
\[
\mu=0 \Longrightarrow
\Bigl[\,C\Phi(t_{1},\tau)B_{0}\;\vdots\;Q\,\Bigr]=
\Bigl[\,\beta_{0}(t_{1},\tau)CB_{0}\;\vdots\;\beta_{1}(t_{1},\tau)CAB_{0}\;\vdots\;\cdots\;\vdots\;\beta_{n-1}(t_{1},\tau)CA^{n-1}B_{0}\;\vdots\;Q\,\Bigr]. \quad (D.97)
\]
The rows of the matrix functions on the right-hand sides of these equations are linearly independent in $\tau\in[t_{0},t_{1}]$, $t_{1}\in In\mathfrak{T}_{0}$, due to the linear independence of the rows of the matrix functions $\bigl[\,C\Phi(t_{1},.)B^{(\mu)}\;\vdots\;\widetilde{Q}\,\bigr]$ and $\bigl[\,C\Phi(t_{1},.)B_{0}\;\vdots\;Q\,\bigr]$ on the left-hand sides of the same equations, respectively. In spite of that, let us assume that the matrix $C_{Sout}$ is rank defective, i.e., that $\operatorname{rank}C_{Sout}<N$. This implies (Theorem 86, Section 7.2) that the rows of the

constant matrices
\[
\Bigl[\,CB^{(\mu)}\;\vdots\;CAB^{(\mu)}\;\vdots\;CA^{2}B^{(\mu)}\;\vdots\;\cdots\;\vdots\;CA^{n-1}B^{(\mu)}\;\vdots\;\widetilde{Q}\,\Bigr]\in\mathbb{R}^{N\times(n+1)(\mu+1)r}, \quad (D.98)
\]
\[
\Bigl[\,CB_{0}\;\vdots\;CAB_{0}\;\vdots\;CA^{2}B_{0}\;\vdots\;\cdots\;\vdots\;CA^{n-1}B_{0}\;\vdots\;Q\,\Bigr]\in\mathbb{R}^{N\times(n+1)r}, \quad (D.99)
\]
are linearly dependent, so that there is a constant nonzero $1\times N$ vector $a$, $a\in\mathbb{R}^{1\times N}$, $a\neq 0_{N}^{T}$, such that (due to Lemma 100, Section 7.4):
\[
a\Bigl[\,CB^{(\mu)}\;\vdots\;CAB^{(\mu)}\;\vdots\;\cdots\;\vdots\;CA^{n-1}B^{(\mu)}\;\vdots\;\widetilde{Q}\,\Bigr]=
\Bigl[\,aCB^{(\mu)}\;\vdots\;aCAB^{(\mu)}\;\vdots\;\cdots\;\vdots\;aCA^{n-1}B^{(\mu)}\;\vdots\;a\widetilde{Q}\,\Bigr]=0_{(n+1)(\mu+1)r}^{T} \quad (D.100)
\]
\[
\Longleftrightarrow\quad aCB^{(\mu)}=aCAB^{(\mu)}=aCA^{2}B^{(\mu)}=\cdots=aCA^{n-1}B^{(\mu)}=a\widetilde{Q}=0_{(\mu+1)r}^{T}.
\]
These equations and Equation (D.96) multiplied on the left by $a$ yield
\[
a\Bigl[\,\beta_{0}(t_{1},\tau)CB^{(\mu)}\;\vdots\;\beta_{1}(t_{1},\tau)CAB^{(\mu)}\;\vdots\;\cdots\;\vdots\;\beta_{n-1}(t_{1},\tau)CA^{n-1}B^{(\mu)}\;\vdots\;\widetilde{Q}\,\Bigr]=
\Bigl[\,\beta_{0}(t_{1},\tau)aCB^{(\mu)}\;\vdots\;\cdots\;\vdots\;\beta_{n-1}(t_{1},\tau)aCA^{n-1}B^{(\mu)}\;\vdots\;a\widetilde{Q}\,\Bigr]
\]
\[
=\Bigl[\,\beta_{0}(t_{1},\tau)0_{(\mu+1)r}^{T}\;\vdots\;\cdots\;\vdots\;\beta_{n-1}(t_{1},\tau)0_{(\mu+1)r}^{T}\;\vdots\;0_{(\mu+1)r}^{T}\,\Bigr]=0_{(n+1)(\mu+1)r}^{T}.
\]
This means that the rows of the matrix function
\[
\Bigl[\,\beta_{0}(t_{1},\tau)CB^{(\mu)}\;\vdots\;\beta_{1}(t_{1},\tau)CAB^{(\mu)}\;\vdots\;\cdots\;\vdots\;\beta_{n-1}(t_{1},\tau)CA^{n-1}B^{(\mu)}\;\vdots\;\widetilde{Q}\,\Bigr]
\]
are linearly dependent, which contradicts their independence. The contradiction is the consequence of the assumption that the rows of the constant matrix (D.98) are linearly dependent. The failure of the assumption implies the linear independence of the rows of the constant matrix (D.98).

This means that the matrix has the full row rank $N$:
\[
\operatorname{rank}\Bigl[\,CB^{(\mu)}\;\vdots\;CAB^{(\mu)}\;\vdots\;CA^{2}B^{(\mu)}\;\vdots\;\cdots\;\vdots\;CA^{n-1}B^{(\mu)}\;\vdots\;\widetilde{Q}\,\Bigr]=N. \quad (D.101)
\]
By repeating the preceding presentation starting with Equation (D.100), in which $\widetilde{Q}$ is replaced by $Q$, we prove the following:
\[
\operatorname{rank}\Bigl[\,CB_{0}\;\vdots\;CAB_{0}\;\vdots\;CA^{2}B_{0}\;\vdots\;\cdots\;\vdots\;CA^{n-1}B_{0}\;\vdots\;Q\,\Bigr]=N. \quad (D.102)
\]
Equation (D.101) and Equation (D.102) prove that the condition that the matrix $C_{Sout0}$ (10.32) has the full rank $N$ is necessary for the system output controllability, i.e., that Equation (10.33) holds. This completes the proof of the necessity of the condition 4.

5. Let it be assumed that
\[
\operatorname{rank}\Bigl[\,C(sI_{n}-A)\;\vdots\;CB^{(\mu)}\;\vdots\;\widetilde{Q}\,\Bigr]<N \quad \text{for some } s_{i}=s_{i}(A)\in\mathbb{C}, \quad (D.103)
\]
if $\mu>0$. This implies the existence of a nonzero row vector $a\in\mathbb{C}^{1\times N}$ such that
\[
a\Bigl[\,C(sI_{n}-A)\;\vdots\;CB^{(\mu)}\;\vdots\;\widetilde{Q}\,\Bigr]=0_{n+2(\mu+1)r}^{T},
\]
or, for $s\neq 0$:
\[
saC=aCA \quad\text{and}\quad aCB^{(\mu)}=a\widetilde{Q}=0_{(\mu+1)r}^{T},
\]
which imply $aCA^{2}=saCA=s^{2}aC,\ \ldots,\ aCA^{k}=s^{k}aC$, $k=1,2,\ldots$ Hence,
\[
a\Bigl[\,CAB^{(\mu)}\;\vdots\;CA^{2}B^{(\mu)}\;\vdots\;\cdots\;\vdots\;CA^{n-1}B^{(\mu)}\;\vdots\;CB^{(\mu)}\;\vdots\;\widetilde{Q}\,\Bigr]=
\Bigl[\,saCB^{(\mu)}\;\vdots\;s^{2}aCB^{(\mu)}\;\vdots\;\cdots\;\vdots\;s^{n-1}aCB^{(\mu)}\;\vdots\;aCB^{(\mu)}\;\vdots\;a\widetilde{Q}\,\Bigr]=0_{(n+1)(\mu+1)r}^{T}.
\]
This means that
\[
\operatorname{rank}\Bigl[\,CAB^{(\mu)}\;\vdots\;CA^{2}B^{(\mu)}\;\vdots\;\cdots\;\vdots\;CA^{n-1}B^{(\mu)}\;\vdots\;CB^{(\mu)}\;\vdots\;\widetilde{Q}\,\Bigr]<N,
\]

which implies the output noncontrollability of the system (10.3), (10.4) due to the condition under 4. This contradicts the assumed system output controllability, and the contradiction is the consequence of the assumption that (D.103) holds. Therefore,
\[
\operatorname{rank}\Bigl[\,C(sI_{n}-A)\;\vdots\;CB^{(\mu)}\;\vdots\;\widetilde{Q}\,\Bigr]=N,\quad\forall s_{i}=s_{i}(A)\in\mathbb{C},\; i=1,2,\ldots,n,\ \text{i.e.,}\ \forall s\in\mathbb{C}.
\]
If $\mu=0$ then we repeat the preceding proof of the necessity of the condition under 4. with $\widetilde{Q}$ replaced by $Q$. The result is the following:
\[
\operatorname{rank}\Bigl[\,C(sI_{n}-A)\;\vdots\;CB_{0}\;\vdots\;Q\,\Bigr]=N,\quad\forall s_{i}=s_{i}(A)\in\mathbb{C},\; i=1,2,\ldots,n,\ \text{i.e.,}\ \forall s\in\mathbb{C}.
\]
This completes the proof of the necessity of the condition under 5.

6. a) In the case $\mu=0$, Equation (10.28) for $t=t_{1}$, i.e.,
\[
Y(t_{1};t_{0};S_{0};U)=C\Phi(t_{1},t_{0})S_{0}+\int_{t_{0}}^{t_{1}}H_{S}(t_{1},\tau)U(\tau)\,d\tau, \quad (D.104)
\]
together with $Y(t_{1};t_{0};S_{0};U)=Y_{1}$ becomes
\[
\int_{t_{0}}^{t_{1}}H_{S}(t_{1},\tau)U(\tau)\,d\tau=Y_{1}-C\Phi(t_{1},t_{0})S_{0}. \quad (D.105)
\]
All rows of $H_{S}(t_{1},\tau)$ are linearly independent on $[t_{0},t_{1}]$, for any moment $t_{1}\in In\mathfrak{T}_{0}$ (the condition 1.). This guarantees the nonsingularity of the Gram matrix $G_{H_{S}}(t_{1},t_{0})$ of $H_{S}(t_{1},\tau)$. These facts imply the following solution $U(t)$ to Equation (D.105):
\[
U(t)=\bigl(H_{S}(t_{1},t)\bigr)^{T}G_{H_{S}}^{-1}(t_{1},t_{0})\bigl[Y_{1}-C\Phi(t_{1},t_{0})S_{0}\bigr]. \quad (D.106)
\]
Equation (D.106) is Equation (10.36). In order to verify that this solves Equation (D.105), we replace $U(\tau)$ in Equation (D.105) by the right-hand side of Equation (D.106):
\[
\int_{t_{0}}^{t_{1}}H_{S}(t_{1},\tau)U(\tau)\,d\tau=
\Bigl[\int_{t_{0}}^{t_{1}}H_{S}(t_{1},\tau)\bigl(H_{S}(t_{1},\tau)\bigr)^{T}d\tau\Bigr]G_{H_{S}}^{-1}(t_{1},t_{0})\bigl[Y_{1}-C\Phi(t_{1},t_{0})S_{0}\bigr]
=G_{H_{S}}(t_{1},t_{0})\,G_{H_{S}}^{-1}(t_{1},t_{0})\bigl[Y_{1}-C\Phi(t_{1},t_{0})S_{0}\bigr],
\]
i.e.,
\[
\int_{t_{0}}^{t_{1}}H_{S}(t_{1},\tau)U(\tau)\,d\tau=Y_{1}-C\Phi(t_{1},t_{0})S_{0},
\]
which is Equation (D.105). This completes the verification.

b) Let $\mu>0$ and let $R\in\mathbb{R}^{(\mu+1)r\times(\mu+1)r}$ be a nonsingular constant matrix, $\det R\neq 0$. This enables us to present the system response (10.28) in the following form:
\[
Y(t;t_{0};S_{0};U^{\mu})=C\Phi(t,t_{0})S_{0}+\int_{t_{0}}^{t}H_{S}(t,\tau)RR^{-1}U^{\mu}(\tau)\,d\tau+Q^{(\mu-1)}U_{0}^{\mu-1},\quad \mu>0. \quad (D.107)
\]
The condition 1. implies the linear independence of the rows of $H_{S}(t,\tau)R$ in $\tau$ on $[t_{0},t_{1}]$, for $t_{1}\in In\mathfrak{T}_{0}$ (the statement 3) of Lemma 107 in Section 7.4). The linear independence of the rows of $H_{S}(t,\tau)R$ in $\tau$ on $[t_{0},t_{1}]$, for $t_{1}\in In\mathfrak{T}_{0}$, implies the nonsingularity of its Gram matrix $G_{H_{S}R}(t_{1},t_{0})$, Equation (10.38), and the following solution $U(t)$ to Equation (D.107) for $t=t_{1}$:
\[
R^{-1}U^{\mu}(t)=\bigl(H_{S}(t_{1},t)R\bigr)^{T}G_{H_{S}R}^{-1}(t_{1},t_{0})\bigl[Y_{1}-C\Phi(t_{1},t_{0})S_{0}-Q^{(\mu-1)}U_{0}^{\mu-1}\bigr],\quad \mu>0. \quad (D.108)
\]
This proves the necessity of Equation (10.37). It also completes the proof of the necessity of the condition 6b) and of the necessity part.

Sufficiency. Let $S_{0}\in\mathbb{R}^{n}$ and $Y_{0},Y_{1}\in\mathbb{R}^{N}$ be any vectors. Let any of the equivalent conditions 1.-4. hold, which means that each of them is valid.

a) Let $\mu=0$. Let any of the equivalent conditions 1.-4. and 6.a) hold, which means that each of them is valid if $\mu=0$. We exploit the condition 6.a). Equation (10.36) transforms the system response at $t=t_{1}$, i.e., Equation (10.28) for $t=t_{1}$:
\[
Y(t_{1};t_{0};S_{0};U)=C\Phi(t_{1},t_{0})S_{0}+\Bigl[\int_{t_{0}}^{t_{1}}H_{S}(t_{1},\tau)\bigl(H_{S}(t_{1},\tau)\bigr)^{T}d\tau\Bigr]G_{H_{S}}^{-1}(t_{1},t_{0})\bigl[Y_{1}-C\Phi(t_{1},t_{0})S_{0}\bigr]
\]

\[
=C\Phi(t_{1},t_{0})S_{0}+G_{H_{S}}(t_{1},t_{0})\,G_{H_{S}}^{-1}(t_{1},t_{0})\bigl[Y_{1}-C\Phi(t_{1},t_{0})S_{0}\bigr]
=C\Phi(t_{1},t_{0})S_{0}+Y_{1}-C\Phi(t_{1},t_{0})S_{0}=Y_{1}.
\]
This result, $Y(t_{1};t_{0};S_{0};U)=Y_{1}$, proves that the control $U(t)$ defined by the right-hand side of Equation (10.36) steers the system output from an arbitrary initial output $Y_{0}=CS_{0}$ to an arbitrary final output $Y_{1}$ at the moment $t=t_{1}$. Definition 165 is fulfilled. The system (10.3), (10.4) is output controllable in the case a).

b) Let $\mu>0$. Let any of the equivalent conditions 1.-4. and 6.b) hold, which means that each of them is valid if $\mu>0$. Equation (10.37) and the system response Equation (10.28) for $t=t_{1}$ result in the following:
\[
Y(t_{1};t_{0};S_{0};U^{\mu})=C\Phi(t_{1},t_{0})S_{0}+\int_{t_{0}}^{t_{1}}H_{S}(t_{1},\tau)RR^{-1}U^{\mu}(\tau)\,d\tau+Q^{(\mu-1)}U_{0}^{\mu-1}
\]
\[
=C\Phi(t_{1},t_{0})S_{0}+Q^{(\mu-1)}U_{0}^{\mu-1}+\Bigl[\int_{t_{0}}^{t_{1}}H_{S}(t_{1},\tau)R\bigl(H_{S}(t_{1},\tau)R\bigr)^{T}d\tau\Bigr]G_{H_{S}R}^{-1}(t_{1},t_{0})\bigl[Y_{1}-C\Phi(t_{1},t_{0})S_{0}-Q^{(\mu-1)}U_{0}^{\mu-1}\bigr]
\]
\[
=C\Phi(t_{1},t_{0})S_{0}+Q^{(\mu-1)}U_{0}^{\mu-1}+G_{H_{S}R}(t_{1},t_{0})\,G_{H_{S}R}^{-1}(t_{1},t_{0})\bigl[Y_{1}-C\Phi(t_{1},t_{0})S_{0}-Q^{(\mu-1)}U_{0}^{\mu-1}\bigr]=Y_{1}.
\]
This proves $Y(t_{1};t_{0};S_{0};U^{\mu})=Y_{1}$. The control vector function $U(.)$, the solution of the time-invariant linear vector differential Equation (10.37), steers an arbitrary system initial output $Y_{0}$ at the initial moment $t=t_{0}$ to an arbitrary final output $Y_{1}=Y(t_{1})$ at a moment $t_{1}\in In\mathfrak{T}_{0}$. Definition 165 is satisfied. The system (10.3), (10.4) is output controllable. This completes the proof for the case b) and the proof as a whole.
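The construction (D.105)-(D.106) is numerically straightforward. The following Python sketch illustrates it for the case $\mu=0$ with $H_{S}(t_{1},\tau)=C\Phi(t_{1},\tau)B_{0}$: it builds the Gram matrix $G_{H_{S}}(t_{1},t_{0})$ by quadrature, forms the steering control of (D.106), and checks (D.105). The system matrices, horizon, and quadrature grid are illustrative assumptions, not data taken from the book.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid

rng = np.random.default_rng(1)
n, r, N = 4, 2, 2
A = 0.5 * rng.standard_normal((n, n))    # assumed state, input, output matrices
B0 = rng.standard_normal((n, r))
C = rng.standard_normal((N, n))

t0, t1 = 0.0, 2.0
S0 = rng.standard_normal(n)              # arbitrary initial state
Y1 = rng.standard_normal(N)              # arbitrary target output

taus = np.linspace(t0, t1, 2001)
H = np.array([C @ expm(A * (t1 - tau)) @ B0 for tau in taus])     # H_S(t1, tau)

# Gram matrix G_HS(t1, t0) = int H H^T dtau  (N x N), cf. (10.30)
G = trapezoid(np.einsum('kij,klj->kil', H, H), taus, axis=0)

offset = Y1 - C @ expm(A * (t1 - t0)) @ S0
U = H.transpose(0, 2, 1) @ np.linalg.solve(G, offset)             # U(tau) as in (D.106)

# Verify (D.105): int H_S(t1, tau) U(tau) dtau = Y1 - C Phi(t1, t0) S0
lhs = trapezoid(np.einsum('kij,kj->ki', H, U), taus, axis=0)
print(np.allclose(lhs, offset))                                   # True
```

The verification succeeds for exactly the reason used in the proof: the integral of $H_{S}H_{S}^{T}$ reproduces the Gram matrix, which cancels against its inverse.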

D.12 Proof of Theorem 173

Proof. Let the IO system (2.15) with $A_{0}=O_{N}$ be robust state controllable relative to arbitrary matrices $A_{i}$, $i=1,\ldots,\nu-1$. Any one of the conditions of Theorem 180 is satisfied for $\mu\geq 0$. We should now prove that the rank condition (11.23) is also necessary and sufficient.

Necessity. In the case $A_{0}=O_{N}$, for the IO system (2.15) to be state controllable the condition (11.23) becomes also necessary, due to the fact that the system state controllability implies
\[
\operatorname{rank}\Bigl[\,sI_{n}-A\;\vdots\;\mathbf{B}^{(\mu)}\,\Bigr]=
\operatorname{rank}\left[\,sI_{n}-A\;\vdots\;\begin{matrix}O_{(\nu-1)N,\,N}\\[0.5ex] A_{\nu}^{-1}B^{(\mu)}\end{matrix}\,\right]=n,\quad\forall s\in\mathbb{C},
\]
i.e., for $\nu>1$,
\[
\Bigl[\,sI_{n}-A\;\vdots\;\mathbf{B}^{(\mu)}\,\Bigr]=
\begin{bmatrix}
sI_{N} & -I_{N} & O_{N} & \cdots & O_{N} & O_{N} & O_{N}\\
O_{N} & sI_{N} & -I_{N} & \cdots & O_{N} & O_{N} & O_{N}\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
O_{N} & O_{N} & O_{N} & \cdots & sI_{N} & -I_{N} & O_{N}\\
A_{\nu}^{-1}A_{0} & A_{\nu}^{-1}A_{1} & A_{\nu}^{-1}A_{2} & \cdots & A_{\nu}^{-1}A_{\nu-2} & sI_{N}+A_{\nu}^{-1}A_{\nu-1} & A_{\nu}^{-1}B^{(\mu)}
\end{bmatrix},
\]
and for $\nu=1$, $A=-A_{1}^{-1}A_{0}\in\mathbb{R}^{N\times N}$ and $\bigl[\,sI_{N}-A\;\vdots\;\mathbf{B}^{(1)}\,\bigr]=\bigl[\,sI_{N}+A_{1}^{-1}A_{0}\;\vdots\;A_{1}^{-1}B^{(1)}\,\bigr]$, which for $A_{0}=O_{N}$ and $s=0$ becomes
\[
\Bigl[\,(sI_{n}-A)\big|_{s=0}\;\vdots\;\mathbf{B}^{(\mu)}\,\Bigr]=
\begin{bmatrix}
O_{N} & -I_{N} & O_{N} & \cdots & O_{N} & O_{N}\\
O_{N} & O_{N} & -I_{N} & \cdots & O_{N} & O_{N}\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
O_{N} & O_{N} & O_{N} & \cdots & -I_{N} & O_{N}\\
O_{N} & A_{\nu}^{-1}A_{1} & A_{\nu}^{-1}A_{2} & \cdots & A_{\nu}^{-1}A_{\nu-1} & A_{\nu}^{-1}B^{(\mu)}
\end{bmatrix},\ \nu>1,\qquad
\bigl[\,O_{N}\;\vdots\;A_{1}^{-1}B^{(1)}\,\bigr],\ \nu=1.
\]
This yields, for every $s\in\mathbb{C}$,
\[
n=\operatorname{rank}\Bigl[\,(sI_{n}-A)\big|_{s=0}\;\vdots\;\mathbf{B}^{(\mu)}\,\Bigr]=
\begin{cases}
(\nu-1)N+\operatorname{rank}A_{\nu}^{-1}B^{(\mu)}=(\nu-1)N+\operatorname{rank}B^{(\mu)}, & \nu>1,\\
\operatorname{rank}A_{1}^{-1}B^{(1)}=\operatorname{rank}B^{(1)}, & \nu=1,
\end{cases}
\]
because $\det A_{\nu}\neq 0$, so that
\[
\operatorname{rank}\Bigl[\,(sI_{n}-A)\big|_{s=0}\;\vdots\;\mathbf{B}^{(\mu)}\,\Bigr]=n=\nu N
\;\Longleftrightarrow\; \operatorname{rank}B^{(\mu)}=N.
\]
Since $\operatorname{rank}\bigl[\,sI_{n}-A\;\vdots\;\mathbf{B}^{(\mu)}\,\bigr]=n=\nu N$ for every $s\in\mathbb{C}$ is the necessary and sufficient condition for the state controllability of the system, it is necessary that $\operatorname{rank}\bigl[\,(sI_{n}-A)\big|_{s=0}\;\vdots\;\mathbf{B}^{(\mu)}\,\bigr]=n$, which implies $\operatorname{rank}B^{(\mu)}=N$ independently of the arbitrary matrices $A_{i}$, $i=1,\ldots,\nu-1$. This proves the necessity of the condition (11.23) for any matrices $A_{i}$, $i=1,\ldots,\nu-1$.

Sufficiency. Let the rank condition (11.23) hold. It is valid independently of the arbitrary matrices $A_{i}$, $i=1,\ldots,\nu-1$. We repeat that $A_{0}=O_{N}$ and that $\bigl[\,sI_{n}-A\;\vdots\;\mathbf{B}^{(\mu)}\,\bigr]$ has the block form displayed above. Its submatrix obtained by deleting the first block column,
\[
\begin{bmatrix}
-I_{N} & O_{N} & \cdots & O_{N} & O_{N}\\
sI_{N} & -I_{N} & \cdots & O_{N} & O_{N}\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
O_{N} & O_{N} & \cdots & -I_{N} & O_{N}\\
A_{\nu}^{-1}A_{1} & A_{\nu}^{-1}A_{2} & \cdots & sI_{N}+A_{\nu}^{-1}A_{\nu-1} & A_{\nu}^{-1}B^{(\mu)}
\end{bmatrix},\quad \nu>1,
\]
or $A_{1}^{-1}B^{(1)}$ with $n=N$ for $\nu=1$, has the full rank $n$ due to the full rank $N$ of $B^{(\mu)}$. This proves $\operatorname{rank}\bigl[\,sI_{n}-A\;\vdots\;\mathbf{B}^{(\mu)}\,\bigr]=n$ on $\mathbb{C}$ and the sufficiency of the condition (11.23), which permits the following determination of the control vector function $U(.)$ as an alternative to Equation (11.21). Let the control vector $U(t)$ be the solution of the following time-invariant linear vector differential equation:
\[
B^{(\mu)}U^{\mu}(t)=A_{\nu}\bigl(\Phi(t_{0},t)B_{inv}\bigr)^{T}G_{ctB_{inv}}^{-1}(t_{1},t_{0})\bigl[\Phi(t_{0},t_{1})X_{1}-X_{0}\bigr], \quad (D.109)
\]
where $X_{0}$ and $X_{1}$ are arbitrary vectors in $\mathbb{R}^{n}$. We use $\mathbf{B}^{(\mu)}=B_{inv}A_{\nu}^{-1}B^{(\mu)}$ and replace $B^{(\mu)}U^{\mu}(t)$ by the right-hand side of Equation (D.109) in the second Equation (11.10), which for the IO system (2.15) at $t=t_{1}$ then reads:
\[
X(t_{1};t_{0};X_{0};U^{\mu})=\Phi(t_{1},t_{0})\Bigl\{X_{0}+\Bigl[\int_{t_{0}}^{t_{1}}\Phi(t_{0},\tau)B_{inv}A_{\nu}^{-1}A_{\nu}\bigl(\Phi(t_{0},\tau)B_{inv}\bigr)^{T}d\tau\Bigr]G_{ctB_{inv}}^{-1}(t_{1},t_{0})\bigl[\Phi(t_{0},t_{1})X_{1}-X_{0}\bigr]\Bigr\}
\]
\[
=\Phi(t_{1},t_{0})\Bigl\{X_{0}+G_{ctB_{inv}}(t_{1},t_{0})\,G_{ctB_{inv}}^{-1}(t_{1},t_{0})\bigl[\Phi(t_{0},t_{1})X_{1}-X_{0}\bigr]\Bigr\}
=\Phi(t_{1},t_{0})\Phi(t_{0},t_{1})X_{1}=X_{1}.
\]
The beginning and the end of these equations prove $X(t_{1};t_{0};X_{0};U^{\mu})=X_{1}$. The control vector $U(t)$ (D.109) steers the system state from an arbitrary initial state $X_{0}$ into an arbitrary final state $X_{1}$ at $t_{1}\in In\mathfrak{T}_{0}$. The IO system (2.15) with $A_{0}=O_{N}$ is robust state controllable relative to arbitrary matrices $A_{i}$, $i=1,\ldots,\nu-1$.
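The rank count carried out at $s=0$ above is easy to reproduce numerically. The following sketch builds the companion-form pair with $A_{0}=O_{N}$ from arbitrary assumed blocks (the sizes and the random matrices are illustrative, not taken from the book) and confirms that the rank of $\bigl[\,(sI_{n}-A)|_{s=0}\;\vdots\;\mathbf{B}^{(\mu)}\,\bigr]$ equals $(\nu-1)N+\operatorname{rank}B^{(\mu)}$, so that full rank $n=\nu N$ holds exactly when $\operatorname{rank}B^{(\mu)}=N$.

```python
import numpy as np

rng = np.random.default_rng(2)
N, nu, r = 3, 3, 4                      # assumed block size, system order, input width
n = nu * N

A_blocks = [rng.standard_normal((N, N)) for _ in range(nu)]   # A_1, ..., A_nu
A_nu_inv = np.linalg.inv(A_blocks[-1])                        # A_nu nonsingular (a.s.)
B_mu = rng.standard_normal((N, r))                            # plays the role of B^(mu)

# Companion-form A with A_0 = O_N, and B = col(O, ..., O, A_nu^{-1} B^(mu))
A = np.zeros((n, n))
for i in range(nu - 1):
    A[i*N:(i+1)*N, (i+1)*N:(i+2)*N] = np.eye(N)               # +I_N on the superdiagonal
last_row_blocks = [np.zeros((N, N))] + A_blocks[:-1]          # A_0 = 0, A_1, ..., A_{nu-1}
A[-N:, :] = -np.hstack([A_nu_inv @ Ai for Ai in last_row_blocks])
B = np.vstack([np.zeros(((nu - 1) * N, r)), A_nu_inv @ B_mu])

M = np.hstack([0.0 * np.eye(n) - A, B])                       # [ (sI - A)|_{s=0} , B ]
print(np.linalg.matrix_rank(M),
      (nu - 1) * N + np.linalg.matrix_rank(B_mu),
      n)                                                      # all three coincide here
```

With $r\geq N$ and a generic $B^{(\mu)}$ the three printed numbers agree, which is the content of condition (11.23).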

D.13 Proof of Theorem 181

Lemma 234 (Full rank of $C_{IOout}$ and of $B^{(\mu)}$). If $\nu\geq 1$ and $\mu\geq 0$, then the full rank $N$ of the matrix $B^{(\mu)}$ implies the full rank $N$ of the matrix $C_{IOout}$ (11.37),
\[
C_{IOout}=\Bigl[\,C\mathbf{B}^{(\mu)}\;\vdots\;CA\mathbf{B}^{(\mu)}\;\vdots\;CA^{2}\mathbf{B}^{(\mu)}\;\vdots\;\cdots\;\vdots\;CA^{n-1}\mathbf{B}^{(\mu)}\,\Bigr], \quad (D.110)
\]
i.e.,
\[
\operatorname{rank}B^{(\mu)}=N\;\Longrightarrow\;\operatorname{rank}C_{IOout}=
\operatorname{rank}\Bigl[\,C\mathbf{B}^{(\mu)}\;\vdots\;CA\mathbf{B}^{(\mu)}\;\vdots\;\cdots\;\vdots\;CA^{n-1}\mathbf{B}^{(\mu)}\,\Bigr]=N. \quad (D.111)
\]
Proof. Simple algebraic calculations show that Equations (11.5)-(11.7) (Section 11.1) yield the following:
\[
CA=\bigl[\,\underbrace{O_{N}\;\;I_{N}\;\;O_{N}\;\cdots\;O_{N}}_{\nu\ \text{block columns}}\,\bigr]\in\mathbb{R}^{N\times\nu N},\qquad CA\mathbf{B}^{(\mu)}=O_{N,r},
\]

\[
CA^{2}=\bigl[\,O_{N}\;\;O_{N}\;\;I_{N}\;\cdots\;O_{N}\;\;O_{N}\,\bigr]\in\mathbb{R}^{N\times\nu N},\qquad CA^{2}\mathbf{B}^{(\mu)}=O_{N,r},
\]
\[
\ldots,\qquad
CA^{\nu-1}=\bigl[\,O_{N}\;\;O_{N}\;\;O_{N}\;\cdots\;O_{N}\;\;I_{N}\,\bigr]\in\mathbb{R}^{N\times\nu N},\qquad
CA^{\nu-1}\mathbf{B}^{(\mu)}=A_{\nu}^{-1}B^{(\mu)},
\]
which imply
\[
\operatorname{rank}CA^{\nu-1}\mathbf{B}^{(\mu)}=\operatorname{rank}A_{\nu}^{-1}B^{(\mu)}=\operatorname{rank}B^{(\mu)},
\]
because the nonsingular matrix $A_{\nu}^{-1}$ does not influence the rank of the product $A_{\nu}^{-1}B^{(\mu)}$, which therefore equals the rank of $B^{(\mu)}$. Therefore,
\[
\operatorname{rank}B^{(\mu)}=N=\operatorname{rank}CA^{\nu-1}\mathbf{B}^{(\mu)}\;\Longrightarrow\;
\operatorname{rank}C_{IOout}=\operatorname{rank}\Bigl[\,O_{N}\;\vdots\;\cdots\;\vdots\;O_{N}\;\vdots\;A_{\nu}^{-1}B^{(\mu)}\;\vdots\;CA^{\nu}\mathbf{B}^{(\mu)}\;\vdots\;\cdots\;\vdots\;CA^{\nu N-1}\mathbf{B}^{(\mu)}\,\Bigr]=N,
\]
i.e.,
\[
\operatorname{rank}B^{(\mu)}=N\;\Longrightarrow\;\operatorname{rank}C_{IOout}=N. \quad (D.112)
\]
This proves Lemma 234.

Proof. This is the proof of Theorem 191. Let the condition (11.45) be valid. Lemma 234, the equivalence of the condition $\operatorname{rank}C_{IOout}=N$ and the linear independence of the rows of $H_{S}(t,t_{0})$ (10.21) due to Theorem 173, together with
\[
H_{S}(t,\tau)=C\Phi(t,\tau)\mathbf{B}^{(\mu)}=C\Phi(t,\tau)B_{inv}A_{\nu}^{-1}B^{(\mu)},
\]
in view of Equation (11.32), imply the nonsingularity of $G_{C\Phi B_{inv}}(t_{1},t_{0})$ on $[t_{0},t_{1}]$ for every $t_{1}>t_{0}$, $(t_{1},t_{0})\in\mathfrak{T}\times\mathfrak{T}$. The right-hand side of Equation (11.46) is therefore well defined, and so is its solution $U(t)$. Equations (11.31), (11.6) (Section 11.1), and (11.32) result in:
\[
Y(t;t_{0};X_{0};U^{\mu})=C\Phi(t,t_{0})X_{0}+\int_{t_{0}}^{t}C\Phi(t,\tau)B_{inv}A_{\nu}^{-1}B^{(\mu)}U^{\mu}(\tau)\,d\tau. \quad (D.113)
\]

In order to verify that $U(t)$ (11.46) steers an arbitrary initial output $Y_{0}=CX_{0}$ to an arbitrary final output $Y_{1}$ at $t=t_{1}$, we eliminate $B^{(\mu)}U^{\mu}(\tau)$ from Equation (D.113) by using the right-hand side of Equation (11.46):
\[
Y(t_{1};t_{0};X_{0};U^{\mu})=C\Phi(t_{1},t_{0})X_{0}+\Bigl[\int_{t_{0}}^{t_{1}}C\Phi(t_{1},\tau)B_{inv}A_{\nu}^{-1}A_{\nu}\bigl(C\Phi(t_{1},\tau)B_{inv}\bigr)^{T}d\tau\Bigr]G_{C\Phi B_{inv}}^{-1}(t_{1},t_{0})\bigl[Y_{1}-C\Phi(t_{1},t_{0})X_{0}\bigr]
\]
\[
=C\Phi(t_{1},t_{0})X_{0}+G_{C\Phi B_{inv}}(t_{1},t_{0})\,G_{C\Phi B_{inv}}^{-1}(t_{1},t_{0})\bigl[Y_{1}-C\Phi(t_{1},t_{0})X_{0}\bigr]=Y_{1}.
\]
The result $Y(t_{1};t_{0};X_{0};U^{\mu})=Y_{1}$ proves that the control $U(t)$ (11.46) steers an arbitrary initial output $Y_{0}=CX_{0}$ to an arbitrary final output $Y_{1}$ at $t=t_{1}$. The IO system (2.15) is output controllable (Definition 165).
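The key finite test in this proof, full row rank $N$ of the output controllability matrix built from the blocks $CA^{i}\mathbf{B}^{(\mu)}$, can be checked directly. A minimal Python sketch follows; the matrices are arbitrary assumptions standing in for the book's $A$, $\mathbf{B}^{(\mu)}$, and $C$, so the function only illustrates the form of the test.

```python
import numpy as np

def output_controllability_matrix(A, B, C):
    """Stack [CB, CAB, ..., CA^{n-1}B] horizontally, as in (D.110)."""
    n = A.shape[0]
    blocks, M = [], B.copy()
    for _ in range(n):
        blocks.append(C @ M)   # C A^i B
        M = A @ M
    return np.hstack(blocks)

rng = np.random.default_rng(3)
n, r, N = 5, 2, 3
A = rng.standard_normal((n, n))        # assumed state matrix
B = rng.standard_normal((n, r))        # assumed input matrix
C = rng.standard_normal((N, n))        # assumed output matrix

C_out = output_controllability_matrix(A, B, C)
print(C_out.shape, np.linalg.matrix_rank(C_out) == N)   # full row rank N is the criterion
```

Full row rank $N$ of this constant matrix is exactly the algebraic counterpart of the nonsingular Gram matrix used in the verification above.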

D.14 Proof of Theorem 185

Proof. Necessity. Let the IO system be output controllable.

1) Let $Y_{0^{-}}^{\nu-1}$ and $U_{0^{-}}^{\mu-1}$ be arbitrary. The proof is by contradiction. Let it be supposed that the rows of $\Gamma_{IOU}(.)$ are not linearly independent on $[t_{0},t_{1}]$, i.e., that they are linearly dependent on $[0,t_{1}]$, $t_{1}\in In\mathfrak{T}_{0}$. Then there is a nonzero constant $1\times N$ vector $a$,
\[
a\in\mathbb{R}^{1\times N},\quad a\neq 0_{N}^{T}, \quad (D.114)
\]
such that
\[
a\Gamma_{IOU}(t_{1},t)=0_{r}^{T},\quad\forall t\in[0,t_{1}],\; t_{1}\in In\mathfrak{T}_{0}. \quad (D.115)
\]
Premultiplying both sides of the first equation (2.40) by $a$, for $t=t_{1}$ the result is
\[
aY(t_{1};Y_{0^{-}}^{\nu-1};U)=\int_{0^{-}}^{t_{1}}a\Gamma_{IOU}(t_{1},\tau)U(\tau)\,d\tau+a\Gamma_{IOu_{0}}(t_{1})U_{0^{-}}^{\mu-1}+a\Gamma_{IOy_{0}}(t_{1})Y_{0^{-}}^{\nu-1},
\]
i.e.,
\[
aY(t_{1};Y_{0^{-}}^{\nu-1};U)=a\Gamma_{IOu_{0}}(t_{1})U_{0^{-}}^{\mu-1}+a\Gamma_{IOy_{0}}(t_{1})Y_{0^{-}}^{\nu-1},\quad t_{1}\in In\mathfrak{T}_{0},
\]

due to Equation (D.115). Let, for any chosen $t_{1}\in In\mathfrak{T}_{0}$, $Y(t_{1})=0_{N}$, so that
\[
aY(t_{1};Y_{0^{-}}^{\nu-1};U)=0=a\Gamma_{IOu_{0}}(t_{1})U_{0^{-}}^{\mu-1}+a\Gamma_{IOy_{0}}(t_{1})Y_{0^{-}}^{\nu-1},\quad
\forall\bigl(U_{0^{-}}^{\mu-1},Y_{0^{-}}^{\nu-1}\bigr)\in\mathbb{R}^{\mu r}\times\mathbb{R}^{\nu N}.
\]
Since this holds for any $t_{1}\in In\mathfrak{T}_{0}$, $Y_{0^{-}}^{\nu-1}$, and $U_{0^{-}}^{\mu-1}$, it implies $a=0_{N}^{T}$, which contradicts (D.114). Hence, the rows of $\Gamma_{IOU}(t_{1},t)$ are linearly independent on $[t_{0},t_{1}]$ for $t_{1}\in In\mathfrak{T}_{0}$.

2) The linear independence of the rows of $\Gamma_{IOU}(t_{1},\tau)$ on $[t_{0},t_{1}]$ for $t_{1}\in In\mathfrak{T}_{0}$ implies the linear independence of the rows of $\Gamma_{IOU}(t)$ on $[t_{0},t_{1}]$ for $t_{1}\in In\mathfrak{T}_{0}$, due to the following fact implied by Equation (D.115):
\[
\int_{0^{-}}^{t_{1}}a\Gamma_{IOU}(\tau)U(t_{1},\tau)\,d\tau=\int_{0^{-}}^{t_{1}}a\Gamma_{IOU}(t_{1},\tau)U(\tau)\,d\tau=0.
\]
The Laplace transform $\mathcal{L}\{\Gamma_{IOU}(t)\}$ is a linear operator and represents the system transfer function $G_{IOU}(s)$ due to Equations (2.46). Altogether, the linear independence of the rows of $\Gamma_{IOU}(t_{1},\tau)$ on $[t_{0},t_{1}]$ for $t_{1}\in In\mathfrak{T}_{0}$ implies the linear independence of the rows of the transfer function matrix $G_{IOU}(s)$ of the IO system (2.15) (see Equation (2.46)) on $\mathbb{C}$. This proves the necessity of the condition 2).

3) The equivalence of the conditions 1) and 3) results from Theorem 114, Section 7.4. It follows also from 2) of Lemma 194.

4) Another form of the system response $Y(t;Y_{0^{-}}^{\nu-1};U^{\mu})$ (11.58) is the following for $t=t_{1}$:
\[
Y(t_{1};t_{0};Y_{0^{-}}^{\nu-1};U^{\mu})=\int_{t_{0}^{-}}^{t_{1}}\Theta_{IO}(t_{1},\tau)A_{\nu}^{-1}B^{(\mu)}U^{\mu}(\tau)\,d\tau+\Gamma_{IOy_{0}}(t_{1})Y_{0^{-}}^{\nu-1},
\]
\[
\Gamma_{IOy_{0}}(t_{1})=\int_{t_{0}^{-}}^{t_{1}}\Theta_{IO}(t,\tau)A_{\nu}^{-1}A^{(\nu)}\mathcal{L}^{-1}\bigl\{Z_{N}^{(\nu-1)}(s)\bigr\}\,d\tau. \quad (D.116)
\]
The proof is by contradiction. Let it be supposed that the rows of the matrix product $\Theta_{IO}(t_{1},t_{0})A_{\nu}^{-1}B^{(\mu)}$ are linearly dependent on $[t_{0},t_{1}]$, $t_{1}\in In\mathfrak{T}_{0}$. Then there is a nonzero constant $1\times N$ vector $g$,
\[
g\in\mathbb{R}^{1\times N},\quad g\neq 0_{N}^{T}, \quad (D.117)
\]
such that
\[
g\Theta_{IO}(t_{1},t)A_{\nu}^{-1}B^{(\mu)}=0_{(\mu+1)r}^{T},\quad\forall t\in[t_{0},t_{1}]. \quad (D.118)
\]

Premultiplying both sides of the first equation (D.116) by $g$, the result is the following:
\[
gY(t_{1};t_{0};Y_{0^{-}}^{\nu-1};U)=g\Gamma_{IOy_{0}}(t_{1})Y_{0^{-}}^{\nu-1}+\int_{t_{0}^{-}}^{t_{1}}g\Theta_{IO}(t_{1},\tau)A_{\nu}^{-1}B^{(\mu)}U^{\mu}(\tau)\,d\tau,
\]
i.e.,
\[
gY(t_{1};Y_{0^{-}}^{\nu-1};U)=g\Gamma_{IOy_{0}}(t_{1})Y_{0^{-}}^{\nu-1},\quad t_{1}\in In\mathfrak{T}_{0},
\]
due to (D.118). Let any $t_{1}\in In\mathfrak{T}_{0}$ be chosen and let $Y(t_{1})=0_{N}$, so that
\[
gY(t_{1};Y_{0^{-}}^{\nu-1};U)=0=g\Gamma_{IOy_{0}}(t_{1})Y_{0^{-}}^{\nu-1},\quad
\forall\bigl(U_{0^{-}}^{\mu-1},Y_{0^{-}}^{\nu-1}\bigr)\in\mathbb{R}^{\mu r}\times\mathbb{R}^{\nu N}.
\]
This holds for any $t_{1}\in In\mathfrak{T}_{0}$ and any $Y_{0^{-}}^{\nu-1}$, which implies $g=0_{N}^{T}$ and contradicts (D.117). Hence, the rows of $\Theta_{IO}(t,t_{0})A_{\nu}^{-1}B^{(\mu)}$ are linearly independent on $[t_{0},t_{1}]$ for $t_{1}\in In\mathfrak{T}_{0}$. This and the statement 1) of Lemma 107 (Section 7.4) prove that the rows of $\Theta_{IO}(t,t_{0})$ are linearly independent on $[t_{0},t_{1}]$ for $t_{1}\in In\mathfrak{T}_{0}$.

5) The necessity of the condition 5) follows from the condition 4) of Lemma 194 (Section 11.2) and Theorem 114.

6) a) We apply the system response $Y(t_{1};t_{0};Y_{0^{-}}^{\nu-1};U)$ in the form (2.40) for $t=t_{1}$:
\[
Y(t_{1};t_{0};Y_{0^{-}}^{\nu-1};U)=\int_{t_{0}^{-}}^{t_{1}}\Gamma_{IOU}(t_{1},\tau)U(\tau)\,d\tau+\Gamma_{IOu_{0}}(t_{1})U_{0^{-}}^{\mu-1}+\Gamma_{IOy_{0}}(t_{1})Y_{0^{-}}^{\nu-1}. \quad (D.119)
\]
It and the conditions 1) and 2) imply
\[
U(t)=\bigl(\Gamma_{IOU}(t_{1},t)\bigr)^{T}G_{\Gamma IO}^{-1}(t_{1},t_{0})\bigl[Y_{1}-\Gamma_{IOu_{0}}(t_{1})U_{0^{-}}^{\mu-1}-\Gamma_{IOy_{0}}(t_{1})Y_{0^{-}}^{\nu-1}\bigr], \quad (D.120)
\]
which is Equation (11.75).

b) We start with the introduction of any nonsingular square matrix $R\in\mathbb{R}^{(\mu+1)r\times(\mu+1)r}$, $\det R\neq 0$. The system response $Y(t;Y_{0^{-}}^{\nu-1};U)$ (2.40) can be set in the following equivalent form that incorporates the matrix $R$ and its inverse $R^{-1}$, i.e., their product $RR^{-1}=I_{(\mu+1)r}$:
\[
Y(t;t_{0};Y_{0^{-}}^{\nu-1};U^{\mu})=\int_{t_{0}}^{t}\Theta_{IO}(t,\tau)\Bigl[A_{\nu}^{-1}B^{(\mu)}RR^{-1}U^{\mu}(\tau)+A_{\nu}^{-1}A^{(\nu)}\mathcal{L}^{-1}\bigl\{Z_{N}^{(\nu-1)}(s)\bigr\}Y_{0^{\mp}}^{\nu-1}\Bigr]d\tau. \quad (D.121)
\]

This and the conditions 4) and 5), i.e., Equation (11.74) (Section 11.2), imply the control vector function $U(.)$ defined by the following time-invariant linear differential equation:
\[
R^{-1}U^{\mu}(t)=\bigl(\Theta_{IO}(t_{1},t)A_{\nu}^{-1}B^{(\mu)}R\bigr)^{T}G_{\Theta IOABR}^{-1}(t_{1},t_{0})\Bigl[Y_{1}-A_{\nu}^{-1}A^{(\nu)}\mathcal{L}^{-1}\bigl\{Z_{N}^{(\nu-1)}(s)\bigr\}Y_{0^{\mp}}^{\nu-1}\Bigr]. \quad (D.122)
\]
This is Equation (11.76).

c) The system response $Y(t;t_{0};Y_{0^{-}}^{\nu-1};U^{\mu})$ (11.58) can also be set in the following equivalent form:
\[
Y(t;t_{0};Y_{0^{-}}^{\nu-1};U^{\mu})=\int_{t_{0}}^{t}\Bigl[\Theta_{IO}(t,\tau)A_{\nu}^{-1}B^{(\mu)}U^{\mu}(\tau)+\Theta_{IO}(t,\tau)A_{\nu}^{-1}A^{(\nu)}\mathcal{L}^{-1}\bigl\{Z_{N}^{(\nu-1)}(s)\bigr\}Y_{0^{\mp}}^{\nu-1}\Bigr]d\tau.
\]
For $t=t_{1}$ and $Y(t_{1};t_{0};Y_{0^{-}}^{\nu-1};U)=Y_{1}$ it yields
\[
\int_{t_{0}}^{t_{1}}\Theta_{IO}(t_{1},\tau)A_{\nu}^{-1}B^{(\mu)}U^{\mu}(\tau)\,d\tau=Y_{1}-\int_{t_{0}}^{t_{1}}\Theta_{IO}(t_{1},\tau)A_{\nu}^{-1}A^{(\nu)}\mathcal{L}^{-1}\bigl\{Z_{N}^{(\nu-1)}(s)\bigr\}Y_{0^{\mp}}^{\nu-1}\,d\tau.
\]
This and the condition 5), i.e., Equation (11.71), imply
\[
B^{(\mu)}U^{\mu}(t)=A_{\nu}\Theta_{IO}^{T}(t_{1},t)G_{\Theta IO}^{-1}(t_{1},t_{0})\Bigl[Y_{1}-\int_{t_{0}}^{t_{1}}\Theta_{IO}(t_{1},\tau)A_{\nu}^{-1}A^{(\nu)}\mathcal{L}^{-1}\bigl\{Z_{N}^{(\nu-1)}(s)\bigr\}Y_{0^{\mp}}^{\nu-1}\,d\tau\Bigr],
\]
which is Equation (11.77).

Sufficiency. Let the initial conditions $U_{0^{-}}^{\mu-1}$ and $Y_{0^{-}}^{\nu-1}$, and the final $Y_{1}$ be arbitrary and fixed. Let any $t_{1}\in In\mathfrak{T}_{0}$ be chosen. Let any of the conditions 1)-5) hold.

a) Let the control vector be defined by Equation (11.75):
\[
U(t)=\bigl(\Gamma_{IOU}(t_{1},t)\bigr)^{T}G_{\Gamma IO}^{-1}(t_{1},t_{0})\bigl[Y_{1}-\Gamma_{IOu_{0}}(t_{1})U_{0^{-}}^{\mu-1}-\Gamma_{IOy_{0}}(t_{1})Y_{0^{-}}^{\nu-1}\bigr].
\]
We use this equation to eliminate $U(\tau)$ from Equation (D.119):
\[
Y(t_{1};t_{0};Y_{0^{-}}^{\nu-1};U)=\Bigl[\int_{0^{-}}^{t_{1}}\Gamma_{IOU}(t_{1},\tau)\bigl(\Gamma_{IOU}(t_{1},\tau)\bigr)^{T}d\tau\Bigr]G_{\Gamma IO}^{-1}(t_{1},t_{0})\bigl[Y_{1}-\Gamma_{IOu_{0}}(t_{1})U_{0^{-}}^{\mu-1}-\Gamma_{IOy_{0}}(t_{1})Y_{0^{-}}^{\nu-1}\bigr]+\Gamma_{IOu_{0}}(t_{1})U_{0^{-}}^{\mu-1}+\Gamma_{IOy_{0}}(t_{1})Y_{0^{-}}^{\nu-1}
\]

\[
=G_{\Gamma IO}(t_{1},t_{0})\,G_{\Gamma IO}^{-1}(t_{1},t_{0})\bigl[Y_{1}-\Gamma_{IOu_{0}}(t_{1})U_{0^{-}}^{\mu-1}-\Gamma_{IOy_{0}}(t_{1})Y_{0^{-}}^{\nu-1}\bigr]+\Gamma_{IOu_{0}}(t_{1})U_{0^{-}}^{\mu-1}+\Gamma_{IOy_{0}}(t_{1})Y_{0^{-}}^{\nu-1}
\]
\[
=Y_{1}-\Gamma_{IOu_{0}}(t_{1})U_{0^{-}}^{\mu-1}-\Gamma_{IOy_{0}}(t_{1})Y_{0^{-}}^{\nu-1}+\Gamma_{IOu_{0}}(t_{1})U_{0^{-}}^{\mu-1}+\Gamma_{IOy_{0}}(t_{1})Y_{0^{-}}^{\nu-1}=Y_{1}.
\]
The chosen control vector function $U(.)$ defined by (11.75) steers the system output from any chosen initial output $Y_{0^{-}}$ to any accepted final output $Y_{1}$ at the chosen moment $t_{1}\in In\mathfrak{T}_{0}$. Definition 165 is satisfied. The IO system (2.15) is output controllable.

b) Let the control vector function be defined by Equation (11.76), which reads
\[
R^{-1}U^{\mu}(t)=\bigl(\Theta_{IO}(t_{1},t)A_{\nu}^{-1}B^{(\mu)}R\bigr)^{T}G_{\Theta IOABR}^{-1}(t_{1},t_{0})\Bigl[Y_{1}-A_{\nu}^{-1}A^{(\nu)}\mathcal{L}^{-1}\bigl\{Z_{N}^{(\nu-1)}(s)\bigr\}Y_{0^{-}}^{\nu-1}\Bigr]. \quad (D.123)
\]
Equation (D.123) transforms Equation (D.116) into the following for $t=t_{1}$:
\[
Y(t_{1};t_{0};Y_{0^{-}}^{\nu-1};U^{\mu})=\Bigl[\int_{t_{0}^{-}}^{t_{1}}\Theta_{IO}(t_{1},\tau)A_{\nu}^{-1}B^{(\mu)}R\bigl(\Theta_{IO}(t_{1},\tau)A_{\nu}^{-1}B^{(\mu)}R\bigr)^{T}d\tau\Bigr]G_{\Theta IOABR}^{-1}(t_{1},t_{0})\Bigl[Y_{1}-A_{\nu}^{-1}A^{(\nu)}\mathcal{L}^{-1}\bigl\{Z_{N}^{(\nu-1)}(s)\bigr\}Y_{0^{-}}^{\nu-1}\Bigr]+\Gamma_{IOy_{0}}(t_{1})Y_{0^{-}}^{\nu-1}
\]
\[
=G_{\Theta IOABR}(t_{1},t_{0})\,G_{\Theta IOABR}^{-1}(t_{1},t_{0})\Bigl[Y_{1}-A_{\nu}^{-1}A^{(\nu)}\mathcal{L}^{-1}\bigl\{Z_{N}^{(\nu-1)}(s)\bigr\}Y_{0^{-}}^{\nu-1}\Bigr]+A_{\nu}^{-1}A^{(\nu)}\mathcal{L}^{-1}\bigl\{Z_{N}^{(\nu-1)}(s)\bigr\}Y_{0^{-}}^{\nu-1}
\]
\[
=Y_{1}-A_{\nu}^{-1}A^{(\nu)}\mathcal{L}^{-1}\bigl\{Z_{N}^{(\nu-1)}(s)\bigr\}Y_{0^{-}}^{\nu-1}+A_{\nu}^{-1}A^{(\nu)}\mathcal{L}^{-1}\bigl\{Z_{N}^{(\nu-1)}(s)\bigr\}Y_{0^{-}}^{\nu-1}=Y_{1}.
\]
The chosen control vector function $U(.)$ defined by (11.76) steers the system output from any chosen initial output $Y_{0^{-}}$ to any accepted final output $Y_{1}$ at

the chosen moment $t_{1}\in In\mathfrak{T}_{0}$. Definition 165 is satisfied. The IO system (2.15) is output controllable.

c) We replace $B^{(\mu)}U^{\mu}(t)$ by the right-hand side of Equation (11.77) in Equation (11.58) for $t=t_{1}$. The integral term of the response then becomes
\[
\Bigl[\int_{t_{0}}^{t_{1}}\Theta_{IO}(t_{1},\tau)A_{\nu}^{-1}A_{\nu}\Theta_{IO}^{T}(t_{1},\tau)\,d\tau\Bigr]G_{\Theta IO}^{-1}(t_{1},t_{0})\Bigl[Y_{1}-\int_{t_{0}}^{t_{1}}\Theta_{IO}(t_{1},\tau)A_{\nu}^{-1}A^{(\nu)}\mathcal{L}^{-1}\bigl\{Z_{N}^{(\nu-1)}(s)\bigr\}Y_{0^{\mp}}^{\nu-1}\,d\tau\Bigr],
\]
so that
\[
Y(t_{1};t_{0};Y_{0^{-}}^{\nu-1};U^{\mu})=G_{\Theta IO}(t_{1},t_{0})\,G_{\Theta IO}^{-1}(t_{1},t_{0})\Bigl[Y_{1}-\int_{t_{0}}^{t_{1}}\Theta_{IO}(t_{1},\tau)A_{\nu}^{-1}A^{(\nu)}\mathcal{L}^{-1}\bigl\{Z_{N}^{(\nu-1)}(s)\bigr\}Y_{0^{\mp}}^{\nu-1}\,d\tau\Bigr]
+\int_{t_{0}}^{t_{1}}\Theta_{IO}(t_{1},\tau)A_{\nu}^{-1}A^{(\nu)}\mathcal{L}^{-1}\bigl\{Z_{N}^{(\nu-1)}(s)\bigr\}Y_{0^{\mp}}^{\nu-1}\,d\tau=Y_{1}.
\]
The result $Y(t_{1};t_{0};Y_{0^{-}}^{\nu-1};U^{\mu})=Y_{1}$ proves that the control vector function $U(.)$ (11.77) steers any initial output $Y_{0^{-}}$ to any final output $Y_{1}$ at a moment $t_{1}\in In\mathfrak{T}_{0}$. Definition 165 is satisfied. The IO system (2.15) is output controllable.
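Each of the three constructions above stands or falls with the nonsingularity of a Gram matrix of the form $\int_{t_{0}}^{t_{1}}\Gamma(t_{1},\tau)\Gamma^{T}(t_{1},\tau)\,d\tau$, which (Theorem 114) is nonsingular exactly when the rows of $\Gamma(t_{1},\cdot)$ are linearly independent on $[t_{0},t_{1}]$. The following Python sketch illustrates that criterion numerically; the kernels used are simple assumed examples, not the book's $\Gamma_{IOU}$.

```python
import numpy as np
from scipy.integrate import trapezoid

def gram_matrix(Gamma, t0, t1, K=2001):
    """Gram matrix of the rows of Gamma(t1, tau) over tau in [t0, t1]."""
    taus = np.linspace(t0, t1, K)
    vals = np.array([Gamma(t1, tau) for tau in taus])        # shape (K, N, r)
    return trapezoid(np.einsum('kij,klj->kil', vals, vals), taus, axis=0)

# Independent rows (1 and tau): the Gram matrix is nonsingular.
Gamma_ind = lambda t1, tau: np.array([[1.0], [tau]])
print(np.linalg.det(gram_matrix(Gamma_ind, 0.0, 1.0)))       # about 1/12, nonzero

# Dependent rows (second row = 2 * first): the Gram matrix is singular.
Gamma_dep = lambda t1, tau: np.array([[1.0, tau], [2.0, 2.0 * tau]])
print(np.linalg.det(gram_matrix(Gamma_dep, 0.0, 1.0)))       # ~0
```

In practice one checks the determinant (or condition number) of the quadrature approximation of the Gram matrix rather than the linear independence of the rows directly.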

D.15 Proof of Theorem 215

Proof. Let $(t_{0},t_{1}>t_{0})\in In\mathfrak{T}_{0}\times In\mathfrak{T}_{0}$ be arbitrarily chosen.

Necessity. Let the IIO system (11.179), (11.180) be output controllable.

1) Let $Y_{0^{-}}^{\nu-1}$ and $U_{0^{-}}^{\mu-1}$ be arbitrary. The proof is by contradiction. Let it be supposed that the rows of $\Gamma_{IIO}(t,t_{0})$ are not linearly independent on $[t_{0},t_{1}]$, i.e., that they are linearly dependent on $[t_{0},t_{1}]$, $t_{1}\in In\mathfrak{T}_{0}$. Then there is a nonzero constant $1\times N$ vector
\[
a\in\mathbb{R}^{1\times N},\quad a\neq 0_{N}^{T}, \quad (D.124)
\]
such that
\[
a\Gamma_{IIO}(t,t_{0})=0_{r}^{T},\quad\forall t\in[t_{0},t_{1}]. \quad (D.125)
\]
Premultiplying Equation (11.241) by $a$, the result is, for $t_{0}=0$:
\[
aY(t;R_{0^{-}}^{\alpha-1};Y_{0^{-}}^{\nu-1};U)=\int_{0^{-}}^{t}a\Gamma_{IIO}(t,\tau)U(\tau)\,d\tau+a\Gamma_{IIOU_{0}}(t)U_{0^{-}}^{\mu-1}+a\Gamma_{IIOR_{0}}(t)R_{0^{-}}^{\alpha-1}+
\begin{cases}
a\Gamma_{IIOY_{0}}(t)Y_{0^{-}}^{\nu-1}, & \nu\geq 1,\\
0_{N}, & \nu=0,
\end{cases}\quad\forall t\in\mathfrak{T}_{0},
\]
i.e.,
\[
aY(t;R_{0^{-}}^{\alpha-1};Y_{0^{-}}^{\nu-1};U)=a\Gamma_{IIOU_{0}}(t)U_{0^{-}}^{\mu-1}+a\Gamma_{IIOR_{0}}(t)R_{0^{-}}^{\alpha-1}+
\begin{cases}
a\Gamma_{IIOY_{0}}(t)Y_{0^{-}}^{\nu-1}, & \nu\geq 1,\\
0_{N}, & \nu=0,
\end{cases}\quad\forall t\in\mathfrak{T}_{0},
\]
due to (D.125). Let $Y(t_{1})=0_{N}$, so that
\[
aY(t_{1};R_{0^{-}}^{\alpha-1};Y_{0^{-}}^{\nu-1};U)=0_{N}=a\Gamma_{IIOU_{0}}(t_{1})U_{0^{-}}^{\mu-1}+a\Gamma_{IIOR_{0}}(t_{1})R_{0^{-}}^{\alpha-1}+
\begin{cases}
a\Gamma_{IIOY_{0}}(t_{1})Y_{0^{-}}^{\nu-1}, & \nu\geq 1,\\
0_{N}, & \nu=0,
\end{cases}
\]
\[
\forall\bigl(t_{1},U_{0^{-}}^{\mu-1},R_{0^{-}}^{\alpha-1},Y_{0^{-}}^{\nu-1}\bigr)\in In\mathfrak{T}_{0}\times\mathbb{R}^{\mu r}\times\mathbb{R}^{\alpha\rho}\times\mathbb{R}^{\nu N}.
\]
Since this holds for any $t_{1}$, $Y_{0^{-}}^{\nu-1}$, $R_{0^{-}}^{\alpha-1}$, and $U_{0^{-}}^{\mu-1}$, it implies $a=0_{N}^{T}$, which contradicts (D.124). Hence, the rows of $\Gamma_{IIO}(t,t_{0})$ are linearly independent on $[t_{0},t_{1}]$.

2) Let $t_{0}=0$. Since the Laplace transform $\mathcal{L}\{\Gamma_{IIO}(t)\}$ of $\Gamma_{IIO}(t)$ is the transfer function matrix $G_{IIO}(s)$ of the IIO system (11.179), (11.180) (see Equation (6.32)),
\[
\mathcal{L}\{\Gamma_{IIO}(t)\}=G_{IIO}(s), \quad (D.126)
\]
the necessity of the condition 2) is implied by the necessity of the condition 1).

3) The equivalence of the conditions 1) and 2) for the necessity results from Theorem 114, Section 7.4.

4) Equation (11.241) for $t=t_{1}$ reads:
\[
\int_{0^{-}}^{t_{1}}\Gamma_{IIO}(t_{1},\tau)U(\tau)\,d\tau=Y(t_{1};R_{0^{-}}^{\alpha-1};Y_{0^{-}}^{\nu-1};U^{\mu})-\Gamma_{IIOI_{0}}(t_{1})U_{0^{-}}^{\mu-1}-\Gamma_{IIOR_{0}}(t_{1})R_{0^{-}}^{\alpha-1}-\Gamma_{IIOY_{0}}(t_{1})Y_{0^{-}}^{\nu-1},\quad t_{1}\in In\mathfrak{T}_{0}.
\]
The conditions 1) and 2) imply the following solution $U(t)$ to the preceding equation for $Y(t_{1};R_{0^{-}}^{\alpha-1};Y_{0^{-}}^{\nu-1};U^{\mu})=Y_{1}$ and $t_{0}=0$:
\[
U(t)=\Gamma_{IIO}^{T}(t_{1},t)G_{\Gamma IIO}^{-1}(t_{1})\bigl[Y_{1}-\Gamma_{IIOI_{0}}(t_{1})U_{0^{-}}^{\mu-1}-\Gamma_{IIOR_{0}}(t_{1})R_{0^{-}}^{\alpha-1}-\Gamma_{IIOY_{0}}(t_{1})Y_{0^{-}}^{\nu-1}\bigr].
\]
This is Equation (11.244), which proves its necessity.

Sufficiency. Let the initial conditions $U_{0^{-}}^{\mu-1}$ and $Y_{0^{-}}^{\nu-1}$, and the final $Y_{1}$ be arbitrary and fixed. Let any $t_{1}\in In\mathfrak{T}_{0}$ be chosen. Let any of the necessary conditions 1)-3) hold. They guarantee the existence of the control vector function $U(.)$ defined by Equation (11.244). We replace $U(t)$ by the right-hand side of Equation (11.244) in the right-hand side of Equation (11.241) for $t=t_{1}$:
\[
Y(t_{1};R_{0^{-}}^{\alpha-1};Y_{0}^{\nu-1};U^{\mu})=\int_{0^{-}}^{t_{1}}\Gamma_{IIO}(t_{1},\tau)U(\tau)\,d\tau+\Gamma_{IIOI_{0}}(t_{1})U_{0^{-}}^{\mu-1}+\Gamma_{IIOR_{0}}(t_{1})R_{0^{-}}^{\alpha-1}+\Gamma_{IIOY_{0}}(t_{1})Y_{0^{-}}^{\nu-1}
\]
\[
=\Bigl[\int_{0^{-}}^{t_{1}}\Gamma_{IIO}(t_{1},\tau)\Gamma_{IIO}^{T}(t_{1},\tau)\,d\tau\Bigr]G_{\Gamma IIO}^{-1}(t_{1})\bigl[Y_{1}-\Gamma_{IIOI_{0}}(t_{1})U_{0^{-}}^{\mu-1}-\Gamma_{IIOR_{0}}(t_{1})R_{0^{-}}^{\alpha-1}-\Gamma_{IIOY_{0}}(t_{1})Y_{0^{-}}^{\nu-1}\bigr]+\Gamma_{IIOI_{0}}(t_{1})U_{0^{-}}^{\mu-1}+\Gamma_{IIOR_{0}}(t_{1})R_{0^{-}}^{\alpha-1}+\Gamma_{IIOY_{0}}(t_{1})Y_{0^{-}}^{\nu-1}
\]
\[
=G_{\Gamma IIO}(t_{1})\,G_{\Gamma IIO}^{-1}(t_{1})\bigl[Y_{1}-\Gamma_{IIOI_{0}}(t_{1})U_{0^{-}}^{\mu-1}-\Gamma_{IIOR_{0}}(t_{1})R_{0^{-}}^{\alpha-1}-\Gamma_{IIOY_{0}}(t_{1})Y_{0^{-}}^{\nu-1}\bigr]+\Gamma_{IIOI_{0}}(t_{1})U_{0^{-}}^{\mu-1}+\Gamma_{IIOR_{0}}(t_{1})R_{0^{-}}^{\alpha-1}+\Gamma_{IIOY_{0}}(t_{1})Y_{0^{-}}^{\nu-1}
\]
\[
=Y_{1}-\Gamma_{IIOI_{0}}(t_{1})U_{0^{-}}^{\mu-1}-\Gamma_{IIOR_{0}}(t_{1})R_{0^{-}}^{\alpha-1}-\Gamma_{IIOY_{0}}(t_{1})Y_{0^{-}}^{\nu-1}+\Gamma_{IIOI_{0}}(t_{1})U_{0^{-}}^{\mu-1}+\Gamma_{IIOR_{0}}(t_{1})R_{0^{-}}^{\alpha-1}+\Gamma_{IIOY_{0}}(t_{1})Y_{0^{-}}^{\nu-1}=Y_{1}.
\]

The chosen control vector function U (.) defined by (11.244) steers the system output from any chosen initial output Y0− to any accepted final output Y1 at the chosen moment t1 ∈ InT0 . Definition 165 is satisfied. The IIO system (11.179), (11.180) is output controllable.

Bibliography

[1] B. D. O. Anderson and J. B. Moore, Linear Optimal Control, Englewood Cliffs: Prentice Hall, 1971.
[2] P. J. Antsaklis and A. N. Michel, Linear Systems, New York: The McGraw Hill Companies, Inc., 1997; Boston: Birkhäuser, 2006.
[3] P. J. Antsaklis and A. N. Michel, A Linear Systems Primer, Boston: Birkhäuser, 2007.
[4] S. Barnett, Introduction to Mathematical Control Theory, Oxford: Clarendon Press, 1975.
[5] A. Benzaoiua, F. Mesquine and M. Benhayoun, Saturated Control of Linear Systems, Berlin: Springer, 2018.
[6] L. D. Berkovitz, Optimal Control Theory, New York: Springer Verlag, 2010.
[7] L. D. Berkovitz and N. G. Medhin, Nonlinear Optimal Control, Boca Raton, FL: Taylor & Francis-CRC, 2013.
[8] J. E. Bertram and P. E. Sarachik, "On optimal computer control", Proc. of the First International Congress of the Federation of Automatic Control, London: Butterworths, pp. 419-422, 1961.
[9] S. P. Bhattacharyya, A. Datta and L. H. Keel, Linear Control Theory: Structure, Robustness, and Optimization, Boca Raton, FL: CRC Press, Taylor & Francis Group, 2009.
[10] D. Biswa, Numerical Methods for Linear Control Systems, London: Elsevier Inc., 2004.

[11] P. Borne, G. Dauphin-Tanguy, J.-P. Richard, F. Rotella and I. Zambettakis, Commande et Optimisation des Processus, Paris: Éditions TECHNIP, 1990.
[12] R. W. Brockett and M. D. Mesarović, "The reproducibility of multivariable systems", J. Mathematics Analysis and Applications, Vol. 1, pp. 548-563, 1965.
[13] W. L. Brogan, Modern Control Theory, New York: Quantum Publishers, Inc., 1974.
[14] Z. M. Buchevats and Ly. T. Gruyitch, Linear Discrete-time Systems, Boca Raton, FL: CRC Press, 2018.
[15] F. M. Callier and C. A. Desoer, Linear System Theory, New York: Springer-Verlag, 1991.
[16] F. M. Callier and C. A. Desoer, Multivariable Feedback Systems, New York: Springer-Verlag, 1982.
[17] G. E. Carlson, Signal and Linear Systems Analysis and Matlab, second edition, New York: Wiley, 1998.
[18] C.-T. Chen, Linear System Theory and Design, New York: Holt, Rinehart and Winston, Inc., 1984; Oxford: Oxford University Press, 2013.
[19] H. Chestnut and R. W. Mayer, Servomechanisms and Regulating System Design, New York: Wiley, 1955.
[20] M. J. Corless and A. E. Frazho, Linear Systems and Control, Boca Raton, FL: CRC Press, 2003.
[21] J. J. D'Azzo and C. H. Houpis, Linear Control System Analysis & Design, New York: McGraw-Hill Book Company, 1988.
[22] J. J. D'Azzo, C. H. Houpis and S. N. Sheldon, Linear Control System Analysis & Design with Matlab, Boca Raton, FL: CRC Press, 2003.
[23] L. Debnath, Integral Transformations and Their Applications, Boca Raton, FL: CRC Press, 1995.
[24] C. A. Desoer, Notes for A Second Course on Linear Systems, New York: Van Nostrand Reinhold Company, 1970.

i

[38] Ly. T. Gruyitch, Einstein's Relativity Theory. Correct, Paradoxical, and Wrong, ISBN 1-4122-0211-6, Trafford, Victoria, Canada, http://www.trafford.com/06-2239, 2006.
[39] Ly. T. Gruyitch, Galilean-Newtonean Rebuttal to Einstein's Relativity Theory, Cambridge, UK: Cambridge International Science Publishing, 2015.
[40] Ly. T. Gruyitch, Linear Continuous-time Systems, Boca Raton, FL: CRC Press/Taylor & Francis Group, 2017.
[41] Ly. T. Gruyitch, Nonlinear Systems Tracking, Boca Raton, FL: CRC Press/Taylor & Francis Group, 2016.
[42] Ly. T. Gruyitch, Time and Consistent Relativity. Physical and Mathematical Fundamentals, Waretown, NJ and Oakville, ON: Apple Academic Press, Inc., 2015.
[43] Ly. T. Gruyitch, Time. Fields, Relativity, and Systems, ISBN 1-59526-671-2, LCCN 2006909437, Coral Springs, FL: Llumina, 2006.
[44] Ly. T. Gruyitch, Time and Time Fields. Modeling, Relativity, and Systems Control, ISBN 1-4251-0726-5, Victoria, Canada: Trafford, 2006.
[45] Ly. T. Gruyitch, Tracking Control of Linear Systems, Boca Raton, FL: CRC Press/Taylor & Francis Group, 2013.
[46] M. Haidekker, Linear Feedback Controls, London: Elsevier, 2013.
[47] S. C. Hamilton and E. M. Broucke, Geometric Control of Patterned Linear Systems, Berlin: Springer, 2012.
[48] M. L. J. Hautus, "Controllability and observability conditions of linear autonomous systems", Nederlandse Akademie van Wetenschappen, Series A, V72, 1969, pp. 443-448.
[49] E. Hendricks, O. Sørensen and H. Paul, Linear Systems Control, Berlin: Springer, 2008.
[50] J. P. Hespanha, Linear Systems Theory, Princeton, NJ: Princeton University Press, 2009.
[51] R. M. Hirschorn, "Output tracking in multivariable nonlinear systems", IEEE Transactions on Automatic Control, Vol. AC-26, No. 6, pp. 593-595, 1981.

[52] C. H. Houpis and S. N. Sheldon, Linear Control System Analysis and Design with MATLAB®, Boca Raton, FL: CRC Press, Taylor and Francis Group, 2014.
[53] D. G. Hull, Optimal Control Theory for Applications, New York: Springer Verlag, 2003.
[54] R. Johnson, R. Obaya, S. Novo, C. Núñez and R. Fabbri, Nonautonomous Linear Hamiltonian Systems: Oscillation, Spectral Theory and Control, Berlin: Springer, 2016.
[55] M. K.-J. Johansson, Piecewise Linear Control Systems, Berlin: Springer, 2003.
[56] T. Kaczorek, Polynomial and Rational Matrices: Applications in Dynamical Systems Theory, Berlin: Springer, 2007.
[57] T. Kailath, Linear Systems, Englewood Cliffs, NJ: Prentice-Hall, Inc., 1980.
[58] R. E. Kalman, "Algebraic structure of linear dynamical systems, I. The module of Σ," Proc. National Academy of Science: Mathematics, USA NAS, Vol. 54, pp. 1503-1508, 1965.
[59] R. E. Kalman, "Canonical structure of linear dynamical systems", Proceedings of the National Academy of Science: Mathematics, USA NAS, Vol. 48, pp. 596-600, 1962.
[60] R. E. Kalman, "Mathematical description of linear dynamical systems", J.S.I.A.M. Control, Ser. A, Vol. 1, No. 2, pp. 152-192, 1963.
[61] R. E. Kalman, "On the General Theory of Control Systems", Proceedings of the First International Congress on Automatic Control, London: Butterworth, 1960, pp. 481-491.
[62] R. E. Kalman, P. L. Falb and M. A. Arbib, Topics in Mathematical System Theory, New York: McGraw Hill, 1969.
[63] R. E. Kalman, Y. C. Ho and K. S. Narendra, "Controllability of Linear Dynamical Systems", Contributions to Differential Equations, Vol. 1, No. 2, pp. 189-213, 1963.
[64] D. E. Kirk, Optimal Control Theory: An Introduction, Englewood Cliffs, NJ: Prentice-Hall, 1970; Dover republication, 2004.

[65] B. Kisačanin and G. C. Agarwal, Linear Control Systems: With Solved Problems and MATLAB Examples, New York: Kluwer Academic Press/Plenum Publishers, 2001.
[66] B. C. Kuo, Automatic Control Systems, Englewood Cliffs, NJ: Prentice-Hall, Inc., 1967.
[67] B. C. Kuo, Automatic Control Systems, Englewood Cliffs, NJ: Prentice-Hall, Inc., 1987.
[68] H. Kwakernaak and R. Sivan, Linear Optimal Control Systems, New York: Wiley-Interscience, 1972.
[69] P. Lancaster and M. Tismenetsky, The Theory of Matrices, San Diego, CA: Academic Press, 1985.
[70] J. R. Leigh, Functional Analysis and Linear Control Theory, Mineola, NY: Dover Publications, Inc., 1980, 2007.
[71] J. M. Maciejowski, Multivariable Feedback Systems, Wokingham: Addison-Wesley Publishing Company, 1989.
[72] G. Marsaglia, Bounds for the Rank of the Sum of Two Matrices, Seattle: Mathematics Research Laboratory, Boeing Scientific Research Laboratories: Mathematical Note No. 3bA, D1-82-0343, pp. 1-13, April 1964. http://www.dtic.mil/dtic/tr/fulltext/u2/600471.pdf
[73] G. Marsaglia, Bounds for the Rank of the Sum of Matrices, Seattle: Mathematics Research Laboratory, Boeing Scientific Research Laboratories, pp. 455-462, April 1964. http://www.ic.unicamp.br/~meidanis/PUB/Doutorado/2012Biller/Marsaglia1964.pdf
[74] E. J. McShane, Integration, Princeton, NJ: Princeton University Press, 1944.
[75] J. L. Melsa and D. G. Schultz, Linear Control Systems, New York: McGraw-Hill Book Company, 1969.
[76] A. N. Michel and C. J. Herget, Algebra and Analysis for Engineers and Scientists, Boston, MA: Birkhäuser, 2007.
[77] R. K. Miller and A. N. Michel, Ordinary Differential Equations, New York, NY: Academic Press, 1982.

[78] B. R. Milojković and Lj. T. Grujić, Automatic Control, the textbook in Serbo-Croatian, Belgrade: Faculty of Mechanical Engineering, University of Belgrade, 1977.
[79] B. R. Milojković and L. T. Grujić, Automatic Control, in Serbo-Croatian, Belgrade: Faculty of Mechanical Engineering, 1981.
[80] Sir Isaac Newton, Mathematical Principles of Natural Philosophy, BOOK I. The Motion of Bodies, Chicago, IL: William Benton, Publisher, Encyclopaedia Britannica, Inc., (first publication: 1687) 1952.
[81] K. Ogata, State Space Analysis of Control Systems, Englewood Cliffs, NJ: Prentice Hall, 1967.
[82] K. Ogata, Modern Control Engineering, Englewood Cliffs, NJ: Prentice Hall, 1970.
[83] D. H. Owens, Feedback and Multivariable Systems, Stevenage, Herts: Peter Peregrinus Ltd., 1978.
[84] Polynomial and Polynomial Matrix Glossary, http://www.polyx.cz/glossary.htm, 2017.
[85] B. Porter, "Reachability and Controllability of Discrete-Time Linear Dynamical Systems", Int. J. General Systems, Vol. 2, 1975, p. 115.
[86] H. M. Power and R. J. Simpson, Introduction to Dynamics and Control, London: McGraw-Hill Book Company (UK) Limited, 1978.
[87] H. H. Rosenbrock, "Some properties of relatively prime polynomial matrices", Electronic Letters, Vol. 4, 1968, pp. 374-375.
[88] H. H. Rosenbrock, State-space and Multivariable Theory, London: Thomas Nelson and Sons Ltd., 1970.
[89] A. Sinha, Linear Systems: Optimal and Robust Control, Boca Raton, FL: CRC Press, Taylor & Francis Group, 2007.
[90] R. E. Skelton, Dynamic Systems Control: Linear Systems Analysis and Synthesis, New York: John Wiley & Sons, 1988.
[91] E. D. Sontag, Mathematical Control Theory: Deterministic Finite Dimensional Systems, New York: Springer, 1990.

Matrix

Glossary,

[85] B. Porter, “Reachability and Controllability of Discrete-Time Linear Dynamical Systems”, Int. J. General Systems, B. Porter, Vol. 2, 1975, p. 115. [86] H. M. Power and R. J. Simpson, Introduction to Dynamics and Control, London: McGraw-Hill Book Company (UK) Limited, 1978. [87] H. H. Rosenbrock, “Some properties of relatively prime polynomial matrices”, Electronic Letters, Vol. 4, 1968, pp. 374 - 375. [88] H. H. Rosenbrock, State-space and Multivariable Theory, London: Thomas Nelson and Sons Ltd., 1970. [89] A. Sinha, Linear Systems: Optimal and Robust Control, Boca Raton, FL: CRC Press, Taylor & Francis Group, 2007. [90] R. E. Skelton, Dynamic Systems Control: Linear Systems Analysis and Synthesis, New York: John Wiley & Sons, 1988. [91] E. D. Sontag, Mathematical Control Theory: Deterministic Finite Dimensional Systems, New York: Springer, 1990.

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 326 — #337

i

326

i

BIBLIOGRAPHY

[92] H. L. Trentelman, Sense and Simplicity, talk at the occasion of the Afscheidscollege of M. L. J. Hautus on April 8, 2005 at the Eindhoven University of Technology, available at: http://www.math.rug.nl/~trentelman/seminars/Hautus.pdf
[93] H. L. Trentelman, A. A. Stoorvogel, M. Hautus, Control Theory for Linear Systems, London, Berlin: Springer, 2001, 2012.
[94] J. Tsinias and J. Karafyllis, "ISS property for time-varying systems and application to partial-static feedback stabilization and asymptotic tracking," IEEE Trans. Automatic Control, Vol. 44, No. 11, pp. 2179-2184, November 1999.
[95] D. M. Wiberg, State Space and Linear Systems, New York: McGraw-Hill Book Company, 1971.
[96] R. L. Williams II and D. A. Lawrence, Linear State-Space Control Systems, Hoboken, NJ: John Wiley & Sons, Inc., 2007.
[97] W. A. Wolovich, Linear Multivariable Systems, New York: Springer-Verlag, 1974.
[98] W. M. Wonham, Linear Multivariable Control. A Geometric Approach, Berlin: Springer-Verlag, 1974.
[99] B.-T. Yazdan, Introduction to Linear Control Systems, New York: Academic Press, 2017.

Part V

INDEX

Author Index

Agarwal G. C., xii
Anderson B. D. O., xii
Antsaklis P. J., xii
Author, xiv
Barnett S., xii
Bellman R. E., 272, 273
Benhayoun M., xii
Benzaoiua A., xii
Berkovitz L. D., xii
Bertram J. E., 161
Bhattacharyya S. P., xii
Biswa D., xii
Borne P., xii
Brockett R. W., 161
Brogan W. L., xii
Callier F. M., xii
Chen C.-T., xii
Chestnut H., xii
Corless M. J., xii
D'Azzo J. J., xii
Datta A., xii
Dauphin-Tanguy G., xii
Desoer C. A., xii
Fairman F. W., xii
Frazho A. E., xii
Gantmacher F. R., xii
Gilbert E. G., 161
Goodwin G. C., xii
Gruyitch L. T., xiv
Gruyitch Ly. T., xii
Haidekker M., xii
Hautus M. L. J., 161
Hespanha J. P., xii
Houpis C. H., xii
Hull D. G., xii
Johansson M. K.-J., xii
Kaczorek T., xii
Kailath T., xii
Kalman R. E., 135, 143, 161
Keel L. H., xii
Kirk D. E., xii
Kisačanin B., xii
Kuo B. C., xii
Kwakernaak H., xii
Lancaster P., xii
Lawrence D. A., xii
Maciejowski J. M., xii
Mayer R. W., xii
Medhin N. G., xii
Melsa J. L., xii
Mesarovitch M. D., 161
Mesquine F., xii
Michel A. N., xii
Miller R. K., xii
Moore J. B., xii
Newton I., 5
Ogata K., xii
Owens D. H., xii
Power H. M., xii
Richard J.-P., xii
Rosenbrock H. H., xii
Rotella F., xii
Sarachik P. E., 161
Schultz D. G., xii
Sheldon S. N., xii
Simpson R. J., xii
Sinha A., xii
Sivan R., xii
Skelton R. E., xii
Tismenetsky M., xii
Vidyasagar M., xii
Wiberg D. M., xii
Williams II R. L., xii
Wolovich W. A., xii
Wonham W. M., xii
Yazdan B.-T., xii
Zambettakis I., xii

Subject Index FUNCTION CONTROL complex matrix function concept, 133, 159 (k) control Si (.), 22 nominal, 29 complex matrix function (ζ−1) control system CS, 18 Zi (.), 23 control vector U disturbance extended Uµ , 16 enemy disturbance, 11 control-state pair full fundamental matrix nominal, 38, 39, 62, 63, 75, 90 function controllability IO system, 26 mathematical state, 161, 171, fundamental matrix function 196, 201, 207, 215 normalized, ΘIO (s), 190 output, 159 fundamental matrix function output function, 159 ΦIO (.) physical state, 161, 171, 196, IO system, 28 201, 207, 215 generating matrix state, 159 of the matrix polynomial, 121 controller C, 17 linear dependence disturbance of a matrix function columns, compensation, 17 111 rejection, 17 of a matrix function rows, 111 functional reproducibility, 159 linear independence Input − State − Output systems of a matrix function columns, ISO control systems, xi 113 observability of a matrix function rows, state, 133 101, 112 CONTROLLABILITY scalar functions, 104 physical state controllability vector functions, 108 robustness, 178 matrix polynomial, 120 rank, definition, 121 EVENT rank, theorem, 122 happening, 3 331 i

i i

SUBJECT INDEX output error vector e, 17 polynomial matrix, 120 rational matrix, 125 rank, definition, 125 resolvent matrix of a matrix A, 126

of a matrix function columns, 113 of a matrix function rows, 101, 112 of matrix rows, 101 matrix polynomial, 120 rank, definition, 121 rank, theorem, 122 maximal number of linearly independent matrix columns/rows, 98 output fundamental matrix ΓIO (t) IO system, 27 polynomial matrix, 120 pseudo inverse matrix, 103 rank, 98 of a matrix function, 117 rankP (s), 122 rankP (s) on C, 122 rational matrix, 125 rank, definition, 125 resolvent matrix of a matrix A, 126 stability, 140 stability matrix, 239 stable, 140 stable matrix, 239 transfer function matrix full , 7

MATHEMATICAL MODEL total coordinates, 20 HISO system, 65 IIO system, 77 IO system, 19 ISO system, 31 MATRIX fundamental matrix function ΦIO (s) IO system, 28 extended matrix compact notation, 21 full transfer function matrix IO system, 27 fundamental matrix, 79 HISO system, 67 fundamental matrix function normalized, ΘIO (.), 190 fundamental matrix function ΦIO (.) IO system, 28 generating matrix of the matrix polynomial, 121 Gram matrix, 120 PLANT criterion, 120 desired output behavior Yd (t), grammian, 120 11 Hurwitz, 140 desired state behavior Sd (t), 12 linear dependence plant P, 17 of a matrix function columns, 111 of a matrix function rows, 111 SET K-dimmensional real vector of matrix rows, 100 space RK , 9 linear independence

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 333 — #344

i

SUBJECT INDEX R, 5 vector space Ck , 20 Rk , 20 SOFTWARE Scientific Work Place SWP, xiv SPACE integral output space I = T × RN , 11 itput space RM , 10 output response, 11 output space RN , 11 STABILITY Lyapunov matrix equation, 141 STATE phase form of the state, 22 SYMBOL ∃!, 4 SYSTEM continuous-time time-invariant linear system for short: system, xii control system, 17 control system CS, 18 controllability state, 159 controller C, 17 description deviations, 30 desired output realizability, 30, 39, 64, 76, 91 dynamical behavior, 10 EISO system, 14 Partially extended ISO system Extended ISO system (EISO system), 41

SUBJECT INDEX output vector of the IO system, 20 output vector Y, 11 plant, 10, 159 regime desired, 28 disturbed, 29 nominal, 28 non-nominal, 29 nondesired, 29 nonperturbed, 28 perturbed,, 29 response desired, 28 state formal mathematical, 172 mathematical, 250 state dimension, 12 state of a dynamical system, 12 state space RK , 12 state variables Si , 12 state vector, 12

TIME dimension physical, 4 invariant ,5 relative time, 5 scale, 5 speed, 4 temporal variable definition, 3 value arbitrary, 4 initial moment, 5 instantaneous, 4 moment, 4 momentous, 4

numerical, 4 relative zero, 5 variable physical, 4 TIME axis, 5 T, 5 continuous-time set, 6 continuum, 6 speed law, 4 unit, 5 value instant, 3 moment, 3 VARIABLE disturbance compensation, 17 rejection, 17 full (dynamics) state variable SF , 13 internal (dynamics) state variable SI , 13 itput variable I, 10 output (dynamics) state variable SO , 13 output error vector e, 17 output variable, 11 state variables Si , 12 time-dependent variable, 9 value total, 21 total zero, 21 VECTOR control vector U extended Uµ , 16 Gram matrix criterion linear independence o vectors, 120

i

i i

i

i

i

“001Book” — 2018/9/28 — 16:26 — page 335 — #346

i

SUBJECT INDEX

i

335

input vector of the IO system, 20 itput vector I, 10 linear dependence, 95 of a matrix function column vectors, 111 of a matrix function row vectors, 111 of matrix row vectors, 100 linear independence, 95 of a matrix function column vectors, 113 of a matrix function row vectors, 101, 112 of matrix row vectors, 101 vector functions, 108 output vector of the IO system, 20 output vector Y, 11 phase (state) vector, 22 state vector, 12 unit vector 1k , 20 zero vector 0k , 20
